One Argument Against An Army

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-15T18:39:43.000Z · LW · GW · Legacy · 37 comments

I talked about a style of reasoning in which not a single contrary argument is allowed, with the result that every non-supporting observation has to be argued away. Here I suggest that when people encounter a contrary argument, they prevent themselves from downshifting their confidence by rehearsing already-known support.

Suppose the country of Freedonia is debating whether its neighbor, Sylvania, is responsible for a recent rash of meteor strikes on its cities. There are several pieces of evidence suggesting this: the meteors struck cities close to the Sylvanian border; there was unusual activity in the Sylvanian stock markets before the strikes; and the Sylvanian ambassador Trentino was heard muttering about “heavenly vengeance.”

Someone comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. They have trade with us of billions of dinars annually.” “Well,” you reply, “the meteors struck cities close to Sylvania, there was suspicious activity in their stock market, and their ambassador spoke of heavenly vengeance afterward.” Since these three arguments outweigh the first, you keep your belief that Sylvania is responsible—you believe rather than disbelieve, qualitatively. Clearly, the balance of evidence weighs against Sylvania.

Then another comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. Directing an asteroid strike is really hard. Sylvania doesn’t even have a space program.” You reply, “But the meteors struck cities close to Sylvania, and their investors knew it, and the ambassador came right out and admitted it!” Again, these three arguments outweigh the first (by three arguments against one argument), so you keep your belief that Sylvania is responsible.

Indeed, your convictions are strengthened. On two separate occasions now, you have evaluated the balance of evidence, and both times the balance was tilted against Sylvania by a ratio of 3 to 1.

You encounter further arguments by the pro-Sylvania traitors—again, and again, and a hundred times again—but each time the new argument is handily defeated by 3 to 1. And on every occasion, you feel yourself becoming more confident that Sylvania was indeed responsible, shifting your prior according to the felt balance of evidence.

The problem, of course, is that by rehearsing arguments you already knew, you are double-counting the evidence. This would be a grave sin even if you double-counted all the evidence. (Imagine a scientist who does an experiment with 50 subjects and fails to obtain statistically significant results, so the scientist counts all the data twice.)
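
The scientist analogy can be made concrete with a few lines of arithmetic. The sketch below is my own illustration, not anything from the post: it draws a weak, noisy effect for 50 hypothetical subjects (the 0.2 effect size and the random seed are invented), then runs the same t-test on the honest sample and on the sample counted twice. Duplicating the data multiplies the t-statistic by roughly the square root of two and shrinks the p-value, even though no new information about the world has been gained.

```python
# Toy illustration of double-counting data (not from the post).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=0.2, scale=1.0, size=50)  # a weak, noisy effect

honest = stats.ttest_1samp(data, popmean=0.0)
doubled = stats.ttest_1samp(np.concatenate([data, data]), popmean=0.0)

print(f"honest  (n=50):  t={honest.statistic:.2f}, p={honest.pvalue:.3f}")
print(f"doubled (n=100): t={doubled.statistic:.2f}, p={doubled.pvalue:.3f}")
# The doubled "sample" looks far more significant, yet contains zero new evidence.
```

The exact numbers depend on the random draw, but the direction never changes: counting the same observations twice always makes the result look stronger than it is.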

But to selectively double-count only some evidence is sheer farce. I remember seeing a cartoon as a child, where a villain was dividing up loot using the following algorithm: “One for you, one for me. One for you, one-two for me. One for you, one-two-three for me.”

As I emphasized in the last essay, even if a cherished belief is true, a rationalist may sometimes need to downshift the probability while integrating all the evidence. Yes, the balance of support may still favor your cherished belief. But you still have to shift the probability down—yes, down—from whatever it was before you heard the contrary evidence. It does no good to rehearse supporting arguments, because you have already taken those into account.
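
To see exactly where the rehearsal goes wrong, it helps to make the bookkeeping explicit. The sketch below is a toy model of my own, not anything from the post: the weights (two bits per supporting clue, minus one bit per counterargument) are invented, and the pieces of evidence are assumed independent so that log-odds simply add. The correct updater counts each piece of evidence once and drifts downward with every new counterargument while still favoring the hypothesis; the rehearsing updater re-adds the three old supports every round and climbs toward certainty.

```python
# Toy bookkeeping model (invented weights, evidence assumed independent).
def prob(log_odds_bits: float) -> float:
    """Convert log-odds in bits back to a probability."""
    odds = 2.0 ** log_odds_bits
    return odds / (1.0 + odds)

prior_bits = 0.0                 # 50/50 before any evidence
support_bits = [2.0, 2.0, 2.0]   # border strikes, stock market, ambassador
counter_bits = [-1.0] * 5        # five new contrary arguments, one per round

correct = prior_bits + sum(support_bits)    # each clue counted exactly once
rehearsed = prior_bits + sum(support_bits)  # same starting point

for c in counter_bits:
    correct += c                            # shifts down, as it should
    rehearsed += c + sum(support_bits)      # old support double-counted again
    print(f"correct P={prob(correct):.3f}   rehearsed P={prob(rehearsed):.3f}")
```

Run it and the correct column falls from about 0.97 toward 0.67 while the rehearsed column saturates near 1, which is exactly the "more confident with every counterargument" pattern described above.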

And yet it does appear to me that when people are confronted by a new counterargument, they search for a justification not to downshift their confidence, and of course they find supporting arguments they already know. I have to keep constant vigilance not to do this myself! It feels as natural as parrying a sword-strike with a handy shield.

With the right kind of wrong reasoning, a handful of support—or even a single argument—can stand off an army of contradictions.

37 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Vladimir_Nesov2 · 2007-08-15T21:36:07.000Z · LW(p) · GW(p)

Just a question of bookkeeping: online confidence update can be no less misleading, even if all facts are processed once. A million negative arguments can have a negligible total effect if they happen to be dependent in a non-obvious way.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-15T21:41:18.000Z · LW(p) · GW(p)

online confidence update can be no less misleading

Not if processed correctly, but, of course, if you process incorrectly anything can happen.

A million negative arguments can have a negligible total effect if they happen to be dependent

Agreed. Or even if they're independent, but small; or independent, but outweighed by ten million positive evidences.

comment by Byrne · 2007-08-15T22:25:08.000Z · LW(p) · GW(p)

On the other hand, each new argument might reduce the implicit quality of the arguers. Imagine a succession of wilder excuses, rather than a series of increasingly damning data, and you could justify the hawkish Freedonian's view.

Perhaps the way to avoid both of these strategies is to address new evidence in batches. First, you and the hawks add up all your arguments. Then you state them, and consider. A week later, you both state all the new evidence that's come to light since, address how it affects your interpretation of the old data, etc.

You don't want a situation like the evolution/creationism debate, in which creationists are ever ready to point out new gaps in the fossil record (apparently unaware that filling in a gap between A and C creates two new gaps -- between A and B and between B and C).

comment by Constant2 · 2007-08-15T22:31:07.000Z · LW(p) · GW(p)

Interesting thought, but this sort of abstract discussion about a possible error would greatly benefit from real and documented examples in which the described error cropped up.

Replies from: akshatrathi
comment by akshatrathi · 2009-11-22T00:49:18.138Z · LW(p) · GW(p)

I am sure that I must've done this as well.

I have to keep constant vigilance not to do this myself!

A documented example would definitely be appreciated so that we know what we are looking for in a particular situation. Otherwise getting stuck in this loop of winning arguments by double-counting evidence is very easy.

comment by Charlie · 2007-08-16T01:44:34.000Z · LW(p) · GW(p)

It's "one for you, one for me, two for you, one - two for me, three for you, one - two - three for you" your way wouldn't even fool Elmer Fudd.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-16T02:10:37.000Z · LW(p) · GW(p)

Charlie, thanks.

Constant, virtually any modern hot-button political issue will do.

Byrne, the world's stupidest person may say the sun is shining but that doesn't make it dark out. Weak arguments don't go out the other side and become counterarguments.

comment by Nick_Tarleton · 2007-08-16T02:24:05.000Z · LW(p) · GW(p)

Yes, but if my opponent makes arguments a seven-year-old can see through, that increases the probability that they are dishonest, and decreases my trust in any claims they may make. (If they present a complete line of reasoning, that of course isn't changed, but I mean claims of fact that I can't easily verify, or arguments leaving out several inferential steps.)

comment by Constant2 · 2007-08-16T11:41:49.000Z · LW(p) · GW(p)

"Constant, virtually any modern hot-button political issue will do."

Actually, I think it would be pretty hard to come up with unambiguous examples of this, because what you're describing is not misbehavior that occurs in any given encounter, but a pattern over time in which an individual changes his own beliefs in the wrong way in response to the evidence. This is hard to demonstrate for at least two reasons. First, since it occurs over time rather than on a single occasion, it's difficult to observe. Second, since what you're really talking about (the revision of one's beliefs) occurs inside a person's head, there's the problem of gaining access to the person's head.

But if it is difficult to come up with unambiguous examples of it, then by the same token it is hard to observe in the first place. Any supposed observation of it will almost certainly require a large element of speculation about what is going on inside someone else's head.

What can we actually observe? Relevant to what you describe, we can observe two things:

1) We know generally that people's political views often harden over time. And since they do it in different directions, in at least some cases the hardening is unlikely to be occurring for the right (the rational, truth-seeking) reasons.

2) People do observably rehearse already-known support.

But (2) in itself is perfectly legitimate. Meanwhile (1) already has many explanations apart from the phenomenon that you are speculating exists. It's a much-observed and much-discussed phenomenon, and what you have done here is add only one more speculation about why it happens to the bulging library of explanations. While you are not necessarily wrong, as far as I can see there isn't all that much compelling evidence in favor of your speculation.

comment by Bob_Unwin5 · 2007-08-16T13:03:11.000Z · LW(p) · GW(p)

Even if you do want to integrate contrary evidence, it can be hard to do so quickly enough to continue a normal conversation, especially if the evidence is quite unexpected.

For example, suppose I have come to believe that in war X, the victory of the Reds against the Greens was always likely to happen. That is, from the first skirmish between the two sides, one could (with only the information about the two sides available at that point in time) confidently bet on the Reds winning. And the Reds did actually win.

If I know nothing of heuristics and biases, and someone counters my assertion about war X by mentioning the Hindsight Bias, then it may take me quite a long time to integrate this new evidence into my model of the world. I will need to think about the epistemic weight of heuristics and biases information, and ask how closely the conditions of tests of bias resemble my own. If my belief about war X depends in part on evidence about what professional historians believe, then I will have to consider the potentially thorny question of how much professional historians are subject to Hindsight Bias. Of course, my subjective probability in the inevitability of Red victory will go down, but the important question of how much it goes down cannot be answered so easily.

So, what we should often do in face-to-face discussion when we get new evidence is say, "please give me some time to integrate that new evidence into my model". This would be taken by many as a concession of defeat, just as would saying "After conditionalization on your evidence I have lowered my credence in P", and so will be hard for people to do in practice.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-16T14:31:35.000Z · LW(p) · GW(p)

This would be taken by many as a concession of defeat, just as would saying "After conditionalization on your evidence I have lowered my credence in P", and so will be hard for people to do in practice.

When I'm talking to people of that quality, I very rarely need to integrate substantial new evidence into my model.

If evidence is not substantial you have no obligation to make a big deal of the fact that you are updating on it; this would be a Gricean deception.

When someone does present you with substantial new evidence, you should consider that you may be dealing with one knowledgeable in the subject area; if so, conceding defeat, or showing before others that you have been presented with substantial new evidence, should not be out of the question. What categorical imperative would you want to apply for people who encounter substantial new evidence?

If the appearance of conceding defeat is (for whatever reason) terribly scary, then you may, perhaps, choose between acknowledging new evidence internally and saying nothing about this externally; or you may fail to acknowledge it even internally. Neither course of action is especially virtuous, but self-deception is not more virtuous than silence.

Replies from: Carinthium
comment by Carinthium · 2010-11-13T01:35:38.369Z · LW(p) · GW(p)

Other than moral reasons, what flaws are there in the course of action of arguing against the evidence whilst acknowledging internally that your opponent is right?

Replies from: TheOtherDave
comment by TheOtherDave · 2010-11-13T03:39:02.742Z · LW(p) · GW(p)

Off-hand here are three pragmatic costs of doing so, as stripped of moral language as I can get them:

1) Cognitive dissonance. For most of us, behavior influences belief, so behaving as though the presented evidence wasn't compelling can (and likely will) interfere with our ability to properly incorporate that evidence in our thinking.

2a) Reputation. If I fail to signal internal state reliably, I may develop a reputation as an unreliable signaler. There are social costs to that, as well as nastier words for it.

2b) Reputation, again. Evidence of true things is a valuable thing to have. If someone gives it to me and I refuse to acknowledge it, I'm refusing to acknowledge a gift. There are social costs to that as well.

3) Operant conditioning opportunity costs. Making an argument that others find compelling is emotionally rewarding for most people. If the person you're arguing with is one of those people, and you signal that you found their argument compelling, basic conditioning principles make it more likely that the next time they have evidence you ought to find compelling they'll share it with you. Conversely, if you don't signal it, they're less likely to do it again. Therefore, continuing to argue as though the evidence were uncompelling means losing chances to get benefits later.

comment by pdf23ds · 2007-08-16T18:44:35.000Z · LW(p) · GW(p)

What can we actually observe? Relevant to what you describe, we can observe two things:

Also, we can observe (I read about a study that showed this) that people who intentionally seek out arguments that oppose their position tend to strengthen their belief in their position relative to people who only read friendly arguments. I forget exactly where I read this, unfortunately.

comment by CarlShulman · 2007-08-16T19:03:31.000Z · LW(p) · GW(p)

Chris,

Confirmation bias lab experiments show that people provided with mixed evidence wind up strengthening their initial beliefs. To clarify, are you saying that this study gave people the option to "intentionally seek out arguments that oppose their position," or recalling the ordinary confirmation bias experiments?

comment by pdf23ds · 2007-08-16T19:23:55.000Z · LW(p) · GW(p)

Carl, you might confuse people by addressing me by my real name when I'm not using it.

Unfortunately, my recollection of that study is weak. To the best of my memory, though, it was only a survey of people's blog/news reading habits combined with a test of dogmatism, not one where they actually instructed people in different groups to read more or fewer opposing arguments. I forget what the terms for these types of studies are.

comment by Nick_Tarleton · 2007-08-16T19:52:43.000Z · LW(p) · GW(p)

Perhaps people tend to read low-quality opposing arguments only, for whatever reason, resulting in the effect I described above. They could even be reading really low-quality opposition for pure entertainment - I sometimes do this, but on reflection it's probably bad for me.

comment by CarlShulman · 2007-08-16T20:04:49.000Z · LW(p) · GW(p)

"Unfortunately, my recollection of that study is weak. To the best of my memory, though, it was only a survey of people's blog/news reading habits combined with a test of dogmatism, not one where they actually instructed people in different groups to read more of fewer opposing arguments." One worry would be that ideological enthusiasm leads to greater interest in politics in general.

"Perhaps people tend to read low-quality opposing arguments only, for whatever reason" Blogs may link to arguments from the opposing side in order to mock them, selecting those which are most objectionable from the perspective of their ideological allies.

comment by pdf23ds · 2007-08-17T01:41:39.000Z · LW(p) · GW(p)

I do remember a bit more detail about one study, though. Two different groups, one sympathetic to Israelis and the other to Palestinians (and maybe a third with no sympathies) were shown the same, fairly neutral article describing quite a bit about the conflict. Both groups held their positions more strongly after reading the article, and both groups thought the article was hostile to their positions.

comment by Doug_S. · 2007-08-17T04:08:39.000Z · LW(p) · GW(p)

both groups thought the article was hostile to their positions

They might both be correct about that.

comment by Nick_Tarleton · 2007-08-17T14:29:37.000Z · LW(p) · GW(p)

Likely both groups hold false, self-serving factual beliefs and perceive denial of those beliefs (which a true article will inevitably contain) as hostile to them (and agreement with them as neutral).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-20T21:07:28.000Z · LW(p) · GW(p)

this sort of abstract discussion about a possible error would greatly benefit from real and documented examples in which the described error cropped up

this OB comment

comment by TheOtherDave · 2010-10-25T01:03:13.969Z · LW(p) · GW(p)

I wonder if it's easier to counter this effect by deliberately rehearsing previously encountered arguments against my position each time I encounter a new one, than it is by resisting the impulse to rehearse arguments for it.

Replies from: liliet-b
comment by Liliet B (liliet-b) · 2019-12-07T12:08:10.339Z · LW(p) · GW(p)

I'd go with this. Gather all the evidence in one place as you're attempting to update... Otherwise you might miss that shiny new counterevidence actually screens off some old counterevidence you'd already updated on, or is screened off by it and you don't need to update at all.

comment by undermind · 2011-04-12T00:55:04.170Z · LW(p) · GW(p)

While this is certainly a nasty pitfall of rationalization, it is necessary to rehearse the evidence from time to time, for those of us without perfect memories. Otherwise, we end up in the situation of "I know there was a good reason I believed this, but I don't remember what it was"; this happens to me far too often. Retracing all of the evidence that led to a particular belief is terribly time-consuming and impractical ("I know this was in a neuroscience book I read three years ago..."). Forgetting why you hold a particular belief is almost as bad as having no reason at all, and every rationalist should naturally strive to avoid this.

Of course, the time to rehearse why you hold a particular belief is not when being confronted with opposing arguments.

Replies from: None, NancyLebovitz, liliet-b
comment by [deleted] · 2011-04-12T01:16:09.551Z · LW(p) · GW(p)

An alternative to rehearsing is retesting. Not always practical, but sometimes practical. Retesting can go much quicker than the initial discovery, because often it is much easier to (re-)verify a solution than it is to come up with it. (This has an obvious surface relationship to the P versus NP problem.)

comment by NancyLebovitz · 2011-04-12T07:23:49.355Z · LW(p) · GW(p)

Forgetting why you hold a particular belief is almost as bad as having no reason at all

Upvoted because of this line.

comment by Liliet B (liliet-b) · 2019-12-07T12:13:58.793Z · LW(p) · GW(p)

As noted above, rehearsing all the evidence against your position alongside your own should be a counter. As in the article's example, the math should not be "1 vs 3 every time", but it should not be "1 vs 3 the first time, 1 vs 0 the second and subsequent times" either. It should be "1 vs 3, then 2 vs 3, then..."

In actual debate practice, it might confuse the other person that you're listing their points for them, but I've found it a helpful practice anyway.

comment by MinibearRex · 2011-07-21T19:10:36.589Z · LW(p) · GW(p)

There is more discussion of this post here as part of the Rerunning the Sequences series.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-07-21T19:31:34.755Z · LW(p) · GW(p)

No, there's currently not. Please don't make such comments by default.

Edit: Argh, you've made a whole bunch of them, including linking to the older discussion posts that also had no discussion in them. I think we should remove the whole thing. Was this policy discussed somewhere before?

Replies from: jsalvatier
comment by jsalvatier · 2011-07-21T19:39:11.625Z · LW(p) · GW(p)

Since right now there is no regular discussion, it might be good to stop making this comment, but if we get into more controversial posts and there are regular comments, I think it would be good to start this up again. Doing it on a case-by-case basis seems like too much work.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-07-21T19:47:33.173Z · LW(p) · GW(p)

I don't believe it's too much work, because one can just systematically go through all the old sequence-rerun posts and add references where appropriate, which reduces the work per post.

Replies from: MinibearRex
comment by MinibearRex · 2011-07-22T04:57:02.472Z · LW(p) · GW(p)

It's a part of the original set of instructions. I've never seen any further discussion of it, so I've been doing it, although personally I would be happy to stop. I don't see a particular benefit to doing it. If there ever is a new issue brought up on the old post that has been discussed on the rerun, that seems like the time to direct that user's attention to the previous discussion.

If anyone has a particular desire for this commenting to continue, speak now.

Replies from: Unnamed
comment by Unnamed · 2011-07-22T05:54:13.925Z · LW(p) · GW(p)

I'm editing the instructions now to remove that step.

It was part of the original instructions, but most people weren't following it, and since the sequence reruns aren't generating much discussion it's better not to do it.

comment by ike · 2014-05-01T19:18:35.989Z · LW(p) · GW(p)

The problem, of course, is that by rehearsing arguments you already knew, you are double-counting the evidence This would be a grave sin even if you double-counted all the evidence.

Typo: there's a missing period between "evidence" and "This".

comment by TitaniumDragon · 2015-02-24T05:36:52.559Z · LW(p) · GW(p)

The real flaw here is that counting arguments is a poor way to make decisions.

"They don't have the ability to make said meteor strikes" is enough on its own to falsify the hypothesis unless you have evidence to the contrary.

As Einstein said about "100 Authors Against Einstein", if he was wrong, they would have only needed one.

comment by Self (CuriousMeta) · 2024-11-25T09:45:33.174Z · LW(p) · GW(p)

Very cool. Less of a distinct mental handle, more of a subtle mental strategy one can find oneself executing across time.