A Rational Argument

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-02T18:35:48.000Z · LW · GW · Legacy · 41 comments

You are, by occupation, a campaign manager, and you’ve just been hired by Mortimer Q. Snodgrass, the Green candidate for Mayor of Hadleyburg. As a campaign manager reading a book on rationality, one question lies foremost on your mind: “How can I construct an impeccable rational argument that Mortimer Q. Snodgrass is the best candidate for Mayor of Hadleyburg?”

Sorry. It can’t be done.

“What?” you cry. “But what if I use only valid support to construct my structure of reason? What if every fact I cite is true to the best of my knowledge, and relevant evidence under Bayes’s Rule?”1

Sorry. It still can’t be done. You defeated yourself the instant you specified your argument’s conclusion in advance.

This year, the Hadleyburg Trumpet sent out a 16-item questionnaire to all mayoral candidates, with questions like “Can you paint with all the colors of the wind?” and “Did you inhale?” Alas, the Trumpet’s offices were destroyed by a meteorite before publication. It’s a pity, since your own candidate, Mortimer Q. Snodgrass, compares well to his opponents on 15 out of 16 questions. The only sticking point was Question 11, “Are you now, or have you ever been, a supervillain?”

So you are tempted to publish the questionnaire as part of your own campaign literature . . . with the 11th question omitted, of course.

Which crosses the line between rationality and rationalization. It is no longer possible for the voters to condition on the facts alone; they must condition on the additional fact of their presentation, and infer the existence of hidden evidence.

Indeed, you crossed the line at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it. “What!” you cry. “A campaign should publish facts unfavorable to their candidate?” But put yourself in the shoes of a voter, still trying to select a candidate—why would you censor useful information? You wouldn’t, if you were genuinely curious. If you were flowing forward from the evidence to an unknown choice of candidate, rather than flowing backward from a fixed candidate to determine the arguments.
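
To make the voter's inference concrete, here is a minimal sketch in Python; it is not from the post, and the prior and per-question probabilities are invented purely for illustration. A voter who treats the fifteen published answers as the whole questionnaire updates differently from one who knows the campaign prints only favorable answers, and therefore reads the missing Question 11 as evidence in itself.

```python
# Minimal sketch: a voter updating on a selectively published questionnaire.
# All numbers are hypothetical; "supervillain" stands in for the hidden bad news.

P_SUPERVILLAIN = 0.05        # hypothetical prior that the candidate is a supervillain
P_FAV_GIVEN_GOOD = 0.9       # a good candidate answers a given question favorably
P_FAV_GIVEN_VILLAIN = 0.6    # a supervillain still looks fine on most questions

def posterior(n_favorable, n_total, prior=P_SUPERVILLAIN):
    """P(supervillain) after seeing n_favorable favorable answers out of n_total."""
    n_unfav = n_total - n_favorable
    like_villain = P_FAV_GIVEN_VILLAIN ** n_favorable * (1 - P_FAV_GIVEN_VILLAIN) ** n_unfav
    like_good = P_FAV_GIVEN_GOOD ** n_favorable * (1 - P_FAV_GIVEN_GOOD) ** n_unfav
    joint_villain = prior * like_villain
    return joint_villain / (joint_villain + (1 - prior) * like_good)

# Naive voter: takes the 15 published answers to be the whole questionnaire.
naive = posterior(n_favorable=15, n_total=15)

# Wary voter: knows the campaign prints only favorable answers, so the missing
# 16th answer is, under that policy, an unfavorable one.
wary = posterior(n_favorable=15, n_total=16)

print(f"naive P(supervillain) = {naive:.5f}")
print(f"wary  P(supervillain) = {wary:.5f}")   # higher: the omission is itself evidence
```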

A “logical” argument is one that follows from its premises. Thus the following argument is illogical:

All rectangles are quadrilaterals.
All squares are quadrilaterals.
Therefore, all squares are rectangles.

This syllogism is not rescued from illogic by the truth of its premises or even the truth of its conclusion. It is worth distinguishing logical deductions from illogical ones, and refusing to excuse them even if their conclusions happen to be true. For one thing, the distinction may affect how we revise our beliefs in light of future evidence. For another, sloppiness is habit-forming.

Above all, the syllogism fails to state the real explanation. Maybe all squares are rectangles, but, if so, it’s not because they are both quadrilaterals. You might call it a hypocritical syllogism—one with a disconnect between its stated reasons and real reasons.
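
The invalidity here is a property of the argument's form, not of the particular facts about squares. As a rough sketch (not part of the original essay), a brute-force check over a tiny universe finds counterexample models for the form "all A are C; all B are C; therefore all B are A":

```python
from itertools import product

# Minimal sketch: enumerate every assignment of A, B, C over a two-element
# universe and look for models where both premises hold but the conclusion
# fails. Any such model shows the form is invalid, even though the particular
# claims about squares and rectangles happen to be true.

subsets = [frozenset(s) for s in ([], [0], [1], [0, 1])]

def form_holds(A, B, C):
    premises = A <= C and B <= C      # all A are C; all B are C
    conclusion = B <= A               # all B are A
    return (not premises) or conclusion

counterexamples = [(A, B, C) for A, B, C in product(subsets, repeat=3)
                   if not form_holds(A, B, C)]

print(f"counterexample models: {len(counterexamples)}")
print("example:", counterexamples[0])   # premises true, conclusion false
```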

If you really want to present an honest, rational argument for your candidate, in a political campaign, there is only one way to do it:

- Before anyone hires you, gather up all the evidence you can about the different candidates.
- Make a checklist which you, yourself, will use to decide which candidate seems best.
- Process the checklist.
- Go to the winning candidate.
- Offer to become their campaign manager.
- When they ask for campaign literature, print out your checklist.

Only in this way can you offer a rational chain of argument, one whose bottom line was written flowing forward from the lines above it. Whatever actually decides your bottom line is the only thing you can honestly write on the lines above.

1See “What Is Evidence?” in Map and Territory.

41 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Flynn · 2007-10-02T20:11:07.000Z · LW(p) · GW(p)

So are you suggesting that it is impossible for someone else to construct an unbiased argument for you?

After all, it's only a small step to observe that it's impossible to ever know whether someone else has the motives of the campaign manager in this case.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-02T20:15:08.000Z · LW(p) · GW(p)

So are you suggesting that it is impossible for someone else to construct an unbiased argument for you?

You can never construct an unbiased argument for anything, except by an improbable coincidence that any wise person will refuse to believe in.

After all, it's only a small step to observe that it's impossible to ever know whether someone else has the motives of the campaign manager in this case.

Valid evidence is valid, whatever the motives of the one who cites it; the world's stupidest person may say the sun is shining, but that doesn't make it dark out. But you'd be wise to take responsibility for adding up the evidence yourself, and try to check one or more sides to see if any arguments were omitted. (Just don't expect the evidence to balance. It shouldn't.)
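
One way to picture "adding up the evidence yourself" is as summing log-likelihood ratios gathered from every side, not only the items one campaign chooses to cite. The sketch below uses invented numbers, purely for illustration:

```python
import math

# Minimal sketch with hypothetical numbers: tallying evidence as log-likelihood
# ratios. Positive values favor the candidate; negative values count against him.

def log_odds(p):
    return math.log(p / (1 - p))

def to_prob(lo):
    return 1 / (1 + math.exp(-lo))

prior = 0.5                                   # no initial preference
cited_by_campaign = [0.8, 0.5, 0.3, 0.6]      # evidence the pamphlet prints
omitted_by_campaign = [-1.2, -0.4]            # evidence it quietly leaves out

pamphlet_only = log_odds(prior) + sum(cited_by_campaign)
full_tally = pamphlet_only + sum(omitted_by_campaign)

print(f"trusting the pamphlet: P(best candidate) = {to_prob(pamphlet_only):.2f}")
print(f"adding it all up:      P(best candidate) = {to_prob(full_tally):.2f}")
```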

comment by James_Bach · 2007-10-02T20:51:54.000Z · LW(p) · GW(p)

I like the spirit of what you're saying, but I'm not convinced that you've made a rational argument for it. Also, I'm concerned that you might have started with the conclusion that a rational argument must flow forward and constructed an account to justify it. If so, in your terms, though not in mine, that would make your conclusion irrational.

I think it can be perfectly rational to think backwards from any conclusion you want to any explanation that fits. Rationality is among other things about being bound by the requirement of consistency in reasoning. It's about creating an account from the evidence. But it's also about evaluating evidence, and that part is where it gets problematic.

In an open and complex world like the one we live in every day, weighing evidence is largely a non-rational (para-rational? quasi-rational?) process. We are operating only with bounded rationality and collections of murky impressions. So, your idea of making a checklist and somehow discovering who the best candidate is, is already doomed. There is no truly evidence-driven way of doing that, because evidence does not drive reasoning; it's our BELIEFS about evidence that drive reasoning. Our beliefs are mostly not a product of a rational process.

A logical explanation is one that follows from premises to conclusions without violating any rule of logic. Additionally, all logical explanations of real-world situations involve a claim that the logical model we put forward corresponds usefully to the state of the real world. What we called a "cat" in our reasoning corresponded to that furry thing we understand as a cat, etc. If I can think backwards from a conclusion without finding an absurd premise, then I have a logical explanation. (It may be wrong, of course.)

To attack my self-consistent, logical account of a situation that suggests that X is TRUE, based solely on the fact that I was looking for evidence that X is true, is equivalent to an ad hominem fallacy. I think you can certainly suspect that my argument is weak, and it probably is, but you can't credibly attack my sound argument simply because you don't like me, or you don't like my method of arriving at my sound argument. A lot of science would have to be thrown out if a scientist wasn't allowed to search for evidence to support something he hoped would be true. Also, as you know, many theorems have been proven using backward reasoning.

If you want to attack the argument, you can attack it rationally by offering counter-evidence, or an alternative reasoning that is more consistent with more reliable facts. Furthermore, our entire legal system is built on the idea that two opposing sides in a dispute, marshaling the best stories they can marshal, will provide judges and juries with a good basis on which to decide the dispute.

Instead of calling it irrational, I would say that it's a generally self-deceptive practice to start from a conclusion and work backward. I don't trust that process, but I couldn't disqualify an argument solely on those grounds.

Instead of prescribing forward reasoning only, I would prescribe self-critical thinking and de-biasing strategies.

(BTW, one of the reasons I don't vote is that I am confident that I cannot, under any circumstances, EVER, have sufficient and reliable information about the candidates to allow me to make a good decision. So, I believe all voting decisions people actually make are irrational.)

Replies from: pnrjulius, PetjaY, LESS
comment by pnrjulius · 2012-06-09T00:23:32.588Z · LW(p) · GW(p)

The argument could turn out valid, by coincidence; but the process of making it isn't valid, so given the vast space of all possible arguments... it's probably not valid. Indeed, as nearly all advertising, propaganda, political campaigns, etc. are not.

comment by PetjaY · 2015-01-07T20:43:29.177Z · LW(p) · GW(p)

You only need to have better information than the average voter for your vote to improve the result of an election. Though then again, the effect of one vote is usually so small that the rational choice would be to vote for whatever gives you more social status.

comment by LESS · 2020-05-12T23:54:33.814Z · LW(p) · GW(p)

What you need to remember is that all of this applies to probabilistic arguments with probabilistic results - of course deductive reasoning can be done backward. However, when evidence is presented as a contribution to a belief, omitting some (as you will, inevitably, when reasoning backward) disentangles the ultimate belief from its object. If some evidence doesn't contribute, the (probabilistic) belief can't reflect reality. You seem to conceptualize arguments as ones whose conclusion must follow whenever they are valid and their premises are true, which doesn't describe the vast majority of arguments.

comment by g · 2007-10-02T21:02:22.000Z · LW(p) · GW(p)

James, in regard to your last paragraph: I very much doubt whether your decision not to vote is itself a good one, by the standards you've just espoused. After all, if you don't have enough information to decide between voting for X and voting for Y, how can you have enough information to decide between voting for X and voting for no one? Seems to me that you have to make a decision (which might end up being the decision to cast no vote, of course) and the fact that you don't have enough evidence to be strongly convinced that your decision is best doesn't relieve you of the responsibility for making it.

comment by Tom_McCabe · 2007-10-02T22:39:01.000Z · LW(p) · GW(p)

"(BTW, one of the reasons I don't vote is that I am confident that I cannot, under any circumstances, EVER, have sufficient and reliable information about the candidates to allow me to make a good decision. So, I believe all voting decisions people actually make are irrational.)"

See http://lesswrong.com/lw/h8/tsuyoku_naritai_i_want_to_become_stronger/.

comment by blobusus · 2007-10-02T22:49:42.000Z · LW(p) · GW(p)

Hmmm. If I understand you correctly, then two people could produce an identical argument but one would be incorrect because he did it backwards? Do you suppose that there is an implied arrow of time in every syllogism?

Replies from: pnrjulius
comment by pnrjulius · 2012-06-09T00:23:54.936Z · LW(p) · GW(p)

The argument could turn out valid, by coincidence; but the process of making it isn't valid, so given the vast space of all possible arguments... it's probably not valid. Indeed, as nearly all advertising, propaganda, political campaigns, etc. are not.

comment by Robin_Hanson2 · 2007-10-02T23:39:46.000Z · LW(p) · GW(p)

Many of you seem to think there is an axiom of reasoning that says the persuasiveness of an argument must be independent of what you know about the process that produced that argument. There is no such axiom, nor should there be.

comment by TGGP4 · 2007-10-03T00:10:03.000Z · LW(p) · GW(p)

Voting is irrational because the probability that your vote will have any effect on the outcome is about zero. I discuss that more and have a back-and-forth in the comment section here.

Replies from: pnrjulius
comment by pnrjulius · 2012-06-09T00:24:56.379Z · LW(p) · GW(p)

But it isn't zero... and we know that if people systematically obeyed that advice, the world would be much worse off.

Voting may be a Tragedy of the Commons, but it's not just simpliciter irrational.

Replies from: wedrifid
comment by wedrifid · 2012-06-09T00:33:01.327Z · LW(p) · GW(p)

Voting may be a Tragedy of the Commons

At least, it is in insane countries where it isn't compulsory.

Voting is analogous to taxes and should be legally enforced as such. (Or, rather, the public service of attending a voting booth and scribbling something arbitrary that may or may not be a vote on a piece of paper should be compulsory.)

Replies from: None, Lumifer, Salemicus, Estarlio
comment by [deleted] · 2013-09-04T15:04:32.315Z · LW(p) · GW(p)

Voting is analogous to taxes and should be legally enforced as such.

Well, informed voting is, but how do you reliably check if somebody was well-informed as they voted, to legally enforce it?

Replies from: wedrifid
comment by wedrifid · 2013-09-04T22:37:34.429Z · LW(p) · GW(p)

Well, informed voting is, but how do you reliably check if somebody was well-informed as they voted, to legally enforce it?

Only requiring informed voters to vote would be a potentially useful optimisation. As you point out, that distinction does not seem to be practical.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-06T05:16:34.962Z · LW(p) · GW(p)

So what problem is mandatory voting supposedly solving again?

Replies from: wedrifid, army1987
comment by wedrifid · 2013-09-06T06:54:04.866Z · LW(p) · GW(p)

I'm tapping out of this conversation. It's predisposing me towards racism. I'm sure anybody actually interested will have no problem finding a book on game theory.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-06T07:23:42.410Z · LW(p) · GW(p)

Taboo "racism". From context it seems to mean [having beliefs that while more accurate make me uncomfortable].

comment by A1987dM (army1987) · 2013-09-14T16:52:13.232Z · LW(p) · GW(p)

In countries without mandatory voting, if voting is more inconvenient for certain groups than for others, the latter will be over-weighted in the election. With mandatory voting, casting a valid vote is no more and no less inconvenient than spoiling the ballot, so that's not an issue -- all eligible people who ceteris paribus would prefer, no matter how slightly, to vote will do so.

(Unlike wedrifid I'm not in a country with mandatory voting, BTW.)

comment by Lumifer · 2013-09-04T15:13:00.446Z · LW(p) · GW(p)

the public service of attending a voting booth and scribbling something arbitrary that may or may not be a vote on a piece of paper should be compulsory.

Why? I fail to see any gains from that. Neither do I see any major empirical differences between countries with compulsory voting and countries without.

Replies from: wedrifid
comment by wedrifid · 2013-09-04T22:43:09.771Z · LW(p) · GW(p)

Why? I fail to see any gains from that.

In general the correct response to most "I fail to see" or "I can't imagine" claims is to observe that this could be either a fact about the problem or a fact about the speaker's imagination.

Neither do I see any major empirical differences between countries with compulsory voting and countries without.

The current solution to the tragedy of the commons is brainwashing with patriotism and relying on poorly calibrated tribal-political instincts to get by. This works well enough and I honestly don't think this is a problem that particularly needs addressing, compared to all the other things that can be done. It is merely a minor systemic insanity.

comment by Salemicus · 2013-09-04T15:53:37.164Z · LW(p) · GW(p)

I agree that voting is a Tragedy of the Commons - but in the exact opposite way to how you frame it. Because people don't fully internalise the costs and benefits of their votes, but value self-expression, (1) it is very cheap for the ill-informed to use their votes to signal expressively, and (2) there is little incentive to become a well-informed voter. For a given level of political ignorance, we get far too much voting.

To my mind, voting is analogous to pollution and should be taxed as such.

comment by Estarlio · 2013-09-04T23:03:04.043Z · LW(p) · GW(p)

Why? Assuming I vote randomly, all I'm doing is increasing the noise-to-signal ratio. If everyone you force to do it votes randomly, then it'll average out.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-09-06T04:59:59.572Z · LW(p) · GW(p)

It's worse than that. The randomness is biased in ways that can be systematically manipulated.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-03T00:13:08.000Z · LW(p) · GW(p)

Many of you seem to think there is an axiom of reasoning that says the persuasiveness of an argument must be independent of what you know about the process that produced that argument. There is no such axiom, nor should there be.

In particular, depending on the process that produces an argument, you may have to infer the existence of evidence not seen.

Hmmm. If I understand you correctly, then two people could produce an identical argument but one would be incorrect because he did it backwards? Do you suppose that there is an implied arrow of time in every syllogism?

More like... Hamlet might be just as good if it had been written by monkeys on typewriters instead of Shakespeare, but there's a reason why it wasn't.

Even if things come out equally by luck in one world, it would have different entanglements in possible worlds. The entanglements wouldn't follow. It's like the lottery ticket that happens to win in your Everett branch or Tegmark duplicate - buying it still wasn't a rational act. Only a forward-flowing algorithm will make the entanglements match up.

comment by Tom_McCabe2 · 2007-10-03T00:28:39.000Z · LW(p) · GW(p)

"Only a forward-flowing algorithm will make the entanglements match up."

To try and rephrase this in simpler language: You do not know the truth. You want to discover the truth. The only thing you get scored on is how close you are to the truth. If you decide "XYZ is a great guy" because XYZ is writing your paycheck, writing down lots of elaborate arguments will not improve your score, because the only thing you get scored on was already written, before you started writing the arguments. If you start writing arguments and then conclude that XYZ is a great guy, you may improve on your score, because you get a chance to change your mind. Changing your mind becomes mandatory as the truth becomes more complex; if you decide on the truth for personal convenience out of 2^100 or 2^200 possible options, you're never going to hit it if you don't work to improve your accuracy.
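
The scoring intuition can be made concrete with a log scoring rule; the sketch below uses made-up numbers and is not from the comment. The score depends only on the probability you actually committed to and on the truth, so arguments written down afterwards change nothing unless they change the probability itself:

```python
import math

# Minimal sketch: a log scoring rule rewards the probability you assigned,
# given whether the event actually happened. Eloquence added after the fact
# does not enter the formula.

def log_score(p_assigned, event_happened):
    """Reward for assigning p_assigned to an event, given whether it happened."""
    return math.log(p_assigned if event_happened else 1 - p_assigned)

xyz_is_great = False      # suppose the truth is that XYZ is not a great guy
p_paycheck_first = 0.95   # belief fixed the moment XYZ started writing the paychecks
p_evidence_first = 0.30   # belief reached by actually weighing the evidence

print(f"conclusion first, arguments later: {log_score(p_paycheck_first, xyz_is_great):.2f}")
print(f"evidence first:                    {log_score(p_evidence_first, xyz_is_great):.2f}")
```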

comment by mtraven2 · 2007-10-03T18:45:26.000Z · LW(p) · GW(p)

This has approximately zero relationship to the way political campaigns (or anything else) happen in the real world, where campaign managers are part of an ideologically biased social network. In fact, their job is essentially to strengthen the connections between voters and a candidate, by whatever means necessary, mostly through propaganda (aka advertising) that combines emotional appeal with the occasional smidgen of rational argument.

Maybe it would be a better world if people didn't work this way, but they do, and I don't see any prospect of changing this. I'm not even sure how rationality can be applied to most electoral issues. Take the issue of abortion. Either you believe abortion is immoral, or not. You can apply rationality to figure out which candidate supports your moral point of view, but it's not much help in setting your root moral values. So how can you make an unbiased choice?

Elections are all about trying to get people who share your biases into power. I know the self-proclaimed rationalists here think the whole process is icky, but part of being rational is dealing with the real world, not the world as you would like it.

That being said, there's room in the electoral process for a bias in favor of rationality, science, humanism, and enlightenment. I think it's pretty clear which of the two major political parties in the US favors those values.

Replies from: pnrjulius
comment by pnrjulius · 2012-06-09T00:31:21.091Z · LW(p) · GW(p)

Rationality has plenty to say about whether abortion is morally permissible.

Are fetuses sentient, for example? Do they feel pain? What would happen socially, economically, if we outlawed abortion? Who would benefit? Who would be harmed? How much?

If you're a strict utilitarian, moral problems reduce to factual problems. But even if you're not, facts often have a great deal to say about morality. This is especially true in issues like economics and foreign policy, where the goals are largely undisputed and it's the facts and methods that are in question. I challenge you to find an American politician who says he wants to increase poverty or undermine American national security. "We need 10% of Americans to starve! And by the way, I hope China invades!" (I guess I should hedge my bets and say that such bizarre people may exist---after all, Creationists do---but they aren't likely to get a lot of votes from any party.)

Also, rationality can assess the arguments used for and against political positions. If one side is using a lot of hard data and the other one is making a lot of logical fallacies... that should give you a pretty good idea of which side to be on. (It's no guarantee, but what is?)

Replies from: PetjaY
comment by PetjaY · 2015-01-07T20:57:15.022Z · LW(p) · GW(p)

First you need to decide what gives utility points to you, which is a moral problem. I consider most computer programs to be sentient, with their working memory being their sentience; I also see pain as just a bit of programming that makes creatures avoid things that cause it, no different from some regulators I have programmed. Therefore I don't care whether fetuses are sentient or feel pain, so for me that does not affect the utility calculation. But most people do not agree.

comment by Raw_Power · 2011-07-06T21:03:27.435Z · LW(p) · GW(p)

Actually this would work nicely if the body that makes this survey doesn't work for any of the candidates, but either has independent votes or is funded by the voters. It would then be in their best interest to show the voters all the evidence, rather than "all the true evidence that serves my candidate".

In other words, if you want to intervene in politics as a rational agent, you shouldn't work for any party: you should work for the public at large! Which brings us to the following question: what is the necessity, nay, the justification for parties existing in this day and age? Aren't there better alternatives in making governments be the faithful servants of popular will, rather than, say, of their own existence or of the interests of a particular group of people?

Replies from: pnrjulius, Grognor
comment by pnrjulius · 2012-06-09T00:32:22.926Z · LW(p) · GW(p)

There are such organizations, and in general the information they put out is a lot more reliable, for exactly these reasons.

Replies from: Raw_Power
comment by Raw_Power · 2012-07-02T21:11:24.946Z · LW(p) · GW(p)

Name three.

Replies from: pnrjulius
comment by pnrjulius · 2012-07-05T04:32:06.832Z · LW(p) · GW(p)

Politico, PolitiFact, FactCheck.org

Replies from: Raw_Power
comment by Raw_Power · 2012-07-09T14:44:08.326Z · LW(p) · GW(p)

Thank you very much for sharing these. I am very glad to find out that such organizations exist.

comment by Grognor · 2012-07-02T21:27:52.803Z · LW(p) · GW(p)

[...] what is the necessity, nay, the justification for parties existing in this day and age?

It's a good question. The answer is "none, because people are crazy and the world is mad".

Replies from: Raw_Power
comment by Raw_Power · 2012-07-09T14:47:06.699Z · LW(p) · GW(p)

That's a bit of a non-explanation: it predicts anything, and nothing. How about, instead, you name three specific patterns of craziness (you know, fallacies, errors in judgment, bad heuristics, and so on) that are decisive factors in this state of affairs.

Replies from: Grognor
comment by Grognor · 2012-07-09T21:02:42.805Z · LW(p) · GW(p)

No. The whole point of that phrase is to not get overly complicated in explaining other people's failures.

Replies from: Raw_Power
comment by Raw_Power · 2012-07-19T09:37:42.437Z · LW(p) · GW(p)

Explaining and rationalizing/justifying are two different things. Pleading "humanity is insane" is, to put it bluntly, unproductive and lazy. If you want to say "don't think about it too hard, it's not worth the effort", then say that, and spare us the theatrics.

comment by pnrjulius · 2012-06-09T00:21:27.496Z · LW(p) · GW(p)

This is why I think an adversarial court system is fundamentally defective.

Granted, inquisitorial court systems have flaws as well... but in principle it seems like an inquisition is actually what we want. We want to know what happened, not find out who is better at arguing.

Replies from: DaFranker
comment by DaFranker · 2012-08-01T19:48:15.131Z · LW(p) · GW(p)

A Bayesian-rational inquisition judge is in principle the ideal court system. The problem is to ensure that this judge remains conformant to its requirements (a problem very akin to the unresolved problem of reflectively self-consistent proofs of friendly self-modification in the Friendly AI field), and that it always has enough power to enforce its decisions.

The ideal system is one where a superintelligence not only knows what happened, but can causally prove that it will not happen again, and thus safely proceed to letting everyone off (including the proven-guilty party) to go about their business.