The Three Little Piggies of Rationality
post by Sam Enright (sam-enright) · 2022-02-15T20:36:33.574Z · LW · GW · 2 comments
There are three related logical fallacies, which I call the Three Little Piggies of rationality. A strawman is when you argue against a simplified view that your opponent doesn’t have. A steelman is when you argue against a more sophisticated view than the one your opponent has. And a weakman is when you argue against your opponent’s actual position, but your opponent is unrepresentatively stupid compared to people who hold that belief in general.
Strawmanning is obviously bad. It’s less obvious that steelmanning is good. Why argue against a view that your opponent doesn’t have? I can think of a few reasons:
1. The actual view is a logical subset of the more sophisticated view, such that, if you defeat the stronger argument, you defeat the weaker one too.
2. An interlocutor will be insulted if you think they’re dumber than they are, but not if you think they’re smarter than they are. Since the steelman is the most sophisticated position they could reasonably hold, it is a hedge against being mistaken about what their actual position is.
3. Arguing against unsophisticated positions doesn’t bring you any closer to the truth. Steelman your opponent because that way the belief you’re dealing with has some chance of being true.
Of these, I’m most uncertain of (1). Why would a worse argument be a subset of a better one? You might say that this is just the definition of a steelman: it has the property that, if it’s false, then the view that’s being steelmanned is false. But in this case, a steelman is quite complicated to construct, as you have to ensure it has a precise logical property.
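To state the property precisely: if O is the opponent’s actual view and S is the steelman, then (1) requires ¬S → ¬O, or equivalently O → S. In words, the original view must logically entail its steelman, so that refuting S automatically refutes O.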
There is a related problem with steelmanning in practice, which is that it is all too often an attempt to shoehorn someone’s argument into your worldview. Imagine a conversation in which a deontologist expresses a belief that is “steelmanned” by a consequentialist:
Deontologist’s belief: Universal healthcare is bad because I shouldn’t have to pay for someone’s bad choices, like being fat or a smoker.
Consequentialist’s steelman: Universal healthcare creates moral hazard, because people do not bear the cost of their irresponsible actions, like eating junk food and smoking.
Notice the bait-and-switch. The initial belief is about moral obligation, and the steelman is only about consequences. A believer in personal responsibility may still favour a market healthcare system even if it increases rates of irresponsible behaviour.
Ozy Brennan gives another example:
Marxist’s belief: In our society, employers exploit workers.
Non-Marxist’s steelman: We do not generally say that people are being “exploited” when they enter into a completely voluntary agreement. The most reasonable understanding of exploitation in this context is that employees are not being paid the value of their labour because they have insufficient bargaining power.
Non-Marxist’s refutation of steelman: This can be solved with some regulations strengthening unions, or a universal basic income.
The mistake in this steelman is that the non-Marxist assumes there metaphysically exist autonomous individuals who can consent to contracts in a capitalist society, which is precisely the premise the Marxist rejects. I don’t know how you argue against this Marxist’s belief, or how to steelman it; looking at the reasons why they accepted Marxism to begin with is a start.
The weakman is the most enigmatic of the Little Piggies. Notice that the problem with weakmanning is not the act itself but the conclusion you draw from it. It’s fine to argue against someone’s actual belief, but if that person is unrepresentatively stupid, you shouldn’t let this prejudice your future interactions with people who hold that belief. But it’s tricky to say what counts as ‘unrepresentative’. Most people have poorly thought-out views about most things. I am one of them. A supposed “weakman” is therefore likely to be representative of the people who hold that belief.
Excessively avoiding weakmanning leads to the smartest pro-X academics arguing against the smartest anti-X academics for complicated reasons that are entirely unrelated to why any normal people believe these things.
The net effect of the Three Piggies – straw, steel and weak – is that it’s difficult, and maybe incoherent, to approvingly cite a thinker who is on a higher rung of sophistication than you. Suppose I am having a conversation with someone who opposes foreign aid, and they point me to a highly sophisticated anti-aid thinker. That thinker is arguing for an anti-aid view more sophisticated than the one my interlocutor holds, and against a pro-aid view more sophisticated than the one I hold. So the relevance is unclear.
The only way I’ve found to escape a kind of epistemic nihilism is to posit that most frequently-discussed social and philosophical views have only a few rounds of refinement available to them before they bottom out in the best possible version of that view. It’s relatively straightforward to state what determinist philosophers believe, for example.
There is a rationalist fable that goes as follows:
When the Big Bad Rationality Wolf reached the strawman, he huffed, and he puffed, and he blew the man down.
When the Big Bad Rationality Wolf reached the weakman, he huffed, and he puffed, and he blew the man down. However, he was careful not to make undue generalisations from this experience.
When the Big Bad Rationality Wolf reached the steelman, he huffed, and he puffed, but he couldn’t blow the man down. The man cried out “You will not blow me down - not by the hair on my chinny chin chin!” The Wolf charitably interpreted him as saying that it was very unlikely that he would be able to blow him down, and moved on.
2 comments
Comments sorted by top scores.
comment by TLW · 2022-02-19T03:09:23.839Z · LW(p) · GW(p)
Why argue against a view that your opponent doesn’t have? I can think of a few reasons:
Another reason, in practice. (I am well aware this is terrible in theory.)
If you have limited time or aren't going to come back later to follow up on your post, a properly executed steelman can fend off someone further down the line 'refuting' your argument with the steelman version.
comment by TLW · 2022-02-19T02:52:52.473Z · LW(p) · GW(p)
Where does "human communication isn't perfect and I think you were badly attempting to explain (strong) argument X' as opposed to deliberately explaining (weak) argument X" fall into this?
It's arguably a variant of (2), but I think it's worth explicitly splitting out.