Dangers of steelmanning / principle of charity

post by gothgirl420666 · 2014-01-16T06:35:31.625Z · LW · GW · Legacy · 94 comments

As far as I can tell, most people around these parts consider the principle of charity and its super saiyan form, steelmanning, to be Very Good Rationalist Virtues. I basically agree and I in fact operate under these principles more or less automatically now. HOWEVER, no matter how good the rule is, there are always exceptions, which I have found myself increasingly concerned about.

This blog post, which I found in the responses to Yvain's anti-reactionary FAQ, argues that even though the ancient Romans had welfare, the policy was motivated not by concern for the poor or by a desire for equality, as our modern welfare policies are; instead, "the Roman dole was wrapped up in discourses about a) the might and wealth of Rome and b) goddess worship... The dole was there because it made the emperor more popular and demonstrated the wealth of Rome to the people. What’s more, the dole was personified as Annona, a goddess to be worshiped and thanked."

So let's assume this guy is right, and imagine that an ancient Roman travels through time to the present day. He reads an article by some progressive arguing (using the rationale one would typically use) that Obama should increase unemployment benefits. "This makes no sense," the Roman thinks to himself. "Why would you give money to someone who doesn't work for it? Why would you reward lack of virtue? Also, what's this about equality? Isn't it right that an upper class exists to rule over a lower class?" Etc. 

But fortunately, between when he hopped out of the time machine and when he found this article, a rationalist found him and explained to him steelmanning and the principle of charity. "Ah, yes," he thinks. "Now I remember what the rationalist said. I was not being so charitable. I now realize that this position kind of makes sense, if you read between the lines. Giving more unemployment benefits would, now that I think about it, demonstrate the power of America to the people, and certainly Annona would approve. I don't know why whoever wrote this article didn't just come out and say that, though. Maybe they were confused". 

Hopefully you can see what I'm getting at. When you regularly use the principle of charity and steelmanning, you run the risk of:

1. Sticking rigidly to a certain worldview/paradigm/established belief set, even as you find yourself willing to consider more and more concrete propositions. The Roman would have done better to really read the modern progressive's argument, think about it, and try to see where he was coming from, rather than automatically filtering it through his own worldview. If he consistently does this he will never find himself considering alternative ways of seeing the world that might be better.

2. Falsely developing the sense that your worldview/paradigm/established belief set is more popular than it is. Pretty much no one today holds the same values that an ancient Roman does, but if the Roman goes around being charitable all the time then he will probably see his own beliefs reflected back at him a fair amount.

3. Taking arguments more seriously than you possibly should. I feel like I constantly see people in rationalist communities say stuff like "this argument by A sort of makes sense, you just need to frame it in objective, consequentialist terms like blah blah blah blah blah" and then follow with what looks to me like a completely original thought that I've never seen before. But why didn't A just frame her argument in objective, consequentialist terms? Do we assume that what she wrote was sort of a telephone-game approximation of what was originally a highly logical consequentialist argument? If so, where can I find that argument? And if not, why are we assuming that A is a crypto-consequentialist when she probably isn't? And if we're sure that objective, consequentialist logic is The Way To Go, then shouldn't we be very skeptical of arguments that seem like their basis is in some other reasoning system entirely?

4. Just having a poor model of people's beliefs in general, which could lead to problems.

Hopefully this made sense, and I'm sorry if this is something that's been pointed out before.

Comments sorted by top scores.

comment by Anatoly_Vorobey · 2014-01-13T20:23:49.953Z · LW(p) · GW(p)

To me, charitable reading and steelmanning are rather different, though related.

To read charitably is to skip over, rather than use for your own rhetorical advantage, things in your interlocutor's words like ambiguity, awkwardness, slips of the tongue, inessential mistakes. On the freeway of discussion, charitable reading is the great smoother-over of cracks and bumps of "I didn't mean it like that" and "that's not what it says". It is always a way towards a meeting of the minds, towards understanding better What That Person Really Wanted To Say - but nothing beyond that. If you're not sure whether something is a charitable reading, ask yourself if the interlocutor would agree - or would have agreed, when you're arguing with a text whose author is absent or dead - that this is what they really meant to say.

I prefer "charitable reading" and not "the principle of charity" because the latter might be applied very broadly. We might assume all kinds of things about the interlocutor's words acting out of what we perceive as charity. For example, "let's pretend you never said that" in response to a really stupid or vile statement might strike many people as an application of the principle of charity, but it is clearly not a charitable reading. And that's good - it's really a different sort of thing, whether desirable or not.

Steelmanning, on the other hand, is all about changing the argument against your position to a stronger one against your position. The "against your position" part is left out of some good explanations in other comments here, but I think it's crucial. Steelmanning is not a courtesy or a service to my interlocutor. It is a service to me. It is my attempt to build the strongest case I can against my position, so I can shatter it or see it survive the challenge. The interlocutor might not agree, if I were to ask them, that my steelmanned argument is really stronger than theirs; that's no matter. I'm not doing it for them, I'm doing it for myself.

When you look at it like this, there should be no danger of confusing the steelmanned argument with the interlocutor's original one. The steelmanned argument is properly yours, it is based on the original argument but should not be attributed to the interlocutor even rhetorically. There's no benefit to the conversation from doing that. You're not doing anyone a favor by pretending they said something they didn't.

In a conversation, live or close to live, charitable reading is always the appropriate and virtuous thing to do, but steelmanning your interlocutor's argument might not be. It often is appropriate, but that isn't a given. Remember, the steelmanned argument is your creation and is meant for you, you owe it to yourself to test your beliefs with it, but not necessarily in the context of this conversation. Not because concealing it is an easier way to victory, but rather because what's steelmanned for you might not be steelmanned or even interesting to your interlocutor. Their argument said A, and you may have found a way to strengthen it further to say B, but they might not want to claim B, to defend B, to agree that B is stronger than A. That said, if you do think the steelmanned argument would be useful to them, by all means introduce it, but explicitly as your own. Some phrases that are commonly said in such cases would be: "I see your point here, and I would even add ... but still, I would disagree...", or "You could also say that...", or you can propose a back-and-forth: "I think this is wrong because of... You might want to reply that... But to that, I would say..." In all these cases, the interlocutor is free to agree or disagree with your explicitly introduced steelman.

Now, going to the example in the post, where the ancient Roman chooses to interpret a progressive argument for increasing welfare as "really" carrying between the lines the ancient Roman rationale. He is not doing a charitable reading of his interlocutor's words - they would definitely not agree that this is what they meant to say. And he is not steelmanning anything either, because he hasn't strengthened an argument against his own position; rather, he has fortified his existing beliefs by manufacturing another fake confirmation. If he were to modify the progressive's argument in some way that would make it harder for him to interpret it in the ancient-Roman sense, that would be steelmanning.

To sum up:

  • Charitable reading is always done for the sake of the discussion, to improve its usefulness, to reduce noise, and to avoid conscious or unwitting misrepresentation. It should never introduce anything to the argument that its original owner wouldn't have recognized as what they said. It's always a good idea.
  • Steelmanning is always done for your own sake. It always says something new that the original owner of the argument didn't think of or at least didn't say. When put back into the discussion, it should be introduced explicitly as your words. Steelmanning is usually a good idea whenever something important to you is being discussed. Steelmanning every trivial thing is tedious and silly; you're doing it for yourself, so you get to decide what should be steelmanned.
Replies from: gothgirl420666, AllAmericanBreakfast, ThisSpaceAvailable
comment by gothgirl420666 · 2014-01-14T07:59:30.936Z · LW(p) · GW(p)

Steelmanning is not a courtesy or a service to my interlocutor. It is a service to me. It is my attempt to build the strongest case I can against my position, so I can shatter it or see it survive the challenge. The interlocutor might not agree, if I were to ask them, that my steelmanned argument is really stronger than theirs; that's no matter. I'm not doing it for them, I'm doing it for myself.

Steelmanning is always done for your own sake. It always says something new that the original owner of the argument didn't think of or at least didn't say. When put back into the discussion, it should be introduced explicitly as your words.

Remember, the steelmanned argument is your creation and is meant for you, you owe it to yourself to test your beliefs with it, but not necessarily in the context of this conversation. Not because concealing it is an easier way to victory, but rather because what's steelmanned for you might not be steelmanned or even interesting to your interlocutor. Their argument said A, and you may have found a way to strengthen it further to say B, but they might not want to claim B, to defend B, to agree that B is stronger than A. That said, if you do think the steelmanned argument would be useful to them, by all means introduce it, but explicitly as your own.

I agree, and this is sort of what I find problematic; I'll explain in a second. (Notice that all four "risks" I mentioned are risks to the Roman and not the progressive.)

Now, going to the example in the post, where the ancient Roman chooses to interpret a progressive argument for increasing welfare as "really" carrying between the lines the ancient Roman rationale. He is not doing a charitable reading of his interlocutor's words - they would definitely not agree that this is what they meant to say. And he is not steelmanning anything either, because he hasn't strengthened an argument against his own position; rather, he has fortified his existing beliefs by manufacturing another fake confirmation. If he were to modify the progressive's argument in some way that would make it harder for him to interpret it in the ancient-Roman sense, that would be steelmanning.

I think I was a little unclear here, sorry. Imagine that the Roman is already against increasing welfare, for whatever reason. He first reads the progressive article and thinks that the progressive's argument is dumb. He then remembers steelmanning and re-interprets the article as arguing that welfare reform would incur Annona's favor. He finally realizes that the position isn't that bad when seen in this light, and begins to be a little less certain that increasing welfare would be a bad idea. This is sort of what I was imagining when I wrote the post. The belief that's being tested is not the entire ancient Roman worldview, it's whether or not welfare should be increased.

The thing is, when the Roman creates the new argument "increasing welfare would incur Annona's favor", that's a completely new idea that he came up with himself, and as such it should be held skeptically. Imagine if Annona in fact liked welfare when it was in the form of gold coins and hated it when it was in the form of a vague, baseless digital currency, and the Roman had no idea, not being a priest of Annona. However, he might mistakenly think that the fact that the idea "we should increase welfare for equality" is fairly popular and held by smart people lends authority to the idea "increasing welfare would incur Annona's favor", but in fact these are pretty distinct ideas.

I feel like the steelmanning process usually outputs a new argument that you can look at and say "yeah, that kind of makes sense". But I was reading some of the "Tupac is alive" conspiracy theories the other day, and I thought they kind of made sense. For me, an argument kind of making sense is pretty bad evidence for its truth - good evidence would be if I read the argument, then the rebuttals, then the rebuttals to the rebuttals, then the rebuttals to the rebuttals to the rebuttals, and so on until I finally found a point where I could say "okay, that really does make sense". But I haven't had the time, or likely the ability, to do this with most arguments, so I usually form my beliefs off of vague intuitions around authority. What I guess I'm afraid of is that I'll conflate my original steelman with a superficially similar popular argument, and then these intuitions will get corrupted and I'll be confused.

Obviously the Roman thing is a pretty dumb cartoony example and it seems too obvious to fall for in real life, but I feel like this usually works on a much more subtle, implicit level, and in fact I think that's why I have a lot of trouble putting it into words. I find this topic really confusing to talk about, so hopefully I didn't say anything too dumb. I think I mainly agree with your post, though, and what everyone else is saying. Again, I think steelmanning is 90% a good thing.

comment by DirectedEvolution (AllAmericanBreakfast) · 2020-10-16T08:12:43.340Z · LW(p) · GW(p)

Sometimes, our native curiosity is a poor guide to the questions we should really be asking.

The Roman will have to see the lack of temples to Annona, read the histories of ancient Rome, experience a whole lot of people making fun of his toga and the police doing nothing to stop it, and marvel at the magic that the common citizens of this new land carry in their pocket.

But even that might not work if the cherished belief is infectious enough. People will lead miserable lives, commit unspeakable acts, deny their own senses, and go to an early grave in order to maintain a false belief or avoid an uncomfortable thought.

Steelmanning only helps in the fortunate case where curiosity and intuition are fairly trustworthy guides in our pursuit of a meaningful truth.

comment by ThisSpaceAvailable · 2014-01-23T01:31:57.079Z · LW(p) · GW(p)

To read charitably is to skip over, rather than use for your own rhetorical advantage, things in your interlocutor's words like ambiguity, awkwardness, slips of the tongue, inessential mistakes. On the freeway of discussion, charitable reading is the great smoother-over of cracks and bumps of "I didn't mean it like that" and "that's not what it says". It is always a way towards a meeting of the minds, towards understanding better What That Person Really Wanted To Say - but nothing beyond that. If you're not sure whether something is a charitable reading, ask yourself if the interlocutor would agree - or would have agreed, when you're arguing with a text whose author is absent or dead - that this is what they really meant to say. ... Now, going to the example in the post, where the ancient Roman chooses to interpret a progressive argument for increasing welfare as "really" carrying between the lines the ancient Roman rationale. He is not doing a charitable reading of his interlocutor's words - they would definitely not agree that this is what they meant to say.

The first quote implies a subjective standard for charitable reading; charitable reading is when one reads the argument in a way they believe the other person would agree with. The second, on the other hand, implies an objective standard: a reading is charitable if it is what the other person actually would agree with. Can you clarify this issue?

Steelmanning, on the other hand, is all about changing the argument against your position to a stronger one against your position. The "against your position" part is left out of some good explanations in other comments here, but I think it's crucial.

If you end up being convinced by your own steelmanned argument, is that steelmanning? It's against your original position, but for your new position. Isn't there a temptation to come up with as strong an argument as possible, given the constraint that the steelmanned argument be just weak enough to not be convincing?

comment by whales · 2014-01-13T08:10:11.200Z · LW(p) · GW(p)

I'm reminded of Bret Victor's recent comment on reading Latour:

It’s tempting to judge what you read: "I agree with these statements, and I disagree with those." However, a great thinker who has spent decades on an unusual line of thought cannot induce their context into your head in a few pages. It’s almost certainly the case that you don’t fully understand their statements. Instead, you can say: "I have now learned that there exists a worldview in which all of these statements are consistent." And if it feels worthwhile, you can make a genuine effort to understand that entire worldview. You don't have to adopt it. Just make it available to yourself, so you can make connections to it when it's needed.

That, to me, is a principle of charity well applied. I wouldn't at all say that steelmanning is a stronger form of that -- a rationalist trying to steelman Latour would be like your Roman trying to steelman progressivism. Steelmanning is about constructing what you see as stronger versions of an argument, while the principle of charity is about trying to get into your interlocutor's head under the assumption that whatever they're saying or doing seems reasonable and right to them. The latter is much harder and rarer, in my experience, although that's not to say the former isn't more valuable in some situations.

You describe some real problems with steelmen. I think a first-order defense against them is just to ask whether your interlocutor agrees with your steelman or not.

Replies from: jsteinhardt, TheOtherDave, ChrisHallquist
comment by jsteinhardt · 2014-01-13T08:49:32.459Z · LW(p) · GW(p)

This is my favorite quote in several months :). You should add it to the Rationality Quotes thread.

Replies from: whales
comment by whales · 2014-01-13T09:04:40.433Z · LW(p) · GW(p)

Done, thanks for the reminder.

comment by TheOtherDave · 2014-01-13T19:16:41.148Z · LW(p) · GW(p)

Agreed completely.
This formulation of the principle of charity also reminds me a lot of Miller's law.

comment by ChrisHallquist · 2014-02-17T04:49:00.339Z · LW(p) · GW(p)

It’s tempting to judge what you read: "I agree with these statements, and I disagree with those." However, a great thinker who has spent decades on an unusual line of thought cannot induce their context into your head in a few pages. It’s almost certainly the case that you don’t fully understand their statements. Instead, you can say: "I have now learned that there exists a worldview in which all of these statements are consistent."

False. Seems pretty obvious that lots of people have inconsistent worldviews.

Replies from: Vaniver
comment by Vaniver · 2014-02-17T19:52:19.301Z · LW(p) · GW(p)

False.

I must say, I find this statement rather amusing in context.

Seems pretty obvious that lots of people have inconsistent worldviews.

Does the original quote describe all, or almost all, people? It looks like it describes great thinkers - that is, people who should give you pause when they disagree with you. And if this is the first time you've met someone, you don't know whether or not they're a great thinker, and you may be overweighting a perceived inconsistency in your reading of their statement of their beliefs when determining whether or not they're a good enough thinker to puzzle through.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-02-17T20:39:44.633Z · LW(p) · GW(p)

Er, good point. It didn't occur to me to think so-called "great thinkers" are that much less likely to be inconsistent than most people. But on reflection I stand by that. See e.g. Eric Schwitzgebel on Kant.

comment by Apprentice · 2014-01-13T11:04:31.547Z · LW(p) · GW(p)

Quoth Yvain:

I no longer try to steelman BETA-MEALR [Ban Everything That Anyone Might Experience And Later Regret] arguments as utilitarian. When I do, I just end up yelling at my interlocutor, asking how she could possibly get her calculations so wrong, only for her to reasonably protest that she wasn't making any calculations and what am I even talking about?

Replies from: Douglas_Knight, Calvin
comment by Douglas_Knight · 2014-01-13T20:20:19.581Z · LW(p) · GW(p)

Thanks. I didn't believe the original post without examples.

comment by Calvin · 2014-01-13T12:34:16.504Z · LW(p) · GW(p)

He is so obviously superior to his opponents, isn't he? I am not a fan of such one-sided accounts, as the other side could just as easily write:

I no longer try to steelman BETA-MEALR [Ban Everything That Anyone Might Experience And Later Regret] arguments as utilitarian. When I do, I just end up yelling at my interlocutor, asking how she could possibly get her arguments so wrong, only for her to reasonably protest that she wasn't making any arguments, but instead juggled numbers pulled directly from her ass, and what am I even talking about?

Is it really proof of superior debating skills, or a piece of evidence against steelmanning?

Replies from: Apprentice, ephion
comment by Apprentice · 2014-01-13T12:46:03.521Z · LW(p) · GW(p)

Yes, you could turn the quote upside down and it would still work. That was kind of the point. For effective communication it's not a good idea to talk as if your opponent is operating on your assumptions rather than her own assumptions.

Replies from: Calvin
comment by Calvin · 2014-01-13T12:53:42.111Z · LW(p) · GW(p)

Well, this is certainly something I agree with, and after looking for the context of the quote I see that it can be interpreted that way.

I agree that my interpretation wasn't very, well... charitable, but without context it really reads like yet another chronicle of a superior debater celebrating victory over someone who dared to be wrong on the Internet.

Replies from: JGWeissman
comment by JGWeissman · 2014-01-13T14:50:05.354Z · LW(p) · GW(p)

It seems to me that in the quote Yvain is admitting an error, not celebrating victory. Try taking his use of the word "reasonably" at face value.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-17T01:55:24.521Z · LW(p) · GW(p)

Really? When I read that article, I thought he was ramming home his point that his opponents are secretly deontologists there - hence the title of the post in question. Perhaps I too have failed to apply the principle of charity.

(Insert metahumourous joke about not bothering because of the OP's topic here.)

Replies from: JGWeissman
comment by JGWeissman · 2014-01-17T02:33:26.982Z · LW(p) · GW(p)

I thought he was ramming home his point that his opponents are secretly deontologists there

I think the point was that his opponents are openly deontologists, making openly deontological arguments for their openly deontological position, and therefore they are rightly confused and not moved by Yvain's refutation of a shoehorning of their position into a consequentialist argument, when they never made any such argument. Yvain now understands this, and therefore he doesn't do it anymore.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-17T04:09:57.785Z · LW(p) · GW(p)

Well, this is what comes immediately after the quoted paragraph, for context:

And yet without the utilitarian angle, this whole argument falls apart on exactly the “proving too much” grounds pushed by our hypothetical politician above. If you want to ban euthanasia, why not ban health care? If you want to ban prostitution, why not McJobs? If you want to ban BDSM, why not all consensual sex? If you don’t have a good quantitative argument ready, you sure can’t support it on qualitative grounds alone.

Look around you. Just look around you. Have you figured out what we’re looking for yet? That’s right. The answer is sacred values and taboo trade-offs.

So my interpretation doesn't seem entirely unreasonable. I haven't finished rereading the whole post yet, though.

Replies from: JGWeissman
comment by JGWeissman · 2014-01-17T04:36:54.746Z · LW(p) · GW(p)

Arguing that the consequentialist approach is better than the deontological approach is different from skipping that step and going straight to refuting your own consequentialist argument for the position others were arguing on deontological grounds. Saying they should do some expected utility calculations is different from saying the expected utility calculations they haven't done are wrong.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-18T14:39:07.751Z · LW(p) · GW(p)

Except he isn't doing that. He's misrepresenting people's arguments (due to misunderstanding?), tearing his strawman apart, and then "explaining" the poor quality of this argument by declaring that his opponents are lying about their beliefs, and that their actual beliefs consist of simple deontological rules.

What bothers me is when they do this and they pretend they’re making a value-free statement about respecting the rights of others. “Oh, well, we’re a liberal democracy and people should be able to do whatever they like with their own bodies, but I’m just worried about people being euthanized against their will, and that would be a violation of consent, and a good liberal democracy like us wouldn’t want to violate consent, nosirree!”

No. You do not care how many people are kept alive without their consent, just like you do not care how many people work McJobs without their consent, or how many people feel pressured into going to social gatherings they don’t want to attend. You care about consent solely when it serves the purpose of your sacred values. You would gladly violate the consent of a billion people on some unrelated issue rather than risk a single consent violation of your own personal pet project.

... and obviously, an arbitrary set of deontological rules is not an argument, so he no longer has to actually disprove it.

I'm starting to think I need to write a larger deconstruction of his post, actually, but I hope you see what I mean. (Thank Azathoth that Yvain is such a clear writer and thinker so I can show this so simply with quotes like this. Although I suppose he wouldn't have as many of us caring what he writes if it wasn't worth reading.)

Replies from: JGWeissman
comment by JGWeissman · 2014-01-18T15:47:00.166Z · LW(p) · GW(p)

Yvain says that people claim to be using one simple deontological rule "Don't violate consent" when in fact they are using a complicated collection of rules of the form "Don't violate consent in this specific domain" while not following other rules of that form.

And yet, you accuse him of strawmanning their argument to be simple.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-18T20:07:11.813Z · LW(p) · GW(p)

Yvain says that people claim to be using one simple deontological rule "Don't violate consent"

Sort of, yes. I definitely need to write a full post on why I believe his criticism is subtly unfair in various ways - likely because this is an emotional subject for him, so he is somewhat less inclined to pull his punches and steelman opposing views; and he is both a brilliant writer and a brilliant thinker.

in fact they are using a complicated collection of rules of the form "Don't violate consent in this specific domain" while not following other rules of that form

Actually, he accuses them of claiming to, and advocating following those rules only in those situations where doing so agrees with their agenda - which he characterizes, not unreasonably. A charge of hypocrisy, rather than inconsistency.

And yet, you accuse him of strawmanning their argument to be simple.

I do? I accuse him of strawmanning their arguments to be cartoonishly poor arguments, but simple...?

Ah! Are you perhaps referring to my characterization of simple deontological rules ("thou shalt not kill" etc.)? Yes, I would generally reject those as overly simple - there are many situations where one might be called upon to kill for the greater good, for example.

(There are vast differences between deontological ethics, rule utilitarianism, and the optimal laws for legal systems both real and hypothetical.)

comment by ephion · 2014-01-14T01:15:31.016Z · LW(p) · GW(p)

Speaking of the Principle of Charity...

Replies from: Calvin
comment by Calvin · 2014-01-14T01:18:20.107Z · LW(p) · GW(p)

Yes, I do stand corrected.

comment by Viliam_Bur · 2014-01-13T08:48:22.333Z · LW(p) · GW(p)

I consider steelmanning to be a safeguard against reversing stupidity, especially in political contexts. If my opponent says X, I am likely to say non-X for many bad reasons such as: my opponent defends X using arguments I disagree with.

But if you show me that X can also be defended using arguments I would agree with, then I will be less likely to automatically throw X away, and I will be more able to consider X on its own merits.

Steelmanning is good for understanding "X can also be defended by good arguments", and is dangerous because it provides a bad model of my opponent. (Unless my original model was so bad that the steelmanning didn't make it worse; which wouldn't be completely unexpected in a political debate.)

In your example, the time-travelling Roman would get a completely wrong idea about why Obama wants to increase unemployment benefits. But he would get a useful insight about why he might want to support increasing unemployment benefits. It's bad for modelling Obama; it's good for thinking about possible consequences of the unemployment benefits. (If you were an Annona-worshipping Roman, you would want to realize she would be happy about the unemployment benefits.)

Analogously, steelmanning Reactionary ideas is bad for modelling Reactionaries, but it could be good for thinking about some topics they raise. It may help you identify parts of their set of ideas which may have value for you, outside of the original set.

But this difference is really not obvious, and people are likely to get it wrong. Perhaps it would be good to say it explicitly when steelmanning anything: "This is not why those people defend their ideas, but it's why someone else might find the same ideas meaningful."

Replies from: None
comment by [deleted] · 2014-01-13T17:43:14.031Z · LW(p) · GW(p)

A nitpick:

Analogously, steelmanning Reactionary ideas is bad for modelling Reactionaries

This doesn't follow. Many reactionaries are either intimately involved in LW culture or are sympathetic towards it, so an LW member trying to model a reactionary is probably doing a much better job than a Roman modeling Obama.

Replies from: gothgirl420666, Randy_M
comment by gothgirl420666 · 2014-01-14T08:09:24.230Z · LW(p) · GW(p)

This is interesting because I sort of see most Reactionary positions as already being steelmen - putting beliefs that most people think are based on superstitious, bigoted, backwards modes of reasoning into consequentialist, logical, LW-style rhetoric.

(Is there a notable difference between the politics held by someone described as "Reactionary" and someone described as "far-right"? I can't figure this out. "Reactionary" seems to me like basically meaning "far-right, but smart".)

Replies from: Moss_Piglet, Douglas_Knight, MugaSofer
comment by Moss_Piglet · 2014-01-14T19:34:26.037Z · LW(p) · GW(p)

(Is there a notable difference between the politics held by someone described as "Reactionary" and someone described as "far-right"? I can't figure this out. "Reactionary" seems to me like basically meaning "far-right, but smart".)

"Far Right" implicitly invokes the Overton Window; most anything you can''t comfortably say in public anymore is Far Right, even if it is actually thought by the majority of people or was itself a leftist position a few decades ago. Saying something is Far Right or Far Left from an assumed neutral position can be useful to elucidate the boundaries of conventional thought, or to exploit anchoring in an unsophisticated audience, but provides little information on it's own.

In general, Reactionaries want to reboot society[1] to before some big event which symbolizes the beginning of visible civilizational decline (Like May 1968, Reconstruction, the French Revolution, the English Civil War, the Protestant Reformation, the Edict of Milan, etc.), whereas Conservatives try to keep the status quo from deteriorating further with constant patches. That said, today's conservatism becomes tomorrow's reaction as the traditions they failed to conserve are destroyed fully in the next Great Leap Forward.

I realize that's general to the point of vagueness but it's tough to hit a moving target in the first place even when you know what you're aiming at. Golden Dawn and the Tea Party are both "far right," and neither are particularly reactionary IMO, but they're also fairly dissimilar so comments about one don't apply much to the other.

[1]One big stumbling-block to understanding this is the wrongheaded idea that technological advance and moral "progress" are inseparable, seeing the return of (for example) an ancien-regime style aristocracy as somehow necessitating turning off the internet or throwing away antibiotics. Culture and technology are certainly linked, so you should expect large shifts in one to affect the other, but human nature itself changes fairly slowly and it is very suspicious to see alterations to social organization racing ahead of the demographic changes which naturally guide them.

Replies from: Lumifer
comment by Lumifer · 2014-01-14T20:27:45.856Z · LW(p) · GW(p)

Do you think this is a good parallel (if we are borrowing terms from religious studies):

conservatives == traditionalists
reactionaries == fundamentalists

?

Replies from: Moss_Piglet, Nornagest
comment by Moss_Piglet · 2014-01-14T22:08:01.875Z · LW(p) · GW(p)

The comparison doesn't have a great connotation, given that "fundamentalist" is typically an epithet, but it's not too far off in terms of the denotation.

Personally though, I would say it's more of an Exoteric / Esoteric split; conservatives seem to spend most of their effort preserving outward forms and rituals of their cultures in an effort to keep the fire going, whereas reactionaries see it as burnt out already and so look back for the essential (in both senses of the word) elements to spark a new one. A good example is comparing Chesterton's Catholic apology with Evola's promotion of Tradition - not to imply that you can't be a Catholic reactionary, but just as an example of a differing mindset. Of course, esoterica being what it is, it's a bit tough to get a grip on and much easier to talk about than to understand fully.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-17T01:59:21.290Z · LW(p) · GW(p)

"fundamentalist" is typically an epithet

Of course, "reactionary" was also traditionally a derogatory term. So perhaps that isn't surprising.

Replies from: Lumifer
comment by Lumifer · 2014-01-17T02:02:17.916Z · LW(p) · GW(p)

Of course, "reactionary" was also traditionally a derogatory term.

In the context of religious studies "fundamentalist" is not derogatory but descriptive.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-17T03:37:56.741Z · LW(p) · GW(p)

Most epithets start out as descriptive terms with some sort of negative connotation.

comment by Nornagest · 2014-01-14T21:48:44.980Z · LW(p) · GW(p)

IANAR, but "fundamentalist" connotes strong deontological beliefs to me, and in particular a stance wherein anything violating some established creed X is definitionally considered evil. That tends to imply at least self-perceived reaction within religious contexts, since most religions' moral contents were developed relative to mores at the times and places of their founding; also because many religions include doctrine describing some sort of lost golden age. But the reverse doesn't seem to be true: we can imagine wanting to rewind parts of society to some prior state on strictly consequentialist grounds, without invoking any particular deontology.

(Indeed, given the amount of variation over time it would be surprising if there weren't historical situations we'd prefer, unless we believe in some sort of ethical teleology or an Yvain-style deal where the ethical sophistication we can get away with supporting scales with technical capability, at least in agrarian/industrial societies. I find the latter somewhat convincing, myself.)

comment by Douglas_Knight · 2014-01-15T15:00:02.973Z · LW(p) · GW(p)

The words have been used inconsistently throughout the centuries, but the etymology is that a reactionary wants to roll back a change, while a far-right person might have the same politics, without the implication of having noticed where society is.

comment by MugaSofer · 2014-01-17T01:45:51.351Z · LW(p) · GW(p)

I sort of see most Reactionary positions as already being steelmen - putting beliefs that most people think are based on superstitious, bigoted, backwards modes of reasoning into consequentialist, logical, LW-style rhetoric.

... and then finding the steelman so persuasive you convert to the other side.

"Reactionary" seems to me like basically meaning "far-right, but smart".

Zing!

comment by Randy_M · 2014-01-14T14:27:53.255Z · LW(p) · GW(p)

But the point is, is steelmanning someone making a better model of them than just taking them at their own words? If the point is in fact to understand them, rather than to challenge your own position, and they are arguing competently and honestly, it probably is. Edit: Meant "is not"!

comment by David_Gerard · 2014-01-14T09:46:41.975Z · LW(p) · GW(p)

The Wikipedia formulation is "write for your enemy", i.e. state their position sufficiently well that they would accept you have stated it sufficiently well. This is useful in that they are generally present and will let you know in no uncertain terms if you've failed to achieve this. This is only a guideline, as unreasonable opponents do exist, and the point of the stretch is to let you write a better neutral article.

Replies from: ThisSpaceAvailable
comment by ThisSpaceAvailable · 2014-01-24T01:04:02.218Z · LW(p) · GW(p)

I have found unreasonable opponents to be unreasonably common, and attempts at clarification are quite often ignored, or even elicit hostility, and often are simply exploited.

Replies from: David_Gerard
comment by David_Gerard · 2014-01-24T17:02:52.506Z · LW(p) · GW(p)

This is, of course, true. Nevertheless, "write for the enemy" is still a useful guideline.

comment by Kaj_Sotala · 2014-01-13T16:41:58.298Z · LW(p) · GW(p)

But why didn't A just frame her argument in objective, consequentialist terms? Do we assume that what she wrote was sort of a telephone-game approximation of what was originally a highly logical consequentialist argument? If so where can I find that argument? And if not, why are we assuming that A is a crypto-consequentialist when she probably isn't?

I never thought that steelmanning implied necessarily assuming that A would agree with the steelmanned version. If A says something that seems to have a reasonable point behind it but is expressed badly, then yes, in that case the steelmanned version can be something that they'd agree with. But they might also say something that was obviously wrong and not worth engaging with - but which nonetheless sparked an idea about something that was more reasonable, and which might be interesting to discuss.

In either case, we've replaced a bad argument with a better one that seems worth considering and discussing. Whether or not A really intended the argument to be understood like that doesn't matter that much.

To take a more concrete example, in What Data Generated That Thought?, I wrote:

All outcomes are correlated with causes; most statements are evidence of something. Michael Vassar once gave the example of a tribe of people who thought that faeries existed, lived in a nearby forest, and you could see them once you became old enough. It later turned out that the tribe had a hereditary eye disease which caused them to see things from the corners of their eyes once they got old. The tribe's theory of what was going on was wrong, but it was still based on some true data about the real world. A scientifically minded person could have figured out what was going on, by being sufficiently curious about the data that generated that belief.

If the person giving the original argument is the tribe, the original argument is "faeries exist", and the steelmanned argument is "these people carry the genes for a hereditary eye disease", then our steelmanned version certainly isn't what the tribe originally intended. But what does it matter? Steelmanning their argument still gave us potentially useful information.

comment by [deleted] · 2014-01-13T08:37:05.574Z · LW(p) · GW(p)

There's another way it can go wrong:

"You claim X, which sounds pretty bizarre to me so I'll charitably assume you meant a weaker version X' that fits in my worldview, and I'll forget that you originally claimed an argument for X."

Replies from: gothgirl420666, bbleeker
comment by gothgirl420666 · 2014-01-14T08:15:30.021Z · LW(p) · GW(p)

That's pretty much the same thing as my point #1.

comment by Sabiola (bbleeker) · 2014-01-16T11:17:06.951Z · LW(p) · GW(p)

Wouldn't that be strawmanning though, not steelmanning?

Replies from: None
comment by [deleted] · 2014-01-16T11:46:33.093Z · LW(p) · GW(p)

I think it's not quite the same. Strawmanning is inventing a less defensible, normally more extreme, version of an argument. This is inventing a more defensible, less extreme version of an argument.

Replies from: ThisSpaceAvailable, Eugine_Nier
comment by ThisSpaceAvailable · 2014-01-24T01:06:38.417Z · LW(p) · GW(p)

No, this isn't creating a less extreme argument, it's creating a less extreme thesis.

comment by Eugine_Nier · 2014-01-17T03:08:59.555Z · LW(p) · GW(p)

Strawmanning is inventing a less defensible, normally more extreme, version of an argument. This is inventing a more defensible, less extreme version of an argument.

Being more defensible and being more extreme are not the same thing, in fact frequently it is the more extreme versions of arguments that are easier to defend.

comment by ChrisHallquist · 2014-01-15T03:13:10.774Z · LW(p) · GW(p)

Excellent points. I've never been a huge fan of steelmanning. A couple more:

  1. People talk as if steelmanning is inherently a virtue, but in practice they're selective about what they steelman. You won't see many steelmannings of Young Earth Creationism around these parts--or even plain vanilla theism. If people are going to steelman, it would be nice for them to be more up-front about why they chose to steelman that particular argument (or, when they're telling someone else "hey, why aren't you steelmanning the person you're attacking," to be up-front about why that particular argument deserves steelmanning).

  2. If you choose which arguments to steelman more or less at random, or for bad reasons, it seems like an instance of privileging the hypothesis.

Replies from: Prismattic, MugaSofer
comment by Prismattic · 2014-01-15T05:14:41.780Z · LW(p) · GW(p)

What would steelmanning Young Earth Creationism even look like? Young Earth Creationism is already the playdoughman* version of Creationism. If you steelman it, it wouldn't be Young Earth anymore.

*If I'm never remembered for anything else in the rationalosphere, I would like to be known as the creator of the term "playdoughmanning".

Replies from: Kawoomba, ChrisHallquist, army1987, MugaSofer
comment by Kawoomba · 2014-01-16T10:43:26.243Z · LW(p) · GW(p)

If I'm never remembered for anything else in the rationalosphere, I would like to be known as the creator of the term "playdoughmanning".

Please stop with the prismatticmanning of tortured neologisms. The ensuing syllabilistic explosion might pose a memetic hazard (Great Filter = Tower of Babble).

Replies from: Will_Sawin
comment by Will_Sawin · 2014-01-16T18:30:27.631Z · LW(p) · GW(p)

It would be amusing if the single primary reason that the universe is not buzzing with life and civilization is that any sufficiently advanced society develops terminology and jargon too complex to be comprehensible, and inevitably collapses because of that.

Replies from: Kawoomba
comment by Kawoomba · 2014-01-16T18:56:28.635Z · LW(p) · GW(p)

¿Qué?

Replies from: Will_Sawin
comment by Will_Sawin · 2014-01-16T22:50:58.938Z · LW(p) · GW(p)

That's what the Great Filter is, no?

comment by ChrisHallquist · 2014-01-16T02:13:36.783Z · LW(p) · GW(p)

What would steelmanning Young Earth Creationism even look like? Young Earth Creationism is already the playdoughman* version of Creationism. If you steelman it, it wouldn't be Young Earth anymore.

I don't dispute this, except that this is also how I feel about some of the views some other people in the rationalist community think deserve to be steelmanned.

Indeed, is there ever a case where it isn't at least plausible that the steelmanned version of view X would no longer be view X?

comment by A1987dM (army1987) · 2014-01-16T17:36:09.362Z · LW(p) · GW(p)

What would steelmanning Young Earth Creationism even look like?

http://squid314.livejournal.com/327646.html

comment by MugaSofer · 2014-01-17T01:39:07.952Z · LW(p) · GW(p)

What would steelmanning Young Earth Creationism even look like?

Aliens Did It? Seems to be used in that capacity sometimes.

playdoughman

That's a real view so weak it resembles a strawman of a real belief, yes?

comment by MugaSofer · 2014-01-17T01:36:54.719Z · LW(p) · GW(p)

Yes, that would be tribal affiliations showing. I always assumed the cure for this was more steelmanning...

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-01-17T03:29:21.647Z · LW(p) · GW(p)

Which part? Is lack of interest in steelmanning YEC just a sign of tribal affiliations?

Replies from: MugaSofer
comment by MugaSofer · 2014-01-17T03:34:52.475Z · LW(p) · GW(p)

Both points - the lack of consistency, and privileging the hypothesis.

(In fairness, it's more than "tribal affiliations". There are probably all sorts of biases creating this particular danger in being half a rationalist.)

comment by Kawoomba · 2014-01-16T06:54:20.998Z · LW(p) · GW(p)

Steelmanning is optimal * when looking for true beliefs about the world **, as long as you're aware that the source of the argument only provided a weaker form of the argument ***.

* In an environment without any resource constraints, which unfortunately is never the case. Still, if you've got time on your hands and nothing else to do ...

** Arguments in their maximally persuasive form have more potential to shift your beliefs in the correct direction. Neglecting a potential strong form of an argument is tantamount to ignoring evidence.

*** So steelmanning the cold fusion crackpot's argument may have brought you to firmly believe in cold fusion; that's fine, as long as you don't forget that the crackpot still believes in the right conclusion for the wrong reasons (the weak form of the argument), and as such is still a crackpot.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-17T03:16:21.940Z · LW(p) · GW(p)

So steelmanning the cold fusion crackpot's argument may have brought you to firmly believe in cold fusion; that's fine, as long as you don't forget that the crackpot still believes in the right conclusion for the wrong reasons (the weak form of the argument), and as such is still a crackpot.

Of course, if you're finding that someone seems to repeatedly arrive at the right conclusion for "the wrong reasons" you should take this as evidence that said reasons are better than you thought.

Replies from: derefr, army1987
comment by derefr · 2014-01-17T17:58:53.777Z · LW(p) · GW(p)

In such cases, it more-often-than-not seems to me that the arguer has arrived at their conclusion through intuition, and is now attempting to work back to defensible arguments without those arguments being ones that would convince them, if they didn't first have the intuition.

comment by A1987dM (army1987) · 2014-01-18T07:15:48.884Z · LW(p) · GW(p)

Not necessarily. Survivorship bias.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-13T11:20:49.795Z · LW(p) · GW(p)

I endorse all of these problems as real. Too much steelmanning blinds you to the reality of the rather incompetent civilization and malfunctioning species we live in.

Replies from: gjm, MugaSofer
comment by gjm · 2014-01-13T11:30:15.859Z · LW(p) · GW(p)

Issue 1 above has nothing to do with losing sight of how incompetent our civilization and most of its individuals are. It's about almost the opposite problem: trying to be charitable to someone else by adjusting their position to be more like your own, at the risk of messing it up in the process.

[EDITED because I wrote "Issue 2" where I meant "Issue 1", and also to fix up a minor inconsistency arising from the fact that an earlier draft had had "Issues 1 and 2".]

comment by MugaSofer · 2014-01-17T01:50:50.465Z · LW(p) · GW(p)

Can you give an example of someone who had this problem?

It seems like almost the opposite problem is more common (aspiring rationalists "steelmanning" popular but wrong concepts, and then adopting the steelmanned version, in a sort of high-level refusal to update.)

(And, of course, it seems like the refusal to steelman at all, instead attacking a demonized strawman version of the Other Side, is a more common issue still - but that should go without saying.)

comment by MrLovingKindness · 2014-01-16T11:27:13.835Z · LW(p) · GW(p)

"The dole was there because it made the emperor more popular" and that is the same reason it exists today. Charitable social policies exist primarily to buy votes. Take Head Start as one of many, many examples of failed programs: http://nypost.com/2010/01/28/head-start-a-tragic-waste-of-money/. $166 billion wasted on a program that is demonstrably no help. It seems to be a dismal failure, but continues to exist, because it sounds good and gets votes. The reason why there are so many seemingly failed government programs, is because those programs that are still around are "successful" in the sense that they bought votes for their political proponents. It is like evolutionary biology. Programs that buy votes for politicians "survive", because politicians who support those programs get voted into office. Programs that don't buy votes for politicians don't survive, because there is no one in office to support them.

comment by Ishaan · 2014-01-13T13:21:06.644Z · LW(p) · GW(p)

Switch to Main?

comment by jsteinhardt · 2014-01-13T08:14:28.754Z · LW(p) · GW(p)

Great article, I hadn't heard this argument before but I think it's a good point. I'll also mention that I think the Ideological Turing Test does a good job of combating some of your worries here, although of course has its own dangers.

comment by Alexandros · 2014-01-13T11:34:34.099Z · LW(p) · GW(p)

I suspect there's a difference between steelmanning as in removing unnecessary assumptions or context, and steelmanning as in completely changing the logical foundation of the argument, just retaining the bottom line proposition, as our Roman seems to be doing. They are both valid, with the second one more vulnerable to the problems you mention, especially #1.

Also, either way, it would be a mistake to take the steelmanned argument and attribute it back to the source of the original argument. This seems to be the cause of your problems #2 and #4, maybe also #3 to the degree it's an actual problem, and isn't really necessary for steelmanning as I understand it. If you want to understand the person, or the culture, then you should mine their raw utterances for all the information you can get. But if you are looking to believe truth, then it helps to strengthen your opposition to the point where they can offer your current beliefs a proper fight, since you can't depend on the opposition doing that for you.

I think to some degree there is a difference between the principle of charity and steelmanning. I'd guess your problems lie more with the principle of charity, but maybe I shouldn't be too charitable in this case?

comment by hyporational · 2014-01-13T06:35:20.420Z · LW(p) · GW(p)

I think it would be useful to identify subcategories of what people mean by steelmanning and then see if we can approve some of those.

Replies from: gedymin
comment by gedymin · 2014-01-13T18:35:24.187Z · LW(p) · GW(p)

"Bad" steelmanning: a form of misunderstanding your opponent (as in the Roman example).

"Good" steelmanning: marshalling the best form of the argument against your position and defeating it. Also known as charitable interpretation.

I don't think steelmanning is particularly dangerous. It should be quite easy to recognize and avoid "bad" steelmanning, which is the whole source of the danger. If the Roman is truly a rationalist, he should be aware of his very limited knowledge of modern society and the dangers of substituting an argument. In his situation, steelmanning is a clear example of irrational behavior.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-01-13T20:00:45.309Z · LW(p) · GW(p)

I think it's also steelmanning if you don't end up defeating the improved argument.

comment by Stefan_Schubert · 2014-01-16T13:15:26.180Z · LW(p) · GW(p)

Very good, although I have heard similar arguments (though less elaborated) in conversation. The principle of charity (or steelmanning - I had never heard that term before) certainly is important, but sometimes it just goes too far. At one seminar I used to attend, the seminar leader would "re-interpret" the most confused and illogical arguments, saying "did you mean so and so?", to which the interpretee of course invariably and gratefully responded yes (though of course he had never in his life come up with such an interesting argument). The whole thing was a bit of a comedy... the problem was, though, that most people didn't see what was happening (including both the re-interpreter and the re-interpretee), so people got the false impression that the re-interpretee was much less confused than he really was.

One wonders if incidents like these contribute to the false impression that academic abilities are roughly equal, something that I discuss here: http://lesswrong.com/lw/jhy/division_of_cognitive_labour_in_accordance_with/

Another thing that's seldom pointed out is that we should use the principle of charity selectively. Some people have a strong track record of saying interesting things, so if there is something they've said that we think is confused, wicked or false, we should take another look at it and see if we can make more sense of it. Other people have a very poor track record, and with them it is rather the other way around: confused, wicked and/or false claims are the norm, and we should rather take another look at our interpretation if it tells us they've said something deep and interesting.

comment by Oligopsony · 2014-01-13T04:42:51.591Z · LW(p) · GW(p)

Taking arguments more seriously than you possibly should. I feel like I constantly see people in rationalist communities say stuff like "this argument by A sort of makes sense, you just need to frame it in objective, consequentialist terms like blah blah blah blah blah" and then follow with what looks to me like a completely original thought that I've never seen before.

Rather than - or at least in addition to - being a bug, this strikes me as one of charity's features. Most arguments are, indeed, neither original nor very good. Inasmuch as you can replace them with more original and/or coherent claims, then so much the better, I say.

Replies from: asr
comment by asr · 2014-01-13T06:16:59.400Z · LW(p) · GW(p)

Rather than - or at least in addition to - being a bug, this strikes me as one of charity's features. Most arguments are, indeed, neither original nor very good. Inasmuch as you can replace them with more original and/or coherent claims, then so much the better, I say.

Yes. But it's not doing anybody any favors if you pretend that a new coherent argument is the same as an old incoherent one. In my experience, the authors of the original argument are often hesitant to agree with the new rephrasing -- it's not written in the terms they use to understand the world.

Replies from: Calvin
comment by Calvin · 2014-01-13T07:25:39.339Z · LW(p) · GW(p)

It is also likely not written in the way they understand the world. If charity means assuming that the other person is saying something interesting and worth considering, this approach strikes me as its exact opposite:

Here, this is your bad, unoriginal argument, but I changed it into something better.

I mean, if you are better at arguing for the other side than your opposition, why do you even speak with them?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-15T22:32:14.930Z · LW(p) · GW(p)

I thought this was in Main when I promoted it. Any reason not to move it to Main?

Replies from: gothgirl420666
comment by gothgirl420666 · 2014-01-16T06:30:51.687Z · LW(p) · GW(p)

Uh, I don't really know how it works that well, this is only the second post I've written. I can't think of a reason not to.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-01-16T10:00:05.432Z · LW(p) · GW(p)

As someone who steelmans and interprets other people charitably a lot, I hadn't thought of the problems this could cause. I've managed to change my mind about a lot of things in the past few years; I wonder how much of this is because I didn't have any beliefs I held very strongly before, and don't hold many of my current beliefs all that strongly either.

comment by halcyon · 2014-02-18T19:04:55.327Z · LW(p) · GW(p)

It is common for original thoughts to arrive in the form of difficult-to-express intuitions. Early analytic philosophers struggled with the right way to express the intuition that "greater(7, 5) = true" and "lesser(5, 7) = true" represent the same fact. Now we know that the correct answer is to derive both as consequences of the same abstract model of the relevant entities (such as the natural numbers), whose existence is to an extent independent of the language used to describe it. The function of the model is to take the way a language is used and tie it to a specific semantic structure. It is possible to construct an isomorphic model using a different set of linguistic definitions - a different language.
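
To make this concrete (a minimal sketch in modern notation, nothing more): fix a single ordered structure (N, <) and define both predicates from it,

    \mathrm{greater}(a, b) :\iff b < a, \qquad \mathrm{lesser}(a, b) :\iff a < b.

Then greater(7, 5) and lesser(5, 7) both unfold to the single model-level fact 5 < 7: two different linguistic routes to one element of the same structure, which is the sense in which they "represent the same fact."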

When we are dealing with models more subjective than numbers, I think most people grope around trying to pinpoint which of a set of models - models that agree on certain points - best fits the meaning they want to express, given their background mental makeup. George Orwell's advice to let the meaning pick the words rather than the words the meaning might be related to this game of carefully picking a meaning first, and then searching for the words that best express it. So while I agree with some of the points raised in the main post, I would suggest that someone's not using the same words to make the same argument as you is not a great reason to think their argument is incompatible with yours in the places that matter to each of you.

It might be worthwhile to examine where two models we're comparing diverge in objective consequences that are important, and where divergence is only a matter of subjective language use. The question of prime significance in the Roman example, for me, is not whether charity is for the glory of the emperor or for the betterment of the poor, but whether the Roman favors undertaking charity in such a way that the appearance of imperial glory is optimized, or such that the effects of the social safety net are optimized.

If he says that imperial glory demands the optimization of the social safety net, then our differences might amount to a fight over words, at least over this bit of argumentation, from my side. He may attach importance and place emphasis elsewhere, but unless I abstract away some background information by defining criteria to identify local isomorphisms, no two arguments made by two people will ever count as the same. If he says that he wants to optimize imperial glory irrespective of its effects on the poor, then I have bigger things to worry about. It would be wrong for me to conflate my argument with his in that case, because the premise that the Roman is making the same argument as me is false in the areas that matter to me. In fact, owing to my misunderstanding, the stronger reasons I provide for "his argument" will in any case be irrelevant to the argument the Roman is actually making.

As long as you exercise careful discernment in determining whether differences are meaningful to the argument being made or mere quibbles over words, and in identifying where the differences come from, which parts of the argument are important to each of you, and why you disagree, I don't see where the problem is. Once all these provisos and disclaimers are out of the way, feel free to be inspired by the Roman's argument for charity to invent your own original Age of Enlightenment argument for charity.

The Roman might even respond with, "Yes, finally you're starting to see sense. Charity is a good thing like I maintained all along. You still don't appreciate the true importance of imperial glory, but you'll come around to my way of thinking eventually." Or if he wants to convince you that charity is a good thing within your system of ethics, if only because he thinks it's important for you to broadcast American glory for some reason, or if he just wanted to challenge your worldview as an impartial critic by asking why you don't support charity if helping others is so important to you, then your own argument might be just the thing he was looking for, irrespective of the alienness of his worldview.

Does that make sense?

comment by TruePath · 2014-01-29T15:32:52.664Z · LW(p) · GW(p)

It seems to me there are two separate issues.

1) Do you act like other people actually SAID the better argument (or interpretation of that argument) that you can put in their mouth?

2) Do you suggest the better alternative in debates and discussions of the idea before arguing against it?


Item 2 is certainly a good idea, while all the problems come from item 1. Indeed, I would suggest that both parties do best when everyone ACTS LIKE OTHER PEOPLE SAID WHATEVER YOU JUDGE TO BE MOST LIKELY THEY ACTUALLY INTENDED TO SAY. That way you don't fault them for misspeaking, nor do you pretend they argued for some straw-man position. However, everyone benefits the most when they learn why what they actually argued wasn't right (especially if you offer a patched version when available).

This way, people actually learn when they make erroneous arguments, but the best arguments on each side are still addressed.

comment by TruePath · 2014-01-29T15:26:32.188Z · LW(p) · GW(p)

Indeed, I think a huge reason for the lack of useful progress in philosophy is too much charity.

People charitably assume that if they don't fully understand something (and aren't themselves experts in the area), the person advancing the notion is likely contributing something of value that they just don't understand yet.

This is much of the reason for the continued existence of continental-philosophy drivel like claims that set theory entails morality, or the deeply confused erudite crap in Being and Time. Anyone who isn't actually an expert in this kind of philosophy feels it would be uncharitable (or at least seem uncharitable) to get up and denounce it as the pseudo-philosophical mumbo-jumbo it is. It may seem harmless, but the existence of this kind of stuff within the boundaries of philosophy means that less extreme but still wrong views are not weeded out either.

Charity is more directly harmful within analytic philosophy (logic/math-based, as opposed to continental nonsense), where people frequently make the naive assumption that various theories, e.g., the definite-description theory of reference and the baptismal naming theory of reference, are somehow either right or wrong, and argue for these positions just as they would argue for claims about fundamental physics. Yet more sophisticated philosophers have frequently realized that this entire naive-realist viewpoint is flawed. There isn't a real thing called meaning, just speech and writing, and thus these theories can only be taken as theoretical tools that help provide a useful framework for organizing patterns observed in speech acts; despite their incompatible assumptions, both can be useful as approximations.

Unfortunately, I have observed time and time again that in situations like this the insight isn't passed on, since it would be uncharitable to assume that the philosophers who publish in this manner aren't really just debating which approximation best helps organize patterns in speech and writing.

Similarly, charity stops people from being called out when they continue to wrestle in print with problems (the surprise quiz, etc.) that have a clear correct solution given decades ago, since it would be uncharitable to assume (as is true) that they simply don't have a good grip on the way mathematics can be applied, or fails to apply, to real-world situations.

comment by Gunnar_Zarncke · 2014-01-19T00:24:26.848Z · LW(p) · GW(p)

wrong thread

comment by tom_cr · 2014-01-16T20:09:16.191Z · LW(p) · GW(p)

Not sure if I properly understood the original post - apologies if I'm just restating points already made, but I see it like this.

Whatever it consists of, it's pretty much the definition of rationality that it increases expected utility. Assuming that the intermediate objective of a rationalist technique like steelmanning is to bring us closer to the truth, there are two trivial cases where steelmanning is not rational:

(1) When the truth has low utility. (If a lion starts chasing me, I will temporarily abandon my attempt to find periodicity in the digits of pi.)

(2) When the expected impact of the resulting update to my estimate of what is true is negligible.

No doubt, some skill is needed to estimate when such cases hold.
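
One rough way to formalize this (a sketch, assuming a standard decision-theoretic setup): let e be the evidence the steelmanning exercise would produce. Its expected value of information is

    \mathrm{VoI}(e) = \mathbb{E}_e\!\left[\max_a \mathbb{E}[U \mid a, e]\right] - \max_a \mathbb{E}[U \mid a],

and steelmanning is rational only when VoI(e) exceeds its cost in time and attention. Case (1) is the regime where U barely depends on the proposition at stake; case (2) is the regime where no realistic e changes the best action a.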

Replies from: blacktrance
comment by blacktrance · 2014-01-16T20:16:39.787Z · LW(p) · GW(p)

I think the point is that while steelmanning can get you closer to the truth about the conclusion of an argument, it can unintentionally get you further from the truth about what argument a person is making. If I say "X is true because of Y" and you steelman it into "X is true because of Z", it's important to remember that I believe "X is true because of Y" and not "X is true because of Z".

Replies from: tom_cr
comment by tom_cr · 2014-01-16T21:21:50.200Z · LW(p) · GW(p)

Thanks, I was half getting the point, but is this really important, as you say? If my goal is to gain value by assessing whether or not your proposition is true, why would this matter?

If the goal is to learn something about the person you are arguing with (maybe not as uncommon as I'm inclined to think?), then certainly, care must be taken. I suppose the procedure should be to form a hypothesis of the type "Y was stated in an inefficient attempt to express Z," where Z constitutes possible evidence for X, and to examine the plausibility of that hypothesis.
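
In Bayesian terms (my gloss on that procedure): the hypothesis is a posterior,

    P(\text{meant } Z \mid \text{said } Y) \propto P(\text{said } Y \mid \text{meant } Z)\, P(\text{meant } Z),

and the steelman Z earns the status of an interpretation of the speaker only when this posterior is actually high - not merely when Z is the strongest argument in the neighborhood of Y.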

comment by JonahS (JonahSinick) · 2014-01-16T09:14:09.762Z · LW(p) · GW(p)

This is a very nice post that highlights an important issue that I hadn't previously been fully conscious of.

comment by cousin_it · 2014-01-13T12:39:03.442Z · LW(p) · GW(p)

Thank you for writing that.

To me steelmanning sometimes feels like rationalizing, only instead of rationalizing your position, you rationalize your opponent's. It might still be useful, though.

comment by drethelin · 2014-01-13T04:51:37.581Z · LW(p) · GW(p)

Steelmanning is really just for winning arguments extra hard in the view of your rational audience/cheerleaders.

Replies from: Calvin
comment by Calvin · 2014-01-13T05:44:22.732Z · LW(p) · GW(p)

Personally, I think the principle of charity has more to do with having respect for the ideas and arguments of the other person. Let's say that someone says he doesn't eat shrimp because God forbids him from eating shrimp. If I am being charitable, I am going to slightly alter his argument by saying that the Bible explicitly forbids shrimp. That way we don't have to get sidetracked onto other topics.

You said that shrimp are wretched in the eyes of the Lord, and while I agree that the Old Testament explicitly forbids eating them... blah blah...

That way, we can actually have a meaningful and polite conversation. To illustrate a negative example, let's assume that he counters by saying that God explicitly told him not to eat shrimp today. There is a certain temptation to rationalize his position to fit my worldview, say:

You say that your moral intuition forbids you from eating shrimp...

The problem is that this second use is the opposite of charity or steelmanning. It is basically an internalized version of saying "this guy is far too stupid to make a good argument, so I am going to help him by bringing him up to speed". The principle of charity turns into a principle of hubris, and the conversation turns into a one-man show of intellectual masturbation on my side. I mean, look at me - I can argue the straw-fundamentalist Christian position better than he himself can!

To summarize, assuming that your interlocutor is a smart person capable of making good arguments without your help is a good principle to follow, especially as it is often true.

Replies from: Brillyant
comment by Brillyant · 2014-01-14T19:02:08.970Z · LW(p) · GW(p)

If I am being charitable, I am going to slightly alter his argument by saying that the Bible explicitly forbids shrimp.

I don't know that this is being charitable. In this case, to be charitable, I'd assume that someone who told me God forbade them to eat something was drawing from OT law, and not nitpick.

assuming that your interlocutor is a smart person capable of making good arguments without your help is a good principle to follow, especially as it is often true.

"Smart person" and "capable of making good arguments" are different things, and both are relative and open to many definitions.

As a former Fundamentalist Christian, I don't claim to be smart or very good at making arguments, but I'd say it is not a useful heuristic to enter a debate or discussion assuming that a sincere adherent of that belief system is capable of making a rational argument.