Tolerate Tolerance

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T07:34:12.259Z · LW · GW · Legacy · 87 comments


One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flaws in reasoning.  This doesn't strictly follow.  You could end up, say, rejecting your religion, just because you spotted more or deeper flaws in the reasoning, not because you were, by your nature, more annoyed at a flaw of fixed size.  But realistically speaking, a lot of us probably have our level of "annoyance at all these flaws we're spotting" set a bit higher than average.

That's why it's so important for us to tolerate others' tolerance if we want to get anything done together.

For me, the poster case of tolerance I need to tolerate is Ben Goertzel, who among other things runs an annual AI conference, and who has something nice to say about everyone.  Ben even complimented the ideas of M*nt*f*x, the most legendary of all AI crackpots.  (M*nt*f*x apparently started adding a link to Ben's compliment in his email signatures, presumably because it was the only compliment he'd ever gotten from a bona fide AI academic.)  (Please do not pronounce his True Name correctly or he will be summoned here.)

But I've come to understand that this is one of Ben's strengths—that he's nice to lots of people that others might ignore, including, say, me—and every now and then this pays off for him.

And if I subtract points off Ben's reputation for finding something nice to say about people and projects that I think are hopeless—even M*nt*f*x—then what I'm doing is insisting that Ben dislike everyone I dislike before I can work with him.

Is that a realistic standard?  Especially if different people are annoyed in different amounts by different things?

But it's hard to remember that when Ben is being nice to so many idiots.

Cooperation is unstable, in both game theory and evolutionary biology, without some kind of punishment for defection.  So it's one thing to subtract points off someone's reputation for mistakes they make themselves, directly.  But if you also look askance at someone for refusing to castigate a person or idea, then that is punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.
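The claim that punishment of non-punishers can lock in even a harmful norm can be made concrete with a toy best-response calculation (a sketch in the spirit of Boyd and Richerson's model of second-order punishment; all parameter values and the payoff structure here are illustrative assumptions, not anything from the post):

```python
# Toy payoff model: a population follows a norm that is purely costly
# (cost c), punishes violators (inflicting P, at cost k to the punisher),
# and also punishes anyone seen failing to punish (inflicting P2).
# eps is the chance a punishment opportunity arises in a given round.

def payoffs(c, k, P, P2, eps):
    conformist  = -c - eps * k   # pays norm cost; occasionally pays to punish
    violator    = -P             # saves c, but is punished by everyone
    nonpunisher = -c - eps * P2  # keeps the norm, but is punished for leniency
    return conformist, violator, nonpunisher

conf, viol, nonp = payoffs(c=1.0, k=0.5, P=5.0, P2=5.0, eps=0.1)
print(conf, viol, nonp)  # -1.05 -5.0 -1.5: conforming-and-punishing wins
```

Note that the norm contributes nothing but cost (c > 0), yet as long as P > c and P2 > k, deviating in either direction pays worse, so the harmful equilibrium holds; that is the sense in which punishing non-punishers can lock an equilibrium in place even when it hurts everyone involved.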

The danger of punishing nonpunishers is something I remind myself of, say, every time Robin Hanson points out a flaw in some academic trope and yet modestly confesses he could be wrong (and he's not wrong).  Or every time I see Michael Vassar still considering the potential of someone who I wrote off as hopeless within 30 seconds of being introduced to them.  I have to remind myself, "Tolerate tolerance!  Don't demand that your allies be equally extreme in their negative judgments of everything you dislike!"

By my nature, I do get annoyed when someone else seems to be giving too much credit.  I don't know if everyone's like that, but I suspect that at least some of my fellow aspiring rationalists are.  I wouldn't be surprised to find it a human universal; it does have an obvious evolutionary rationale—one which would make it a very unpleasant and dangerous adaptation.

I am not generally a fan of "tolerance".  I certainly don't believe in being "intolerant of intolerance", as some inconsistently hold.  But I shall go on trying to tolerate people who are more tolerant than I am, and judge them only for their own un-borrowed mistakes.

Oh, and it goes without saying that if the people of Group X are staring at you demandingly, waiting for you to hate the right enemies with the right intensity, and ready to castigate you if you fail to castigate loudly enough, you may be hanging around the wrong group.

Just don't demand that everyone you work with be equally intolerant of behavior like that.  Forgive your friends if some of them suggest that maybe Group X wasn't so awful after all...

87 comments

Comments sorted by top scores.

comment by MBlume · 2009-03-21T08:33:47.168Z · LW(p) · GW(p)

I'm going to make a controversial suggestion: one useful target of tolerance might be religion.

I think we pretty much all understand that the supernatural is an open and shut case. Because of this, religion is a useful example of people getting things screamingly, disastrously wrong. And so we tend to use that as a pointer to more subtle ways of being wrong, which we can learn to avoid. This is good.

However, when we speak too frequently, and with too much naked disdain, of religion, these habits begin to have unintended negative effects.

It would be useful to have resources on general rationality to which to point our theist friends, in order to raise their overall level of sanity to the point where religion can fall away on its own. This is not going to work if these resources are blasting religion right from the get-go. Our friends are going to feel attacked, quickly close their browsers, and probably not be too well-disposed towards us the next time we speak (this may not be an entirely hypothetical example).

I'm not talking about respect. That would be far too much to ask. If we were to speak of religion as though it could genuinely be true, we would be spectacular liars. Still, not bringing up the topic when it's not necessary, using another example if there happens to be one available, would, I think, significantly increase the potential audience for our writing.

Replies from: billswift, MarkusRamikin, Eliezer_Yudkowsky, ciphergoth
comment by billswift · 2009-03-21T14:47:44.874Z · LW(p) · GW(p)

The problem with tolerating religion is that, as Dawkins pointed out, it has received too much tolerance already. One reason religion is so widespread and obnoxious is that it has been so off limits to criticism for so long.

Replies from: sketerpot
comment by sketerpot · 2011-02-10T21:44:02.360Z · LW(p) · GW(p)

A good solution to this is to have some diversity of rhetoric. Some people can be blunt, others openly contemptuous, and others more friendly and overtly tolerant. There's room enough for all of these.

The less tolerant people destroy the special immunity to criticism that religion has long enjoyed, and get to be seen as the "extremists". Meanwhile they make the sweetness-and-light folks look more moderate by comparison, which is a useful thing. A lot of people reflexively reject extremism, which they define as simply the most extreme views that they're hearing expressed on a contentious issue. Make the extremists more extreme, and more moderate versions of their viewpoint become more socially acceptable.

Someone has to play the villains in this story.

comment by MarkusRamikin · 2011-06-27T09:46:33.578Z · LW(p) · GW(p)

I'm very much in favor of what you wrote there. I've been thinking of starting a separate thread about this for some time. Feel free to beat me to it; I won't be ready to do so very soon anyway. But here's a stab at what I'm thinking.

This is from the welcome thread:

A note for theists: you will find LW overtly atheist. We are happy to have you participating, but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false.

This is fair. I could, in principle, sit down and discuss rationality with a group that had such a disclaimer, only in favor of religion, assuming they got promoted to my attention for some unrelated good reason (say I've been linked to an article, read that one and two more, and found them all impressive). Not going to happen in practice, probably, but you get my drift.

Except that's not the vibe of what Less Wrong is actually like, IMO, that we're "happy to have" these people. Atheism strikes me as a belief that's necessary for acceptance to the tribe. This is not a Good Thing, for many reasons, the simplest of which is that atheism is not rationality. Reversed stupidity is not intelligence; people can be atheists for stupid reasons, too. So seeing that atheism seems to be necessary here in order to follow our arguments and see our point, people will be suspicious of those arguments and points. If you can't make your case about something that in principle isn't about religion, without using religion in the reasoning, it's probably not a good case.

What I'd advocate would be not using religion as examples of obvious inanity, in support of some other point, like in this, otherwise great, post:

http://lesswrong.com/lw/1j7/the_amanda_knox_test_how_an_hour_on_the_internet/

Now I'm not in favor of censoring religion out and pretending we're not 99% atheists here or whatever the figure is. If the topic of some article is tied to religion, then sure, anything goes - you'll need good arguments anyway or you won't have a post and people will call you on using applause lights instead of argumentation.

But, more subtly: if the topic is some bias or rationality tool, and religion is a good example of how that bias operates/tool fails to be applied, then go ahead and show that example after the bias/tool has already been convincingly established in more neutral terms. It's one of the reasons why we explain Bayes' theorem in terms of mammographies, not religion.
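For readers who haven't seen it, the mammography framing of Bayes' theorem mentioned above runs like this (a minimal sketch using the commonly quoted illustrative numbers, assumed here: 1% prevalence, 80% sensitivity, 9.6% false-positive rate):

```python
def posterior(prior, sensitivity, false_pos_rate):
    """P(disease | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_pos_rate * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(prior=0.01, sensitivity=0.80, false_pos_rate=0.096)
print(round(p, 3))  # 0.078: a positive test means only ~7.8% chance of cancer
```

The point of the neutral example is exactly the one made above: the machinery of the theorem can be established without any reader feeling their core beliefs are under attack.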

Feedback would be welcome.

Replies from: abramdemski, MugaSofer, Kratoklastes
comment by abramdemski · 2012-09-11T06:42:31.064Z · LW(p) · GW(p)

I think this is a good analysis.

However, in some areas, it is particularly difficult to keep things separate. The two cultures are simply very different; discussions have a way of finding the largest differences.

To be more specific: a recent conversation about rationalism came to the point of whether we could depend on the universe not to kill us. (To put it as it was in the conversation: there must be justice in the universe.)

comment by MugaSofer · 2013-01-11T12:43:00.846Z · LW(p) · GW(p)

Well, I think you're absolutely right except, perhaps, regarding the claim that "Atheism strikes me as a belief that's necessary for acceptance to the tribe." I'm not an atheist, and while when I mention this fact I get mobbed by people asking me to refute arguments I've heard a thousand times before, I've never found myself or seen others be rejected as members of the tribe for admitting to religious beliefs.

comment by Kratoklastes · 2013-01-10T06:13:44.765Z · LW(p) · GW(p)

I can think of another 3 reasons to explain Bayes theorem in terms of mammograms (or "mammographies" if you prefer) - boobs, torture and the mathematical ignorance of physicians.

Tolerance is over-rated (although it's a Masonic virtue so I'm supposed to like it): to me, the word has supercilious connotations - kind of "I'm going to permit you to persist in error, unmolested, coz I'm just that awesome".

I prefer acceptance: after you have harangued someone with everything that's wrong with their view of the problem, give up and accept that they're idiots.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T11:07:47.883Z · LW(p) · GW(p)

Tolerance is over-rated (although it's a Masonic virtue so I'm supposed to like it): to me, the word has supercilious connotations - kind of "I'm going to permit you to persist in error, unmolested, coz I'm just that awesome".

I prefer acceptance: after you have harangued someone with everything that's wrong with their view of the problem, give up and accept that they're idiots.

Firstly, that is the most blatant derailing of a thread I have ever seen.

Secondly, the main advantage of "tolerance" is that most people cannot, by definition, be in a better position to judge on certain issues than most other people - and indeed will almost certainly be wrong about at least some of their beliefs. Thus, it is irrational to impose your beliefs on others if you have no reason to think you are more rational than they are (see also Aumann's agreement theorem). Of course, it is also irrational to believe you are right in this situation, but at least it's not harming people.

The most extreme example of this principle would be someone programming in their beliefs regarding morality directly into a Seed AI. Since they are almost certainly wrong about something, the AI will then proceed to destroy the world and tile the universe with orgasmium or whatever.

Replies from: Kratoklastes
comment by Kratoklastes · 2013-01-10T21:01:18.861Z · LW(p) · GW(p)

What was the title of the post? Something about tolerance, if I'm not mistaken.

As to your 'secondly' point... I absolutely agree with the statement that "most people cannot, by definition, be in a better position to judge on certain issues than most other people" (emphasis mine - in fact I would extend that to say on most issues of more than minimal complexity).

Absolutely key point to bear in mind is that if you harangue someone about a problem when you're not in a better position to judge on that particular issue, you're being an asshat. That's why I tend to limit my haranguing to matters of (deep breath)...

  • Economics (in which I have a double-major First, with firsts in Public Finance, Macro, Micro, Quantitative Economic Policy, International Economics, Econometric Theory and Applied Econometrics) and
  • Econometrics (and the statistical theory underpinning it) for which I took straight Firsts at Masters;
  • Quantitative analysis of economic policy (and economic modelling generally), which I did for a living for half a decade and taught to undergraduates (3rd year and Honours).

I babble with muted authority on

  • expectations (having published on, and having been asked to advise my nation's Treasury on, modelling them in financial markets within macroeconometric models), and
  • the modelling paradigm in general (having worked for almost a decade at one of the world's premier economic modelling think tanks, and having dabbled in a [still-incomplete] PhD in stochastic simulation using a computable general-equilibrium model).

And yet I constantly find myself being told things about economics, utility maximisation, agency problems, and so forth, by autodidacts who think persentio ergo rectum is a research methodology.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-11T12:47:23.613Z · LW(p) · GW(p)

What was the title of the post? Something about tolerance, if I'm not mistaken.

So why not comment on the post, hmm?

Absolutely key point to bear in mind is that if you harangue someone about a problem when you're not in a better position to judge on that particular issue, you're being an asshat.

Oh, of course. If you genuinely have good reason to believe you know better than (group), beyond the evidence you have that you are right, then it is perfectly reasonable to act on it. But since most of the time you're probably not in that position, it seems to me that cultivating tolerance is a good idea.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T08:46:32.107Z · LW(p) · GW(p)

I'm going to make a controversial suggestion: one useful target of tolerance might be religion.

I'll try to tolerate your tolerance.

(I blog using any examples that come to hand, but when I canonicalize I try to remove explicit mentions of religion where possible. Bear in mind that intelligent religious people with Escher-minds will see the implications early on, though.)

Replies from: PhilGoetz, MBlume
comment by PhilGoetz · 2009-03-21T16:27:32.499Z · LW(p) · GW(p)

You canonicalize?

Where can we find your canon, and is it marked as canonical?

Replies from: MichaelGR
comment by MichaelGR · 2009-03-21T17:14:57.743Z · LW(p) · GW(p)

This might (partly) answer your question:

http://www.overcomingbias.com/2007/09/why-im-blooking.html

Replies from: PhilGoetz
comment by PhilGoetz · 2009-03-22T02:06:02.547Z · LW(p) · GW(p)

So he means a future canon? I can't go somewhere today and find it?

(I disapprove of anyone calling some of their own non-fiction works 'canonical', but without conviction, never having thought about it before.)

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-03-22T02:27:49.170Z · LW(p) · GW(p)

The term "canonical" has a somewhat different definition in the fields of math and computer science. Eliezer is probably using it influenced by this definition, in the sense of "converting his writing into canonical form", as opposed to an ad-hoc or temporary form. In my experience, the construction "canonicalize" refers almost exclusively to this sense of the word.

See the Jargon File entry for clarification.

comment by MBlume · 2009-03-21T09:23:10.668Z · LW(p) · GW(p)

Bear in mind that intelligent religious people with Escher-minds will see the implications early on, though.

Sadly true.

comment by Paul Crowley (ciphergoth) · 2009-03-21T09:10:46.498Z · LW(p) · GW(p)

I think you point up the problem with your own suggestion - we have to have examples of rationality failure to discuss, and if we choose an example on which we agree less (eg something to do with AGW) then we will end up discussing the example instead of what it is intended to illustrate. We keep coming back to religion not just because practically every failure of rationality there is has a religious example, but because it's something we agree on.

Replies from: MBlume, brianm
comment by MBlume · 2009-03-21T09:19:41.576Z · LW(p) · GW(p)

we have to have examples of rationality failure to discuss

It should be noted that if all goes according to plan, we won't have religion as a relevant example for too much longer. One day (I hope) we will need to teach rationality without being able to gesture out the window at a group of intelligent adults who think crackers turn into human flesh on the way down their gullets.

Why not plan ahead?

ETA: Now I think of it, crackers do, of course, turn into human flesh, it just happens a bit later.

Replies from: Eliezer_Yudkowsky, ciphergoth
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T09:42:54.020Z · LW(p) · GW(p)

It's not so much that I'm trying to hide my atheism, or that I worry about offending theists - then I wouldn't speak frankly online. The smart ones are going to notice, if you talk about fake explanations, that this applies to God; and they're going to know that you know it, and that you're an atheist. Admittedly, they may be much less personally offended if you never spell out the application - not sure why, but that probably is how it works.

And I don't plan far enough ahead for a day when religion is dead, because most of my utility-leverage comes before then.

But rationality is itself, not atheism or a-anything; and therefore, for aesthetic reasons, when I canonicalize (compile books or similar long works), I plan to try much harder to present what rationality is, and not let it be a reaction to or a refutation of anything.

Writing that way takes more effort, though.

Replies from: anonym
comment by anonym · 2009-03-22T00:11:33.710Z · LW(p) · GW(p)

they may be much less personally offended if you never spell out the application - not sure why, but that probably is how it works.

Once you connect the dots and make the application explicit, they feel honor-bound to take offense and to defend their theism, regardless of whether they personally want to take offense or not. In their mind, making the application explicit shifts the discussion from being about ideas to being about their core beliefs and thus about their person.

Replies from: JohnH
comment by JohnH · 2011-05-18T14:29:49.781Z · LW(p) · GW(p)

For me, this appears to be correct.

comment by Paul Crowley (ciphergoth) · 2009-03-21T09:31:26.321Z · LW(p) · GW(p)

If all goes according to plan, by then we will be able to bring up more controversial examples without debate descending into nonsense. Let's cross that bridge when we come to it.

comment by brianm · 2009-03-21T12:07:48.115Z · LW(p) · GW(p)

I think there are other examples with just as much agreement on their wrongness, many of which have a much lower degree of investment even for their believers. Astrology for instance has many believers, but they tend to be fairly weak beliefs, and don't produce such a defensive reaction when criticized. Lots of other superstitions also exist, so sadly I don't think we'll run out of examples any time soon.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-21T13:31:40.640Z · LW(p) · GW(p)

But because people aren't so invested in it, they mostly won't work so hard to rationalise it; mostly people who are really trying to be rational will simply drop it, and you're left with a fairly flabby opposition. Whereas lots of smart people who really wanted to be clear-thinking have fought to hang onto religion, and built huge castles of error to defend it.

comment by James_Miller · 2009-03-21T15:32:32.217Z · LW(p) · GW(p)

"of someone who I wrote off as hopeless within 30 seconds of being introduced to them."

Few college professors would do this, because many students are unimpressive when you first talk with them but then do brilliantly on exams and papers.

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-19T22:44:39.929Z · LW(p) · GW(p)

I've known people to be hopeless for months, then suddenly, for no observable reason, begin acting brilliantly; another reminder that small data sets aren't sufficient to predict a system as complex as human behaviour.

comment by mathemajician · 2009-03-21T12:08:49.866Z · LW(p) · GW(p)

I usually have something nice to say about most things, even the ideas of some pretty crazy people. Perhaps less so online, but more in person. In my case the reason is not tolerance, but rather a habit that I have when I analyse things: when I see something I really like I ask myself, "Ok, but what's wrong with this?" I mentally try to take an opposing position. Many self described "rationalists" do this, habitually. The more difficult one is the reverse: when I see something I really don't like, but where the person (or better, a whole group) is clearly serious about it and has spent some time on it, I force myself to again flip sides and try to argue for their ideas. Over the years I suspect I've learnt more from the latter than the former. Externally, I might just sound like I'm being very tolerant.

comment by Matt_Simpson · 2009-03-21T07:54:38.461Z · LW(p) · GW(p)

Note that tolerance is part of a general conversion strategy. Nitpicking everyone who disagrees with you in the slightest isn't likely to make friends, but it is likely to make your opponents think you are an arrogant jerk. Sometimes you just have to keep it to yourself.

comment by PhilGoetz · 2009-03-22T00:55:35.889Z · LW(p) · GW(p)

Punishing for non-punishment is an essential dynamic for preserving some social hierarchies, at least in schoolyards and in Nazi Germany.

Abby was just telling me this afternoon that psychologists today believe that when kids are picked on in school, it's their own fault - either they are too shy, or they are bullies. (There is a belief that bullies are picked on in school, something I never saw evidence of in my school days except when it was me doing the picking-on.)

My theory is that the purpose of picking on kids in school is not to have effects on the kid picked on, but to warn everyone else that they will be picked on if they fail to conform. A kid is thus likely to be picked on if they don't respond to social pressures. Thus the advice that every parent gives their children, "Just ignore them if they pick on you," is the worst possible advice. Fight back, or conform; failing to respond requires them to make an example of you and does not impose any cost on them for doing so.

Wolves have a very strict social hierarchy, but I've never noticed evidence of punishment for a failure to punish. So this behavior isn't necessary.

Replies from: CronoDAS, FiftyTwo
comment by CronoDAS · 2009-03-22T20:29:40.826Z · LW(p) · GW(p)

The theory is that bullies are often in the middle of a bullying hierarchy. For example, when I was in high school, one of my friends was harassed by seniors when he was a freshman. When he became a senior himself, he, in turn, harassed freshmen, saying that he was going to give as good as he got.

From what I've read, in high school at least, bullies tend to be those in the middle of the social hierarchy; those at the top (the most popular) are secure in their position and can afford to be nice, while those who are at risk for backsliding work hard at making sure there is at least one person who is a more tempting victim than they are.

comment by FiftyTwo · 2011-12-19T22:46:51.652Z · LW(p) · GW(p)

You seem to be assuming there is a higher purpose for bullying, which looks like a mistake along the same lines as the parable of group selection.

Possibly bullies bully because they enjoy it and aren't stopped from doing so. What additional explanation is needed?

Replies from: Osuniev
comment by Osuniev · 2013-03-15T16:23:22.477Z · LW(p) · GW(p)

Well, as a kid I got bullied at school, quite a bit, and I DO remember bullying others a handful of times.

I remember being conscious about it and feeling like shit for it, but at the same time being so relieved because as long as someone else was being bullied, I wasn't.

I certainly did not enjoy it, mainly because it contradicted my vision of myself as a courageous victim.

comment by Annoyance · 2009-03-21T15:36:55.616Z · LW(p) · GW(p)

We can and should reach whatever conclusions about people we wish. But we should be very slow to fail to observe and accept new evidence about them.

Excluding people from discussion may screen out their nonsense (or at least the things you thought were nonsense), but it also prevents you from discovering that you made a hasty decision. Once you've started ignoring someone, you can no longer observe what they say - and possibly find that they're smarter than you thought they were.

It's worth acquiring new data even from those you've discarded, at least once in a while.

comment by [deleted] · 2010-01-12T02:14:19.970Z · LW(p) · GW(p)

M*nt*f*x! K*b*! Y*g-S*th*th!

Replies from: Blueberry
comment by Blueberry · 2010-01-12T04:25:12.145Z · LW(p) · GW(p)

Obviously the other two need to be bowdlerized, but what's wrong with attracting Kibo? I think he'd fit in well here.

Replies from: BlackHumor
comment by BlackHumor · 2011-07-02T12:54:06.346Z · LW(p) · GW(p)

H*st*r! H*st*r! H*st*r!

comment by patrissimo · 2009-03-21T18:53:11.435Z · LW(p) · GW(p)

I think there is an important distinction between cheap and expensive tolerance. If I am sitting on a plane and don't have a good book and am talking to my seatmate, and they seem stupid and irrational, being tolerant is likely to lead to an enjoyable conversation. I may even learn something.

But if I am deciding what authors to read, whose arguments to think about more seriously, etc., then it seems irrational to not judge and prioritize with my limited time.

And this relates to indirect tolerance - someone who doesn't judge and prioritize good arguments but instead listens to and talks to everyone is someone whose links and recommendations are less valuable because they have done less filtering. On the other hand, they are more likely to convert people, and when they do find good ideas they are more likely to be good ideas I wouldn't otherwise encounter. So it's tricky. Seems like the ideal is to read people who are intolerant but read tolerant people, so they have the broadest base of ideas, but still select the best.

Replies from: Cyan
comment by Cyan · 2009-03-21T19:19:49.738Z · LW(p) · GW(p)

The advice isn't about your attitude towards your seatmate's stupidity and irrationality. It's directed at your rationalist buddy sitting on your other side -- she's being advised not to be annoyed at you if you choose to be tolerant.

comment by Mike Bishop (MichaelBishop) · 2009-03-21T16:06:38.868Z · LW(p) · GW(p)

Eliezer is correct, but this post should be followed up by one about the many places where failing to punish non-punishers, in other words, tolerating free-riders, has negative consequences.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-21T17:33:55.708Z · LW(p) · GW(p)

If you transgress, I might have a problem with you. If you actively shield a transgressor, I might have a problem with you. If you just don't punish a transgressor, the circumstances where I might have a problem are pretty rare I think!

comment by orthonormal · 2020-07-13T03:04:36.342Z · LW(p) · GW(p)

The application of this principle to [outrage over the comments and commenters which a blogger declines to delete/ban] is left as an exercise for the reader.

comment by PhilGoetz · 2009-03-21T16:41:06.418Z · LW(p) · GW(p)

My attitude toward Ben's tolerance depends on the context. When he does it as a person, I appreciate it. When he does it as chair of the AGI conference, I don't. There were some very good presentations this year, but there were also some very bad time-wasters.

But probably I should blame the reviewers instead.

comment by steven0461 · 2009-03-21T12:52:06.026Z · LW(p) · GW(p)

Damn M-nt-f-x! Damn every one that won't damn M-nt-f-x!! Damn every one that won't put lights in his windows and sit up all night damning M-nt-f-x!!!

Replies from: pjeby
comment by pjeby · 2009-03-21T15:27:40.456Z · LW(p) · GW(p)

Damn M-nt-f-x! Damn every one that won't damn M-nt-f-x!! Damn every one that won't put lights in his windows and sit up all night damning M-nt-f-x!!!

Since I saw this comment before the post it goes with, I thought it was some sort of rant about people not using Emacs for their comments. ;-)

comment by mingyuan · 2020-10-19T23:23:23.603Z · LW(p) · GW(p)

Prescient.

comment by Alicorn · 2009-03-21T15:24:41.922Z · LW(p) · GW(p)

Great post. I think I'd already sort of started trying to do this, although I couldn't have put it as well. Now what I want to know is how much to tolerate people who are less tolerant than me. I'm not quite sure what to do when I meet someone who is infuriated by patterns of thinking that I consider only trivially erroneous or understandable under certain circumstances.

Replies from: Eliezer_Yudkowsky, PhilGoetz
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T19:30:38.343Z · LW(p) · GW(p)

I'd say, tolerate them! Though I speak as one with a certain conflict of interest, being on the other side of that judgment. But it seems like the logical mirror image and hence still the thing to do. Judge people only on non-borrowed trouble?

comment by PhilGoetz · 2009-03-22T01:59:33.375Z · LW(p) · GW(p)

What about tolerating people who don't tolerate you? I think this calls for a tit-for-tat strategy.

Replies from: Osuniev
comment by Osuniev · 2013-03-15T16:18:00.287Z · LW(p) · GW(p)

Well, tolerating them has a good chance of signalling to neutral observers that you are not a pompous jerk, making them more likely to listen to your ideas favorably.

comment by Celer · 2011-05-18T12:23:23.460Z · LW(p) · GW(p)

I am going to disagree with the idea that being "intolerant of intolerance" is inherently inconsistent. The problem is with the word tolerance, which contains multiple meanings. I think that it is morally wrong to discriminate against people for things that they can't change. Believing that someone of a different race can't possibly be intelligent is a moral wrong. Furthermore, it is so indicative of stupidity that I do not wish to associate with such a person, if they are in a culture where theirs is the minority view. To put it another way, to preserve my time and energy, I am going to avoid dealing with people who have some traits, and one of these traits is racism. This does technically mean that I am "intolerant of intolerance." However, given that you are Eliezer Yudkowsky and I am a random HS student, it is likely that you are correct. Could you explain to me why you believe that I am wrong, or how I misinterpreted you?

Replies from: Desrtopa, JohnH
comment by Desrtopa · 2011-07-27T15:40:16.392Z · LW(p) · GW(p)

I think that it is morally wrong to discriminate against people for things that they can't change. Believing that someone of a different race can't possibly be intelligent is a moral wrong.

The second statement here doesn't follow from the first. If intelligence is something that a person can't change, then it follows that it's morally wrong to discriminate against someone for being unintelligent. It doesn't follow that it is morally wrong to believe that one factor a person cannot change (their race) can determine other factors that they cannot change, such as their intelligence.

Whether there are actually average inherent genetic differences in intelligence between races is still a matter of some debate (although the issue is so politically charged that it's hard to get any effective unbiased research done, and attempting to do so can be dangerous for one's reputation.) It's certainly unlikely that any race exists that has negligible odds of any particular individual reaching an arbitrarily defined cutoff point for "intelligent" compared to other races, but this is an empirical matter which is to be determined on the basis of evidence, and moral considerations have no bearing on whether or not it's true.

comment by JohnH · 2011-05-18T14:20:27.853Z · LW(p) · GW(p)

If one is intolerant of intolerance, then one is just as intolerant as those one claims are intolerant, in which case one should not tolerate oneself.

Not wishing to associate with someone is not indicative of being intolerant of them, though assuming they are not intelligent may be.

Replies from: Celer
comment by Celer · 2011-05-18T15:07:54.409Z · LW(p) · GW(p)

To make sure we are not arguing over words, Googling "tolerate" returns two definitions:

1. Allow the existence, occurrence, or practice of (something that one does not necessarily like or agree with) without interference.
2. Accept or endure (someone or something unpleasant or disliked) with forbearance.

I am using the second, not the first. I don't see the point of dealing with someone who is explicitly intolerant of a group of people over something that was no conscious choice of their own, and who should have examined their own beliefs, unless I have a very significant reason to do so. This is because they are less likely to have interesting thoughts or experiences, and furthermore I would not feel comfortable dealing with them in many social settings.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-05-18T16:32:25.118Z · LW(p) · GW(p)

Let's stop talking about race since it may or may not be relevant and deal directly with IQ.

I don't see the point of dealing with someone who is explicitly intolerant of a group of people based on no conscious choice of their own,

Someone's IQ is certainly not based on any conscious choice of their own. So your argument seems to imply that we should not be intolerant of people with low IQs.

This is because they are less likely to have interesting thoughts or experiences, and furthermore I would not feel comfortable dealing with them in many social settings.

On the other hand this argument works even better as an argument for avoiding interacting with, i.e., being intolerant of, people with low IQs.

So which is it, should we be intolerant of people with low IQs, or should we be intolerant of people who are intolerant of people with low IQs? Your argument seems to imply both.

Replies from: Celer
comment by Celer · 2011-05-24T23:14:30.577Z · LW(p) · GW(p)

I have not abandoned this. I am simply trying to rework my moral system such that it allows me to both choose whom I want to spend time with in a useful fashion while not being hypocritical in the process. I will get back to you with my results.

comment by Alicorn · 2009-03-22T04:00:23.831Z · LW(p) · GW(p)

There's a question of whether there's an important difference in kind between sorts of tolerance. Here's an analogy which might or might not work: assume that, in general, a driver of a vehicle drives as fast as they think it is safe for cars to be driven in general. Only impatience would cause them to not tolerate people who drive slower than they; a safety concern could cause them to be upset by people who drive faster, since they consider that speed unsafe. Say you have two people who each drive at 50 mph. One of them tolerates only slower drivers but wants to ticket faster drivers and the other tolerates all drivers. The first driver could have a legitimate issue with the second one. They don't disagree about how fast it's safe to drive - they disagree about whether it is appropriate to expect that safety standard of others. Some kinds of statements are dangerous - perhaps not to the degree or in the way that cars are, but dangerous, like slanderous statements or ones that incite to riot or ones that are lies or ones that betray confidences or ones that mislead the gullible or ones that involve occupied inflammable theatrical venues. Refusing to castigate people who express those kinds of statements might - I'm not confident of this - itself be worthy of censure. Or perhaps I'm missing the point and those aren't the kinds of statements the tolerators of which should be tolerated?

comment by Furcas · 2009-03-21T20:43:52.926Z · LW(p) · GW(p)

I don't get it. You want us to work with those who refuse to 'punish' foolishness but who aren't fools themselves, presumably in order to fight against foolishness. All right, I can see the sense in that.

Why does it follow that we should censor ourselves when dealing with these non-foolish foolishness enablers? Why can't we work with them and show our disapproval of their enabling?

Replies from: Nominull
comment by Nominull · 2009-03-21T21:59:31.637Z · LW(p) · GW(p)

Because in human society, voicing your disapproval is a form of punishment, which people will react badly to, and will make it hard to work with them.

Replies from: Furcas
comment by Furcas · 2009-03-21T22:11:39.826Z · LW(p) · GW(p)

Not everyone reacts badly to disapproval and criticism, only fools. The people Eliezer wants us to work with are assumed to not be fools.

If some of them refuse to work with us unless we pretend to respect everything they think and do, it only shows that they weren't really interested in promoting rationality in the first place.

Replies from: Nominull
comment by Nominull · 2009-03-21T23:13:25.112Z · LW(p) · GW(p)

Who's the real fool, the one who reacts badly to disapproval and criticism, or the one who thinks he doesn't?

We're all human here, Furcas. Being able to listen to someone criticizing your actions without getting angry is a goal worth achieving, in my opinion, but I haven't achieved it. And maybe I'm not worth bothering with, but I imagine that many of the people who are worth bothering with are the same way. I would bet that even our Glorious Leader Eliezer is that way.

Replies from: Eliezer_Yudkowsky, Furcas
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T23:59:26.437Z · LW(p) · GW(p)

Depends on what I believe about the person criticizing me. On average you'd probably win the bet, but there are exceptions.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-03-22T04:09:52.028Z · LW(p) · GW(p)

Are there exceptions in terms of your first root reaction, or just in terms of your eventual net reaction? I work to consciously reward people for accurately criticizing me, and to consciously reduce the inaccurate negative feelings I form about those who criticize me. But there are major recent gaps in my performance here, and while I'm improving by increasing my awareness of the distortions I generate and of the "outside view" probabilities involved, I don't expect to soon remove the root response.

If anyone has good strategies for dealing well with criticism, I'm interested.

comment by Furcas · 2009-03-21T23:33:21.324Z · LW(p) · GW(p)

Depending on what it is that the other person disapproves of about me, I might feel annoyed or offended. But so what if I do? If I'm cooperating with a person to achieve something that's important to me, learning that he thinks one of my beliefs is stupid (for example) isn't going to change anything about my resolve to cooperate. Feeling otherwise would be foolish.

As it turns out, I have been called 'dogmatic' and 'fundamentalist' and various other charming adjectives because of my belief that an essential part of fostering the growth of a rational society consists of creating a social climate in which irrationality is seen in a bad light, and the best way to do that is conversational intolerance of unreason. I can't say I enjoyed being called dogmatic, but it hasn't affected my desire to cooperate with those I see as mostly rational enablers of foolishness. If I can get over my hurt feelings, why can't they?

Replies from: Osuniev
comment by Osuniev · 2013-03-15T16:30:28.878Z · LW(p) · GW(p)

Maybe because you are both hurting others and getting hurt, whereas these "enablers of foolishness" are getting hurt while they don't (consciously) hurt others, and would therefore probably consider it unfair to be attacked.

comment by Paul Crowley (ciphergoth) · 2009-03-21T07:56:02.833Z · LW(p) · GW(p)

a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.

Could I get a reference for this? I wanted to refer someone else to it, and my Google searches failed me.

Replies from: Eliezer_Yudkowsky
comment by byrnema · 2009-03-21T23:12:57.198Z · LW(p) · GW(p)

In a situation where someone who seems to be very like-minded is more tolerant to another person X than I would be, I would be very interested in why, if I don't already know. Perhaps my friend has reasons that I would agree with, if I only knew them. (Some pragmatic reasons come to mind.)

If I still disagree with my friend, even after knowing his reasons, I would then express the disagreement and see if I couldn't convert my friend on the basis of our common views. If I fail to convert him, it is because our views differ in some way. Is the view we disagree on minor or major? I would base my annoyance/intolerance/punishment on the view we disagree on, not his tolerance of person X.

There is the possibility that it is not possible or practical to find out my friend's reasons for the inappropriate tolerance. In that case, if my friend really does seem reasonable in all other ways, so that I am strongly convinced he really is like-minded, then I would have to give him the benefit of the doubt. There is the possibility that if I knew his reasons, I would agree.

(Later edit: This was my first comment on Less Wrong!)

Replies from: Blueberry
comment by Blueberry · 2010-01-12T03:24:20.277Z · LW(p) · GW(p)

If I fail to convert him, it is because our views differ in some way.

Not necessarily. It could just be a personality difference, and you don't actually disagree on any beliefs.

Replies from: pdf23ds
comment by pdf23ds · 2010-01-12T08:42:06.949Z · LW(p) · GW(p)

The way "views" is usually used, it includes "values".

comment by PhilGoetz · 2009-03-21T16:35:03.425Z · LW(p) · GW(p)

... punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.

Have you done the math? This would have important implications for the development of intolerant societies - it was clearly crucial to Nazism - but I've never heard of any studies on the subject. People are still working on first-order punishment.

A good reference on that: Simon Gächter, Elke Renner, and Martin Sefton, "The Long-Run Benefits of Punishment", Science, 5 December 2008, 322:1510. DOI: 10.1126/science.1164744 (in Brevia).

Abstract: Experiments have shown that punishment enhances socially beneficial cooperation but that the costs of punishment outweigh the gains from cooperation. This challenges evolutionary models of altruistic cooperation and punishment, which predict that punishment will be beneficial. We compared 10- and 50-period cooperation experiments. With the longer time horizon, punishment is unambiguously beneficial.
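The paper's result comes from human experiments, but the amortization logic it points at can be sketched with a toy model (all payoff numbers below are invented for illustration, not taken from the paper): punishment imposes a fixed up-front cost, while the cooperation it sustains pays off every period, so it only comes out ahead over a long enough horizon.

```python
# Toy illustration of why punishment pays off only over long horizons.
# All payoff numbers are invented for illustration; they are not from
# the Gächter, Renner & Sefton experiments.

def period_payoff(t, punishment):
    if punishment:
        # Early periods are costly: punishing defectors eats into payoffs.
        # Once cooperation is established, everyone earns more per period.
        return 8.0 if t < 10 else 14.0
    else:
        # Without punishment, cooperation gradually decays,
        # bottoming out at a low baseline.
        return max(3.0, 12.0 - 0.5 * t)

def total_payoff(horizon, punishment):
    return sum(period_payoff(t, punishment) for t in range(horizon))

# Over 10 periods, the punishment institution has not yet paid for itself;
# over 50 periods, it clearly has.
short_run = (total_payoff(10, True), total_payoff(10, False))
long_run = (total_payoff(50, True), total_payoff(50, False))
```

Under these assumed numbers the punishment condition loses at a 10-period horizon and wins at a 50-period one, mirroring the qualitative finding in the abstract.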

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-21T17:35:36.493Z · LW(p) · GW(p)

See my comment above giving the references: the math shows that punishing non-punishers is an evolutionary stable strategy that can enforce cooperation where simpler strategies fail.

comment by Annoyance · 2009-03-21T15:05:26.556Z · LW(p) · GW(p)

Whether someone agrees with us isn't as important as why.

If someone has sufficiently low standards of quality that they fail to disapprove of even the worst garbage, then they're of little use in distinguishing value from nonsense.

As a great deal of nonsense is not only passively but actively harmful (not just failing to be correct, but inclining people towards error), it is vitally important to tell the two apart. People who can't or won't do this are not only not-helpful, but make our tasks harder.

Strive to have good standards and apply them. Don't worry about being tolerant or intolerant -- the right mix of behaviors will naturally arise from the application of correct standards.

comment by [deleted] · 2009-03-21T09:38:25.925Z · LW(p) · GW(p)

The communities that I've been a part of which I liked the best, which seemed to have the most interesting people, were also the nastiest and least tolerant.

If you can't call a retard a retard, you end up with a bunch of retards, and then the other people leave. Whenever someone nice eventually came to power, this is invariably what happened.

Replies from: AnnaSalamon, Vladimir_Golovin, ciphergoth
comment by AnnaSalamon · 2009-03-21T11:06:30.299Z · LW(p) · GW(p)

Eliezer isn't suggesting that you refrain from calling fools "fools". He's suggesting you tolerate people who are otherwise non-foolish except that they don't call fools "fools".

Tolerating fools might not be a good idea. Tolerating non-fools who themselves tolerate fools is, AFAICT, a glaringly good idea. If you create an atmosphere where everyone has to hate the same people... we run into some of the failure modes of objectivism.

Replies from: Eliezer_Yudkowsky, Annoyance
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T19:26:50.046Z · LW(p) · GW(p)

...I think this post might be over the meta threshold where some people lack the reflective gear and simply can't process the intended meaning! Really, I went to some lengths to spell it out here!

comment by Annoyance · 2009-03-21T15:08:43.007Z · LW(p) · GW(p)

"If you create an atmosphere where everyone has to hate the same people... "

Again: it's why those people have to be hated that's important.

If standards reflect real properties of reality, people who're seeking the truth will tend to generate similar standards. If people have similar standards, they'll tend to reach the same sorts of judgments.

What matters is that our judgments arise from accurate standards, not from merely imitating others. Error leads to condition X, but it doesn't follow that ~X is therefore correct.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-21T17:38:01.946Z · LW(p) · GW(p)

If you never feel the need to say "Damn X for not damning Y" then good for you, but I think that need is at least sometimes felt, and it leads to judgements not being independent in the way you describe.

Replies from: Annoyance
comment by Annoyance · 2009-03-21T21:34:11.078Z · LW(p) · GW(p)

Only if the judgers care what others think of them.

There are some very real advantages to being a sociopath if you want to be a rationalist... and some very real advantages to societies that have a sufficiently great concentration of sociopaths.

comment by Vladimir_Golovin · 2009-03-21T10:48:33.120Z · LW(p) · GW(p)

I steer clear of such communities, unless I need to extract some specific bit of information out of them (and I leave immediately when I'm done). Perhaps that's because in my upbringing calling someone a fool (let alone a retard) was considered extremely rude.

If you can't call a retard a retard

Do you know the person you're calling a retard well enough, or are you judging by a couple of their posts? Would you say "you are a retard" to their face in real life? When you call someone a retard, what do you imply, "your mental abilities in general are very poor" or "you are incompetent at activity X which we discuss here"?

comment by Paul Crowley (ciphergoth) · 2009-03-21T09:51:26.469Z · LW(p) · GW(p)

In my experience, actually ejecting disruptive people from an online community can have a powerful positive effect, but replying to them with insults only encourages them and achieves nothing.

comment by David Mears (david-mears) · 2018-08-14T13:34:54.070Z · LW(p) · GW(p)

In Hanson and Simler's 'The Elephant in the Brain', they mention Axelrod's (1986) "meta-norm" modelling which shows that cooperation is stable only when non-punishers are punished.
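The meta-norm incentive can be sketched in a few lines (a simplification loosely inspired by Axelrod's 1986 norms game; the observation probability and cost values are assumptions for illustration, not his exact model): punishing a defector costs the punisher something, so without meta-norms a non-punisher does strictly better and vengefulness erodes. A meta-norm makes the failure to punish itself punishable, which flips the incentive once shirking is observed often enough.

```python
# Sketch of the meta-norm incentive calculation, simplified from
# Axelrod (1986). The numeric defaults are illustrative assumptions.

def punishing_advantage(p_observed, enforcement_cost=2.0, meta_penalty=9.0):
    """Expected payoff gain from punishing an observed defector rather
    than looking away, for an agent in a population of meta-norm
    enforcers.

    Looking away saves the enforcement cost, but with probability
    p_observed the shirking is itself seen and meta-punished.
    """
    return p_observed * meta_penalty - enforcement_cost

# With high observability, punishing is the better move and the norm is
# self-enforcing; with low observability, shirking pays, punishment
# lapses, and cooperation unravels.
```

For example, `punishing_advantage(0.5)` is positive under these assumed costs, while `punishing_advantage(0.1)` is negative, which is the sense in which cooperation is stable only when non-punishers are punished reliably enough.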

comment by abramdemski · 2012-09-11T06:52:01.173Z · LW(p) · GW(p)

Just a small point-- tolerating tolerance seems to me to be a less powerful tool than the principle of charity, of which plenty has been said on this site. For me, the image:

One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flaws in reasoning.

doesn't even start to feel right for me from a 'should' perspective (though it is quite familiar from an 'is' perspective). My image of a rationalist is someone exceptionally concerned with making sense of what others are saying, because arguments are not battles.

comment by amitpamin · 2012-06-18T22:35:14.855Z · LW(p) · GW(p)

I have a massively huge problem with this. Every time a non-fiction author or scientist I respect gives credit to a non-rationalist, I cringe inside. I have to will myself to remember that just because they have a lower rationality threshold, that does not automatically discredit their work.

comment by pjeby · 2009-03-21T15:24:46.534Z · LW(p) · GW(p)

IAWY. However, regarding the practice of reminding yourself every time in order to prevent the behavior, why expend two units of mental force, opposing each other, when you could just remove both forces? It'd be more efficient just to get rid of whatever underlying belief or judgment makes you feel the need to be intolerant of the tolerant... and you'd suffer less.

comment by steven0461 · 2009-03-21T14:55:20.528Z · LW(p) · GW(p)

I'm programmed to get angry when there's misbehavior and I don't know that I can just shut this off when the misbehavior consists of underpunishing. Maybe I should try channeling the anger toward the nonpunishee rather than the nonpunisher?

comment by [deleted] · 2015-08-25T12:10:31.871Z · LW(p) · GW(p)

This post has motivated me to put my foot down around one friend who is so bitchy about others.