That’s helpful, thanks!
I know nothing about this topic. In particular, I haven’t heard of Michael Bailey and Kevin Hsu before.
But I do know that there are terrible arguments on both sides of every issue—even issues where there is also healthy discourse and very good arguments.
Are Michael Bailey and Kevin Hsu the least-bad arguers for their position? Or are they especially well-known / famous / widely-respected figureheads of that side of the debate? If so, you should say that somewhere.
Otherwise it sounds (to non-knowledgeable ears like mine) like you were just searching for an idiot on that side of the aisle, and hey you found one, and now you can call them out for their idiocy and spread guilt-by-association to everyone else who has wound up reaching similar conclusions.
Hmm, my two-sentence summary attempt for this post would be: "In recent drama-related posts, the comment section discussion seems very soldier-mindset instead of scout-mindset, including things like up- and down-voting comments based on which 'team' they support rather than soundness of reasoning, and not conceding / correcting errors when pointed out, etc. This is a failure of the LW community and we should brainstorm how to fix it."
If that's a bad summary, it might not be Duncan's fault, I kinda skimmed.
The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.
There's certainly something to that. But in the other direction, there's the Claudette Colvin vs Rosa Parks anecdote, where (as I understand it) civil rights campaigners declined to signal-boost and take a stand on a case that they thought the general public would be unsympathetic to (an unmarried pregnant teen defendant), and instead waited for a more PR-friendly test case to come along. We can't know the counterfactual, but I see that as a plausibly reasonable and successful strategic decision.
The toxoplasma of rage dynamic is to go out of your way to seek the least PR-friendly test cases, because that's optimal for in-group signaling. I view that as a failure mode to be kept in mind (while acknowledging that sometimes defending scoundrels is exactly the right thing to do).
I want to say loud and clear that I don't think the only two options are (1) "saying X in a way that will predictably and deeply hurt lots of people and/or piss them off" and (2) "not saying X at all". There's also the option of (3) "saying X in a way that will persuade anti-X-ers to change their minds and join your side". And also sometimes there's (4) "saying X in a kinda low-key way where anti-X-ers won't really care or take notice, or at least won't try to take revenge on things that we care about".
My sense is that there's safety-in-numbers in saying "obviously Tiananmen Square is a thing that happened", in a way that there is not safety-in-numbers in saying "obviously TBC is a perfectly lovely normal book full of interesting insights written in good faith by a smart and reasonable person who is not racist in the slightest".
But still, if lots and lots of people in China believe Z, and I were writing a post that says "Here's why Z is false", I would try to write it in a way that might be persuasive to initially-skeptical Chinese readers. And if I were writing a post that says "Z is false, and this has interesting implications on A,B,C", I would try to open it with "Side note: I'm taking it for granted that Z is false for the purpose of this post. Not everyone agrees with me that Z is false. But I really think I'm right about this, and here's a link to a different article that makes that argument in great detail."
we are talking about the book’s provenance / authorship / otherwise “metadata”—and certainly not about the book’s impact
A belief that "TBC was written by a racist for the express purpose of justifying racism" would seem to qualify as "worth mentioning prominently at the top" under that standard, right?
And it would be quite unreasonable to suggest that a post titled “Book Review: The Protocols of the Elders of Zion” is somehow inherently “provocative”, “insulting”, “offensive”, etc., etc.
I imagine that very few people would find the title by itself insulting; it's really "the title in conjunction with the first paragraph or two" (i.e. far enough to see that the author is not going to talk up-front about the elephant in the room).
Hmm, maybe a better way to say it is: The title plus the genre is what might insult people. The genre of this OP is "a book review that treats the book as a serious good-faith work of nonfiction, which might have some errors, just like any nonfiction book, but also presumably has some interesting facts etc." You don't need to read far or carefully to know that the OP belongs to this genre. It's a very different genre from a (reasonable) book review of "Protocols of the Elders of Zion", or a (reasonable) book review of "Mein Kampf", or a (reasonable) book review of "Harry Potter".
Hmm, I think you didn't get what I was saying. A book review of "Protocols of the Elders of Zion" is great, I'm all for it. A book review of "Protocols of the Elders of Zion" which treats it as a perfectly lovely normal book and doesn't say anything about the book being a forgery until you get 28 paragraphs into the review and even then it's barely mentioned is the thing that I would find extremely problematic. Wouldn't you? Wouldn't that seem like kind of a glaring omission? Wouldn't that raise some questions about the author's beliefs and motives in writing the review?
Do you view those facts as evidence that I’m an unreasonable person?
Yeah.
Do you ever, in your life, think that things are true without checking? Do you think that the radius of earth is 6380 km? (Did you check? Did you look for skeptical sources?) Do you think that lobsters are more closely related to shrimp than to silverfish? (Did you check? Did you look for skeptical sources?) Do you think that it's dangerous to eat an entire bottle of medicine at once? (Did you check? Did you look for skeptical sources?)
I think you're holding people up to an unreasonable standard here. You can't do anything in life without having sources that you generally trust as being probably correct about certain things. In my life, I have at times trusted sources that in retrospect did not deserve my trust. I imagine that this is true of everyone.
Suppose we want to solve that problem. (We do, right?) I feel like you're proposing a solution of "form a community of people who have never trusted anyone about anything". But such a community would be empty! A better solution is: have a bunch of Scott Alexanders, who accept that people currently have beliefs that are wrong, but charitably assume that maybe those people are nevertheless open to reason, and try to meet them where they are and gently persuade them that they might be mistaken. Gradually, in this way, the people (like former-me) who were trusting the wrong sources can escape their bubble and find better sources, including sources who preach the virtues of rationality.
We're not born with an epistemology instruction manual. We all have to find our way, and we probably won't get it right the first time. Splitting the world into "people who already agree with me" and "people who are forever beyond reason", that's the wrong approach. Well, maybe it works for powerful interest groups that can bully people around. We here at lesswrong are not such a group. But we do have the superpower of ability and willingness to bring people to our side via patience and charity and good careful arguments. We should use it! :)
I don't think my suggestions are getting pushback; I think that my suggestions are being pattern-matched to "let's all self-censor / cower before the woke mob", and everyone loves having that debate at the slightest pretext. For example, I maintain that my suggestion of "post at another site and linkpost from here, in certain special situations" is next-to-zero-cost, for significant benefit. Indeed, some people routinely post-elsewhere-and-linkpost, for no reason in particular. (The OP author already has a self-hosted blog, so there's no inconvenience.) This seems to me like a prudent, win-win move, and if people aren't jumping on it, I'm tempted to speculate that people are here for the fun signaling, not the boring problem-solving / world-optimizing.
Imposing restrictions on our prolific writers
That's not a useful framing. The mods have indicated that they won't impose restrictions. Instead, I am trying to persuade people.
Sorry, what? A book which you (the hypothetical Person A) have never read (and in fact have only the vaguest notion of the contents of) has personally caused you to suffer? And by successfully (!!) “advocating for racism”, at that? This is… well, “quite a leap” seems like an understatement; perhaps the appropriate metaphor would have to involve some sort of Olympic pole-vaulting event. This entire (supposed) perspective is absurd from any sane person’s perspective.
I have a sincere belief that The Protocols Of The Elders Of Zion directly contributed to the torture and death of some of my ancestors. I hold this belief despite having never read this book, and having only the vaguest notion of the contents of this book, and having never sought out sources that describe this book from a "neutral" point of view.
Do you view those facts as evidence that I'm an unreasonable person?
Further, if I saw a post about The Protocols Of The Elders Of Zion that conspicuously failed to mention anything about people being oppressed as a result of the book, or a post that buried said discussion until after 28 paragraphs of calm open-minded analysis, well, I think I wouldn't read through the whole piece, and I would also jump to some conclusions about the author. I stand by this being a reasonable thing to do, given that I don't have unlimited time.
By contrast, if I saw a post about The Protocols Of The Elders Of Zion that opened with "I get it, I know what you've heard about this book, but hear me out, I'm going to explain why we should give this book a chance with an open mind, notwithstanding its reputation…", then I would certainly consider reading the piece.
My comment here argues that a reasonable person could find this post insulting.
Suppose that Person A finds Statement X demeaning, and you believe that X is not in fact demeaning to A, but rather A was misunderstanding X, or trusting bad secondary sources on X, or whatever.
What do you do?
APPROACH 1: You say X all the time, loudly, while you and your friends high-five each other and congratulate yourselves for sticking it to the woke snowflakes.
APPROACH 2: You try sincerely to help A understand that X is not in fact demeaning to A. That involves understanding where A is coming from, meeting A where A is currently at, defusing tension, gently explaining why you believe A is mistaken, etc. And doing all that before you loudly proclaim X.
I strongly endorse Approach 2 over 1. I think Approach 2 is more in keeping with what makes this community awesome, and Approach 2 is the right way to bring exactly the right kind of people into our community, and Approach 2 is the better way to actually "win", i.e. get lots of people to understand that X is not demeaning, and Approach 2 is obviously what community leaders like Scott Alexander would do (as for Eliezer, um, I dunno, my model of him would strongly endorse approach 2 in principle, but also sometimes he likes to troll…), and Approach 2 has nothing to do with self-censorship.
~~
Getting back to the object level and OP. I think a lot of our disagreement is here in the details. Let me explain why I don't think it is "plainly obvious to any even remotely reasonable person that the OP is not intended as any insult to anyone".
Imagine that Person A believes that Charles Murray is a notorious racist, and TBC is a book that famously and successfully advocated for institutional racism via lies and deceptions. You don't have to actually believe this—I don't—I am merely asking you to imagine that Person A believes that.
Now look at the OP through A's eyes. Right from the title, it's clear that OP is treating TBC as a perfectly reasonable respectable book by a perfectly reasonable respectable person. Now A starts scanning the article, looking for any serious complaint about this book (a book which, from A's perspective, personally caused A to suffer by successfully advocating for racism), and giving up after scrolling for a while and coming up empty. I think a reasonable conclusion from A's perspective is that OP doesn't think that the book's racism advocacy is a big deal, or maybe OP even thinks it's a good thing. I think it would be understandable for Person A to be insulted and leave the page without reading every word of the article.
Once again, we can lament (justifiably) that Person A is arriving here with very wrong preconceptions, probably based on trusting bad sources. But that's the kind of mistake we should be sympathetic to. It doesn't mean Person A is an unreasonable person. Indeed, Person A could be a very reasonable person, exactly the kind of person who we want in our community. But they've been trusting bad sources. Who among us hasn't trusted bad sources at some point in our lives? I sure have!
And if Person A represents a vanishingly rare segment of society with weird idiosyncratic wrong preconceptions, maybe we can just shrug and say "Oh well, can't please everyone." But if Person A's wrong preconceptions are shared by a large chunk of society, we should go for Approach 2.
I like the norm of "If you're saying something that lots of people will probably (mis)interpret as being hurtful and insulting, see if you can come up with a better way to say the same thing, such that you're not doing that." This is not a norm of censorship nor self-censorship, it's a norm of clear communication and of kindness. I can easily imagine a book review of TBC that passes that test. But I think this particular post does not pass that test, not even close.
If a TBC post passed that test, well, I would still prefer that it be put off-site with a linkpost and so on, but I wouldn't feel as strongly about it.
I think "censorship" is entirely the wrong framing. I think we can have our cake and eat it too, with just a little bit of effort and thoughtfulness.
This is exactly the sort of thing we should not be doing.
I should also add that Duncan has a recent post enthusiastically endorsing the idea that we should try to anticipate how other people might misinterpret what we say, and clarify that we do not in fact mean those things. That post got a lot of upvotes and no negative comments. But it seems to me that Duncan's advice is in direct opposition to Robin's advice. Do you think Duncan's post is really bad advice? Or if not, how do you reconcile them?
it is truly shocking to see someone endorsing this standard on Less Wrong
I don't think I was endorsing it, I was stating (what I believe to be) a fact about how lots of people perceive certain things.
I used the term "provocative" as a descriptive (not normative) statement: it means "a thing that provokes people". I didn't run a survey, but my very strong belief is that "provocative" is an accurate description here.
I do think we should take actions that achieve goals we want in the universe we actually live in, even if this universe is different than the universe we want to live in. If something is liable to provoke people, and we wish it weren't liable to provoke people, we should still consider acting as if it is in fact liable to provoke people. For example, we can consider what the consequences of provoking people are, whether we care, and, if we do care, how much effort and cost would be required to not provoke people. My suggestion is that this is a case where provoking people has really bad potential consequences, and where not provoking people is an eminently feasible alternative with minimal costs, and therefore we should choose to not provoke people.
This is exactly the sort of thing we should not be doing.
I read Robin's blog post as saying that disclaimers are kinda annoying (which is fair enough), not that they are a very very bad thing that must never be done. I think we can take it on a case-by-case basis, weighing the costs and benefits.
…until the connections are easily followed by, say, the NYT or any random internet sleuth…
I think there's a widespread perception in society that "being a platform that hosts racist content" is very much worse than "being a site where one can find a hyperlink to racist content". I'm not necessarily endorsing that distinction, but I'm quite confident that it exists in many people's minds.
I'm not seeing a provocative title or framing
Hmm, maybe you're from a different part of the world / subculture or something. But in cosmopolitan USA culture, merely mentioning TBC (without savagely criticizing it in the same breath) is widely and instantly recognized as a strongly provocative and hurtful and line-crossing thing to do. Saying "Hey, I'm just reading TBC with a curious and open mind, I'm not endorsing every word" is perceived to be kinda like saying "Hey, I'm just studying the philosophy of Nazism with a curious and open mind, I'm not endorsing every word" or "Hey, I'm just reading this argument for legalizing child rape with a curious and open mind, I'm not endorsing every word" or whatever.
If you're writing a post arguing that some rape law doesn't actually help with rape despite popular perceptions, you open with a statement that rape is in fact bad and you do in fact want to reduce it, and you write it in a way that's sympathetic to people who have been really harmed by rape. By the same token, if you're writing a post that says reading TBC does not actually perpetuate racism despite popular perceptions, you open with a statement that racism is bad and you do in fact want to reduce it, and you write it in a way that's sympathetic to people who have been really seriously harmed by racism.
Strong-downvoted. I want lesswrong to be a peaceful place where we can have polite boring truth-seeking arguments without incurring reputational risk / guilt-by-association. I understand the benefit of having polite boring truth-seeking arguments about racism-adjacent topics that take sides in an incredibly incendiary culture war. However, there is also a cost—namely, there's a public good called "right now there is minimal reputational risk of being publicly IRL known as a lesswrong participant", and each time there's a post about racism-adjacent topics that takes sides in an incredibly incendiary culture war, we're shooting a flaming arrow at that public good, and hoping we get lucky and the whole thing doesn't burn to the ground.
There are simple ways to get almost all the benefit with almost none of the cost, namely:
1. Post on a different site (especially a different site that allows comments) and (if you must) do a linkpost here.
2. Pick a post title / framing that's less provocative (UPDATE: see my comment here for why I think reasonable people could find this post provocative / insulting).
3. Put more effort into not saying racist / racist-adjacent things (or, to be charitable, "things that would plausibly come across as racist"), like (to pick one example) how the word "better" in "a secretary or a dentist who is one standard deviation better than average is worth a 40% premium in salary" seems to be thoughtlessly equating g-factor with moral worth.
4. Seriously engage with intelligent criticism of this book, including describing in a sympathetic / ITT-passing way why people might have found this book hurtful or problematic; or if you can't be bothered to do that, then maybe you shouldn't write the post in the first place.
Seconding this: When I did classified work at a USA company, I got the strong impression that (1) If I have any financial problems or mental health problems, I need to tell the security office immediately; (2) If I do so, the security office would immediately tell the military, and then the military would potentially revoke my security clearance. Note that some people get immediately fired if they lose their clearance. That wasn't true for me—but losing my clearance would have certainly hurt my future job prospects.
My strong impression was that neither the security office nor anyone else had any intention to help us employees with our financial or mental health problems. Nope, their only role was to exacerbate personal problems, not solve them. There's an obvious incentive problem here; why would anyone disclose their incipient financial or mental health problems to the company, before they blow up? But I think from the company's perspective, that's a feature not a bug. :-P
(As it happens, neither myself nor any of my close colleagues had financial or mental health problems while I was working there. So it's possible that my impressions are wrong.)
Elaborating on Teerth Aloke's answer, I think you should be more, um, consequentialist about this whole thing. "...To think in terms of desired outcomes, and to ask: 'What is the likeliest way that the outcome in question might occur?' ...then repeat this process until we backchain to interventions that actors can take today." (ref)
So the first step is for you to decide: what is my goal? In principle, there can be a lot of possibilities:
- Your goal is to mitigate climate change and/or its negative impacts
- Your goal is to impress your like-minded friends with how much you care about climate change
- Your goal is to have something interesting on your resume when you later apply for jobs and internships and college
- Your goal is to not attend school for a while
- Your goal is to alleviate the boredom and angst of our messed-up modern infantilizing overly-extended childhood years
- Etc.
I expect you to say "Hey anon03, what's your deal? Why are you attacking me? Obviously it's the first one". And maybe that's true, I don't know you, I'm very open-minded to it being 100% the first one. This was not meant to be a loaded question. But I just suggest that you think about it really hard and look inside yourself and make sure you say it's the first one because it really is and not because you want it to be. Make sure it's super-duper-true. Make sure you're 100% sure. Make sure you feel it in your bones, and you're not just fooling yourself. Make sure you have read Scout Mindset before you answer the question.
(If it's not 100% the first one, I suggest you read Purchase Fuzzies and Utilons Separately and follow the advice by making some plans that purely optimize for the first bullet point and making separate plans that purely optimize for any of the other bullet points that motivate you.)
OK, now let's say that your answer is 100% the first bullet point. So, you're trying to mitigate climate change and/or its negative consequences as effectively as possible, given the power and tools at your disposal. Well congratulations, you are now 90% of the way towards effective altruism. (The other 10% is taking it to its logical conclusion by considering whether or not climate change is the best cause to be working on, or whether it's possible that something else is even more helpful and urgent to work on. You can still decide it's climate change, that's fine—I'm just saying that you really win your effective altruism badge by having that be the result of a thoughtful decision, taken after considering and understanding the various other awful urgent problems in the world, like the risk of nuclear holocaust, the massive-scale torture of animals at factory farms, poor people dying of easily-preventable illnesses, risks of much worse future pandemics, risks to democracy, etc. etc.)
So, if the students do a climate strike, what are you hoping the consequence will be, and what's the probability that that consequence comes about, and if it does, how many tons of CO2 will be removed from the atmosphere (or not emitted)? Is there a particular law that you're hoping will be passed? Who would pass that law? Why haven't they passed it already? Probably the answer to that last one relates to their incentives, and/or their beliefs. Well, would the strike change their beliefs? Would it change their incentives? How much? Are there examples of similar interventions working or not working? Are there examples of similar interventions being counterproductive, e.g. by turning off people who were previously not strongly opinionated?
And then you do the annoying thing where you consider other possible things you could do for the climate, for the same amount of time / effort / risk / whatever. What if you spend extra time babysitting, and send the proceeds to climate offsets, or to a climate-change philanthropy? Solar panels are basically free money for homeowners, but not everyone in your community has them ... what if you go ask them why not, and then figure out how to make the process easier and more pleasant? What if you donate to nuclear power advocacy? What if you study materials science and try to get a career making better solar cells? Etc. Etc. Then you weigh all your options on a spreadsheet ... and do something! (And post that spreadsheet to the EA Forum!! They love that kind of stuff!) Good places to start for this step might be here and here.
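To make that spreadsheet step concrete, here's a toy sketch (in Python) of the kind of comparison I have in mind. Every number in it is made up purely for illustration; the point is the shape of the calculation (expected tons of CO2 per hour of effort), not the conclusions:

```python
# Toy cost-effectiveness comparison. All numbers are invented placeholders;
# replace them with your own researched estimates before drawing conclusions.

options = {
    # name: (hours of effort, probability the effort "works", tons CO2 averted if it works)
    "school climate strike":          (40, 0.02, 1000),
    "babysit, donate to offsets":     (40, 0.95,   10),
    "door-knock for rooftop solar":   (40, 0.30,  200),
    "donate to nuclear advocacy org": (40, 0.05, 2000),
}

for name, (hours, p_success, tons_if_success) in options.items():
    expected_tons = p_success * tons_if_success
    print(f"{name:32s} ~{expected_tons:6.1f} expected tons CO2 "
          f"({expected_tons / hours:5.2f} tons per hour of effort)")
```

(Obviously real options differ in more dimensions than this, e.g. risk, reversibility, and personal fit, but even a crude version like this forces the comparison out into the open.)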
I think it would be both more effective and more LessWrong-norm-ish to argue that there was no widespread election fraud rather than merely claim it. Like, describe and refute specific claims, or at least tell readers that you dug into them before dismissing them (I assume you did). It only takes a sentence or two! You link to the best arguments on both sides, and then say you read them both, and this one checks out, and that one is full of easily-refuted lies and confusions. Or whatever. Otherwise what's the point? Most of your audience already agrees with you, and the rest will assume you're just another sucker who blindly trusts the lamestream media... :-P
I'm concerned that these efforts are just delaying the inevitable. (Which is still very worthwhile!!) But in the longer run, we're just doomed!
Like, the people in the military and defense contractors developing autonomous drone navigation systems are doing the exact same thing as probably dozens of university researchers, drone agriculture technology companies, Amazon, etc. In fact the latter are probably doing it better!
So ideally we want a high technological barrier between what's legal and the weapons that we don't want to exist; otherwise anyone can immediately build the weapons. What's the nature of that technological barrier? Right now it's the navigation / AI, but again that's not gonna last, unless we block drone navigation AI at companies and universities, which is not politically feasible. What else is there? The drone hardware? Nope. The weapon carried by the drone? I mean, with some string etc., a quadcopter can carry a handgun or little bomb or whatever, so this doesn't seem like much of a technological barrier, although it's better than nothing, and certainly it's better than having a nicely-packaged armed drone directly for sale. So yeah, I'm inclined to say that we're just doomed to have these things for sale, at least by organized crime groups, sooner or later. I don't know, that's just the conclusion I jump to with no particular knowledge.
I'd recommend against buying firearms: there's a lot of practice that goes into using them well, and if you don't know what you're doing you're probably going to make things worse.
Anyone know anything about buying pepper spray??
I have a 1000+ karma non-anonymous lesswrong account (not this one obviously) and I can tell you that if lesswrong got a reputation as a hereditarian hangout, I would delete that account and move my blog posts to WordPress or something instead. I don't want to take that risk to my reputation, to my current and future jobs, and to my current and future relationships.