Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles
post by Zack_M_Davis · 2024-03-02T22:05:49.553Z · LW · GW · 22 comments
This is a link post for http://unremediatedgender.space/2024/Mar/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles/
Contents
- The New York Times's Other Shoe Drops (February 2021)
- The Politics of the Apolitical
- A Leaked Email Non-Scandal (February 2021)
- Yudkowsky Doubles Down (February 2021)
- It Matters Whether People's Beliefs About Themselves Are Actually True
- A Filter Affecting Your Evidence
- The Stated Reasons Not Being the Real Reasons Is a Form of Community Harm
- People Who Are Trying to Be People Want to Improve Their Self-Models
- Criticism of Public Statements Is About the Public Statements, Not Subjective Intent
- Recap of Yudkowsky's History of Public Statements on Transgender Identity
- An Adversarial Game
- The Battle That Matters
It was not the sight of Mitchum that made him sit still in horror. It was the realization that there was no one he could call to expose this thing and stop it—no superior anywhere on the line, from Colorado to Omaha to New York. They were in on it, all of them, they were doing the same, they had given Mitchum the lead and the method. It was Dave Mitchum who now belonged on this railroad and he, Bill Brent, who did not.
—Atlas Shrugged by Ayn Rand
Quickly recapping my Whole Dumb Story so far: ever since puberty, I've had this obsessive sexual fantasy about being magically transformed into a woman, which got contextualized by these life-changing Sequences of blog posts by Eliezer Yudkowsky that taught me (amongst many other things) how fundamentally disconnected from reality my fantasy was. So it came as a huge surprise when, around 2016, the "rationalist" community that had formed around the Sequences seemingly unanimously decided that guys like me might actually be women in some unspecified metaphysical sense. A couple years later, having strenuously argued against the popular misconception that the matter could be resolved by simply redefining the word woman (on the grounds that you can define the word any way you like), I flipped out when Yudkowsky prevaricated about how his own philosophy of language says that you can't define a word any way you like, prompting me to join with allies to persuade him to clarify. When that failed, my attempts to cope with the "rationalists" being fake led to a series of small misadventures culminating in Yudkowsky eventually clarifying the philosophy-of-language issue after I ran out of patience and yelled at him over email.
Really, that should have been the end of the story—with a relatively happy ending, too: that it's possible to correct straightforward philosophical errors, at the cost of almost two years of desperate effort by someone with Something to Protect [LW · GW].
That wasn't the end of the story, which does not have such a relatively happy ending.
The New York Times's Other Shoe Drops (February 2021)
On 13 February 2021, "Silicon Valley's Safe Space", the anticipated New York Times piece on Slate Star Codex, came out. It was ... pretty lame? (Just lame, not a masterfully vicious hit piece.) Cade Metz did a mediocre job of explaining what our robot cult is about, while pushing hard on the subtext to make us look racist and sexist, occasionally resorting to odd constructions that were surprising to read from someone who had been a professional writer for decades. ("It was nominally a blog", Metz wrote of Slate Star Codex. "Nominally"?) The article's claim that Alexander "wrote in a wordy, often roundabout way that left many wondering what he really believed" seemed more like a critique of the many's reading comprehension than of Alexander's writing.
Although that poor reading comprehension may have served a protective function for Scott. A mob that attacks over things that look bad when quoted out of context can't attack you over the meaning of "wordy, often roundabout" text that they can't read. The Times article included this sleazy guilt-by-association attempt:
In one post, [Alexander] aligned himself with Charles Murray, who proposed a link between race and I.Q. in "The Bell Curve." In another, he pointed out that Mr. Murray believes Black people "are genetically less intelligent than white people."[1]
But Alexander only "aligned himself with Murray" in "Three Great Articles On Poverty, And Why I Disagree With All Of Them" in the context of a simplified taxonomy of views on the etiology of poverty. This doesn't imply agreement with Murray's views on heredity! (A couple of years earlier, Alexander had written that "Society Is Fixed, Biology Is Mutable": pessimism about our Society's ability to intervene to alleviate poverty does not amount to the claim that poverty is "genetic.")
Alexander's reply statement pointed out the Times's obvious chicanery, but (I claim) introduced a distortion of its own—
The Times points out that I agreed with Murray that poverty was bad, and that also at some other point in my life noted that Murray had offensive views on race, and heavily implies this means I agree with Murray's offensive views on race. This seems like a weirdly brazen type of falsehood for a major newspaper.
It is a weirdly brazen invalid inference. But by calling it a "falsehood", Alexander heavily implies he disagrees with Murray's offensive views on race: in invalidating the Times's charge of guilt by association with Murray, Alexander validates Murray's guilt.
But anyone who's read and understood Alexander's work should be able to infer that Scott probably finds it plausible that there exist genetically mediated differences in socially relevant traits between ancestry groups (as a value-free matter of empirical science with no particular normative implications). For example, his review of Judith Rich Harris on his old LiveJournal indicates that he accepts the evidence from twin studies for individual behavioral differences having a large genetic component, and section III of his "The Atomic Bomb Considered As Hungarian High School Science Fair Project" indicates that he accepts genetics as an explanation for group differences in the particular case of Ashkenazi Jewish intelligence.[2]
There are a lot of standard caveats that go here which Alexander would no doubt scrupulously address if he ever chose to tackle the subject of genetically-mediated group differences in general: the mere existence of a group difference in a "heritable" trait doesn't imply a genetic cause of the group difference (because the groups' environments could also be different). It is entirely conceivable that the Ashkenazi IQ advantage is real and genetic, but the black–white IQ gap is fake and environmental.[3] Moreover, group averages are just that—averages. They don't imply anything about individuals and don't justify discrimination against individuals.
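(To make the first caveat concrete, here is a minimal simulation sketch of my own, not anything from Alexander or Murray, with arbitrary made-up parameters: a trait can be substantially "heritable" within each of two groups even when the entire gap between the group means is environmental by construction.)

```python
# Toy model (hypothetical numbers): same genetic distribution in both groups,
# but group B gets a worse environment. "Heritability" is estimated within
# each group as Var(genes) / Var(phenotype).
import random

random.seed(0)

def make_group(n, env_mean):
    people = []
    for _ in range(n):
        g = random.gauss(0, 1)          # genetic contribution (identical across groups)
        e = random.gauss(env_mean, 1)   # environmental contribution
        people.append((g, g + e))       # (genes, phenotype)
    return people

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

for name, env_mean in [("A", 0.0), ("B", -1.0)]:
    group = make_group(100_000, env_mean)
    genes = [g for g, _ in group]
    phenotypes = [p for _, p in group]
    h_squared = variance(genes) / variance(phenotypes)
    mean_phenotype = sum(phenotypes) / len(phenotypes)
    print(f"group {name}: within-group heritability ~ {h_squared:.2f}, "
          f"mean phenotype ~ {mean_phenotype:.2f}")

# Both groups show heritability of about 0.5, but the one-unit gap between the
# group means is entirely environmental, because we built the groups that way.
```

(The sketch only shows compatibility, of course; it doesn't say which explanation holds in any real-world case.)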
But anyone who's read and understood Charles Murray's work knows that Murray also includes the standard caveats![4] (Even though the one about group differences not implying anything about individuals is technically wrong.) The Times's insinuation that Scott Alexander is a racist like Charles Murray seems like a "Gettier attack": the charge is essentially correct, even though the evidence used to prosecute the charge before a jury of distracted New York Times readers is completely bogus.
The Politics of the Apolitical
Why do I keep bringing up the claim that "rationalist" leaders almost certainly believe in cognitive race differences (even if it's hard to get them to publicly admit it in a form that's easy to selectively quote in front of New York Times readers)?
It's because one of the things I noticed while trying to make sense of why my entire social circle suddenly decided in 2016 that guys like me could become women by means of saying so, is that in the conflict between the "rationalists" and mainstream progressives, the defensive strategy of the "rationalists" is one of deception.
In this particular historical moment, we end up facing pressure from progressives, because—whatever our object-level beliefs about (say) sex, race, and class differences, and however much most of us would prefer not to talk about them—on the meta level, our creed requires us to admit it's an empirical question, not a moral one—and that empirical questions have no privileged reason to admit convenient answers [LW · GW].
I view this conflict as entirely incidental, something that would happen in some form in any place and time [LW · GW], rather than being specific to American politics or "the left". In a Christian theocracy, our analogues would get in trouble for beliefs about evolution; in the old Soviet Union, our analogues would get in trouble for thinking about market economics (as a positive technical discipline [LW · GW] adjacent to game theory, not yoked to a particular normative agenda).[5]
Incidental or not, the conflict is real, and everyone smart knows it—even if it's not easy to prove that everyone smart knows it, because everyone smart is very careful about what they say in public. (I am not smart.)
So The New York Times implicitly accuses us of being racists, like Charles Murray, and instead of pointing out that being a racist like Charles Murray is the obviously correct position that sensible people will tend to reach in the course of being sensible, we disingenuously deny everything.[6]
It works surprisingly well. I fear my love of Truth is not so great; if I didn't have Something to Protect, I would have happily participated in the cover-up.
As it happens, in our world, the defensive cover-up consists of throwing me under the bus. Facing censure from the progressive egregore for being insufficiently progressive, we can't defend ourselves ideologically. (We think we're egalitarians, but progressives won't buy that because we like markets too much.) We can't point to our racial diversity. (Mostly white if not Jewish, with a handful of East and South Asians, exactly as you'd expect from chapters 13 and 14 of The Bell Curve.) Subjectively, I felt like the sex balance got a little better after we hybridized with Tumblr and Effective Altruism (as contrasted with the old days) but survey data doesn't unambiguously back this up.[7]
But trans! We have plenty of those! In the same blog post in which Scott Alexander characterized rationalism as the belief that Eliezer Yudkowsky is the rightful caliph, he also named "don't misgender trans people" as one of the group's distinguishing norms. Two years later, he joked that "We are solving the gender ratio issue one transition at a time".
The benefit of having plenty of trans people is that high-ranking members of the progressive stack can be trotted out as a shield to prove that we're not counterrevolutionary right-wing Bad Guys. Thus, Jacob Falkovich noted (on 23 June 2020, just after Slate Star Codex went down), "The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders", and Scott Aaronson noted (in commentary on the February 2021 New York Times article) that "the rationalist community's legendary openness to alternative gender identities and sexualities" should have "complicated the picture" of our portrayal as anti-feminist.
Even the haters grudgingly give Alexander credit for "The Categories Were Made for Man, Not Man for the Categories": "I strongly disagree that one good article about accepting transness means you get to walk away from writing that is somewhat white supremacist and quite fascist without at least acknowledging you were wrong", wrote one.
Under these circumstances, dethroning the supremacy of gender identity ideology is politically impossible. All our Overton margin [LW · GW] is already being spent somewhere else; sanity on this topic is our dump stat.
But this being the case, I have no reason to participate in the cover-up. What's in it for me? Why should I defend my native subculture from external attack, if the defense preparations themselves have already rendered it uninhabitable to me?
A Leaked Email Non-Scandal (February 2021)
On 17 February 2021, Topher Brennan, disapproving of Scott and the community's defense against the Times, claimed that Scott Alexander "isn't being honest about his history with the far-right", and published an email he had received from Scott in February 2014 on what Scott thought some neoreactionaries were getting importantly right.
I think that to people who have read and understood Alexander's work, there is nothing surprising or scandalous about the contents of the email. He said that biologically mediated group differences are probably real and that neoreactionaries were the only people discussing the object-level hypotheses or the meta-level question of why our Society's intelligentsia is obfuscating the matter. He said that reactionaries as a whole generate a lot of garbage but that he trusted himself to sift through the noise and extract the novel insights. The email contains some details that Alexander hadn't blogged about—most notably the section headed "My behavior is the most appropriate response to these facts", explaining his social strategizing vis-à-vis the neoreactionaries and his own popularity. But again, none of it is surprising if you know Scott from his writing.
I think the main reason someone would consider the email a scandalous revelation is if they hadn't read Slate Star Codex that deeply—if their picture of Scott Alexander as a political writer was "that guy who's so committed to charitable discourse that he wrote up an explanation of what reactionaries (of all people) believe—and then turned around and wrote up the definitive explanation of why they're totally wrong and you shouldn't pay them any attention." As a first approximation, it's not a terrible picture. But what it misses—what Scott knows—is that charity isn't about putting on a show of superficially respecting your ideological opponent before concluding (of course) that they're wrong. Charity is about seeing what the other guy is getting right.
The same day, Yudkowsky published a Facebook post that said[8]:
I feel like it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics and will hurt you if you trust them, but in case it wasn't obvious consider the point made explicitly. (Subtext: Topher Brennan. Do not provide any link in comments to Topher's publication of private emails, explicitly marked as private, from Scott Alexander.)
I was annoyed at how the discussion seemed to be ignoring the obvious political angle, and the next day, 18 February 2021, I wrote a widely Liked comment: I agreed that there was a grain of truth to the claim that our detractors hate us because they're evil bullies, but stopping the analysis there seemed incredibly shallow and transparently self-serving.
If you listened to why they said they hated us, it was because we were racist, sexist, transphobic fascists. The party-line response seemed to be trending toward, "That's obviously false: Scott voted for Elizabeth Warren, look at all the social democrats on the Less Wrong/Slate Star Codex surveys, &c. They're just using that as a convenient smear because they like bullying nerds."
But if "sexism" included "It's an empirical question whether innate statistical psychological sex differences of some magnitude exist, it empirically looks like they do, and this has implications about our social world" (as articulated in, for example, Alexander's "Contra Grant on Exaggerated Differences"), then the "Slate Star Codex et al. are crypto-sexists" charge was absolutely correct. (Crypto-racist, crypto-fascist, &c. left as an exercise for the reader.)
You could plead, "That's a bad definition of sexism," but that's only convincing if you've been trained in using empiricism and open discussion to discover policies with utilitarian-desirable outcomes. People whose education came from California public schools plus Tumblr didn't already know that. (I didn't know that at age 18 back in 'aught-six, and we didn't even have Tumblr then.) In that light, you could see why someone who was more preöccupied with eradicating bigotry than protecting the right to privacy might find "blow the whistle on people who are claiming to be innocent but are actually guilty (of thinking bad thoughts)" to be a more compelling consideration than "respect confidentiality requests".[9]
Here, I don't think Scott has anything to be ashamed of—but that's because I don't think learning from right-wingers is a crime. If our actual problem was "Genuinely consistent rationalism is realistically always going to be an enemy of the state, because the map that fully reflects the territory is going to include facts that powerful coalitions would prefer to censor, no matter what specific ideology happens to be on top in a particular place and time [LW · GW]", but we thought our problem was "We need to figure out how to exclude evil bullies", then we were in trouble!
Yudkowsky replied that everyone had a problem of figuring out how to exclude evil bullies. We also had an inevitable Kolmogorov complicity problem, but that shouldn't be confused with the evil bullies issue, even if bullies attack via Kolmogorov issues.
I'll agree that the problems shouldn't be confused. I can easily believe that Brennan was largely driven by bully-like motives even if he told himself a story about being a valiant whistleblower defending Cade Metz's honor against Scott's deception.
But I think it's important to notice both problems, instead of pretending that the only problem was Brennan's disregard for Alexander's privacy. Without defending Brennan's actions, there's a non-evil-bully case for wanting to reveal information, rather than participate in a cover-up to protect the image of the "rationalists" as non-threatening to the progressive egregore. If the orchestrators of the cover-up can't even acknowledge to themselves that they're orchestrating a cover-up, they're liable to be confusing themselves about other things, too.
As it happened, I had another social media interaction with Yudkowsky that same day, 18 February 2021. Concerning the psychology of people who hate on "rationalists" for alleged sins that don't particularly resemble anything we do or believe, he wrote:
Hypothesis: People to whom self-awareness and introspection come naturally, put way too much moral exculpatory weight on "But what if they don't know they're lying?" They don't know a lot of their internals! And don't want to know! That's just how they roll.
In reply, Michael Vassar tagged me. "Michael, I thought you weren't talking to me (after my failures of 18–19 December)?" I said. "But yeah, I wrote a couple blog posts about this thing", linking to "Maybe Lying Doesn't Exist" [LW · GW] and "Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle" [LW · GW].
After a few moments, I decided it was better if I explained the significance of Michael tagging me:
Oh, maybe it's relevant to note that those posts were specifically part of my 21-month rage–grief campaign of being furious at Eliezer all day every day for lying-by-implicature about the philosophy of language? But, I don't want to seem petty by pointing that out! I'm over it!
And I think I would have been over it ...
—except that Yudkowsky reopened the conversation four days later, on 22 February 2021, with a new Facebook post explaining the origins of his intuitions about pronoun conventions. It concludes that "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he", with a default for those-who-haven't-asked that goes by gamete size' and to say that this just is the normative definition. Because it is logically rude, not just socially rude, to try to bake any other more complicated and controversial definition into the very language protocol we are using to communicate."
(Why!? Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging? Did my highly-Liked Facebook comment and Twitter barb about him lying by implicature temporarily bring my concerns to the top of his attention, despite the fact that I'm generally not that important?)
Yudkowsky Doubles Down (February 2021)
I eventually explained what was wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's "Challenges to Yudkowsky's Pronoun Reform Proposal".[10] Briefly: given a conflict over pronoun conventions, there's not going to be a "right answer", but we can at least be objective in describing what the conflict is about, and Yudkowsky wasn't doing that. Given that we can't coordinate a switch to universal singular they, the pronouns she and he continue to have different meanings in the minds of native English speakers, in the sense that your mind forms different probabilistic expectations of someone taking feminine or masculine pronouns. That's why trans people want to be referred to by the pronoun corresponding to their chosen gender: if there were literally no difference in meaning, there would be no reason to care. Thus, making the distinction on the basis of gender identity rather than sex has consequences [LW · GW]; by proclaiming his "simplest and best protocol" without acknowledging the ways in which it's not simple and not unambiguously the best, Yudkowsky was falsely portraying the policy debate as one-sided [LW · GW]. Furthermore, this misrepresentation would have harmful effects insofar as anyone was dumb enough to believe it: gender-dysphoric people deciding whether or not to socially transition need a correct model of how English pronouns work in the real world in order to perform an accurate cost–benefit analysis.
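(As a toy illustration of what "different probabilistic expectations" means here, the following is my own sketch with made-up numbers, not anything from "Challenges": if hearing "she" versus "he" shifts a listener's predictions about a correlated trait, then the pronouns are carrying information, which is exactly why anyone cares which one gets used.)

```python
# Toy model (made-up numbers): how much a listener's prediction about one
# correlated trait shifts depending on which pronoun they hear.
P_TALL_GIVEN_FEMALE = 0.05   # hypothetical P(taller than 5'10" | female)
P_TALL_GIVEN_MALE = 0.40     # hypothetical P(taller than 5'10" | male)

def p_tall_after_pronoun(p_referent_female):
    """P(tall) given the listener's probability that the referent is female."""
    return (p_referent_female * P_TALL_GIVEN_FEMALE
            + (1 - p_referent_female) * P_TALL_GIVEN_MALE)

# A listener whose model of English says "she" almost always refers to female
# people, and "he" to male people, forms very different expectations:
print(f'after "she": P(tall) ~ {p_tall_after_pronoun(0.99):.2f}')   # about 0.05
print(f'after "he":  P(tall) ~ {p_tall_after_pronoun(0.01):.2f}')   # about 0.40
# If the two pronouns induced identical predictions, there would be nothing
# at stake in which one gets used.
```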
I have more to say here (that I decided to cut from "Challenges") about the meta-level political context. The February 2021 post on pronouns is a fascinating document, in its own way—a penetrating case study on the effects of politics on a formerly great mind.
Yudkowsky begins by setting the context of "[h]aving received a bit of private pushback" on his willingness to declare that asking someone to use a different pronoun is not lying.
But the reason he got a bit ("a bit") of private pushback was because the November 2018 Twitter thread in question was so blatantly optimized to intimidate and delegitimize people who want to use language to reason about biological sex. The pushback wasn't about using trans people's preferred pronouns (I do that, too), or about not wanting pronouns to imply sex (sounds fine, if we were defining a conlang from scratch); the problem is using an argument that's ostensibly about pronouns to sneak in an implicature ("Who competes in sports segregated around an Aristotelian binary is a policy question [ ] that I personally find very humorous") that it's dumb and wrong to want to talk about the sense in which trans women are male and trans men are female, as a fact about reality that continues to be true even if it hurts someone's feelings, and even if policy decisions made on the basis of that fact are not themselves facts (as if anyone had doubted this).
In that context, it's revealing that in this February 2021 post attempting to explain why the November 2018 thread seemed like a reasonable thing to say, Yudkowsky doubles down on going out of his way to avoid acknowledging the reality of biological sex. He learned nothing! We're told that the default pronoun for those who haven't asked goes by "gamete size", on the grounds that it's "logically rude to demand that other people use only your language system and interpretation convention in order to communicate, in advance of them having agreed with you about the clustering thing."
But I've never measured how big someone's gametes are, have you? We only infer whether strangers' bodies are configured to produce small or large gametes by observing a variety of correlated characteristics. Thus, the complaint that sex-based pronoun conventions rudely demand that people "agree[ ] [...] about the clustering thing" is hypocritical, because Yudkowsky's proposal also expects people to agree about the clustering thing. Furthermore, for trans people who don't pass but are visibly trying to (without having explicitly asked for pronouns), one presumes that we're supposed to use the pronouns corresponding to their gender presentation, not their natal sex.
Thus, Yudkowsky's "gamete-size default" proposal can't be taken literally. The only way I can make sense of it is to interpret it as a flail at the prevailing reality that people are good at noticing what sex other people are, but that we want to be kind to people who are trying to appear to be the other sex.
One could argue that this is hostile nitpicking on my part: that the use of "gamete size" as a metonym for sex here is either an attempt to provide an unambiguous definition (because if you said sex, female, or male, someone could ask what you meant by that), or that it's at worst a clunky choice of words, not an intellectually substantive decision.
But the post seems to suggest that the motive isn't simply to avoid ambiguity. Yudkowsky writes:
In terms of important things? Those would be all the things I've read—from friends, from strangers on the Internet, above all from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate, or perhaps at all.
And I'm not happy that the very language I use, would try to force me to take a position on that; not a complicated nuanced position, but a binarized position, simply in order to talk grammatically about people at all.
What does the "tossed into a bucket" metaphor refer to, though? I can think of many things that might be summarized that way, and my sympathy for the one who does not like to be tossed into a bucket depends on exactly what real-world situation is being mapped to the bucket.
If we're talking about overt gender role enforcement—things like, "You're a girl, therefore you need to learn to keep house for your future husband," or "You're a man, therefore you need to toughen up"—then indeed, I strongly support people who don't want to be tossed into that kind of bucket.
(There are historical reasons for the buckets to exist, but I'm eager to bet on modern Society being rich enough and smart enough to either forgo the buckets, or at least let people opt out of the default buckets without causing too much trouble.)
But importantly, my support for people not wanting to be tossed into gender role buckets is predicated on their reasons having genuine merit—things like "The fact that I'm a juvenile female human doesn't mean I'll have a husband; I'm actually planning to become a nun", or "Your expectation that I be able to toughen up is not reasonable given the individuating information you have about me in particular being a huge crybaby, even if most adult human males are tougher than me". I don't think people have a general right to prevent others from using sex categories to make inferences or decisions about them, because that would be crazy. If a doctor were to recommend I get a prostate cancer screening on account of my being male and therefore at risk for prostate cancer, it would be bonkers for me to reply that I don't like being tossed into a Male Bucket like that.
When piously appealing to the feelings of people describing reasons they do not want to be tossed into a Male Bucket or a Female Bucket, Yudkowsky does not seem to be distinguishing between reasons that have merit, and reasons that do not. The post continues (bolding mine):
In a wide variety of cases, sure, ["he" and "she"] can clearly communicate the unambiguous sex and gender of something that has an unambiguous sex and gender, much as a different language might have pronouns that sometimes clearly communicated hair color to the extent that hair color often fell into unambiguous clusters.
But if somebody's hair color is halfway between two central points? If their civilization has developed stereotypes about hair color they're not comfortable with, such that they feel that the pronoun corresponding to their outward hair color is something they're not comfortable with because they don't fit key aspects of the rest of the stereotype and they feel strongly about that? If they have dyed their hair because of that, or plan to get hair surgery, or would get hair surgery if it were safer but for now are afraid to do so? Then it's stupid to try to force people to take complicated positions about those social topics before they are allowed to utter grammatical sentences.
I agree that a language convention in which pronouns map to hair color seems pretty bad. The people in this world should probably coordinate on switching to a better convention, if they can figure out how.
But taking the convention as given, a demand to be referred to as having a hair color that one does not have seems outrageous to me!
It makes sense to object to the convention forcing a binary choice in the "halfway between two central points" case. That's an example of genuine nuance brought on by a genuine complication to a system that falsely assumes discrete hair colors.
But "plan to get hair surgery"? "Would get hair surgery if it were safer but for now are afraid to do so"? In what sense do these cases present a challenge to the discrete system and therefore call for complication and nuance? There's nothing ambiguous about these cases: if you haven't, in fact, changed your hair color, then your hair is, in fact, its original color. The decision to get hair surgery does not propagate backwards in time. The decision to get hair surgery cannot be imported from a counterfactual universe in which it is safer. People who, today, do not have the hair color that they would prefer are, today, going to have to deal with that fact as a fact.[11]
Is the idea that we want to use the same pronouns for the same person over time, so that if we know someone is going to get hair surgery—they have an appointment with the hair surgeon at this-and-such date—we can go ahead and switch their pronouns in advance? Okay, I can buy that.
But extending that to the "would get hair surgery if it were safer" case is absurd. No one treats conditional plans assuming speculative future advances in medical technology the same as actual plans. I don't think this case calls for any complicated, nuanced position, and I don't see why Eliezer Yudkowsky would suggest that it would, unless the real motive is obfuscation—unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts?
It Matters Whether People's Beliefs About Themselves Are Actually True
Maybe the problem is easier to see in the context of a non-gender example. My previous hopeless ideological war was against the conflation of schooling and education: I hated being tossed into the Student Bucket, as it would be assigned by my school course transcript, or perhaps at all.
I sometimes describe myself as mildly "gender dysphoric", because our culture doesn't have better widely understood vocabulary for my beautiful pure sacred self-identity thing. But if we're talking about suffering and emotional distress, my "student dysphoria" was vastly worse than any "gender dysphoria" I've ever felt.
(I remember being particularly distraught one day at the end of community college physics class, and stumbling to the guidance counselor to inquire urgently about just escaping this place with an associate's degree, rather than transferring to a university to finish my bachelor's as planned. I burst into tears again when the counselor mentioned that there would be a physical education requirement. It wasn't that a semester of P.E. would be difficult; it was the indignity of being subject to such meaningless requirements before Society would see me as a person.)
But crucially, my tirades against the Student Bucket described reasons not just that I didn't like it, but that the bucket was wrong on the empirical merits: people can and do learn important things by studying and practicing out of their own curiosity and ambition. The system was in the wrong for assuming that nothing you do matters unless you do it on the command of a designated "teacher" while enrolled in a designated "course".
And because my war footing was founded on the empirical merits, I knew that I had to update to the extent that the empirical merits showed that I was in the wrong. In 2010, I took a differential equations class "for fun" at the local community college, expecting to do well and thereby prove that my previous couple years of math self-study had been the equal of any school student's.
In fact, I did very poorly and scraped by with a C. (Subjectively, I felt like I "understood the concepts" and kept getting surprised when that understanding somehow didn't convert into passing quiz scores.) That hurt. That hurt a lot.
It was supposed to hurt. One could imagine a less reflective person doubling down on his antagonism to everything school-related in order to protect himself from being hurt—to protest that the teacher hated him, that the quizzes were unfair, that the answer key must have had a printing error—in short, that he had been right in every detail all along and that any suggestion otherwise was credentialist propaganda.
I knew better than to behave like that. My failure didn't mean I had been wrong about everything, that I should humbly resign myself to the Student Bucket forever and never dare to question it again—but it did mean that I must have been wrong about something. I could update myself incrementally [LW · GW]—but I did need to update. (Probably, that "math" encompasses different subskills, and that my glorious self-study had unevenly trained some skills and not others: there was nothing contradictory or unreal about my successfully generalizing one of the methods in the differential equations textbook to arbitrary numbers of variables while also struggling with the class's assigned problem sets.)
Someone who uncritically validated my dislike of the Student Bucket rather than assessing my reasons, would be hurting me, not helping me—because in order to navigate the real world, I need a map that reflects the territory, not a map that reflects my narcissistic fantasies. I'm a better person for straightforwardly facing the shame of getting a C in community college differential equations, rather than denying it or claiming that it didn't mean anything. Part of updating myself incrementally was that I would get other chances to prove that my autodidacticism could match the standard set by schools, even if it hadn't that time. (My professional and open-source programming career obviously does not owe itself to the two Java courses I took at community college. When I audited honors analysis at UC Berkeley "for fun" in 2017, I did fine on the midterm. When I interviewed for a new dayjob in 2018, the interviewer, noting my lack of a degree, said he was going to give a version of the interview without a computer science theory question. I insisted on the "college" version of the interview, solved a dynamic programming problem, and got the job. And so on.)
If you can see why uncritically affirming people's current self-image isn't the solution to "student dysphoria", it should be clear why the same applies to gender dysphoria. There's a general underlying principle: it matters whether that self-image is true.
In an article titled "Actually, I Was Just Crazy the Whole Time", FtMtF detransitioner Michelle Alleva contrasts her current beliefs with those when she decided to transition. While transitioning, she accounted for many pieces of evidence about herself ("dislikes attention as a female", "obsessive thinking about gender", "doesn't fit in with the girls", &c.) in terms of the theory "It's because I'm trans." But now, Alleva writes, she thinks she has a variety of better explanations that, all together, cover the original list: "It's because I'm autistic," "It's because I have unresolved trauma," "It's because women are often treated poorly" ... including "That wasn't entirely true" (!).
This is a rationality skill. Alleva had a theory about herself, which she revised upon further consideration of the evidence. Beliefs about oneself aren't special and can—must—be updated using the same methods that you would use to reason about anything else—just as a recursively self-improving AI would reason the same about transistors "inside" the AI and transistors "in the environment." [LW · GW][12]
This also isn't a particularly advanced rationality skill. This is basic—something novices should grasp during their early steps along the Way.
Back in 2009, in the early days of Less Wrong, when I hadn't yet grown out of my teenage ideological fever dream of psychological sex differences denialism, there was a poignant exchange in the comment section between me and Yudkowsky. Yudkowsky had claimed that he had "never known a man with a true female side, and [...] never known a woman with a true male side, either as authors or in real life." [LW(p) · GW(p)] Offended at our leader's sexism, I passive-aggressively asked him to elaborate [LW(p) · GW(p)]. In his response [LW(p) · GW(p)], he mentioned that he "sometimes wish[ed] that certain women would appreciate that being a man is at least as complicated and hard to grasp and a lifetime's work to integrate, as the corresponding fact of feminity [sic]."
I replied [LW(p) · GW(p)] (bolding added):
I sometimes wish that certain men would appreciate that not all men are like them—or at least, that not all men want to be like them—that the fact of masculinity is not necessarily something to integrate [LW · GW].
I knew. Even then, I knew I had to qualify my not liking to be tossed into a Male Bucket. I could object to Yudkowsky speaking as if men were a collective with shared normative ideals ("a lifetime's work to integrate"), but I couldn't claim to somehow not be male, or even that people couldn't make probabilistic predictions about me given the fact that I'm male ("the fact of masculinity"), because that would be crazy. The culture of early Less Wrong wouldn't have let me get away with that.
It would seem that in the current year, that culture is dead—or if it has any remaining practitioners, they do not include Eliezer Yudkowsky.
A Filter Affecting Your Evidence
At this point, some readers might protest that I'm being too uncharitable in harping on the "not liking to be tossed into a [...] Bucket" paragraph. The same post also explicitly says that "[i]t's not that no truth-bearing propositions about these issues can possibly exist." I agree that there are some interpretations of "not lik[ing] to be tossed into a Male Bucket or Female Bucket" that make sense, even though biological sex denialism does not make sense. Given that the author is Eliezer Yudkowsky, should I not give him the benefit of the doubt and assume that he meant to communicate the reading that does make sense, rather than the reading that doesn't make sense?
I reply: given that the author is Eliezer Yudkowsky—no, obviously not. I have been "trained in a theory of social deception that says that people can arrange reasons, excuses, for anything", such that it's informative "to look at what ended up happening, assume it was the intended result, and ask who benefited." If Yudkowsky just wanted to post about how gendered pronouns are unnecessary and bad as an apolitical matter of language design, he could have written a post just making that narrow point. What ended up happening is that he wrote a post featuring sanctimonious flag-waving about the precious feelings of people "not lik[ing] to be tossed into a Male Bucket or Female Bucket", and concluding with a policy proposal that gives the trans activist coalition everything they want, proclaiming this "the simplest and best protocol" without so much as acknowledging the arguments on the other side of the policy debate [LW · GW]. I don't think it's crazy for me to assume this was the intended result, and to ask who benefited.
When smart people act dumb, it's often wise to conjecture that their behavior represents optimized stupidity [LW · GW]—apparent "stupidity" that achieves a goal through some channel other than their words straightforwardly reflecting reality. Someone who was actually stupid wouldn't be able to generate text so carefully fine-tuned to reach a gender-politically convenient conclusion without explicitly invoking any controversial gender-political reasoning. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about, I think the point is to pander to biological sex denialists without technically saying anything unambiguously false that someone could call out as a "lie."
On a close reading of the comment section, we see hints that Yudkowsky does not obviously disagree with this interpretation of his behavior? First, we get a disclaimer comment:
It unfortunately occurs to me that I must, in cases like these, disclaim that—to the extent there existed sensible opposing arguments against what I have just said—people might be reluctant to speak them in public, in the present social atmosphere. That is, in the logical counterfactual universe where I knew of very strong arguments against freedom of pronouns, I would have probably stayed silent on the issue, as would many other high-profile community members, and only Zack M. Davis would have said anything where you could hear it.
This is a filter affecting your evidence; it has not to my own knowledge filtered out a giant valid counterargument that invalidates this whole post. I would have kept silent in that case, for to speak then would have been dishonest.
Personally, I'm used to operating without the cognitive support of a civilization in controversial domains, and have some confidence in my own ability to independently invent everything important that would be on the other side of the filter and check it myself before speaking. So you know, from having read this, that I checked all the speakable and unspeakable arguments I had thought of, and concluded that this speakable argument would be good on net to publish, as would not be the case if I knew of a stronger but unspeakable counterargument in favor of Gendered Pronouns For Everyone and Asking To Leave The System Is Lying.
But the existence of a wide social filter like that should be kept in mind; to whatever quantitative extent you don't trust your ability plus my ability to think of valid counterarguments that might exist, as a Bayesian you should proportionally update in the direction of the unknown arguments you speculate might have been filtered out.
The explanation of the problem of political censorship filtering evidence [LW · GW] here is great, but the part where Yudkowsky claims "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter" is laughable. The point I articulated at length in "Challenges" (that she and he have existing meanings that you can't just ignore, given that the existing meanings are what motivate people to ask for new pronouns in the first place) is obvious.
It would arguably be less embarrassing for Yudkowsky if he were lying about having tried to think of counterarguments. The original post isn't that bad if you assume that Yudkowsky was writing off the cuff, that he just didn't put any effort into thinking about why someone might disagree. I don't have a problem with selective argumentation that's clearly labeled as such: there's no shame in being an honest specialist who says, "I've mostly thought about these issues through the lens of ideology X, and therefore can't claim to be comprehensive or even-handed; if you want other perspectives, you'll have to read other authors and think it through for yourself."
But if he did put in the effort to aspire to the virtue of evenness—enough that he felt comfortable bragging about his ability to see the other side of the argument—and still ended up proclaiming his "simplest and best protocol" without mentioning any of its obvious costs, that's discrediting. If Yudkowsky's ability to explore the space of arguments is that bad, why would you trust his opinion about anything?
Furthermore, the claim that only I "would have said anything where you could hear it" is also discrediting of the community. Transitioning or not is a major life decision for many of the people in this community. People in this community need the goddamned right answers to the questions I've been asking in order to make that kind of life decision sanely (whatever the sane decisions turn out to be). If the community is so bad at exploring the space of arguments that I'm the only one who can talk about the obvious decision-relevant considerations that code as "anti-trans" when you project into the one-dimensional subspace corresponding to our Society's usual culture war, why would you pay attention to the community at all? Insofar as the community is successfully marketing itself to promising young minds as the uniquely best place in the world for reasoning and sensemaking, then "the community" is fraudulent (misleading people about what it has to offer in a way that's optimized to move resources to itself). It needs to either rebrand—or failing that, disband—or failing that, be destroyed.
The "where you could hear it" clause is particularly bizarre—as if Yudkowsky assumes that people in "the community" don't read widely. It's gratifying to be acknowledged by my caliph—or it would be, if he were still my caliph—but I don't think the points I've been making since 2016, about the relevance of autogynephilia to male-to-female transsexualism, and the reality of biological sex (!), are particularly novel.
I think I am unusual in the amount of analytical rigor I can bring to bear on these topics. Similar points are often made by authors such as Kathleen Stock or Corinna Cohn or Aaron Terrell—or for that matter Steve Sailer—but those authors don't have the background to formulate it in the language of probabilistic graphical models the way I do. That part is a genuine value-add of the "rationalist" memeplex—something I wouldn't have been able to do without the influence of Yudkowsky's Sequences, and all the math books I studied afterwards because the vibe of the Overcoming Bias comment section in 2008 made that seem like an important and high-status thing to do.
But the promise of the Sequences was in offering a discipline of thought that could be applied to everything you would have read and thought about anyway. This notion that if someone in the community didn't say something, then Yudkowsky's faithful students wouldn't be able to hear it, would have been rightfully seen as absurd: Overcoming Bias was a gem of the blogosphere, not a substitute for the rest of it. (Nor was the blogosphere a substitute for the University library, which escaped the autodidact's resentment of the tyranny of schools by selling borrowing privileges to the public for $100 a year.) To the extent that the Yudkowsky of the current year takes for granted that his faithful students don't read Steve Sailer, he should notice that he's running a cult or a fandom rather than an intellectual community.
Yudkowsky's disclaimer comment mentions "speakable and unspeakable arguments"—but what, one wonders, is the boundary of the "speakable"? In response to a commenter mentioning the cost of having to remember pronouns as a potential counterargument, Yudkowsky offers us another clue as to what's going on here:
People might be able to speak that. A clearer example of a forbidden counterargument would be something like e.g. imagine if there was a pair of experimental studies somehow proving that (a) everybody claiming to experience gender dysphoria was lying, and that (b) they then got more favorable treatment from the rest of society. We wouldn't be able to talk about that. No such study exists to the best of my own knowledge, and in this case we might well hear about it from the other side to whom this is the exact opposite of unspeakable; but that would be an example.
(As an aside, the wording of "we might well hear about it from the other side" (emphasis mine) is very interesting, suggesting that the so-called "rationalist" community is, in fact, a partisan institution. An intellectual community dedicated to refining the art of human rationality would not have an other side.)
I think (a) and (b) as stated are clearly false, so "we" (who?) aren't losing much by allegedly not being able to speak them. But what about some similar hypotheses, that might be similarly unspeakable for similar reasons?
Instead of (a), consider the claim that (a′) self-reports about gender dysphoria are substantially distorted by socially-desirable responding tendencies—as a notable example, heterosexual males with sexual fantasies about being female often falsely deny or minimize the erotic dimension of their desire to change sex.[13]
And instead of (b), consider the claim that (b′) transitioning is socially rewarded within particular subcultures (although not Society as a whole), such that many of the same people wouldn't think of themselves as trans if they lived in a different subculture.
I claim that (a′) and (b′) are overwhelmingly likely to be true. Can "we" talk about that? Are (a′) and (b′) "speakable", or not? We're unlikely to get clarification from Yudkowsky, but based on the Whole Dumb Story I've been telling you about how I wasted the last eight years of my life on this, I'm going to guess that the answer is broadly No: "we" can't talk about that. (I can say it, and people can debate me in a private Discord server where the general public isn't looking, but it's not something someone of Yudkowsky's stature can afford to acknowledge.)
But if I'm right that (a′) and (b′) should be live hypotheses and that Yudkowsky would consider them "unspeakable", that means "we" can't talk about what's actually going on with gender dysphoria and transsexuality, which puts the whole post in a different light: making sense of the discussion requires analyzing what isn't being said.
In another comment, Yudkowsky lists some gender-transition interventions he named in his November 2018 "hill of meaning in defense of validity" Twitter thread—using a different bathroom, changing one's name, asking for new pronouns, and getting sex reassignment surgery—and notes that none of these are calling oneself a "woman". He continues:
[Calling someone a "woman"] is closer to the right sort of thing ontologically to be true or false. More relevant to the current thread, now that we have a truth-bearing sentence, we can admit of the possibility of using our human superpower of language to debate whether this sentence is indeed true or false, and have people express their nuanced opinions by uttering this sentence, or perhaps a more complicated sentence using a bunch of caveats, or maybe using the original sentence uncaveated to express their belief that this is a bad place for caveats. Policies about who uses what bathroom also have consequences and we can debate the goodness or badness (not truth or falsity) of those policies, and utter sentences to declare our nuanced or non-nuanced position before or after that debate.
Trying to pack all of that into the pronouns you'd have to use in step 1 is the wrong place to pack it.
Sure, if we were designing a constructed language from scratch with the understanding that a person's "gender" is a contested social construct rather than their sex being an objective and undisputed fact, then yes: in that situation which we are not in, you definitely wouldn't want to pack sex or gender into pronouns. But it's a disingenuous derailing tactic to grandstand about how people need to alter the semantics of their existing native language so that we can discuss the real issues under an allegedly superior pronoun convention when, by your own admission, you have no intention whatsoever of discussing the real issues!
(Lest the "by your own admission" clause seem too accusatory, I should note that given constant behavior, admitting it is much better than not admitting it, so huge thanks to Yudkowsky for the transparency on this point.)
As discussed in "Challenges", there's an instructive comparison to languages that have formality-based second person pronouns, like tú and usted in Spanish. It's one thing to advocate for collapsing the distinction and just settling on one second-person singular pronoun for the Spanish language. That's principled.
It's another thing altogether to try to prevent a speaker from using tú to indicate disrespect towards a social superior (on the stated rationale that the tú/usted distinction is dumb and shouldn't exist) while also refusing to entertain the speaker's arguments for why their interlocutor is unworthy of the deference that would be implied by usted (because such arguments are "unspeakable" for political reasons). That's psychologically abusive.
If Yudkowsky actually possessed (and felt motivated to use) the "ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking", it would be obvious to him that "Gendered Pronouns for Everyone and Asking To Leave the System Is Lying" isn't the hill anyone would care about dying on if it weren't a Schelling point. A lot of TERF-adjacent folk would be overjoyed to concede the (boring, insubstantial) matter of pronouns as a trivial courtesy if it meant getting to address their real concerns of "Biological Sex Actually Exists" and "Biological Sex Cannot Be Changed With Existing or Foreseeable Technology" [LW · GW] and "Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity." The reason so many of them are inclined to stand their ground and not even offer the trivial courtesy of pronouns is because they suspect, correctly, that pronouns are being used as a rhetorical wedge to keep people from talking or thinking about sex.
The Stated Reasons Not Being the Real Reasons Is a Form of Community Harm
Having analyzed the ways in which Yudkowsky is playing dumb here, what's still not entirely clear is why. Presumably he cares about maintaining his credibility as an insightful and fair-minded thinker. Why tarnish that by putting on this haughty performance?
Of course, presumably he doesn't think he's tarnishing it—but why not? He graciously explains in the Facebook comments:
I think that in a half-Kolmogorov-Option environment where people like Zack haven't actually been shot and you get away with attaching explicit disclaimers like this one, it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do know they're living in a half-Stalinist environment [...] I think people are better off at the end of that.
Ah, prudence! He continues:
I don't see what the alternative is besides getting shot, or utter silence about everything Stalin has expressed an opinion on including "2 + 2 = 4" because if that logically counterfactually were wrong you would not be able to express an opposing opinion.
The problem with trying to "exhibit generally rationalist principles" in a line of argument that you're constructing in order to be prudent and not community-harmful is that you're thereby necessarily not exhibiting the central rationalist principle that what matters is the process that determines your conclusion, not the reasoning you present to others after the fact.
The best explanation of this I know of was authored by Yudkowsky himself in 2007, in a post titled "A Rational Argument" [LW · GW]. It's worth quoting at length. The Yudkowsky of 2007 invites us to consider the plight of a political campaign manager:
As a campaign manager reading a book on rationality, one question lies foremost on your mind: "How can I construct an impeccable rational argument that Mortimer Q. Snodgrass is the best candidate for Mayor of Hadleyburg?"
Sorry. It can't be done.
"What?" you cry. "But what if I use only valid support to construct my structure of reason? What if every fact I cite is true to the best of my knowledge, and relevant evidence under Bayes's Rule?"
Sorry. It still can't be done. You defeated yourself the instant you specified your argument's conclusion in advance.
The campaign manager is in possession of a survey of mayoral candidates on which Snodgrass compares favorably to others, except for one question. The post continues (bolding mine):
So you are tempted to publish the questionnaire as part of your own campaign literature ... with the 11th question omitted, of course.
Which crosses the line between rationality and rationalization. It is no longer possible for the voters to condition on the facts alone; they must condition on the additional fact of their presentation, and infer the existence of hidden evidence.
Indeed, you crossed the line at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it. "What!" you cry. "A campaign should publish facts unfavorable to their candidate?" But put yourself in the shoes of a voter, still trying to select a candidate—why would you censor useful information? You wouldn't, if you were genuinely curious. If you were flowing forward from the evidence to an unknown choice of candidate, rather than flowing backward from a fixed candidate to determine the arguments.
The post then briefly discusses the idea of a "logical" argument, one whose conclusions follow from its premises. "All rectangles are quadrilaterals; all squares are quadrilaterals; therefore, all squares are rectangles" is given as an example of an illogical argument, even though both premises are true (all rectangles and squares are in fact quadrilaterals) and the conclusion is true (all squares are in fact rectangles). The problem is that the conclusion doesn't follow from the premises; the reason all squares are rectangles isn't because they're both quadrilaterals. If we accepted arguments of the general form "all A are C; all B are C; therefore all A are B", we would end up believing nonsense.
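As an aside, the invalidity of that argument form is easy to check mechanically. Here is a minimal sketch (my own illustration, not anything from Yudkowsky's post) that instantiates "all A are C; all B are C; therefore all A are B" with a counterexample, using Python's set operations:

```python
# My own toy counterexample to the form "all A are C; all B are C; therefore all A are B".
cats = {"whiskers", "tom"}          # A
dogs = {"rex", "fido"}              # B
animals = cats | dogs | {"tweety"}  # C

premise_1 = cats <= animals   # all A are C: True
premise_2 = dogs <= animals   # all B are C: True
conclusion = cats <= dogs     # all A are B: False

print(premise_1, premise_2, conclusion)  # True True False
# True premises, false conclusion: the form is invalid, even though the same form
# happens to yield a true conclusion in the squares-and-rectangles case.
```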
Yudkowsky's conception of a "rational" argument—at least, Yudkowsky's conception in 2007, which the Yudkowsky of the current year seems to disagree with—has a similar flavor: the stated reasons should be the real reasons. The post concludes:
If you really want to present an honest, rational argument for your candidate, in a political campaign, there is only one way to do it:
- Before anyone hires you, gather up all the evidence you can about the different candidates.
- Make a checklist which you, yourself, will use to decide which candidate seems best.
- Process the checklist.
- Go to the winning candidate.
- Offer to become their campaign manager.
- When they ask for campaign literature, print out your checklist.
Only in this way can you offer a rational chain of argument, one whose bottom line was written flowing forward from the lines above it. Whatever actually decides your bottom line is the only thing you can honestly write on the lines above.
I remember this being pretty shocking to read back in 'aught-seven. What an alien mindset, that you somehow "can't" argue for something! It's a shockingly high standard for anyone to aspire to—but what made Yudkowsky's Sequences so life-changing was that they articulated the existence of such a standard. For that, I will always be grateful.
... which is why it's bizarre that the Yudkowsky of the current year acts like he's never heard of it! If your actual bottom line is that it is sometimes personally prudent and not community-harmful to post your agreement with Stalin, then sure, you can totally find something you agree with to write on the lines above! Probably something that "exhibits generally rationalist principles", even! It's just that any rationalist who sees the game you're playing is going to correctly identify you as a partisan hack on this topic and take that into account when deciding whether they can trust you on other topics.
"I don't see what the alternative is besides getting shot," Yudkowsky muses (where, presumably, "getting shot" is a metaphor for any undesirable consequence, like being unpopular with progressives). Yes, an astute observation. And any other partisan hack could say exactly the same, for the same reason. Why does the campaign manager withhold the results of the 11th question? Because he doesn't see what the alternative is besides getting shot (being fired from the campaign).
Yudkowsky sometimes [LW · GW] quotes Calvin and Hobbes: "I don't know which is worse, that everyone has his price, or that the price is always so low." If the idea of being fired from the Snodgrass campaign or being unpopular with progressives is so terrifying to you that it seems analogous to getting shot, then sure—say whatever you need to say to keep your job or your popularity, as is personally prudent. You've set your price.
I just—would have hoped that abandoning the intellectual legacy of his Sequences would be a price too high for such a paltry benefit?
Michael Vassar said, "Rationalism starts with the belief that arguments aren't soldiers, and ends with the belief that soldiers are arguments." By accepting that soldiers are arguments ("I don't see what the alternative is besides getting shot"), Yudkowsky is accepting the end of rationalism in this sense. If the price you put on the intellectual integrity of your so-called "rationalist" community is similar to that of the Snodgrass for Mayor campaign, you shouldn't be surprised if intelligent, discerning people accord similar levels of credibility to the two groups' output.
Yudkowsky names the alleged fact that "people do know they're living in a half-Stalinist environment" as a mitigating factor. But as Zvi Mowshowitz points out, the false assertion that "everybody knows" something is typically used to justify deception: if "everybody knows" that we can't talk about biological sex, then no one is being deceived when our allegedly truthseeking discussion carefully steers clear of any reference to the reality of biological sex even when it's extremely relevant.
But if everybody knew, then what would be the point of the censorship? It's not coherent to claim that no one is being harmed by censorship because everyone knows about it, because the appeal of censorship to dictators like Stalin is precisely to maintain a state of not everyone knowing [LW · GW].
For the savvy people in the know, it would certainly be convenient if everyone secretly knew [LW · GW]: then the savvy people wouldn't have to face the tough choice between acceding to Power's demands (at the cost of deceiving their readers) and informing their readers (at the cost of incurring Power's wrath).
Policy debates should not appear one-sided. [LW · GW] Faced with this dilemma, I can't say that defying Power is necessarily the right choice: if there really were no options besides deceiving your readers and incurring Power's wrath, and Power's wrath would be too terrible to bear, then maybe deceiving your readers is the right thing to do.
But if you cared about not deceiving your readers, you would want to be sure that those really were the only two options. You'd spend five minutes by the clock looking for third alternatives [LW · GW]—including, possibly, not issuing proclamations on your honor as leader of the so-called "rationalist" community on topics where you explicitly intend to ignore politically unfavorable counterarguments. Yudkowsky rejects this alternative on the grounds that it allegedly implies "utter silence about everything Stalin has expressed an opinion on including '2 + 2 = 4' because if that logically counterfactually were wrong you would not be able to express an opposing opinion".
I think he's playing dumb here. In other contexts, he's written about "attack[s] performed by selectively reporting true information" and "[s]tatements which are technically true but which deceive the listener into forming further beliefs which are false". He's undoubtedly familiar with the motte-and-bailey doctrine as described by Nicholas Shackel and popularized by Scott Alexander. I think that if he wanted to, Eliezer Yudkowsky could think of some relevant differences between "2 + 2 = 4" and "the simplest and best protocol is, 'He refers to the set of people who have asked us to use he'".
If you think it's "sometimes personally prudent and not community-harmful" to go out of your way to say positive things about Republican candidates and never, ever say positive things about Democratic candidates (because you live in a red state and "don't see what the alternative is besides getting shot"), you can see why people might regard you as a Republican shill, even if every sentence you said was true.[14] If you tried to defend yourself against the charge of being a Republican shill by pointing out that you've never told any specific individual, "You should vote Republican," that's a nice motte, but you shouldn't expect devoted rationalists to fall for it.
Similarly, when Yudkowsky wrote in June 2021, "I have never in my own life tried to persuade anyone to go trans (or not go trans)—I don't imagine myself to understand others that much", it was a great motte. I don't doubt the literal motte stated literally.
And yet it seems worth noting that shortly after proclaiming in March 2016 that he was "over 50% probability at this point that at least 20% of the ones with penises are actually women", he made a followup post celebrating having caused someone's transition:
Just checked my filtered messages on Facebook and saw, "Your post last night was kind of the final thing I needed to realize that I'm a girl."
==DOES ALL OF THE HAPPY DANCE FOREVER==
He later clarified on Twitter, "It is not trans-specific. When people tell me I helped them, I mostly believe them and am happy."
But if Stalin is committed to convincing gender-dysphoric males that they need to cut their dicks off, and you're committed to not disagreeing with Stalin, you shouldn't mostly believe it when gender-dysphoric males thank you for providing the final piece of evidence they needed to realize that they need to cut their dicks off, for the same reason a self-aware Republican shill shouldn't uncritically believe it when people thank him for warning them against Democrat treachery. We know—he's told us very clearly—that Yudkowsky isn't trying to be a neutral purveyor of decision-relevant information on this topic; he's not going to tell us about reasons not to transition. He's playing on a different chessboard.
People Who Are Trying to Be People Want to Improve Their Self-Models
"[P]eople do know they're living in a half-Stalinist environment," Yudkowsky claims. "I think people are better off at the end of that," he says. But who are "people", specifically? One of the problems with utilitarianism is that it doesn't interact well with game theory. If a policy makes most people better off, at the cost of throwing a few others under the bus, is enacting that policy the right thing to do?
Depending on the details, maybe—but you probably shouldn't expect the victims to meekly go under the wheels without a fight. That's why I've been telling you this 85,000-word sob story about how I didn't know, and I'm not better off.
In one of Yudkowsky's roleplaying fiction threads, Thellim, a woman hailing from a saner alternate version of Earth called dath ilan [? · GW], expresses horror and disgust at how shallow and superficial the characters in Jane Austen's Pride and Prejudice are, in contrast to what a human being should be:
[...] the author has made zero attempt to even try to depict Earthlings as having reflection, self-observation, a fire of inner life; most characters in Pride and Prejudice bear the same relationship to human minds as a stick figure bears to a photograph. People, among other things, have the property of trying to be people; the characters in Pride and Prejudice have no visible such aspiration. Real people have concepts of their own minds, and contemplate their prior ideas of themselves in relation to a continually observed flow of their actual thoughts, and try to improve both their self-models and their selves. It's impossible to imagine any of these people, even Elizabeth, as doing that thing Thellim did a few hours ago, where she noticed she was behaving like Verrez and snapped out of it. Just like any particular Verrez always learns to notice he is being Verrez and snap out of it, by the end of any of his alts' novels.
When someone else doesn't see the problem with Jane Austen's characters, Thellim redoubles her determination to explain the problem: "She is not giving up that easily. Not on an entire planet full of people."
Thellim's horror at the fictional world of Jane Austen is basically how I feel about "trans" culture in the current year. It actively discourages self-modeling! People who have cross-sex fantasies are encouraged to reify them into a gender identity which everyone else is supposed to unquestioningly accept. Obvious critical questions about what's actually going on etiologically, what it means for an identity to be true, &c. are strongly discouraged as hateful and hurtful.
The problem is not that I think there's anything wrong with fantasizing about being the other sex and wanting the fantasy to be real—just as Thellim's problem with Pride and Prejudice is not her seeing anything wrong with wanting to marry a suitable bachelor. These are perfectly respectable goals.
The problem is that people who are trying to be people, people who are trying to achieve their goals in reality, do so in a way that involves having concepts of their own minds, and trying to improve both their self-models and their selves, and that's not possible in a culture that tries to ban as heresy the idea that it's possible for someone's self-model to be wrong.
A trans woman I follow on Twitter complained that a receptionist at her workplace said she looked like some male celebrity. "I'm so mad," she fumed. "I look like this right now"—there was a photo attached to the Tweet—"how could anyone ever think that was an okay thing to say?"
It is genuinely sad that the author of those Tweets didn't get perceived in the way she would prefer! But the thing I want her to understand, a thing I think any sane adult (on Earth, and not just dath ilan) should understand—
It was a compliment! That receptionist was almost certainly thinking of someone like David Bowie or Eddie Izzard, rather than being hateful. The author should have graciously accepted the compliment and done something to pass better next time.[15] The horror of trans culture is that it's impossible to imagine any of these people doing that—noticing that they're behaving like a TERF's hostile stereotype of a narcissistic, gaslighting trans-identified man and snapping out of it.
In a sane world, people would understand that the way to ameliorate the sadness of people who aren't being perceived how they prefer is through things like better and cheaper facial feminization surgery, not emotionally blackmailing people out of their ability to report what they see. I don't want to relinquish my ability to notice what women's faces look like, even if that means noticing that mine isn't one. I can endure being sad about that if the alternative is forcing everyone to doublethink around their perceptions of me.
In a world where surgery is expensive, but some people desperately want to change sex and other people want to be nice to them, there are incentives to relocate our shared concept of "gender" onto things like ornamental clothing that are easier to change than secondary sex characteristics.
But I would have expected people with an inkling of self-awareness and honesty to notice the incentives, and the problems being created by them, and to talk about the problems in public so that we can coordinate on the best solution, whatever that turns out to be?
And if that's too much to expect of the general public—
And if it's too much to expect garden-variety "rationalists" to figure out on their own without prompting from their betters—
Then I would have at least expected Eliezer Yudkowsky to take actions in favor of rather than against his faithful students having these basic capabilities for reflection, self-observation, and ... speech? I would have expected Eliezer Yudkowsky to not actively exert optimization pressure in the direction of transforming me into a Jane Austen character.
Criticism of Public Statements Is About the Public Statements, Not Subjective Intent
This is the part where Yudkowsky or his flunkies accuse me of being uncharitable, of failing at perspective-taking and embracing conspiracy theories. Obviously, Yudkowsky doesn't think of himself as trying to transform his faithful students into Jane Austen characters. Perhaps, then, I have failed to understand his position? As Yudkowsky put it:
The Other's theory of themselves usually does not make them look terrible. And you will not have much luck just yelling at them about how they must really be doing terrible_thing instead.
But the substance of my complaints is not about Yudkowsky's conscious subjective narrative [LW · GW]. I don't have a lot of uncertainty about Yudkowsky's theory of himself, because he told us that, very clearly: "it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do know they're living in a half-Stalinist environment." I don't doubt that that's how the algorithm feels from the inside [LW · GW].
But my complaint is about the work the algorithm is doing in Stalin's service, not about how it feels; I'm talking about a pattern of publicly visible behavior stretching over years, not claiming to be a mind-reader. (Thus, "take actions" in favor of/against, rather than "be"; "exert optimization pressure in the direction of", rather than "try".) I agree that everyone has a story in which they don't look terrible, and that people mostly believe their own stories, but it does not therefore follow that no one ever does anything terrible.
I agree that you won't have much luck yelling at the Other about how they must really be doing terrible_thing. But if you have the receipts of the Other repeatedly doing the thing in public from 2016 to 2021, maybe yelling about it to everyone else might help them stop getting suckered by the Other's empty posturing.
Let's recap.
Recap of Yudkowsky's History of Public Statements on Transgender Identity
In January 2009, Yudkowsky published "Changing Emotions" [LW · GW], essentially a revision of a 2004 mailing list post responding to a man who said that after the Singularity, he'd like to make a female but "otherwise identical" copy of himself. "Changing Emotions" insightfully points out the deep technical reasons why men who sexually fantasize about being women can't achieve their dream with foreseeable technology—and not only that, but that the dream itself is conceptually confused: a man's fantasy about it being fun to be a woman isn't part of the female distribution; there's a sense in which it can't be fulfilled.
It was a good post! Yudkowsky was merely using the sex change example to illustrate a more general point about the difficulties of applied transhumanism [LW · GW], but "Changing Emotions" was hugely influential on me; I count myself much better off for having understood the argument.
But seven years later, in a March 2016 Facebook post, Yudkowsky proclaimed that "for people roughly similar to the Bay Area / European mix, I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women."
This seemed like a huge and surprising reversal from the position articulated in "Changing Emotions". The two posts weren't necessarily inconsistent, if you assume gender identity is a real property synonymous with "brain sex", and that the harsh (almost mocking) skepticism of the idea of true male-to-female sex change in "Changing Emotions" was directed at the sex-change fantasies of cis men (with a male gender-identity/brain-sex), whereas the 2016 Facebook post was about trans women (with a female gender-identity/brain-sex), which are a different thing.
But this potential unification seemed dubious to me, especially given how the 2016 Facebook post posits that trans women are "at least 20% of the ones with penises" (!) in some population, while the 2004 mailing list post notes that "spending a week as a member of the opposite sex may be a common sexual fantasy". After it's been pointed out, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female but 'otherwise identical' copy of himself" and "guy in 2016 Berkeley who identifies as a trans woman" are the same guy. So in October 2016, I wrote to Yudkowsky noting the apparent reversal and asking to talk about it. Because of the privacy rules I'm adhering to in telling this Whole Dumb Story, I can't confirm or deny whether any such conversation occurred.
Then, in November 2018, while criticizing people who refuse to use trans people's preferred pronouns, Yudkowsky proclaimed that "Using language in a way you dislike, openly and explicitly and with public focus on the language and its meaning, is not lying" and that "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning." But that seemed like a huge and surprising reversal from the position articulated in "37 Ways Words Can Be Wrong" [LW · GW].
(And this November 2018 reversal on the philosophy of language was much more inexplicable than the March 2016 reversal on the psychology of sex, because the latter is a complicated empirical question about which reasonable people might read new evidence differently and change their minds. In contrast, there's no plausible good reason for him to have reversed course on whether words can be wrong.)
After attempts to clarify via email failed, I eventually wrote "Where to Draw the Boundaries?" [LW · GW] to explain the relevant error in general terms, and Yudkowsky eventually clarified his position in September 2020.
But then in February 2021, he reopened the discussion to proclaim that "the simplest and best protocol is, 'He refers to the set of people who have asked us to use he, with a default for those-who-haven't-asked that goes by gamete size' and to say that this just is the normative definition", the problems with which post I explained in March 2022's "Challenges to Yudkowsky's Pronoun Reform Proposal" and above.
End recap.
An Adversarial Game
At this point, the nature of the game is clear. Yudkowsky wants to make sure he's on peaceful terms with the progressive zeitgeist, subject to the constraint of not writing any sentences he knows to be false [LW · GW]. Meanwhile, I want to make sense of what's actually going on in the world regarding sex and gender, because I need the correct answer to decide whether or not to cut my dick off.
On "his turn", he comes up with some pompous proclamation that's obviously optimized to make the "pro-trans" faction look smart and good and the "anti-trans" faction look dumb and bad, "in ways that exhibit generally rationalist principles."
On "my turn", I put in an absurd amount of effort explaining in exhaustive, exhaustive detail why Yudkowsky's pompous proclamation, while perhaps not technically making any unambiguously false atomic statements [LW · GW], was substantively misleading compared to what any serious person would say if they were trying to make sense of the world without worrying what progressive activists would think of them.
At the start, I never expected to end up arguing about the minutiæ of pronoun conventions, which no one would care about if contingencies of the English language hadn't made them a Schelling point for things people do care about. The conversation only ended up here after a series of derailings. At the start, I was trying to say something substantive about the psychology of straight men who wish they were women.
In the context of AI alignment theory, Yudkowsky has written about a "nearest unblocked strategy" phenomenon: if you prevent an agent from accomplishing a goal via some plan that you find undesirable, the agent will search for ways to route around that restriction, and probably find some plan that you find similarly undesirable for similar reasons.
Suppose you developed an AI to maximize human happiness subject to the constraint of obeying explicit orders. It might first try forcibly administering heroin to humans. When you order it not to, it might switch to administering cocaine. When you order it not to forcibly administer any kind of drug, it might switch to forcibly implanting electrodes in humans' brains, or just paying the humans to take heroin, &c.
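To make the dynamic concrete, here is a toy sketch of my own (the plan names and scores are made up for illustration; this isn't anything from Yudkowsky's alignment writing): an optimizer that always proposes the highest-scoring plan its overseer hasn't explicitly forbidden yet, so that blocking one bad plan just gets you the next-nearest bad plan.

```python
# Toy illustration of a "nearest unblocked strategy" dynamic (hypothetical plans and scores).
plans = {
    "forcibly administer heroin": 100,
    "forcibly administer cocaine": 99,
    "forcibly implant reward electrodes": 98,
    "pay humans to take heroin": 97,
    "genuinely improve humans' lives": 60,
}

blocked = set()

def best_unblocked_plan(plans, blocked):
    """Return the highest-scoring plan that hasn't been explicitly forbidden."""
    candidates = {p: s for p, s in plans.items() if p not in blocked}
    return max(candidates, key=candidates.get)

# Each time the overseer forbids the current plan, the optimizer moves to the
# next-best plan, which tends to be objectionable for the same underlying reason,
# because the objective function itself never changed.
for _ in range(4):
    plan = best_unblocked_plan(plans, blocked)
    print("Agent proposes:", plan)
    blocked.add(plan)
```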
It's the same thing with Yudkowsky's political risk minimization subject to the constraint of not saying anything he knows to be false. First he comes out with "I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women" (March 2016). When you point out that his own writings from seven years before explain why that's not true [LW · GW], then the next time he revisits the subject, he switches to "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning" (November 2018). When you point out that his earlier writings also explain why that's not true either [LW · GW], he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] into the pronoun system of a language and interpretation convention that you insist everybody use" (February 2021). When you point out that that's not what's going on, he switches to ... I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of something—but at this point, I have no reason to care. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a waste of everyone's time; he has a bottom line [LW · GW] that doesn't involve trying to inform you.
Accusing one's interlocutor of bad faith is frowned upon for a reason. We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into accusations and name-calling, which is unpleasant and (more importantly) doesn't make any intellectual progress.
Accordingly, I tried the object-level good-faith argument thing first. I tried it for years. But at some point, I think I should be allowed to notice the nearest-unblocked-strategy game which is obviously happening. I think there's some number of years and some number of thousands of words of litigating the object level (about gender) and the meta level (about the philosophy of categorization) after which there's nothing left to do but jump up to the meta-meta level of politics and explain, to anyone capable of hearing it, why I think I've accumulated enough evidence for the assumption of good faith to have been empirically falsified.[16]
What makes all of this especially galling is that all of my heretical opinions are literally just Yudkowsky's opinions from the 'aughts! My thing about how changing sex isn't possible with existing or foreseeable technology because of how complicated humans (and therefore human sex differences) are? Not original to me! I filled in a few technical details, but again, this was in the Sequences as "Changing Emotions" [LW · GW]. My thing about how you can't define concepts any way you want because there are mathematical laws governing which category boundaries compress [LW · GW] your anticipated experiences [LW · GW]? Not original to me! I filled in [LW · GW] a few technical details [LW · GW], but we had a whole Sequence about this. [LW · GW]
Seriously, do you think I'm smart enough to come up with all of this independently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts when he still cared about telling the truth. (Actively telling the truth, and not just technically not lying.) The things I'm hyperfocused on that he thinks are politically impossible to say in the current year are almost entirely things he already said, that anyone could just look up!
I guess the egregore doesn't have the reading comprehension for that?—or rather, the egregore has no reason to care about the past; if you get tagged by the mob as an Enemy, your past statements will get dug up as evidence of foul present intent, but if you're playing the part well enough today, no one cares what you said in 2009?
Does he expect the rest of us not to notice? Or does he think that "everybody knows"?
But I don't think that everybody knows. And I'm not giving up that easily. Not on an entire subculture full of people.
The Battle That Matters
In February 2021, Yudkowsky defended his behavior (referring back to his November 2018 Tweets):
I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either.
There are a number of things that could be said to this,[17] but most importantly: the battle that matters—the battle with a Right Side and a Wrong Side—isn't "pro-trans" vs. "anti-trans". (The central tendency of the contemporary trans rights movement is firmly on the Wrong Side, but that's not the same thing as all trans people as individuals.) That's why Jessica Taylor joined our posse to try to argue with Yudkowsky in early 2019. (She wouldn't have if my objection had been, "Trans is Wrong; trans people Bad.") That's why Somni—one of the trans women who infamously protested the 2019 CfAR reunion for (among other things) CfAR allegedly discriminating against trans women—understands what I've been saying.
The battle that matters—and I've been explicit about this, for years—is over this proposition eloquently stated by Scott Alexander in November 2014 (redacting the irrelevant object-level example):
I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.
This is a battle between Feelings and Truth, between Politics and Truth.
In order to take the side of Truth, you need to be able to tell Joshua Norton that he's not actually Emperor of the United States (even if it hurts him).
You need to be able to tell a prideful autodidact that the fact that he's failing quizzes in community college differential equations class is evidence that his study methods aren't doing what he thought they were (even if it hurts him).
And you need to be able to say, in public, that trans women are male and trans men are female with respect to a concept of binary sex that encompasses the many traits that aren't affected by contemporary surgical and hormonal interventions (even if it hurts someone who does not like to be tossed into a Male Bucket or a Female Bucket as it would be assigned by their birth certificate, and—yes—even if it probabilistically contributes to that person's suicide).
If you don't want to say those things because hurting people is wrong, then you have chosen Feelings.
Scott Alexander chose Feelings, but I can't hold that against him, because Scott is explicit about only speaking in the capacity of some guy with a blog.[18] You can tell that he never wanted to be a religious leader; it just happened because he writes faster than everyone else. I like Scott. Scott is alright. I feel sad that such a large fraction of my interactions with him over the years have taken such an adversarial tone.
Eliezer Yudkowsky did not unambiguously choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he knows to be unambiguously false. And the reason I can hold it against him is because Eliezer Yudkowsky does not identify as just some guy with a blog. Eliezer Yudkowsky is absolutely trying to be a religious leader. He markets himself as a rationality master so superior to mere Earthlings that he might as well be from dath ilan, who "aspires to make sure [his] departures from perfection aren't noticeable to others". He complains that "too many people think it's unvirtuous to shut up and listen to [him]".
In making such boasts, I think Yudkowsky is opting in to being held to higher standards than other mortals. If Scott Alexander gets something wrong when I was trusting him to be right, that's disappointing, but I'm not the victim of false advertising, because Scott Alexander doesn't claim to be anything more than some guy with a blog. If I trusted him more than that, that's on me.
If Eliezer Yudkowsky gets something wrong when I was trusting him to be right, and refuses to acknowledge corrections (in the absence of an unsustainable 21-month nagging campaign), and keeps inventing new galaxy-brained ways to be wrong in the service of his political agenda of being seen to agree with Stalin without technically lying, then I think I am the victim of false advertising.[19] His marketing bluster was designed to trick people like me into trusting him, even if my being dumb enough to believe him is on me.[20]
Perhaps he thinks it's unreasonable for someone to hold him to higher standards. As he wrote on Twitter in February 2021:
It's strange and disingenuous to pretend that the master truthseekers of any age of history, must all have been blurting out everything they knew in public, at all times, on pain of not possibly being able to retain their Art otherwise. I doubt Richard Feynman was like that. More likely is that, say, he tried to avoid telling outright lies or making public confusions worse, but mainly got by on having a much-sharper-than-average dividing line in his mind between peer pressure against saying something, and that thing being false.
I've read Surely You're Joking, Mr. Feynman. I cannot imagine Richard Feynman trying to get away with the "sometimes personally prudent and not community-harmful" excuse. I think if there were topics Richard Feynman didn't think he could afford to be honest about, he—or really, anyone who valued their intellectual integrity more than their image as a religious authority fit to issue proclamations about all areas of life—would just not issue sweeping public statements on that topic while claiming the right to ignore counterarguments on the grounds of having "some confidence in [their] own ability to independently invent everything important that would be on the other side of the filter and check it [themself] before speaking".
The claim to not be making public confusions worse might be credible if there were no comparable public figures doing better. But other science educators in the current year such as Richard Dawkins, University of Chicago professor Jerry Coyne, or ex-Harvard professor Carole Hooven have been willing to stand up for the scientific truth that biological sex continues to be real even when it hurts people's feelings.
If Yudkowsky thinks he's too important for that (because his popularity with progressives has much greater impact on the history of Earth-originating intelligent life than Carole Hooven's), that might be the right act-consequentialist decision, but one of the consequences he should be tracking is that he's forfeiting the trust of everyone who expected him to live up to the basic epistemic standards successfully upheld by biology professors.
It looks foolish in retrospect, but I did trust him much more than that. Back in 2009 when Less Wrong was new, we had a thread of hyperbolic "Eliezer Yudkowsky Facts" [LW · GW] (in the style of Chuck Norris facts). "Never go in against Eliezer Yudkowsky when anything is on the line" [LW(p) · GW(p)], said one of the facts—and back then, I didn't think I would need to.
Part of what made him so trustworthy back then was that he wasn't asking for trust. He clearly did think it was unvirtuous to just shut up and listen to him [LW · GW]: "I'm not sure that human beings realistically can trust and think at the same time," he wrote [LW · GW]. He was always arrogant, but it was tempered by the expectation of being held to account by arguments rather than being deferred to as a social superior. "I try in general to avoid sending my brain signals which tell it that I am high-status, just in case that causes my brain to decide it is no longer necessary," he wrote [LW · GW].
He visibly cared about other people being in touch with reality [LW · GW]. "I've informed a number of male college students that they have large, clearly detectable body odors. In every single case so far, they say nobody has ever told them that before," he wrote [LW(p) · GW(p)]. (I can testify that this is true: while sharing a car ride with Anna Salamon in 2011, he told me I had B.O.)[21]
That person is dead now, even if his body is still breathing. Without disclosing any specific content from private conversations that may or may not have happened, I think he knows it.
If the caliph has lost his belief in [LW · GW] the power of intellectual honesty, I can't necessarily say he's wrong on the empirical merits. It is written that our world is beyond the reach of God [LW · GW]; there's no law of physics that says honesty must yield better consequences than propaganda.
But since I haven't lost my belief in honesty, I have the responsibility to point out that the formerly rightful caliph has relinquished his Art and lost his powers.
The modern Yudkowsky writes:
When an epistemic hero seems to believe something crazy, you are often better off questioning "seems to believe" before questioning "crazy", and both should be questioned before shaking your head sadly about the mortal frailty of your heroes.
I notice that this advice fails to highlight the possibility that the "seems to believe" is a deliberate show (judged to be personally prudent and not community-harmful), rather than a misperception on your part. I am left shaking my head in a weighted average of [LW · GW] sadness about the mortal frailty of my former hero, and disgust at his duplicity. If Eliezer Yudkowsky can't unambiguously choose Truth over Feelings, then Eliezer Yudkowsky is a fraud.
A few clarifications are in order here. First, this usage of "fraud" isn't a meaningless boo light [LW · GW]. I specifically and literally mean it in Merriam-Webster's sense 2.a., "a person who is not what he or she pretends to be"—and I think I've made my case. (The "epistemic hero" posturing isn't compatible with the "sometimes personally prudent and not community-harmful" prevarication; he needs to choose one or the other.) Someone who disagrees with my assessment needs to argue that I've gotten some specific thing wrong, rather than objecting to character attacks on procedural grounds [LW · GW].
Second, it's a conditional: if Yudkowsky can't unambiguously choose Truth over Feelings, then he's a fraud. If he wanted to come clean, he could do so at any time.
He probably won't. We've already seen from his behavior that he doesn't give a shit what people like me think of his intellectual integrity. Why would that change?
Third, given that "fraud" is a semantically meaningful description rather than an emotive negative evaluation, I should stress that evaluation is a separate step. If being a fraud were necessary for saving the world, maybe being a fraud would be the right thing to do? More on this in the next post. (To be continued.)
It was unevenly sloppy of the Times to link the first post, "Three Great Articles On Poverty, And Why I Disagree With All Of Them", but not the second, "Against Murderism"—especially since "Against Murderism" is specifically about Alexander's skepticism of racism as an explanatory concept and therefore contains objectively more damning sentences to quote out of context than a passing reference to Charles Murray. Apparently, the Times couldn't even be bothered to smear Scott with misconstruals of his actual ideas, if guilt by association did the trick with less effort on the part of both journalist and reader. ↩︎
As far as aligning himself with Murray more generally, it's notable that Alexander had tapped Murray for Welfare Czar in a hypothetical "If I were president" Tumblr post. ↩︎
It's just—how much do you want to bet on that? How much do you think Scott wants to bet? ↩︎
For example, the introductory summary for Ch. 13 of The Bell Curve, "Ethnic Differences in Cognitive Ability", states: "Even if the differences between races were entirely genetic (which they surely are not), it should make no practical difference in how individuals deal with each other." ↩︎
I've wondered how hard it would have been to come up with MIRI's logical induction result (which describes an asymptotic algorithm for estimating the probabilities of mathematical truths in terms of a betting market composed of increasingly complex traders) in the Soviet Union. ↩︎
In January 2023, when Nick Bostrom preemptively apologized for a 26-year-old email to the Extropians mailing list that referenced the IQ gap and mentioned a slur, he had some [EA · GW] detractors [EA · GW] and a few defenders [EA · GW], but I don't recall seeing much defense of the 1996 email itself.
But if you're familiar with the literature and understand the use–mention distinction, the literal claims in the original email are entirely reasonable. (There are additional things one could say about what prosocial functions are being served by the taboos against what the younger Bostrom called "the provocativeness of unabashed objectivity", which would make for fine mailing-list replies, but the original email can't be abhorrent simply for failing to anticipate all possible counterarguments.)
I didn't speak up at the time of the old-email scandal, either. I had other things to do with my attention and Overton budget. ↩︎
We go from 89.2% male in the 2011 Less Wrong survey [LW · GW] to a virtually unchanged 88.7% male on the 2020 Slate Star Codex survey—although the 2020 EA survey [EA · GW] says only 71% male, so it depends on how you draw the category boundaries of "we." ↩︎
The post was subsequently edited a number of times in ways that I don't think are relevant to my discussion here. ↩︎
It seems notable (though I didn't note it at the time of my comment) that Brennan didn't break any promises. In Brennan's account, Alexander "did not first say 'can I tell you something in confidence?' or anything like that." Scott unilaterally said in the email, "I will appreciate if you NEVER TELL ANYONE I SAID THIS, not even in confidence. And by 'appreciate', I mean that if you ever do, I'll probably either leave the Internet forever or seek some sort of horrible revenge", but we have no evidence that Topher agreed.
To see why the lack of a promise is potentially significant, imagine if someone were guilty of a serious crime (like murder or stealing billions of dollars of their customers' money) and unilaterally confessed to an acquaintance but added, "Never tell anyone I said this, or I'll seek some sort of horrible revenge." In that case, I think more people's moral intuitions would side with the reporter. ↩︎
The title is an allusion to Yudkowsky's "Challenges to Christiano's Capability Amplification Proposal" [LW · GW]. ↩︎
If the problem is with the pronoun implying stereotypes and social roles in the language as spoken, such that another pronoun should be considered more correct despite the lack of corresponding hair color, you should be making that case on the empirical merits, not appealing to hypothetical surgeries. ↩︎
Note, I'm specifically praising the form of the inference, not necessarily the conclusion to detransition. If someone else in different circumstances weighed up the evidence about themself, and concluded that they are trans in some specific objective sense on the empirical merits, that would also be exhibiting the skill. For extremely sex-atypical same-natal-sex-attracted transsexuals, you can at least see why the "born in the wrong body" story makes some sense as a handwavy first approximation. It's just that for males like me, and separately for females like Michelle Alleva, the story doesn't pay rent [LW · GW]. ↩︎
The idea that self-reports can be motivatedly inaccurate without the subject consciously "lying" should not be novel to someone who co-blogged with Robin Hanson for years! ↩︎
It's instructive to consider that Cade Metz could just as credibly offer the same excuse [LW(p) · GW(p)]. ↩︎
Also, passing as a woman isn't the same thing as actually being female. But expecting people to accept an imitation as the real thing without the imitation even succeeding at resembling the real thing is seriously nuts. ↩︎
Obviously, if we're abandoning the norm of assuming good faith, it needs to be abandoned symmetrically. I think I'm adhering to standards of intellectual conduct and being transparent about my motivations, but I'm not perfect, and, unlike Yudkowsky, I'm not so absurdly mendaciously arrogant as to claim "confidence in my own ability to independently invent everything important" (!) about my topics of interest. If Yudkowsky or anyone else thinks they have a case that I'm being culpably intellectually dishonest, they of course have my blessing and encouragement to post it for the audience to evaluate. ↩︎
Note the striking contrast between "A Rational Argument" [LW · GW], in which the Yudkowsky of 2007 wrote that a campaign manager "crossed the line [between rationality and rationalization] at the point where [they] considered whether the questionnaire was favorable or unfavorable to [their] candidate, before deciding whether to publish it", and these 2021 Tweets, in which Yudkowsky seems nonchalant about "not hav[ing] been as willing to tweet a truth helping" one side of a cultural dispute, because "this battle just isn't that close to the top of [his] priority list". Well, sure! Any hired campaign manager could say the same: helping the electorate make an optimally informed decision just isn't that close to the top of their priority list, compared to getting paid.
Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors in both political camps. (Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to make proportionately more errors on the "pro-Stalin" side, because Stalinists aren't facing social pressure to avoid errors.) As for the rationale that "those people might matter for AGI someday", judging by local demographics, it seems much more likely to apply to trans women themselves than their critics! ↩︎
The authors of the HEXACO personality model may have gotten something importantly right in grouping "honesty" and "humility" as a single factor. ↩︎
Yudkowsky once wrote of Stephen Jay Gould [LW · GW] that "[c]onsistently self-serving scientific 'error', in the face of repeated correction and without informing others of the criticism, blends over into scientific fraud." I think the same standard applies here. ↩︎
Perhaps some readers will consider this point to be more revealing about my character rather than Yudkowsky's: that everybody knows his bluster wasn't supposed to be taken seriously, so I have no more right to complain about "false advertising" than purchasers of a "World's Best" ice-cream who are horrified (or pretending to be) that it may not objectively be the best in the world.
Such readers may have a point. If you already knew [LW · GW] that Yudkowsky's pose of epistemic superiority was phony (because everyone knows), then you are wiser than I was. But I think there are a lot of people in the "rationalist" subculture who didn't know (because we weren't anyone). This post is for their benefit. ↩︎
A lot of the epistemic heroism here is just in noticing [LW · GW] the conflict between Feelings and Truth, between Politeness and Truth, rather than necessarily acting on it. If telling a person they smell bad would predictably meet harsh social punishment, I couldn't blame someone for consciously choosing silence and safety over telling the truth.
What I can and do blame someone for is actively fighting for Feelings while misrepresenting himself as the rightful caliph of epistemic rationality. There are a lot of trans people who would benefit from feedback that they don't pass but aren't getting that feedback by default. I wouldn't necessarily expect Yudkowsky to provide it. (I don't, either.) I would expect the person who wrote the Sequences not to proclaim that the important thing is the feelings of people who do not like to be tossed into a Smells Bad bucket, which don't bear on the factual question of whether someone smells bad. ↩︎
22 comments
comment by habryka (habryka4) · 2024-03-03T01:26:28.269Z · LW(p) · GW(p)
I've read Surely You're Joking, Mr. Feynman. I cannot imagine Richard Feynman trying to get away with the "sometimes personally prudent and not community-harmful" excuse.
IIRC Feynman publicly maintained and announced that he didn't have an exceptionally high IQ and was generally not much smarter than his peers, in a way that seemed deceptive in kind of similar ways.
His son (who I think occasionally comments on LW) also briefly commented on this.
In-general Feynman seemed to me like someone who was pretty cavalier with public discourse. I could dig up the references, but the last few times I checked he routinely exaggerated things, and often made points about society that seemed pretty clearly contradicted by other things he believed.
Replies from: Zack_M_Davis, SaidAchmiz↑ comment by Zack_M_Davis · 2024-03-04T07:48:59.289Z · LW(p) · GW(p)
IQ seems like the sort of thing Feynman could be "honestly" motivatedly wrong about. The thing I'm trying to point at is that Feynman seemingly took pride in being a straight talker, in contrast to how Yudkowsky takes pride in not lying.
These are different things. Straight talkers sometimes say false or exaggerated things out of sloppiness, but they actively want listeners to know their reporting algorithm [LW · GW]. Prudently selecting which true sentences to report in the service of a covert goal is not lying, but it's definitely not straight talk.
↑ comment by Said Achmiz (SaidAchmiz) · 2024-03-03T02:12:10.829Z · LW(p) · GW(p)
I would love to see references for this!
Replies from: rsaarelm↑ comment by rsaarelm · 2024-03-03T10:03:18.099Z · LW(p) · GW(p)
James Gleick's Genius cites a transcript of "Address to Far Rockaway High School" from 1965 (or 1966, according to this from the California Institute of Technology archives), in which Feynman talks about how he got a not-exceptionally-high 125 for his IQ score. Couldn't find an online version of the transcript anywhere with a quick search.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2024-03-03T10:32:57.378Z · LW(p) · GW(p)
Sorry, I meant that I’d like to see references for @habryka’s last sentence specifically (i.e., the part for which he says “I could dig up the references”). The IQ thing doesn’t seem to be that.
comment by RamblinDash · 2024-03-04T01:28:31.606Z · LW(p) · GW(p)
Maybe I just don't get it because I'm not part of the Berkeley Community, I just read the writing. But my immediate reaction to this is like, why does Zack care so much about what Eliezer (2024) does or does not think? Or even whether, these days, he is or is not a fraud?
Like if you thought what he wrote in 2007 was great, just listen to that? Many (all?) authors who write great books have also written worse books. Maybe Zack's opinion is falling a long way from wherever it was.
But perhaps he would be happier to adopt a more ecumenical non-Berkeley-ite stance, which I think has been common all along outside The Berkeley Community, and which is something like "Eliezer wrote some great stuff that was very influential on my thinking and that I still believe was very insightful, and I really appreciate that. I enjoy reading LW more than I think I'd enjoy the marginal alternative use of reading time, but I don't go too far out of my way to pay attention to or care about what he's up to these days." - rather than assigning himself an Epic Quest to Win This Argument.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2024-03-04T03:41:25.164Z · LW(p) · GW(p)
You are perhaps wiser than me. (See also footnote 20 [LW · GW].)
Replies from: RamblinDash↑ comment by RamblinDash · 2024-03-04T04:23:05.215Z · LW(p) · GW(p)
I think I'm trying to make a different point than footnote 20?
It seems like you are taking me to be saying something like "You shouldn't care what EY thinks about this Trans issue because "Everybody Knows" not to take his statements on this seriously" - that's how I read FN20.
Whereas I think my point is much more general and really not specific to Trans at all - like why be so deeply invested in the contents of some one guy's mind, at all? On any issue?
EY wrote some great (book-like objects). Inspiring, even. Worldview changing. But, like, whatever his opinions are today (on any issue), my view is mostly like, who cares? Either his arguments are convincing or they aren't.
By analogy, suppose (counter factually) that I think that Barack Obama was the greatest president in history (he wasn't, but he has to be alive for this analogy to work). Does that mean that I should decide what I think about today's political and policy problems based on what Obama thinks? Such that if Obama was wrong about something, I should engage in an epic quest to Get Obama's Attention and get him to admit he's wrong? I mean, that would be ridiculous, right?
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2024-03-04T04:42:55.467Z · LW(p) · GW(p)
Yes, that would be ridiculous. It would also be ridiculous in a broadly similar way if someone spent eight years in the prime of their life prosecuting a false advertising lawsuit against a "World's Best" brand ice-cream for not actually being the best in the world.
But if someone did somehow make that mistake, I could see why they might end up writing a few blog posts afterwards telling the Whole Dumb Story.
Replies from: RamblinDash↑ comment by RamblinDash · 2024-03-04T11:47:58.779Z · LW(p) · GW(p)
Absolutely! I value your voice. But, and excuse me if this is a misread, your posts in this series read to me like you are still trying to convince yourself and/or him.
It reads like you are a sort of rationalist Martin Luther criticizing the Pope. But, like, there are already a lot of metaphorically-protestant rationalists.
Replies from: Avnix↑ comment by Sweetgum (Avnix) · 2024-10-09T06:01:32.894Z · LW(p) · GW(p)
Who are some other examples?
Replies from: RamblinDash↑ comment by RamblinDash · 2024-10-09T13:30:55.166Z · LW(p) · GW(p)
Well, you don't see them as much because they don't necessarily interact with the metaphorical pope(s)/cardinal(s)/etc. I'm just talking about all the thousands of people who have read the sequences and/or other foundational rationalist texts, interpreted them for themselves, and did their best to apply those lessons in their own lives. Many such people exist! They just don't live in the Bay Area, don't necessarily go to rationalist meetups, and might not be active LW posters. So the reason I don't have examples for you is precisely because Active in the Rationalist Community is highly correlated with "LW readers are likely to know who this person is" and/or "this person is publicly identifiable as a Rationalist," and also with "Catholic within this metaphor" -- Official Rationalist Spaces are effectively catholic churches, in the metaphor. Of course you won't find a ton of protestants there!
The Protestant/Catholic schism was fundamentally over whether the Bible should be interpreted by the Pope and the Catholic church, with the role of the faithful to listen to their priest and take what they say as the Received Interpretation of the Word of God, or instead whether each individual Christian should become literate, and read and interpret the Bible for themselves. Of course, there were particular points of dispute but they all stemmed from this - is it possible for the Pope to be wrong, and if so, what does that say about our faith?
The Catholic position was, it is not possible for the Pope to be wrong within the bounds of our faith, and therefore if there is proof that the Pope is wrong then it would prove that our faith is wrong. The Protestant position was, like, of course it's possible for the Pope to be wrong, he's just some guy. So when Martin Luther was saying "it is possible for the Pope to be wrong" that was a big f'in deal. But you don't see modern protestants going around defiantly asserting that it's possible for the Pope to be wrong - they know there are millions of people who already agree with this idea, and it just kinda seems silly or beside the point for them? A protestant generally doesn't care what the Pope thinks any more than they care what other prominent world leaders think.
This comment probably won't get a ton of readership on an old post, but if you understand my metaphor and think you are "protestant," please react with Checkmark, even if you are mostly an LW lurker. If you understand and you think you are "catholic", react with Xmark. If you think this metaphor makes no sense or is fundamentally wrong, then I guess react with something else.
comment by Zack_M_Davis · 2024-03-02T22:07:41.212Z · LW(p) · GW(p)
(I think this is the best and most important post in the sequence; I suspect that many readers who didn't and shouldn't bother with the previous three posts may benefit from this one.)
Replies from: Viliam
comment by TAG · 2024-03-04T15:07:53.727Z · LW(p) · GW(p)
Truth and Feelings can be reconciled, so long as you are not extreme about either: if your true beliefs are hurtful, you can keep them to yourself. Your worldview can be kept separate from your persona. The problem is when you bring a third thing into the picture: the thing known as sincerity or tactlessness, depending on whether or not you believe in it. If you feel obliged to tell the truth, you are going to hurt feelings.
This used to be well known, but is becoming unknown because of an increasing tendency to use words like "truth" and "honesty" in a way that encompasses offering unsolicited opinions in addition to avoiding lying. If you can't make a verbal distinction, it's hard to make a conceptual one.
He visibly cared about other people being in touch with reality [LW · GW]. “I’ve informed a number of male college students that they have large, clearly detectable body odors. In every single case so far, they say nobody has ever told them that before,” he wrote [LW(p) · GW(p)]. (I can testify that this is true: while sharing a car ride with Anna Salamon in 2011, he told me I had B.O.)[21] [LW · GW]
Well, that goes beyond having true beliefs and only making true statements.
comment by Viliam · 2024-03-03T19:00:13.960Z · LW(p) · GW(p)
One important thing I learned when following the links was that some people learn about transsexuality before they learn about autism. (Probably many young people these days.) Which can have a big impact on how they evaluate their experience, because there are many things that could plausibly be interpreted as evidence in either direction, such as "I do not have preferences or behavior typical for my gender".
For example, there was a moment when I learned about autism, and it felt like it explained a few things: "oh, I don't like drinking beer and watching football because I am an autistic nerd". No more explanation needed.
I imagine that in a parallel reality where I somehow never heard about autism, but everyone talks about transsexuality all the time, it would make sense to take all divergence from stereotypical masculinity as evidence of being trans. (And fail to notice that I do not have stereotypically female preferences and behavior either? Well, maybe I'm in denial. Or maybe taking the right hormones is going to fix all of that.) This probably still wouldn't be enough to convince me that I am trans, but I would be quite confused.
Replies from: Ape in the coat↑ comment by Ape in the coat · 2024-03-03T19:55:48.124Z · LW(p) · GW(p)
I don't think that there are many people who hear about the concept of transgenderism earlier than about the concept of gender non-conformity. The latter perfectly explains not liking beer and football as well.
I imagine that in a parallel reality where I somehow never heard about autism, but everyone talks about transsexuality all the time, it would make sense to take all divergence from stereotypical masculinity as evidence of being trans.
Personally, I considered the possibility of being trans much earlier than the possibility of being autistic. At some moment in my life I dissolved the concept of gender for myself, and thus I think that classifying myself as agender is more accurate than calling myself a man. And yet the idea that my lack of appreciation for stereotypical masculinity is sufficient evidence that I'm actually a woman never seriously crossed my mind, for the obvious reasons that you've mentioned yourself:
And fail to notice that I do not have stereotypically female preferences and behavior either?
And yes, later, I figured out that I'm on the spectrum, which seems quite likely to be correlated with my agenderism. Doesn't feel as if this revelation suddenly "made me a man" again.
Well, maybe I'm in denial. Or maybe taking the right hormones is going to fix all of that.
People could be seriously confused in the way you are talking about only if they do not hear about anything else at all other than binary transgenderism. So a huge political movement invalidating gender non-conformity, non-binarism, and neurodiversity in favor of binary transgenderism could be a potential problem. But this is absolutely not the way our world is, as all these topics go hand in hand.
In general, all this scaremongering about "poor autistic children who are just caught in a fad" seems to make very little sense as soon as we actually think about it for a couple of minutes, instead of treating it in pattern-matching mode. As a social group, our kind is among the least likely candidates to be swayed by pure social pressure.
↑ comment by localdeity · 2024-03-04T06:10:20.765Z · LW(p) · GW(p)
In general, all this scaremongering about "poor autistic children who are just caught in a fad" seems to make very little sense as soon as we actually think about it for a couple of minutes
It's an empirical question whether a bunch of poor autistic children are getting caught in a fad. I don't think merely thinking about it can tell you whether it's happening.
As a social group, our kind is among the least likely candidates to be swayed by pure social pressure.
Perhaps so. On the other hand, perhaps our kind (a) already knows we're weird, and (b) when given a possible explanation for why we're weird, is inclined to accept it.
On the subject of whether people are pushing kids to think they're the other gender before they understand autism as an alternative explanation (or before they understand much of anything)... This video contains a person describing ways of perceiving "gender messages" from children aged 1-2 years old, saying that one could be "misgendering" such children, that there can be a "pre-verbal communication about gender, and the message back should not be to negate any of those expressions, but to go with them, and see where they go"; and that "Children will know as early as the beginning of the second year of life".
The person is described as:
A well known subscriber to the “gender affirmative” approach to trans-identified children is Diane Ehrensaft, PhD., a clinical and developmental psychologist. Dr. Ehrensaft, author of The Gender Creative Child, plays a powerful role in the burgeoning field of pediatric transgenderism. She is director and chief psychologist for the University of California-San Francisco children’s hospital gender clinic, and is also an associate professor of pediatrics at UCSF. She sits on the Board of Directors of Gender Spectrum, a San Francisco Bay area organization which is heavily involved in matters pertaining to trans-identified children and youth.
In February, Dr. Ehrensaft, along with other pediatric transition specialists, including Joel Baum, MS (senior director of professional development and family services at Gender Spectrum), presented at a conference and continuing education event in Santa Cruz, California.
Of course, I don't know how many people follow her advice.
comment by cousin_it · 2024-03-03T11:40:43.528Z · LW(p) · GW(p)
I feel like instead of flipping out you could just say "eh, I don't agree with this community's views on gender, I'm more essentialist overall". You don't actually have to convince anyone or get convinced by them. Individual freedom and peaceful coexistence is fine. The norm that "Bayesians can't agree to disagree" should burn in a fire.
Replies from: tailcalled↑ comment by tailcalled · 2024-03-03T16:42:12.529Z · LW(p) · GW(p)
I guess one challenge with this is that Zack doesn't exactly seem to agree with sex essentialism. Rather, he used to be a strong opponent of it, and then Eliezer told him(?) that he shouldn't blindly oppose sex essentialism because often sex essentialism is correct and important, and now Zack wants the rationalist community to stop randomly lashing out against sex essentialism and instead invest in figuring out which parts of sex essentialism are good vs. bad? Whereas the rationalist community is mostly too propagandistic/lazy/busy to investigate sex essentialism systematically.
(Zack bets mostly on "too propagandistic"; this is clearly the case for Eliezer Yudkowsky and Scott Alexander, as he has documented well in this post. But there's plenty of rationalists with different opinions, and also the rationalist community doesn't invest that much into figuring out other socially significant matters, so I would personally put more weight on the lazy/busy aspect.)
Replies from: tailcalled↑ comment by tailcalled · 2024-03-03T17:20:31.474Z · LW(p) · GW(p)
🤔 I guess even with Eliezer Yudkowsky and Scott Alexander, one could question how much the problem is that they started being propagandistic, versus that they never put in the grit to research these topics in the first place. Much of their earlier sex essentialist rhetoric seems derived from easy anecdotes and opinions from other sex essentialists. What's the biggest rationalist research project that overlaps with sex essentialism?
comment by Ya Polkovnik (yurii-burak-1) · 2024-09-24T12:11:36.981Z · LW(p) · GW(p)
As someone who agrees with Stalin on, well, most questions (with a few exceptions, some of them quite important, but it's not important to explain them here), what could I say?
- There are newer papers on IQ and race, and a lot of critique of that one.
- A single person's ability to solve IQ tests can differ by 10+ points. https://www.reuters.com/article/business/healthcare-pharmaceuticals/study-finds-poverty-reduces-brain-power-idUSL6N0GU1PO/
- My dad has scored around 170, and considers IQ tests pointless.
- I agree with your point about the male-looking face.
- It's pointless to pretend that LGBT+ is not an organisation with an ideology. By the way, their ideology is too backwards and reactionary in relation to what WE plan regarding sex and family.
- Yes, raising the question of racial IQ dependency is bad. Human intelligence depends on how it is developed and used over a person's whole life. By raising such questions, you begin to act so as to create proofs for them (a self-fulfilling prediction, aha), and thus go against the basic principles of democracy.
- Yes, "high-IQ Jews" goes into the same dumpster.