If Clarity Seems Like Death to Them

post by Zack_M_Davis · 2023-12-30T17:40:42.622Z · LW · GW · 191 comments

This is a link post for http://unremediatedgender.space/2023/Dec/if-clarity-seems-like-death-to-them/

Contents

  Origins of the Rationalist Civil War (April–May 2019)
  Exit Wounds (May 2019)
  Squabbling On and With lesswrong.com (May–July 2019)
  A Beleaguered Ally Under Fire (July–August 2019)
  A Poignant-to-Me Anecdote That Fits Here Chronologically But Doesn't Particularly Foreshadow Anything (August 2019)
  Philosophy Blogging Interlude! (August–October 2019)
  The Caliph's Madness (August and November 2019)
  A Worthy Critic At Last (November 2019)
  Writer's Block (November 2019)
  Interactions With a Different Rationalist Splinter Group (November–December 2019)
  Philosophy Blogging Interlude 2! (December 2019)
  A Newtonmas Party (December 2019)
  Further Discourses on What the Categories Were Made For (January–February 2020)
  A Private Document About a Disturbing Hypothesis (early 2020)
  The New York Times Pounces (June 2020)
  Philosophy Blogging Interlude 3! (mid-2020)
  A Couple of Impulsive Emails (September 2020)
  A Private Catastrophe (December 2020)
  A False Dénouement (January 2021)

"—but if one hundred thousand [normies] can turn up, to show their support for the [rationalist] community, why can't you?"

I said wearily, "Because every time I hear the word community, I know I'm being manipulated. If there is such a thing as the [rationalist] community, I'm certainly not a part of it. As it happens, I don't want to spend my life watching [rationalist and effective altruist] television channels, using [rationalist and effective altruist] news systems ... or going to [rationalist and effective altruist] street parades. It's all so ... proprietary. You'd think there was a multinational corporation who had the franchise rights on [truth and goodness]. And if you don't market the product their way, you're some kind of second-class, inferior, bootleg, unauthorized [nerd]."

—"Cocoon" by Greg Egan (paraphrased)[1]

Recapping my Whole Dumb Story so far: in a previous post, "Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems", I told you about how I've always (since puberty) had this obsessive erotic fantasy about being magically transformed into a woman and how I used to think it was immoral to believe in psychological sex differences, until I read these great Sequences of blog posts by Eliezer Yudkowsky which incidentally pointed out how absurdly impossible my obsessive fantasy was [LW · GW] ...

—none of which gooey private psychological minutiæ would be in the public interest to blog about, except that, as I explained in a subsequent post, "Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer", around 2016, everyone in the community that formed around the Sequences suddenly decided that guys like me might actually be women in some unspecified metaphysical sense, and the cognitive dissonance from having to rebut all this nonsense coming from everyone I used to trust drove me temporarily insane from stress and sleep deprivation ...

—which would have been the end of the story, except that, as I explained in a subsequent–subsequent post, "A Hill of Validity in Defense of Meaning", in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that suggested that people were philosophically confused if they disputed that men could be women in some unspecified metaphysical sense.

Anyone else being wrong on the internet like that wouldn't have seemed like a big deal, but Scott Alexander had semi-jokingly written that rationalism is the belief that Eliezer Yudkowsky is the rightful caliph. After extensive attempts by me and allies to get clarification from Yudkowsky amounted to nothing, we felt justified in concluding that he and his Caliphate of so-called "rationalists" were corrupt.

Origins of the Rationalist Civil War (April–May 2019)

Anyway, given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing.

I had been hyperfocused on prosecuting my Category War, but the reason Michael Vassar and Ben Hoffman and Jessica Taylor[2] were willing to help me out wasn't that they particularly cared about the gender and categories example, but that it seemed like a manifestation of a more general problem of epistemic rot in "the community."

Ben had previously worked at GiveWell and had written a lot about problems with the Effective Altruism (EA) movement; in particular, he argued that EA-branded institutions were making incoherent decisions under the influence of incentives to distort information in order to seek power.

Jessica had previously worked at MIRI, where she was unnerved by what she saw as under-evidenced paranoia about information hazards and short AI timelines [LW · GW]. (As Jack Gallagher, who was also at MIRI at the time, later put it [LW(p) · GW(p)], "A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work.")

To what extent were my gender and categories thing, and Ben's EA thing, and Jessica's MIRI thing, manifestations of the same underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit?

If there was a real problem, I didn't have a good grasp on it. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people and get a correction on that specific point. (Although as we had just discovered, even that might be too much to hope for.) But culture is the sum of lots and lots of little micro-actions by lots and lots of people. If your entire culture has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who are acting like they don't remember the old Way, or don't think anything has changed, or notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!" Any ideologue or religious person would agree with that. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a gestalt.

Ben called the gestalt he saw the Blight, after the rogue superintelligence in Vernor Vinge's A Fire Upon the Deep. The problem wasn't that people were getting dumber; it was that they were increasingly behaving in a way that was better explained by their political incentives than by coherent beliefs about the world; they were using and construing facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game.

When I asked Ben for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space", despite the fact that no one who had been following the AI risk discourse thought that OpenAI as originally announced was a good idea. Nate had privately clarified that the word "excited" wasn't necessarily meant positively—and in this case meant something more like "terrified."

This seemed to me like the sort of thing where a particularly principled (naïve?) person might say, "That's lying for political reasons! That's contrary to the moral law!" and most ordinary grown-ups would say, "Why are you so upset about this? That sort of strategic phrasing in press releases is just how the world works."

I thought explaining the Blight to an ordinary grown-up was going to need either lots of specific examples that were more egregious than this (and more egregious than the examples in Sarah Constantin's "EA Has a Lying Problem" or Ben's "Effective Altruism Is Self-Recommending"), or somehow convincing the ordinary grown-up why "just how the world works" isn't good enough, and why we needed one goddamned place in the entire goddamned world with unusually high standards.

The schism introduced new pressures on my social life. I told Michael that I still wanted to be friends with people on both sides of the factional schism. Michael said that we should unambiguously regard Yudkowsky and CfAR president (and my personal friend of ten years) Anna Salamon as criminals or enemy combatants who could claim no rights in regard to me or him.

I don't think I got the framing at this time. War metaphors sounded scary and mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I did a few more weeks of emotional processing) was specifically that soldiers on the other side of a war aren't necessarily morally blameworthy as individuals:[3] their actions are being directed by the Power they're embedded in.

I wrote to Anna (Subject: "Re: the end of the Category War (we lost?!?!?!)"):

I was just trying to publicly settle a very straightforward philosophy thing that seemed really solid to me

if, in the process, I accidentally ended up being an unusually useful pawn in Michael Vassar's deranged four-dimensional hyperchess political scheming

that's ... arguably not my fault


I may have subconsciously pulled off an interesting political maneuver. In my final email to Yudkowsky on 20 April 2019 (Subject: "closing thoughts from me"), I had written—

If we can't even get a public consensus from our de facto leadership on something so basic as "concepts need to carve reality at the joints in order to make probabilistic predictions about reality", then, in my view, there's no point in pretending to have a rationalist community, and I need to leave and go find something else to do (perhaps whatever Michael's newest scheme turns out to be). I don't think I'm setting my price for joining [LW · GW] particularly high here?[4]

And as it happened, on 4 May 2019, Yudkowsky retweeted Colin Wright on the "univariate fallacy"—the point that group differences aren't a matter of any single variable [LW · GW]—which was thematically similar to the clarification I had been asking for. (Empirically, it made me feel less aggrieved.) Was I wrong to interpret this as another "concession" to me? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.)
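(A quick numerical illustration of the univariate point, with numbers I made up rather than anything from Wright's thread: two groups can overlap heavily on every single trait while remaining fairly separable on the traits taken jointly.)

```python
import random

# Each trait differs between the groups by only d = 0.5 standard
# deviations, so any single trait classifies at only about 60%
# accuracy, but summing 20 such traits does much better.
# (Invented parameters for illustration.)
random.seed(0)
n_traits, d = 20, 0.5

def individual(group_mean):
    return [random.gauss(group_mean, 1) for _ in range(n_traits)]

group_a = [individual(0.0) for _ in range(1000)]
group_b = [individual(d) for _ in range(1000)]

# Classify by the sum of all traits, thresholded halfway between
# the two group means.
threshold = n_traits * d / 2
correct = sum(sum(x) < threshold for x in group_a)
correct += sum(sum(x) >= threshold for x in group_b)
print(f"accuracy using all traits jointly: {correct / 2000:.3f}")  # ≈ 0.87
```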

Separately, one evening in April, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[5] I said, essentially, "Oh man oh jeez, Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're the only ones backing me up on this incredibly basic philosophy thing and I don't feel like I have anywhere else to go." This culminated in a group conversation with the entire house, which I found unsettling. (Unfortunately, I didn't take notes and don't remember the details except that I had a sense of everyone else seeming to agree on things that I thought were clearly contrary to the spirit of the Sequences.)

The two-year-old son of Mike and "Meredith" was reportedly saying the next day that Kelsey doesn't like his daddy, which was confusing until it was figured out that he had heard Kelsey talking about why she doesn't like Michael Vassar.[6]

And as it happened, on 7 May 2019, Kelsey wrote a Facebook comment displaying evidence of understanding my thesis.

These two datapoints led me to a psychological hypothesis: when people see someone wavering between their coalition and a rival coalition, they're intuitively motivated to offer a few concessions to keep the wavering person on their side. Kelsey could afford to speak as if she didn't understand the thing about sex being a natural category when it was just me freaking out alone, but visibly got it almost as soon as I could credibly threaten to walk (defect to a coalition of people she dislikes). Maybe my "closing thoughts" email had a similar effect on Yudkowsky, assuming he otherwise wouldn't have spontaneously tweeted something about the univariate fallacy two weeks later? This probably wouldn't work if you repeated it, or tried to do it consciously?

Exit Wounds (May 2019)

I started drafting a "why I've been upset for five months and have lost faith in the so-called 'rationalist' community" memoir-post. Ben said that the target audience to aim for was sympathetic but naïve people like I had been a few years ago, who hadn't yet had the experiences I'd had. This way, they wouldn't have to freak out to the point of being imprisoned and demand help from community leaders and not get it; they could just learn from me.

I didn't know how to continue it. I was too psychologically constrained; I didn't know how to tell the Whole Dumb Story without escalating personal conflicts or leaking info from private conversations.

I decided to take a break from the religious civil war and from this blog. I declared May 2019 as Math and Wellness Month.

My dayjob performance had been suffering for months. The psychology of the workplace is ... subtle. There's a phenomenon where some people are vastly more productive than others and everyone knows it, but no one is cruel enough to make it common knowledge. This is awkward for people who simultaneously benefit from the culture of common-knowledge-prevention allowing them to collect the status and money rents of being a $150K/year software engineer without actually performing at that level, but who also read enough Ayn Rand as a teenager to be ideologically opposed to subsisting on unjustly-acquired rents rather than value creation. I didn't think the company would fire me, but I was worried that they should.

I asked my boss to temporarily assign me some easier tasks that I could make steady progress on. (We had a lot of LaTeX templating of insurance policy amendments that needed to get done.) If I was going to be psychologically impaired, it was better to be up-front about how I could best serve the company given that impairment, rather than hoping the boss wouldn't notice.

My intent of a break from the religious war didn't take. I met with Anna on the UC Berkeley campus and read her excerpts from Ben's and Jessica's emails. (She had not provided a comment on "Where to Draw the Boundaries?" [LW · GW] despite my requests, including in the form of two paper postcards that I stayed up until 2 a.m. on 14 April 2019 writing; spamming people with hysterical and somewhat demanding postcards felt more distinctive than spamming people with hysterical and somewhat demanding emails.)

I complained that I had believed our own marketing [LW · GW] material [LW · GW] about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies outside the laboratory [LW · GW]. Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about "the thing" was never actually part of the original vision. She kept repeating that she had tried to warn me, and I didn't listen. (Back in the late 'aughts, she had often recommended Paul Graham's essay "What You Can't Say" to people, summarizing Graham's moral that you should figure out the things you can't say in your culture and then not say them, in order to avoid getting drawn into pointless conflicts.)

It was true that she had tried to warn me for years, and (not yet having gotten over my teenage ideological fever dream), I hadn't known how to listen. But this seemed fundamentally unresponsive to how I kept repeating that I only expected consensus on the basic philosophy of language and categorization (not my object-level special interest in sex and gender). Why was it so unrealistic to imagine that the smart people could enforce standards in our own tiny little bubble?

My frustration bubbled out into follow-up emails:

I'm also still pretty angry about how your response to my "I believed our own propaganda" complaint is (my possibly-unfair paraphrase) "what you call 'propaganda' was all in your head; we were never actually going to do the unrestricted truthseeking thing when it was politically inconvenient." But ... no! I didn't just make up the propaganda! The hyperlinks still work! I didn't imagine them! They were real! You can still click on them: "A Sense That More Is Possible" [LW · GW], "Raising the Sanity Waterline" [LW · GW]

I added:

Can you please acknowledge that I didn't just make this up? Happy to pay you $200 for a reply to this email within the next 72 hours

Anna said she didn't want to receive cheerful price [LW · GW] offers from me anymore; previously, she had regarded my occasionally throwing money at her to bid for her scarce attention[7] as good-faith libertarianism between consenting adults, but now she was afraid that if she accepted, it would be portrayed in some future Ben Hoffman essay as an instance of her using me. She agreed that someone could have gotten the ideals I had gotten out of those posts, but there was also evidence from that time pointing the other way (e.g., "Politics Is the Mind-Killer" [LW · GW]) and it shouldn't be surprising if people steered clear of controversy.

I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether I should cut my dick off would become a political issue. That was new evidence about whether the original vision was wise! I wasn't particularly trying to do politics with my idiosyncratic special interest; I was trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit an epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, and they couldn't correct the mistake even after it was pointed out, then the "rationalists" were worse than useless to me. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I were part of a reference class that included AI researchers).

Fundamentally, I was skeptical that you could do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky had described in "Entangled Truths, Contagious Lies" [LW · GW] and "Dark Side Epistemology" [LW · GW]: the need to lie about lying and cover up cover-ups propagates recursively. Anna was unusually skillful at thinking things without saying them; I thought people facing similar speech restrictions generally just get worse at thinking (plausibly[8] including Yudkowsky), and the problem gets worse as the group effort scales. (It's less risky to recommend "What You Can't Say" to your housemates than to put it on your 501(c)(3) organization's canonical reading list.) You can't optimize your group's culture for not talking about atheism without also optimizing against understanding Occam's razor [LW · GW]; you can't optimize for not questioning gender self-identity without also optimizing against understanding the 37 ways that words can be wrong [LW · GW].

Squabbling On and With lesswrong.com (May–July 2019)

Despite Math and Wellness Month and my intent to take a break from the religious civil war, I kept reading Less Wrong during May 2019, and ended up scoring a couple of victories in the civil war (at some cost to Wellness).

MIRI researcher Scott Garrabrant wrote a post about how "Yes Requires the Possibility of No" [LW · GW]. Information-theoretically, a signal sent with probability one transmits no information: you can only learn something from hearing a "Yes" if you believed that the answer could have been "No". I saw an analogy to my philosophy-of-language thesis, and mentioned it in a comment: if you want to believe that x belongs to category C, you might try redefining C in order to make the question "Is x a C?" come out "Yes", but you can only do so at the expense of making C less useful. Meaningful category-membership (Yes) requires the possibility of non-membership (No).
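(A minimal sketch of the information-theoretic point, my own illustration rather than code from Garrabrant's post: the information you get from an answer is its surprisal, which falls to zero as the answer becomes a foregone conclusion.)

```python
import math

def surprisal_bits(p_yes):
    """Bits of information conveyed by hearing "Yes" when you had
    assigned it probability p_yes."""
    return -math.log2(p_yes)

print(surprisal_bits(0.5))   # 1.0 bit: the answer was genuinely in doubt
print(surprisal_bits(0.99))  # ≈ 0.014 bits: nearly a foregone conclusion
print(surprisal_bits(1.0))   # -0.0, i.e., zero bits: a guaranteed "Yes"
                             # tells you nothing
```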

Someone objected that [LW(p) · GW(p)] she found it "unpleasant that [I] always bring [my] hobbyhorse in, but in an 'abstract' way that doesn't allow discussing the actual object level question"; it made her feel "attacked in a way that allow[ed] for no legal recourse to defend [herself]." I replied [LW(p) · GW(p)] that that was understandable, but that I found it unpleasant that our standard Bayesian philosophy of language somehow got politicized, such that my attempts to do correct epistemology were perceived as attacking people. Such a trainwreck ensued that the mods manually moved the comments to their own post [LW · GW]. Based on the karma scores and what was said,[9] I count it as a victory.

On 31 May 2019, a draft of a new Less Wrong FAQ [LW · GW] included a link to "The Categories Were Made for Man, Not Man for the Categories" as one of Scott Alexander's best essays. I argued that it would be better to cite almost literally any other Slate Star Codex post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: either Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, or I could call it a blatant lie because no rule of rationality says I shouldn't draw the category boundaries of "blatant lie" that way. Ruby Bloom, the new moderator who wrote the draft, was persuaded [LW(p) · GW(p)], and "... Not Man for the Categories" was not included in the final FAQ. Another "victory."

But "victories" weren't particularly comforting when I resented this becoming a political slapfight at all. I wrote to Anna and Steven Kaas (another old-timer who I was trying to "recruit" to my side of the civil war). In "What You Can't Say", Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what your real work is. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except in this one meta-level essay) was probably the right choice. But someone whose goal is to improve Society's collective ability to reason should probably be doing more fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improve our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software."

I said I didn't know if either of them had caught the "Yes Requires the Possibility" trainwreck, but wasn't it terrifying that the person who objected to my innocuous philosophy comment was a MIRI research associate? Not to demonize that commenter, because I was just as bad (if not worse) in 2008. The difference was that in 2008, we had a culture that could beat it out of me.

Steven objected that tractability and side effects matter, not just effect on the mission considered in isolation. For example, the Earth's gravitational field directly impedes NASA's mission, and doesn't hurt Paul Graham, but both NASA and Paul Graham should spend the same amount of effort (viz., zero) trying to reduce the Earth's gravity.

I agreed that tractability needed to be addressed, but the situation felt analogous to being in a coal mine in which my favorite of our canaries had just died. Caliphate officials (Eliezer, Scott, Anna) and loyalists (Steven) were patronizingly consoling me: sorry, I know you were really attached to that canary, but it's just a bird; it's not critical to the coal-mining mission. I agreed that I was unreasonably attached to that particular bird, but that's not why I expected them to care. The problem was what the dead canary was evidence of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question doesn't matter. (The causal graph [LW · GW] is the fork "canary death ← mine gas → danger" rather than the direct link "canary death → danger".) Ben and Michael and Jessica claimed to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should have been able to get on the same page about the canary-count issue?
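(A toy simulation of the fork, with probabilities I invented for illustration: conditioning on a dead canary raises the probability of danger even though the death itself causes nothing.)

```python
import random

# Toy model of the fork "canary death ← mine gas → danger";
# all the numbers here are made up.
random.seed(0)
trials = 100_000
dead = danger_total = dead_and_danger = 0
for _ in range(trials):
    gas = random.random() < 0.10                            # common cause
    canary_dead = random.random() < (0.90 if gas else 0.01)
    danger = random.random() < (0.80 if gas else 0.02)
    dead += canary_dead
    danger_total += danger
    dead_and_danger += canary_dead and danger

print(f"P(danger)               ≈ {danger_total / trials:.3f}")   # ≈ 0.10
print(f"P(danger | canary dead) ≈ {dead_and_danger / dead:.3f}")  # ≈ 0.73
```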

Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was a fragment of group theory and some probability theory that later turned out to be deeply relevant to understanding sex differences. So much for taking a break.

In June 2019, I made a linkpost on Less Wrong [LW · GW] to Tal Yarkoni's "No, It's Not The Incentives—It's you", about how professional scientists should stop using career incentives as an excuse for doing poor science. It generated a lot of discussion.

In an email (Subject: "LessWrong.com is dead to me"), Jessica identified Less Wrong moderator Raymond Arnold's comments [LW(p) · GW(p)] as her last straw. Jessica wrote:

LessWrong.com is a place where, if the value of truth conflicts with the value of protecting elites' feelings and covering their asses, the second value will win.

Trying to get LessWrong.com to adopt high-integrity norms is going to fail, hard, without a lot of conflict. (Enforcing high-integrity norms is like violence; if it doesn't work, you're not doing enough of it). People who think being exposed as fraudulent (or having their friends exposed as fraudulent) is a terrible outcome, are going to actively resist high-integrity discussion norms.

Posting on Less Wrong made sense as harm-reduction, but the only way to get people to stick up for truth would be to convert them to a whole new worldview, which would require a lot of in-person discussions. She brought up the idea of starting a new forum to replace Less Wrong.

Ben said that trying to discuss with the Less Wrong mod team would be a good intermediate step, after we clarified to ourselves what was going on; it might be "good practice in the same way that the Eliezer initiative was good practice." The premise should be, "If this is within the Overton window for Less Wrong moderators, there's a serious confusion on the conditions required for discourse"—scapegoating individuals wasn't part of it. He was less optimistic about harm reduction; participating on the site was implicitly endorsing it by submitting to the rule of the karma and curation systems.

"Riley" expressed sadness about how the discussion on "The Incentives" demonstrated that the community they loved—including dear friends—was in a bad way. Michael (in a separate private discussion) had said he was glad to hear about the belief-update. "Riley" said that Michael saying that also made them sad, because it seemed discordant to be happy about sad news. Michael wrote:

I['m] sorry it made you sad. From my perspective, the question is no[t] "can we still be friends with such people", but "how can we still be friends with such people" and I am pretty certain that understanding their perspective [is] an important part of the answer. If clarity seems like death to them and like life to us, and we don't know this, IMHO that's an unpromising basis for friendship.


I got into a scuffle with Ruby Bloom on his post on "Causal Reality vs. Social Reality" [LW · GW]. I wrote what I thought was a substantive critique [LW(p) · GW(p)], but Ruby complained that [LW(p) · GW(p)] my tone was too combative, and asked for more charity and collaborative truth-seeking[10] in any future comments.

(My previous interaction with Ruby had been my challenge to "... Not Man for the Categories" appearing on the Less Wrong FAQ. Maybe he couldn't let me "win" again so quickly?)

I emailed the posse about the thread, on the grounds that gauging the psychology of the mod team was relevant to our upcoming Voice vs. Exit choices. Meanwhile on Less Wrong, Ruby kept doubling down:

[I]f the goal is everyone being less wrong, I think some means of communicating are going to be more effective than others. I, at least, am a social monkey. If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive. It makes it hard to update and it makes it easy to dislike the one who called me out. [...]

[...]

Even if you wish to express that someone is wrong, I think this is done more effectively if one simultaneously continues to implicitly express "I think there is still some prior that you are correct and I curious to hear your thoughts", or failing that "You are very clearly wrong here yet I still respect you as a thinker who is worth my time to discourse with." [...] There's an icky thing here I feel like for there to be productive and healthy discussion you have to act as though at least one of the above statements is true, even if it isn't.

"Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email. I would later complain to Anna that Ruby's profile said he was one of two people to have volunteered for CfAR on three continents. If this was the level of performance we could expect from veteran CfAR participants, what was CfAR for?

I replied to Ruby that [LW(p) · GW(p)] you could just directly respond to your interlocutor's arguments. Whether you respect them as a thinker is off-topic. "You said X, but this is wrong because of Y" isn't a personal attack! I thought it was ironic that this happened on a post that was explicitly about causal vs. social reality; it's possible that I wouldn't have been so rigid about this if it weren't for that prompt.

(On reviewing the present post prior to publication, Ruby writes that he regrets his behavior during this exchange.)

Jessica ended up writing a post, "Self-Consciousness Wants Everything to Be About Itself" [LW · GW], arguing that tone arguments are mainly about people silencing discussion of actual problems in order to protect their feelings. She used as a central example a case study of a college official crying and saying that she "felt attacked" in response to complaints about her office being insufficiently supportive of a racial community.

Jessica was surprised by how well it worked, judging by Ruby mentioning silencing in a subsequent comment to me [LW(p) · GW(p)] (plausibly influenced by Jessica's post) and by an exchange between Ray and Ruby that she thought was "surprisingly okay" [LW(p) · GW(p)].

From this, Jessica derived the moral that when people are doing something that seems obviously terrible and in bad faith, it can help to publicly explain why the abstract thing is bad, without accusing anyone. This made sense because people didn't want to be held to standards that other people aren't being held to: a call-out directed at oneself personally could be selective enforcement, but a call-out of the abstract pattern invited changing one's behavior if the new equilibrium looked better.

Michael said that part of the reason this worked was because it represented a clear threat of scapegoating without actually scapegoating and without surrendering the option to do so later; it was significant that Jessica's choice of example positioned her on the side of the powerful social-justice coalition.


On 4 July 2019, Scott Alexander published "Some Clarifications on Rationalist Blogging", disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [Slate Star Codex] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by Ben's request back in March that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it.


Jessica published "The AI Timelines Scam" [LW · GW], arguing that the recent prominence of "short" (e.g., 2030) timelines to transformative AI was better explained by political factors than by technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project.

(Remember, this was 2019. After seeing what GPT-3, DALL-E, PaLM, &c. could do during the "long May 2020", it now looks to me like the short-timelines people had better intuitions than Jessica gave them credit for.)

I still sympathized with the pushback from Caliphate supporters against using "scam"/"fraud"/"lie"/&c. language to include motivated elephant-in-the-brain-like distortions. I conceded that this was a boring semantic argument, but I feared that until we invented better linguistic technology, the boring semantic argument was going to continue sucking up discussion bandwidth with others.

"Am I being too tone-policey here?" I asked the posse. "Is it better if I explicitly disclaim, 'This is marketing advice; I'm not claiming to be making a substantive argument'?" (Subject: "Re: reception of 'The AI Timelines Scam' is better than expected!")

Ben replied, "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you?" He argued that investigations of financial fraud focus on false promises about money, rather than the psychological minutiæ of the perp's motives.

I replied that the concept of mens rea did seem necessary for maintaining good incentives, at least in some contexts. The law needs to distinguish between accidentally hitting a pedestrian in one's car ("manslaughter") and premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies. (Anyone who drives at all is taking on some nonzero risk of committing vehicular manslaughter.) The manslaughter example was simpler than misinformation-that-moves-resources,[11] and it might not be easy for the court to determine "intent", but I didn't see what would reverse the weak principle that intent sometimes matters.

Ben replied that what mattered in the determination of manslaughter vs. murder was whether there was long-horizon optimization power toward the outcome of someone's death, not what sentiments the killer rehearsed in their working memory.

On a phone call later, Michael made an analogy between EA and Catholicism. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope had to know on some level that it wasn't true. (I agreed that this usage of "fraud" made sense to me.) In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they had to know at some level that they weren't doing that.


Ruby wrote a document about ways in which one's speech could harm people, which was discussed in the comments of a draft Less Wrong post by some of our posse members and some of the Less Wrong mods.[12]

Ben wrote:

What I see as under threat is the ability to say in a way that's actually heard, not only that opinion X is false, but that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction. Frequently, attempts to say this are construed primarily as moves to attack some person or institution, pushing them into the outgroup. Frequently, people suggest to me an "equivalent" wording with a softer tone, which in fact omits important substantive criticisms I mean to make, while claiming to understand what's at issue.

Ray Arnold replied:

My core claim is: "right now, this isn't possible, without a) it being heard by many people as an attack, b) without people having to worry that other people will see it as an attack, even if they don't."

It seems like you see this something as "there's a precious thing that might be destroyed" and I see it as "a precious thing does not exist and must be created, and the circumstances in which it can exist are fragile." It might have existed in the very early days of LessWrong. But the landscape now is very different than it was then. With billions of dollars available and at stake, what worked then can't be the same thing as what works now.

(!!)[13]

Jessica pointed this out as a step towards discussing the real problem (Subject: "progress towards discussing the real thing??"). She elaborated in the secret thread: now that the "EA" scene was adjacent to real-world money and power, people were incentivized to protect their reputations (and beliefs related to their reputations) in anti-epistemic ways, in a way that they wouldn't if the scene were still just a philosophy club. This was catalyzing a shift of norms from "that which can be destroyed by the truth, should be" towards protecting feelings—where "protecting feelings" was actually about protecting power. The fact that the scene was allocating billions of dollars made it more important for public discussions to reach the truth, compared to philosophy club—but it also increased the likelihood of obfuscatory behavior that philosophy-club norms (like "assume good faith") didn't account for. We might need to extend philosophy-club norms to take into account the possibility of adversarial action: there's a reason that courts of law don't assume good faith. We didn't want to disproportionately punish people for getting caught up in obfuscatory patterns; that would just increase the incentive to obfuscate. But we did need some way to reveal what was going on.

In email, Jessica acknowledged that Ray had a point that it was confusing to use court-inspired language if we didn't intend to blame and punish people. Michael said that court language was our way to communicate "You don't have the option of non-engagement with the complaints that are being made." (Courts can summon people; you can't ignore a court summons the way you can ignore ordinary critics.)

Michael said that we should also develop skill in using social-justicey blame language, as was used against us, harder, while we still thought of ourselves as trying to correct people's mistakes rather than being in a conflict against the Blight. "Riley" said that this was a terrifying you-have-become-the-abyss suggestion; Ben thought it was obviously a good idea.

I was horrified by the extent to which Less Wrong moderators (!) seemed to be explicitly defending "protect feelings" norms. Previously, I had mostly been seeing the present struggle through the lens of my idiosyncratic Something to Protect as a simple matter of Bay Area political correctness. I was happy to have Michael, Ben, and Jessica as allies, but I hadn't been seeing the Blight as a unified problem. Now I was seeing something.

An in-person meeting was arranged for 23 July 2019 at the Less Wrong office, with Ben, Jessica, me, and most of the Less Wrong team (Ray, Ruby, Oliver Habryka, Vaniver, Jim Babcock). I don't have notes and don't really remember what was discussed in enough detail to faithfully recount it.[14] I ended up crying at one point and left the room for a while.

The next day, I asked Ben and Jessica for their takeaways via email (Subject: "peace talks outcome?"). Jessica said that I was a "helpful emotionally expressive and articulate victim" and that there seemed to be a consensus that people like me should be warned somehow that Less Wrong wasn't doing fully general sanity-maximization anymore. (Because community leaders were willing to sacrifice, for example, ability to discuss non-AI heresies in order to focus on sanity about AI in particular while maintaining enough mainstream acceptability and power.)

I said that, from my own selfish perspective, the main outcome was finally shattering my "rationalist" social identity. I needed to exhaust all possible avenues of appeal before it became real to me. The morning after was the first for which "rationalists ... them" felt more natural than "rationalists ... us".

A Beleaguered Ally Under Fire (July–August 2019)

Michael's reputation in the community, already not what it once was, continued to be debased even further.

The local community center, the Berkeley REACH,[15] was conducting an investigation as to whether to exclude Michael (which was mostly moot, as he didn't live in the Bay Area). When I heard that the committee conducting the investigation was "very close to releasing a statement", I wrote to them:

I've been collaborating with Michael a lot recently, and I'm happy to contribute whatever information I can to make the report more accurate. What are the charges?

They replied:

To be clear, we are not a court of law addressing specific "charges." We're a subcommittee of the Berkeley REACH Panel tasked with making decisions that help keep the space and the community safe.

I replied:

Allow me to rephrase my question about charges. What are the reasons that the safety of the space and the community require you to write a report about Michael? To be clear, a community that excludes Michael on inadequate evidence is one where I feel unsafe.

We arranged a call, during which I angrily testified that Michael was no threat to the safety of the space and the community. This would have been a bad idea if it were the cops, but in this context, I figured my political advocacy couldn't hurt.

Concurrently, I got into an argument with Kelsey Piper about Michael after she wrote on Discord that her "impression of Vassar's threatening schism is that it's fundamentally about Vassar threatening to stir shit up until people stop socially excluding him for his bad behavior." I didn't think that was what the schism was about (Subject: "Michael Vassar and the theory of optimal gossip").

In the course of litigating Michael's motivations (the details of which are not interesting enough to summarize here), Kelsey mentioned that she thought Michael had done immense harm to me—that my models of the world and ability to reason were worse than they were a year ago. I thanked her for the concern, and asked if she could be more specific.

She said she was referring to my ability to predict consensus and what other people believe. I expected people to be convinced by arguments that they found not only unconvincing, but so unconvincing they didn't see why I would bother. I believed things to be in obvious violation of widespread agreement that everyone else thought were not. My shocked indignation at other people's behavior indicated a poor model of social reality.

I considered this an insightful observation about a way in which I'm socially retarded. I had had similar problems with school. We're told that the purpose of school is education (to the extent that most people think of school and education as synonyms), but the consensus behavior is "sit in lectures and trade assignments for grades." Faced with what I saw as a contradiction between the consensus narrative and the consensus behavior, I would assume that the narrative was the "correct" version, and so I spent a lot of time trying to start conversations about math with everyone and then getting indignant when they'd say, "What class is this for?" Math isn't for classes; it's the other way around, right?

Empirically, no! But I had to resolve the contradiction between narrative and reality somehow, and if my choices were "People are mistakenly failing to live up to the narrative" and "Everybody knows the narrative is a lie; it would be crazy to expect people to live up to it", the former had been more appealing.

It was the same thing here. Kelsey said that it was predictable that Yudkowsky wouldn't make a public statement, even one as basic as "category boundaries should be drawn for epistemic and not instrumental reasons," because his experience of public statements was that they'd be taken out of context and used against MIRI by the likes of /r/SneerClub. This wasn't an update at all. (Everyone at "Arcadia" had agreed, in the house discussion in April.) Vassar's insistence that Eliezer be expected to do something that he obviously was never going to do had caused me to be confused and surprised by reality.[16]

Kelsey seemed to be taking it as obvious that Eliezer Yudkowsky's public behavior was optimized to respond to the possibility of political attacks from people who hate him anyway, and not the actuality of thousands of words of careful arguments appealing to his own writings from ten years ago. Very well. Maybe it was obvious. But if so, I had no reason to care what Eliezer Yudkowsky said, because not provoking SneerClub isn't truth-tracking, and careful arguments are. This was a huge surprise to me, even if Kelsey knew better.

What Kelsey saw as "Zack is losing his ability to model other people and I'm worried about him," I thought Ben and Jessica would see as "Zack is angry about living in simulacrum level 3 and we're worried about everyone else."

I did think that Kelsey was mistaken about how much causality to attribute to Michael's influence, rather than to me already being socially retarded. From my perspective, validation from Michael was merely the catalyst that excited me from confused-and-sad to confused-and-socially-aggressive-about-it. The latter phase revealed a lot of information, and not just to me. Now I was ready to be less confused—after I was done grieving.

Later, talking in person at "Arcadia", Kelsey told me that the REACH was delaying its release of its report about Michael because someone whose identity she could not disclose had threatened to sue. As far as my interest in defending Michael went, I counted this as short-term good news (because the report wasn't being published for now) but longer-term bad news (because the report must be a hit piece if Michael's mysterious ally was trying to hush it).

When I mentioned this to Michael on Signal on 3 August 2019, he replied:

The person is me, the whole process is a hit piece, literally, the investigation process and not the content. Happy to share the latter with you. You can talk with Ben about appropriate ethical standards.

In retrospect, I feel dumb for not guessing that Michael's mysterious ally was Michael himself. This kind of situation is an example of how norms protecting confidentiality distort information; Kelsey felt obligated to obfuscate any names connected to potential litigation, which led me to infer the existence of a nonexistent person. I can't say I never introduce this kind of distortion myself (for I, too, am bound by norms), but when I do, I feel dirty about it.

As far as appropriate ethical standards go, I didn't approve of silencing critics with lawsuit threats, even while I agreed with Michael that "the process is the punishment." I imagine that if the REACH wanted to publish a report about me, I would expect to defend myself in public, having faith that the beautiful weapon of my Speech would carry the day against a corrupt community center—or for that matter, against /r/SneerClub.

This is arguably one of my more religious traits. Michael and Kelsey are domain experts and probably know better.

A Poignant-to-Me Anecdote That Fits Here Chronologically But Doesn't Particularly Foreshadow Anything (August 2019)

While visiting "Arcadia", "Meredith" and Mike's son (age 2¾ years) asked me, "Why are you a boy?"

After a long pause, I said, "Yes," as if I had misheard the question as "Are you a boy?" I think it was a motivated mishearing: it was only after I answered that I consciously realized that's not what the kid asked.

I think I would have preferred to say, "Because I have a penis, like you." But it didn't seem appropriate.

Philosophy Blogging Interlude! (August–October 2019)

I wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained. So instead, I mostly turned to a combination of writing bitter [LW(p) · GW(p)] and insulting [LW(p) · GW(p)] comments [LW(p) · GW(p)] whenever I saw someone praise the "rationalists" collectively, and—more philosophy blogging!

In August 2019's "Schelling Categories, and Simple Membership Tests" [LW · GW], I explained a nuance that had only merited a passing mention in "Where to Draw the Boundaries?" [LW · GW]: sometimes you might want categories for different agents to coordinate on, even at the cost of some statistical "fit." (This was generalized from a "pro-trans" argument that had occurred to me, that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as.)

In September 2019's "Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists" [LW · GW], I presented a toy mathematical model of how censorship distorts group beliefs. I was surprised by how well-received it was (high karma, Curated within a few days, later included in the Best-of-2019 collection), especially given that it was explicitly about politics (albeit at a meta level, of course). Ben and Jessica had discouraged me from bothering when I sent them a draft. (Jessica said that it was obvious even to ten-year-olds that partisan politics distorts impressions by filtering evidence. "[D]o you think we could get a ten-year-old to explain it to Eliezer Yudkowsky?" I asked.)
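(A compressed sketch in the spirit of the post's model, not its actual math: when reports unfavorable to the favored side are suppressed with some probability, the surviving reports are predictably biased.)

```python
import random

# The true frequency of pro-Green observations is 0.5, but each
# anti-Green observation is suppressed with probability 0.6 before
# publication. (Parameters invented for illustration.)
random.seed(0)
true_p, censor_rate = 0.5, 0.6

reported = []
for _ in range(100_000):
    pro_green = random.random() < true_p
    if pro_green or random.random() >= censor_rate:
        reported.append(pro_green)

print(f"true frequency:     {true_p}")
print(f"reported frequency: {sum(reported) / len(reported):.3f}")  # ≈ 0.71
```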

In October 2019's "Algorithms of Deception!" [LW · GW], I exhibited some toy Python code modeling different kinds of deception. If a function faithfully passes its observations as input to another function, the second function can construct a well-calibrated probability distribution. But if the first function outright fabricates evidence, or selectively omits some evidence, or gerrymanders the categories by which it interprets its observations as evidence, the second function computes a worse probability distribution.
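(Something like the following, reconstructed here from the post's description rather than copied from it; the category-gerrymandering case is harder to compress, so this sketch only covers faithful reporting, fabrication, and selective omission.)

```python
from collections import Counter

# A "reporter" transforms raw observations before a second function
# forms beliefs from whatever reports arrive.

def honest(observations):
    return list(observations)                   # pass everything through

def fabricating(observations):
    return list(observations) + ["heads"] * 5   # invent extra evidence

def omitting(observations):
    return [o for o in observations if o == "heads"]  # drop the rest

def believe(reports):
    counts = Counter(reports)
    return counts["heads"] / len(reports)       # frequency estimate

obs = ["heads"] * 5 + ["tails"] * 5
for reporter in (honest, fabricating, omitting):
    print(reporter.__name__, believe(reporter(obs)))
# honest 0.5, fabricating ≈ 0.667, omitting 1.0: only the honest
# channel leaves the receiver calibrated.
```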

Also in October 2019, in "Maybe Lying Doesn't Exist" [LW · GW], I replied to Scott Alexander's "Against Lie Inflation", which was itself a generalized rebuke of Jessica's "The AI Timelines Scam". Scott thought Jessica was wrong to use language like "lie", "scam", &c. to describe someone being (purportedly) motivatedly wrong, but not necessarily consciously lying.

I was furious when "Against Lie Inflation" came out. (Furious at what I perceived as hypocrisy, not because I particularly cared about defending Jessica's usage.) Oh, so now Scott agreed that making language less useful is a problem?! But on further consideration, I realized he was actually being consistent in admitting appeals to consequences as legitimate. In objecting to the expanded definition of "lying", Alexander was counting "everyone is angrier" (because of more frequent accusations of lying) as a cost. In my philosophy, that wasn't a legitimate cost. (If everyone is lying, maybe people should be angry!)

The Caliph's Madness (August and November 2019)

I continued to note signs of contemporary Yudkowsky not being the same author who wrote the Sequences. In August 2019, he Tweeted:

I am actively hostile to neoreaction and the alt-right, routinely block such people from commenting on my Twitter feed, and make it clear that I do not welcome support from those quarters. Anyone insinuating otherwise is uninformed, or deceptive.

I argued that the people who smear him as a right-wing Bad Guy do so in order to extract these kinds of statements of political alignment as concessions; his own timeless decision theory would seem to recommend ignoring them rather than paying even this small Danegeld.

When I emailed the posse about it begging for Likes (Subject: "can't leave well enough alone"), Jessica said she didn't get my point. If people are falsely accusing you of something (in this case, of being a right-wing Bad Guy), isn't it helpful to point out that the accusation is false? It seemed like I was advocating for self-censorship on the grounds that speaking up helps the false accusers. But it also helps bystanders (by correcting the misapprehension) and hurts the false accusers (by demonstrating to bystanders that the accusers are making things up). By linking to "Kolmogorov Complicity and the Parable of Lightning" in my replies, I seemed to be insinuating that Yudkowsky was under some sort of duress, but this wasn't spelled out: if Yudkowsky would face social punishment for advancing right-wing opinions, did that mean he was under such duress that saying anything at all would be helping the oppressors?

The paragraph from "Kolmogorov Complicity" that I was thinking of was (bolding mine):

Some other beliefs will be found to correlate heavily with lightning-heresy. Maybe atheists are more often lightning-heretics; maybe believers in global warming are too. The enemies of these groups will have a new cudgel to beat them with, "If you believers in global warming are so smart and scientific, how come so many of you believe in lightning, huh?" Even the savvy Kolmogorovs within the global warming community will be forced to admit that their theory just seems to attract uniquely crappy people. It won't be very convincing. Any position correlated with being truth-seeking and intelligent will be always on the retreat, having to forever apologize that so many members of their movement screw up the lightning question so badly.

I perceived a pattern where people who are in trouble with the orthodoxy buy their own safety by denouncing other heretics: not just disagreeing with the other heretics because they are mistaken, which would be right and proper Discourse, but denouncing them ("actively hostile to") as a way of paying Danegeld.

Suppose there are five true heresies, but anyone who's on the record as believing more than one gets burned as a witch. Then it's impossible to have a unified rationalist community [LW · GW], because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. That's why Scott Alexander couldn't get the philosophy of categorization right in full generality, even though his writings revealed an implicit understanding of the correct way,[17] and he and I had a common enemy in the social-justice egregore. He couldn't afford to. He'd already spent his Overton budget on anti-feminism.

Alexander (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech,[18] and it was tempting to try to jump there. (It would probably be better if there were a way to organize just the good witches, and exclude all the Actually Bad witches, but the Sorites problem on witch Badness made that hard to organize without falling back to the one-heresy-per-thinker equilibrium.)

Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" facts.) I agreed that conflating political positions with facts would be bad. I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had learned stuff from reading far-right authors (most notably Mencius Moldbug) and from talking with "Thomas". I was starting to appreciate what Michael had said about "Less precise is more violent" back in April when I was talking about criticizing "rationalists".

Jessica asked if my opinion would change depending on whether Yudkowsky thought neoreaction was intellectually worth engaging with. (Yudkowsky had said years ago [LW(p) · GW(p)] that Moldbug was low quality.)

I would never fault anyone for saying "I vehemently disagree with what little I've read and/or heard of this author." I wasn't accusing Yudkowsky of being insincere.

What I did think was that the need to keep up appearances of not being a right-wing Bad Guy was a serious distortion of people's beliefs, because there are at least a few questions of fact where believing the correct answer can, in the political environment of the current year, be used to paint one as a right-wing Bad Guy. I would have hoped for Yudkowsky to notice that this is a rationality problem and to not actively make the problem worse. I was counting "I do not welcome support from those quarters" as making the problem worse insofar as it would seem to imply that if I thought I'd learned valuable things from Moldbug, that made me less welcome in Yudkowsky's fiefdom.

Yudkowsky certainly wouldn't endorse "Even learning things from these people makes you unwelcome" as stated, but "I do not welcome support from those quarters" still seemed like a pointlessly partisan silencing/shunning attempt, when one could just as easily say, "I'm not a neoreactionary, and if some people who read me are, that's obviously not my fault."

Jessica asked if Yudkowsky denouncing neoreaction and the alt-right would still seem harmful, if he were also to acknowledge, e.g., racial IQ differences?

I agreed that that would be better, but realistically, I didn't see why Yudkowsky should want to poke that hornet's nest. This was the tragedy of recursive silencing: if you can't afford to engage with heterodox ideas, either you become an evidence-filtering clever arguer [LW · GW], or you're not allowed to talk about anything except math. (Not even the relationship between math and human natural language, as we had found out recently.)

It was as if there was a "Say Everything" attractor and a "Say Nothing" attractor, and my incentives were pushing me towards the "Say Everything" attractor—but that was only because I had Something to Protect in the forbidden zone and I was a decent programmer (who could therefore expect to be employable somewhere, just as James Damore eventually found another job). Anyone in less extreme circumstances would find themselves pushed toward the "Say Nothing" attractor.

It was instructive to compare Yudkowsky's new disavowal of neoreaction with one from 2013, in response to a TechCrunch article citing former MIRI employee Michael Anissimov's neoreactionary blog More Right:[19]

"More Right" is not any kind of acknowledged offspring of Less Wrong nor is it so much as linked to by the Less Wrong site. We are not part of a neoreactionary conspiracy. We are and have been explicitly pro-Enlightenment, as such, under that name. Should it be the case that any neoreactionary is citing me as a supporter of their ideas, I was never asked and never gave my consent. [...]

Also to be clear: I try not to dismiss ideas out of hand due to fear of public unpopularity. However I found Scott Alexander's takedown of neoreaction convincing and thus I shrugged and didn't bother to investigate further.

My criticism regarding negotiating with terrorists did not apply to the 2013 disavowal. More Right was brand encroachment on Anissimov's part that Yudkowsky had a legitimate interest in policing, and the "I try not to dismiss ideas out of hand" disclaimer importantly avoided legitimizing McCarthyist persecution.

The question was, what had specifically happened in the last six years to shift Yudkowsky's opinion on neoreaction from (paraphrased) "Scott says it's wrong, so I stopped reading" to (verbatim) "actively hostile"? Note especially the inversion from (both paraphrased) "I don't support neoreaction" (fine, of course) to "I don't even want them supporting me" (which was bizarre; humans with very different views on politics nevertheless have a common interest in not being transformed into paperclips).

Did Yudkowsky get new information about neoreaction's hidden Badness parameter sometime between 2013 and 2019, or did moral coercion from the left intensify (because Trump and because Berkeley)? My bet was on the latter.


However it happened, it didn't seem like the brain damage was limited to "political" topics, either. In November 2019, we saw another example of Yudkowsky destroying language for the sake of politeness, this time in the context of him trying to wirehead his fiction subreddit by suppressing criticism-in-general.

That's my characterization, of course: the post itself talks about "reducing negativity". In a followup comment, Yudkowsky wrote (bolding mine):

On discussion threads for a work's particular chapter, people may debate the well-executedness of some particular feature of that work's particular chapter. Comments saying that nobody should enjoy this whole work are still verboten. Replies here should still follow the etiquette of saying "Mileage varied: I thought character X seemed stupid to me" rather than saying "No, character X was actually quite stupid."

But ... "I thought X seemed Y to me"[20] and "X is Y" do not mean the same thing! The map is not the territory [LW · GW]. The quotation is not the referent [LW · GW]. The planning algorithm that maximizes the probability of doing a thing is different from the algorithm that maximizes the probability of having "tried" to do the thing [LW · GW]. If my character is actually quite stupid, I want to believe that my character is actually quite stupid. [? · GW]

It might seem like a little thing of no significance—requiring "I" statements is commonplace in therapy groups and corporate sensitivity training—but this little thing coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space made a pattern click [LW · GW]. If everyone is forced to only make claims about their map ("I think", "I feel") and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because disagreement is disrespect), that's great for reducing social conflict but not for the kind of collective information processing that accomplishes cognitive work,[21] like good literary criticism. A rationalist space needs to be able to talk about the territory.

To be fair, the same comment I quoted also lists "Being able to consider and optimize literary qualities" as one of the major considerations to be balanced. But I think (I think) it's also fair to note that (as we had seen on Less Wrong earlier that year), lip service is cheap. It's easy to say, "Of course I don't think politeness is more important than truth," while systematically behaving as if you did.

"Broadcast criticism is adversely selected for critic errors," Yudkowsky wrote in the post on reducing negativity, correctly pointing out that if a work's true level of mistakenness is M, the i-th commenter's estimate of mistakenness has an error term of , and commenters leave a negative comment when their estimate M + is greater than their threshold for commenting , then the comments that get posted will have been selected for erroneous criticism (high ) and commenter chattiness (low ).

I can imagine some young person who liked Harry Potter and the Methods being intimidated by the math notation and indiscriminately accepting this wisdom from the great Eliezer Yudkowsky as a reason to be less critical, specifically. But a somewhat less young person who isn't intimidated by math should notice that this is just regression to the mean. The same argument applies to praise!
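The selection effect is easy to simulate. Here's a minimal sketch (the error spread and commenting thresholds are numbers I invented, not anything from Yudkowsky's post): the posted criticism systematically overshoots the work's true mistakenness, and the posted praise symmetrically undershoots it.

```python
# Minimal simulation of selection on comments; all parameters invented.
# The work's true mistakenness is M; commenter i estimates M + E_i and
# speaks up only when the deviation clears their commenting threshold T_i.
import random

random.seed(0)
M = 5.0
criticism, praise = [], []
for _ in range(100_000):
    E = random.gauss(0, 2)        # this commenter's estimation error
    T = random.uniform(0.5, 5.0)  # how big a deviation prompts a comment
    if E > T:                     # "bad enough to complain about"
        criticism.append(M + E)
    elif E < -T:                  # "good enough to praise"
        praise.append(M + E)

mean = lambda xs: sum(xs) / len(xs)
print(f"true mistakenness:              {M:.2f}")
print(f"mean estimate in posted crits:  {mean(criticism):.2f}")  # well above M
print(f"mean estimate in posted praise: {mean(praise):.2f}")     # well below M
```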

What I would hope for from a rationality teacher and a rationality community would be efforts to instill the general skill of modeling things like regression to the mean and selection effects, as part of the general project of having a discourse that does collective information-processing.

And from the way Yudkowsky writes these days, it looks like he's ... not interested in collective information-processing? Or that he doesn't actually believe that's a real thing? "Credibly helpful unsolicited criticism should be delivered in private," he writes! I agree that the positive purpose of public criticism isn't solely to help the author. (If it were, there would be no reason for anyone but the author to read it.) But readers do benefit from insightful critical commentary. (If they didn't, why would they read the comments section?) When I read a story and am interested in the comments, it's because I'm interested in the thoughts of other readers, who might have picked up subtleties I missed. I don't want other people to self-censor comments on any plot holes or Fridge Logic they noticed for fear of dampening someone else's enjoyment or hurting the author's feelings.

Yudkowsky claims that criticism should be given in private because then the target "may find it much more credible that you meant only to help them, and weren't trying to gain status by pushing them down in public." I'll buy this as a reason why credibly altruistic unsolicited criticism should be delivered in private.[22] Indeed, meaning only to help the target just doesn't seem like a plausible critic motivation in most cases. But the fact that critics typically have non-altruistic motives doesn't mean criticism isn't helpful. In order to incentivize good criticism, you want people to be rewarded with status for making good criticisms. You'd have to be some sort of communist to disagree with this![23]

There's a striking contrast between the Yudkowsky of 2019 who wrote the "Reducing Negativity" post, and an earlier Yudkowsky (from even before the Sequences) who maintained a page on Crocker's rules: if you declare that you operate under Crocker's rules, you're consenting to other people optimizing their speech for conveying information rather than being nice to you. If someone calls you an idiot, that's not an "insult"; they're just informing you about the fact that you're an idiot, and you should probably thank them for the tip. (If you were an idiot, wouldn't you be better off knowing that?)

It's of course important to stress that Crocker's rules are opt-in on the part of the receiver; it's not a license to unilaterally be rude to other people. Adopting Crocker's rules as a community-level norm on an open web forum does not seem like it would end well.

Still, there's something precious about a culture where people appreciate the obvious normative ideal underlying Crocker's rules, even if social animals can't reliably live up to the normative ideal. Speech is for conveying information. People can say things—even things about me or my work—not as a command, or as a reward or punishment, but just to establish a correspondence between words and the world: a map that reflects a territory.

Appreciation of this obvious normative ideal seems strikingly absent from Yudkowsky's modern work—as if he's given up on the idea that reasoning in public is useful or possible. His Less Wrong commenting guidelines declare, "If it looks like it would be unhedonic to spend time interacting with you, I will ban you from commenting on my posts." The idea that people who are unhedonic to interact with might have intellectually substantive criticisms that the author has a duty to address [LW(p) · GW(p)] does not seem to have crossed his mind.

The "Reducing Negativity" post also warns against the failure mode of attempted "author telepathy": attributing bad motives to authors and treating those attributions as fact without accounting for uncertainty or distinguishing observations from inferences. I should be explicit, then: when I say negative things about Yudkowsky's state of mind, like it's "as if he's given up on the idea that reasoning in public is useful or possible", that's a probabilistic inference, not a certain observation.

But I think making probabilistic inferences is ... fine? The sentence "Credibly helpful unsolicited criticism should be delivered in private" sure does look to me like text generated by a state of mind that doesn't believe that reasoning in public is useful or possible. I think that someone who did believe in public reason would have noticed that criticism has information content whose public benefits might outweigh its potential to harm an author's reputation or feelings. If you think I'm getting this inference wrong, feel free to let me and other readers know why in the comments.

A Worthy Critic At Last (November 2019)

I received an interesting email comment on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, prey animals need to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.

I had thought of the "false positives are better than false negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for consequences to motivate what variables you paid attention to. But given the subspace that's relevant to your interests, you want to run an "epistemically legitimate" clustering algorithm on the data you see there, which depends on the data, not your values. Ideal probabilistic beliefs shouldn't depend on consequences.
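To make the distinction concrete, here's a minimal sketch (all numbers invented for illustration): the belief-forming step uses only the prior and the evidence, and the lopsided costs enter only at the decision step, moving the action threshold rather than the probability.

```python
# Minimal sketch: track probability and utility separately.
# All numbers are invented for illustration.

def p_predator_given_rustle(prior: float, likelihood_ratio: float) -> float:
    """Belief-forming step: Bayes' rule on the evidence; no payoffs in sight."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

def should_jump(p: float, cost_jump: float, cost_eaten: float) -> bool:
    """Decision step: payoffs set the action threshold, not the belief."""
    return p * cost_eaten > cost_jump

p = p_predator_given_rustle(prior=0.01, likelihood_ratio=5.0)
print(f"P(predator | rustle) = {p:.3f}")  # ~0.048: probably just the wind
# Jumping is cheap and being eaten is catastrophic, so jump anyway:
print(should_jump(p, cost_jump=1.0, cost_eaten=1000.0))  # True
```

The evolved prey animal gets the same jumpy behavior by baking the payoff asymmetry directly into its classifier; a cleanly-designed agent gets it while keeping its probabilities calibrated.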

Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is because probabilistic reasoning is broadly useful: epistemology can be derived from instrumental concerns. He agreed that severe wireheading issues potentially arise if you allow consequentialist concerns to affect your epistemics.

But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's two agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't propagate itself and that wireheading is inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places.

I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me in the way that almost everyone else in Berkeley was trying to mess with me.)

Writer's Block (November 2019)

I wrote to Ben about how I was still stuck on writing the grief-memoir. My plan had been to tell the story of the Category War while Glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-frenemies warning when you're about to publicly call them intellectually dishonest), then share it to Less Wrong and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water).

The reason it should have been safe to write was because it's good to explain things. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to tell the true story about why I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, &c."

So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because ... it wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was the nuclear option. If you're slowly but surely gaining territory in a conventional war, suddenly escalating to nukes would be pointlessly destructive. This metaphor was horribly non-normative (arguing is not a punishment; carefully telling a true story about an argument is not a nuke), but I didn't know how to make it stably go away.

A more motivationally-stable compromise would be to split off whatever generalizable insights would have been part of the story into their own posts. "Heads I Win, Tails?—Never Heard of Her" [LW · GW] had been a huge success as far as I was concerned, and I could do more of that kind of thing, analyzing the social stuff without making it personal, even if, secretly ("secretly"), it was personal.

Ben replied that it didn't seem like it was clear to me that I was a victim of systemic abuse, and that I was trying to figure out whether I was being fair to my abusers. He thought if I could internalize that, I would be able to forgive myself a lot of messiness, which would make the problem less daunting.

I said I would bite that bullet: Yes, I was trying to figure out whether I was being fair to my abusers, and it was an important question to get right! "Other people's lack of standards harmed me, therefore I don't need to hold myself to standards in my response because I have extenuating circumstances [LW · GW]" would be a lame excuse.

This seemed correlated with the recurring stalemated disagreement within our posse, where Michael/Ben/Jessica would say, "Fraud, if the word ever meant anything", and while I agreed that they were pointing to an important pattern of false representations optimized to move resources, I was still sympathetic to the Caliphate-defender's perspective that this usage of "fraud" was motte-and-baileying between different senses of the word. (Most people would say that the things we were alleging MIRI and CfAR had done wrong were qualitatively different from the things Enron and Bernie Madoff had done wrong.[24]) I wanted to do more work to formulate a more precise theory of the psychology of deception to describe exactly how things were messed up in a way that wouldn't be susceptible to the motte-and-bailey charge.

Interactions With a Different Rationalist Splinter Group (November–December 2019)

On 12 and 13 November 2019, Ziz published several blog posts laying out her grievances against MIRI and CfAR. On the fifteenth, Ziz and three collaborators staged a protest at the CfAR reunion being held at a retreat center in the North Bay near Camp Meeker. A call to the police falsely alleged that the protesters had a gun, resulting in a dramatic police reaction (SWAT team called, highway closure, children's group a mile away being evacuated—the works).

I was tempted to email links to Ziz's blog posts to the Santa Rosa Press-Democrat reporter covering the incident (as part of my information-sharing-is-good virtue ethics), but decided to refrain because I predicted that Anna would prefer I didn't.

The main relevance of this incident to my Whole Dumb Story is that Ziz's memoir–manifesto posts included a 5500-word section about me. Ziz portrays me as a slave to social reality, throwing trans women under the bus to appease the forces of cissexism. I don't think that's what's going on with me, but I can see why the theory was appealing.


On 12 December 2019 I had an interesting exchange with Somni, one of the "Meeker Four"—presumably out on bail at this time?—on Discord.

I told her it was surprising that she spent so much time complaining about CfAR, Anna Salamon, Kelsey Piper, &c., but I seemed to get along fine with her—because naïvely, one would think that my views were so much worse. Was I getting a pity pass because she thought false consciousness was causing me to act against my own transfem class interests? Or what?

In order to be absolutely clear about my terrible views, I said that I was privately modeling a lot of transmisogyny complaints as something like—a certain neurotype-cluster of non-dominant male is latching onto locally ascendant social-justice ideology in which claims to victimhood can be leveraged into claims to power. Traditionally, men are moral agents, but not patients; women are moral patients, but not agents. If weird non-dominant men aren't respected if identified as such (because low-ranking males aren't valuable allies, and don't have the intrinsic moral patiency of women), but can get victimhood/moral-patiency points for identifying as oppressed transfems, that creates an incentive gradient for them to do so. No one was allowed to notice this except me, because everybody who's anybody prefers to stay on the good side of social-justice ideology unless they have Something to Protect that requires defying it.

Somni said we got along because I was being victimized by the same forces of gaslighting as her and wasn't lying about my agenda. Maybe she should be complaining about me?—but I seemed to be following a somewhat earnest epistemic process, whereas Kelsey, Scott, and Anna were not. If I were to start going, "Here's my rationality org; rule #1: no transfems (except me); rule #2, no telling people about rule #1", then she would talk about it.

I would later remark to Anna that Somni and Ziz saw themselves as being oppressed by people's hypocritical and manipulative social perceptions and behavior. Merely using the appropriate language ("Somni ... she", &c.) protected her against threats from the Political Correctness police, but it actually didn't protect against threats from the Zizians. The mere fact that I wasn't optimizing for PR (lying about my agenda, as Somni said) was what made me not a direct enemy (although still a collaborator) in their eyes.

Philosophy Blogging Interlude 2! (December 2019)

I had a pretty productive blogging spree in December 2019. In addition to a number of more minor posts on this blog and [LW · GW] on [LW · GW] Less [LW · GW] Wrong [LW · GW], I also got out some more significant posts bearing on my agenda.

On this blog, in "Reply to Ozymandias on Fully Consensual Gender", I finally got out at least a partial reply to Ozy Brennan's June 2018 reply to "The Categories Were Made for Man to Make Predictions", affirming the relevance of an analogy Ozy had made between the socially-constructed natures of money and social gender, while denying that the analogy supported gender by self-identification. (I had been working on a more exhaustive reply, but hadn't managed to finish whittling it into a shape that I was totally happy with.)

I also polished and pulled the trigger on "On the Argumentative Form 'Super-Proton Things Tend to Come In Varieties'", my reply to Yudkowsky's implicit political concession to me back in March. I had been reluctant to post it based on an intuition of, "My childhood hero was trying to do me a favor; it would be a betrayal to reject the gift." The post itself explained why that intuition was crazy, but that just brought up more anxieties about whether the explanation constituted leaking information from private conversations—but I had chosen my words carefully such that it wasn't. ("Even if Yudkowsky doesn't know you exist [...] he's effectively doing your cause a favor" was something I could have plausibly written in the possible world where the antecedent was true.) Jessica said the post seemed good.

On Less Wrong, the mods had just announced a new end-of-year Review event [LW · GW], in which the best posts from the year before would be reviewed and voted on, to see which had stood the test of time and deserved to be part of our canon of cumulative knowledge. (That is, this Review period starting in late 2019 would cover posts published in 2018.)

This provided me with an affordance [LW(p) · GW(p)] to write some posts critiquing posts that had been nominated for the Best-of-2018 collection that I didn't think deserved such glory. In response to "Decoupling vs. Contextualizing Norms" [LW · GW] (which had been cited in a way that I thought obfuscatory during the "Yes Implies the Possibility of No" trainwreck [LW(p) · GW(p)]), I wrote "Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary" [LW · GW], appealing to our academically standard theory of how context affects meaning to explain why "decoupling vs. contextualizing norms" is a false dichotomy.

More significantly, in reaction to Yudkowsky's "Meta-Honesty: Firming Up Honesty Around Its Edge Cases" [LW · GW], I published "Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think" [LW · GW],[25] explaining why I thought "Meta-Honesty" was relying on an unproductively narrow sense of "honesty", because the ambiguity of natural language makes it easy to deceive people without technically lying.

I thought that one cut to the heart of the shocking behavior that we had seen from Yudkowsky lately. The "hill of meaning in defense of validity" affair had been driven by Yudkowsky's obsession with not technically lying, on two levels: he had proclaimed that asking for new pronouns "Is. Not. Lying." (as if that were the matter that anyone cared about—as if conservatives and gender-critical feminists should just pack up and go home after it had been demonstrated that trans people aren't lying), and he had shown no interest in clarifying his position on the philosophy of language, because he wasn't lying when he said that preferred pronouns weren't lies (as if that were the matter my posse cared about—as if I should keep honoring him as my caliph after it had been demonstrated that he hadn't lied). But his Sequences had articulated a higher standard [LW · GW] than merely not-lying. If he didn't remember, I could at least hope to remind everyone else.

I also wrote a little post, "Free Speech and Triskaidekaphobic Calculators" [LW · GW], arguing that it should be easier to have a rationality/alignment community that just does systematically correct reasoning than a politically savvy community that does systematically correct reasoning except when that would taint AI safety with political drama, analogous to how it's easier to build a calculator that just does correct arithmetic than a calculator that does correct arithmetic except that it never displays the result 13. In order to build a "triskaidekaphobic calculator", you would need to "solve arithmetic" anyway, and the resulting product would be limited not only in its ability to correctly compute 6 + 7 but also in the infinite family of calculations that include 13 as an intermediate result: if you can't count on (6 + 7) + 1 being the same as 6 + (7 + 1), you lose the associativity of addition.
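A toy implementation (my own illustration; the "round 13 up to 14" patch is an arbitrary choice) shows how the taboo breaks the algebra:

```python
# Toy illustration: a calculator forbidden from ever producing 13.
def taboo(n: int) -> int:
    return 14 if n == 13 else n  # arbitrary patch for the forbidden number

def add(a: int, b: int) -> int:
    return taboo(a + b)

# Associativity of addition is lost:
print(add(add(6, 7), 1))  # taboo(13) = 14, then taboo(15) = 15
print(add(6, add(7, 1)))  # taboo(8) = 8, then taboo(14) = 14
```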

A Newtonmas Party (December 2019)

On 20 December 2019, Scott Alexander messaged me on Discord—that I shouldn't answer if it would be unpleasant, but that he was thinking of asking about autogynephilia on the next Slate Star Codex survey, and wanted to know if I had any suggestions about question design, or if I could suggest any "intelligent and friendly opponents" to consult. After reassuring him that he shouldn't worry about answering being unpleasant ("I am actively at war with the socio-psychological forces that make people erroneously think that talking is painful!"), I referred him to my friend Tailcalled, who had a lot of experience conducting surveys and ran a "Hobbyist Sexologists" Discord server, which seemed likely to have some friendly opponents.

The next day (I assume while I still happened to be on his mind), Scott also commented on [LW(p) · GW(p)] "Maybe Lying Doesn't Exist", my post from back in October replying to his "Against Lie Inflation."

I was frustrated with his reply, which I felt was not taking into account points that I had already covered in detail. A few days later, on the twenty-fourth, I succumbed to [LW(p) · GW(p)] the temptation [LW(p) · GW(p)] to blow up at him [LW(p) · GW(p)] in the comments.

After commenting, I noticed what day it was and added a few more messages to our Discord chat—

okay, maybe speech is sometimes painful
the Less Wrong comment I just left you is really mean
and you know it's not because I don't like you
you know it's because I'm genuinely at my wit's end
after I posted it, I was like, "Wait, if I'm going to be this mean to Scott, maybe Christmas Eve isn't the best time?"
it's like the elephant in my brain is gambling that by being socially aggressive, it can force you to actually process information about philosophy which you otherwise would not have an incentive to
I hope you have a merry Christmas

And then, as an afterthought—

oh, I guess we're Jewish
that attenuates the "is a hugely inappropriately socially-aggressive blog comment going to ruin someone's Christmas" fear somewhat

Scott messaged back at 11:08 the next morning, Christmas Day. He explained that the thought process behind his comment was that he still wasn't sure where we disagreed and didn't know how to proceed except to dump his understanding of the philosophy (which would include things I already knew) and hope that I could point to the step I didn't like. He didn't know how to convince me of his sincerity and rebut my accusations of him motivatedly playing dumb (which he was inclined to attribute to the malign influence of Michael Vassar's gang).

I explained that the reason for those accusations was that I knew he knew about strategic equivocation, because he taught everyone else about it (as in his famous posts about the motte-and-bailey doctrine and the noncentral fallacy [LW · GW]). And so when he acted like he didn't get it when I pointed out that this also applied to "trans women are women", that just seemed implausible.

He asked for a specific example. ("Trans women are women, therefore trans women have uteruses" being a bad example, because no one was claiming that.) I quoted an article from The Nation: "There is another argument against allowing trans athletes to compete with cis-gender athletes that suggests that their presence hurts cis-women and cis-girls. But this line of thought doesn't acknowledge that trans women are in fact women." Scott agreed that this was stupid and wrong and a natural consequence of letting people use language the way he was suggesting (!).

I didn't think it was fair to ordinary people to expect them to go as deep into the philosophy-of-language weeds as I could before being allowed to object to this kind of chicanery. I thought "pragmatic" reasons to not just use the natural clustering that you would get by impartially running a clustering algorithm on the subspace of configuration space relevant to your goals, basically amounted to "wireheading" (optimizing someone's map for looking good rather than reflecting the territory) or "war" (optimizing someone's map to not reflect the territory in order to manipulate them). If I were to transition today and didn't pass as well as Jessica, and everyone felt obligated to call me a woman, they would be wireheading me: making me think my transition was successful, even though it wasn't. That's not a nice thing to do to a rationalist.

Scott thought that trans people had some weird thing going on in their brains such that being referred to as their natal sex was intrinsically painful, like an electric shock. The thing wasn't an agent, so the injunction to refuse to give in to extortion didn't apply. Having to use a word other than the one you would normally use in order to avoid subjecting someone to painful electric shocks was worth it.

I thought I knew things about the etiology of transness such that I didn't think the electric shock was inevitable, but I didn't want the conversation to go there if it didn't have to. I didn't have to ragequit the so-called "rationalist" community over a complicated empirical question, only over bad philosophy. Scott said he might agree with me if he thought the tradeoff between clarity and utilitarian benefit were unfavorable—or if he thought it had a chance of snowballing like in his "Kolmogorov Complicity and the Parable of Lightning".

I pointed out that what sex people are is more relevant to human social life than whether lightning comes before thunder. He said that the problem in his parable was that people were being made ignorant of things, whereas in the transgender case, no one was being kept ignorant; their thoughts were just following a longer path.

I was skeptical of the claim that no one was "really" being kept ignorant. If you're sufficiently clever and careful and you remember how language worked when Airstrip One was still Britain, then you can still think, internally, and express yourself as best you can in Newspeak. But a culture in which Newspeak is mandatory, and all of Oceania's best philosophers have clever arguments for why Newspeak doesn't distort people's beliefs, doesn't seem like a culture that could solve AI alignment.

I linked to Zvi Mowshowitz's post about how the claim that "everybody knows" something gets used to silence people trying to point out the thing: in this case, basically, "'Everybody knows' our kind of trans women are sampled from (part of) the male multivariate trait distribution rather than the female multivariate trait distribution, why are you being a jerk and pointing this out?" But I didn't think that everyone knew.[26] I thought the people who sort-of knew were being intimidated into doublethinking around it.

At this point, it was almost 2 p.m. (the paragraphs above summarizing a larger volume of typing), and Scott mentioned that he wanted to go to the Event Horizon Christmas party, and asked if I wanted to come and continue the discussion there. I assented, and thanked him for his time; it would be really exciting if we could avoid a rationalist civil war.

When I arrived at the party, people were doing a reading of the "Hero Licensing" dialogue epilogue [LW · GW] to Inadequate Equilibria, with Yudkowsky himself playing the Mysterious Stranger. At some point, Scott and I retreated upstairs to continue our discussion. By the end of it, I was feeling more assured of Scott's sincerity, if not his competence. Scott said he would edit in a disclaimer note at the end of "... Not Man for the Categories".

It would have been interesting if I also got the chance to talk to Yudkowsky for a few minutes, but if I did, I wouldn't be allowed to recount any details of that here due to the privacy rules I'm following.

The rest of the party was nice. People were reading funny GPT-2 quotes from their phones. At one point, conversation happened to zag in a way that let me show off the probability fact I had learned during Math and Wellness Month. A MIRI researcher sympathetically told me that it would be sad if I had to leave the Bay Area, which I thought was nice. There was nothing about the immediate conversational context to suggest that I might have to leave the Bay, but I guess by this point, my existence had become a context.

All in all, I was feeling less ragequitty about the rationalists[27] after the party—as if by credibly threatening to ragequit, the elephant in my brain had managed to extort more bandwidth from our leadership. The note Scott added to the end of "... Not Man for the Categories" still betrayed some philosophical confusion, but I now felt hopeful about addressing that in a future blog post explaining my thesis that unnatural category boundaries were for "wireheading" or "war".

It was around this time that someone told me that I wasn't adequately taking into account that Yudkowsky was "playing on a different chessboard" than me. (A public figure focused on reducing existential risk from artificial general intelligence is going to sense different trade-offs around Kolmogorov complicity strategies than an ordinary programmer or mere worm focused on things that don't matter.) No doubt. But at the same time, I thought Yudkowsky wasn't adequately taking into account the extent to which some of his longtime supporters (like Michael or Jessica) were, or had been, counting on him to uphold certain standards of discourse (rather than chess)?

Another effect of my feeling better after the party was that my motivation to keep working on my memoir of the Category War vanished—as if I was still putting weight on a zero-sum frame in which the memoir was a nuke that I only wanted to use as an absolute last resort.

Ben wrote (Subject: "Re: state of Church leadership"):

It seems to [me] that according to Zack's own account, even writing the memoir privately feels like an act of war that he'd rather avoid, not just using his own territory as he sees fit to create internal clarity around a thing.

I think this has to mean either
(a) that Zack isn't on the side of clarity except pragmatically where that helps him get his particular story around gender and rationalism validated
or
(b) that Zack has ceded the territory of the interior of his own mind to the forces of anticlarity, not for reasons, but just because he's let the anticlaritarians dominate his frame.

Or, I pointed out, (c) I had ceded the territory of the interior of my own mind to Eliezer Yudkowsky in particular, and while I had made a lot of progress unwinding this, I was still, still not done, and seeing him at the Newtonmas party set me back a bit.

"Riley" reassured me that finishing the memoir privately would be clarifying and cathartic for me. If people in the Caliphate came to their senses, I could either not publish it, or give it a happy ending where everyone comes to their senses.

(It does not have a happy ending where everyone comes to their senses.)

Further Discourses on What the Categories Were Made For (January–February 2020)

Michael told me he had changed his mind about gender and the philosophy of language. We talked about it on the phone. He said that the philosophy articulated in "A Human's Guide to Words" [? · GW] was inadequate for politicized environments where our choice of ontology is constrained. If we didn't know how to coin a new third gender, or teach everyone the language of "clusters in high-dimensional configuration space," our actual choices for how to think about trans women were basically three: creepy men (the TERF narrative), crazy men (the medical model), or a protected class of actual woman.[28]

According to Michael, while "trans women are real women" was a lie (in the sense that he agreed that me and Jessica and Ziz were not part of the natural cluster of biological females), it was also the case that "trans women are not real women" was a lie (in the sense that the "creepy men" and "crazy men" stories were wrong). "Trans women are women" could be true in the sense that truth is about processes that create true maps, such that we can choose the concepts that allow discourse and information flow. If the "creepy men" and "crazy men" stories are a cause of silencing, then—under present conditions—we had to choose the "protected class" story in order for people like Ziz to not be silenced.

My response (more vehemently when thinking on it a few hours later) was that this was a garbage bullshit appeal to consequences. If I wasn't going to let Ray Arnold get away with "we are better at seeking truth when people feel safe," I shouldn't let Michael get away with "we are better at seeking truth when people aren't oppressed." Maybe the wider world was ontology-constrained to those three choices, but I was aspiring to higher nuance in my writing.

"Thanks for being principled," he replied.


On 10 February 2020, Scott Alexander published "Autogenderphilia Is Common and Not Especially Related to Transgender", an analysis of the results of the autogynephilia/autoandrophilia questions on the recent Slate Star Codex survey. Based on eyeballing the survey data, Alexander proposed "if you identify as a gender, and you're attracted to that gender, it's a natural leap to be attracted to yourself being that gender" as a "very boring" theory.

I appreciated the endeavor of getting real data, but I was unimpressed with Alexander's analysis for reasons that I found difficult to write up in a timely manner; I've only just recently gotten around to polishing my draft and throwing it up as a standalone post. Briefly, I can see how it looks like a natural leap if you're verbally reasoning about "gender", but on my worldview, a hypothesis that puts "gay people (cis and trans)" in the antecedent is not boring and takes on a big complexity penalty, because that group is heterogeneous with respect to the underlying mechanisms of sexuality. I already don't have much use for "if you are a sex, and you're attracted to that sex" as a category of analytical interest, because I think gay men and lesbians are different things that need to be studied separately. Given that, "if you identify as a gender, and you're attracted to that gender" (with respect to "gender", not sex) comes off even worse: it's grouping together lesbians, and gay men, and heterosexual males with a female gender identity, and heterosexual females with a male gender identity. What causal mechanism could that correspond to?

(I do like the hypernym autogenderphilia.)

A Private Document About a Disturbing Hypothesis (early 2020)

There's another extremely important part of the story that would fit around here chronologically, but I again find myself constrained by privacy norms: everyone's common sense of decency (this time, even including my own) screams that it's not my story to tell.

Adherence to norms is fundamentally fraught for the same reason AI alignment is. In rich domains, attempts to regulate behavior with explicit constraints face a lot of adversarial pressure from optimizers bumping up against the constraint and finding the nearest unblocked strategies that circumvent it. The intent of privacy norms is to conceal information. But information in Shannon's sense is about what states of the world can be inferred given the states of communication signals; it's much more expansive than the denotative meaning of a text.

If norms can only regulate the denotative meaning of a text (because trying to regulate subtext is too subjective for a norm-enforcing coalition to coordinate on), someone who would prefer to reveal private information but also wants to comply with privacy norms has an incentive to leak everything they possibly can as subtext—to imply it, and hope to escape punishment on grounds of not having "really said it." And if there's some sufficiently egregious letter-complying-but-spirit-violating evasion of the norm that a coalition can coordinate on enforcing, the whistleblower has an incentive to stay only just shy of being that egregious.

Thus, it's unclear how much mere adherence to norms helps, when people's wills are actually misaligned. If I'm furious at Yudkowsky for prevaricating about my Something to Protect, and am in fact more furious rather than less that he managed to do it without violating the norm against lying, I should not be so foolish as to think myself innocent and beyond reproach for not having "really said it."

Having considered all this, I want to tell you about how I spent a number of hours from early May 2020 to early July 2020 working on a private Document about a disturbing hypothesis that had occurred to me earlier that year.

Previously, I had already thought it was nuts that trans ideology was exerting influence on the rearing of gender-non-conforming children—that is, children who are far outside the typical norm of behavior for their sex: very tomboyish girls and very effeminate boys.

Under recent historical conditions in the West, these kids were mostly "pre-gay" rather than trans. (The stereotype about lesbians being masculine and gay men being feminine is, like most stereotypes, basically true: sex-atypical childhood behavior between gay and straight adults has been meta-analyzed at Cohen's d ≈ 1.31 standard deviations for men and d ≈ 0.96 for women.) A solid majority of children diagnosed with gender dysphoria ended up growing out of it by puberty. In the culture of the current year, it seemed likely that a lot of those kids would instead get affirmed into a cross-sex identity at a young age, even though most of them would have otherwise (under a "watchful waiting" protocol) grown up to be ordinary gay men and lesbians.

What made this shift in norms crazy, in my view, was not just that transitioning younger children is a dubious treatment decision, but that it's a dubious treatment decision that was being made on the basis of the obvious falsehood that "trans" was one thing: the cultural phenomenon of "trans kids" was being used to legitimize trans adults, even though a supermajority of trans adults were in the late-onset taxon and therefore had never resembled these HSTS-taxon kids. That is: pre-gay kids in our Society are being sterilized in order to affirm the narcissistic delusions[29] of guys like me.

That much was obvious to anyone who's had their Blanchardian enlightenment, and wouldn't have been worth the effort of writing a special private Document about. The disturbing hypothesis that occurred to me in early 2020 was that, in the culture of the current year, affirmation of a cross-sex identity might happen to kids who weren't HSTS-taxon at all.

Very small children who are just learning what words mean say a lot of things that aren't true (I'm a grown-up; I'm a cat; I'm a dragon), and grownups tend to play along in the moment as a fantasy game, but they don't coordinate to make that the permanent new social reality.

But if the grown-ups have been trained to believe that "trans kids know who they are"—if they're emotionally eager at the prospect of having a transgender child, or fearful of the damage they might do by not affirming—they might selectively attend to confirming evidence that the child "is trans", selectively ignore contrary evidence that the child "is cis", and end up reinforcing a cross-sex identity that would not have existed if not for their belief in it—a belief that the same people raising the same child ten years ago wouldn't have held. (A September 2013 article in The Atlantic by the father of a male child with stereotypically feminine interests was titled "My Son Wears Dresses; Get Over It", not "My Daughter Is Trans; Get Over It".)

Crucially, if gender identity isn't an innate feature of toddler psychology, the child has no way to know anything is "wrong." If none of the grown-ups can say, "You're a boy because boys are the ones with penises" (because that's not what nice smart liberal people are supposed to believe in the current year), how is the child supposed to figure that out independently? Toddlers are not very sexually dimorphic, but sex differences in play style and social behavior tend to emerge within a few years. There were no cars in the environment of evolutionary adaptedness, and yet the effect size of the sex difference in preference for toy vehicles is a massive d ≈ 2.44, about one and a half times the size of the sex difference in adult height.

(I'm going with the MtF case without too much loss of generality; I don't think the egregore is quite as eager to transition females at this age, but the dynamics are probably similar.)

What happens when the kid develops a self-identity as a girl, only to find out, potentially years later, that she noticeably doesn't fit in with the (cis) girls on the many occasions that no one has explicitly spelled out in advance where people are using "gender" (perceived sex) to make a prediction or decision?

Some might protest, "But what's the harm? She can always change her mind later if she decides she's actually a boy." I don't doubt that if the child were to clearly and distinctly insist, "I'm definitely a boy," the nice smart liberal grown-ups would unhesitatingly accept that.

But the harm I'm theorizing is not that the child has an intrinsic male identity that requires recognition. (What is an "identity", apart from the ordinary factual belief that one is of a particular sex?) Rather, the concern is that social transition prompts everyone, including the child themself, to use their mental models of girls (juvenile female humans) to make (mostly subconscious rather than deliberative) predictions and decisions about the child, which will be a systematically worse statistical fit than their models of boys (juvenile male humans), because the child is, in fact, a boy (juvenile male human), and those miscalibrated predictions and decisions will make the child's life worse in a complicated, illegible way that doesn't necessarily result in the child spontaneously asserting, "I prefer that you call me a boy" against the current of everyone in the child's life having accepted otherwise for as long as the kid can remember.

Scott Alexander has written about how concept-shaped holes can be impossible to notice. In a culture whose civic religion celebrates being trans and denies that gender has truth conditions other than the individual's say-so, there are concept-shaped holes that would make it hard for a kid to notice the hypothesis "I'm having a systematically worse childhood than I otherwise would have because all the grown-ups in my life have agreed I was a girl since I was three years old, even though all of my actual traits are sampled from the joint distribution for juvenile male humans, not juvenile female humans."

The epistemic difficulties extend to the grown-ups as well. I think people who are familiar with the relevant scientific literature or come from an older generation will find the story I've laid out above pretty compelling, but the parents are likely to be unmoved. They know they didn't coach the child to claim to be a girl. On what grounds could a stranger who wasn't there (or a skeptical family friend who sees the kid maybe once a month) assert that subconscious influence must be at work?

In the early twentieth century, a German schoolteacher named Wilhelm von Osten claimed to have taught his horse, Clever Hans, to do arithmetic and other intellectual feats. One could ask, "How much is 2/5 plus 1/2?" and the stallion would first stomp his hoof nine times, and then ten times—representing 9/10ths, the correct answer. An investigation concluded that no deliberate trickery was involved: Hans could often give the correct answer when questioned by a stranger, demonstrating that von Osten couldn't be secretly signaling the horse when to stop stomping. But further careful experiments by Oskar Pfungst revealed that Hans was picking up on unconscious cues "leaked" by the questioner's body language as the number of stomps approached the correct answer: for instance, Hans couldn't answer questions whose answers the questioner themself didn't know.[30]

Notably, von Osten didn't accept Pfungst's explanation, continuing to believe that his intensive tutoring had succeeded in teaching the horse arithmetic.

It's hard to blame him, really. He had spent more time with Hans than anyone else. Hans observably could stomp out the correct answers to questions. Absent an irrational prejudice against the idea that a horse could learn arithmetic, why should he trust Pfungst's nitpicky experiments over the plain facts of his own intimately lived experience?

But what was in question wasn't the observations of Hans's performance, only the interpretation of what those observations implied about Hans's psychology. As Pfungst put it: "that was looked for in the animal which should have been sought in the man."

Similarly, in the case of a reputedly transgender three-year-old, a skeptical family friend isn't questioning observations of what the child said, only the interpretation of what those observations imply about the child's psychology. From the family's perspective, the evidence is clear: the child claimed to be a girl on many occasions over a period of months, and expressed sadness about being a boy. Absent an irrational prejudice against the idea that a child could be transgender, what could make them doubt the obvious interpretation of their own intimately lived experience?

From the skeptical family friend's perspective, there are a number of anomalies that cast serious doubt on what the family thinks is the obvious interpretation.

(Or so I'm imagining how this might go, hypothetically. The following illustrative vignettes may not reflect real events.)

For one thing, there may be clues that the child's information environment did not provide instruction on some of the relevant facts. Suppose that, six months before the child's social transition went down, another family friend had explained to the child that "Some people don't have penises." (Nice smart liberal grown-ups in the current year don't feel the need to be more specific.) Growing up in such a culture, the child's initial gender statements may reflect mere confusion rather than a deep-set need—and later statements may reflect social reinforcement of earlier confusion. Suppose that after social transition, the same friend explained to the child, "When you were little, you couldn't talk, so your parents had to guess whether you were a boy or a girl based on your parts." While this claim does convey the lesson that there's a customary default relationship between gender and genitals (in case that hadn't been clear before), it also reinforces the idea that the child is transgender.

For another thing, from the skeptical family friend's perspective, it's striking how the family and other grown-ups in the child's life seem to treat the child's statements about gender starkly differently than the child's statements about everything else.

Imagine that, around the time of the social transition, the child responded to "Hey kiddo, I love you" with, "I'm a girl and I'm a vegetarian." In the skeptic's view, both halves of that sentence were probably generated by the same cognitive algorithm—something like, "practice language and be cute to caregivers, making use of themes from the local cultural environment" (of nice smart liberal grown-ups who talk a lot about gender and animal welfare). In the skeptic's view, if you're not going to change the kid's diet on the basis of the second part, you shouldn't social transition the kid on the basis of the first part.

Perhaps even more striking is the way that the grown-ups seem to interpret the child's conflicting or ambiguous statements about gender. Imagine that, around the time social transition was being considered, a parent asked the child whether the child would prefer to be addressed as "my son" or "my daughter."

Suppose the child replied, "My son. Or you can call me she. Everyone should call me she or her or my son."

The grown-ups seem to mostly interpret exchanges like this as indicating that while the child is trans, she's confused about the gender of the words "son" and "daughter". They don't seem to pay much attention to the competing hypothesis that the child knows he's his parents' "son", but is confused about the implications of she/her pronouns.

It's not hard to imagine how differential treatment by grown-ups of gender-related utterances could unintentionally shape outcomes. This may be clearer if we imagine a non-gender case. Suppose the child's father's name is John Smith, and that after a grown-up explains "Sr."/"Jr." generational suffixes after it happened to come up in fiction, the child declares that his name is John Smith, Jr. now. Caregivers are likely to treat this as just a cute thing that the kid said, quickly forgotten by all. But if caregivers feared causing psychological harm by denying a declared name change, one could imagine them taking the child's statement as a prompt to ask followup questions. ("Oh, would you like me to call you John or John Jr., or just Junior?") With enough followup, it seems plausible that a name change to "John Jr." would meet with the child's assent and "stick" socially. The initial suggestion would have come from the child, but most of the optimization [LW · GW]—the selection that this particular statement should be taken literally and reinforced as a social identity, while others are just treated as a cute but not overly meaningful thing the kid said—would have come from the adults.

Finally, there is the matter of the child's behavior and personality. Suppose that, around the same time that the child's social transition was going down, a parent reported the child being captivated by seeing a forklift at Costco. A few months later, another family friend remarked that maybe the child is very competitive, and that "she likes fighting so much because it's the main thing she knows of that you can win."

I think people who are familiar with the relevant scientific literature or come from an older generation would look at observations like these and say, Well, yes, he's a boy; boys like vehicles (d ≈ 2.44!) and boys like fighting. Some of them might suggest that these observations should be counterindicators for transition—that the cross-gender verbal self-reports are less decision-relevant than the fact of a male child behaving in male-typical ways. But nice smart liberal grown-ups in the current year don't think that way.

One might imagine that the inferential distance [LW · GW] between nice smart liberal grown-ups and people from an older generation (or a skeptical family friend) could be crossed by talking about it, but it turns out that talking doesn't help much when people have radically different priors and interpret the same evidence differently.

Imagine a skeptical family friend wondering (about four months after the social transition) what "being a girl" means to the child. How did the kid know?

A parent obliges, asking the child: "Hey kiddo, somebody wants to know how you know that you are a girl."

"Why?"

"He's interested in that kind of thing."

"I know that I'm a girl because girls like specific things like rainbows and I like rainbows so I'm a girl."

"Is that how you knew in the first place?"

"Yeah."

"You know there are a lot of boys who like rainbows."

"I don't think boys like rainbows so well—oh hey! Here this ball is!"

(When recounting this conversation, the parent adds that rainbows hadn't come up before, and that the child was looking at a rainbow-patterned item at the time of answering.)

It would seem that the interpretation of this kind of evidence depends on one's prior convictions. If you think that transition is a radical intervention that might pass a cost–benefit analysis for treating rare cases of intractable sex dysphoria, answers like "because girls like specific things like rainbows" are disqualifying. (A fourteen-year-old who could read an informed-consent form would be able to give a more compelling explanation than that, but a three-year-old just isn't ready to make this kind of decision.) Whereas if you think that some children have a gender that doesn't match their assigned sex at birth, you might expect them to express that affinity at age three, without yet having the cognitive or verbal abilities to explain it. Teasing apart where these two views make different predictions seems like it should be possible, but might be beside the point, if the real crux is over what categories are made for. (Is sex an objective fact that sometimes merits social recognition, or is it better to live in a Society where people are free to choose the gender that suits them?)

Anyway, that's just a hypothesis that occurred to me in early 2020, about something that could happen in the culture of the current year, hypothetically, as far as I know. I'm not a parent and I'm not an expert on child development. And even if the "Clever Hans" etiological pathway I conjectured is real, the extent to which it might apply to any particular case is complex; you could imagine a kid who was "actually trans" whose social transition merely happened earlier than it otherwise would have due to these dynamics.

For some reason, it seemed important that I draft a Document about it with lots of citations to send to a few friends. I thought about cleaning it up and publishing it as a public blog post (working title: "Trans Kids on the Margin; and, Harms from Misleading Training Data"), but for some reason, that didn't seem as pressing.

I put an epigraph at the top:

If you love someone, tell them the truth.

—Anonymous

Given that I spent so many hours on this little research and writing project in May–July 2020, I think it makes sense for me to mention it at this point in my memoir, where it fits in chronologically. I have an inalienable right to talk about my own research interests, and talking about my own research interests obviously doesn't violate any norm against leaking private information about someone else's family, or criticizing someone else's parenting decisions.

The New York Times Pounces (June 2020)

On 1 June 2020, I received a Twitter DM from New York Times reporter Cade Metz, who said he was "exploring a story about the intersection of the rationality community and Silicon Valley." I sent him an email saying that I would be happy to talk, but that I had been pretty disappointed with the community lately: I was worried that the social pressures of trying to be a "community" and protect the group's status (e.g., from New York Times reporters who might portray us in an unflattering light?) might incentivize people to compromise on the ideals of systematically correct reasoning that made the community valuable in the first place.

He never got back to me. Three weeks later, all existing Slate Star Codex posts were taken down. A lone post on the main page explained that the New York Times piece was going to reveal Alexander's real last name and he was taking his posts down as a defensive measure. (No blog, no story?) I wrote a script (slate_starchive.py) to replace the Slate Star Codex links on this blog with links to the most recent Internet Archive copy.
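(The script itself isn't interesting enough to reproduce, but the gist is easy to sketch—what follows is a minimal paraphrase of the approach, not the actual slate_starchive.py. The Wayback Machine exposes an "availability" endpoint that returns the archived snapshot closest to a requested timestamp, so the job reduces to finding the Slate Star Codex links and substituting snapshot URLs.)

    import json
    import re
    import urllib.parse
    import urllib.request

    def archived_url(url, timestamp="20200622"):
        # Ask the Wayback Machine for the snapshot closest to `timestamp`.
        query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
        with urllib.request.urlopen("https://archive.org/wayback/available?" + query) as response:
            snapshots = json.load(response).get("archived_snapshots", {})
        closest = snapshots.get("closest")
        # If the Archive has no copy, leave the link as it was.
        return closest["url"] if closest else url

    def archive_links(text):
        # Replace each Slate Star Codex link with its closest archived copy.
        pattern = re.compile(r"https?://slatestarcodex\.com/[^\s\"')\]]*")
        return pattern.sub(lambda match: archived_url(match.group(0)), text)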

Philosophy Blogging Interlude 3! (mid-2020)

I continued my philosophy of language work, looking into the academic literature on formal models of communication and deception. I wrote a couple [LW · GW] posts [LW · GW] encapsulating what I learned from that—and I continued work on my "advanced" philosophy of categorization thesis, the sequel to "Where to Draw the Boundaries?" [LW · GW]

The disclaimer note that Scott Alexander had appended to "... Not Man for the Categories" after our Christmas 2019 discussion had said:

I had hoped that the Israel/Palestine example above made it clear that you have to deal with the consequences of your definitions, which can include confusion, muddling communication, and leaving openings for deceptive rhetorical strategies.

This is certainly an improvement over the original text without the note, but I took the use of the national borders metaphor to mean that Scott still hadn't gotten my point about there being laws of thought underlying categorization: mathematical principles governing how choices of definition can muddle communication or be deceptive. (But that wasn't surprising; by Scott's own admission, he's not a math guy.)

Category "boundaries" are a useful visual metaphor for explaining the cognitive function of categorization: you imagine a "boundary" in configuration space containing all the things that belong to the category.

If you have the visual metaphor, but you don't have the math, you might think that there's nothing intrinsically wrong with squiggly or discontinuous category "boundaries", just as there's nothing intrinsically wrong with Alaska not being part of the contiguous United States. It may be inconvenient that you can't drive from Alaska to Washington without going through Canada, but it's not wrong that the borders are drawn that way: Alaska really is governed by the United States.

But if you do have the math, a moment of introspection will convince you that the analogy between category "boundaries" and national borders is shallow.

A two-dimensional political map tells you which areas of the Earth's surface are under the jurisdiction of which government. In contrast, category "boundaries" tell you which regions of very high-dimensional configuration space correspond to a word/concept, which is useful because that structure can be used to make probabilistic inferences. You can use your observations of some aspects of an entity (some of the coordinates of a point in configuration space) to infer category-membership, and then use category membership to make predictions about aspects that you haven't yet observed.

But the trick only works to the extent that the category is a regular, non-squiggly region of configuration space: if you know that egg-shaped objects tend to be blue, and you see a black-and-white photo of an egg-shaped object, you can get close to picking out its color on a color wheel. But if egg-shaped objects tend to be blue or green or red or gray, you wouldn't know where to point to on the color wheel.

The analogous algorithm applied to national borders on a political map would be to observe the longitude of a place, use that to guess what country the place is in, and then use the country to guess the latitude—which isn't typically what people do with maps. Category "boundaries" and national borders might both be illustrated similarly in a two-dimensional diagram, but philosophically, they're different entities. The fact that Scott Alexander was appealing to national borders to defend gerrymandered categories suggested that he didn't understand this.
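(To illustrate the trick with a toy sketch in Python—using Yudkowsky's bleggs and rubes, with made-up numbers rather than anything real: you observe one coordinate of an entity, use it to infer category membership, and use membership to predict a coordinate you haven't observed.)

    # Toy configuration space: entities have "shape" and "hue" coordinates.
    # Each category is summarized by where its members cluster.
    categories = {
        "blegg": {"shape": "egg", "typical_hue": 240},  # egg-shaped things tend to be blue
        "rube": {"shape": "cube", "typical_hue": 0},    # cube-shaped things tend to be red
    }

    def predict_hue(observed_shape):
        # Infer category membership from the coordinate we did observe ...
        for category in categories.values():
            if category["shape"] == observed_shape:
                # ... then use membership to guess the coordinate we didn't.
                return category["typical_hue"]
        return None

    # A black-and-white photo of an egg-shaped object:
    print(predict_hue("egg"))  # 240—a good guess only if blegg hues cluster tightly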

I still had some deeper philosophical problems to resolve, though. If squiggly categories were less useful for inference, why would someone want a squiggly category boundary? Someone who said, "Ah, but I assign higher utility to doing it this way" had to be messing with you. Squiggly boundaries were less useful for inference; the only reason you would realistically want to use them would be to commit fraud, to pass off pyrite as gold by redefining the word "gold".

That was my intuition. To formalize it, I wanted some sensible numerical quantity that would be maximized by using "nice" categories and get trashed by gerrymandering. Mutual information was the obvious first guess, but that wasn't it, because mutual information lacks a "topology", a notion of "closeness" that would make some false predictions better than others by virtue of being "close".

Suppose the outcome space of X is {H, T} and the outcome space of Y is {1, 2, 3, 4, 5, 6, 7, 8}. I wanted to say that if observing X=H concentrates Y's probability mass on {1, 2, 3}, that's more useful than if it concentrates Y on {1, 5, 8}. But that would require the numerals in Y to be numbers rather than opaque labels; as far as elementary information theory was concerned, mapping eight states to three states reduced the entropy from lg 8 = 3 to lg 3 ≈ 1.58 no matter which three states they were.

How could I make this rigorous? Did I want to be talking about the variance of my features conditional on category membership? Was "connectedness" what I wanted, or was it only important because it cut down the number of possibilities? (There are 8!/(6!2!) = 28 ways to choose two elements from {1..8}, but only 7 ways to choose two contiguous elements.) I thought connectedness was intrinsically important, because we didn't just want fewer possibilities; we wanted things that were similar enough to make similar decisions about.

I put the question to a few friends in July 2020 (Subject: "rubber duck philosophy"), and Jessica said that my identification of the variance as the key quantity sounded right: it amounted to the expected squared error of someone trying to guess the values of the features given the category. It was okay that this wasn't a purely information-theoretic criterion, because for problems involving guessing a numeric quantity, bits that get you closer to the right answer were more valuable than bits that didn't.
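(Here's a minimal sketch of the distinction in Python, with the toy numbers from above standing in rather than anything from the actual emails: entropy-based measures can't tell the two partitions of {1..8} apart, but the expected-squared-error criterion can.)

    import math

    def conditional_entropy(cells):
        # H(Y|X) for uniform Y over {1..8}, where X reveals which cell Y is in.
        n = sum(len(cell) for cell in cells)
        return sum(len(cell) / n * math.log2(len(cell)) for cell in cells)

    def expected_squared_error(cells):
        # Expected squared error when you guess the mean of the revealed cell.
        n = sum(len(cell) for cell in cells)
        return sum(
            (y - sum(cell) / len(cell)) ** 2 for cell in cells for y in cell
        ) / n

    nice = [[1, 2, 3], [4, 5, 6, 7, 8]]           # X=H concentrates Y on {1, 2, 3}
    gerrymandered = [[1, 5, 8], [2, 3, 4, 6, 7]]  # X=H concentrates Y on {1, 5, 8}

    print(conditional_entropy(nice), conditional_entropy(gerrymandered))
    # ≈2.05 bits either way: information theory alone can't see the difference.
    print(expected_squared_error(nice), expected_squared_error(gerrymandered))
    # 1.5 vs. ≈5.23: the gerrymandered categories make your guesses much worse.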

A Couple of Impulsive Emails (September 2020)

I decided on "Unnatural Categories Are Optimized for Deception" as the title for my advanced categorization thesis. Writing it up was a major undertaking. There were a lot of nuances to address and potential objections to preëmpt, and I felt that I had to cover everything. (A reasonable person who wanted to understand the main ideas wouldn't need so much detail, but I wasn't up against reasonable people who wanted to understand.)

In September 2020, Yudkowsky Tweeted something about social media incentives prompting people to make nonsense arguments, and something in me boiled over. The Tweets were fine in isolation, but they rankled, given the absurdly disproportionate efforts I was undertaking to unwind his incentive-driven nonsense. I left a snarky, pleading reply and vented on my own timeline (with preview images from the draft of "Unnatural Categories Are Optimized for Deception"):

Who would have thought getting @ESYudkowsky's robot cult to stop trying to trick me into cutting my dick off (independently of the empirical facts determining whether or not I should cut my dick off) would involve so much math?? OK, I guess the math part isn't surprising, but—[31]

My rage-boil continued into staying up late writing him an angry email, which I mostly reproduce below (with a few redactions for either brevity or compliance with privacy norms, but I'm not going to clarify which).

To: Eliezer Yudkowsky <[redacted]>
Cc: Anna Salamon <[redacted]>
Date: Sunday 13 September 2020 2:24 a.m.
Subject: out of patience

"I could beg you to do it in order to save me. I could beg you to do it in order to avert a national disaster. But I won't. These may not be valid reasons. There is only one reason: you must say it, because it is true."
Atlas Shrugged by Ayn Rand

Dear Eliezer (cc Anna as mediator):

Sorry, I'm getting really really impatient (maybe you saw my impulsive Tweet-replies today; and I impulsively called Anna today; and I've spent the last few hours drafting an even more impulsive hysterical-and-shouty potential Less Wrong post; but now I'm impulsively deciding to email you in the hopes that I can withhold the hysterical-and-shouty post in favor of a lower-drama option of your choice): is there any way we can resolve the categories dispute in public?! Not any object-level gender stuff which you don't and shouldn't care about, just the philosophy-of-language part.

My grievance against you is very simple. You are on the public record claiming that:

you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.

I claim that this is false. I think I am standing in defense of truth when I insist on a word, brought explicitly into question, being used with some particular meaning, when I have an argument for why my preferred usage does a better job of "carving reality at the joints" and the one bringing my usage into question doesn't have such an argument. And in particular, "This word usage makes me sad" doesn't count as a relevant argument. I agree that words don't have intrinsic ontologically-basic meanings [LW · GW], but precisely because words don't have intrinsic ontologically-basic meanings, there's no reason to challenge someone's word usage except because of the hidden probabilistic inference it embodies.

Imagine one day David Gerard of /r/SneerClub said, "Eliezer Yudkowsky is a white supremacist!" And you replied: "No, I'm not! That's a lie." And imagine E.T. Jaynes was still alive and piped up, "You are ontologically confused if you think that's a false assertion. You're not standing in defense of truth if you insist on words, such as white supremacist, brought explicitly into question, being used with some particular meaning." Suppose you emailed Jaynes about it, and he brushed you off with, "But I didn't say you were a white supremacist; I was only targeting a narrow ontology error." In this hypothetical situation, I think you might be pretty upset—perhaps upset enough to form a twenty-one month grudge against someone whom you used to idolize?

I agree that pronouns don't have the same function as ordinary nouns. However, in the English language as actually spoken by native speakers, I think that gender pronouns do have effective "truth conditions" as a matter of cognitive science. If someone said, "Come meet me and my friend at the mall; she's really cool and you'll like her", and then that friend turned out to look like me, you would be surprised.

I don't see the substantive difference between "You're not standing in defense of truth (...)" and "I can define a word any way I want." [...]

[...]

As far as your public output is concerned, it looks like you either changed your mind about how the philosophy of language works, or you think gender is somehow an exception. If you didn't change your mind, and you don't think gender is somehow an exception, is there some way we can get that on the public record somewhere?!

As an example of such a "somewhere", I had asked you for a comment on my explanation, "Where to Draw the Boundaries?" [LW · GW] (with non-politically-hazardous examples about dolphins and job titles) [...] I asked for a comment from Anna, and at first she said that she would need to "red team" it first (because of the political context), and later she said that she was having difficulty for other reasons. Okay, the clarification doesn't have to be on my post. I don't care about credit! I don't care whether or not anyone is sorry! I just need this trivial thing settled in public so that I can stop being in pain and move on with my life.

As I mentioned in my Tweets today, I have a longer and better explanation than "... Boundaries?" mostly drafted. (It's actually somewhat interesting; the logarithmic score doesn't work as a measure of category-system goodness because it can only reward you for the probability you assign to the exact answer, but we want "partial credit" for almost-right answers, so the expected squared error is actually better here, contrary to what you said in the "Technical Explanation" about what Bayesian statisticians do). [...]

The only thing I've been trying to do for the past twenty-one months is make this simple thing established "rationalist" knowledge:

(1) For all nouns N, you can't define N any way you want, for at least 37 reasons [LW · GW].

(2) Woman is such a noun.

(3) Therefore, you can't define the word woman any way you want.

(Note, this is totally compatible with the claim that trans women are women, and trans men are men, and nonbinary people are nonbinary! It's just that you have to argue for why those categorizations make sense in the context you're using the word, rather than merely asserting it with an appeal to arbitrariness.)

This is literally modus ponens. I don't understand how you expect people to trust you to save the world with a research community that literally cannot perform modus ponens.

[...] See, I thought you were playing on the chessboard of being correct about rationality. Such that, if you accidentally mislead people about your own philosophy of language, you could just ... issue a clarification? I and Michael and Ben and Sarah and ["Riley"] and Jessica wrote to you about this and explained the problem in painstaking detail, and you stonewalled us. Why? Why is this so hard?!

[...]

No. The thing that's been driving me nuts for twenty-one months is that I expected Eliezer Yudkowsky to tell the truth. I remain,

Your heartbroken student,
Zack M. Davis

I followed it with another email after I woke up the next morning:

To: Eliezer Yudkowsky <[redacted]>
Cc: Anna Salamon <[redacted]>
Date: Sunday 13 September 2020 11:02 a.m.
Subject: Re: out of patience

[...] The sinful and corrupted part wasn't the initial Tweets; the sinful and corrupted part is this bullshit stonewalling when your Twitter followers and me and Michael and Ben and Sarah and ["Riley"] and Jessica tried to point out the problem. I've never been arguing against your private universe [...]; the thing I'm arguing against in "Where to Draw the Boundaries?" [LW · GW] (and my unfinished draft sequel, although that's more focused on what Scott wrote) is the actual text you actually published, not your private universe.

[...] you could just publicly clarify your position on the philosophy of language the way an intellectually-honest person would do if they wanted their followers to have correct beliefs about the philosophy of language?!

You wrote:

Using language in a way you dislike, openly and explicitly and with public focus on the language and its meaning, is not lying.

Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to.

The problem with "it's a policy debate about how to use language" is that it completely elides the issue that some ways of using language perform better at communicating information, such that attempts to define new words or new senses of existing words should come with a justification for why the new sense is useful for conveying information, and that is a matter of Truth. Without such a justification, it's hard to see why you would want to redefine a word except to mislead people with strategic equivocation.

It is literally true that Eliezer Yudkowsky is a white supremacist (if I'm allowed to define "white supremacist" to include "someone who once linked to the 'Race and intelligence' Wikipedia page [LW · GW] in a context that implied that it's an empirical question").

It is literally true that 2 + 2 = 6 (if I'm allowed to define '2' as •••-many).

You wrote:

The more technology advances, the further we can move people towards where they say they want to be in sexspace. Having said this we've said all the facts.

That's kind of like defining Solomonoff induction, and then saying, "Having said this, we've built AGI." No, you haven't said all the facts! Configuration space is very high-dimensional; we don't have access to the individual points. Trying to specify the individual points ("say all the facts") would be like what you wrote about in "Empty Labels" [LW · GW]—"not just that I can vary the label, but that I can get along just fine without any label at all." Since that's not possible, we need to group points in the space together so that we can use observations from the coordinates that we have observed to make probabilistic inferences about the coordinates we haven't. But there are mathematical laws governing how well different groupings perform, and those laws are a matter of Truth, not a mere policy debate.

[...]

But if behavior at equilibrium isn't deceptive, there's just no such thing as deception; I wrote about this on Less Wrong in "Maybe Lying Can't Exist?!" [LW · GW] (drawing on the academic literature about sender–receiver games). I don't think you actually want to bite that bullet?

In terms of information transfer, there is an isomorphism between saying "I reserve the right to lie 5% of the time about whether something is a member of category C" and adopting a new definition of C that misclassifies 5% of instances with respect to the old definition.

Like, I get that you're ostensibly supposed to be saving the world and you don't want randos yelling at you in your email about philosophy. But I thought the idea was that we were going to save the world by means of doing unusually clear thinking?

Scott wrote (with an irrelevant object-level example redacted): "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life." (Okay, he added a clarification after I spent Christmas yelling at him; but I think he's still substantially confused in ways that I address in my forthcoming draft post.)

You wrote: "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning."

I think I've argued pretty extensively this is wrong! I'm eager to hear counterarguments if you think I'm getting the philosophy wrong. But ... "people live in different private universes" is not a counterargument.

It makes sense that you don't want to get involved in gender politics. That's why I wrote "... Boundaries?" using examples about dolphins and job titles, and why my forthcoming post has examples about bleggs and artificial meat. This shouldn't be expensive to clear up?! This should take like, five minutes? (I've spent twenty-one months of my life on this.) Just one little ex cathedra comment on Less Wrong or somewhere (it doesn't have to be my post, if it's too long or I don't deserve credit or whatever; I just think the right answer needs to be public) affirming that you haven't changed your mind about 37 Ways Words Can Be Wrong? Unless you have changed your mind, of course?

I can imagine someone observing this conversation objecting, "[...] why are you being so greedy? We all know the real reason you want to clear up this philosophy thing in public is because it impinges on your gender agenda, but Eliezer already threw you a bone with the 'there's probably more than one type of dysphoria' thing. That was already a huge political concession to you! That makes you more than even; you should stop being greedy and leave Eliezer alone."

But as I explained in my reply, I think that argument is wrong: the whole mindset of public-arguments-as-political-favors is crazy. The fact that we're having this backroom email conversation at all (instead of just being correct about the philosophy of language on Twitter) is corrupt! I don't want to strike a deal in a political negotiation; I want shared maps that reflect the territory. I thought that's what this "rationalist community" thing was supposed to do? Is that not a thing anymore? If we can't do the shared-maps thing when there's any hint of political context (such that now you can't clarify the categories thing, even as an abstract philosophy issue about bleggs, because someone would construe that as taking a side on whether trans people are Good or Bad), that seems really bad for our collective sanity?! (Where collective sanity is potentially useful for saving the world, but is at least a quality-of-life improver if we're just doomed to die in 15 years no matter what.)

I really used to look up to you. In my previous interactions with you, I've been tightly cognitively constrained by hero-worship. I was already so starstruck that Eliezer Yudkowsky knows who I am, that the possibility that Eliezer Yudkowsky might disapprove of me, was too terrifying to bear. I really need to get over that, because it's bad for me, and it's really bad for you [LW · GW]. I remain,

Your heartbroken student,
Zack M. Davis

These emails were pretty reckless by my usual standards. (If I was entertaining some hope of serving as a mediator between the Caliphate and Vassar's splinter group after the COVID lockdowns were over, this outburst wasn't speaking well to my sobriety.) But as the subject line indicates, I was just—out of patience. I had spent years making all the careful arguments I could make. What was there left for me to do but scream?

The result of this recklessness was ... success! Without disclosing anything from any private conversations that may or may not have occurred, Yudkowsky did publish a clarification on Facebook: he had meant to criticize only the naïve essentialism of asserting that a word Just Means something and that anyone questioning it is Just Lying, and not the more sophisticated class of arguments that I had been making.

In particular, the post contained this line:

you are being the bad guy if you try to shut down that conversation by saying that "I can define the word 'woman' any way I want"

There it is! A clear ex cathedra statement that gender categories are not an exception to the general rule that categories aren't arbitrary. (Only 1 year and 8 months after asking for it.) I could quibble with some of Yudkowsky's exact writing choices, which I thought still bore the signature of political squirming,[32] but it would be petty to dwell on quibbles when the core problem had been addressed.

I wrote to Michael, Ben, Jessica, Sarah, and "Riley", thanking them for their support. After successfully bullying Scott and Eliezer into clarifying, I was no longer at war with the robot cult and feeling a lot better (Subject: "thank-you note (the end of the Category War)").

I had a feeling, I added, that Ben might be disappointed with the thank-you note insofar as it could be read as me having been "bought off" rather than being fully on the side of clarity-creation. But I contended that not being at war actually made it emotionally easier to do clarity-creation writing. Now I would be able to do it in a contemplative spirit of "Here's what I think the thing is actually doing" rather than in hatred with flames on the side of my face.

A Private Catastrophe (December 2020)

There's a dramatic episode that would fit here chronologically if this were an autobiography (which existed to tell my life story), but since this is a topic-focused memoir (which exists because my life happens to contain this Whole Dumb Story which bears on matters of broader interest, even if my life would not otherwise be interesting), I don't want to spend more wordcount than is needed to briefly describe the essentials.

I was charged by members of the extended Michael Vassar–adjacent social circle with the duty of taking care of a mentally-ill person at my house on 18 December 2020. (We did not trust the ordinary psychiatric system to act in patients' interests.) I apparently did a poor job, and ended up saying something callous on the care team group chat after a stressful night, which led to a chaotic day on the nineteenth, and an ugly falling-out between me and the group. The details aren't particularly of public interest.

My poor performance during this incident weighs on my conscience particularly because I had previously been in the position of being crazy and benefiting from the help of my friends (including many of the same people involved in this incident) rather than getting sent back to psychiatric prison ("hospital", they call it a "hospital"). Of all people, I had a special debt to "pay it forward", and one might have hoped that I would also have special skills, that having been on the receiving end of a non-institutional psychiatric tripsitting operation would help me know what to do on the giving end. Neither of those panned out.

Some might appeal to the proverb "All's well that ends well", noting that the person in trouble ended up recovering, and that, while the stress of the incident contributed to a somewhat serious relapse of my own psychological problems on the night of the nineteenth and in the following weeks, I ended up recovering, too. But recovering normal functionality after a traumatic episode doesn't imply a lack of other lasting consequences (to the psyche, to trusting relationships, &c.). I am therefore inclined to dwell on another proverb, "A lesson is learned but the damage is irreversible."

A False Dénouement (January 2021)

I published "Unnatural Categories Are Optimized for Deception" [LW · GW] in January 2021.

I wrote back to Abram Demski regarding his comments from fourteen months before: on further thought, he was right. Even granting my point that evolution didn't figure out how to track probability and utility separately, the fact that it didn't (as Abram had pointed out) meant that not tracking them separately could be an effective AI design. Just because evolution takes shortcuts that human engineers wouldn't didn't mean shortcuts are "wrong". (Rather, there are laws governing which kinds of shortcuts work.)

Abram was also right that it would be weird if reflective coherence was somehow impossible: the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'its own' code." In that light, it made sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about—as sacrilegious as that felt to type.

And yet, somehow, "have accurate beliefs" seemed more fundamental than other convergent instrumental subgoals like "seek power and resources". Could this be made precise? As a stab in the dark, was it possible that the theorems on the ubiquity of power-seeking [LW · GW] might generalize to a similar conclusion about "accuracy-seeking"? If it didn't, the reason why it didn't might explain why accuracy seemed more fundamental.


And really, that should have been the end of the story. At the cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word woman any way you like. This suggested poor cognitive returns on investment from interacting with the "rationalist" community—if it took that much effort to correct a problem I had noticed myself, I couldn't expect them to help me with problems I couldn't detect—but I didn't think I was entitled to more. If I hadn't been further provoked, I wouldn't have had occasion to continue waging the robot-cult religious civil war.

It turned out that I would have occasion to continue waging the robot-cult religious civil war. (To be continued.)


  1. The original quote says "one hundred thousand straights" ... "gay community" ... "gay and lesbian" ... "franchise rights on homosexuality" ... "unauthorized queer." ↩︎

  2. Although Sarah Constantin and "Riley" had also been involved in reaching out to Yudkowsky and were included in many subsequent discussions, they seemed like more marginal members of the group that was forming. ↩︎

  3. At least, not blameworthy in the same way as someone who committed the same violence as an individual. ↩︎

  4. The Sequences post referenced here, "Your Price for Joining" [LW · GW], argues that rationalists are too prone to "take their ball and go home" rather than tolerating imperfections in a collective endeavor. To combat this, Yudkowsky proposes a norm:

    If the issue isn't worth your personally fixing by however much effort it takes, and it doesn't arise from outright bad faith, it's not worth refusing to contribute your efforts to a cause you deem worthwhile.

    I claim that I was meeting this standard: I was willing to personally fix the philosophy-of-categorization issue no matter how much effort it took, and the issue did arise from outright bad faith. ↩︎

  5. It was common practice in our subculture to name group houses. My apartment was "We'll Name It Later." ↩︎

  6. I'm not giving Mike a pseudonym because his name is needed for this adorable anecdote to make sense, and I'm not otherwise saying sensitive things about him. ↩︎

  7. Anna was a very busy person who I assumed didn't always have time for me, and I wasn't earning-to-give anymore after my 2017 psych ward experience made me more skeptical about institutions (including EA charities) doing what they claimed. Now that I'm not currently dayjobbing, I wish I had been somewhat less casual about spending money during this period. ↩︎

  8. I was still deep enough in my hero worship that I wrote "plausibly" in an email at the time. Today, I would not consider the adverb necessary. ↩︎

  9. I particularly appreciated Said Achmiz's defense of disregarding community members' feelings [LW(p) · GW(p)], and Ben's commentary on speech acts that lower the message length of proposals to attack some group [LW(p) · GW(p)]. ↩︎

  10. No one ever seems to be able to explain to me what this phrase means. [LW · GW] ↩︎

  11. For one important disanalogy, perps don't gain from committing manslaughter. ↩︎

  12. The draft was hidden, but the API apparently didn't filter out comments on hidden posts, and the thread was visible on the third-party GreaterWrong site; I filed a bug. ↩︎

  13. Arnold qualifies this in the next paragraph:

    [in public. In private things are much easier. It's also the case that private channels enable collusion—that was an update [I]'ve made over the course of the conversation. ]

    Even with the qualifier, I still think this deserves a "(!!)". ↩︎

  14. An advantage of mostly living on the internet is that I have logs of the important things. I'm only able to tell this Whole Dumb Story with this much fidelity because for most of it, I can go back and read the emails and chatlogs from the time. Now that audio transcription has fallen to AI, maybe I should be recording more real-life conversations? In the case of this meeting, supposedly one of the Less Wrong guys was recording, but no one had it when I asked in October 2022. ↩︎

  15. Rationality and Effective Altruism Community Hub ↩︎

  16. Oddly, Kelsey seemed to think the issue was that my allies and I were pressuring Yudkowsky to make a public statement, which he supposedly never does. From our perspective, the issue was that he had made a statement and it was wrong. ↩︎

  17. As I had explained to him earlier, Alexander's famous post on the noncentral fallacy [LW · GW] condemned the same shenanigans he praised in the context of gender identity: Alexander's examples of the noncentral fallacy had been about edge-cases of a negative-valence category being inappropriately framed as typical (abortion is murder, taxation is theft), but "trans women are women" was the same move with a positive-valence category.

    In "Does the Glasgow Coma Scale exist? Do Comas?" (published just three months before "... Not Man for the Categories"), Alexander defends the usefulness of "comas" and "intelligence" in terms of their predictive usefulness. (The post uses the terms "predict", "prediction", "predictive power", &c. 16 times.) He doesn't say that the Glasgow Coma Scale is justified because it makes people happy for comas to be defined that way, because that would be absurd. ↩︎

  18. The last of the original Sequences had included a post, "Rationality: Common Interest of Many Causes" [LW · GW], which argued that different projects should not regard themselves "as competing for a limited supply of rationalists with a limited capacity for support; but, rather, creating more rationalists and increasing their capacity for support." It was striking that the "Kolmogorov Option"-era Caliphate took the opposite policy: throwing politically unpopular projects (like autogynephilia- or human-biodiversity-realism) under the bus to protect its own status. ↩︎

  19. The original TechCrunch comment would seem to have succumbed to linkrot, but it was quoted by Moldbug and others. ↩︎

  20. The pleonasm here ("to me" being redundant with "I thought") is especially galling coming from someone who's usually a good writer! ↩︎

  21. At best, "I" statements make sense in a context where everyone's speech is considered part of the "official record". Wrapping controversial claims in "I think" removes the need for opponents to immediately object for fear that the claim will be accepted onto the shared map. ↩︎

  22. Specifically, altruism towards the author. Altruistic benefits to other readers are a reason for criticism to be public. ↩︎

  23. That is, there's an analogy between economically valuable labor, and intellectually productive criticism: if you accept the necessity of paying workers money in order to get good labor out of them, you should understand the necessity of awarding commenters status in order to get good criticism out of them. ↩︎

  24. On the other hand, there's a case to be made that the connection between white-collar crime and the problems we saw with the community is stronger than it first appears. Trying to describe the Blight to me in April 2019, Ben wrote, "People are systematically conflating corruption, accumulation of dominance, and theft, with getting things done." I imagine a rank-and-file EA looking at this text and shaking their head at how hyperbolically uncharitable Ben was being. Dominance, corruption, theft? Where was his evidence for these sweeping attacks on these smart, hard-working people trying to make the world a better place?

    In what may be a relevant case study, three and a half years later, the FTX cryptocurrency exchange founded by effective altruists as an earning-to-give scheme turned out to be an enormous fraud à la Enron and Madoff. In Going Infinite, Michael Lewis's book on FTX mastermind Sam Bankman-Fried, Lewis describes Bankman-Fried's "access to a pool of willing effective altruists" as the "secret weapon" of FTX predecessor Alameda Research: Wall Street firms powered by ordinary greed would have trouble trusting employees with easily-stolen cryptocurrency, but ideologically-driven EAs could be counted on to be working for the cause. Lewis describes Alameda employees seeking to prevent Bankman-Fried from deploying a trading bot with access to $170 million for fear of losing all that money "that might otherwise go to effective altruism". Zvi Mowshowitz's review of Going Infinite recounts Bankman-Fried in 2017 urging Mowshowitz to disassociate with Ben because Ben's criticisms of EA hurt the cause. (It's a small world.)

    Rank-and-file EAs can contend that Bankman-Fried's crimes have no bearing on the rest of the movement, but insofar as FTX looked like a huge EA success before it turned out to all be a lie, Ben's 2019 complaints are looking prescient to me in retrospect. (And insofar as charitable projects are harder to evaluate than whether customers can withdraw their cryptocurrency, there's reason to fear that other apparent EA successes may also be illusory.) ↩︎

  25. The ungainly title was softened from an earlier draft following feedback from the posse; I had originally written "... Surprisingly Useless". ↩︎

  26. On this point, it may be instructive to note that a 2023 survey found that only 60% of the UK public knew that "trans women" were born male. ↩︎

  27. Enough to not even scare-quote the term here. ↩︎

  28. I had identified three classes of reasons not to carve reality at the joints: coordination (wanting everyone to use the same definitions) [LW · GW], wireheading (making the map look good, at the expense of it failing to reflect the territory), and war (sabotaging someone else's map to make them do what you want). Michael's proposal would fall under "coordination" insofar as it was motivated by the need to use the same categories as everyone else. (Although you could also make a case for "war" insofar as the civil-rights model winning entailed that adherents of the TERF or medical models must lose.) ↩︎

  29. Reasonable trans people aren't the ones driving the central tendency of the trans rights movement. When analyzing a wave of medical malpractice on children, I think I'm being literal in attributing causal significance to a political motivation to affirm the narcissistic delusions of (some) guys like me, even though not all guys like me are delusional, and many guys like me are doing fine maintaining a non-guy social identity without spuriously dragging children into it. ↩︎

  30. Oskar Pfungst, Clever Hans (The Horse Of Mr. Von Osten): A Contribution To Experimental Animal and Human Psychology, translated from the German by Carl L. Rahn ↩︎

  31. I anticipate that some readers might object to the "trying to trick me into cutting my dick off" characterization. But as Ben had pointed out earlier, we have strong reason to believe that an information environment of ubiquitous propaganda was creating medical transitions on the margin. I think it made sense for me to use emphatic language to highlight what was actually at stake here! ↩︎

  32. The way that the post takes pains to cast doubt on whether someone who is alleged to have committed the categories-are-arbitrary fallacy is likely to have actually committed it ("the mistake seems like it wouldn't actually fool anybody or be committed in real life, I am unlikely to be sympathetic to the argument", "But be wary of accusing somebody of planning to do this, if you haven't documented them actually doing it") is in stark contrast to the way that "A Human's Guide to Words" had taken pains to emphasize that categories shape cognition regardless of whether someone is consciously trying to trick you ("drawing a boundary in thingspace is not a neutral act [...] Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind" [LW · GW]). I'm suspicious that the change in emphasis reflects the need to not be seen as criticizing the "pro-trans" coalition, rather than any new insight into the subject matter.

    The first comment on the post linked to "... Not Man for the Categories". Yudkowsky replied, "I assumed everybody reading this had already read https://wiki.lesswrong.com/wiki/A_Human's_Guide_to_Words", a non sequitur that could be taken to suggest (but did not explicitly say) that the moral of "... Not Man for the Categories" was implied by "A Human's Guide to Words" (in contrast to my contention that "... Not Man for the Categories" was getting it wrong). ↩︎

191 comments

Comments sorted by top scores.

comment by Yoav Ravid · 2023-12-31T06:08:24.225Z · LW(p) · GW(p)

I don't have a lot to say, but I feel like mentioning that I read the whole thing, enjoyed it, and agreed with you, including on the point that if rationalists can't agree with your philosophy of language because of instrumental motivations then it's a problem for us as a group of people who try to reason clearly without such influences.

comment by Viliam · 2023-12-30T22:17:00.826Z · LW(p) · GW(p)

This is a fascinating story about obsession written from the first-person perspective. It is also too long to get an object-level reply, unless one decides to spend an entire day composing one. A meaningful meta-level reply, such as "dude, relax, and get some psychological help", will probably get me classified as an enemy, and will be interpreted as further evidence about how sick and corrupt the mainstream-rationalist society is.

Honestly, I don't care about your feud, because it became too complicated for me to understand. Is there a way to summarize this briefly? Eliezer disagreed with you about something, or maybe you just interpreted something he wrote as a disagreement with you... and now your soul can't find peace until he admits that he was wrong and you were right about things that are too meta for me to understand wtf you are talking about...

You had an erotic fantasy that became a centerpiece of your mental landscape, and you insist that it contains the actual answer to the mysteries of trans-sexuality, and you are frustrated that other people (especially rationalists) do not see it the same way. Well, maybe it does, maybe it does not. Maybe your fantasy is typical, maybe it is unique, I don't know; honestly I don't even care much; maybe if other people in a similar position all agreed with you, I would say "okay, this seems to explain some things", but this doesn't seem to be the case. You keep insisting that unless everyone updates on the generalization of your personal nontransferable experience, we have all collectively failed as a rationalist community. I disagree.

The reason for my disagreement is not that I am a politically correct sheep unable to contemplate edgy thoughts, but that (1) I am not even sure what the position you feel so strongly about actually is, (2) you seem to generalize from your personal experience to everyone else, which seems suspicious given the lack of people saying "me too", and (3) your argumentation seems to consist of writing longer and longer articles, going more meta, and accusing people of failing at rationality if they disagree with you.

What could possibly convince me that you are correct? I guess, if you described your theory clearly, and then most transsexual people here said "yes, this also describes my personal experience". (I would still assign some probability to the possibility that readers of Less Wrong are not typical representatives of the population.)

But what if you are right, and everyone else is wrong, and by rejecting you merely because no one else agrees with you, I will be wrong, too? Frankly, I am okay with that. I generally prefer to be right rather than wrong, but I do not imagine that I have a magical ability to figure out the truth based on little evidence. No strong evidence = no strong opinion. Your strong belief is weak evidence in that direction, but the lack of people agreeing with you is evidence in the opposite direction; from my perspective, "I don't know" is the rational conclusion.

(If you find people who agree with your opinions, perhaps it would make sense to ask them to describe it, using their own words. You are clearly doing a bad job convincing people, maybe they will be more successful at it.)

If you have no new evidence (not "going more meta"; not "writing even longer articles"), please take a break.

Replies from: Vaniver, tailcalled
comment by Vaniver · 2023-12-31T20:03:22.365Z · LW(p) · GW(p)

Is there a way to summarize this briefly? Eliezer disagreed with you about something, or maybe you just interpreted something he wrote as a disagreement with you... and now your soul can't find peace until he admits that he was wrong and you were right about things that are too meta for me to understand wtf you are talking about...

Here's an attempt.

Sometimes people have expectations of each other, like "you won't steal objects from my house".  Those expectations get formed by both explicit and implicit promises. Violating those expectations is often a big deal, not just to the injured party but also to third parties--someone who stole from Alice might well steal from you, too.

To the extent this community encouraged expectations of each other, they were about core epistemic virtues and discussion practices. People will try to ensure their beliefs are consistent with their other beliefs; they won't say things without believing them; they'll share evidence when they can; when they are bound to be uncooperative, they at least explain how and why they'll be uncooperative, and so on. 

[For example, I keep secrets because I think information can be owned, even tho this is cooperative with the information-owner and not with the information-wanter.]

So "Eliezer disagreed with you about something" is an understatement; disagreement is fine, expected even! The thing was that instead of having a regular disagreement in the open, Zack saw Eliezer as breaking a lot of these core expectations, not being open about it or acknowledging it when being called out, and also others not reacting to Eliezer breaking those expectations. (If Eliezer had punched Zack, people would probably have thought that was shocking and criticized it, but this was arguably worse given the centrality of these expectations to Eliezer's prominence and yet people were reacting less.)

That said, the promises were (I think) clearly aspirational / mediated by the pressures of having to actually exist in the world. I do think it makes sense to have a heresy budget, and I think Zack got unlucky with the obsession lottery. I think if people had originally said to Zack "look, we're being greengrocers on your pet issue, sorry about throwing you to the wolves" he would have been sad but moved on; see his commentary on the 2013 disavowal.

Instead they made philosophical arguments that, as far as I can tell, were not correct, and this was crazy-making, because Zack now also doubted his reasoning that led to him disagreeing with them, but no one would talk about this publicly. (Normally if Zack was making a mistake, people could just point to the mistake, and then he could fix the upstream generator of that mistake and everyone could move on.) And, also, to the extent that they generalized their own incorrect justifications to reasoning about other fields, this was making them crazy, in a way that should have alarmed third parties who were depending on their reasoning. The disinterest of those third parties was itself also expectation-violating.

[I don't think I was ever worried about this bleeding over into reasoning about other things; I probably would have joined the conversation more actively if I had? I do regret not asking people what their strategy was back in ~2019; the only people I remember talking to about this were Zack and the LW team.]

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-12-31T20:47:40.967Z · LW(p) · GW(p)

I think this is a pretty good summary.

I do want to… disagree? quibble? (I am not actually sure how to characterize this)… on one bit, though:

I do think it makes sense to have a heresy budget

I agree that it makes sense to have a heresy budget, but I think that it’s important to distinguish between heresies that directly affect you and/or other people in your own community, and heresies that you can “safely”[1] ignore.

For example, suppose that I disagree with the mainstream consensus on climate change. But I, personally, cannot do anything to affect government policy related to climate change, or otherwise alter how society treats the issue. Maybe our community as a whole can have some effect on such things… but probably not. And there’s nothing to be done about it on an individual basis. So if I, and the rest of the rationalist community, mostly avoids talking about the subject (and, if forced to discuss it, we mouth the necessary platitudes and quickly change the subject), then relatively little is lost.

Now suppose that the subject is something like… distortions in reporting, by municipal governments, of violent crime statistics. Getting the wrong answer on a question like that might expose you and your family to significant personal danger, so it’s important to get the right answer. On the other hand, there’s nothing special about rationalists that makes this a more important question for us than for anyone else. On the third hand, maybe we’re unusually well-positioned to get such questions right. Still, the question of whether this should be part of our “heresy budget” is not clear-cut.

(But see COVID for an example of a question in this latter category which we did choose to include in our “heresy budget”. Of course, it wasn’t a very severe heresy, and maybe that’s part of why we were able to take that stance toward it. In any case, it worked out fairly well for us, yes?)

Finally, suppose that the subject is something like homeschooling, or child education more generally. Not only is this question highly personal for anyone who has children, but it’s substantially more likely to be relevant to people in the rationalist community than for the general population, due to the prevalence in said community of a variety of (heritable!) personality traits. Getting questions in this domain wrong is unusually likely, for us, to result in inflicting substantial suffering on our children. Quite reasonably, therefore, this is solidly within our “heresy budget”.

It seems clear that trans issues fall into the third category…

… or they should, at least. But that’s not how the rest of the rationalist community sees it, as Zack has discovered. That is, at the least, somewhat odd.

(Note that it does not suffice to say “actually, the mainstream consensus on trans issues is correct, so there is nothing to be heretical about”—since the heresy seems to consist not only of reaching some dissenting conclusion, but also of treating various relevant questions as open in the first place!)


  1. In the “they have not, in fact, come for me (yet)” sense of “safely”, at least. ↩︎

Replies from: Algon
comment by Algon · 2024-01-04T13:47:21.772Z · LW(p) · GW(p)

Finally, suppose that the subject is something like homeschooling, or child education more generally. Not only is this question highly personal for anyone who has children, but it’s substantially more likely to be relevant to people in the rationalist community than for the general population, due to the prevalence in said community of a variety of (heritable!) personality traits. Getting questions in this domain wrong is unusually likely, for us, to result in inflicting substantial suffering on our children. Quite reasonably, therefore, this is solidly within our “heresy budget”.

It seems clear that trans issues fall into the third category…

… or they should, at least. But that’s not how the rest of the rationalist community sees it, as Zack has discovered. That is, at the least, somewhat odd.

I mean, this is probably correct. But my problem is that despite finding a lot of Zack's claims on this topic in the past quite reasonable, I find the discussion over the last year or two exhausting to engage with. This post is 20k+ words alone! I'm not reading that. And there's no article I know of which is a reasonably good summary of what the heck is going on. So I'm not observing what Zack's saying, let alone deciding and acting. Right now, I'm struggling to orient.

By the way, thank you for writing this comment. Same goes for @tailcalled [LW · GW] and @Vaniver [LW · GW]'s comments on this post. If only @Zack_M_Davis [LW · GW] would write posts as concise as these comments!

EDIT: Changed Zack_D to Zack_M_Davis bc of Rafael Harth's correct response that Zack_D has not written any posts longer than 2k words.

Replies from: tailcalled, sil-ver
comment by tailcalled · 2024-01-04T14:10:13.232Z · LW(p) · GW(p)

To some degree, litigating deceptive behavior from Eliezer and Scott is just inherently going to be exhausting, because it's in their interest to make the deception confusing.

comment by Rafael Harth (sil-ver) · 2024-01-04T14:00:13.588Z · LW(p) · GW(p)

To be fair, @Zack_D [LW · GW] hasn't written any posts longer than 2000 words!

comment by tailcalled · 2023-12-31T00:04:38.563Z · LW(p) · GW(p)

I agree that Zack's point can sort of be unclear. To me his vibe doesn't come off as mostly focusing on trans etiology, but instead as a three-step argument about what the rationalist community should acknowledge, with most of the focus being on the first step:

  • You can't just use redefinitions to make trans women similar to cis women.
  • Trans women start out much more similar to cis men than to cis women, and transitioning doesn't do very much.
  • Therefore transness causes a lot of political problems.

However, this doesn't match Zack's official position. While his official position starts the same way, with arguing about definitions, the follow-up seems to be that the conflict exists because the rationalist community is trying to make him transition for bad reasons, e.g.:

Who would have thought getting @ESYudkowsky's robot cult to stop trying to trick me into cutting my dick off (independently of the empirical facts determining whether or not I should cut my dick off) would involve so much math?? OK, I guess the math part isn't surprising, but—

Or

I didn't think it was fair to ordinary people to expect them to go as deep into the philosophy-of-language weeds as I could before being allowed to object to this kind of chicanery. I thought "pragmatic" reasons to not just use the natural clustering that you would get by impartially running a clustering algorithm on the subspace of configuration space relevant to your goals, basically amounted to "wireheading" (optimizing someone's map for looking good rather than reflecting the territory) or "war" (optimizing someone's map to not reflect the territory in order to manipulate them). If I were to transition today and didn't pass as well as Jessica, and everyone felt obligated to call me a woman, they would be wireheading me: making me think my transition was successful, even though it wasn't. That's not a nice thing to do to a rationalist.

I sort of have trouble buying this explanation of his motivation though because he spends weirdly little time trying to communicate his priorities and concerns and such when it comes to transitioning. But if I pretend to believe it, here's what I would say:

Zack's primary concern seems to be that he wouldn't pass if he transitioned, and that this would make his transition bad. Now it's true that trans-focused rationalists probably encourage him to transition despite this, but it's not so clear that this is wrong of them.

There are lots of trans women who don't pass, and most of them don't think it was a mistake for them to transition. If rationalists just decide on transition advice based on pattern-matching, this might make it natural to recommend that he transition. Now there are some ways in which this advice might fail, but each of them has challenges:

  • Maybe he's different from those other trans women, but in that case it seems like a problem that he keeps insisting he is the same as them.

  • Maybe those trans women are wrong about whether it was a good idea for them to transition, but Zack hasn't done much to argue for that.

  • Maybe it was selfish for them to transition; good for themselves at the cost of others.

Zack also hasn't hugely argued for the third one, though he has argued more for it than the others, so maybe it is his position. Arguably he is combining both the second and the third positions.

Replies from: Viliam, TekhneMakre, M. Y. Zuo
comment by Viliam · 2023-12-31T15:30:36.988Z · LW(p) · GW(p)

You can't just use redefinitions to make trans women similar to cis women.

Definitions are part of the map. Similarity means "having some property in common", which in general is in the territory, but the perception of similarity depends on which properties we are noticing, so it is influenced by the map.

(For a mathematician, an ellipse is similar to a hyperbola, because both are conic sections. For a non-mathematician, the ellipse is a lame circle, and the hyperbola is two crooked lines; not similar.)

You can't use a redefinition to conjure a property that didn't exist before, but you can use it to draw attention to an already existing property.

(We have already successfully "redefined" dolphins as mammals. Previously they were considered fish. The fact that they live in water did not change.)

So the question is which properties trans women and cis women have in common (this cannot be redefined), and which properties we are paying attention to (this can be redefined).

Trans women start out much more similar to cis men than to cis women, and transitioning doesn't do very much.

Maybe yes, maybe no; where is the evidence? (I am focusing on the first part of the sentence. I assume that by "transitioning" you refer to the act of coming out as trans, not to hormonal therapy.)

the rationalist community is trying to make him transition for bad reasons

Speaking for myself, I don't care whether Zack transitions or what his reasons would be. Perhaps we should make a poll, and then Zack might find out that the people who are "trying to make him transition for bad reasons" ("trying to trick me into cutting my dick off") are actually quite rare, maybe completely nonexistent.

If I were to transition today and didn't pass as well as Jessica, and everyone felt obligated to call me a woman, they would be wireheading me: making me think my transition was successful, even though it wasn't. That's not a nice thing to do to a rationalist.

By this logic, any politeness is wireheading. If you want to know whether you are passing, perhaps you could ask directly. In that case, I agree that lying would be a sin against rationality. But in the usual social situation... if I meet a cis woman who doesn't look very feminine, I don't give her unsolicited feedback either.

Too bad we can't predict whether Zack would pass before he actually goes ahead and transitions.

Maybe he's different from those other trans women, but in that case it seems like a problem that he keeps insisting he is the same as them.

Yeah, this is exactly my problem with Zack's statements. I am okay with him making plausible-sounding statements about himself, but when he tries to make statements about others (who seem to disagree?), I demand evidence.

Replies from: martin-randall
comment by Martin Randall (martin-randall) · 2024-01-16T04:05:37.696Z · LW(p) · GW(p)

Speaking for myself, I don't care whether Zack transitions or what his reasons would be. Perhaps we should make a poll, and then Zack might find out that the people who are "trying to make him transition for bad reasons" ("trying to trick me into cutting my dick off") are actually quite rare, maybe completely nonexistent.

As a historical analogy, imagine a feminist saying that society is trying to make her into a housewife for bad reasons. ChatGPT suggests Simone de Beauvoir (1908-1986). Some man replies that "Speaking for myself, I don't care whether Simone becomes a housewife or what her reasons would be. Perhaps we should make a poll, and then Simone might find out that the people who are 'trying to make her a housewife for bad reasons' are actually quite rare, maybe completely nonexistent".

Well, probably very few people were still trying to make Simone into a housewife after she started writing thousands of words on feminism! But also, society can collectively pressure Simone to conform even if very few people know who Simone is, let alone have an opinion on her career choices.

Many other analogies possible, I picked this one for aesthetic reasons, please don't read too much into it.

comment by TekhneMakre · 2023-12-31T11:32:18.285Z · LW(p) · GW(p)

You can't just use redefinitions to make trans women similar to cis women.

What does this mean? It seems like if the original issue is something about whether to call an XY-er "she" if the XY-er asks for that, then, that's sort of like a redefinition and sort of not like a redefinition... Is the claim something like:

Eliezer wants to redefine "woman" to mean "anyone who asks to be called 'she' ". But there's an objective cluster, and just reshuffling pronouns doesn't make someone jump from being typical of one cluster to typical of the other.
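(If that is the claim, here is a toy illustration of the geometric point, with made-up numbers: changing a point's label doesn't move it in feature space, so it doesn't change which cluster the point is nearest to.)

```python
# Made-up numbers, purely illustrative: relabeling a point doesn't
# move it in feature space or change its nearest cluster centroid.
import numpy as np

centroid_a = np.array([0.0, 0.0])
centroid_b = np.array([5.0, 5.0])

def nearest_cluster(features):
    dist_a = np.linalg.norm(features - centroid_a)
    dist_b = np.linalg.norm(features - centroid_b)
    return "A" if dist_a < dist_b else "B"

point = {"label": "A", "features": np.array([0.3, 0.1])}
print(nearest_cluster(point["features"]))  # -> A

point["label"] = "B"  # reshuffle the label...
print(nearest_cluster(point["features"]))  # -> still A
```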

Trans women start out much more similar to cis men than to cis women, and transitioning doesn't do very much.

This one is a set of empirical, objective claims.... but elsewhere you said:

Focusing on brains seems like the wrong question to me. Brains matter due to their effect on psychology, and psychology is easier to observe than neurology.

Even if psychology is similar in some ways, it may not be similar in the ways that matter though, and in fact the ways that matter need not be restricted to psychology. Even if trans women are psychologically the same as cis women, trans women in women's sports is still a contentious issue.

So I guess that was representing your viewpoint, not Zack's?

Replies from: tailcalled
comment by tailcalled · 2023-12-31T11:50:05.962Z · LW(p) · GW(p)

What does this mean? It seems like if the original issue is something about whether to call an XY-er "she" if the XY-er asks for that, then,

My understanding of Zack's position is that he fixated on this because it's something with a clear right answer that has been documented in the Sequences, and that he was really just using this as the first step to getting the rationalist community to not make him transition.

that's sort of like a redefinition and sort of not like a redefinition...

Arguably what "it is" depends on why people are doing it. Zack has written extensive responses to different justifications for doing it. I can link you a relevant response and summarize it, but in order to do that I need to know what your justification is.

This one is a set of empirical, objective claims.... but elsewhere you said:

The latter was representing my viewpoint whereas the former was an attempt at representing Zack's viewpoint, but also I don't think the two views contradict each other?

comment by M. Y. Zuo · 2023-12-31T01:42:38.129Z · LW(p) · GW(p)

This still doesn't seem to address the root issue that Viliam raised, of why a random passing reader should care enough about someone's gender self-perceptions/self-declarations/etc... to actually read such long rambling essays.

Caring about someone's sex maybe, since there's a biological basis that is falsifiable.

But gender is just too wishy-washy in comparison for some random passing reader to plausibly care so much and spend hours of their time on this.

Replies from: Kalciphoz, Zack_M_Davis
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-02T21:19:18.120Z · LW(p) · GW(p)

See, this is an example of the bad faith engagement that lies close to the core of this controversy.

People who do not care about a post click away from it. They do not make picket signs about how much they don't care and socially shame the poster for making posts that aren't aimed at random passing readers. Whether a post is aimed at random passing readers is an abysmally poor criterion for evaluating the merits of posts in a forum that is already highly technical and full of posts for specialist audiences, and in point of fact several readers did care enough to spend hours of their time on it.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2024-01-03T17:58:38.653Z · LW(p) · GW(p)

This seems incoherent, considering I already addressed Zack's point in a direct reply, 3d ago, just one comment chain down, along with several other folks weighing in.

So I'll assume you haven't read them. Here's my other comment reposted here:

They might be interested in information presented in a concise, high-signal way.

The way you've presented it practically guarantees that nearly every passing reader will not.

i.e. The average reader 'might be interested' only to an average degree.

The 'random passing reader' refers to all readers within a few standard deviations of the average, but not to literally every single reader. 

i.e. Those who have no strong views regarding Zack either way.

Hence it's unsurprising, and implied, that there are outliers. 

Are you confused about this terminology?

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-03T19:11:53.472Z · LW(p) · GW(p)

That incoherence you speak of is precisely what my previous comment pointed out, and it pertains to your argument rather than mine. As my previous comment explained, engaging with a post even just to call it uninteresting undermines any proclamation that you do not care about the post. If your engagement is more substantive than this, then that only further calls into question the need to shame the author for making posts that random passing readers might not care about.

Edited to add:

The 'random passing reader' refers to all readers within a few standard deviations of the average, but not to literally every single reader. 

i.e. Those who have no strong views regarding Zack either way.

Hence it's unsurprising, and implied, that there are outliers. 

Are you confused about this terminology?

If the outliers are sufficiently many to generate this much discussion, and they include such notable community members as Said Achmiz, then the critique that random passing readers might not spend hours on it is clearly asinine, regardless of the exact amount of standard deviations you include. I am not "confused about this terminology", I am just calling out your bad faith engagement.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2024-01-03T22:13:11.040Z · LW(p) · GW(p)

This is just incoherent, and quite oddly aimed, sorry to say. 

At best it reads like a series of emotional insinuations on another LW user's motivations, rationale, etc... for posting. At worst, it reads like someone who's totally lost the plot.

i.e. If you think my prior comments were somehow low quality or disparaging Zack in any way whatsoever, then why write something even worse and closer to random noise?

Shouldn't you be posting even higher quality and better reasoned out comments, to convince other readers that it's not just posturing and empty talk?

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-04T00:12:09.380Z · LW(p) · GW(p)

You are not even pretending to address the argument at this point, you are merely insulting it and me. I think your latest reply here speaks for itself.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2024-01-04T02:14:24.537Z · LW(p) · GW(p)

You are not even pretending to address the argument at this point, you are merely insulting it and me. I think your latest reply here speaks for itself.

 

There hasn't been a coherent argument presented yet, hence why I directly pointed out the incoherency... 

Since this is the second deflection in a row, I'll give one more chance to answer the previous direct questions:

Are you confused about this terminology?

...

i.e. If you think my prior comments were somehow low quality or disparaging Zack in any way whatsoever, then why write something even worse and closer to random noise?

Shouldn't you be posting even higher quality and better reasoned out comments, to convince other readers that it's not just posturing and empty talk?

And if you don't want to answer the second two questions, which is totally your prerogative, then at least answer the first direct question? Otherwise of course I'm not going to be 'pretending to address' any subsequent deflections... there's no reason for me to deviate from sticking to the chronological ordering of comments.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-04T02:34:13.449Z · LW(p) · GW(p)

There hasn't been a coherent argument presented yet, hence why I directly pointed out the incoherency... 

No, you did not, you added a fact that further corroborated the argument, as my reply showed.

Since this is the second deflection in a row, I'll give one more chance to answer the previous direct questions:

I have already directly answered the first question: no, I am not confused about the terminology. I have also answered the assumptions implicit in the question and shown why the question was irrelevant. Of course, both that one and the subsequent questions were merely insults disguised as questions, and your accusation that I am deflecting is mere hypocrisy and projection. 

Where are your manners?

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2024-01-04T03:00:42.834Z · LW(p) · GW(p)

I'm getting tired of this back and forth. 

Your opinions regarding all these supposed negative characteristics do not outweigh anyone else's, nor my own, so it seems unproductive. 

I acknowledge my own comments may seem to be low quality or 'bad' in your eyes, but to post even lower quality replies is self-defeating.

Where are your manners?

i.e. My manners in comment writing, even though they may be low quality or detestable in your opinion, are still higher quality than what has been demonstrated so far here:

...

If the outliers are sufficiently many to generate this much discussion, and they include such notable community members as Said Achmiz, then the critique that random passing readers might not spend hours on it is clearly asinine, regardless of the exact amount of standard deviations you include. I am not "confused about this terminology", I am just calling out your bad faith engagement.

Those in glass houses shouldn't throw stones. 

Can you offer some actual proof or substantive backing, not in edited comments, for at least half of all the stuff written so far?

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-04T10:08:01.979Z · LW(p) · GW(p)

I acknowledge my own comments may seem to be low quality or 'bad' in your eyes, but to post even lower quality replies is self-defeating.

I didn't. Mine at least contained actual arguments.

Those in glass houses shouldn't throw stones. 

The text you quoted makes a specific argument that you once again chose to simply insult instead of addressing it. Again, your behaviour speaks for itself.

At this point it has become abundantly clear that you are simply a troll, so I will not bother to engage with you henceforth.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2024-01-04T15:47:48.158Z · LW(p) · GW(p)

Like I said, one person's opinions regarding the supposed characteristics of another's comments simply cannot outweigh the opinions of anyone else. Plus I imagine on LW many readers can see through the superficial layer of words.

But if you genuinely want to productively engage, I'll give one final chance:

Can you offer some actual proof or substantive backing, not in edited comments, for at least half of all the stuff written so far?

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-04T18:40:35.961Z · LW(p) · GW(p)

Like I said, one person's opinions regarding the supposed characteristics of another's comments simply cannot outweigh the opinions of anyone else. 

Utterly irrelevant since I never asked anybody to take my opinions as outweighing their own.

But if you genuinely want to productively engage, I'll give one final chance:

Can you offer some actual proof or substantive backing, not in edited comments, for at least half of all the stuff written so far?

Again, I have already presented arguments for my case. If you do not consider them sufficiently substantive, then I invite you to tell me what you see as the flaw, or why you deem them insufficient.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2024-01-04T18:51:09.620Z · LW(p) · GW(p)

Utterly irrelevant since I never asked anybody to take my opinions as outweighing their own.

Again, I have already presented arguments for my case.

 

This is your own opinion that's being made to sound as if it were an incontestable fact... every comment sounds like this.

My opinion is the opposite and at least equally valid.  So anyone can endlessly negate just by expressing the opposite opinion, hence it's unproductive. You need to list out actual arguments, proofs, analysis, or any falsifiable claims, etc... that satisfy the criteria of the counter-party. 

Whether or not they satisfy your own criteria is irrelevant to this point, and just saying it's the truth won't convince the counter-party. And if you still can't accept this, then do not engage, I won't be offended.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-04T19:24:53.659Z · LW(p) · GW(p)

Since you seem to have completely lost track of what actually happened, I will remind you:

  • Zack made this post and was met with a barrage of abuse
  • Some of the abusers were blaming Zack for making a post that random passersby might not care about
  • I pointed out that the people making this critique had in fact interacted much more with the post than somebody who genuinely wouldn't care
  • You pointed out that these people had interacted with the post in ways beside the one I just mentioned
  • I pointed out that this obviously corroborates my point rather than detracting from it
  • Instead of addressing this obvious point, you just called it incoherent and started delivering a barrage of insults instead of making any actual arguments 

I.e., you are the one just asserting opinions, whereas I made arguments, and then pointed out the arguments when you denied their existence; and now you seem to be asserting that your opinion is just as valid as mine, a thinly veiled "that's just your opinion, man", while still ignoring the actual arguments rather than actually addressing them. That is insane.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2024-01-04T19:47:31.906Z · LW(p) · GW(p)

I.e., you are the one just asserting opinions, whereas I made arguments, ...

This is in itself another opinion... Did you genuinely not read my previous comment to the end?

Whether or not they satisfy your own criteria is irrelevant to this point, and just saying it's the truth won't convince the counter-party.

i.e. You need to convince me, not yourself. And the previous opinions are just not convincing, to me, as coherent 'arguments'. Period. 

No amount of futile replies can alter the past, unless you edit the comments, which would create its own credibility problems. We can agree to disagree and move on.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-04T20:06:05.105Z · LW(p) · GW(p)

I can't possibly hope to convince you when you are engaging in abysmally bad faith. My purpose is to call you out, because you should not be getting away with this shit.

On another note, I did in fact "list out actual arguments", exactly as you said. I can only surmise that they didn't satisfy the "criteria of the counter-party", and for some unguessable (/s) reason, you once again will not give even the slightest indication of what you deem to be insufficient about them.

How exactly am I supposed to convince an interlocutor who will not even explain why he is unmoved by the arguments provided? Again, this is insane.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2024-01-04T20:16:28.062Z · LW(p) · GW(p)

Do you realize I can see when you've posted replies and then 'deleted them without a trace' immediately afterwards? The mods can too. 

It's a feature of the LW notifications system, with the right timing. So there's no use in pretending.

I didn't want to call this out before, but it's important to set the record straight. And the mods will back me up here.

I can't possibly hope to convince you when you are engaging in abysmally bad faith. My purpose is to call you out, because you should not be getting away with this shit.

Anyways, just going by the writing that is considered not too embarrassing to delete, it's clear who has the better manners in comment writing.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-04T20:25:55.572Z · LW(p) · GW(p)

Do you realize I can see when you've posted replies and then 'deleted them without a trace' immediately afterwards? The mods can too.

For any others wondering, the deleted comment simply said "... That's what I get for engaging with a blatant troll", or something to that effect. It was because M. Y. Zuo's manipulative bs had made me forget my actual reasons for engaging, and I deleted the comment when I remembered what they were.

But it seems superfluous at this point, since any reasonable person can tell that M. Y. Zuo's behaviour is absolutely reprehensible. But I also have to admit that any such person can also tell that I've "bitten the bait" and engaged with him too long, to the point where my behaviour has become ridiculous and embarrassing.

There is a lot of wisdom to Mark Twain's admonition to never argue with a fool, lest they drag you down to their level and beat you with experience — wisdom which, I am sorry to report, I seem to have not yet learned.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2024-01-05T20:51:14.019Z · LW(p) · GW(p)

Thanks for confirming my suspicions. Since the precise wording must have been very embarrassing to intentionally delete without a trace, I won't pry, and I'll let bygones be bygones.

It wasn't my intention to drive you into a hopeless corner, since it seems there was substantial agitation from close to the beginning, but it's hard to ignore deception and false pretences when the LW forum software is literally notifying me of it.

I understand it can be a bit scary and frustrating when someone much more experienced and well established takes a counter-argument line, so I won't provoke whatever root issue(s) lie beneath all this, but I do hope there's some value in what's been written.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-05T21:00:26.735Z · LW(p) · GW(p)

Thanks for confirming my suspicions. Since the precise wording must have been very embarrassing to intentionally delete without a trace, I won't pry, and I'll let bygones be bygones.

I already told you what the comment said. I deleted it not because I thought it was embarrassing, but because I thought it was irrelevant.

Is there some way for moderators or admins to identify the content of a deleted comment? If so, I give my permission for them to do so and state publicly what it contained.

I understand it can be a bit scary and frustrating when someone much more experienced and well established takes a counter-argument line

I have been in this community for over ten years.

This latest comment of yours is utterly disgraceful and contemptible by any reasonable standard. Purely an attempt to humiliate me, and on an entirely speculative basis. So much for "letting bygones be bygones", eh?
 

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2024-01-06T20:58:12.962Z · LW(p) · GW(p)

So we agree to disagree.

EDIT: I wanted to say it was an interesting discussion to be polite, but the juvenile insults and mud-slinging tactics are obvious enough that probably zero passing readers would believe it.

comment by Zack_M_Davis · 2023-12-31T03:02:08.805Z · LW(p) · GW(p)

why should a random passing reader care enough [...] to actually read such long rambling essays?

I mean, they probably shouldn't? When I write a blog post, it's because I selfishly [LW · GW] had something I wanted to say. Obviously, I understand that people who think it's boring aren't going to read it! Not everyone needs to read every blog post! That's why we have a karma system, to help people make prioritization decisions about what to read.

Replies from: tailcalled
comment by tailcalled · 2023-12-31T03:07:02.241Z · LW(p) · GW(p)

I thought people were supposed to care because you were highlighting systematic political distortions in the rationalist community?

I didn't mention that part in my other comment because Viliam seemed confused about the inner part of the conflict, whereas this seemed like the outer part of the conflict.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2023-12-31T07:02:38.676Z · LW(p) · GW(p)

I mean, yes, people who care about this alleged "rationalist community" thing might be interested in information about it being biased (and I wrote this post with such readers in mind), but if someone is completely uninterested in the "rationalist community" and is only on this website because they followed a link to an article about information theory, I'd say that's a pretty good life decision!

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-12-31T20:37:14.396Z · LW(p) · GW(p)

They might be interested in information presented in a concise, high-signal way.

The way you've presented it practically guarantees that nearly every passing reader will not.

i.e. The average reader 'might be interested' only to an average degree.

comment by TekhneMakre · 2023-12-30T18:46:46.188Z · LW(p) · GW(p)

I certainly haven't read even a third of your writing about this. But... I continue to not really get the basic object-level thing. Isn't it simply factually unknown whether or not there's such a thing as men growing up with brains that develop like female brains? Or is that not a crux for anything?

Separately, isn't the obvious correct position simply: there's a bunch of objective stuff about the differences between men and women; there's uncertainty about exactly how these clusters overlap / are violated in real life, e.g. as described in the previous paragraph; and separately there's a bunch of conduct between people that people modulate depending on whether they are interacting with a man or a woman; and now that there are more people openly not falling neatly into the two clusters, there's some new questions about conduct; and some of the conduct questions involve factual questions, for which calling a particular XY-er a woman would be false, and some of the conduct questions involve factual questions (e.g. the brain thing) for which calling a particular XY-er a woman would be true, and some of the conduct questions are instead mainly about free choices, like whether or not to wear a dress or whatever?

I mean, if person 1 is using the word "he" to mean something like "that XY-er", then yeah, it's false for them to say "he" of an XX-er. If person 2 is using the word "he" to mean something like "that person, who wants to be treated in the way that people usually treat men", then for some XX-ers, they should call the XX-er "he". This XX-er certainly might seek to deceive person 1; e.g. if the XX-er wants to be treated by person 1 the way person 1 treats XY-ers, and person 1 does not want to treat this XX-er that way, but would treat the XX-er this way if they don't know the XX status, then the XX-er might choose to have allies say "he" in order to deceive person 1. But that's not the only reason. One can imagine simply that everyone is like person 2; then an XX-er asking to be called "he" is saying something like "I prefer to not be flirted with by heterosexual men; I'd like people to accurately expect me to be more interested in going to a hackathon rather than going to a mall; etc.", or something. I mean, I'm not at all saying there's no problem, but... It's not clear (though again, I didn't read your voluminous writing on this carefully) who is saying what that's wrong... Like, if there's a bunch of conventional conduct that's tied up with words, then it's not just about the words' meaning, and you have to actually do work to separate the conduct from the reference, if you want them to be separate.

Replies from: tailcalled, ChristianKl, Vaniver
comment by tailcalled · 2023-12-30T22:48:33.985Z · LW(p) · GW(p)

Isn't it simply factually unknown whether or not there's such a thing as men growing up with brains that develop like female brains? Or is that not a crux for anything?

Focusing on brains seems like the wrong question to me. Brains matter due to their effect on psychology, and psychology is easier to observe than neurology.

Even if psychology is similar in some ways, it may not be similar in the ways that matter though, and in fact the ways that matter need not be restricted to psychology. Even if trans women are psychologically the same as cis women, trans women in women's sports is still a contentious issue.

There are some fairly big ways in which trans women are not similar to cis women though, for instance trans women tend to be mostly sexually attracted to women, whereas cis women tend to be mostly sexually attracted to men. Whether this is policy-relevant is I guess up to you, but it certainly has a lot of high-impact implications [LW(p) · GW(p)].

Replies from: TekhneMakre
comment by TekhneMakre · 2023-12-31T07:04:19.331Z · LW(p) · GW(p)

Ok. (I continue to not know what the basic original object-level disagreement is!)

Replies from: tailcalled
comment by tailcalled · 2023-12-31T10:50:59.432Z · LW(p) · GW(p)

Possibly this explanation [LW(p) · GW(p)] helps? As in, basically he's been focusing on the first step of a multi-step argument, though it's sort of unclear what the last step(s) are supposed to add up to.

comment by ChristianKl · 2023-12-30T20:01:58.347Z · LW(p) · GW(p)

I continue to not really get the basic object-level thing. Isn't it simply factually unknown whether or not there's such a thing as men growing up with brains that develop like female brains? 

That's a bit like saying that it's "factually unknown" whether there's an invisible dragon in the garage. 

Neuroscientists measure a lot of things about brains. If you need to define "develop like female brains" in a way that doesn't show up in any metric that neuroscientists can measure, then it's "factually unknown" only in that empty sense.

Or is that not a crux for anything?

Rationalists generally aren't very sympathetic to god-of-the-gaps arguments, so it's unclear why gender-of-the-gaps should be a crux given our existing neuroscience.

If you truly believe that there's a gap here, then why is there a gap? One straightforward reason there might be a gap is that any neuroscientist who researched this would be canceled. If there's a gap, that's a sign of an unhealthy epistemic environment.

Part of what Zack is writing about is that this unhealthy epistemic environment was harmful to him when he was trying to figure out whether he is a woman or a man.

Replies from: greylag, TekhneMakre, lalaithion
comment by greylag · 2023-12-30T21:55:38.572Z · LW(p) · GW(p)

Hm. Now I thought I’d heard of gender dysphoria/transgender/etc. showing up in brain imaging (e.g. https://pubmed.ncbi.nlm.nih.gov/26766406/), and while “develop like female brains” would be bounding happily ahead of the evidence, that seems at least like sporadic snorting noises from the garage in the night time.

Replies from: tailcalled, ChristianKl
comment by tailcalled · 2023-12-30T22:42:24.241Z · LW(p) · GW(p)

I can't confidently make claims about all brain imaging studies as I haven't read enough of them, but as a general rule studies that claim to find links between neurology and psychological traits are fake (same problem as candidate gene studies, plus maybe also the problem of "it's not clear we're looking at the right variables") unless the trait in question is g (IQ).

This applies not just to the trans brain studies, but also to the studies claiming to find the sex differences in brain structure (while large sex differences in brain structure do exist, the ones that have been found so far appear to be completely uncorrelated with psychological traits that have sex differences once you control for sex, so they do not mediate the relationship between sex and those psychological traits).

Replies from: tailcalled
comment by tailcalled · 2023-12-30T23:27:18.353Z · LW(p) · GW(p)

Oh and I guess I should add, if we do insist on talking about brain neurology in the context of transness, there is one set of studies I expect to replicate, because it is conceptually very simple. The idea is to take a bunch of cis men and cis women, train a predictor to classify people's sex from their brain structure, and then apply that predictor to trans women. This is essentially a multivariate approach, which I'd expect Zack to like because he talks a lot about multivariate approaches.

I think I've seen three or four studies that do this, but the two I have at hand right now are Sex Matters: A Multivariate Pattern Analysis of Sex- and Gender-Related Neuroanatomical Differences in Cis- and Transgender Individuals Using Structural Magnetic Resonance Imaging and Regional volumes and spatial volumetric distribution of gray matter in the gender dysphoric brain.

The general pattern from the studies I've read is that prior to transitioning, trans women have male brains, and after having been on HRT for a while, trans women's brain structure shifts to be in the middle between cis women and cis men (on the sex-separating axis). I don't know if trans women's brains change even more given even longer time; it seems conceivable that they do.

But anyway, the most noteworthy thing about these studies is that this applies to both HSTSs and AGPTSs. I.e., HSTS MtFs (who Zack sees as "true transsexuals") have male brains prior to transitioning. (See the second of my links for more info on this.) This illustrates why I am not enthusiastic about arguments based on multivariate group-separating axes: HSTSs are clearly feminine in some sense, but this isn't the sense which gets emphasized when taking the neurological sex-separating axis. I'm not sure why Zack still regularly makes appeals to multivariate group differences, though. My best guess is that he doesn't pay attention to this, but he should be encouraged to answer for himself.
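For concreteness, here is the shape of that method as a sketch with simulated data (all numbers invented; the real studies use MRI-derived morphometry, and `shift` is just a stand-in for whatever sex difference the features carry):

```python
# Sketch of the multivariate pattern-analysis setup described above,
# with simulated feature vectors standing in for brain morphometry.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 500, 20  # participants per group, brain-structure features

shift = rng.normal(0.0, 0.3, d)  # invented per-feature sex difference
cis_men = rng.normal(0.0, 1.0, (n, d)) + shift
cis_women = rng.normal(0.0, 1.0, (n, d)) - shift

X = np.vstack([cis_men, cis_women])
y = np.array([0] * n + [1] * n)  # 0 = male, 1 = female
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Project a third group onto the learned sex-separating axis. Simulated
# here with male-typical features, matching the reported pre-transition
# result; shrinking its `shift` toward zero would mimic the reported
# post-HRT drift toward the middle.
third_group = rng.normal(0.0, 1.0, (n, d)) + shift
for name, g in [("cis men", cis_men), ("cis women", cis_women),
                ("third group", third_group)]:
    print(f"{name}: mean score {clf.decision_function(g).mean():+.2f}")
```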

comment by ChristianKl · 2023-12-30T22:58:35.381Z · LW(p) · GW(p)

The fact that someone finds a brain pattern that describes gender dysphoria, but thinks that brain pattern does not warrant the description of looking like female brain patterns, does not look to me like evidence that gender dysphoria is associated with female brain patterns.

Vul et al.'s voodoo neuroscience paper is also worth reading, to get some perspective on these kinds of findings.

comment by TekhneMakre · 2023-12-31T07:05:58.311Z · LW(p) · GW(p)

Are you claiming that Zack is claiming that there's no such thing as gender? Or that there's no objective thing? Or that there's nothing that would show up in brain scans? I continue to not know what the basic original object-level disagreement is!

Replies from: ChristianKl
comment by ChristianKl · 2023-12-31T12:18:36.236Z · LW(p) · GW(p)

No, Zack does believe that there's something like gender. He believes that you are either male or female and that those categories are straightforwardly derived. 

You are the person who claims that there's something that is "factually unknown". For it to be factually unknown, it must not have shown up in the brain scans that people have already done.

comment by lalaithion · 2023-12-30T21:00:14.742Z · LW(p) · GW(p)

What factual question is/was Zack trying to figure out? “Is a woman” or “is a man” are pure semantics, and if that’s all there is then… okay… but presumably there’s something else?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-12-30T21:19:08.064Z · LW(p) · GW(p)

Given some referent—some definition, either intensional or extensional—of the word “man” (in other words, some discernible category with the label “man”), the question “is X a man” (i.e., “is X a member of this category labeled ‘man’”) is an empirical question. And “man”, like any commonly used word, can’t be defined arbitrarily.

All of the above being the case, what do you mean by “pure semantics” such that your statement is true…?

Replies from: lalaithion
comment by lalaithion · 2023-12-30T21:42:17.060Z · LW(p) · GW(p)

Yeah, what factual question about empirical categories is/was Zack interested in resolving? Tabooing the words “man” and “woman”, since what I mean by semantics is “which categories get which label”. I’m not super interested in discussing which empirical category should be associated with the phonemes /mæn/, and I’m not super interested in the linguistic investigation of the way different groups of English speakers assign meaning to that sequence of phonemes, both of which I lump under the umbrella of semantics.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-12-30T21:54:07.510Z · LW(p) · GW(p)

Yeah, what factual question about empirical categories is/was Zack interested in resolving?

Zack has written very many words about this, including this very post, and the ones prior to it in the sequence; and also his other posts, on Less Wrong and on his blog.

I’m not super interested in discussing which empirical category should be associated with the phonemes /mæn/, and I’m not super interested in the linguistic investigation of the way different groups of English speakers assign meaning to that sequence of phonemes, both of which I lump under the umbrella of semantics.

But other people are interested in these things (and related ones), as it turns out; and the question of why they have such interest, as well as many related questions, are also factual in nature.

What’s more, “A Human’s Guide to Words” (which I linked to in the grandparent) explains why reassigning different words to existing categories is not arbitrary, but has consequences for our (individual and collective) epistemics. So even such choices cannot be dismissed by labeling them “semantics”.

Replies from: lalaithion
comment by lalaithion · 2023-12-30T22:23:52.473Z · LW(p) · GW(p)

I haven’t read everything Zack has written, so feel free to link me something, but almost everything I’ve read, including this post, includes far more intra-rationalist politicking than discussion of object level matters.

I know other people are interested in those things. I specifically phrased my previous post in an attempt to avoid arguing about what other people care about. I can neither defend nor explain their positions. Neither do I intend to dismiss or malign those preferences by labeling them semantics. That previous sentence is not to be read as a denial of ever labeling them semantics, but rather as a denial of thinking that semantics is anything to dismiss or malign. Semantics is a long and storied discipline in philosophy and linguistics. I took an entire college course on semantics. Nevertheless, I don’t find it particularly interesting.

I’ve read A Human’s Guide to Words. I understand you cannot redefine reality by redefining words. I am trying to step past any disagreement you and I might have regarding the definitions of words and figure out if we have disagreements about reality.

I think you are doing the same thing I have seen Zack do repeatedly, which is to avoid engaging in actual disagreement and discussion, but instead repeatedly accuse your interlocutor of violating norms of rational debate. So far nothing you have said is something I disagree with, except the implication that I disagree with it. If you think I’m lying to you, feel free to say so and we can stop talking. If our disagreement is merely “you think semantics is incredibly important and I find it mostly boring and stale”, let me know and you can go argue with someone who cares more than me.

But the way that Zack phrases things makes it sound, to me, like he and I have some actual disagreement about reality which he thinks is deeply important for people considering transition to know. And as someone considering transition, if you or he or someone else can say something, or link to something, that isn’t full of semantics or intracommunity norms-of-discourse call-outs, I would like to see it!

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-12-30T23:18:54.763Z · LW(p) · GW(p)

I haven’t read everything Zack has written, so feel free to link me something, but almost everything I’ve read, including this post, includes far more intra-rationalist politicking than discussion of object level matters.

Certainly:

https://www.lesswrong.com/posts/LwG9bRXXQ8br5qtTx/sexual-dimorphism-in-yudkowsky-s-sequences-in-relation-to-my [LW · GW]

https://www.greaterwrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal [LW · GW]

https://www.lesswrong.com/posts/RxxqPH3WffQv6ESxj/blanchard-s-dangerous-idea-and-the-plight-of-the-lucid [LW · GW]

http://unremediatedgender.space/2018/Feb/the-categories-were-made-for-man-to-make-predictions/

http://unremediatedgender.space/2020/Nov/survey-data-on-cis-and-trans-women-among-haskell-programmers/

http://unremediatedgender.space/2020/Apr/book-review-human-diversity/

http://unremediatedgender.space/2019/Sep/does-general-intelligence-deflate-standardized-effect-sizes-of-cognitive-sex-differences/

Zack also has several posts which, although themselves written at a meta-level, nevertheless explain in great (and highly technical) detail why “is X a woman/man” (i.e., “to which of these two categories, no matter their labels, does X properly belong”) is a factual question. These include:

https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries [LW · GW]

https://www.greaterwrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception [LW · GW]

https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests [LW · GW]

I think you are doing the same thing I have seen Zack do repeatedly, which is to avoid engaging in actual disagreement and discussion, but instead repeatedly accuse your interlocutor of violating norms of rational debate.

To my knowledge, I’ve made no such accusations against you.

So far nothing you have said is something I disagree with, except the implication that I disagree with it. If you think I’m lying to you, feel free to say so and we can stop talking.

I don’t think you’ve made any concrete claims, so how could you be lying…? (I suppose you could be lying about what you are or are not interested in, but I’m not sure what the point of doing so would be, in this case…)

If our disagreement is merely “you think semantics is incredibly important and I find it mostly boring and stale”, let me know and you can go argue with someone who cares more than me.

That is certainly not the disagreement.

Your first comment in this thread was responding to an exchange which was about object-level questions, and very clearly so. Like, if I say “I’m trying to figure out whether this animal in front of me is a wolf spider or a fishing spider”, and you respond by saying “‘is a wolf spider’ or ‘is a fishing spider’ is pure semantics, so what factual question are you trying to figure out”, that is a nonsensical thing to say. Do you agree? Or do you think that’s a perfectly sensible reply?

But the way that Zack phrases things makes it sound, to me, like he and I have some actual disagreement about reality which he thinks is deeply important for people considering transition to know. And as someone considering transition, if you or he or someone else can say that or link to that isn’t full of semantics or intracommunity norms of discourse call-outs, I would like to see it!

I claim no expertise related to transition, nor do I have any special insight into these matters, so I’m surely not the right person to ask any such thing.

As for Zack… well, look, you are commenting on a post which is, indeed, about community norms and epistemic standards and other such “meta” questions. Zack has written many, many posts about the object-level issues. He has a whole blog which is just absolutely jam-packed with discussion of the object-level issues. (This is a link-post, so you can click that link and check out said blog.) If Zack writes a bunch of posts about the object-level stuff, and then, having done so, writes a post about the meta-level stuff, and you read that post and ask “where is the object-level stuff”, what is anyone supposed to say other than “it’s in all the other posts, the ones about the object-level stuff, which this post is not one of”?

So if your question was just “where are those object-level posts”, then I hope my links have answered that. If your question was something else, then by all means feel free to clarify!

Replies from: lalaithion, tailcalled
comment by lalaithion · 2023-12-31T00:20:44.312Z · LW(p) · GW(p)

I owe you an apology; you’re right that you did not accuse me of violating norms, and I’m sorry for saying that you did. I only intended to draw parallels between your focus on the meta level and Zack’s focus on the meta level, and in my hurry I erred in painting you and him with the same brush.

I additionally want to clarify that I didn’t think you were accusing me of lying, but merely wanted to preemptively close off some of the possible directions this conversation could go.

Thank you for providing those links! I did see some of them on his blog and skipped over them because I thought, based on the first paragraph or title, they were more intracommunity discourse. I have now read them all.

I found them mostly uninteresting. They focus a lot on semantics and on whether something is a lie or not, and neither of those are particularly motivating to me. Of the rest, they are focused on issues which I don’t find particularly relevant to my own personal journey, and while I wish that Zack felt like he was able to discuss these issues openly, I don’t really think people in the community disagreeing with him is some bizarre anti-truth political maneuvering.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-12-31T00:32:10.628Z · LW(p) · GW(p)

Apology accepted!

Thank you for providing those links!

You’re quite welcome.

I found them mostly uninteresting. They focus a lot on semantics and on whether something is a lie or not, and neither of those are particularly motivating to me.

Hmm. I continue to think that you are using the term “semantics” in a very odd way, but I suppose it probably won’t be very fruitful to go down that avenue of discussion…

I don’t really think people in the community disagreeing with [Zack] is some bizarre anti-truth political maneuvering.

I imagine the answer to this one will depend on the details—which people, disagreeing on what specific matter, in what way, etc. Certainly it seems implausible that none of it is “political maneuvering” of some sort (which I don’t think is “bizarre”, by the way; really it’s quite the opposite—perfectly banal political maneuvering, of the sort you see all the time, especially these days… more sad to see, perhaps, for those of us who had high hopes for “rationality”, but not any weirder, for all that…).

Replies from: lalaithion
comment by lalaithion · 2023-12-31T01:28:26.765Z · LW(p) · GW(p)

I also consider myself as someone who had—and still has—high hopes for rationality, and so I think it’s sad that we disagree, not on the object level, but on whether we can trust the community to faithfully report their beliefs. Sure, some of it may be political maneuvering, but I mostly think it’s political maneuvering of the form of—tailoring the words, metaphors, and style to a particular audience, and choosing to engage on particular issues, rather than outright lying about beliefs.

I don’t think I’m using “semantics” in a non-standard sense, but I may be using it in a more technical sense? I’m aware of certain terms which have different meanings inside of and outside of linguistics (such as “denotation”) and this may be one.

comment by tailcalled · 2023-12-30T23:31:13.544Z · LW(p) · GW(p)

Your first comment in this thread was responding to an exchange which was about object-level questions, and very clearly so. Like, if I say “I’m trying to figure out whether this animal in front of me is a wolf spider or a fishing spider”, and you respond by saying “‘is a wolf spider’ or ‘is a fishing spider’ is pure semantics, so what factual question are you trying to figure out”, that is a nonsensical thing to say. Do you agree? Or do you think that’s a perfectly sensible reply?

You would probably not include actual hyperlinks if you were literally saying this in the real world, so that makes this example disanalogous to the usual cases.

(I do think the question would be meaningful in the usual cases, but adding hyperlinks seems like cheating as it binds the statement to a lot more information than there would otherwise be. It adds the same sort of information as you would be adding by tabooing the words.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-12-30T23:46:55.115Z · LW(p) · GW(p)

I added the hyperlinks for the benefit of any readers who have no idea what those terms mean. In a face-to-face conversation, if my interlocutor responded by asking “huh? ‘wolf spider’, ‘fishing spider’, what is that? I’ve never heard of these things”, then I could explain to them what the terms refer to; or we could use a smartphone or computer to access the very same Wikipedia pages which I linked to in my comment.

In any case you may feel free to mentally strip out the hyperlinks—that will not change my point, which is that any good-faith interlocutor will understand from the quoted comment (possibly after asking for an explanation, to rectify a total lack of domain knowledge) that the terms “wolf spider” and “fishing spider” refer to a pair of disjoint categories, and that my inquiry is into the question of which (if either!) of the two categories any given actual spider ought properly to be placed in.

comment by Vaniver · 2023-12-31T20:05:10.305Z · LW(p) · GW(p)

"that person, who wants to be treated in the way that people usually treat men"

Incidentally, one of the things I dislike about this framing is that gender stereotypes / scripts "go both ways". That is, it should be not just "treated like a man" but also "treat people like men do."

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-12-31T20:54:08.748Z · LW(p) · GW(p)

It was surprisingly impactful to tell myself and my parents I identified as male for purposes of elder care. Obviously I had the option to say "I will manage finances and logistics but not emotional or physical care labor" the whole time, but it was freeing to frame it as "well this is all my uncle was doing and no one thought he was defecting". 

comment by gwern · 2023-12-30T20:57:51.003Z · LW(p) · GW(p)

When I saw the latest zacpost was only ~25k words & 19 chapters, I was concerned, but then I skipped to the end and saw:

It turned out that I would have occasion to continue waging the robot-cult religious civil war. (To be continued.)

Phew! I guess he's OK.

comment by hwold · 2023-12-31T21:47:21.664Z · LW(p) · GW(p)

category boundaries should be drawn for epistemic and not instrumental reasons

 

Sounds very wrong to me. In my view, computationally unbounded agents don’t need categories at all; categories are a way for computationally bounded agents to approximate perfect Bayesian reasoning, and how to judge the quality of the approximation will depend on the agent’s goals — different agents with different goals will care differently about a similar error.

(It's actually somewhat interesting; the logarithmic score doesn't work as a measure of category-system goodness because it can only reward you for the probability you assign to the exact answer, but we want "partial credit" for almost-right answers, so the expected squared error is actually better here, contrary to what you said in the "Technical Explanation" about what Bayesian statisticians do)

Yes, exactly. When you’re at the point where you’re deciding between log-loss and MSE, you’re no longer doing pure epistemics; you’re entering the realm of decision theory. You’re crafting a measure of how good your approximation is, a measure that can and should be tailored to your specific goals as a rational agent. Log-loss and MSE are only two possibilities in a vast universe of possible such measures, ones that are quite generic and therefore not optimal for a given agent’s goals.
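To make the "partial credit" point concrete, here is a toy version of the color-wheel case (my construction, not from the post): two forecasts that both put essentially no mass on the exact hue, so the log score can't tell them apart, while expected squared error correctly prefers the near miss.

```python
# Two point-mass forecasts over a 12-hue color wheel, both missing
# the true hue. The log score is (nearly) identical for both; the
# expected squared circular distance distinguishes them.
import numpy as np

hues = np.arange(12)
true_hue = 0

def point_mass(center, eps=1e-9):
    p = np.full(12, eps)
    p[center] = 1.0
    return p / p.sum()

def circ_dist(a, b):
    d = np.abs(a - b)
    return np.minimum(d, 12 - d)

forecasts = {"adjacent guess": point_mass(1),   # almost right
             "opposite guess": point_mass(6)}   # maximally wrong

for name, p in forecasts.items():
    log_score = np.log(p[true_hue])                          # ~same for both
    exp_sq_err = (p * circ_dist(hues, true_hue) ** 2).sum()  # differs a lot
    print(f"{name}: log score {log_score:.1f}, "
          f"expected squared error {exp_sq_err:.1f}")
```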

Replies from: tailcalled, SaidAchmiz
comment by tailcalled · 2024-01-04T11:33:52.067Z · LW(p) · GW(p)

MSE can also be seen as a special case of log-loss for a Gaussian distribution with constant variance.
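(Sketching the standard identity: a Gaussian predictive distribution with mean $\mu$ and fixed variance $\sigma^2$ has negative log-likelihood

$$-\log p(x \mid \mu) = \frac{(x - \mu)^2}{2\sigma^2} + \frac{1}{2}\log(2\pi\sigma^2),$$

which is squared error rescaled and shifted by constants, so minimizing expected log-loss over $\mu$ and minimizing expected squared error pick out the same predictions.)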

comment by Said Achmiz (SaidAchmiz) · 2023-12-31T22:31:39.156Z · LW(p) · GW(p)

computationally unbounded agents don’t need categories at all

This can only be true if they do not ever have to interact with computationally bounded agents.

comment by Vaniver · 2023-12-31T19:16:53.983Z · LW(p) · GW(p)

Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" facts.) I agreed that conflating political positions with facts would be bad.

I don't get what 'intrinsically' is doing in the middle sentence. (Well, to the extent that I have guessed what you meant, I disagree.)

Like, yes, there's one underlying reality, descriptions of it get called facts.

But isn't the broader context the propagation of propositions, not the propositions themselves? That is, saying X is also saying "pay attention to X", and if X is something whose increased salience is good for the right wing, then it makes sense to categorize it as a 'right-wing fact', as left-wing partisans will be loath to share it and right-wing partisans will be eager to.

Like, currently there's an armed conflict going on in Israel and Palestine which is harming many people. Of the people most interested in talking about it that I see on the Internet, I sure see a lot of selectivity in which harms they want to communicate, because their motive for communicating about it is not attempting to reach an unbiased estimate, but to participate in a cultural conflict which they hope their side will win. (One could summarize this view as "speech is intrinsically political.")

This bit of HPMOR comes to mind:

"I don't suppose you could explain," Harry said dryly, "in your capacity as an official of the Hogwarts school system, why catching a golden mosquito is deemed an academic accomplishment worthy of a hundred and fifty House points?"

A smile crossed Severus's lips. "Dear me, and I thought you were supposed to be perceptive. Are you truly so incapable of understanding your classmates, Potter, or do you dislike them too much to try? If Quidditch scores did not count toward the House Cup then none of them would care about House points at all. It would merely be an obscure contest for students like you and Miss Granger."

It was a shockingly good answer.

comment by tailcalled · 2023-12-30T22:58:49.120Z · LW(p) · GW(p)

I'm gonna repost my comment on unremediatedgender.space here:

A two-dimensional political map tells you which areas of the Earth's surface are under the jurisdiction of which government. In contrast, category "boundaries" tell you which regions of very high-dimensional configuration space correspond to a word/concept, which is useful because that structure can be used to make probabilistic inferences. You can use your observations of some aspects of an entity (some of the coordinates of a point in configuration space) to infer category-membership, and then use category membership to make predictions about aspects that you haven't yet observed.

But the trick only works to the extent that the category is a regular, non-squiggly region of configuration space: if you know that egg-shaped objects tend to be blue, and you see a black-and-white photo of an egg-shaped object, you can get close to picking out its color on a color wheel. But if egg-shaped objects tend to be blue or green or red or gray, you wouldn't know where to point to on the color wheel.

The analogous algorithm applied to national borders on a political map would be to observe the longitude of a place, use that to guess what country the place is in, and then use the country to guess the latitude—which isn't typically what people do with maps. Category "boundaries" and national borders might both be illustrated similarly in a two-dimensional diagram, but philosophically, they're different entities. The fact that Scott Alexander was appealing to national borders to defend gerrymandered categories, suggested that he didn't understand this.

I would add that it probably is relatively easy to get squiggly national borders from a clustering of variables associated with a location, you just have to pick the right variables. Instead of latitude and longitude, consider variables such as:

  • If you were stabbed or robbed here, which organization should you report it to?
  • And who decides what rules there are to report to this organization?
  • What language is spoken here?
  • What forces prevent most states from grabbing the resources here?
  • What kind of money can I use to pay with here?
  • What phone companies provide the cheapest coverage here?
  • ...

I still had some deeper philosophical problems to resolve, though. If squiggly categories were less useful for inference, why would someone want a squiggly category boundary? Someone who said, "Ah, but I assign higher utility to doing it this way" had to be messing with you. Squiggly boundaries were less useful for inference; the only reason you would realistically want to use them would be to commit fraud, to pass off pyrite as gold by redefining the word "gold".

That was my intuition. To formalize it, I wanted some sensible numerical quantity that would be maximized by using "nice" categories and get trashed by gerrymandering. Mutual information was the obvious first guess, but that wasn't it, because mutual information lacks a "topology", a notion of "closeness" that would make some false predictions better than others by virtue of being "close".

Suppose the outcome space of X is {H, T} and the outcome space of Y is {1, 2, 3, 4, 5, 6, 7, 8}. I wanted to say that if observing X=H concentrates Y's probability mass on {1, 2, 3}, that's more useful than if it concentrates Y on {1, 5, 8}. But that would require the numerals in Y to be numbers rather than opaque labels; as far as elementary information theory was concerned, mapping eight states to three states reduced the entropy from log2 8 = 3 to log2 3 ≈ 1.58 no matter which three states they were.

How could I make this rigorous? Did I want to be talking about the variance of my features conditional on category membership? Was "connectedness" what I wanted, or was it only important because it cut down the number of possibilities? (There are 8!/(6!2!) = 28 ways to choose two elements from {1..8}, but only 7 ways to choose two contiguous elements.) I thought connectedness was intrinsically important, because we didn't just want few things, we wanted things that are similar enough to make similar decisions about.

I put the question to a few friends in July 2020 (Subject: "rubber duck philosophy"), and Jessica said that my identification of the variance as the key quantity sounded right: it amounted to the expected squared error of someone trying to guess the values of the features given the category. It was okay that this wasn't a purely information-theoretic criterion, because for problems involving guessing a numeric quantity, bits that get you closer to the right answer were more valuable than bits that didn't.
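A quick illustration of that criterion on the eight-state example above:

```python
import math

# After observing X=H, Y's probability mass is uniform over one of these:
compact = [1, 2, 3]    # a contiguous region of the outcome space
scattered = [1, 5, 8]  # same number of outcomes, spread far apart

def entropy_bits(support):
    # Entropy of a uniform distribution over `support`, in bits.
    p = 1 / len(support)
    return -len(support) * p * math.log2(p)

def expected_squared_error(support):
    # Variance of a uniform distribution over `support`: the expected
    # squared error of guessing its mean.
    mean = sum(support) / len(support)
    return sum((y - mean) ** 2 for y in support) / len(support)

for s in (compact, scattered):
    print(s, entropy_bits(s), expected_squared_error(s))

# Entropy drops from log2(8) = 3 bits to log2(3) ≈ 1.58 bits either way, so
# information theory sees no difference; but the variance is ≈ 0.67 for
# {1, 2, 3} versus ≈ 8.22 for {1, 5, 8}, so expected squared error does.
```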

Variance is a commonly chosen metric to optimize in these sorts of algorithms, yes, for essentially this reason. That said, most of the interesting discussion is in the exact nature of the Y, rather than in the metric used to measure it. When you are creating a classification system X which summarizes lots of noisy indicators Y₁, Y₂, ..., the algorithms that optimize for information (e.g. Latent Class Analysis, Latent Dirichlet Allocation, ...) usually seek the minimal amount of information that makes the indicators independent. When the indicators are noisy, the information in low-variance causes gets destroyed by the noise, so what remains to generate dependencies is the information in high-variance factors, and therefore seeking minimal shared information becomes equivalent to explaining maximum correlations. (And correlations are squared-error-based.) It's a standard empirical finding that different latent variable methods yield essentially the same latents when applied to essentially the same data.
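A sketch of the noise point, with PCA standing in for the fancier latent-variable methods (the loadings and variances are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
big = rng.normal(scale=1.0, size=n)    # high-variance shared cause
small = rng.normal(scale=0.2, size=n)  # low-variance shared cause
# Five indicators, each loading on both causes plus independent noise:
y = np.stack([big + small + rng.normal(size=n) for _ in range(5)], axis=1)

# Extract one summary dimension (the first principal component):
y_centered = y - y.mean(axis=0)
w = np.linalg.svd(y_centered, full_matrices=False)[2][0]
z_hat = y_centered @ w

print(abs(np.corrcoef(z_hat, big)[0, 1]))    # ≈ 0.9: recovered
print(abs(np.corrcoef(z_hat, small)[0, 1]))  # ≈ 0.2: drowned out by noise
```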

And yet, somehow, "have accurate beliefs" seemed more fundamental than other convergent instrumental subgoals like "seek power and resources". Could this be made precise? As a stab in the dark, was it possible that the theorems on the ubiquity of power-seeking might generalize to a similar conclusion about "accuracy-seeking"? If it didn't, the reason why it didn't might explain why accuracy seemed more fundamental.

The only robust way to avoid wireheading is that instead of taking actions which maximize your reward (or your expectation of your utility, or ...), you should 1) have a world-model, 2) have a pointer to the value in the world-model, 3) pick actions which your model thinks increase the-thing-pointed-to-by-the-value-pointer, and then execute those actions in reality.

This would prevent you from e.g. modifying your brain to believe that you had a high value, because if ahead of time you ask your world-model "would this lead to a lot of value?", the world-model can answer "no, it would lead to you falsely believing you had a lot of value".

This system is usually built into utility maximization models since in those models the utility function can be any random variable, but it is not usually built into reinforcement learning systems since those systems often assume value to be a function of observations.
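A toy sketch of the three-step recipe (the "gold" state variable and the two actions are invented for illustration):

```python
# 1) A world-model: predicts the next world-state for each action.
def world_model(state, action):
    s = dict(state)
    if action == "mine":
        s["gold"] += 1             # actually changes the world
        s["believed_gold"] += 1
    elif action == "wirehead":
        s["believed_gold"] += 100  # only changes the agent's belief
    return s

# 2) A value pointer: value is a function of the modeled world, not of
#    the agent's beliefs or observations.
def value(state):
    return state["gold"]

# 3) Action selection: ask the world-model "would this lead to a lot of
#    value?", then execute the winning action in reality.
def choose_action(state, actions):
    return max(actions, key=lambda a: value(world_model(state, a)))

state = {"gold": 5, "believed_gold": 5}
print(choose_action(state, ["mine", "wirehead"]))  # -> "mine"
```

An agent that instead scored actions by the predicted believed_gold (i.e., by a function of its own beliefs or observations) would choose to wirehead.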

comment by TAG · 2023-12-31T01:43:24.290Z · LW(p) · GW(p)

“Credibly helpful unsolicited criticism should be delivered in private,” he writes!

Does he apply that to himself? He appears to have criticized many people publicly, over the years.

Replies from: shankar-sivarajan
comment by Shankar Sivarajan (shankar-sivarajan) · 2023-12-31T03:13:40.265Z · LW(p) · GW(p)

He appears to have criticized many people publicly

Yes, but never helpfully.

comment by Unreal · 2023-12-31T00:17:00.948Z · LW(p) · GW(p)

I was bouncing around LessWrong and ran into this. I started reading it as though it were a normal post, but then I slowly realized ... 

I think according to typical LessWrong norms, it would be appropriate to try to engage you on the object level claims or talk about the meta-presentation as though you and I were trying to collaborate on figuring things out and how to communicate things.

But according to my personal norms and integrity, if I detect that something is actually quite off (like alarm bells going) then it would be kind of sick to ignore that, and we should actually treat this like a triage situation. Or at least a call to some kind of intervention. And it would be sick to treat this like everything is normal, and that you are sane, and I am sane, and we're just chatting about stuff and oh isn't the weather nice today. 

LessWrong is the wrong place for this to happen. This kind of "prioritization" sanity does not flourish here. 

Not-sane people get stuck on LessWrong in order to stay not-sane because LW actually reinforces a kind of mental unwellness and does not provide good escape routes. 

If you're going to write stuff on LW, it might be better to write a journal about the various personal, lifestyle interventions you are making to get out of the personal, unwell hole you are in. A kind of way to track your progress, get accountability, and celebrate wins. 

Replies from: rsaarelm
comment by rsaarelm · 2023-12-31T08:37:46.599Z · LW(p) · GW(p)

Is this your first time running into Zack's stuff? You sound like you're talking to someone showing up out of nowhere with a no-context crackpot manuscript and zero engagement with the community. Zack's post is about his actual engagement with the community over a decade; we've seen a bunch of the previous engagement (in pretty much the register we see here, so this doesn't look like an ongoing psychotic break), he's responsive to comments, and his thesis generally makes sense. This isn't drive-by crackpottery, and it's on LessWrong because it's about LessWrong.

Replies from: Viliam
comment by Viliam · 2023-12-31T14:39:09.710Z · LW(p) · GW(p)

I agree that Zack has a long history of engagement with the rationalist community, and that this post is a continuation of that history (in a predictable direction).

But that doesn't necessarily make this engagement sane.

From my perspective, Zack has a long-term obsession, and also he is smart enough to be popular on LessWrong despite the fact that practically everything he says is somehow connected to this obsession (and if for a moment it seems like it is not, that's just because he is preparing some convoluted meta argument that will later be used to support the obsession). I enjoy his writings, too, until something reminds me of "oh, this is going to be yet another meta argument in support of the belief that his erotic fantasy is the ultimate truth about the nature of trans-sexuality".

This isn't drive-by crackpottery, but it is a long-term crackpottery; and it is on LessWrong because the previous parts of it were on LessWrong. It is "about LessWrong" only in the sense that it is about Zack's previous writing on LessWrong and about his interactions with various people here. This very article, and this debate we are having now, will probably be used as a reason to write yet another article, etc.

At some moment we should reflect on the fact that we are probably enabling a mental illness, in a similar way as telling a paranoid person "you know, according to Bayes, the probability that CIA is following you is never exactly 0", or telling a depressed person "according to the second law of thermodynamics, everything you care about will be destroyed one day, probably sooner rather than later".

It is interesting to list various reasons why "corrupted" people may oppose X, but none of that actually proves that X is true.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-31T15:16:04.840Z · LW(p) · GW(p)

Even if it's true that he's obsessed with it and everything he writes is somehow connected to it - what's the problem with that? Couldn't you have said the same thing about Eliezer and AI? I bet there were lots of important contributions that were made by people following an obsession, even to their own detriment.

To me the question is whether it's true and valuable (I think so), not whether he's obsessed. 

Replies from: Viliam
comment by Viliam · 2023-12-31T15:35:50.461Z · LW(p) · GW(p)

To me the question is whether it's true

I agree, and I would like to see the evidence.

What I get instead are indirect arguments like "people who disagree with me only do so for political reasons" (and "the entire rationalist community is corrupt, they are enemies and we are at war" and more such nonsense). That proves nothing. For example, people may also disagree with false statements for political reasons.

Replies from: SaidAchmiz, Yoav Ravid
comment by Said Achmiz (SaidAchmiz) · 2023-12-31T16:31:23.446Z · LW(p) · GW(p)

This is really a very strange criticism. Zack has been writing direct arguments, and evidence, for literal years now. You’re acting as if this is the first post he’s ever written on this subject!

Replies from: Viliam
comment by Viliam · 2023-12-31T20:31:58.506Z · LW(p) · GW(p)

Looking at the history of Zack's writing on LW...

"Dreaming of Political Bayescraft [LW · GW]" - nice and short.

"An Intuition on the Bayes-Structural Justification for Free Speech Norms [LW · GW]" - already goes meta about how human speech contains "a zero-sum social-control/memetic-warfare component".

"Change [LW · GW]" - a story explaining how a word can have two different meanings.

"Blegg Mode [LW · GW]" - a metaphor for something; the top comment says "I don't understand what point are you trying to make" and I agree.

"Where to Draw the Boundaries? [LW · GW]" - long but good.

"But It Doesn't Matter" [LW · GW] - short meta.

...I will stop here, but I think the pattern is visible. Zack keeps talking meta, sometimes he makes some great points and gets upvoted, sometimes the readers are confused. It takes him a very long time to get to his final point.

Unlike the Sequences, which push the reader from point A to point Z ("there is no supernatural", "therefore human intelligence is made of atoms", "therefore it is possible to make an intelligence out of silicon atoms", etc.), Zack's articles are dancing around the topic: going more meta to gain readers, going closer to the object level to lose them again, etc.

If there is a direct argument that fits into one screen of text, I would like to read it.

Replies from: SaidAchmiz, abandon
comment by Said Achmiz (SaidAchmiz) · 2023-12-31T20:59:27.051Z · LW(p) · GW(p)

If there is a direct argument that fits into one screen of text, I would like to read it.

If there isn’t a direct argument that fits into one screen of text, then…?

Zack is thereby proven wrong? The topic is thereby proven to be irrelevant? What?

Replies from: Viliam
comment by Viliam · 2023-12-31T23:20:15.503Z · LW(p) · GW(p)

Even if Zack happens to be right, the fact that people do not update about something they don't care about and which cannot be sufficiently simply explained, is not evidence of them being "fake", "corrupt", "epistemically rotten", "enemy combatants", or any other hysterical hyperbole.

Heck, I am not even saying that Blanchard is wrong (assuming that this was all about him, which I am not sure); from my perspective he might be right, or he might be wrong, or he might be right about some things or some people and wrong about other things or other people... I don't know, I do not have enough data to make an opinion on this, and I see no reason why I should spend my time figuring this out, and I see no reason why I should trust Zack's opinion on this.

The part that I do have an opinion on is that redefining the word "woman" to mean "legally woman" rather than "biologically woman" is not a choice that I would make, but that doesn't make it wrong per se. I would have voted against it, but I am not going to fight against it. (Also, this is unrelated to whether Blanchard is right or wrong.) Pluto is not a planet anymore.

This is not because I am too scared to express a politically incorrect opinion (I don't live in the USA), or because I am afraid to disagree with the rationalist consensus (I had my own battles). From my perspective, it actually feels like Zack is the one who is pushing me to adopt an opinion for a wrong reason (to avoid his accusations; to be seen as brave and edgy rather than hypocritical and boring), and these comments are me pushing back.

Replies from: Kalciphoz, SaidAchmiz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-02T21:30:07.242Z · LW(p) · GW(p)

Even if Zack happens to be right, the fact that people do not update about something they don't care about and which cannot be sufficiently simply explained, is not evidence of them being "fake", "corrupt", "epistemically rotten", "enemy combatants", or any other hysterical hyperbole.

The complexity you complain about is not Zack's fault. His detractors engage in endless evasiveness, including God-of-the-gaps-style arguments as ChristianKl pointed out, and walking back an entire LW sequence that was previously non-controversial, simply because it has become politically inconvenient. The reception is so hostile that Zack is required to go practically all the way back to first principles, even needing to briefly revisit modus ponens.

Phrases like "epistemically rotten" and "enemy combatants" are not a hysterical hyperbole to describe that. Zack chooses these terms because he is too agreeable to call a spade a spade and point out that the rationalist community has become outright evil.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2024-01-03T04:57:54.367Z · LW(p) · GW(p)

I think it's also worth emphasizing that the use of the phrase "enemy combatants" was in an account of something Michael Vassar said in informal correspondence, rather than being a description I necessarily expect readers of the account to agree with (because I didn't agree with it at the time). Michael meant something very specific by the metaphor, which I explain in the next paragraph. In case my paraphrased explanation wasn't sufficient, his exact words were:

The latter frame ["enemy combatants"] is more accurate both because criminals have rights and because enemy combatants aren't particularly blameworthy. They exist under a blameworthy moral order and for you to act in their interests implies acting against their current efforts, at least temporary [sic], but you probably would like to execute on a Marshall Plan later.

I think the thing Michael actually meant (right or wrong) is more interesting than a "Hysterical hyperbole!" "Is not!" "Is too!" grudge match.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-03T10:17:41.303Z · LW(p) · GW(p)

I guess it's just not very clear to me why Michael Vassar doesn't consider them to be highly blameworthy.

comment by Said Achmiz (SaidAchmiz) · 2023-12-31T23:57:49.352Z · LW(p) · GW(p)

Even if Zack happens to be right, the fact that people do not update about something they don’t care about and which cannot be sufficiently simply explained, is not evidence of them being “fake”, “corrupt”, “epistemically rotten”, “enemy combatants”, or any other hysterical hyperbole.

That’s as may be… but surely the threshold for “sufficiently simply” isn’t as low as one screen of text…?

Heck, I am not even saying that Blanchard is wrong (assuming that this was all about him, which I am not sure); from my perspective he might be right, or he might be wrong, or he might be right about some things or some people and wrong about other things or other people… I don’t know, I do not have enough data to make an opinion on this, and I see no reason why I should spend my time figuring this out, and I see no reason why I should trust Zack’s opinion on this.

I don’t particularly have an opinion about this either, but what has this to do with anything, really…? The OP mentions Blanchard twice in 19,000 words… very little in this discussion hinges on whether Blanchard is right or wrong.

The part that I do have an opinion on is that redefining the word “woman” to mean “legally woman” rather than “biologically woman” is not a choice that I would make, but that doesn’t make it wrong per se. I would have voted against it, but I am not going to fight against it.

Neither “legally woman” nor “biologically woman” can possibly serve as definitions of “woman”, for obvious reasons of circularity. In any case you’re… attempting to have this debate at almost the maximally naive level, as if nobody, much less Zack, has written anything about the topic. This is silly.

Pluto is not a planet anymore.

You’ve been on Less Wrong long enough to know better than this sort of nonsense.

From my perspective, it actually feels like Zack is the one who is pushing me to adopt an opinion for a wrong reason (to avoid his accusations; to be seen as brave and edgy rather than hypocritical and boring), and these comments are me pushing back.

What opinion do you think Zack is pushing you to adopt, exactly?

Replies from: Viliam, lahwran
comment by Viliam · 2024-01-01T16:08:07.943Z · LW(p) · GW(p)

surely the threshold for “sufficiently simply” isn’t as low as one screen of text…?

Most scientific papers have an abstract that is shorter than one screen.

what has this to do with anything, really…?

What opinion do you think Zack is pushing you to adopt, exactly?

I don't know, and that's my point, kind of.

*

My current best guess is that Zack essentially makes two separate claims:

First, he seems to make some object-level claim. (Or maybe multiple object-level claims.) And no matter how many of his long texts I read, I still have a problem pinpointing what exactly the object-level claim is. Some people seem to say that the object-level claims are obvious, but even they can't tell me what exactly they are. It all seems to be related to trans-sexuality, because that is a topic Zack keeps returning to. It seems to somehow contradict the mainstream narrative, otherwise Zack wouldn't keep making such a big deal out of it. This is about all I can say about it.

Second (this part I am a little more certain of), Zack also makes a meta-level claim that the rationalist community is "corrupt" and "epistemically rotten" for disagreeing with his object-level claim, whatever it is. This gets upvoted; I am not sure whether it's because people literally agree with that claim, or they just enjoy watching the drama, or it's some game of vague political connotations (I suspect that it's the last one, and that a vote for Zack is somehow a vote for contrarianism and against political correctness or something like that).

I resent being called corrupt for not agreeing with something that was never clearly communicated to me in the first place.

I am trying to cooperate on figuring out what Zack's object-level claim actually is, but apparently this does not work -- maybe I am doing a bad job here, but I am starting to suspect that this is actually a feature, not a bug (if a claim is never made clearly, no one can disprove it).

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2024-01-02T06:51:14.599Z · LW(p) · GW(p)

Does this help? (159 words and one hyperlink to a 16-page paper)

Empirical Claim: late-onset gender dysphoria in males is not an intersex condition.

Summary of Evidence for the Empirical Claim: see "Autogynephilia and the Typology of Male-to-Female Transsexualism: Concepts and Controversies" by Anne Lawrence, published in European Psychologist. (Not by me!)

Philosophical Claim: categories are useful insofar as they compress information by "carving reality at the joints"; in particular, whether a categorization makes someone happy or sad is not relevant.

Sociological Claim: the extent to which a prominence-weighted sample of the rationalist community has refused to credit the Empirical or Philosophical Claims even when presented with strong arguments and evidence is a reason to distrust the community's collective sanity.

Caveat to the Sociological Claim: the Sociological Claim about a prominence-weighted sample of an amorphous collective doesn't reflect poorly on individual readers of lesswrong.com who weren't involved in the discussions in question and don't even live in America, let alone Berkeley.

Replies from: TekhneMakre, tailcalled, Viliam, Viliam
comment by TekhneMakre · 2024-01-02T17:05:31.354Z · LW(p) · GW(p)

categories are useful insofar as they compress information by "carving reality at the joints";

I think from context you're saying "...are only useful insofar...". Is that what you're saying? If so, I disagree with the claim. Compressing information is a key way in which categories are useful. Another key way in which categories are useful is compressing actions, so that you can in a convenient way decide and communicate about e.g. "I'm gonna climb that hill now". More to the point, calling someone "he" is mixing these two things together: you're both kinda-sorta claiming the person has XY chromosomes, is taller-on-average, has a penis, etc.; and also kinda-sorta saying "Let's treat this person in ways that people tend to treat men". "He" compresses the cluster, and also is a button you can push to treat people in that way. These two things are obviously connected, but they aren't perfectly identical. Whether or not the actions you take make someone happy or sad is relevant.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2024-01-03T06:49:10.571Z · LW(p) · GW(p)

Sorry, the 159-word version leaves out some detail. I agree that categories are often used to communicate action intentions.

The academic literature on signaling in nature mentions that certain prey animals have different alarm calls for terrestrial or aerial predators, which elicit different evasive maneuvers: for example, vervet monkeys will climb trees when there's a leopard or hide under bushes when there's an eagle. This raises the philosophical question of what the different alarm calls "mean": is a barking vervet making the denotative statement, "There is a leopard", or is it a command, "Climb!"?

The thing is, whether you take the "statement" or the "command" interpretation (or decline the false dichotomy), there are the same functionalist criteria for when each alarm call makes sense, which have to do with the state of reality: the leopard being there "in the territory" is what makes the climbing action called for.

The same is true when we're trying to make decisions to make people happy. Suppose I'm sad about being ugly, and want to be pretty instead. It wouldn't be helping me to say, "Okay, let's redefine the word 'pretty' such that it includes you", because the original concept of "pretty" in my map was tracking features of the territory that I care about (about how people appraise and react to my appearance), which gets broken if you change the map without changing the territory.

I don't think it's plausible to posit an agent that wants to be categorized in a particular way in the map, without that category tracking something in the territory. Where would such a pathological preference come from?

Replies from: TekhneMakre, Viliam
comment by TekhneMakre · 2024-01-03T07:12:08.014Z · LW(p) · GW(p)

If someone wants to be classified as "... has XY chromosomes, is taller-on-average, has a penis..." and they aren't that, then it's a pathological preference, yeah. But categories aren't just for describing territory, they're also for coding actions. If a human says "Climb!" to another human, is that a claim about the territory? You can try to infer a claim about reality, like "There's something in reality that makes it really valuable for you to climb right now, assuming you have the goals that I assume you have".

If someone says "call me 'he' ", it could be a pathological preference. Or it could be a preference to be treated by others with the male-role bundle of actions. That preference could be in conflict with others' preferences, because others might only want to treat a person with the male-role bundle if that person "... has XY chromosomes, is taller-on-average, has a penis..." . Probably it's both, and they haven't properly separated out their preferences / society hasn't made it convenient for them to separate out their preferences / there's a conflict about treatment that is preventing anyone from sorting out their preferences.

"Okay, let's redefine the word 'pretty' such that it includes you" actually makes some sense. Specifically, it's an appeal to anti-lookism. It's of course confused, because ugliness is also an objective thing. And it's a conflict, because most people want to treat ugly people differently than they treat pretty people, so the request to be treated like a pretty person is being refused.

Replies from: tailcalled
comment by tailcalled · 2024-01-03T11:30:59.676Z · LW(p) · GW(p)

If a human says "Climb!" to another human, is that a claim about the territory?

Can you add more context? Are you talking about an experienced fighter who has been cornered by enemies with a less-experienced friend? A personal trainer whose trainee has been taking a 5 minute break from rock climbing? Something else?

Replies from: TekhneMakre
comment by TekhneMakre · 2024-01-03T11:47:20.748Z · LW(p) · GW(p)

Any of them. My point is that "Climb!" is kind of like a message about the territory, in that you can infer things from someone saying it, in that it can be intended to communicate something about the territory, and in that it can be part of a convention where "Climb!" means "There's a bear!" or whatever; but still, "Climb!" is, besides being an imperative, a word that's being used to bundle actions together. Actions are kinda part of the territory, but as actions they're also sort of internal to the speaker (in the same way that a map is also part of the territory but is also internal to the speaker) and so have some special status. Part of that special status is that your actions, and how you bundle your actions, are up to your choice, in a way that it's not up to your choice whether there's a biological male/female approximate-cluster-approximate-dichotomy, or whether 2+4=6, etc.

comment by Viliam · 2024-01-07T11:13:19.736Z · LW(p) · GW(p)

Suppose I'm sad about being ugly, and want to be pretty instead. It wouldn't be helping me to say, "Okay, let's redefine the word 'pretty' such that it includes you", because the original concept of "pretty" in my map was tracking features of the territory that I care about (about how people appraise and react to my appearance), which gets broken if you change the map without changing the territory.

Yes, but also if people bully you for being ugly, maybe a ban on bullying is an effective action.

(Unpacking the metaphor: sometimes there are multiple reasons why a person wants to do X, and some of them cannot be helped by a certain kind of action, but some could be. Then it depends on how the person will feel about the partial success.)

comment by tailcalled · 2024-01-02T11:13:12.072Z · LW(p) · GW(p)

Disagree with the sociological claim because the Blanchardian arguments for the empirical claim are baaaaaaaad and it's pretty reasonable to not credit an empirical claim when the arguments presented for it are so bad.

One could still defend the sociological claim on the basis of the philosophical claim, but at the same time I have the impression that there's some hesitance there, partly because they are so confused about the arguments around the empirical claim.

comment by Viliam · 2024-01-07T14:40:39.262Z · LW(p) · GW(p)

Commenting on the linked article, as I read it:

biologic males who seek sex reassignment ... are not a homogeneous clinical population but comprise two or more distinct subtypes with different symptoms and developmental trajectories.

Sounds likely. (Betting on "it's complicated" is usually a safe bet.)

In 1989, psychologist Ray Blanchard proposed that most nonandrophilic MtF transsexuals display a paraphilic sexual orientation called autogynephilia, defined as the propensity to be sexually aroused by the thought or image of oneself as a woman.

Taking this sentence literally, it only says p(E|X) > 0.5, but it seems to imply that p(E|~X) < 0.5.

As an analogy, if I said "most nonandrophilic MtF transsexuals drink Coke", the fact that I consider this relevant to the topic would imply that drinking Coke is an unusual activity among people who are not nonandrophilic MtF transsexuals. So, is it really? Because if it is not, why are we even discussing this?
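(Spelling out the implicit Bayes: the evidential weight of E is carried by the likelihood ratio, not by p(E|X) alone:

$$\frac{P(X \mid E)}{P(\lnot X \mid E)} = \frac{P(X)}{P(\lnot X)} \cdot \frac{P(E \mid X)}{P(E \mid \lnot X)}$$

so a figure like the 73% quoted further down is only informative once paired with the comparison group's 15%.)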

the hypothesis that almost all nonandrophilic MtF transsexuals are autogynephilic, whereas almost all androphilic MtF transsexuals are not.

Okay, they got this part covered. But for completeness, I would also like to know the prevalence of autogynephilia among cis men, and among cis women. Because different answers would give different pictures of reality. Is it "nonandrophilic MtF have this special trait" or rather "androphilic MtF have this special trait", compared to cis men? And is it "nonandrophilic MtF have this special trait that makes them different from everyone else" or "nonandrophilic MtF have this trait that is special among men, but normal among women"? (Actually, since we divide MtF to androphilic and nonandrophilic, it would also make sense to make separate statistics for cis men and women by their sexual orientation.)

Also, this is probably answered somewhere, but I suppose that autogynephilia exists on a spectrum: some people may be aroused by a thought in some situation but not in another, the arousal may be weaker or stronger, it may be a once-in-a-lifetime event or a permanent obsession... The reason I am saying this is because it is easy to change the conclusion by just rounding up the values for different groups differently. (Also, recently I had to answer in a psychological test "did you ever think about suicide?", and I was like: WTF does this even mean? If I just thought about suicide once, and rejected the idea after a fraction of a second as obviously wrong, that too would technically qualify as "thinking about suicide", wouldn't it? But the test treated such answer as a red flag, so... maybe no?)

Okay, moving past the abstract...

MtF transsexuals significantly outnumber their FtM counterparts

Interesting. I wonder how this relates to the fact that it is socially acceptable for a woman to take a traditional man's role (it seems like a large part of feminism is about enabling this)... but this is unrelated to the original topic.

Typologies based on sexual orientation have been more widely utilized and were relatively uncontroversial until about 2003.  ... these ... typologies have often been simplified to distinguish only two fundamental subtypes: persons attracted exclusively to males (androphilic MtF transsexuals) and persons attracted to females, males and females, or neither gender (nonandrophilic MtF transsexuals).

Interesting.

In the late 1980s, psychologist Ray Blanchard proposed that almost all nonandrophilic MtF transsexuals exhibit a paraphilic sexual orientation he called autogynephilia (literally “love of oneself as a woman”), which he formally defined as “a male’s propensity to be sexually aroused by the thought of himself as a female”.

Okay. I see where this is coming from. Based on the previous, it seems like there are, to put it simply, "gay MtF" and "something-else MtF" (and this part is uncontroversial? or at least has been for a long time?), and Blanchard is proposing a hypothesis on what that "something else" could be.

What rubs me the wrong way is the "a male's propensity to..." part of the definition. I mean, why not simply define it as "a propensity to...", and then talk about the prevalence of autogynephilia among men? (Maybe I am just overthinking it and Blanchard would say: whatever.)

Adding to my list of "things that also should be investigated": the same in the opposite direction; how many cis men and cis women are sexually aroused by the thought of themselves being male?

Autogynephilia became a controversial topic after it was discussed in a contentious book by psychologist Bailey (2003).

Okay, the rabbit hole goes deep. I am kinda curious what exactly was controversial about that book, but I am already giving this more time than I originally wanted.

He found that 73% of [nonandrophilic] participants reported a history of sexual arousal with cross-dressing, compared with only 15% of the androphilic participants.

I guess I have an opposite reaction to most people here, because it is the lack of autogynephilia that I find interesting. I mean: "you want to be a woman, you want it so much that you are willing to take hormones and cut off your penis... and yet you don't find the thought of being a woman exciting?"

It seems to me that the nonandrophilic MtF match the non-scientific description of "having a female brain, but living in a male body", while the androphilic MtF seem just like... gays who want to be compatible with a heteronormative society? ("I want to be a woman so that I can have sex with men, without being a gay man" vs "I want to be a woman, because being a woman is awesome!!! omg I get an orgasm just from imagining it!!!")

Blanchard theorized that a substantial number of fundamentally gynephilic MtF transsexuals develop a secondary sexual interest in male partners – he called this interest pseudoandrophilia – based on the autogynephilic desire to have their femininity validated by the admiration or sexual interest of men.

OK, I am a bit confused here. We started with defining nonandrophilic MtF transsexuals as "persons attracted to females, males and females, or neither gender", and now we have a theory that they (a substantial number of them) have a secondary sexual interest in male partners.

(It sounds a bit like: "Men are either gay or not-gay. Blanchard theorized that a substantial number of non-gay men are gay... as a secondary sexual interest." That is, I am not surprised by the statement that many men are gay, per se, but I am surprised that the statement was made specifically about a group that was originally defined as non-gay.)

What is the reason for this hypothesis and what are the data supporting it?

MtF transsexuals and transgender persons routinely minimize or deny autogynephilic arousal in association with cross-dressing or cross-gender fantasy for reasons that probably are often unintentional but sometimes are clearly deliberate. 

I could imagine a good reason for that. Suppose that you have a few non-sexual reasons for X, but also a sexual reason for X. If you admit that you have the sexual reason, most people are going to dismiss all the non-sexual reasons as mere rationalizations. So you deny or minimize the sexual reason, as a way to express that the non-sexual reasons are valid.

In some cases, autogynephilic MtF transsexuals who claim to be attracted to men may simply be experiencing attraction to the idea of having their femininity validated by men, a different phenomenon.

Makes me think, how does this compare to cishet women? How many of them had their first sexual experience because they wanted to have their femininity validated by a man?

Anyway, when we have people lying to researchers about their actual feelings, we are in a tricky epistemic situation. (On one hand we can't use "the subjects say no" as a definite falsification of a hypothesis. On the other hand, how else to evaluate the hypotheses, beyond "sounds plausible to me"?)

all gender dysphoric males who are not sexually oriented toward men are instead sexually oriented toward the thought or image of themselves as women

Didn't we have "a substantial number" a few paragraphs ago, and now it is "all"?

Autogynephilia might be better characterized as an orientation than as a paraphilia.

Sounds like debating definitions, and... I think I disagree? Maybe this is just because the article is a short summary of a longer argument, but it feels like a jump from "excited by X" to "excited only by X". (If a cishet man is sexually excited by the thought of having a male body, does this automatically make it an orientation?)

...oops, still just page 5 of 16, but I hope that I have communicated some concerns clearly. Even if most things seem plausible to me, at some points it feels like an unwarranted jump to conclusions.

Replies from: tailcalled
comment by tailcalled · 2024-01-07T15:30:56.152Z · LW(p) · GW(p)

But for completeness, I would also like to know the prevalence of autogynephilia among cis men

It's somewhat unclear, but it probably looks something like this:

[chart: rates of cross-gender sexuality among cis men; image not reproduced in this copy]

where "CGS" is an abbreviation of "cross-gender sexuality", and covers stuff like this (from a different survey).

and this part is uncontroversial? or at least has been for a long time?

I mean, it is certainly uncontroversial that some trans women are exclusively attracted to men and some trans women are not exclusively attracted to men. But that presumably has something to do with the fact that you see the same in other demographics, e.g. cis men or cis women, where some are attracted to men and some are not, as well as with the fact that most trans women are open about their orientation and there are plenty of trans women of each orientation available.

However, Blanchardians tend to go motte-and-bailey a lot with this. They add a lot of additional claims on top, and then put forth the position that these additional claims are also part of the uncontroversial knowledge; and obviously, the more claims you add, the less uncontroversial the whole becomes. They also have the advantage that it used to be only a handful of academics and clinicians discussing it, so "uncontroversial" within that handful of people isn't as significant as "uncontroversial" today.

What rubs me the wrong way is the "a male's propensity to..." part of the definition. I mean, why not simply define it as "a propensity to...", and then talk about the prevalence of autogynephilia among men? (Maybe I am just overthinking it and Blanchard would say: whatever.)

You're not overthinking it; Blanchardians constantly do this sort of thing, where they try to establish their ideas as true by definition. (Another example: I've studied autogynephilia in gay men, and Blanchardians have tended to say that this is definitionally impossible.)

comment by Viliam · 2024-01-02T12:19:38.800Z · LW(p) · GW(p)

Thank you for the summary!

(I apologize, the timing is unfortunate, I am leaving for a one-week vacation without internet access right now, so I can't give you a response this would deserve. Perhaps later [LW(p) · GW(p)].)

comment by the gears to ascension (lahwran) · 2024-01-01T00:31:55.181Z · LW(p) · GW(p)

That’s as may be… but surely the threshold for “sufficiently simply” isn’t as low as one screen of text…?

this does not seem like an impossible requirement for almost any scoped argument I can remember seeing (that is, a claim which is not inherently a conjunction of dozens of subclaims), including some very advanced math ones. granted, by making it fit on one screen you often get something shockingly dense. but you don't need more than about 500 words to make most coherent arguments. the question is whether it would increase clarity to compress it like that. and I claim without evidence that the answer is generally that the best explanation of a claim is in fact this short, though it's not guaranteed that one has the time and effort available to figure out how to precisely specify the claim in that few words; often, trying to precisely specify something in few words runs into "those words are not precisely defined in the mind of the readers" issues, a favorite topic of Davis.

(I believe this to apply to even things that people spend hundreds of thousands of words on on this site, such as "is ai dangerous". that it took yudkowsky many blog posts to make the point does not mean that a coherent one-shot argument needs to be that long, as long as it's using existing words well. It might be the case that the concise argument is drastically worse at bridging inferential gaps, but I don't think it need be impossible to specify!)

comment by dirk (abandon) · 2023-12-31T21:12:49.247Z · LW(p) · GW(p)

AIUI the actual arguments are over on Zack's blog due to being (in Zack's judgement) Too Spicy For LessWrong (that is, about trans people). (Short version, Blanchardianism coupled with the opinion that most people who disagree are ignoring obvious truths about sex differences for political reasons; I expect the long version is more carefully-reasoned than is apparent in this perhaps-uncharitable summary.)

comment by Yoav Ravid · 2023-12-31T16:05:08.728Z · LW(p) · GW(p)

Can you say exactly which claims Zack is making without showing enough evidence? Is it one or more of these

(1) For all nouns N, you can't define N any way you want, for at least 37 reasons [LW · GW].

(2) Woman is such a noun.

(3) Therefore, you can't define the word woman any way you want.

Or something else?

Replies from: Viliam
comment by Viliam · 2023-12-31T19:05:22.829Z · LW(p) · GW(p)

I agree with all of this.

But there is a space between "any way you want" and "only one possible way".

Is Mona Lisa (the painting) a woman? Paintings do not have chromosomes, and many of them do not even have sexual organs. Yet if I say "Mona Lisa is a woman", it is true in some meaningful sense... and false in some other meaningful sense.

Sometimes you use one bucket for things, and then you find out that you need two. Which one of the new buckets should inherit the original name... is a social/political choice. I may disagree with the choice, but that doesn't make it wrong. If you want to be unambiguous, use an adjective, for example "trans women are not biological women" or "trans women are legally considered women".

(Just like tomato is biologically a fruit but legally a vegetable; carrot is legally a fruit in the EU; and ketchup is legally a vegetable in the USA.)

comment by Vladimir_Nesov · 2023-12-31T07:15:22.421Z · LW(p) · GW(p)

There is no global clarity, not even in math. There are islands of framing that make reasoning locally work. They benefit from being small and robust, cheap to master and not requiring correct nuance to follow. Mountains of wisdom can be built out of such building blocks, relying on each other but making sense on their own. Occasionally contradicting each other or not making sense in each other's language.

This doesn't help with many complicated questions afflicted by necessity of nuance, where clarity is currently infeasible. A productive activity is finding small and robust observations inspired by such questions, working towards a future wisdom that would be able to digest them entirely.

comment by MSRayne · 2024-01-15T16:19:08.392Z · LW(p) · GW(p)

I am not the best at writing thorough comments because I am more of a Redditor than a LessWronger, but I just want you to know that I read the entire post over the course of ~2.5 hours and I support you wholeheartedly and think you're doing something very important. I've never been part of the rationalist "community" and don't want to be (I am not a rationalist, I am a person who strives weakly for rationality, among many other strivings), particularly after reading all this, but I definitely expected better out of it than I've seen lately. But perhaps I shouldn't; the few self-identified rationalists I've interacted with one on one have mostly seemed like... at best, very strange people to me. And Eliezer has always, honestly, struck me as a dangerous narcissist whose interest in truth is secondary to his interest in being the Glorious Hero. I don't want to go to the effort of replying to specific things you said - and you don't know who I am and probably won't read this anyway - but yeah, just, I'm glad you said them.

comment by Zane · 2024-01-08T19:28:30.002Z · LW(p) · GW(p)

Previously, I had already thought it was nuts that trans ideology was exerting influence on the rearing of gender-non-conforming children—that is, children who are far outside the typical norm of behavior for their sex: very tomboyish girls and very effeminate boys.

Under recent historical conditions in the West, these kids were mostly "pre-gay" rather than trans. (The stereotype about lesbians being masculine and gay men being feminine is, like most stereotypes, basically true: sex-atypical childhood behavior between gay and straight adults has been meta-analyzed at [? · GW] Cohen's d ≈ 1.31 standard deviations for men and d ≈ 0.96 for women.) A solid majority of children diagnosed with gender dysphoria ended up growing out of it by puberty [? · GW]. In the culture of the current year, it seemed likely that a lot of those kids would instead get affirmed into a cross-sex identity at a young age, even though most of them would have otherwise (under a "watchful waiting" protocol [? · GW]) grown up to be ordinary gay men and lesbians.

I think I might be confused about what your position is here. As I understood the two-type taxonomy theory, the claim was that while some "trans women" really were unusually feminine compared to typical men, most of them were just non-feminine men who were blinded into transitioning by autogynephilia. But the early-onset group, as I understood the theory, were the ones who really were trans? Your whole objection to people classifying autogynephilic people as "trans women" was that they didn't actually have traits drawn from a female distribution, and so modelling them as women would be less accurate than modelling them as men. But if members of the early-onset group really do behave in a way more typical of femininity than masculinity, then that would mean they essentially are "women on the inside, men on the outside."

Am I missing something about your views here?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2024-01-12T06:26:23.434Z · LW(p) · GW(p)

"Essentially are" is too strong. (Sex is still real, even if some people have sex-atypical psychology.) In accordance with not doing policy, I don't claim to know under what conditions kids in the early-onset taxon should be affirmed early: maybe it's a good decision. But whether or not it turns out to be a good decision, I think it's increasingly not being made for the right reasons; the change in our culture between 2013 and 2023 does not seem sane.

Replies from: Zane
comment by Zane · 2024-01-12T20:30:24.433Z · LW(p) · GW(p)

If a person has a personality that's pretty much female, but a male body, then thinking of them as a woman will be a much more accurate model of them for predicting anything that doesn't hinge on external characteristics. I think the argument that society should consider such a person to be a woman for most practical purposes is locally valid, even if you reject that the premise is true in many cases.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2024-01-12T22:13:07.783Z · LW(p) · GW(p)

If a person has a personality that's pretty much female, but a male body, then thinking of them as a woman will be a much more accurate model of them for predicting anything that doesn't hinge on external characteristics. I think the argument that society should consider such a person to be a woman for most practical purposes is locally valid, even if you reject that the premise is true in many cases.

I have to point out that if this logic applies symmetrically, it implies that Aella should be viewed as a man. (She scored 95% male on the gender-continuum test, which is much more than the average man. I don't have a link, unfortunately, and there's a small chance that I'm switching up two tests here.) But she clearly views herself as a woman, and I'm not sure you think that society should consider her a man for most practical purposes (although probably for some?).

You could amend the claim by the condition that the person wants to be seen as the other gender, but conditioning on preference sort of goes against the point you're trying to make.

Replies from: Zane
comment by Zane · 2024-01-12T23:28:42.387Z · LW(p) · GW(p)

Fair. I do indeed endorse the claim that Aella, or other people who are similar in this regard, can be more accurately modelled as a man than as a woman - that is to say, if you're trying to predict some yet-unmeasured variable about Aella that doesn't seem to be affected by physical characteristics, you'll have better results by predicting her as you would a typical man, than as you would a typical woman. Aella probably really is more of a man than a woman, as far as minds go.

But your mentioning this does make me realize that I never really had a clear meaning in mind when I said "society should consider such a person to be a woman for most practical purposes." When I try to think of ways that men and women should be treated differently, I mostly come up blank. And the ways that do come to mind are mostly about physical sex rather than gender - i.e. sports. I guess my actual position is "yeah, Aella is probably male with regard to personality, but this should not be relevant to how society treats ?her."

Replies from: Zack_M_Davis, sil-ver
comment by Zack_M_Davis · 2024-01-14T23:32:14.389Z · LW(p) · GW(p)

Consider a biased coin that comes up Heads with probability 0.8. Suppose that in a series of 20 flips of such a coin, the 7th through 11th flips came up Tails. I think it's possible to simultaneously notice this unusual fact about that particular sequence, without concluding, "We should consider this sequence as having come from a Tails-biased coin." (The distributions include the outliers, even though there are fewer of them.)
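(To spell out the arithmetic, supposing the alternative hypothesis is a coin biased 0.8 toward Tails: the full sequence of 15 Heads and 5 Tails still favors the Heads-biased coin by

$$\frac{0.8^{15} \cdot 0.2^{5}}{0.2^{15} \cdot 0.8^{5}} = 4^{10} \approx 10^{6}$$

to one.)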

I agree that Aella is an atypical woman along several related dimensions. It would be bad and sexist if Society were to deny or erase that. But Aella also ... has worked as an escort? If you're writing a biography of Aella, there are going to be a lot of detailed Aella Facts that only make sense in light of [LW · GW] the fact that she's female. The sense in which she's atypically masculine is going to be different from the sense in which butch lesbians are atypically masculine.

I'm definitely not arguing that everyone should be forced into restrictive gender stereotypes. (I'm not a typical male either.) I'm saying a subtler thing about the properties of high-dimensional probability distributions. If you want to ditch the restricting labels and try to just talk about the probability distributions (at the expense of using more words), I'm happy to do that. My philosophical grudge is specifically against people saying, "We can rearrange the labels to make people happy."

Replies from: Zane
comment by Zane · 2024-01-15T20:40:03.690Z · LW(p) · GW(p)

The question, then, is whether a given person is just an outlier by coincidence, or whether the underlying causal mechanisms that created their personality actually are coming from some internal gender-variable being flipped. (The theory being, perhaps, that early-onset gender dysphoria is an intersex condition, to quote the immortal words of a certain tribute band.)

If it was just that biological females sometimes happened to have a couple traits that were masculine - and these traits seemed to be at random, and uncorrelated - then that wouldn't imply anything beyond "well, every distribution has a couple outliers." But when you see that lesbians - women who have the typically masculine trait of attraction to women - are also unusually likely to have other typically masculine traits - then that implies that there's something else going on. Such as, some of them really do have "male brains" in some sense.

And there are so many different personality traits that are correlated with gender (at least 18, according to the test mentioned above, and probably many more that can't be tested as easily) that it's very unlikely someone would have an opposite-sex personality just by chance alone. That's why I'd guess that a lot of the feminine "men" and masculine "women" really do have some sort of intersex condition where their gender-variable is flipped. (Although there are some cultural confounders too, like people unconsciously conforming to stereotypes about how gay people act.)

I completely agree that dividing everyone between "male" and "female" isn't enough to capture all the nuance associated with gender, and would much prefer that we used more words than that. But if, as the world often seems to expect, we have to approximate all of someone's character traits with only a single binary label... then there are a lot of people for whom it's more accurate to use the one that doesn't match their sex.

comment by Rafael Harth (sil-ver) · 2024-01-13T08:42:16.588Z · LW(p) · GW(p)

I do indeed endorse the claim that Aella, or other people who are similar in this regard, can be more accurately modelled as a man than as a woman

I think that's fair -- in fact, the test itself is evidence that the claim is literally true in some ways. I didn't mean the comment as a reductio ad absurdum, more as a "something here isn't quite right (though I'm not sure what)". Though I think you've identified what it is with the second paragraph.

comment by Eli Tyre (elityre) · 2024-02-19T09:43:14.773Z · LW(p) · GW(p)

Under recent historical conditions in the West, these kids were mostly "pre-gay" rather than trans. (The stereotype about lesbians being masculine and gay men being feminine is, like most stereotypes, basically true: the difference in sex-atypical childhood behavior between gay and straight adults has been meta-analyzed at Cohen's d ≈ 1.31 standard deviations for men and d ≈ 0.96 for women.) A solid majority of children diagnosed with gender dysphoria ended up growing out of it by puberty. In the culture of the current year, it seemed likely that a lot of those kids would instead get affirmed into a cross-sex identity at a young age, even though most of them would have otherwise (under a "watchful waiting" protocol) grown up to be ordinary gay men and lesbians.

What made this shift in norms crazy, in my view, was not just that transitioning younger children is a dubious treatment decision, but that it's a dubious treatment decision that was being made on the basis of the obvious falsehood that "trans" was one thing: the cultural phenomenon of "trans kids" was being used to legitimize trans adults, even though a supermajority of trans adults were in the late-onset taxon and therefore had never resembled these HSTS-taxon kids. That is: pre-gay kids in our Society are being sterilized in order to affirm the narcissistic delusions[29] of guys like me.

I definitely want to think more about this, and my views are provisional.

But if this basic story is true, it sure changes my attitude towards childhood gender-transitions!

comment by Eli Tyre (elityre) · 2024-02-19T09:05:35.610Z · LW(p) · GW(p)

I was skeptical of the claim that no one was "really" being kept ignorant. If you're sufficiently clever and careful and you remember how language worked when Airstrip One was still Britain, then you can still think, internally, and express yourself as best you can in Newspeak. But a culture in which Newspeak is mandatory, and all of Oceania's best philosophers have clever arguments for why Newspeak doesn't distort people's beliefs, doesn't seem like a culture that could solve AI alignment.

Hm. Is it a crux for you if language retains the categories of "transwoman" and "cis woman" in addition to the (now corrupted, in your view) general category of "woman"?

I guess not, but I'm not totally sure what your reason for why not would be.

...or maybe you're mainly like "it's fucked up that this particular empirical question propagated so far back into our epistemology that it caused Scott and Eliezer to get a general philosophical question wrong."

That does seem to me like the most concerning thing about this whole situation, if that is indeed what happened. 

comment by Eli Tyre (elityre) · 2024-02-18T08:11:33.828Z · LW(p) · GW(p)

And as it happened, on 7 May 2019, Kelsey wrote a Facebook comment displaying evidence of understanding my thesis.

This link is dead?

comment by philh · 2024-01-07T16:47:17.365Z · LW(p) · GW(p)

But ... "I thought X seemed Y to me"[20] and "X is Y" do not mean the same thing!

And it seems to me that in the type of comment Eliezer's referring to, "X seemed stupid to me" is more often correct than "X was stupid".

Argument for this: it's unlikely that someone would say "X seemed stupid to me" if X actually didn't seem stupid to them, so it's almost always true when said; whereas I think it's quite common to misjudge whether X was actually stupid.

("X was stupid, they should have just used the grabthar device." / "Did you miss the part three chapters back [published eight months ago] where that got disabled?")

So we might expect that "more often true ⇒ less information content". We could rewrite "X was stupid" to "this story contained the letter E" and that would more often be true, too. (The sketch below this list puts rough numbers on that intuition.) But I don't think that holds, because

  • "X seemed stupid" is not almost-always true, unlike "this story contained the letter E";
  • But if someone said "X was stupid" I think it's almost-always also the case that X seemed stupid to them;
  • And in fact people don't reliably track this distinction.
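
(To put rough, made-up numbers on the "more often true ⇒ less information content" intuition, treating a statement's information content as the surprisal of its being true:)

    import math

    def bits(p_true: float) -> float:
        """Surprisal in bits: how much learning the statement is true tells you."""
        return -math.log2(p_true)

    print(bits(0.95))    # "X seemed stupid to me": ~0.07 bits (almost always true)
    print(bits(0.60))    # "X was stupid": ~0.74 bits (riskier, says more)
    print(bits(0.9999))  # "this story contained the letter E": ~0.0001 bits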

I think people track it more than zero, to be clear. But if I see someone say "X was stupid", two prominent hypotheses are:

  1. This person reliably tracks the distinction between "X was stupid" and "X seemed stupid", and in this case they have sufficient confidence to make the stronger claim.
  2. This person does not reliably track that distinction.

And even on LessWrong, (2) is sufficiently common that in practice I often just rewrite the was-claim to the seemed-claim in my head.

(Actually, I think I'm imperfect at this. I think as a rule of thumb, the "was" claim updates me further than is warranted in the direction that X was stupid. My guess is that this kind of failure is pretty common. But that's separate from a claim about information content of people's words.)

So I think Eliezer is giving good advice for "how to be good at saying true and informative things", as well as good advice for "how to discuss an author's work in a way that leaves them motivated to keep writing".

Replies from: Zack_M_Davis, martin-randall
comment by Zack_M_Davis · 2024-01-14T22:57:52.963Z · LW(p) · GW(p)

I agree that "seems to me" statements are more likely to be true than the corresponding unqualified claims, but they're also about a less interesting subject matter (which is not quite the same thing as "less information content"). You probably don't care about how it seems to me; you care about how it is.

Replies from: philh
comment by philh · 2024-01-15T11:49:14.993Z · LW(p) · GW(p)

You probably don’t care about how it seems to me; you care about how it is.

Indeed, and as I argued above, a person who reliably tracks the distinction between what-is and what-seems-to-them tells me more about what-is than a person who doesn't.

I mean, I suppose that if someone happened to know that the dress was blue, and told me "the dress looks white to me" without saying "...but it's actually blue", that would be misleading on the subject of the color of the dress. But I think less misleading, and a less common failure mode, than a person who doesn't know that the dress is blue, who tells me "the dress is white" because that's how it looks to them.

I mean, in the specific case of the colors of objects in photographs, I think correspondence between what-is and what-seems is sufficiently high not to worry about it most of the time. The dress was famous in part because it's unusual. If you know that different people see the dress as different colors, and you don't know what's going on, then (according to me and, I claim, according to sensible rationalist discourse norms) you should say "it looks white to me" rather than "it's white". But if you have no reason to think there's anything unusual about this particular photograph of a dress that looks white to you, then whatever.

But I think this correspondence is significantly lower between "X was stupid" and "X seemed stupid". And so in this case, it seems to me that being careful to make the distinction:

  • Makes you better at saying true things;
  • Increases the information content of your words, on both the subjects what-is and what-seems-to-you;
  • Is kinder to authors.
Replies from: philh
comment by philh · 2024-01-15T18:43:45.708Z · LW(p) · GW(p)

Hm, I think I'm maybe somewhat equivocating between "the dress looks blue to me" as a statement about my state of mind and as a statement about the dress.

Like I think this distinction could be unpacked and it would be fine, I'd still endorse what I'm getting at above. But I haven't unpacked it as much as would be good.

comment by Martin Randall (martin-randall) · 2024-01-15T12:49:54.442Z · LW(p) · GW(p)

Edited to add: this is my opinion regarding media criticism, not in general, apologies for any confusion.

To me, the difference between "x is y" and "x seems y" and "x seems y to me" and "I think x seems y to me" and "mileage varies, I think x seems y to me" and the many variations of that is:

  • Expressing probabilities or confidence intervals
  • Acknowledging (or changing) social reality
  • Acknowledging (or changing) power dynamics / status

In the specific case of responses to fiction there is no base reality, so we can't write "x is y" and mean it literally. All these things are about how the fictional character seems. Still, I would write "Luke is a Jedi" not "Luke seems to be a Jedi".

I read the quoted portion of Yudkowsky's comment as requiring/encouraging negative literary criticism to express low confidence, to disclaim attempts to change social reality, and to express low status.

Replies from: philh
comment by philh · 2024-01-15T13:50:07.538Z · LW(p) · GW(p)

Two differences I think you're missing:

  • "seems to me" suggests inside view, "is" suggests outside view.
  • "seems to me" gestures vaguely at my model, "is" doesn't. This is clearer with the dress; if I think it's blue, "it looks blue to me" tells you why I think that, while "it's blue" doesn't distinguish between "I looked at the photo" and "I read about it on wikipedia and apparently someone tracked down the original dress and it was blue". With "X seemed stupid to me", it's a vaguer gesture, but I think something like "this was my gut reaction, maybe I thought about it for a few minutes". (If someone has spoken with the author and the author agrees "oops yeah that was stupid of X, they should instead have...", then "X was stupid" seems a lot more justifiable to me.)

In the specific case of responses to fiction there is no base reality, so we can’t write “x is y” and mean it literally. All these things are about how the fictional character seems. Still, I would write “Luke is a Jedi” not “Luke seems to be a Jedi”.

Eh... so I don't claim to fully understand what's going on when we talk about fictional universes. But still, I'm comfortable with "Luke is a Jedi", and I think it's importantly different from, say, "Yoda is wise" or "the Death Star is indestructible" or "the Emperor has been defeated once and for all".

And I think the ways it's different are similar to the differences between claims about base-level reality like "Tim Cook is a CEO" versus "the Dalai Lama is wise" or "the Titanic is unsinkable" or "Napoleon has been defeated once and for all".

Replies from: martin-randall
comment by Martin Randall (martin-randall) · 2024-01-16T03:09:13.103Z · LW(p) · GW(p)

Thanks for replying. I'm going to leave aside non-fictional examples ("The Dress") because I intended to discuss literary criticism.

"seems to me" suggests inside view, "is" suggests outside view.

I'm not sure exactly what you mean; see Taboo "Outside View" [LW · GW]. My best guess is that you mean that "X seems Y to me" implies my independent impression, not deferring to the views of others, whereas "X is Y" doesn't.

If so, I don't think I am missing this. I think that "seems to me" allows for a different social reality (others say that X is NOT Y, but my independent impression is that X is Y), whereas "is" implies a shared social reality (others say that X is Y, I agree), and can be an attempt to change or create social reality (I say "X is Y", others agree, and it becomes the new social reality).

"seems to me" gestures vaguely at my model, "is" doesn't. ... With "X seemed stupid to me", it's a vaguer gesture, but I think something like "this was my gut reaction, maybe I thought about it for a few minutes".

Again, I don't think I am missing this. I agree that "X seems Y to me" implies something like a gut reaction or a hot take. I think this is because "X seems Y to me" expresses lower confidence than "X is Y", and someone reporting a gut reaction or a hot take would have lower confidence than someone who has studied the text at length and sought input from other authorities. Similarly gesturing vaguely at the map/territory distinction implies that the distinction is relevant because the map may be in error.

I think Eliezer is giving good advice for "how to be good at saying true and informative things",

Well, that isn't his stated goal. I concede that Yudkowsky makes this argument under "criticism easily goes wrong", but like Zack I notice that he only applies this argument in one direction. Yudkowsky doesn't advise critics to say: "mileage varied, I thought character X seemed clever to me", he doesn't say "please don't tell me what good things the author was thinking unless the author plainly came out and said so". Given the one-sided application of the advice, I don't take it very seriously.

Also, I've read some Yudkowsky. Here is a Yudkowsky book review, excerpted from You're Calling Who A Cult Leader? [LW · GW] from 2009.

"Gödel, Escher, Bach" by Douglas R. Hofstadter is the most awesome book that I have ever read. If there is one book that emphasizes the tragedy of Death, it is this book, because it's terrible that so many people have died without reading it.

I claim that this text would not be more true and informative with "mileage varies, I think x seems y to me". What do you think?

Replies from: philh
comment by philh · 2024-01-16T11:22:02.763Z · LW(p) · GW(p)

Thanks for replying. I’m going to leave aside non-fictional examples (“The Dress”) because I intended to discuss literary criticism.

So uh. Fair enough but I don't think anything else in your comment hinged on examples being drawn from literary criticism rather than reality? And I like the dress as an example a lot, so I think I'm gonna keep using it.

I’m not sure exactly what you mean, see Taboo “Outside View” [LW · GW]. My best guess is that you mean that “X seems Y to me” implies my independent impression, not deferring to the views of others, whereas “X is Y” doesn’t.

From a quick skim, I'd say many of the things in both the inside-view and outside-view lists there could fit. Like if I say "the dress looks white to me but I think it's actually blue", some ways this could fit inside/outside view:

  • Inside is one model available to me (visual appearance), outside is all-things-considered (wikipedia).
  • Inside is my personal guess, outside is taking a poll (most people think it's blue, they're probably right).
  • Inside is my initial guess, outside is reference class forecasting (I have a weird visual processing bug and most things that look white to me turn out to be blue).

If so, I don’t think I am missing this.

I don't really know how to reply to this, because it seems to me that you listed "acknowledging or changing social reality", I said "I think you're missing inside versus outside view", and you're saying "I don't think I am missing that" and elaborating on the social reality thing. I claim the two are different, and if they seem the same to you, I don't really know where to proceed from there.

Again, I don’t think I am missing this. I agree that “X seems Y to me” implies something like a gut reaction or a hot take. I think this is because “X seems Y to me” expresses lower confidence than “X is Y”, and someone reporting a gut reaction or a hot take would have lower confidence than someone who has studied the text at length and sought input from other authorities.

I think you have causality backwards here. I'd buy "it seems low confidence because it suggests a gut reaction" (though I'm not gonna rule out that there's more going on). I don't buy "it suggests a gut reaction because it seems low confidence".

So I claim the gut-reaction thing is more specific than the low-confidence thing.

Well, that isn’t his stated goal.

Right. Very loosely speaking, Eliezer said to do it because it was kind to authors; Zack objected because it was opposed to truth; I replied that in fact it's pro-truth. (And as you point out, Eliezer had already explained that it's pro-truth, differently but compatibly with my own explanation.)

Yudkowsky doesn’t advise critics to say: “mileage varied, I thought character X seemed clever to me”, he doesn’t say “please don’t tell me what good things the author was thinking unless the author plainly came out and said so”.

Well, I can't speak for Eliezer, and what Eliezer thinks is less important than what's true. For myself, I think both of those would be good advice for the purpose of saying true and informative things; neutral advice for the purpose of being kind to authors.

Given the one-sided application of the advice, I don’t take it very seriously.

I'm not sure what you mean by not taking it very seriously.

Applying a rule in one situation is either good advice for some purpose, or it's not. Applying a rule in another situation is either good advice for some purpose, or it's not. If someone advises applying the rule in one situation, and says nothing about another situation... so what?

My vague sense here is that you think he has hidden motives? Like "the fact that he advises it in this situation and not that situation tells us... something"? But:

  • I don't think his motives are hidden. He's pretty explicitly talking about how to be kind to authors, and the rule helps that purpose more in one situation than another.
  • You can just decide for yourself what your purposes are and whether it's good advice for them in any given situation. If he makes arguments that are only relevant to purposes you don't share, you can ignore them. If he makes bad arguments you can point them out and/or ignore them. If he makes good arguments that generalize further than he takes them, in ways that you endorse but you think he wouldn't, you can follow the generalization anyway.

I claim that this text would not be more true and informative with “mileage varies, I think x seems y to me”. What do you think?

Eliezer described it as his opinion before saying it, and to me that does the same work.

If it weren't flagged as opinion, then yes, I think a "seems" or "to me" or something would make it slightly more true and informative. Not loads in this case - "awesome" and "terrible" are already very subjective words, unlike "blue" or "indestructible".


This feels like the type of conversation that takes a lot of time and doesn't help anyone much. So after this I'm gonna try to limit myself to two more effortful replies to you in this thread.

Replies from: martin-randall
comment by Martin Randall (martin-randall) · 2024-01-18T04:58:30.205Z · LW(p) · GW(p)

My vague sense here is that you think he has hidden motives?

Absolutely not, his motive (how to be kind to authors) is clear. I think he is using the argument as a soldier [? · GW]. Unlike Zack, I'm fine with that in this case.

This feels like the type of conversation that takes a lot of time and doesn't help anyone much.

I endorse that. I'll edit my grandparent post to explicitly focus on literary/media criticism. I think my failure to do so got the discussion off-track and I'm sorry. You mention that "awesome" and "terrible" are very subjective words, unlike "blue", and this is relevant. I agree. Similarly, media criticism is very subjective, unlike dress colors.

Replies from: philh
comment by philh · 2024-01-19T22:30:52.402Z · LW(p) · GW(p)

I think he is using the argument as a soldier.

I see. That's not a sense I pick up on myself, but I suppose it's not worth litigating.

To be clear, skimming my previous posts, I don't see anything that I don't endorse when it comes to literary criticism. Like, if I've said something that you agree with most of the time, but disagree with for literary criticism, then we likely disagree. (Though of course there may be subtleties e.g. in the way that I think something applies when the topic is literary criticism.)

You mention that “awesome” and “terrible” are very subjective words, unlike “blue”, and this is relevant. I agree. Similarly, media criticism is very subjective, unlike dress colors.

Media criticism can be very subjective, but it doesn't have to be. "I love Star Wars" is more subjective than "Star Wars is great" is more subjective than "Star Wars is a technical masterpiece of the art of filmmaking" is more subjective than "Star Wars is a book about a young boy who goes to wizard school". And as I said above:

I’m comfortable with “Luke is a Jedi”, and I think it’s importantly different from, say, “Yoda is wise” or “the Death Star is indestructible” or “the Emperor has been defeated once and for all”.

And I think the ways it’s different are similar to the differences between claims about base-level reality like “Tim Cook is a CEO” versus “the Dalai Lama is wise” or “the Titanic is unsinkable” or “Napoleon has been defeated once and for all”.

comment by Eli Tyre (elityre) · 2024-02-19T08:55:44.888Z · LW(p) · GW(p)

He asked for a specific example. ("Trans women are women, therefore trans women have uteruses" being a bad example, because no one was claiming that.) I quoted an article from The Nation: "There is another argument against allowing trans athletes to compete with cis-gender athletes that suggests that their presence hurts cis-women and cis-girls. But this line of thought doesn't acknowledge that trans women are in fact women." Scott agreed that this was stupid and wrong and a natural consequence of letting people use language the way he was suggesting (!).

I wonder if the crux here is that Scott keeps thinking of the question as "what words should we use to describe things" and not "what internal categories should I use"?

Like, I could imagine thinking "It's not really a problem / not that bad to say that transwomen are women, because I happen to have the category of 'transwomen' and so can keep track of the ways in which transwomen, on average, are different from cis women. Given that I'll be able to track the details of the world one way or the other, it's a pragmatic question of whether we should call transwomen women, and it seems like it's an overall pretty good choice on utilitarian grounds."

Replies from: elityre
comment by Eli Tyre (elityre) · 2024-02-19T11:46:46.491Z · LW(p) · GW(p)

Or to say it differently: we can unload some-to-most of the content of the word woman (however much of it doesn't apply to transwomen) onto the word "cis-woman", and call it a day. The "woman" category becomes proportionally less useful, but it's mostly fine because we still have the expressiveness to say everything we might want to say. 

Replies from: Richard_Kennaway, frontier64
comment by Richard_Kennaway · 2024-02-19T12:09:32.328Z · LW(p) · GW(p)

Then they will come for the words "cis-woman" and "trans-woman" and say that it's oppressive to make a distinction.

You can't win a conflict by surrendering.

Replies from: elityre
comment by Eli Tyre (elityre) · 2024-02-19T18:20:42.974Z · LW(p) · GW(p)

Fair enough, but is that a crux for you, or for Zack?

If you knew there wasn't a slippery slope here, would this matter?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2024-02-19T20:49:42.379Z · LW(p) · GW(p)

I believe there is a blatant slippery slope there, and redefining "woman" is not so much a step onto it as jumping into a toboggan, so I see no point in considering a hypothetical world in which somehow, magically [LW · GW], there wasn't.

comment by frontier64 · 2024-02-19T18:35:10.946Z · LW(p) · GW(p)

I don’t think that solution accomplishes anything, because the trans goal is to pretend to be women and the anti-trans goal is to not allow trans women to be called women. The proposed solution doesn’t get anybody closer to their goals.

comment by Eli Tyre (elityre) · 2024-02-19T08:01:22.973Z · LW(p) · GW(p)

It might seem like a little thing of no significance—requiring "I" statements is commonplace in therapy groups and corporate sensitivity training—but this little thing coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space made a pattern click. If everyone is forced to only make claims about their map ("I think", "I feel") and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because disagreement is disrespect), that's great for reducing social conflict but not for the kind of collective information processing that accomplishes cognitive work,[21] like good literary criticism. A rationalist space needs to be able to talk about the territory.

I strongly disagree with the bolded text. It often helps a lot to say phrases like "on my model" or "as I see it", because it emphasizes the difference between my map and the territory, even though I'm implicitly/explicitly claiming that my map models the territory.

This is helpful for a bunch of human psychological reasons, but one is that humans often feel social pressure to overwrite their own models or impressions with the model of whoever is speaking confidently. In most parts of the world, stating something with confidence is not just a claim about truth values to be disputed; it's a social bid (sometimes a social threat) for others to treat what we're saying as true. That's very bad for collective epistemology!

Many of us rationalist-types (especially, in practice, males) have a social aura of confidence that fucks with other people's epistemology. (I've been on both sides of that dynamic, depending on who I'm talking to.)

By making these sorts of declarations where we emphasize that our maps are not the territory, we make space for others to have their own differing impressions and views, which means that I am able to learn more from them.

Now, it's totally fine to say "well, people shouldn't be like that." But we already knew we were dealing with corrupted hardware riddled with biases. The question is how we can make use of the faculties we actually have at our disposal to cobble together effective epistemic processes (individual and collective) anyway.

And it turns out that being straightforward about what you think is true at the content level (not dissembling), while also adopting practices and norms that attend to people's emotional and social experience, works better than ignoring the social dimension and trying to just focus on the content.

...Or that's my understanding anyway. 

See for instance: https://musingsandroughdrafts.com/2018/12/24/using-the-facilitator-to-make-sure-that-each-persons-point-is-held/

comment by frontier64 · 2024-01-04T19:16:59.243Z · LW(p) · GW(p)

My takeaway is that you've discovered there are bad actors who claim to support rationality and truth, but also blatantly lie and become political soldiers when it comes to trans issues. If this is true, why continue to engage with them? Why try to convince them with rationality on that same topic where you acknowledge that they are operating as soldiers instead of scouts?

If 2019-era "rationalists" were going to commit an epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, and they couldn't correct the mistake even after it was pointed out, then the "rationalists" were worse than useless to me.

You shouldn't cling to the idea that the disagreement is due to a mistake when evidence suggests it's a value conflict.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2024-01-04T21:12:05.963Z · LW(p) · GW(p)

If this is true, why continue to engage with them? Why try to convince them with rationality on that same topic where you acknowledge that they are operating as soldiers instead of scouts?

I think the point is that Zack isn’t continuing to engage with them. Indeed, isn’t this post (and the whole series of which it is a part) basically an announcement that the engagement is at an end, and an explanation of why that is?

Replies from: frontier64
comment by frontier64 · 2024-01-05T18:39:27.246Z · LW(p) · GW(p)

I'm too dumb to understand whether or not Zack's post disclaims continued engagement. He continues to respond to proponents of the sort of transideology he writes about, so he's engaging at least that much. Also, just writing all this is a form of engagement.

comment by Eli Tyre (elityre) · 2024-02-19T09:52:16.163Z · LW(p) · GW(p)

In the skeptic's view, if you're not going to change the kid's diet on the basis of the second part, you shouldn't social transition the kid on the basis of the first part.

I think I probably would change the kid's diet?? Or at least talk with them further about it, and if their preference was robust, help them change their diet.

comment by Eli Tyre (elityre) · 2024-02-19T09:44:48.013Z · LW(p) · GW(p)

But if the grown-ups have been trained to believe that "trans kids know who they are"—if they're emotionally eager at the prospect of having a transgender child, or fearful of the damage they might do by not affirming—they might selectively attend to confirming evidence that the child "is trans", selectively ignore contrary evidence that the child "is cis", and end up reinforcing a cross-sex identity that would not have existed if not for their belief in it—a belief that the same people raising the same child ten years ago wouldn't have held. (A September 2013 article in The Atlantic by the father of a male child with stereotypically feminine interests was titled "My Son Wears Dresses; Get Over It", not "My Daughter Is Trans; Get Over It".)

Wow. This is a horrifying thought.

comment by Eli Tyre (elityre) · 2024-02-19T08:19:03.566Z · LW(p) · GW(p)

messy evolved animal brains don't track probability and utility separately the way a cleanly-designed AI could.

Side-note: a cleanly designed AI could do this, but it isn't obvious to me that this is actually the optimal design choice. Insofar as the agent is ultimately optimizing for utility, you might want epistemology to be shaped according to considerations of valence (relevance to goals) up and down the stack. You pay attention to, and form concepts about, things in proportion to their utility-relevance.

comment by philh · 2024-01-07T14:50:49.851Z · LW(p) · GW(p)

I have an inalienable right to talk about my own research interests, and talking about my own research interests obviously doesn't violate any norm against leaking private information about someone else's family, or criticizing someone else's parenting decisions.

I think you're violating a norm against criticizing someone's parenting decisions, to the extent that readers know whose decisions they are. I happen to know the answer, and I guess a significant number but far from a majority of readers also know. Which also means the parent or parents in question can't easily reply without deanonymizing themselves, which is awkward.

This isn't to take a stance on what you have a right to do or should have done. But I think it's false to say that you obviously haven't violated the norms you mentioned.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2024-01-14T23:33:40.781Z · LW(p) · GW(p)

If that section were based on a real case, I would have cleared it with the parents before publishing. (Cleared in the sense of "I can publish this without it affecting the terms of our friendship", not in the sense of their agreeing with it.)

Replies from: philh
comment by philh · 2024-01-15T13:10:58.235Z · LW(p) · GW(p)

Nod, in that hypothetical I think you would have done nothing wrong.

I think the "obviously" is still false. Or, I guess there are four ways we might read this:

  1. "It is obvious to me, and should be obvious to you, that in general, talking about my own research interests does not violate these norms": I disagree, in general it can violate them.

  2. "It is obvious to me, but not necessarily to you, that in general...": I disagree for the same reason.

  3. "It is obvious to me, and should be obvious to you, that in this specific case, talking about my own research interests does not violate these norms": it's not obvious to the reader based on the information presented in the post.

  4. "It is obvious to me, but not necessarily to you, that in this specific case...": okay sure.

To me (1) is the most natural and (4) is the least natural reading, but I suppose you might have meant (4).

...not that this particularly matters. But it does seem to me like an example of you failing to track the distinction between what-is and what-seems-to-you, relevant to our other thread [LW(p) · GW(p)] here.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2024-01-16T04:32:21.155Z · LW(p) · GW(p)

Alternatively,

  1. "My claim to 'obviously' not being violating any norms is deliberate irony which I expect most readers to be able to pick up on given the discussion at the start of the section about how people who want to reveal information are in an adversarial relationship to norms for concealing information; I'm aware that readers who don't pick up on the irony will be deceived, but I'm willing to risk that"?
Replies from: philh
comment by philh · 2024-01-16T09:20:20.267Z · LW(p) · GW(p)

Fair enough! I did indeed miss that.

comment by PhilosophicalSoul (LiamLaw) · 2023-12-31T07:28:10.942Z · LW(p) · GW(p)

"—but if one hundred thousand [normies] can turn up, to show their support for the [rationalist] community, why can't you?"

I said wearily, "Because every time I hear the word community, I know I'm being manipulated. If there is such a thing as the [rationalist] community, I'm certainly not a part of it. As it happens, I don't want to spend my life watching [rationalist and effective altruist] television channels, using [rationalist and effective altruist] news systems ... or going to [rationalist and effective altruist] street parades. It's all so ... proprietary. You'd think there was a multinational corporation who had the franchise rights on [truth and goodness]. And if you don't market the product their way, you're some kind of second-class, inferior, bootleg, unauthorized [nerd]."

—"Cocoon" by Greg Egan (paraphrased)[1]


I don't think this applies to rationalism. It's not an ideology or an ethical theory. Rationalism (at least to me, as an outside party to all this drama) is exigent to people's beliefs, and this community is just refining how to describe and use better objective principles of reality. Edit: I agree with the general idea that psychospheres and the words related to them can act as meaningful keys of meaning, even in rationalist circles. Respect to Zack in this case.

Aside, I also think you've suffered what I call the aesthetic death. Too much to explain in a comment section. However, I'll briefly say: it's getting yourself wound up in a narrative psychosphere in which you serve archetypes like 'hero' and 'martyr'. I think this serves a purpose when it comes to achieving some greater goal, and helping you with morale. I do not think this post serves some greater goal (if it does, like many others in this comment section, I am confused.) this bit's been retracted after reading the below comment.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-31T08:09:10.457Z · LW(p) · GW(p)

I do not think this post serves some greater goal (if it does, like many others in this comment section, I am confused)

(I'll try to explain as best I understand, but some of it may not be exactly right)

The goal of this post is to tell the story of Zack's project (which also serves the project). The goal of Zack's project is best described by the title of his previous post - he's creating a Hill of Validity in Defense of Meaning [LW · GW].

Rationalists strive to be consistent, take ideas seriously, and propagate our beliefs, which means a fundamental belief about the meaning of words will affect everything we think about; if it's wrong, it will eventually make us wrong about many things.

Zack saw Scott and Eliezer, the two highest status people in this group/community, plus many others, make such a mistake. With Eliezer it was "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.". With Scott it was "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life.".

This was relevant to questions about trans issues, which Zack cares a lot about, so he made a bunch of posts arguing against these propositions. The reason it didn't remain a mere philosophy-of-language debate is that it bumped into the politics of the trans debate. Seeing the political influence made Zack lose faith in the rationalist community, and warranted a post about people instead of just about ideas.

Replies from: LiamLaw
comment by PhilosophicalSoul (LiamLaw) · 2023-12-31T16:20:05.052Z · LW(p) · GW(p)

Thank you so much for this explanation. Through this lens, this post makes a lot more sense; a meaningful aesthetic death then.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-31T16:21:48.330Z · LW(p) · GW(p)

I don't know what you mean by aesthetic death, but I'm glad to help :)

comment by Chris_Leong · 2023-12-31T16:50:12.239Z · LW(p) · GW(p)

I don’t know, man, it really seems to me that Eliezer was quite clear in Politics is the Mind-Killer that we couldn’t expect our rationality skills to be as helpful in determining truth in politics.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-31T17:14:52.619Z · LW(p) · GW(p)

He didn't say anything like that in Politics is the Mind-Killer [LW · GW], quite the contrary:

"Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational."

"I’m not saying that I think we should be apolitical"

The main point of the post was to not shove politics where it's unnecessary, because it can have all these bad effects. I expect Eliezer agrees far more with the idea that Politics is hard mode [LW · GW], than the idea that "we couldn’t expect our rationality skills to be as helpful in determining truth in politics".

Replies from: Chris_Leong
comment by Chris_Leong · 2024-01-01T06:15:52.422Z · LW(p) · GW(p)

Thanks for sharing.

Maybe I should have spoken more precisely. He wasn't telling individuals to be apolitical. It's more that he didn't think it was a good idea to center the rationalist community around politics, as it would interfere with the rationalist project. I.e., that even with our community striving to improve our rationality, it'd still be beyond us to bring in discussions of politics without corrupting our epistemology.

So when I said "we couldn’t expect our rationality skills to be as helpful in determining truth in politics", I was actually primarily talking about the process of a community attempting to converge on the truth rather than an individual.

comment by sapphire (deluks917) · 2023-12-30T19:10:15.025Z · LW(p) · GW(p)

Rationality is Winning. Rationality is not about becoming obsessed with this stuff, losing sleep, losing friends, and literally going insane. This is a real lesson, and we should be emphasizing "Rationality is Winning" way more.

Replies from: tailcalled, Kalciphoz, None
comment by tailcalled · 2023-12-30T23:02:44.885Z · LW(p) · GW(p)

Here you're appealing to winning on an individual level, which creates coordination problems. If Zack is doing something wrong because he is losing at an individual level, then sufficiently powerful coalitions get to control what is right or wrong by controlling the individual incentives, which seems like A Problem.

If we think Zack has a point on the object level, but some force is preventing him from winning, then it seems logical for rationalists to coordinate to help him win. If we think Zack is wrong on the object level, then it seems like it would be more appropriate to explain to him his mistake on the object level, rather than to appeal to the political challenges he faces.

Replies from: deluks917
comment by sapphire (deluks917) · 2023-12-31T00:47:22.410Z · LW(p) · GW(p)

I don't think we should help him convince other people of a position that seems to have driven him kinda insane.

It is also kind of funny to me that the post references clarity in the title but I honestly don't even know what Zack thinks about when people should transition. To be clear, I think we should be supportive of people who transition. And people should transition iff they think it will make them happier. But whatever the best practical policies are, I seriously doubt Zack's philosophical point of view is going to be prudent to promote or adopt.

Replies from: SaidAchmiz, tailcalled
comment by Said Achmiz (SaidAchmiz) · 2023-12-31T00:54:10.647Z · LW(p) · GW(p)

It is also kind of funny to me that the post references clarity in the title but I honestly don’t even know what Zack thinks about when people should transition.

Zack actually has a post which addresses this sort of question quite directly:

http://unremediatedgender.space/2021/Sep/i-dont-do-policy/

comment by tailcalled · 2023-12-31T03:12:22.526Z · LW(p) · GW(p)

Not sure what "a position" is referring to. Do you mean his beliefs about categorization? His distrust of rationalists? I think lots of people agree with both of these without obsessively writing blog posts and losing sleep, so I don't think you can attribute his problems solely to this.

comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-02T21:06:31.364Z · LW(p) · GW(p)

The insanity is more reasonably attributed to being met with constant abuse (which your comment is ostensibly an example of) than to his positions on epistemology or the ontology of gender. Also, Zack has already explained that he has something to protect, which is existentially threatened by his detractors. The implication of your sentiment seems to be that he should simply give up on what is precious to him and pick the winning side. This is not the standard you would be applying if you were engaging in good faith.

Replies from: tailcalled, deluks917
comment by tailcalled · 2024-01-03T11:49:50.620Z · LW(p) · GW(p)

I'm not sure "constant abuse" is accurate. Zack's interlocutors seem to vary from genuinely abusive (arguably applicable to sapphire's comment) to locally supportive to locally wrong to locally corrective, but most significantly his interlocutors seem unstructured and unproductive for the conversation.

I'd guess that the unstructuredness and unproductiveness are partly because they're not really paying attention to the subject, but also to a significant extent because there are some genuinely confusing aspects to Zack's position, due to a combination of bad communication and Extremely Bad Takes that haven't been corrected yet. It's not abusive to be genuinely confused.

(To an extent, these Extremely Bad Takes actually overlap with his position on epistemology/ontology. He tends to take categories as formative, based on models like PCA, which in turn makes it challenging to make sensible descriptions like "biological sex is binary because chromosomes are binary, XX vs XY". This is tricky to fix partly because the sequences also take a position like this, so correcting it would require walking back on significant parts of the sequences and rationalist epistemology.)

That said, I don't know whether fixing Zack's bad communication/Bad Takes would fix the conflict. I guess it could make it worse, by making it easier for aggressive activists to know what to attack. But it seems to me like even that could generate less mental illness, as it could be less ambiguous that what is left is simple conflict rather than Zack genuinely being importantly mistaken.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-04T00:34:24.433Z · LW(p) · GW(p)

I'm not sure "constant abuse" is accurate. Zack's interlocutors seem to vary from genuinely abusive (arguably applicable to sapphire's comment) to locally supportive to locally wrong to locally corrective, but most significantly his interlocutors seem unstructured and unproductive for the conversation.

I did not say that his critics are uniformly abusive, merely that he is being met with constant abuse. This can still be the case even if only some of his interlocutors are abusive. I think "constant abuse" is a fitting description of the experiences recounted in Zack's post, not to mention that it seems amply justified by simply looking at this comment section.

It's not abusive to be genuinely confused.

As you say, there are aspects that they may legitimately be confused about, but those do not cover the whole of the issue, and even these do not justify the weaponisation of that confusion, which seems to have become a favourite tactic of his more toxic detractors. Their tactics seem to include:

  • Obfuscate endlessly to force Zack to revisit basic principles that were previously noncontroversial, then blame him for the added complexity
  • Declare imperiously that Zack and/or his supporters are being incoherent and poorly reasoned without even bothering to make actual counterarguments
  • Blame him for not being interesting enough

That said, I don't know whether fixing Zack's bad communication/Bad Takes would fix the conflict.

It quite clearly wouldn't. The abuse he is being met with comes from people having glimpses of the politically incorrect aspect of his positions, not from bad takes, which the abusers themselves freely engage in and take issue with only when the outgroup does it.

But it seems to me like even that could generate less mental illness, as it could be less ambiguous that what is left is simple conflict rather than Zack genuinely being importantly mistaken.

That is already quite unambiguous. LessWrongers do not behave this way when it comes to non-political topics[1], even if they deem someone to be seriously mistaken. Any such ambiguity is purely the result of motivated reasoning, or more specifically: their habitual tactic of weaponising confusion.

  1. ^

    I am including the controversy surrounding Duncan Sabien and Said Achmiz as political due to the centrality of LessWrong moderation policy to the dispute.

Replies from: tailcalled
comment by tailcalled · 2024-01-04T12:44:41.240Z · LW(p) · GW(p)

I did not say that his critics are uniformly abusive, merely that he is being met with constant abuse. This can still be the case even if only some of his interlocutors are abusive. I think "constant abuse" is a fitting description of the experiences recounted in Zack's post, not to mention that it seems amply justified by simply looking at this comment section.

"Constant" implies some notion of uniformity, though, doesn't it? Not necessarily across critics as it could also be e.g. across time, but it seems like we should have constancy across some axis in order for it to be constant.

As you say, there are aspects that they may legitimately be confused about, but those do not cover the whole of the issue, and even these do not justify the weaponisation of that confusion, which seems to have become a favourite tactic of his more toxic detractors. Their tactics seem to include:

  • Obfuscate endlessly to force Zack to revisit basic principles that were previously noncontroversial, then blame him for the added complexity
  • Declare imperiously that Zack and/or his supporters are being incoherent and poorly reasoned without even bothering to make actual counterarguments
  • Blame him for not being interesting enough

I'm not completely sure what you mean by "weaponization" of confusion. What I mean is that Zack's Ultimate Point is unclear. I think Ozy best communicated the feeling people who are confused about it have:

For someone so monomaniacally obsessed with how psychologically different he is from women, how he can never be a woman and never share a woman’s experiences and how every cell in his body and thought in his mind would have to be totally rewritten for him to approximate womanhood, Zack Davis is remarkably vague about what a woman is like. Indeed, a careless reader would easily be led to believe that the fundamental difference between men and women is that men are sometimes turned on by being women and women never are.

But I think this sort of echoes throughout a bunch of his writing. His standard response is to talk about multivariate group-discriminating axes (e.g. Mahalanobis D), but those axes just don't work the way he'd intuitively like them to work. The correct approach would IMO be to more clearly list what he is getting at, but for some reason he doesn't do this. Zack's interest in traits seems to start and end with a desire to Prove That Demographics Really Exist, which is kind of a weird way to treat something that is so central to this discussion.

That is already quite unambiguous. LessWrongers do not behave this way when it comes to non-political topics, even if they deem someone to be seriously mistaken. Any such ambiguity is purely the result of motivated reasoning, or more specifically: their habitual tactic of weaponising confusion.

LessWrongers may not behave this way with non-political topics, but do they behave this way with well-communicated political topics? It's definitely justified to hold politically sensitive discussion to higher standards than non-political discussion, so I don't think you can unambiguously attribute it solely to distortions due to the politics without also comparing to well-communicated political topics.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-04T14:57:43.286Z · LW(p) · GW(p)

"Constant" implies some notion of uniformity, though, doesn't it? Not necessarily across critics as it could also be e.g. across time, but it seems like we should have constancy across some axis in order for it to be constant.

Yes, pretty much every time he makes a post on this topic, he is met with a barrage of abuse.

I'm not completely sure what you mean by "weaponization" of confusion.

There is this particular tactic I have seen from LessWrongers[1] and nowhere else. It consists of a catch-22:

  • if you make a simple informal point, eg. calling attention to something absurd and pointing out its absurdity, your argument will be criticised for being manipulative or consisting of baseless assertions, or perhaps your interlocutor will simply deny that you made an argument at all, and you will be called upon to formalise it more or in some other way make the argument more rigorous.
  • if you make a detailed point covering enough ground to address all the obfuscations and backtracking, then you will be accused of obfuscating, people will claim they are confused about what you mean, and they will blame you for the confusion, and still other people, believing themselves to be helpful mediators, will assert that your central point isn't clear.

This tactic is a "fully general counterargument", but also, either prong includes some amount of moral condemnation and/or ridicule for the person putting forward the argument. It is just about the single most toxic debate tactic I have ever seen anywhere, and if you call out some instance of it, your detractors will simply use this very same tactic to dismiss your calling it out.

Ten years ago, this community was a force for unusual levels of clarity and integrity. Now it seems to be a force for unusual levels of insanity and dishonesty, but because most people here seem to believe that dishonesty is always intentional, and that intent is always honest, they implicitly assume that it is impossible to be dishonest without being aware of it, and thus a lot of the worst offenders manage to convince themselves that they are perfectly or almost perfectly honest. By contrast, when people engage in similarly toxic flamewars on eg. twitter or reddit, they are at least usually not in deep denial about being eristic in their argumentation; they do not usually pride themselves on their good faith at the same time, and on that account they are still not quite as dishonest as many LessWrongers have become.

What I mean is that Zack's Ultimate Point is unclear.

Only because his critics insist on endless obfuscation.

LessWrongers may not behave this way with non-political topics, but do they behave this way with well-communicated political topics?

Yes, but in such cases they will also go into denial about those political thoughts being well-communicated.

It's definitely justified to hold politically sensitive discussion to higher standards than non-political discussion

I suggest these fine people start by holding their own political discussion to a higher standard, then.

  1. ^

    Intended here to include LessWrong-adjacent people like ACX'ers, EAs, etc.

Replies from: tailcalled, shankar-sivarajan
comment by tailcalled · 2024-01-04T16:11:49.316Z · LW(p) · GW(p)

There is this particular tactic I have seen from LessWrongers[1] [LW(p) · GW(p)] and nowhere else. It consists of a catch-22:

  • if you make a simple informal point, eg. calling attention to something absurd and pointing out its absurdity, your argument will be criticised for being manipulative or consisting of baseless assertions, or perhaps your interlocutor will simply deny that you made an argument at all, and you will be called upon to formalise it more or in some other way make the argument more rigorous.
  • if you make a detailed point covering enough ground to address all the obfuscations and backtracking, then you will be accused of obfuscating, people will claim they are confused about what you mean, and they will blame you for the confusion, and still other people, believing themselves to be helpful mediators, will assert that your central point isn't clear.

This tactic is a "fully general counterargument", but also, either prong includes some amount of moral condemnation and/or ridicule for the person putting forward the argument. It is just about the single most toxic debate tactic I have ever seen anywhere, and if you call out some instance of it, your detractors will simply use this very same tactic to dismiss your calling it out.

The thing is, this tactic needs the cooperation of both participants to work. If the participant getting attacked with the catch-22 just makes a clear description of the central point, and then writes a quick, clear answer to each sidetrack explaining how it is a sidetrack, it's easy to resist. See e.g. my discussion with Jiro and S. Verona Lišková here, which was easy enough to keep on track [LW(p) · GW(p)].

Only because his critics insist on endless obfuscation.

I disagree, because Zack's Ultimate Point is also somewhat unclear to me.

These days, Zack seems to be going back and forth between "I'm purely making a philosophical point about how categorization works" and "I'm purely trying to defend myself against people insisting I should transition". The latter seems somewhat implausible as a motivation, partly because if he would just shut up about the topic, nobody would be telling him to transition. The former is somewhat more believable, but still seems pretty dubious, considering that he also keeps bringing up autogynephilia.

If you look at his history, his original tagline was "LATE-ONSET GENDER DYSPHORIA IS NOT AN INTERSEX CONDITION, YOU LYING BASTARDS". He even got a shirt with that label - do you think he has a shirt saying "Categories should be made to minimize mean squared error"? So I think most people interpret his philosophy-of-language arguments to be making a point somewhere in the vicinity of the etiology of transness.

... I've come to suspect that he didn't really mean to make a point about the etiology of transness, but instead maybe a nearly-political point about disruptive transsexuality [LW(p) · GW(p)]? With etiology being more of an accident due to some combination of poor communication [LW(p) · GW(p)], deception about his point (he says he doesn't do policy, but that doesn't mean it's not the sub-subtext?? plausibly this deception in turn is caused by abuse/social pressure to support trans rights, but it's located in a different place than where you made it, and it makes the manipulation critique in the original catch-22 correct), maybe some pressure from me (which in retrospect was somewhat misguided if his goal wasn't actually etiology), and maybe also poorly-chosen priors (parsimony/sparsity/the assumption that there's not a lot of details going on so all these distinctions don't really matter).

(That said, if he was purely making a philosophical point about locally valid [LW · GW] types of reasoning for classification, then that would be OK. What I'm saying is that part of what shapes the conflict a lot is that people don't really believe that he is purely making a philosophical point about classification. Heck, it might be relevant to ask, what do you think Zack's Ultimate Point is?)

Ten years ago, this community was a force for unusual levels of clarity and integrity. Now it seems to be a force for unusual levels of insanity and dishonesty, but because most people here seem to believe that dishonesty is always intentional, and each takes his own intent to be honest, they implicitly assume that it is impossible to be dishonest without being aware of it, and thus a lot of the worst offenders manage to convince themselves that they are perfectly or almost perfectly honest. By contrast, when people engage in similarly toxic flamewars on e.g. Twitter or Reddit, they are at least usually not in deep denial about being eristic in their argumentation; they do not usually pride themselves on their good faith at the same time, and on that account they are still not quite as dishonest as many LessWrongers have become.

I do think there are gains to be made in increasing cooperativeness, but my experience is that there tends to be a need for greater order (e.g. in the conversation about trans stuff, there's a lack of people forwarding their interests in a structured manner). My current working theory is that it is a cost/coordination problem: for a lot of 1-on-1 disputes, it's simply not worth it to go through the motions to accurately resolve them, and nobody has set up good enough organizations to fund the resolution of N-on-M disputes.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-04T19:15:05.346Z · LW(p) · GW(p)

The thing is, this tactic needs the cooperation of both participants to work. If the participant getting attacked with the catch-22 just makes a clear description of the central point, and then writes a quick, clear answer to each sidetracking about how they are sidetracking, it's easy to resist.

No it doesn't; it just requires that the person engaging in the tactic is sufficiently persistent to resume immediately after the victim of the tactic has defused it using the defence you recommend. The tactic will succeed if there's even the slightest failure in the victim's vigilance, and your prescription still not only leaves the victim on the defensive, but also (at least in the example conversation you linked) puts the person using this defensive tactic in the position of having to make demands, which may well become repetitive if the attacker is being persistent, and which on that account opens further vulnerabilities.

Also, your example gives a grossly distorted picture because, first, it is a case in which you are playing the role of a "helpful mediator", or, more bluntly, that of an enabler, and second, the tactic I am describing was not particularly central to the strategy of either person's side in that particular case. It simply is not a relevant example to any appreciable degree.

I disagree, because Zack's Ultimate Point is also somewhat unclear to me.

Because you, like other LessWrongers, are in the habit of being fooled by your own manipulations, such as the aforementioned weaponised confusion, and even so, you have correctly identified Zack's ultimate point in your reference to his original tagline.

That said, if he was purely making a philosophical point about locally valid [LW · GW] types of reasoning for classification, then that would be OK. What I'm saying is that part of what shapes the conflict a lot is that people don't really believe that he is purely making a philosophical point about classification.

Valid principles of classification are valid even if their proponents are advocating them with a view to some other, more specific point, and the fact that he has that point in mind when making posts about those principles of classification does not alter the fact that such posts are about principles of classification and not about the points he plans to make with them. This is not merely a high-decoupling vs low-decoupling thing; I am not suggesting that people should feign ignorance of his broader point, simply pointing out that the fact that he may advocate some principles of classification as part of a more specific line of argumentation about autogynephilia does not in fact create ambiguity surrounding the thesis/theses of a single given post. They can still straightforwardly be classified as making a point about autogynephilia, about the philosophy of classification, about the flaws of the rationalist community, or some combination of these. This post is clearly mainly a critique of the rationalist community, with the other two topics being secondary to that.

I do think there are gains to be made in increasing cooperativeness, but my experience is that there tends to be a need for greater order

I think there has been an excess of cooperativeness. Setting yourself up as a helpful mediator between Zack and his abusers is an injustice to Zack. The abusers need to be put in their place, rather.

Replies from: tailcalled
comment by tailcalled · 2024-01-04T22:22:03.573Z · LW(p) · GW(p)

Because you, like other LessWrongers, are in the habit of being fooled by your own manipulations, such as the aforementioned weaponised confusion, and even so, you have correctly identified Zack's ultimate point in your reference to his original tagline.

See, the thing is, for a long time I used to think Zack's ultimate point was his original tagline, but as I kept pushing him more and more to focus on empirical research in the area instead of on arguing with rationalists, eventually he stopped me and corrected me: his true point wasn't really trans etiology anymore; it was philosophy of classification.

(This was in DMs, IIRC, so I don't immediately have a link on hand.)

Valid principles of classification are valid even if their proponents are advocating them with a view to some other, more specific point, and the fact that he has that point in mind when making posts about those principles of classification does not alter the fact that such posts are about principles of classification and not about the points he plans to make with them. This is not merely a high-decoupling vs low-decoupling thing; I am not suggesting that people should feign ignorance of his broader point, simply pointing out that the fact that he may advocate some principles of classification as part of a more specific line of argumentation about autogynephilia does not in fact create ambiguity surrounding the thesis/theses of a single given post. They can still straightforwardly be classified as making a point about autogynephilia, about the philosophy of classification, about the flaws of the rationalist community, or some combination of these. This post is clearly mainly a critique of the rationalist community, with the other two topics being secondary to that.

The other two topics can't be relegated to secondary relevance in this way. This post is a critique of the rationalist community, but it's a critique with respect to the philosophy of classification (and autogynephilia?), and so understanding the point of the original conflict around philosophy of classification is a necessary condition for understanding the meaning of the critique of the rationalist community.

One option to bypass this problem would be to instead consider posts which are less directly dependent on history. Some examples which seem subtly relevant to the AGP debates without being directly dependent on their history (trying to be reasonably comprehensive):

  • Assume bad faith [LW · GW]: While there was lots of opposition in the comments, it was opposition that made me think.
  • Challenges to Yudkowsky's pronoun reform proposal [LW · GW]: Comments are mostly supportive and reasonable. There are less-reasonable comments but they have fewer upvotes and their rebuttals have a lot of upvotes.
  • Blood is thicker than water 🐬 [LW · GW]: There's a lot of pushback in the comments. This could fit under your model, but I'd also guess it's partly an incompleteness of the post. One illustrative example was this [LW(p) · GW(p)], where I knew the way it was incomplete and could therefore add additional information. At the time it was posted, I didn't know enough to correct the other incompletenesses, but after spending a long time philosophizing about categorization, I think I know the answers to the other ones, so I'm inclined to say the pushback was appropriate for highlighting the problems with the post.
  • Reply to Nate Soares on dolphins [LW · GW]: Most comments are fine. One point is, Nate Soares claims that he didn't mean this in relation to transgender topics. I'm not sure what you make of that but it seems believable to me.
  • Communication requires common interests or differential signal costs [LW · GW]: Not sure how relevant it is intended to be to the topic, but it seems relevant. Comments seem fine.
  • Unnatural categories are optimized for deception [LW · GW]: Clearly a spicy take when understood in the context of trans issues, but the comments there seem perfectly fine.
  • Message length [LW · GW]: Comments were positive, but actually it should have received more pushback if it was interpreted in the context of transgender debates, because a lot of the disagreements are causal, whereas this post is correlational.

Overall, I don't think the pattern is as bad as you say.

I think there has been an excess of cooperativeness. Setting yourself up as a helpful mediator between Zack and his abusers is an injustice to Zack. The abusers need to be put in their place, rather.

One of the abusers was sapphire, who I posted a pretty decisive rebuttal to. Is this not putting her in her place? There was a subtext of "you seem to be part of the forces that are trying to control Zack"; would it have been sufficient to surface this subtext?

Another person I responded to was Viliam, but at the time of responding, I believed Viliam to be genuinely confused about Zack's ultimate point, because Viliam thought Zack's ultimate point was about the etiology of transsexuality, and I had been privately corrected that he had changed his area of discourse. If I got it right, then it was an understandable/non-abusive confusion for Viliam to have, as can be observed from you having a similar confusion. Though the fact that Zack said elsewhere in this post that part of his core position was the etiology of transness does support Viliam's original position - but in that case there is actually a lot to be said in defense of rationalists, because a lot of the autogynephilia discourse is simply abysmal. (And the fact that the Ultimate Point is so inconsistent generates good reasons to be confused.)

Also, the main abusers are presumably Scott Alexander and Eliezer Yudkowsky (and maybe also Ozy or someone like that?) but I haven't exactly recommended that Zack cooperate more with them. Instead, in the case of e.g. Scott Alexander, I have told Zack that Scott doesn't pay enough attention to this subject for him to get through (and I don't think I have gotten involved with Eliezer).

It is true that I have come up with opinions about how Zack should communicate his message, but I don't really think it is accurate to characterize it as me setting myself up as a helpful mediator. A lot of it comes down to the fact that I have spent the past few years researching transgender topics for my own purposes, for a long period believing in autogynephilia theory, but then uncovering a wide array of flaws. Under such a circumstance, it seems relevant to inform Zack "hey, the core of these arguments we've been making all this time has these gaping flaws, you should probably fix your strategy. here's my understanding of how, given your goals". Separately from this, I am also interested in correcting autogynephilia theory, or at least informing non-autogynephilia-theorists that autogynephilia theory is deeply flawed, so that at least my work can get some use and my frustrations over the last few years can be legible to someone.

In fact a substantial part of my opinions come from attempting to change autogynephilia theorists' minds, failing, and trying to work out the patterns of why I failed - what their rhetorical motivations and inferential methods must be in order for them to end up stuck in precisely these errors.

The tactic will succeed if there's even the slightest failure in the victim's vigilance 

I don't think this is true because if it gets off track one can sort of take stock and "regroup", getting rid of irrelevant side-threads and returning to the core of it.

and your prescription still not only leaves the victim on the defensive, but also (at least in the example conversation you linked) puts the person using this defensive tactic in the position of having to make demands, which may well become repetitive if the attacker is being persistent, and which on that account opens further vulnerabilities.

Not sure I understand this.

Also, your example gives a grossly distorted picture because, first, it is a case in which you are playing the role of a "helpful mediator", or, more bluntly, that of an enabler,

I'm not even sure who you say I am enabling in that link - Jiro or S. Verona Lišková? Both?

My view is that both of them were obscuring the positions they were taking (probably intentionally, because their positions were unpopular?), with Jiro taking the position of "transness should not be normalized" and S. Verona Lišková taking positions such as "trans women do not have any male sports advantage", "trans teens should be able to transition without their parents knowing about it, and this shouldn't even be up for debate", etc.

second, the tactic I am describing was not particularly central to the strategy of either person's side in that particular case. It simply is not a relevant example to any appreciable degree.

I guess.

My take is that I am consistently able to navigate rationalist conversations about autogynephilia theory or sex differences without getting caught up in these sorts of issues. I don't know if we could measure it somehow - e.g. having me write a post as a test or something. So I find it weird to see this as a "rationalist thing", and when I look at what the various Blanchardians are doing, I can quite easily see lots of ways in which they set themselves up for this kind of trouble.

This admittedly wasn't always so clear to me, but the way it became clear to me was that I studied the subject matter of autogynephilia, learned a lot of things, tried to talk with Blanchardians about them, and saw them resist in weird ways.

These also seem like ways in which I could've set myself up for the same sort of catch-22 in the conversation I linked, which is why I linked it.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-05T14:44:23.066Z · LW(p) · GW(p)

See, the thing is, for a long time I used to think Zack's ultimate point was his original tagline, but as I kept pushing him more and more to focus on empirical research in the area instead of on arguing with rationalists, eventually he stopped me and corrected me: his true point wasn't really trans etiology anymore; it was philosophy of classification.

True point =/= ultimate point. The ultimate point is where your line of argumentation terminates, whereas the true point is simply the point you care most about in the given moment. At this point it appears to me that his focus has shifted all the way to calling out "the blight" or "epistemic rot", ie. the apparent decline of a community he loves or loved. That, then, would be his present true point, though the ultimate point is nevertheless the one corresponding to his original tagline.

The other two topics can't be relegated to secondary relevance in this way. This post is a critique of the rationalist community, but it's a critique with respect to the philosophy of classification (and autogynephilia?), and so understanding the point of the original conflict around philosophy of classification is a necessary condition for understanding the meaning of the critique of the rationalist community.

That is what I meant by "secondary", though, in analogy to how a necessary instrumental goal is sometimes described by non-LessWrongers as being secondary to their final purpose.

Overall, I don't think the pattern is as bad as you say.

Most of those posts are from before the thing I call "constant abuse" began on LessWrong. It started when Zack began more directly calling out the rationalist community. The only post you gave as an example from this period was the Assume Bad Faith one, and that one wasn't one in which he directly addressed any of the three topics enumerated (LOGD not being an intersex condition, philosophy of classification, critique of LW), so it is not actually a counterexample to the trend I am talking about. If you look at his recent posts on these topics, you will find that the pattern of abuse began at some point and has been a constant occurrence since.

Of course, he was having some mental health issues before then, but as his chronicle shows, he was being met with a lot of abuse well before that abuse became a constant trend in his LessWrong comment sections in particular. The reason I attribute his present mental health issues to the present constant abuse, however, is that I don't think of mental illness as a switch that's turned on somehow and then remains turned on, caused by the initial trigger. I attribute his past mental illness to past abuse, and his present mental illness to the current stream of abuse, ie. the one I referred to as "constant abuse". While I have no doubt that there are endogenous factors to his mental illness (eg. his decision to try to save this sinking ship that is LW rather than walking out on it), I don't think those are the main factors that make him deviate from baseline mental health. That seems distinctly attributable to the mistreatment of him, rather.

One of the abusers was sapphire, who I posted a pretty decisive rebuttal to. Is this not putting her in her place? There was a subtext of "you seem to be part of the forces that are trying to control Zack"; would it have been sufficient to surface this subtext?

Potentially. The way your comment was written was decidedly insufficient, however. Wielding massive social and financial pressures against thoughtcriminals to silence them and champion a progressive cause is so ubiquitous and widely accepted that a subtext is certainly not sufficient social punishment for someone who evidently takes such controlling behaviours (and her right to them) for granted, and indeed sapphire responded to your comment with more of the same abuse: "I don't think we should help him convince other people of a position that seems to have driven him kinda insane."

More importantly, when I called out the abuse more directly[1], you immediately made a comment that seemed to imply that the constant abuse could not be the reason for what sapphire calls his insanity, by arguing that the abuse was not constant. In this comment, you also described sapphire's mistreatment of him as being merely "arguably" abusive, when it quite clearly had the form of a bully telling the victim that he shouldn't have picked the losing side — a grossly and overtly abusive behaviour. You then characterised what seems to me like an abusive pattern of weaponised confusion and the catch-22 tactic I mentioned earlier as being merely "unstructured and unproductive" rather than abusive, and attributed this to what you deem flaws in Zack's writings. That is you using the very same abusive tactic to downplay the abuse he is being met with.

Also, whatever flaws his writing may have, none of them come even close to justifying the way in which he is treated, and your initial comment in this thread obscured this important point by way of blaming the victim with a semi-plausible critique of his writing.

By being less abusive than sapphire and simultaneously two-siding it with "to be fair, they do have a point that Zack's writing is unclear", you are juxtaposing these two criticisms and making the case seem a lot more even than it is. One side is engaging in gross overt abuse against someone who has been gaslit by progressive ideology, the other side writes posts that are too long and meandering. Guess I'll take the middle ground. Also worth noting that you wrote considerably more words to criticise Zack's writing than to call out sapphire's abusive behaviour. You effectively set yourself up to appear as a sensible middle-ground, creating a position of compromise between Zack and his abusers, which is frankly worse than anything sapphire did, but even setting that whole tactic aside, you were also being directly abusive to him yourself as I pointed out two paragraphs ago.

It is true that I have come up with opinions about how Zack should communicate his message, but I don't really think it is accurate to characterize it as me setting myself up as a helpful mediator. A lot of it comes down to the fact that I have spent the past few years researching transgender topics for my own purposes, for a long period believing in autogynephilia theory, but then uncovering a wide array of flaws.

I am again in the position of having to remind you that being incorrect about factual issues is not a sufficient justification for others to engage in vicious abuse against you. Also, it was specifically your behaviour in this comment thread that I am characterising as setting yourself up as a helpful mediator. Your comment in this thread was not directed at Zack, pointing out flaws in his autogynephilia theory, but directed at me, undermining my attempt to call out sapphire's blatantly and grossly abusive behaviour.

I don't think this is true because if it gets off track one can sort of take stock and "regroup", getting rid of irrelevant side-threads and returning to the core of it.

That approach does not defuse the moral opprobrium levelled against a person for being long-winded or making baseless assertions. These "regroupings" can equally well be engaged in by the person wielding the abusive tactic as by the person trying to defend himself against it, but it is typically the abuser and not the defender who has more experience controlling and weaponising the complexity of a discussion.

Not sure I understand this.

The defensive person is in the position of having to demand regroupings, or else of trying to simply impose them. Either one gives a weapon to the attacker if done repeatedly.

I'm not even sure who you say I am enabling in that link - Jiro or S. Verona Lišková? Both?

That's beside the point. The point is that you are not the recipient of the abuse and so your situation is fundamentally different, and the only reason it even looks successful is because it manages to set you up as a reasonable mediator who is above it all, thus flattering your narcissism.

My take is that I am consistently able to navigate rationalist conversations about autogynephilia theory or sex differences without getting caught up in these sorts of issues.

My take is that that reflects negatively on your own communication tactics and merely indicates being skilful at manipulation, though in this case it is probably as simple as two-siding everything. Your take reminds me of the "white allies" who say that Malcolm X was setting himself up for trouble by being too combative, or, on the other end of the political aisle, of William F. Buckley trying to clean up the mainstream right by silencing e.g. libertarians, paleoconservatives, and populists. I believe your recent dabbles in critical theory have taught you something or other about this social dynamic, which is itself a part of the abuse I am accusing the LessWrong community of being guilty of.

  1. ^

    Incidentally, I was sorely tempted to invoke Godwin's law and point out that she could've wielded the same tactic against frustrated, embittered dissidents living under Nazism, and with only very slight variations she could've used it to condemn the Edelweiss Pirates, the Swing Kids, etc., e.g. "I don't think we should help him convince other people of a position that seems to have gotten him ostracised and driven him into trouble with the SS". Granted, it was "mere" psychiatrists that Zack had gotten in trouble with.

Replies from: tailcalled, Vaniver
comment by tailcalled · 2024-01-05T23:17:38.366Z · LW(p) · GW(p)

Your take reminds me of the "white allies" who say that Malcolm X was setting himself up for trouble by being too combative, or, on the other end of the political aisle, of William F. Buckley trying to clean up the mainstream right by silencing e.g. libertarians, paleoconservatives, and populists. I believe your recent dabbles in critical theory have taught you something or other about this social dynamic, which is itself a part of the abuse I am accusing the LessWrong community of being guilty of.

My dabbles in critical theory arose from, and are almost entirely limited to, my contact with Zack's associates, and from critical theorists seeming to describe pathologies that I have frequently faced from Blanchardians. As such, Blanchardianism basically screens off (in a probabilistic DAG sense) other critical theory topics for me. If critical theory says that my behavior in these topics is that of Bad Centrists, then I say "hmm then maybe those Bad Centrists were actually onto something, idk". I don't know anything about how combative Malcolm X was, nor do I know anything about William F. Buckley, I just know that Blanchardianism sucks, and if critical theorists don't know that then they lack basic information for commenting on this subject matter.
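(For readers unfamiliar with the jargon: "screens off" is the standard conditional-independence notion from probabilistic graphical models. In a chain X → B → C, once B is known, learning X provides no further information about C. Here is a minimal sketch in Python, with purely illustrative toy variables rather than anything drawn from this thread:)

```python
# Toy illustration of "screening off" (d-separation) in a chain X -> B -> C.
# All variable names are illustrative; this just demonstrates the standard
# conditional-independence property of a Markov chain.
from itertools import product

p_x = {0: 0.7, 1: 0.3}                   # P(X)
p_b_given_x = {0: {0: 0.9, 1: 0.1},      # P(B | X)
               1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.6, 1: 0.4},      # P(C | B): C depends only on B
               1: {0: 0.1, 1: 0.9}}

# Joint distribution P(X, B, C) = P(X) * P(B|X) * P(C|B)
joint = {(x, b, c): p_x[x] * p_b_given_x[x][b] * p_c_given_b[b][c]
         for x, b, c in product((0, 1), repeat=3)}

def p_c1_given(b, x=None):
    """P(C=1 | B=b), optionally also conditioning on X=x."""
    rows = [(k, v) for k, v in joint.items()
            if k[1] == b and (x is None or k[0] == x)]
    total = sum(v for _, v in rows)
    return sum(v for k, v in rows if k[2] == 1) / total

for b in (0, 1):
    # Once B is known, learning X changes nothing: B "screens off" X from C.
    assert abs(p_c1_given(b) - p_c1_given(b, x=0)) < 1e-12
    assert abs(p_c1_given(b) - p_c1_given(b, x=1)) < 1e-12
```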

Most of those posts are from before the thing I call "constant abuse" began on LessWrong. It started when Zack began more directly calling out the rationalist community.

I guess "Zack only recently began more directly calling out the rationalist community" is maybe a natural way for an outsider/newcomer to parse this conflict, idk. I don't find this parsing super intuitive because I immediately think of posts from 2018-2020 like this and this and this and this. But I was following his blog during this time, and these haven't really been discussed on LessWrong due to the "no politics!" restriction.

If I were to do a timeline, the most intuitive version for me would be:

  • 2016-2017 - Zack and rationalists were debating autogynephilia, but mostly in-person or in obscure Facebook threads, so it is hard to know exactly who did well, though given Zack's current arguments, and the usual arguments forwarded by Blanchardians, and the fact that Zack has talked about pushing MTIMB on people, it seems like a good bet that Zack's core arguments were abysmal.[1]
  • 2018 - Zack posts his response to Scott, finds it didn't work, gives up on the rationalist community. He posts mourning statements on his blog, and continues to critique them on and off.
  • 2019-2020 - Zack starts posting transgender-related critiques to LessWrong, using metaphors, nonspecificity, and such things to make them relatively inoffensive.
  • 2021-now - Zack starts posting his memoir, which among other things reveals more direct issues with rationalist leaders.

Now, all but one of the links I gave were post-2021, so clearly this breakdown doesn't capture your objection. Zooming in on the last bit, my reading is:

  • I guess on reflection Zack was really uneven in his publication of his memoir?? He posted part 1 in 2021 [LW · GW], but then waited until 2023 to post his second part, and now posted this third part just before 2024. Which I guess makes the bulk of the LW conflict much later than I'd intuitively think of it.
  • Zack did criticize rationalist leaders during this time, though, including the 2021 stuff on dolphins and the 2022 stuff on pronouns.

So I suppose you could validly raise the hypothesis of "the abuse only arose when Zack escalated". I don't really buy into this, partly for reasons I'll get into later, but at least the history I cited before doesn't disprove it.

(How about the comments to his previous post in this series, Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer [LW · GW]? I take from "If you look at his recent posts on these topics, you will find that the pattern of abuse began at some point and was a constant occurrence since" that you are asserting it showed up here too? Most of the comments on it are fine though, and the bad comments seem well within the tolerance zone where it would be incredibly fragile not to tolerate them. Maybe you're referring to Alyssa's twitter stunt? Idk? I'm confused.)

Of course, he was having some mental health issues before then, but as his chronicle shows, he was being met with a lot of abuse well before that abuse became a constant trend in his LessWrong comment sections in particular. The reason I attribute his present mental health issues to the present constant abuse, however, is that I don't think of mental illness as a switch that's turned on somehow and then remains turned on, caused by the initial trigger. I attribute his past mental illness to past abuse, and his present mental illness to the current stream of abuse, ie. the one I referred to as "constant abuse". While I have no doubt that there are endogenous factors to his mental illness (eg. his decision to try to save this sinking ship that is LW rather than walking out on it), I don't think those are the main factors that make him deviate from baseline mental health. That seems distinctly attributable to the mistreatment of him, rather.

Just to be clear, I'm not saying that Zack is Simply Crazy And That's Why He's Doing This. I agree that Scott's weird stonewalling of him makes it worse.

Potentially. The way your comment was written was decidedly insufficient, however. Wielding massive social and financial pressures against thoughtcriminals to silence them and champion a progressive cause is so ubiquitous and widely accepted that a subtext is certainly not sufficient social punishment for someone who evidently takes such controlling behaviours (and her right to them) for granted, and indeed sapphire responded to your comment with more of the same abuse: "I don't think we should help him convince other people of a position that seems to have driven him kinda insane."

More importantly, when I called out the abuse more directly[1] [LW(p) · GW(p)], you immediately made a comment that seemed to imply that the constant abuse could not be the reason for what sapphire calls his insanity, by arguing that the abuse was not constant. In this comment, you also described sapphire's mistreatment of him as being merely "arguably" abusive, when it quite clearly had the form of a bully telling the victim that he shouldn't have picked the losing side — a grossly and overtly abusive behaviour. You then characterised what seems to me like an abusive pattern of weaponised confusion and the catch-22 tactic I mentioned earlier as being merely "unstructured and unproductive" rather than abusive, and attributed this to what you deem flaws in Zack's writings. That is you using the very same abusive tactic to downplay the abuse he is being met with.

I'm... somewhat ambivalent about describing sapphire as "Wielding massive social and financial pressures against thoughtcriminals to silence them and champion a progressive cause"? On the one hand, a point in favor is when she threatened A Certain Person with a ban in the Slate Star Codex discord server for saying that rationalism is the most extreme malebrained area available. But idk, this isn't that abusive, considering the ban didn't happen and that the person was being kind of childish about it (in the typical annoying Blanchardian way of making vague yet extreme claims - really needed to be put in his place). On the other hand, she did let autogynephilia discourse flourish for quite a while, quite strongly, on the very same server, even including around that very same person. And if I understand correctly, her weakness in moderating eventually led to its culture war channel becoming a cesspool and her stepping down? Idk, I wasn't around at that time.

It doesn't seem to me that sapphire has been consistent enough towards this topic to be described as "constantly" anything, and it doesn't seem to me that sapphire and Zack have had enough interactions to describe their relationship with any sort of constancy either.

Of course, her not being constantly abusive does not mean she is not sometimes abusive. My above writing is not a claim that she was not acting abusively. I do lean towards saying that she treated Zack abusively. The "unstructured and unproductive" comment also wasn't meant to apply to someone like sapphire, but instead to various other figures. I'm tempted to admit that I was wrong to use the term "arguably abusive"; however, I do think that because the original conflict is so messy, it's not so straightforward. (And usually "arguably X" is used to refer to a case where you lean in favor of X, or at least want to forward something like X?)

Also, whatever flaws his writing may have, none of them come even close to justifying the way in which he is treated, and your initial comment in this thread obscured this important point by way of blaming the victim with a semi-plausible critique of his writing.

By being less abusive than sapphire and simultaneously two-siding it with "to be fair, they do have a point that Zack's writing is unclear", you are juxtaposing these two criticisms and making the case seem a lot more even than it is. One side is engaging in gross overt abuse against someone who has been gaslit by progressive ideology, the other side writes posts that are too long and meandering. Guess I'll take the middle ground. Also worth noting that you wrote considerably more words to criticise Zack's writing than to call out sapphire's abusive behaviour. You effectively set yourself up to appear as a sensible middle-ground, creating a position of compromise between Zack and his abusers, which is frankly worse than anything sapphire did, but even setting that whole tactic aside, you were also being directly abusive to him yourself as I pointed out two paragraphs ago.

I feel like excessive use of the term "abuse" makes this less clear.

If we interpret sapphire as making a forceful threat, then Zack's poor writing doesn't justify the forceful threat. (On the other hand, if Zack were, e.g., a university professor or a clinical researcher, then poor arguments for his theories would justify a threat of firing - it'd literally be his job to do proper research.) This wasn't really how I interpreted it, and last I heard from Zack, it's not really something he has feared. But I guess I can see how one could interpret it that way.

But... again if we take someone like Viliam, I think calling his comment "abuse" is just wrong? If Zack's original arguments against rationalists were bad, then rationalists shouldn't be convinced by them, and it's not that outrageous that they sort of make a half-assed counter and then ignore the topic, and it's a relevant point to ask "but wait, what were your original arguments? doesn't this seem overly convoluted?".

I am again in the position of having to remind you that being incorrect about factual issues is not a sufficient justification for others to engage in vicious abuse against you. Also, it was specifically your behaviour in this comment thread that I am characterising as setting yourself up as a helpful mediator. Your comment in this thread was not directed at Zack, pointing out flaws in his autogynephilia theory, but directed at me, undermining my attempt to call out sapphire's blatantly and grossly abusive behaviour.

My actual motivation with my original comment was to try to point you at some of the areas in which Blanchardians are wrong or even abusive, since (in our other Discussion, in the emails) you were skeptical that my views are all that much driven by my experiences with Blanchardianism.

  1. ^

    A case study would presumably be Reply to Ozymandias on Two-Type MtF Taxonomy. A lot of Ozy's arguments were really bad, and then Zack responded by correcting Ozy's bad arguments, but also by throwing a whole bunch of other bad arguments in there too. This was arguably the debate that originally convinced me of Blanchardianism, and yet in retrospect the thing that convinced me (the ETLE stuff) shouldn't have been convincing! Of particular note is Zack's statement "But you agree that erotic female embodiment fantasies are very common in pre-trans women; you seem to think this can be a mere manifestation of gender dysphoria.", which borders on abusive considering how it equivocates between Ozy's moderate "well let's listen to what trans women have to say about their experiences" and the Blanchardians' radical "let's assume that 80% of gynephilic trans women are severely lying about this subject, and therefore disqualify their testimony/exaggerated their claimed experiences and declare this the Official Scientific Truth and call everyone who is objecting deniers". This rhetorical trick works by using the vagueness of informal language instead of doing more crisp and precise psychometric characterizations. Or there's also the post before that, Reply to Ozymandias on Autogynephilia, where by my count 5 out of 6 of the replies were Bad Takes.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-06T00:51:01.070Z · LW(p) · GW(p)

My dabbles in critical theory arose from, and are almost entirely limited to, my contact with Zack's associates, and from critical theorists seeming to describe pathologies that I have frequently faced from Blanchardians. As such, Blanchardianism basically screens off (in a probabilistic DAG sense) other critical theory topics for me. If critical theory says that my behavior in these topics is that of Bad Centrists, then I say "hmm then maybe those Bad Centrists were actually onto something, idk". I don't know anything about how combative Malcolm X was, nor do I know anything about William F. Buckley, I just know that Blanchardianism sucks, and if critical theorists don't know that then they lack basic information for commenting on this subject matter.

So the whole critical theory thing really was just self-serving, then. Funny, the critical theorists wrote about that, too.

I guess "Zack only recently began more directly calling out the rationalist community" is maybe a natural way for an outsider/newcomer to parse this conflict, idk. I don't find this parsing super intuitive because I immediately think of posts from 2018-2020 like this and this and this and this. But I was following his blog during this time, and these haven't really been discussed on LessWrong due to the "no politics!" restriction.

Look, I am really becoming quite impatient with this whole tangent of nitpicking one single adjective that was never particularly essential to my argument. There are older cases of him calling out the LessWrong community, some of them even before 2018, and there are also older cases of him being abused in various ways. His more recent interactions are met with an abusive reception more consistently than his older interactions. I am not going to bother continuing to defend my choice to use the phrase "constant abuse". My point stands without it, and as far as I can tell, there is no point to this endless nitpicking other than simply evading the actual argument.

 Just to be clear, I'm not saying that Zack is Simply Crazy And That's Why He's Doing This. I agree that Scott's weird stonewalling of him makes it worse.

Great, you've added one more way to feel above it all and congratulate yourself on it. Now if you could see how your own behaviour makes it worse, we might actually get somewhere.

I'm... somewhat ambivalent about describing sapphire as "Wielding massive social and financial pressures against thoughtcriminals to silence them and champion a progressive cause"?

She didn't create the pressure, but she invoked it when talking about how he has gone insane and is losing friends, etc., and she certainly wielded it against him. But actually my point was simply that even the creation of such pressures is so widely accepted that your callout of sapphire's comparatively milder abuse would fly beneath the radar of most people, and thus not work effectively as a callout.

It doesn't seem to me that sapphire has been consistent enough towards this topic to be described as "constantly" anything

Hold on; I talked about the abuse he's been receiving as an explanation for his insanity, not as part of an accusation that sapphire was constantly abusing him. I was in fact in the process of collecting examples of abusive behaviour and other bad faith engagement, to use for a post about the existence of unintentional manipulation and other forms of bad faith that the perpetrator may not be aware of engaging in, because there is in the LessWrong community a completely erroneous implicit assumption that people are always aware when they're being manipulative. I wanted to make a post correcting this error, explaining some things about the boundaries of consciousness, about what it means for intents to be conscious, etc., and I wanted to illustrate it with examples of people unknowingly engaging in bad faith.

If we interpret sapphire as making a forceful threat, then Zack's poor writing doesn't justify the forceful threat. (On the other hand, if Zack were, e.g., a university professor or a clinical researcher, then poor arguments for his theories would justify a threat of firing - it'd literally be his job to do proper research.) This wasn't really how I interpreted it, and last I heard from Zack, it's not really something he has feared. But I guess I can see how one could interpret it that way.

I did not interpret it as a forceful threat either.

But... again if we take someone like Viliam, I think calling his comment "abuse" is just wrong? If Zack's original arguments against rationalists were bad, then rationalists shouldn't be convinced by them, and it's not that outrageous that they sort of make a half-assed counter and then ignore the topic, and it's a relevant point to ask "but wait, what were your original arguments? doesn't this seem overly convoluted?".

The only way you're getting this analysis to sound reasonable at all is by omitting a lot of crucial points. For example: the cultishness of trans theory in assuming that gender dysphoria in an AMAB person implies female brainsex; the fact that his arguments, though erroneous in some of the particulars, did point to a very real and very central point, which I will here just indicate as the point that not all MtFs are HSTS, whatever the explanation for the others; the disinterest in seriously investigating these issues at all, despite how massively they impact so many members of this community; etc. There was plenty of very real bad faith in the LW community's reception of Zack's points, well beyond what can be explained by factual errors, especially when the errors were of the sort that took even you a considerable amount of time to discover.

My actual motivation with my original comment was to try to point you at some of the areas in which Blanchardians are wrong or even abusive

Most Blanchardians I have interacted with were TERFs, whom I consider to be some of the most dishonest, abusive people I have ever encountered. Even with our current falling out, I am still utterly enraged at how Rod Fleming treats you. He is probably in my top ten of least likeable people I have ever encountered. I am very annoyed at Michael Bailey's behaviour towards you, because I would very much have liked to see debates between you and him.

I am not sure why you think you need to convince me that Blanchardians are wrong and most of them abusive. I think it is worth pointing out that you are just now making the case that a community you interact with a lot, and which you were a part of for a long time, is wrong and abusive. You have made the same observation about a lot of other such communities. I don't remember the exact list, but I seem to recall that it included liberals (and perhaps antifeminists? idk). 

Here's the kicker: I agree.

I also happen to think it might be fruitful for you to wonder if you might be drawn to these abusive communities, and whether the abusiveness might have been something of a constant throughout your changing affiliations, and whether it might not have persisted through your most recent such changes.

since (in our other Discussion, in the emails) you were skeptical that my views are all that much driven by my experiences with Blanchardianism.

Because, regarding your dabbles in critical theory, I was paying attention to the bright little spark of genuine contrition and good faith, not to the apparently much larger component that was merely self-serving. Perhaps I was being overly Christian.

Replies from: tailcalled
comment by tailcalled · 2024-01-07T00:13:30.883Z · LW(p) · GW(p)

So the whole critical theory thing really was just self-serving, then. Funny, the critical theorists wrote about that, too.

I don't know what the critical theorists wrote about this, but I don't think it was just self-serving. I naturally learn the most about subjects that intersect with my activities, but that doesn't mean I can't change my opinions about other subjects on the basis of what I learn. The apparently-not-critical-theory-but-instead-something-else impression I got still made me question a bunch of my past behavior.

If critical theorists have come up with some relevant theory, then feel encouraged to post it. I'm not going to be convinced by vague allusions to figures I don't know anything about.

Look, I am really becoming quite impatient with this whole tangent of nitpicking one single adjective that was never particularly essential to my argument. ... My point stands without it, and as far as I can tell, there is no point to this endless nitpicking other than simply evading the actual argument.

Sure. Zack faces a bunch of abuse from his posts. Whether it's exactly constant isn't so important.

Great, you've added one more way to feel above it all and congratulate yourself on it. Now if you could see how your own behaviour makes it worse, we might actually get somewhere.

I've added one more way to feel above it all and congratulate myself on it? How?

The only way you're getting this analysis to sound reasonable at all is by omitting a lot of crucial points. For example: the cultishness of trans theory in assuming that gender dysphoria in an AMAB person implies female brainsex; the fact that his arguments, though erroneous in some of the particulars, did point to a very real and very central point, which I will here just indicate as the point that not all MtFs are HSTS, whatever the explanation for the others; the disinterest in seriously investigating these issues at all, despite how massively they impact so many members of this community; etc. There was plenty of very real bad faith in the LW community's reception of Zack's points, well beyond what can be explained by factual errors, especially when the errors were of the sort that took even you a considerable amount of time to discover.

Some issues with this:

  • Zack... doesn't seem to have discussed brainsex much?
  • Zack's dodge of cultish brainsex theories seems to be for stupid reasons. He seems to agree with the prior that brainsex theories are likely, as evidenced by his treatment of gender diagnosticity as reflecting brainsex, his sympathy towards the extreme male brain theory of autism, and his unqualified endorsement of Phil's book, which e.g. asserts that autogynephilia is linked with extreme male-brainedness. In such a case it seems reasonable for people to be confused and think "but if brainsex is so relevant in all these other cases, I suppose it's also relevant for transness?".
  • Feminine essence theory isn't really the leading alternative to Blanchardianism.
  • Approximately nobody in these rationalist debates is claiming that all MtFs are HSTS. I guess "which I will here just indicate" is supposed to signal that I'm not supposed to take this literally, maybe you're talking about the disruptive/pragmatic typology [LW(p) · GW(p)], but you've gotta explain it for it to make sense.
  • It's not clear how you're asking it to be investigated, and Zack hasn't written much about this either. (I have extensive opinions about how it should be investigated! But nobody listens to me about this...)

Most Blanchardians I have interacted with were TERFs, whom I consider to be some of the most dishonest, abusive people I have ever encountered. Even with our current falling out, I am still utterly enraged at how Rod Fleming treats you. He is probably in my top ten of least likeable people I have ever encountered. I am very annoyed at Michael Bailey's behaviour towards you, because I would very much have liked to see debates between you and him.

I am not sure why you think you need to convince me that Blanchardians are wrong and most of them abusive.

Well, for one, because you don't seem to agree with me in the case of Zack.

Also I'm not super convinced by your opposition to Michael Bailey as you probably don't know the specifics of that conflict. For all I know, you might support Bailey if you knew more. And considering that Michael Bailey did offer something like a debate, it seems like you need to be more specific about which subject you'd like to see me debate with him, in order for you to truly illustrate that you are not simply on his side.

I think it is worth pointing out that you are just now making the case that a community you interact with a lot, and which you were a part of for a long time, is wrong and abusive. You have made the same observation about a lot of other such communities. I don't remember the exact list, but I seem to recall that it included liberals (and perhaps antifeminists? idk). 

Here's the kicker: I agree.

I also happen to think it might be fruitful for you to wonder if you might be drawn to these abusive communities, and whether the abusiveness might have been something of a constant throughout your changing affiliations, and whether it might not have persisted through your most recent such changes.

Given that they were all abusive in like 2 very specific ways, yes, but also this makes me able to identify them in the future.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-07T01:40:59.848Z · LW(p) · GW(p)

The apparently-not-critical-theory-but-instead-something-else impression I got still made me question a bunch of my past behavior.

Indeed, and some of those lines of self-questioning did lead to regrets, for a time. Until you walked back your few genuine displays of good faith (edit: excepting the one with the discord server, though you did weaponise my emphasis on that one against me, which is arguably similar to walking it back). That sort of thing gives me the impression that even the process of questioning your past behaviour is basically just a self-serving preemptive defence against criticisms such as this one.

If critical theorists have come up with some relevant theory, then feel encouraged to post it. I'm not going to be convinced by vague allusions to figures I don't know anything about.

Iirc it is mostly in its applied forms, as in critical race theory. Robin DiAngelo for example frequently argues that white progressives are just appropriating the language of the civil rights movement and of subsequent theories (CRT being one) in a way that doesn't properly engage with the issues and is really just a self-serving tactic to preserve their privileged position and their white saviour complex. I believe Herbert Marcuse also argued something similar in One-Dimensional Man, albeit obviously without the focus on race.

I've added one more way to feel above it all and congratulate myself on it? How?

Because, whether by calculation or (as I think) by political instinct, all your critiques of Zack's critics are goal-oriented towards making you appear as the sensible moderate as contrasted with the extremists on both sides, even though in point of fact I have had to practically drag you to make even this admission. At first you were describing sapphire as only arguably abusive, and even in your previous comment you were still creating an outrageously one-sided portrayal of Zack's interactions with the community that simply portrayed his arguments as bad while glossing over the fact that he was pointing to a lot of real substance. Even after your break with Blanchardianism, you are after all still using most of the terminology that you were introduced to through Blanchardianism. There is real substance there, even if most (all?) of it predates Blanchard's own work, and the idea that the rationalist community dismissed it all simply because of flaws in Zack's arguments does not even come remotely close to being a reasonable characterisation. I think you know this on some level.

In short, your behaviour is goal oriented towards keeping up appearances of being a sensible moderate, charitable to both sides, while in actual fact having an absolutely immense bias.

  • Zack... doesn't seem to have discussed brainsex much?

Not by that term, but that is what is implied when discussing whether LOGD is an intersex condition. It's not like he was referring to XXY chromosomes or some such.

I am not convinced you are correctly interpreting that market.

Approximately nobody in these rationalist debates is claiming that all MtFs are HSTS

Of course not. Almost none of them would've even encountered the term if not for Blanchardianism, which is the point I'm getting at. Previously they would have simply recognised HSTSs as "straight transwomen" and left it at that.

  • It's not clear how you're asking it to be investigated, and Zack hasn't written much about this either. (I have extensive opinions about how it should be investigated! But nobody listens to me about this...)

It's not like I am criticising them for failing to spend lots of effort pursuing some particular line of investigation; I am just pointing out that their rejection of Zack simply cannot be explained by some flaws in Blanchardianism that took even you quite a while to uncover.

Look, I am going to be blunt and say that you have a profound proclivity for bullshitting and you really need to learn to get it under control.

Well, for one, because you don't seem to agree with me in the case of Zack.

Zack is being complicit in his own abuse in much the same way you are complicit in it, albeit to a lesser extent.

Also I'm not super convinced by your opposition to Michael Bailey as you probably don't know the specifics of that conflict. For all I know, you might support Bailey if you knew more. And considering that Michael Bailey did offer something like a debate, it seems like you need to be more specific about which subject you'd like to see me debate with him, in order for you to truly illustrate that you are not simply on his side.

I indeed don't know the specifics of that conflict; certainly not enough to be "simply on his side". I have however read your explanation of how he came to block you, and am willing to take your word for it, since it seems consistent with the vibe I get from him. He actually kinda reminds me of a very particular kind of annoying Catholic father figure[1]. So although I do not know the specifics of that conflict, I do know enough to have a negative overall impression of him, just going by vibes. 

My bad impression of him has been sufficient to deter me from looking closer into him without a clear reason, though such a reason was to some extent granted by his views on femininity as you related them to me (something to the effect that straight men will never be truly feminine). There, I am probably mostly on his side, though I suspect he has less understanding of the more aristocratic kind of femininity that I consider more central to the concept.

Given that they were all abusive in like 2 very specific ways, yes, but also this makes me able to identify them in the future.

Yes, you will be able to identify these particular manifestations of narcissism, and thus find communities in which it manifests differently, in ways you are less aware of, and hence will have even less self-awareness of perpetrating. If there is an improvement implied here, I fail to see it.

  1. ^

    and I say this as someone who both prefers Catholicism to Protestantism and patriarchy to feminism. There are nevertheless some very annoying, very prejudiced Catholic patriarchs in Texas, and he reminds me of them. I don't mean to imply that he actually is Catholic, of course.

Replies from: gjm, tailcalled
comment by gjm · 2024-01-07T03:06:04.803Z · LW(p) · GW(p)

I would find this discussion more enlightening and more pleasant to read if you would focus on the issues rather than devoting so much of what you write to saying what a bad person you think tailcalled is.

Of course there's no particular reason why you should care what I find enlightening or pleasant, so let me add that one strong effect of the large proportion of insults in what you write is that it makes me think it more likely that you're wrong. (Cf. this old lawyers' saying.)

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-07T14:54:43.001Z · LW(p) · GW(p)

The issue at hand is a critique of the rationalist community. A community is the product of its members. 

Although, in this particular case, part of the issue is that tailcalled is having a private feud with me on the side, which he decided to bring into this comment section under false pretenses, cf. his own words:

My actual motivation with my original comment was to try to point you at some of the areas in which Blanchardians are wrong or even abusive, since (in our other Discussion, in the emails) you were skeptical that my views are all that much driven by my experiences with Blanchardianism.

There is no excuse for this kind of manipulative behaviour, but it is par for the course when it comes to the LessWrong community and thus eminently relevant to critiquing that same community.

Replies from: gjm
comment by gjm · 2024-01-08T00:33:09.132Z · LW(p) · GW(p)

I haven't followed whatever Drama may be going on between you and tailcalled elsewhere, but I don't see anything manipulative or under-false-pretenses about what you're complaining about here.

(And, for what it's worth, reading this thread I get a much stronger impression of "importing grudges from elsewhere" from you than from tailcalled.)

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-08T03:32:32.754Z · LW(p) · GW(p)

but I don't see anything manipulative or under-false-pretenses about what you're complaining about here.

He responded to me in a manner that seemed to only suggest an intention of addressing the subject matter of discussion in this post, not an intention of swaying my stance towards him in our private feud, but then in the text I quoted, he explicitly states that his purpose was to sway my stance in that private feud. That's practically the definition of false pretenses.

You're falling prey to the halo effect. You are put off by my more disagreeable manner, and so you impute other negative characteristics to me and become blinded to even very blatant abuses from tailcalled towards me. For my part, I am compelled to be very forcefully assertive by tailcalled's extreme evasiveness.

(And, for what it's worth, reading this thread I get a much stronger impression of "importing grudges from elsewhere" from you than from tailcalled.)

That's because you've fallen for his manipulation tactics. He literally admitted the false pretenses, stopping only short of actually using that label. His original reply to me was, by his own admission, motivated by the private feud, which means he was the one who imported a grudge from elsewhere, regardless of what vibe you are getting. 

And the sole reason I come across as more begrudging than he does is that he keeps evading the points, so I have to keep directing him back towards them, making me appear forceful, which you may remember is precisely what I said would happen if I followed his prescription for defusing these manipulation tactics.

All of that is him manipulating you, and you have fallen for it.

Replies from: gjm
comment by gjm · 2024-01-08T04:36:06.269Z · LW(p) · GW(p)

I am not persuaded by any part of your analysis of the situation.

Saying something relevant to an ongoing discussion (which it seems clear to me tailcalled's original comment was) while also hoping it will be persuasive to someone who has disagreed with you about something else is not "false pretenses".

It is certainly true that I am put off by your disagreeable manner. I do not think this is the halo effect. Finding unpleasantness unpleasant isn't the halo/horns effect, it's just what unpleasantness is; as for any opinions I may form, that's a matter of reasoning "if Cornelius had good arguments I would expect him to use them; since he evidently prefers to insult people, it is likely that he doesn't have good arguments". Of course you might just enjoy being unpleasant for its own sake, in which case indeed I might underestimate the quality of the arguments or evidence you have at your disposal; if you want me (or others who think as I do) not to do that, I suggest that you try actually presenting said arguments and evidence rather than throwing insults around.

It doesn't look to me as if tailcalled is being evasive; if anything he[1] seems to me to be engaging with the issues rather more than you are. (Maybe he's being evasive in whatever other venues your Drama is spilling over from; I have no way of knowing about that.) In any case, evasiveness doesn't compel insults. There is no valid inference from "tailcalled is being evasive" to "I must devote a large fraction of what I say to tailcalled to insulting him".

[1] I actually have no idea of tailcalled's gender; I'm going along with your choice of pronoun. In the unlikely (but maybe less unlikely in this particular sort of context) event that this is leading me astray, my apologies to tailcalled.

It does not look to me as if your repeated insultingness towards tailcalled is a necessary consequence (or in fact any sort of consequence) of having to keep pulling the conversation back to something he is avoiding talking about. (I'm not sure what it is that you think he is avoiding talking about. Maybe it's How Terrible Tailcalled Is, but in that case I don't think you get to say "I'm only being insulting to tailcalled because he keeps trying to make the conversation be about something other than how awful he is".)

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-08T12:18:01.480Z · LW(p) · GW(p)

Saying something relevant to an ongoing discussion (which it seems clear to me tailcalled's original comment was) while also hoping it will be persuasive to someone who has disagreed with you about something else is not "false pretenses".

He specifically wanted to convince me that Blanchardians are abusive, which massively distorts his judgement with respect to commenting on the justice of Zack's actions and LW's reception of him. Tailcalled ought to at the very least have disclosed these ulterior motives from the beginning.

An additional point to note is that after more than a decade of efforts to mend the relationship, I gave up and cut off contact with tailcalled. I had however given him the opportunity to reach out to me with a view to making amends, or otherwise to convince me that I had been wrong to cut him off. He exploited this offer and chose to do neither, and for some reason I went along with it, causing the past several months to be a lot more torturous than they needed to be, but it was somewhat bearable because it was confined to that one email conversation.

Then he interacts with me here, not only to address the topic of Zack's post, but specifically to pursue his feud with me outside of emails.

 It is certainly true that I am put off by your disagreeable manner. I do not think this is the halo effect.

That's not what I said. It's your being put off by my disagreeable manner that makes you subject to the halo effect when it comes to tailcalled's responses.

as for any opinions I may form, that's a matter of reasoning "if Cornelius had good arguments I would expect him to use them; since he evidently prefers to insult people, it is likely that he doesn't have good arguments"

But the things you deemed insults were actually critiques of his character, not mere insults, and most of those critiques were aimed at showing that he is being unjust towards Zack, with the few exceptions pointing out character flaws that are characteristic of many LessWrongers and not just him. It is simply not possible to argue in favour of my position without raising points of personal criticism, because those points of criticism are absolutely central to my position, and it is only the horns effect that makes you perceive them as mere insults.

 Of course you might just enjoy being unpleasant for its own sake

No, I do not. I actually have quite a distaste for it, but when faced with an immensely abusive community such as this one, my only other means of defence is to plead for mercy, which is erosive to self-esteem.

But in this case, since I am dealing with tailcalled in particular, even that would not work. I have learned from more than a decade of abuse from him that this is the only viable defence. Problem is, if he is in a crowd of enablers who don't notice his bs because they are used to engaging in milder forms of the same abusive behaviour, then it will paint me as the abusive one.

It doesn't look to me as if tailcalled is being evasive; if anything he[1] seems to me to be engaging with the issues rather more than you are. 

No, this is simply him having evaded my arguments for so long that he has managed to distort your impression of what is actually being discussed. The main issue is a critique of the rationalist community. That then led to an issue of tailcalled's injustice in judging the feud, and that in turn led to an issue of his evading my points.

If you trace back the lines of argumentation where I seem to be insulting him, you will find that what you deem insults are mostly accusations of injustice that were centrally relevant to the argument. Then, by endless nitpicking and evasiveness, and my insistence on maintaining the accusations of injustice through this obfuscation, they became increasingly separated from their original context, and you quite simply lost track of why I made them in the first place.

There are however also a few of them (edit: namely, the ones about self-serving bias) that only make sense in context of the private feud, and which are in response to remarks of his (eg. about the critical theory) that only look cruel if seen in context, which sorta illustrates what I mean about the false pretenses, because if he had disclosed them from the beginning, I would not have engaged at all.

Edit: I am also suspicious that he might have taken it here in part to present the feud in front of a crowd, with zero context, and specifically a crowd that is part of his culture and is likely to agree with him based on surface appearances, setting up false appearances of unanimity.

*edit: removed a fact that could be used to personally identify tailcalled

Replies from: gjm, tailcalled
comment by gjm · 2024-01-08T17:07:37.091Z · LW(p) · GW(p)

Well, maybe I'm confused about what tailcalled's "original comment" that you're complaining about was, because looking at what I thought it was [LW(p) · GW(p)] I can't see anything in it that anyone could possibly expect to convince anyone that Blanchardians are abusive. Nor much that anyone could expect to convince anyone that Blanchardians are wrong, which makes me suspect even more that I've failed to identify what comment we're talking about. But the only other plausible candidate I see for the "original comment" is this one [LW(p) · GW(p)], which has even less of that sort. Or maybe this one [LW(p) · GW(p)], which again doesn't have anything like that. What comment do you think we are talking about here?

I am fairly sure my opinion of tailcalled's responses here is very similar to my opinion of his comments elsewhere which haven't (so far as I've noticed) involved you at all, so I don't find it very plausible that those opinions are greatly affected by the fact that on this occasion he is arguing with someone I'm finding disagreeable.

"Pointing out character flaws". "Insults". Po-TAY-to. Po-TAH-to. My complaint isn't that the way in which you are pointing out tailcalled's alleged character flaws is needlessly unpleasant, it's that you're doing it at all. (And I would say the same if tailcalled were spending all his time pointing out your alleged character flaws, whatever those might be, but he isn't.) As far as I am concerned, when an LW discussion becomes mostly about the character of one of its participants, it is very unlikely that it is doing any good to anyone. And if what you mostly want to do here is point out people's character flaws, then even if those character flaws are real I think it's probably not very helpful.

It doesn't look to me as if LW is the hotbed of "constant abuse" you are trying to portray it as (and no, I'm not trying to insist that "constant" has to mean "literally nonstop" or anything). It looks to me -- and here I'm going off my own impression, not e.g. anything tailcalled may have said about the situation -- as if Zack gets plenty of disagreement on LW but very little abuse. So to whatever extent your "accusations of injustice" are of the form "tailcalled denies that Zack is constantly being abused, but he is", I find myself agreeing with tailcalled more than with you. Again, this was already my impression, so it can't be a halo/horns thing from this conversation.

(Of course, you may have me pigeonholed as one of the "crowd of enablers". Maybe you're right, though from my perspective I'm pretty sure I'm not abusing anyone and have no intention or awareness of engaging in the specific catch-22 you describe. I have disagreed with Zack from time to time, though.)

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-09T01:22:30.766Z · LW(p) · GW(p)

Well, maybe I'm confused about what tailcalled's "original comment" that you're complaining about was, because looking at what I thought it was [LW(p) · GW(p)] I can't see anything in it that anyone could possibly expect to convince anyone that Blanchardians are abusive. Nor much that anyone could expect to convince anyone that Blanchardians are wrong, which makes me suspect even more that I've failed to identify what comment we're talking about. But the only other plausible candidate I see for the "original comment" is this one [LW(p) · GW(p)], which has even less of that sort. Or maybe this one [LW(p) · GW(p)], which again doesn't have anything like that. What comment do you think we are talking about here?

I also don't see how it was supposed to do that, but I am commenting on his stated intentions. The fact that it is hard to spot those intentions in his first comments, even when actively looking for them, only further corroborates my point that his stated intentions were not obvious at all, and that it seemed to be a relatively innocuous reply that was made with only the discussion in mind. Yet, by his own statements, his point in responding was to convince me that Blanchardians are abusive. Thus, as I said, false pretenses.

I am fairly sure my opinion of tailcalled's responses here is very similar to my opinion of his comments elsewhere which haven't (so far as I've noticed) involved you at all, so I don't find it very plausible that those opinions are greatly affected by the fact that on this occasion he is arguing with someone I'm finding disagreeable.

My claim was specifically that the halo effect is blinding you to an evasiveness that he does not typically display. Thus it is wholly consistent with you having a similar opinion of his comments here compared to your usual opinion of his comments. 

"Pointing out character flaws". "Insults". Po-TAY-to. Po-TAH-to. My complaint isn't that the way in which you are pointing out tailcalled's alleged character flaws is needlessly unpleasant, it's that you're doing it at all.

I have already addressed that argument, and the whole point of my using the phrase "pointing out character flaws" was to stress the relevance of doing so to the argument I am making.

Ad hominem is not a fallacy if the topic of discussion is literally about the person's character, and justice when commenting on feuds is after all a character trait. I cannot effectively criticise a community without criticising its members, and I cannot effectively criticise its members without pointing out character flaws, ie. without "insulting" them as you put it. If I had to adhere to your standards, my position would be ruled out before I even had a chance to make my case.

Replies from: tailcalled, gjm
comment by tailcalled · 2024-01-09T11:21:49.816Z · LW(p) · GW(p)

My stated intention wasn't to convince you that Blanchardians are abusive. My stated intention was to "point you at some of the areas in which Blanchardians are wrong or even abusive". The information in my comment is supposed to lie in the exact areas I point to, not in Blanchardians being bad.

You've decided that I am actually terribly misjudging these areas due to bias and so my opinions on them are derailing the conversation. You're entitled to have that opinion, but I disagree, and therefore endlessly insulting my intellect while not engaging with my core point is not going to be convincing to me.

I don't know how to inform you about these points other than to just keep hold of them while you try to turn LessWrong against me.

Of course this sort of mirrors the situation in the emails where you acted like I had converted to some insane blank-slatism even though I told you that wasn't the case and my crux was more closely related to Blanchardianism.

comment by gjm · 2024-01-09T02:38:19.753Z · LW(p) · GW(p)

I am deeply unconvinced by the argument "Some time after writing X, tailcalled said he said it partly to do Y; it's very unclear how X could possibly do Y; therefore when tailcalled wrote X he did it under false pretenses". It certainly does seem to follow from those premises that tailcalled's account of why he did X isn't quite right. But that doesn't mean that when he wrote X there was anything dishonest going on. I actually think the most likely thing is that he didn't in fact write X in order to do Y, he just had a vague notion in his mind that maybe the discussion would have effect Y, and forgot that he hadn't so far got round to saying anything that was likely to do it. Never attribute to malice what is adequately explained by incompetence.

(Not very much incompetence. This sort of discussion is easy to lose track of.)

And, again, it is not "false pretenses" to engage in a discussion with more than one goal in mind and not explicitly lay out all one's goals in advance.

an evasiveness that he does not typically display

Oh. I'd thought you were mostly alleging persistent character flaws rather than one-off things. Anyway: I won't say it's impossible that what you say is true, but I am so far unconvinced.

I cannot effectively criticise a community without criticising its members

Perhaps I have been unclear about what it is I think you have been doing in this thread that it would be better not to do. I am not objecting to criticizing people's behaviour. (I think I disagree with many of your criticisms, but that's a separate matter.) What I think is both rude and counterproductive is focusing on what sort of person the other person is, as opposed to what they have done and are doing. In this particular thread the rot begins with "thus flattering your narcissism" -- I don't agree with all your previous criticism of tailcalled but it all has the form "you did X, which was bad because Y", which I think is fine; but at this point you switch to "and you are a bad person". And then we get "you've added one more way to feel above it all and congratulate yourself on it" and "your few genuine displays of good faith" and "goal-oriented towards making you appear as the sensible moderate" and "you have a profound proclivity for bullshitting" and so forth.

I think this sort of comment is basically never helpful. If what you are trying to do here is something that can't be done without this sort of comment, then I think it would be better not to do it. (More precisely: if you think that what you are trying to do here is something that can't be done without such comments, then I think you are probably wrong unless what you are trying to do is mostly "make tailcalled feel bad" or something.)

Replies from: tailcalled, Kalciphoz
comment by tailcalled · 2024-01-09T18:41:22.396Z · LW(p) · GW(p)

I did in fact do X in order to do Y. The proof, which only @Cornelius Dybdahl [LW · GW] can see, is that "which in turn makes it challenging to make sensible descriptions like "biological sex is binary because chromosomes are binary, XX vs XY"" is a reference to something he said in the emails.

The issue is that he is misrepresenting what Y is. Y is not proving that Blanchardians are abusive. Y is highlighting a problem with Blanchardian rhetoric, which Zack arguably does more than the run-of-the-mill TERF that Cornelius said he already knew was abusive.

comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-12T00:34:40.547Z · LW(p) · GW(p)

And, again, it is not "false pretenses" to engage in a discussion with more than one goal in mind and not explicitly lay out all one's goals in advance.

It saddens me that LessWrong has reached such a state that it is now a widespread behaviour to straw man the hell out of someone's position and then double down when called on it.

What I think is both rude and counterproductive is focusing on what sort of person the other person is, as opposed to what they have done and are doing. In this particular thread the rot begins with "thus flattering your narcissism"

But the problem is at the level of his character, not any given behaviour. I have already explained this in one of my replies to tailcalled; if he simply learns to stay away from one type of narcissistic community, he will still be drawn in by communities where narcissism manifests in other ways than the one he is "immunized" to, so to speak. Likewise with the concrete behaviours: if he learns to avoid some toxic behaviours, the underlying toxicity will simply manifest in other toxic behaviours. I do not say there is therefore no point in calling out the toxic behaviours, but the only point in doing that is to use them as pointers to the underlying problem. If I just get him to recognise a particular pattern of behaviour, then I will have misidentified the pattern to him and might as well have done nothing. The issue is specifically that he is a horrible person and needs to realise it so he can begin practising virtue — this being of course a moral philosophy that LessWrongers are generally averse to, but you can see the result.

And then we get "you've added one more way to feel above it all and congratulate yourself on it" and "your few genuine displays of good faith" and "goal-oriented towards making you appear as the sensible moderate" and "you have a profound proclivity for bullshitting" and so forth.

All of these are criticising behaviours rather than character and thus fit your pretended criterion. Thus, you made no specific complaint about them, because what you actually take issue with is simply my harshness and directness.

I think this sort of comment is basically never helpful

It is the only thing that is ever helpful when an improvement to the underlying character is what is called for.

Replies from: habryka4, gjm
comment by habryka (habryka4) · 2024-01-12T00:40:18.529Z · LW(p) · GW(p)

(LessWrong mod here. I am very far from having read remotely all discussion on this post, and am unlikely to because this is a truly giant pile of text. FWIW, this comment seems quite aggressive to me standing on its own, and my best guess, using really just surface-level heuristics and not having engaged in much depth, is that this conversation seems not particularly productive and if I was a participant I would probably do something else. 

Also, please don't generalize LW norms from a comment thread as niche and deep as this one. I highly doubt any of the mods have followed this discussion all the way to the end, and I doubt the voting here corresponds to anything but the strong feelings of a relatively small number of discussion participants. 

All this is just speaking as someone who has skimmed this thread. I might totally be misreading things. I don't think I am going to stop anyone from commenting here unless someone wants me to call for more official moderator action.)

comment by gjm · 2024-01-12T01:17:04.911Z · LW(p) · GW(p)

I am not (deliberately or knowingly) strawmanning anything, and what you call "doubling down" I call "not having been convinced by your arguments". If you think tailcalled was doing something more heinous than (1) having purposes other than advancing the discussion here and (2) not going out of his way to say so, then maybe you should actually indicate what that was; your accounts of his alleged dishonesty, so far, look to me like (1) + (2) + your disapproval, rather than (1) + (2) + something actually worse than 1+2.

If "the problem is at the level of his character" then I do not think there is any realistic chance that complaining about his character will do anything to solve the problem.

Have you ever seen any case where a substantial improvement to someone's character came about as a result of someone telling them on an internet forum what a bad person they were? I don't think I have.

At this point I shall take habryka's advice and drop this discussion. (Not only because of habryka's advice but because I agree with him that this conversation seems unlikely to be very productive, and because the LW user interface -- deliberately -- makes it painful to take part in discussions downthread of highly-downvoted comments.) I will not be offended if you choose to get in the last word.

comment by tailcalled · 2024-01-08T14:26:15.503Z · LW(p) · GW(p)

We can take the discussion to emails to avoid crowd pressure.

comment by tailcalled · 2024-01-07T15:17:46.762Z · LW(p) · GW(p)

Indeed, and some of those lines of questioning yourself did lead to regrets — for a time. Until you walked back your few genuine displays of good faith (edit: excepting the one with the discord server, though you did weaponise my emphasis on that one against me, which is arguably similar to walking it back). That sort of thing gives me an impression that even the process of questioning your past behaviour is basically just a self-serving preemptive defence against criticisms such as this one.

I think we should take our personal dispute to emails once we've talked about the case of Blanchardianism, since talking through Blanchardianism may at least inform you where my priors come from etc.

Iirc it is mostly in its applied forms, as in critical race theory. Robin DiAngelo for example frequently argues that white progressives are just appropriating the language of the civil rights movement and of subsequent theories (CRT being one) in a way that doesn't properly engage with the issues and is really just a self-serving tactic to preserve their privileged position and their white saviour complex. I believe Herbert Marcuse also argued something similar in One-Dimensional Man, albeit obviously without the focus on race.

But the thing is, Robin DiAngelo and other CRT people are constantly bluffing. They keep citing evidence for their beliefs that doesn't actually precisely pin down their position, but instead can accommodate a wide variety of positions. In such a case, it's not unreasonable or unexpected that people would pick and choose what ideas they find most plausible.

(A concrete example I have in mind: to disprove color blindness, Robin DiAngelo cites field studies showing that black people are still discriminated against when it comes to callbacks for resumes, but her argument requires that this discrimination be independent of color-blind ideology, which she doesn't give evidence for.)

I don't know to what extent this is just poor communication (maybe she does have the relevant evidence but doesn't cite it) or a grift (considering she axiomatically rejects innate racial differences, and falsely presents innate racial differences as the reigning ideological explanation for racial inequality, there's probably at least a nonzero element of grift).

Because, whether by calculation or (as I think) by political instinct, all your critiques of Zack's critics are goal-oriented towards making you appear as the sensible moderate as contrasted with the extremists on both sides, even though in point of fact I have had to practically drag you to make even this admission.

The case with Scott Alexander seems like an exception to this, though? If Scott is someone who is extremely prone to not paying attention to this subject matter, then I am clearly not simultaneously contrasting myself with people who are extremely prone to paying attention to this subject matter. Instead I am making comments all over the place.

Your accusation only seems true in the weakest possible sense. Like it's just factually true that there is a Blanchardian camp that spends a lot of time arguing about this subject without systematically studying or thinking about the subject and therefore ends up constantly spamming all sorts of false or nonsensical ideas, and an anti-Blanchardian camp that spends a lot of time arguing against the Blanchardian camp, again without systematically studying or thinking about the subject, and therefore also ends up constantly spamming all sorts of false or nonsensical ideas, and that I've spent the past few years systematically studying and thinking about the subject and therefore have detailed opinions about how either camp is wrong. But that doesn't mean I'm "above it all", instead I'm way deep into it all and I'm so tired of and defeated by it all.

Zack has gotten to the point where he pretty much admits he is mainly driven by priors! It's just not wrong to see things this way. If we want, we could quantify it with a survey, listing a bunch of Blanchardian and anti-Blanchardian beliefs, and then scoring people. Yes, I'd probably score in the middle whereas the Blanchardians would score to one side and the anti-Blanchardians would score to another side.
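To make that concrete, here is a toy sketch of such a scoring scheme. The item names and polarities are entirely hypothetical, invented for illustration rather than drawn from any actual survey:

```python
# Toy sketch of the survey-scoring idea (hypothetical items, not from any
# real instrument): +1 marks a Blanchardian-coded claim, -1 an
# anti-Blanchardian-coded one; respondents answer -1/0/+1 per item.
from typing import Dict

ITEM_POLARITY: Dict[str, int] = {
    "two_type_taxonomy_is_real": +1,
    "autogynephilia_is_common_in_gynephilic_trans_women": +1,
    "autogynephilia_is_a_survey_artifact": -1,
    "typology_has_no_predictive_value": -1,
}

def score(responses: Dict[str, int]) -> int:
    """Positive totals lean Blanchardian, negative totals lean
    anti-Blanchardian, and mixed belief sets land near zero."""
    return sum(ITEM_POLARITY[item] * answer for item, answer in responses.items())

# A respondent who agrees with claims from both camps scores near the middle.
print(score({
    "two_type_taxonomy_is_real": +1,
    "autogynephilia_is_common_in_gynephilic_trans_women": +1,
    "autogynephilia_is_a_survey_artifact": +1,
    "typology_has_no_predictive_value": 0,
}))  # prints 1
```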

At first you were describing sapphire as only arguably abusive

Possibly I have a habit of using the word "arguably" wrong, idk, I plead ESL. Plus dictionaries agree with my usage.

even in your previous comment you were still creating an outrageously one-sided portrayal of Zack's interactions with the community that simply portrayed his arguments as bad while glossing over the fact that he was pointing to a lot of real substance

Not by that term, but that is what is implied when discussing whether LOGD is an intersex condition. It's not like he was referring to XXY chromosomes or some such.

And here we're getting to the real meat of the issue.

First, a communication issue. Zack has plausibly intended to talk about brainsex or something, but his interlocutors have openly been thinking about other things, and the semantics of "an intersex condition" is not simply defined to be brainsex. (As a sidenote, XXY is not an intersex condition, but AFAIK most intersex conditions do have a fairly bounded scope, like MRKH and such.)

People have frequently been discussing things like body-map theory, which does not require that the entire brain has its sex swapped, only that some basic perception things are swapped. Body map theory is pretty stupid, but Zack hasn't really done much to address it (and I think for a while he might even have been sympathetic to it applying to HSTSs? Idk, I may be wrong).

My bad impression of him has been sufficient to deter me from looking closer into him without a clear reason, though such a reason was to some extent granted by his views on femininity as you related them to me (something to the effect that straight men will never be truly feminine). There, I am probably mostly on his side, though I suspect he has less understanding of the more aristocratic kind of femininity that I consider more central to the concept.

And again this gets to the meat of the issue.

Nobody gets to pick the definition of "masculinity/femininity" used for these topics. If one is studying the sociological effects of transness, one has to focus on the sociologically relevant aspects of femininity, which I think is heavily intertwined with the sexual market.

But, Blanchardianism is not claiming to be a theory of sociology. Blanchardianism is claiming to be a theory of transgender etiology, and therefore one shouldn't simply be picking the definition of "masculinity/femininity" to optimize sociological relevance.

There's a case to be made that one should avoid talking about "masculinity/femininity" at all, since it can be confusing due to its sociological meaning. But it doesn't change the fact that one needs to consider the distinction between macho vs sensitive men, and that one might want to entertain the relevance of neuroticism or feminine aesthetic interests or so on. If one just rejects these questions as "not real femininity" without providing any explanation, or by (as Zack tends to do) providing nonsense explanations (e.g. multivariate group differences), then one is doing something very wrong.

... if one is trying to study etiology. Of course, maybe Blanchardianism is not about etiology?? Could that be true?? Wouldn't that be wild?? Then everyone would have to change their entire discourse because all the preexisting discourse was using etiology-related words.

It's not like I am criticising them for failing to spend lots of effort pursuing some particular line of investigation, just pointing out that their rejection of Zack simply cannot be explained by some flaws in Blanchardianism that took even you quite a while to uncover.

I only got into Blanchardianism due to 2 extreme coincidences though, which are most clearly illustrated by Ozy's Thoughts on The Blanchard/Bailey Distinction and On Autogynephilia.

First, Ozy was making some nonsense arguments against ETLE. The fact that they were nonsense, and that the ETLE correlations actually seemed to hold, made me think the Blanchardians were onto something. In retrospect, what I was doing, in order to determine the direction of the causal arrow between "sexually attracted to being X" and "want to be X", was using "sexually attracted to X" as an instrumental variable. But this seems stupid, because whichever confounders could generate an attraction to X as partners could plausibly also generate a desire to be X, so the IV assumptions don't hold.
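A minimal simulation makes the failure mode concrete (variable names and effect sizes here are hypothetical, chosen only for illustration): when a single confounder drives both the instrument and the outcome, the exclusion restriction fails and two-stage least squares recovers a "causal effect" that isn't there.

```python
# Sketch of the broken-IV scenario: a confounder C drives both the
# instrument Z ("attracted to X") and the outcome Y ("wants to be X"),
# so the IV estimate of the treatment T ("attracted to being X") on Y
# comes out nonzero even though the true effect is zero.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

C = rng.normal(size=n)                       # unobserved confounder
Z = 0.8 * C + rng.normal(size=n)             # instrument: attraction to X
T = 0.6 * Z + rng.normal(size=n)             # treatment: attraction to being X
Y = 0.0 * T + 0.7 * C + rng.normal(size=n)   # outcome: true T -> Y effect is zero

# Wald/2SLS estimate: cov(Z, Y) / cov(Z, T); biased because Z correlates
# with C, which enters Y directly (exclusion restriction violated).
beta_iv = np.cov(Z, Y)[0, 1] / np.cov(Z, T)[0, 1]
print(f"IV estimate of T -> Y: {beta_iv:.2f} (true effect: 0)")
```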

Another argument was Ozy's point that there is a distinction between "true autogynephiles" and trans women, which I took as making predictions about discontinuities in the distribution that turned out to (sort of) not be there. But that is focusing overly much on random predictions made by a random person who has not been thinking very much about it. What I've instead learned is that I should ignore almost everything that people have previously said on these sorts of topics because it is very poorly informed, and instead collect a wealth of information myself.

Of course, these lines of thought would be insane if Blanchardianism was a sociological theory. A sane line of thought if Blanchardianism was a sociological theory would be something like the disruptive/pragmatic typology [LW(p) · GW(p)], though of course since Blanchardianism is claiming to be an etiological theory, it is instead absolutely insane to take the evidence for the disruptive/pragmatic typology as being some huge validation of Blanchardianism.

Yes, you will be able to identify these particular manifestations of narcissism, and thus find communities in which it manifests differently, in ways you are less aware of, and hence will have even less self-awareness of perpetrating. If there is an improvement implied here, I fail to see it.

I mean it would be insane for me to just simply avoid those 2 pathologies. Instead I should ask more generally what the community is trying to achieve, whether it is good at achieving that, whether I want to achieve that and whether it would be helpful for me to be in it, whether it is responsive to critique and accountable, etc..

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-07T17:32:31.414Z · LW(p) · GW(p)

But the thing is, Robin DiAngelo and other CRT people are constantly bluffing. They keep citing evidence for their beliefs that doesn't actually precisely pin down their position, but instead can accommodate a wide variety of positions. In such a case, it's not unreasonable or unexpected that people would pick and choose what ideas they find most plausible.

Good thing, then, that I'm calling you out on self-serving bias rather than special pleading.

I don't know to what extent this is just poor communication (maybe she does have the relevant evidence but doesn't cite it) or a grift (considering she axiomatically rejects innate racial differences, and falsely presents innate racial differences as the reigning ideological explanation for racial inequality, there's probably at least a nonzero element of grift).

It's a grift. She is doing precisely the same thing she is calling out other white progressives on, but when you think about it, that only corroborates her point that race grifting is something white progressives are liable to do.

The case with Scott Alexander seems like an exception to this, though? If Scott is someone who is extremely prone to not paying attention to this subject matter, then I am clearly not simultaneously contrasting myself with people who are extremely prone to paying attention to this subject matter. Instead I am making comments all over the place.

The two major factions in a controversy are rarely perfectly orthogonal. I am not suggesting that you are contrasting yourself with people who are extremely prone to paying attention to the subject matter, merely that you are setting yourself up as the moderate who fairly critiques both factions, despite actually having an absolutely immense bias in what standards you hold each side to.

But that doesn't mean I'm "above it all", instead I'm way deep into it all and I'm so tired of and defeated by it all.

Describing yourself as "so tired of and defeated by it all" is simply another way of positioning yourself above it all, differing only in that it insinuates a kind of martyrdom at the same time. Your behaviour is almost comically narcissistic.

Possibly I have a habit of using the word "arguably" wrong, idk, I plead ESL. Plus dictionaries agree with my usage.

Missing the point again — the point is simply that you use a lot more qualifiers when critiquing one side than the other, even if the former is actually behaving a lot worse. ESL or not, I am pretty sure you are able to tell that "sapphire is abusive" is a much more assertive formulation than "sapphire is arguably abusive", therefore I am inclined to call bullshit on your ESL excuse.

Body map theory is pretty stupid, but Zack hasn't really done much to address it (and I think for a while he might even have been sympathetic to it applying to HSTSs? Idk, I may be wrong).

Again missing the point, which is simply that Zack's discussion of these concepts actually did provide a lot of genuine value and insights, even if there were also many points where he was flatly wrong, and the total dismissiveness of the community, again, simply cannot be explained by flaws that were subtle enough for even you to take a while to discover them.

And again this gets to the meat of the issue.

No, this was simply me describing my impression of Michael Bailey. I am well aware that Blanchardianism is not a theory of sociology, and is not about masculinity and femininity.

Of course, these lines of thought would be insane if Blanchardianism was a sociological theory. A sane line of thought if Blanchardianism was a sociological theory would be something like the disruptive/pragmatic typology [LW(p) · GW(p)], though of course since Blanchardianism is claiming to be an etiological theory, it is instead absolutely insane to take the evidence for the disruptive/pragmatic typology as being some huge validation of Blanchardianism.

Disruptive HSTSs are however disruptive in very different ways from other disruptive trans women. In particular, HSTSs, disruptive or not, are much less likely to be extremely oppressive to gay men.

I mean it would be insane for me to just simply avoid those 2 pathologies. Instead I should ask more generally what the community is trying to achieve, whether it is good at achieving that, whether I want to achieve that and whether it would be helpful for me to be in it, whether it is responsive to critique and accountable, etc..

No, going with an immunity analogy, that will still only give you immunity to specific strains of narcissism as you learn to recognise them. What you ought to do instead is to find healthy communities so that you can train your system 1 to immediately recognise the difference between a healthy community and an unhealthy one. The approach you are using is much too vulnerable to self-deception.

But that's just the community side of things. You are still leaving unexamined the question of why those pathological communities appealed to you in the first place.

Replies from: tailcalled
comment by tailcalled · 2024-01-07T21:05:43.909Z · LW(p) · GW(p)

Good thing then that I'm calling you out on self-serving bias rather than special pleading, then.

I was about to list some of the cases where I had sacrificed huge amounts of status on the basis of principles I believed in, as a counterexample to self-serving bias. Maybe you also believe those cases are self-serving somehow, but I guess maybe more likely the appropriate continuation lies along the following lines:

By sacrificing that status, I lost the ability to continue engaging in those things. For instance, when I criticized Bailey on his core misbehavior, he did his best to get rid of me, which lost me the ability to continue criticizing him, thus closing off that angle of behavior.

Thus, in the long run, discourse is going to select for me engaging in the places that are appealing to the prejudices of the onlookers or the moderators. So for example, rationalists might like some reason why they weren't wrong to reject Zack, so if I have some belief about that, then they are going to promote me as the answer for that, yet that doesn't mean they are actually learning from me.

Is that getting your position right? Or? (If it is, I would still be inclined to say your position is wrong, maybe arguably inverted compared to the truth. Or I guess one could argue the truth is just an even more epic garbage fire. More on that later...)

The two major factions in a controversy are rarely perfectly orthogonal. I am not suggesting that you are contrasting yourself with people who are extremely prone to paying attention to the subject matter, merely that you are setting yourself up as the moderate who fairly critiques both factions, despite actually having an absolutely immense bias in what standards you hold each side to.

I am, or at least used to be, a Blanchardian intellectual/researcher/teacher. This makes it my job to continually raise the standards for Blanchardians, by providing new information at the edge of their knowledge, and pointing out errors in existing positions.

I then learned that they weren't interested in new information, especially not if it was disadvantageous to their political interests. It seems valid for me to share this to warn others who were in a similar position to me. If Blanchardians don't like this, they shouldn't have promoted me as their intellectual/researcher/teacher without warning me ahead of time.

Does this lead to Blanchardians getting held to higher standards than anti-Blanchardians? I suppose it does, because anti-Blanchardians openly announce their political biases, and so I wouldn't have felt betrayed in the same way by them.

The point of criticism is to inform people. A bit of that information can be used to choose what side to support, but since there's only enough space and people for a small number of sides, you don't need much information to choose a side. Instead, a better use of information is to integrate it into a side to improve it, i.e. for an ideology to get rid of its bad memes and replace them with good ones. Blanchardians don't do this.

Describing yourself as "so tired of and defeated by it all" is simply another way of positioning yourself above it all, differing only in that it insinuates a kind of martyrdom at the same time. Your behaviour is almost comically narcissistic.

False. It is not simply a way of "positioning myself above it all". It is also factually true; I spent the last few years on it, including much of the time I should have spent on e.g. education, so "so tired of it all" is a factual description of me, and similarly, by any reasonable means of counting, I'm cut off from the discourse on this topic, so I am also defeated.

There may in addition to this factual matter be some sort of strategic consequences of my framing, but you can't just say that this statement is simply those strategic consequences, and it would be helpful if you did say what those strategic consequences were in more detail.

Missing the point again — the point is simply that you use a lot more qualifiers when critiquing one side than the other, even if the former is actually behaving a lot worse. ESL or not, I am pretty sure you are able to tell that "sapphire is abusive" is a much more assertive formulation than "sapphire is arguably abusive", therefore I am inclined to call bullshit on your ESL excuse.

I know more about the Blanchardian and Blanchardian-adj side than I know about the anti-Blanchardian side. More qualifiers are justified due to greater uncertainty.

Again missing the point, which is simply that Zack's discussion of these concepts actually did provide a lot of genuine value and insights, even if there were also many points where he was flatly wrong, and the total dismissiveness of the community, again, simply cannot be explained by flaws that were subtle enough for even you to take a while to discover them.

No, this was simply me describing my impression of Michael Bailey. I am well aware that Blanchardianism is not a theory of sociology, and is not about masculinity and femininity.

And again this gets to the meat of the issue:

Blanchardianism should be a theory of sociology. Or like, maybe we should also keep the etiology-focused version of Blanchardianism around, though as it stands now, approximately all people talking about Blanchardianism lack a real [LW · GW] interest in etiology, so if Blanchardianism is supposed to be community-driven, something about the interests needs to change for an etiology-focused version to work.

But again let's take Zack's valuable and insightful discussion. How many of these contributions are about etiology? Few, maybe even none. How many are about sociology and politics? Lots! And this is despite the fact that he explicitly considers politics off-limits, and considers activism wrong, and so on.

How did this happen? It happened because sociology is a field that is more accessible to informal observation and theorizing, compared to etiology. So since sociology is a more fruitful field, Blanchardians should simply explicitly focus on it instead of insisting that they are focusing on etiology.

But, if Blanchardians are insisting that they are focusing on etiology, then onlookers will concentrate on looking for whether Blanchardians have good etiological insights, and when they see there are none, it's not so surprising if they abandon it.

I don't think the "flaws that were subtle enough for even you to take a while to discover them" holds here.

Disruptive HSTSs are however disruptive in very different ways from other disruptive trans women. In particular, HSTSs, disruptive or not, are much less likely to be extremely oppressive to gay men.

There exists a General Factor of Disruptiveness, which in psychometrics is often called Externalizing and which correlates with traits like disagreeableness, unconscientiousness, and extraversion. Like the "rebel factor".

When I talk about disruptive transsexuality, this is not the factor I am talking about, and in fact anecdotally HSTSs tend to be elevated on the general factor of disruptiveness. I think this is what you might be getting at when you are talking about disruptive HSTSs?

There are definitely forms of disruptiveness that are equally common among trans women regardless of sexual orientation, or even that are more common among HSTSs than AGPTSs. Possibly this makes the disruptive/pragmatic labels problematic, and one could replace them with other labels. For psychometrics I care less about the labels than about their derivation and their indicators.

What I am proposing is a dimension reduction based on the primary 1 or few transgender-related characteristics that are relevant to the interests of or salient to different outsider parties. The justification for this is that by picking variables that are relevant to parties' interests, one automatically ends up with a variable that is important, and by doing a dimension reduction over multiple outsider parties, it in particular focuses on a variable whose relevance exists across many contexts, thereby making it not so context-dependent.

I hypothesize that such a dimension reduction will mostly pick up on sexual orientation, for reasons I argued in my link. This presumably also applies to your "be extremely oppressive to gay men" point. Maybe one could design a study that measures this factor, then show that there's a huge sexual orientation difference in it, and then switch to calling the factor "androphilic/nonandrophilic" or something, idk.

I think this would bring the debate far closer to people's crux, that this would make the academic studies on it far more applicable in practice, and that this would make it easier to reason about and to discuss. I also propose that this has kind of already informally happened, in the sense that because people have to stitch together their information based on bits of personal experience and pieces that others find important to share, they basically struggle to maintain high dimensionality of models, and they basically build their models out of similar pieces to this. So I think this constitutes realigning the formal theory with what people want to do anyway.
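As a rough sketch of what such a dimension reduction could look like (entirely simulated data and hypothetical indicator names, offered only to make the proposal concrete): stack one salient characteristic per outsider party into a matrix, extract the first principal component, and check how strongly it tracks sexual orientation.

```python
# Sketch of the proposed dimension reduction over party-salient indicators.
# All data is simulated; the claim being illustrated is just that if each
# party's most salient characteristic mostly tracks sexual orientation,
# the shared first principal component will too.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Latent stand-in for sexual orientation (androphilic vs. not).
orientation = rng.normal(size=n)

# One noisy indicator per hypothetical "outsider party".
indicators = np.column_stack([
    0.7 * orientation + rng.normal(size=n),  # what party A finds salient
    0.6 * orientation + rng.normal(size=n),  # what party B finds salient
    0.5 * orientation + rng.normal(size=n),  # what party C finds salient
])

# First principal component via SVD on the standardized matrix.
X = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ vt[0]

r = np.corrcoef(pc1, orientation)[0, 1]
print(f"|corr(PC1, orientation)| = {abs(r):.2f}")  # PC sign is arbitrary
```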

No, going with an immunity analogy, that will still only give you immunity to specific strains of narcissism as you learn to recognise them. What you ought to do instead is to find healthy communities so that you can train your system 1 to immediately recognise the difference between a healthy community and an unhealthy one. The approach you are using is much too vulnerable to self-deception.

Are there any publicly accessible healthy communities that you'd recommend I peek at as a starting point?

I've recently taken a liking to htmx - see their discord here and twitter here. Is that some strain of narcissism too? (Cringemaxxing narcissism maybe?)

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-08T04:32:16.809Z · LW(p) · GW(p)

By sacrificing that status, I lost the ability to continue engaging in those things. For instance, when I criticized Bailey on his core misbehavior, he did his best to get rid of me, which lost me the ability to continue criticizing him, thus closing off that angle of behavior.

Your self-serving bias is a bias and not a rational stance of calculated actions. It sways your reasoning and the beliefs you arrive at, not your direct behaviour towards Michael Bailey.

Is that getting your position right?

No. I am not making any point about what discourse selects for. I could make such points, but they would look quite different from what you have imputed. My point was about your behaviour and the psychology implied by it.

I then learned that they weren't interested in new information, especially not if it was disadvantageous to their political interests. It seems valid for me to share this to warn others who were in a similar position to me. If Blanchardians don't like this, they shouldn't have promoted me as their intellectual/researcher/teacher without warning me ahead of time.

Does this lead to Blanchardians getting held to higher standards than anti-Blanchardians? I suppose it does, because anti-Blanchardians openly announce their political biases, and so I wouldn't have felt betrayed in the same way by them.

I swear you are inventing more and more elaborate ways to miss the point. The issue is that you portray yourself as a reasonable mediator while having these asymmetric standards. I do not object to you holding Blanchardianism to higher standards when acting in your capacity as an expert critic of Blanchardianism, but here you were commenting on a feud between Zack and LessWrong, and my point was specifically that LessWrong's treatment towards Zack has been abusive, not that they have made more factual errors or that they were more ideologically motivated than him. Your position as an expert critic of Blanchardianism does not in the slightest justify an enormous bias in standards of behaviour when mediating a feud. It is irrelevant.

I suppose you might argue that you were not intending to act as a mediator, but that is precisely why it is objectionable that your behaviour is strongly goal-oriented to portraying yourself as a reasonable mediator willing to call out both sides when they are wrong.

False. It is not simply a way of "positioning myself above it all". It is also factually true; I spent the last few years on it, including much of the time I should have spent on e.g. education, so "so tired of it all" is a factual description of me, and similarly, by any reasonable means of counting, I'm cut off from the discourse on this topic, so I am also defeated.

Again you nitpick a single word (in this case the word "simply") as a way of avoiding the issue. The point is that you described yourself as "so tired of and defeated by it all" as an argument that you are not positioning yourself above it all, as if the two were in conflict (hence your usage of the word "instead"), when in fact they are strikingly congruent.

I know more about the Blanchardian and Blanchardian-adj side than I know about the anti-Blanchardian side. More qualifiers are justified due to greater uncertainty.

I call bullshit again. There was no need for that qualifier. Sapphire's argument could have been used with minimal alteration to tell people off for being dissidents in Nazi Germany. It was overtly abusive and the qualifier was not necessary in the slightest.

But, if Blanchardians are insisting that they are focusing on etiology, then onlookers will concentrate on looking for whether Blanchardians have good etiological insights, and when they see there are none, it's not so surprising if they abandon it.

They really don't. They first see the sociological implications, not even of the position, but of the delivery, of the other stances held by the proponents, etc. You know this. Not only is this addressed extensively in the Sequences (eg. in "Politics is the Mind-Killer") but it is also something you yourself have frequently called out in the past, specifically pertaining to the reaction of the LessWrong community toward Blanchardianism. So I simply do not buy the argument that the proponents of Blanchardianism view it through a more sociological lens than the critics do. I do not even buy that you believe otherwise.

When I talk about disruptive transsexuality, this is not the factor I am talking about, and in fact anecdotally HSTSs tend to be elevated on the general factor of disruptiveness. I think this is what you might be getting at when you are talking about disruptive HSTSs?

No, I simply clicked your link and read what you wrote about the disruptive/pragmatic typology.

Maybe one could design a study that measures this factor, then show that there's a huge sexual orientation difference in it, and then switch to calling the factor "androphilic/nonandrophilic" or something, idk.

Androphilia is not however limited to HSTSs, as in the case of meta-attraction or whatever is the current explanation for why some trans women who psychologically resemble exclusively gynephilic trans women are also attracted to men. This latter case is also prone to being viciously oppressive to gay men.

Are there any publicly accessible healthy communities that you'd recommend I peek at as a starting point?

Not in the sense you probably mean by "publicly accessible". These days, public accessibility is almost impossible to reconcile with being a healthy community. The only way to maintain a healthy community at this point is to exclude the people who would destroy it. 

But to give you an idea: a typical boxing gym, a traditional martial arts class, a group of fishermen, a scouting organization, or, for that matter, a Bohemian small town is a very healthy community. I can also think of some healthy internet communities, but they are not publicly accessible.

I've recently taken a liking to htmx - see their Discord here and Twitter here. Is that some strain of narcissism too? (Cringemaxxing narcissism, maybe?)

Yes. It is less unhealthy than the communities you are used to, which is probably why you like it, but it is still unhealthy. Cringemaxxing stems from profound insecurity and low self-esteem. People cringemaxx to preempt criticism, or to find cathartic release from their habitual vigilance against being cringy, or some other variety of either guardedness or catharsis. Cringemaxxers are, in fact, neurotics.

comment by Vaniver · 2024-01-05T18:07:12.135Z · LW(p) · GW(p)

Most of those posts are from before the thing I call "constant abuse" began on LessWrong.

I think I remember this timeline differently, or would like you to be a bit more clear on what you mean. I thought of this as an entrenched conflict back in 2019 [LW · GW], which was before all the posts used as examples.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-05T20:53:09.403Z · LW(p) · GW(p)

Yes, there was abuse before then, but it wasn't constant. It has since become constant. Do we really need to endlessly nitpick my usage of the phrase "constant abuse"?

I still think the word "constant" is sufficiently apt, but more importantly, my argument does not depend in the slightest on the aptness of that one particular word, yet here we are, idk how many comments in, still discussing it. That strikes me as merely a way to evade the point by endless nitpicking.

comment by Shankar Sivarajan (shankar-sivarajan) · 2024-01-05T18:19:55.734Z · LW(p) · GW(p)

There is this particular tactic I have seen from LessWrongers and nowhere else.

It looks like a cousin of "sealioning", certainly not unique to LessWrong. If you squint a bit, you might see Socrates as having pioneered it (see Killing Socrates [LW · GW]).

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-05T20:29:45.928Z · LW(p) · GW(p)

The tactic consists of two prongs, both of which I have seen used in isolation in places other than LessWrong. I have not, however, seen the two used together, with this switching between them, anywhere else. Non-rationalists may also dismiss arguments addressing the big picture by calling them baseless assertions or manipulative or conspiracy theories or whatever, but they will not be in the habit of prompting people to revisit underlying assumptions, and if the proponent does this of his own initiative, they might accuse him of spin and of making elaborate excuses to hold on to an obviously untenable view.

They will not, however, follow the discussion back to these prior assumptions and engage with them, tracing the matter all the way to the epistemology of classification, or in some other manner induce the proponent to write several pages of explanation, and only then turn around and accuse him of making things needlessly complicated. That, as far as I can tell, really does seem to be a tactic unique to the LessWrong crowd.

Edited to add:

For clarification, I don't think it's solely a matter of degree. The difference is that the LessWrongian approach has an intermediate step of encouraging the added complexity, instead of immediately making accusations of obfuscation. In the non-LW version, the approach is to accuse the overall argument of being baseless or manipulative, and then, when more substantiation is added, to accuse the proponent of making excuses. The LessWrongian approach would at this stage engage with the substantiation, accusing it of being insufficient or baseless or of simply not being argumentation at all, then keep this going for a while, and only after quite a long time turn around and accuse the proponent of obfuscation. That intermediate step is the crucial bit, because it obscures what is going on by causing people to lose track of the conversation, and it creates so many circumlocutions that the charge of obfuscation will seem credible to people who haven't noticed the tactic that was employed.

comment by sapphire (deluks917) · 2024-01-02T21:24:03.981Z · LW(p) · GW(p)

People disagreeing with you, on public sites and especially on their own blogs, is not abuse! 

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-01-02T21:31:08.045Z · LW(p) · GW(p)

It is the bad-faith engagement, not the disagreement, that I deem abusive, especially given the context.

comment by [deleted] · 2023-12-30T20:07:39.252Z · LW(p) · GW(p)

Following this logically: to win the most, you make the best bets, and you need more resources (more time to live, more money) so that you can make more total bets and thus win more.

This means rationalists should be in favor of life extension, getting rich as individuals, and getting personal access to the most powerful artificial general intelligence tools that can be controlled. (This is why AI pause advocacy, at least at the GPT-4 capability level, seems 'weird' for a 'rational' individual to advocate for. A much stronger model can likely be controlled, and if you think it can't, how do you know this?)

Replies from: deluks917
comment by sapphire (deluks917) · 2023-12-30T23:55:13.199Z · LW(p) · GW(p)

"This means rationalists should be in favor of life extension, getting rich as individuals, and getting personal access to the most powerful artificial general intelligence tools that can be controlled. "

Uhhhh, yes, they should do this instead of becoming obsessed with this type of stuff. Though 'can be controlled' is certainly load-bearing.