My Interview With Cade Metz on His Reporting About Slate Star Codex

post by Zack_M_Davis · 2024-03-26T17:18:05.114Z · LW · GW · 186 comments

On 16 March 2024, I sat down to chat with New York Times technology reporter Cade Metz! In part of our conversation, transcribed below, we discussed his February 2021 article "Silicon Valley's Safe Space", covering Scott Alexander's Slate Star Codex blog and the surrounding community.

The transcript has been significantly edited for clarity. (It turns out that real-time conversation transcribed completely verbatim is full of filler words, false starts, crosstalk, "uh huh"s, "yeah"s, pauses while one party picks up their coffee order, &c. that do not seem particularly substantive.)


ZMD: I actually have some questions for you.

CM: Great, let's start with that.

ZMD: They're critical questions, but one of the secret-lore-of-rationality things is that a lot of people think criticism is bad, because if someone criticizes you, it hurts your reputation. But I think criticism is good, because if I write a bad blog post, and someone tells me it was bad, I can learn from that, and do better next time.

So, when we met at the Pause AI protest on February 12th, I mentioned that people in my social circles would say, "Don't talk to journalists." Actually, I want to amend that, because when I later mentioned meeting you, some people were more specific: "No, talking to journalists makes sense; don't talk to Cade Metz specifically, who is unusually hostile and untrustworthy."

CM: What's their rationale?

ZMD: Looking at "Silicon Valley's Safe Space", I don't think it was a good article. Specifically, you wrote,

In one post, [Alexander] aligned himself with Charles Murray, who proposed a link between race and I.Q. in "The Bell Curve." In another, he pointed out that Mr. Murray believes Black people "are genetically less intelligent than white people."

End quote. So, the problem with this is that the specific post in which Alexander aligned himself with Murray was not talking about race. It was specifically talking about whether specific programs to alleviate poverty will actually work or not.

This seems like a pretty sleazy guilt-by-association attempt. I'm wondering—as a writer, are you not familiar with the idea that it's possible to quote a writer about one thing without agreeing with all their other views? Did they not teach that at Duke?

CM: That's definitely true. It's also true that what I wrote was true. There are different ways of interpreting it. You're welcome to interpret it however you want, but those areas are often discussed in the community. And often discussed by him. And that whole story is backed by a whole lot of reporting. It doesn't necessarily make it into the story. And you find this often that within the community, and with him, whether it's in print or not in print, there is this dancing around those areas. And you can interpret that many ways. You can say, we're just exploring these ideas and we should be able to.

ZMD: And that's actually my position.

CM: That's great. That's a valid position. There are other valid positions where people say, we need to not go so close to that, because it's dangerous and there's a slippery slope. The irony of this whole situation is that some people who feel that I should not have gone there, who think I should not explore the length and breadth of that situation, are the people who think you should always go there.

ZMD: I do see the irony there. That's also why I'm frustrated with the people who are saying, "Don't talk to Cade Metz," because I have faith. I am so serious about the free speech thing that I'm willing to take the risk that if you have an honest conversation with someone, they might quote your words out of context on their blog.

CM: But also, it's worth discussing.

ZMD: It's worth trying.

CM: Because I hear your point of view. I hear your side of things. And whatever people think, my job at the Times is to give everyone their due, and to give everyone's point of view a forum and help them get that point of view into any given story. Now, what also happens then is I'm going to talk to people who don't agree with me, and who don't agree with Scott Alexander. And their point of view is going to be in there, too. I think that's the only way you get to a story that is well-rounded and gives people a full idea of what's going on.

ZMD: But part of why I don't think the February 2021 piece was very good is that I don't think you did a good job of giving everyone their due. Speaking of Kelsey Piper, you also wrote:

I assured her my goal was to report on the blog, and the Rationalists, with rigor and fairness. But she felt that discussing both critics and supporters could be unfair. What I needed to do, she said, was somehow prove statistically which side was right.

End quote. I don't think this is a fair paraphrase of what Kelsey actually meant. You might think, well, you weren't there, how could you know? But just from knowing how people in this subculture think, even without being there, I can tell that this is not a fair paraphrase.

CM: Well, I think that it definitely was. From my end, I talked to her on the phone. I checked everything with her before the story went up. It was fact-checked. She had an issue with it after the story went up, not before. There's a lot of rigor that goes into this.

ZMD: But specifically, the specific sentence, "She felt that discussing both critics and supporters could be unfair." I think if we asked Kelsey, I do not think she would endorse that sentence in those words. If I can try to explain what I imagine the point of view to be—

CM: Sure.

ZMD: It's not that you shouldn't discuss critics at all. I think the idea is that you can exercise judgment as to which criticisms are legitimate, which is not the same thing as, "Don't discuss critics at all." I feel like it would be possible to write some sentence that explains the difference between those ideas: that you can simultaneously both not just blindly repeat all criticisms no matter how silly, while also giving due to critics. Maybe you think I'm nitpicking the exact wording.

[Editor's note: when reached for comment, Piper said that she told Metz that she "wasn't a huge fan of 'both sides' journalism, where you just find someone with one opinion and someone with the other opinion, and that I felt our duty as journalists was to figure out the truth. The context was a conversation about whether Slate Star Codex was a 'gateway' into right-wing extremism, and I suggested he look at the annual SSC survey data to figure out if that was true or not, or at surveys of other populations, or traffic to right-wing sites, etc. I don't remember all the suggestions I made. I was trying to give him suggestions about how, as a journalist, he could check if this claim held up."]

CM: No, this is great. But I think that when the story did come out, that was the complaint. Everybody said, you shouldn't give this point of view its due, with the Charles Murray stuff. But that is part of what's going on, has gone on, on his blog and in the community. And it's very difficult to calibrate what's going on there and give an accurate view of it. But let me tell you, I tried really hard to do that and I feel like I succeeded.

ZMD: I don't agree, really. You might object that, well, this is just that everyone hates being reported on, and you didn't do anything different than any other mainstream journalist would have done. But The New Yorker also ran a couple pieces about our little subculture. There was one in July 2020, "Slate Star Codex and Silicon Valley's War Against the Media" by Gideon Lewis-Kraus, and just [recently], "Among the AI Doomsayers" by Andrew Marantz. And for both of those, both me and other people I talk to, reading those pieces, we thought they were much more fair than yours.

CM: I haven't read the [second] one, but I didn't think The New Yorker was fair to my point of view.

ZMD: Okay. Well, there you go.

CM: Right? Let's leave it at that. But I understand your complaints. All I can say is, it is valuable to have conversations like this. And I come away from this trying really hard to ensure that your point of view is properly represented. You can disagree, but it's not just now. If I were to write a story based on this, it happens in the lead-up to the story, it's happening as the story's being fact-checked. And I come back to you and I say, this is what I'm going to say. Is this correct? Do you have an objection? And what I was trying to do from the beginning was have a conversation like this. And it quickly spiraled out of control.

ZMD: Because Scott values his privacy.

CM: I understand that. But there are other ways of dealing with that. And in the end, I understand that kind of privacy is very important to the community as well, and him in particular. I get that. I had to go through that whole experience to completely get it. But I get it. But the other thing is that our view, my view, The New York Times view of that situation was very, very different. Right? And you had a clash of views. I felt like there were better ways to deal with that.

ZMD: But also, what exactly was the public interest in revealing his last name?

CM: Think about it like this. If he's not worth writing about, that's one thing. He's just some random guy in the street, and he's not worth writing about. All this is a non-issue. What became very clear as I reported the story, and then certainly it became super clear after he deleted his blog: this is an influential guy. And this continues to come up. I don't know if you saw the OpenAI response to Elon Musk's lawsuit. But Slate Star Codex is mentioned in it.

ZMD: Yeah, I saw that.

CM: Everybody saw it. This is an influential person. That means he's worth writing about. And so once that's the case, then you withhold facts if there is a really good reason to withhold facts. If someone is in a war zone, if someone is really in danger, we take this seriously. We had countless discussions about this. And we decided that—

ZMD: Being a psychiatrist isn't the equivalent of being in a war zone.

CM: What his argument to me was is that it violated the ethics of his profession. But that's his issue, not mine, right? He chose to be a super-popular blogger and to have this influence as a psychiatrist. His name—when I sat down to figure out his name, it took me less than five minutes. It's just obvious what his name is. The New York Times ceases to serve its purpose if we're leaving out stuff that's obvious. That's just how we have to operate. Our aim—and again, the irony is that your aim is similar—is to tell people the truth, and have them understand it. If we start holding stuff back, then that quickly falls apart.

186 comments

Comments sorted by top scores.

comment by Nathan Young · 2024-03-26T18:14:18.974Z · LW(p) · GW(p)

To me this reads as a person caught in the act of bullying who is trying to wriggle out of it. Fair play for challenging him, yuck at the responses.

Replies from: quanticle
comment by quanticle · 2024-03-26T19:12:05.839Z · LW(p) · GW(p)

The last answer is especially gross:

He chose to be a super-popular blogger and to have this influence as a psychiatrist. His name—when I sat down to figure out his name, it took me less than five minutes. It’s just obvious what his name is.

Can we apply the same logic to doors? "It took me less than five minutes to pick the lock so..."

Or people's dress choices? "She chose to wear a tight top and a miniskirt so..."

Metz persistently fails to state why it was necessary to publish Scott Alexander's real name in order to critique his ideas.

Replies from: tomcatfish, Ninety-Three, viking_math, ben-lang, DPiepgrass
comment by Alex Vermillion (tomcatfish) · 2024-03-27T02:45:27.367Z · LW(p) · GW(p)

This is not a good metaphor. There's an extreme difference between spreading information that's widely available and the general concept of justifying an action. I think your choice of examples adds a lot more heat than light here.

Replies from: frankybegs
comment by frankybegs · 2024-03-27T13:54:55.742Z · LW(p) · GW(p)

I agree on the latter example, which is a particularly unhelpful one to use unless strictly necessary, and not really analogous here anyway.

But on the lock example, what is the substantive difference? His justification seems to be 'it was easy to do, so there's nothing wrong with doing it'. In fact, the only difference I detect makes the doxxing look much worse. Because he's saying 'it was easy for me to do, so there's nothing wrong with me doing it on behalf of the world'.

So while it's also heat-adding, on reflection I can't think of any real world example that fits better: wouldn't the same justification apply to the people who hack celebrities for their private photos and publicise them? Both could argue:

It was easy for me (with my specialist journalist/hacker skills) to access this intended-to-be-private information, so I see no problem with sharing it with the world, despite the strong, clearly expressed preference of its subject that I not do so.

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2024-03-28T16:49:49.325Z · LW(p) · GW(p)

I'd be amenable to quibbles over the lock thing, though I think it's still substantially different. A better metaphor (for the situation that Cade Metz claims is the case, which may or may not be correct) making use of locks would be "Anyone can open the lock by putting any key in. By opening the lock with my own key, I have done no damage". I do not believe that Cade Metz used specialized hacking equipment to reveal Scott's last name unless this forum is unaware of how to use search engines.

Replies from: frankybegs, tomcatfish
comment by frankybegs · 2024-03-28T19:09:22.485Z · LW(p) · GW(p)

I do not believe that Cade Metz used specialized hacking equipment to reveal Scott's last name

 

I said "specialist journalist/hacker skills".

I don't think it's at all true that anyone could find out Scott's true identity as easily as putting a key in a lock, and I think that analogy is clearly more misleading than the hacker one, because the journalist did use his demonstrably non-ubiquitous skills to find out the truth and then broadcast it to everyone else. To me the phone-hacking analogy is much closer, but if we must use a lock-based one, it's more like a lockpicker who picks a (perhaps not hugely difficult) lock and then jams it open so anyone else can enter. Still very morally wrong, I think most would agree.

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2024-03-31T19:10:05.908Z · LW(p) · GW(p)

I think you are dramatically overestimating how difficult it was, back in the day, to accidentally or incidentally learn Scott's full name. I think this is the crux here.

It was extremely easy to find his name, and people often have stories of learning it by accident. I don't believe it was so easy that Scott's plea not to have his name published in the NYT was invalid, but I do think it was easy enough that an analogy to lockpicking is silly.

comment by Alex Vermillion (tomcatfish) · 2024-03-28T16:52:05.804Z · LW(p) · GW(p)

Your comment is actually one of the ones in the thread that replied to mine that I found least inane, so I will stash this downthread of my reply to you:

I think a lot of the stuff Cade Metz is alleged to say above is dumb as shit and is not good behavior. However, I don't need to make bad metaphors, abuse the concept of logical validity, or do anything else that breaks my principles to say that the behavior is bad, so I'm going to raise an issue with those where I see them and count on folks like you to push back the appropriate extent so that we can get to a better medium together.

comment by Ninety-Three · 2024-03-27T23:33:54.111Z · LW(p) · GW(p)

Metz persistently fails to state why it was necessary to publish Scott Alexander's real name in order to critique his ideas.


It's not obvious that that should be the standard. I can imagine Metz asking "Why shouldn't I publish his name?"; the implied "no one gets to know your real name if you don't want them to" norm is pretty novel.

One obvious answer to the above question is "Because Scott doesn't want you to; he thinks it'll mess with his psychiatry practice", to which I imagine Metz asking, bemused, "Why should I care what Scott wants?" A journalist's job is to inform people, not be nice to them! Now Metz doesn't seem to be great at informing people anyway, but at least he's not sacrificing what little information value he has upon the altar of niceness.

comment by viking_math · 2024-03-27T15:00:23.280Z · LW(p) · GW(p)

Or explain why the NYT does use the chosen name of other people, like musicians' stage names. 

comment by Ben (ben-lang) · 2024-03-27T14:44:28.818Z · LW(p) · GW(p)

I'm not getting the "he should never have published his last name" thing. If Scott didn't want his surname to be published, then it would be a kindness to respect his wishes. But can you imagine writing a newspaper article where you are reporting on the actions of an anonymous person? It's borderline nonsense. Especially if it is true that 5 minutes is all it takes to find out who they are. If I read a newspaper article about the antics of a secret person, then worked out who they were in 5 minutes, my estimate of the journalist writing it would drop through the floor. Imagine you read about the mysterious father of Bitcoin, or an unknown major donor to a political party, all dressed in secrecy, and then two Google searches later you know who it is. It would reflect so badly on the paper you were reading.

I think our journalist is correct when he says that the choice was between not writing about him at all and giving his full name, by the standards of every newspaper I have ever read (only UK papers; maybe there are differences in the US). In print journalism it is the standard to refer to people by Title Firstname Surname, the/a REASON WE CARE (e.g. "Mr Ed Miliband, the Shadow Secretary for the Environment", "Dr Slate Star, a prominent blogger").

Maybe there is an angle on this I am missing? (His name is a BIG DEAL for some reason? But that just makes publishing it more essential, not less.)

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2024-03-27T16:45:51.399Z · LW(p) · GW(p)

But can you imagine writing a newspaper article where you are reporting on the actions of an anonymous person? It's borderline nonsense.

I can easily imagine writing a newspaper article about how Charlie Sheen influenced the film industry, that nowhere mentions the fact that his legal name is Carlos Irwin Estévez. Can’t you? Like, here’s one.

(If my article were more biographical in nature, with a focus on Charlie Sheen’s childhood and his relationship with his parents, rather than his influence on the film industry, then yeah I would presumably mention his birth name somewhere in my article in that case. No reason not to.)

Replies from: daniele-de-nuntiis, Sherrinford
comment by Daniele De Nuntiis (daniele-de-nuntiis) · 2024-04-03T13:03:00.995Z · LW(p) · GW(p)

I think there's a tradeoff where on one side it seems fair to keep the two identities separate, on the other as a journalist it makes sense that if it takes five minutes for someone to find out Scott's last name, including it in the article doesn't sound like a big deal.

The problem is that, from the point of view of a patient, you probably needed more than 5 minutes and Scott's full name to find out about the blog.

comment by Sherrinford · 2024-03-30T22:30:22.916Z · LW(p) · GW(p)

Suppose Carlos Irwin Estévez worked as a therapist part-time, and he kept his identities separate such that his patients could not use his publicly known behavior as Sheen in order to update about whether they should believe his methods work. Should journalists writing about the famous Estévez method of therapy keep his name out of the article to support him?

Replies from: adam-scherlis
comment by Adam Scherlis (adam-scherlis) · 2024-04-02T06:58:14.127Z · LW(p) · GW(p)

That might be relevant if anyone is ever interested in writing an article about Scott's psychiatric practice, or if his psychiatric practice was widely publicly known. It seems less analogous to the actual situation.

To put it differently: you raise a hypothetical situation where someone has two prominent identities as a public figure. Scott only has one. Is his psychiatrist identity supposed to be Sheen or Estévez, here?

Replies from: Sherrinford
comment by Sherrinford · 2024-04-03T18:03:20.483Z · LW(p) · GW(p)

Estévez. If I recall this correctly, Scott thought that potential or actual patients could be influenced in their therapy by knowing his public writings. (But I may misremember that.)

Replies from: adam-scherlis
comment by Adam Scherlis (adam-scherlis) · 2024-04-03T22:56:44.379Z · LW(p) · GW(p)

In that case, "journalists writing about the famous Estévez method of therapy" would be analogous to journalists writing about Scott's "famous" psychiatric practice.

If a journalist is interested in Scott's psychiatric practice, and learns about his blog in the process of writing that article, I agree that they would probably be right to mention it in the article. But that has never happened because Scott is not famous as a psychiatrist.

Replies from: Sherrinford
comment by Sherrinford · 2024-04-05T07:16:28.996Z · LW(p) · GW(p)

I said Estévez because he is the less famous aspect of the person, not because I super-finetuned the analogy.

Updating your trust in your therapist seems to be a legitimate interest even if he is not famous for his psychiatric theory or practice. Suppose for example that an influential and controversial (e.g. white-supremacist) politician spent half his week being a psychiatrist and the other half doing politics, but somehow did the former anonymously. I think patients might legitimately want to know that their psychiatrist is this person. This might even be true if the psychiatrist is only locally active, like the head of a KKK chapter. And journalists might then find it inappropriate to treat the two identities as completely separate.

I assume there are reasons for publishing the name and reasons against. It is not clear that being a psychiatrist is always an argument against.

Part of the reason is, possibly, that patients often cannot directly judge the quality of therapy. Therapy is a credence good and therapists may influence you in ways that are independent of your depression or anorexia. So having more information about your psychiatrist may be helpful. At the same time, psychiatrists try to keep their private life out of the therapy, for very good reasons. It is not completely obvious to me where journalists should draw the line.

Replies from: adam-scherlis
comment by Adam Scherlis (adam-scherlis) · 2024-04-08T00:13:30.424Z · LW(p) · GW(p)

That's a reasonable argument but doesn't have much to do with the Charlie Sheen analogy.

The key difference, which I think breaks the analogy completely, is that (hypothetical therapist) Estévez is still famous enough as a therapist for journalists to want to write about his therapy method. I think that's a big enough difference to make the analogy useless.

If Charlie Sheen had a side gig as an obscure local therapist, would journalists be justified in publicizing this fact for the sake of his patients? Maybe? It seems much less obvious than if the therapy was why they were interested!

comment by DPiepgrass · 2024-03-27T10:09:19.750Z · LW(p) · GW(p)

I'm thinking it's not Metz's job to critique Scott, nor did his article admit to being a critique, but also that that's a strawman; Metz didn't publish the name "in order to" critique his ideas. He probably published it because he doesn't like the guy.

Why doesn't he like Scott? I wonder if Metz would've answered that question if asked. I doubt it: he wrote "[Alexander] aligned himself with Charles Murray, who proposed a link between race and I.Q." even though Scott did not align himself with Murray about race/IQ, nor is Murray a friend of his, nor does Alexander promote Murray, nor is race/IQ even 0.1% of what Scott/SSC/rationalism is about―yet Metz defends his misleading statement and won't acknowledge it's misleading. If he had defensible reasons to dislike Scott that he was willing to say out loud, why did he instead resort to tactics like that?

(Edit: I don't read/follow Metz at all, so I'll point to Gwern's comment for more insight [LW(p) · GW(p)])

Replies from: frankybegs
comment by frankybegs · 2024-03-27T14:04:56.016Z · LW(p) · GW(p)

Not hugely important, but I want to point this out because I think the concept is in the process of having its usefulness significantly diluted by overuse: that's not a straw man. That's just a false reason.

A straw man is when you refute an argument that your opponent didn't make, in order to make it look like you've refuted their actual argument.

comment by Jacob Falkovich (Jacobian) · 2024-03-26T18:03:08.802Z · LW(p) · GW(p)

Kudos for getting this interview and posting it! Extremely based.

As for Metz himself, nothing here changed my mind from what I wrote about him three years ago:

But the skill of reporting by itself is utterly insufficient for writing about ideas, to the point where a journalist can forget that ideas are a thing worth writing about. And so Metz stumbled on one of the most prolific generators of ideas on the internet and produced 3,000 words of bland gossip. It’s lame, but it’s not evil.

He just seems not bright or open-minded enough to understand norms of discussion and epistemology different from what is in the NYT employee handbook. It's not dangerous to talk to him (which I did back in 2020, before he pivoted the story to be about Scott). It's just kinda frustrating and pointless.

Replies from: romeostevensit
comment by romeostevensit · 2024-03-27T05:36:51.936Z · LW(p) · GW(p)

I think you're ascribing to stupidity what can be attributed to malice on a plain reading of his words.

comment by trevor (TrevorWiesinger) · 2024-03-26T19:25:05.175Z · LW(p) · GW(p)

The only reason that someone like Cade Metz is able to do what he does, performing at the level he has been with a mind like his, is that people keep going and talking to him. For example, he might not even have known about the "Among the AI Doomsayers" article until you told him about it (or he might have found out about it much sooner than he otherwise would have).

I can visibly see you training him, via verbal conversation, how to outperform the vast majority of journalists at talking about epistemics. You seemed to stop towards the end, but Metz nonetheless probably emerged from the conversation much better prepared to think up attempts to dishonestly angle-shoot the entire AI safety scene, as he has continued to do over the last several months.

From the original thread that coined the "Quokka" concept (which, important to point out, was written by an unreliable and often confused narrator):

Rationalists are, in Scott Alexander's formulation, missing a mood, or rather, they are drawn from a pool of mostly men who are missing one. "Normal" people instinctively grasp social norms without having them explained. Rationalists lack this instinct.

In particular, they struggle with small talk and other social norms around speech, because they naively think words are a vehicle for their literal meanings. Yud's sequences help this by formalizing the implicit decisions that normal people make.

...

The quokka, like the rationalist, is a creature marked by profound innocence. The quokka can't imagine you might eat it, and the rationalist can't imagine you might deceive him. As long they stay on their islands, they survive, but both species have problems if a human shows up.

In theory, rationalists like game theory, in practice, they need to adjust their priors. Real-life exchanges can be modeled as a prisoner's dilemma. In the classic version, the prisoners can't communicate, so they have to guess whether the other player will defect or cooperate.


The game changes when we realize that life is not a single dilemma, but a series of them, and that we can remember the behavior of other agents. Now we need to cooperate, and the best strategy is "tit for two tats", wherein we cooperate until our opponent defects twice.

The problem is, this is where rationalists hit a mental stop sign. Because in the real world, there is one more strategy that the game doesn't model: lying. See, the real best strategy is "be good at lying so that you always convince your opponent to cooperate, then defect".

And rationalists, bless their hearts, are REALLY easy to lie to. It's not like taking candy from a baby; babies actually try to hang onto their candy. The rationalists just limply let go and mutter, "I notice I am confused".

...

Rationalists = quokkas, this explains a lot about them. Their fear instincts have atrophied. When a quokka sees a predator, he walks right up; when a rationalist talks about human biodiversity on a blog under almost his real name, he doesn't flinch away.

A normal person learns from social cues that certain topics are forbidden, and that if you ask questions about them, you had better get the right answer, which is not the one with the highest probability of being true, but the one with the highest probability of keeping your job.

This ability to ask uncomfortable questions is one of the rationalist's best and worst attributes, because mental stop signs, like road stop signs, actually exist to keep you safe, and although there may be times one should disregard them, most people should mostly obey them,

...

Apropos of the game theory discussion above, if there is ONE thing I can teach you with this account, it's that you have evolved to be a liar. Lying is "killer app" of animal intelligence, it's the driver of the arms race that causes intelligence to evolve.

...

The main way that you stop being a quokka is that you realize there are people in the world who really want to hurt you [LW · GW]. There are people who will always defect, people whose good will is fake, whose behavior will not change if they hear the good news of reciprocity.

So things that everyone warns you not to do, like going and talking to people like Cade Metz, might seem like a source of alpha, undersupplied by the market. But in reality there is a good reason why everyone at least tried to coordinate not to do it, and at least tried to make it legible why people should not do that. Here the glass has already been blown into a specific shape and cooled.

Do not talk to journalists without asking for help. You have no idea how much there is to lose, even just from a short, harmless-seeming conversation where they are able to look at how your face changes as you talk about some topics and avoid others.

Human genetic diversity implies that there are virtually always people out there who are much better at that than you'd expect from your own life experience of looking at people's facial expressions, no matter your skill level, and other factors indicate that these people probably started pursuing high-status positions a long time ago.

Replies from: Hazard, Chris_Leong, matto, garrison
comment by Hazard · 2024-03-27T01:14:27.260Z · LW(p) · GW(p)

I can visibly see you training him, via verbal conversation, how to outperform the vast majority of journalists at talking about epistemics.

Metz doesn't seem any better at seeming like he cares about or thinks at all about epistemics than he did in 2021.

https://naturalhazard.xyz/vassar_metz_interview.html

Replies from: gwern, Jiro, Benito
comment by gwern · 2024-03-27T01:53:17.587Z · LW(p) · GW(p)

Interesting interview. Metz seems extraordinarily incurious about anything Vassar says - like he mentions all sorts of things like Singularity University or Kurzweil or Leverage, which Metz clearly doesn't know much about and are relevant to his stated goals, but Metz is instead fixated on asking about a few things like 'how did X meet Thiel?' 'how did Y meet Thiel?' 'what did Z talk about with Thiel?' 'What did A say to Musk at Puerto Rico?' Like he's not listening to Vassar at all, just running a keyword filter over a few people's names and ignoring anything else. (Can you imagine, say, Caro doing an interview like this? Dwarkesh Patel? Or literally any Playboy interviewer? Even Lex Fridman asks better questions.)

I was wondering how, in his 2021 DL book Genius Makers, he could have so totally missed the scaling revolution when he was talking to so many of the right people, who surely would've told him how it was happening; and I guess seeing how he does interviews helps explain it: he doesn't hear even the things you tell him, just the things he expects to hear. Trying to tell him about the scaling hypothesis would be like trying to tell him about, well, things like Many Worlds... (He is also completely incurious about GPT-3 in this August 2020 interview, which is especially striking given all the reporting he's done on people at OA since then, and the fact that he was presumably working on finishing Genius Makers for its March 2021 publication despite how obvious it should have been that GPT-3 may have rendered it obsolete almost a year before publication.)

And Metz does seem unable to explain at all what he considers 'facts' or what he does when reporting or how he picks the topics to fixate on that he does, giving bizarre responses like

Cade Metz: Well, you know, honestly, my, like, what I think of them doesn't matter what I'm trying to do is understand what's going on like, and so -

How do you 'understand' them without 'thinking of them'...? (Some advanced journalist Buddhism?) Or how about his blatant dodges and non-responses:

Michael Vassar: So you have read Scott's posts about Neo-reaction, right? They're very long.

Cade Metz: Yes.

Michael Vassar: So what did you think of those?

Cade Metz: Well, okay, maybe maybe I'll get even simpler here. So one thing I mentioned is just sort of the way all this stuff played out. So you had this relationship with Peter Thiel, Peter Thiel has, had, this relationship with, with Curtis Yarvin. Do you know much about that? Like, what's the overlap between sort of Yarvin's world and Silicon Valley?

We apparently have discovered the only human being to ever read all of Scott's criticisms of NRx and have no opinion or thought about them whatsoever. Somehow, it is 'simpler' to instead pivot to... 'how did X have a relationship with Thiel' etc. (Simpler in what respect, exactly?)

I was also struck by this passage at the end on the doxing:

Michael Vassar: ...So there are some important facts that need to be explained. There's there's this fact about why it would seem threatening to a highly influential psychologist and psychiatrist and author to have a New York Times article written about his blog with his real name, that seems like a very central piece of information that would need to be gathered, and which I imagine you've gathered to some degree, so I'd love to hear your take on that.

Cade Metz: Well, I mean... sigh Well, rest assured, you know, we we will think long and hard about that. And also -

Vassar: I'm not asking you do to anything, or to not do anything. I'm asking a question about what information you've gathered about the question. It's the opposite of a call to action: it's a request for facts.

Cade Metz: Yeah, I mean, so you know, I think what I don't know for sure, but I think when it comes time, you know, depending on what the what the decision is, we might even try to explain it in like a separate piece. You know, I think there's a lot of misinformation out there about this and and not all the not all the facts are out about this and so it is it is our job as trained journalists who have a lot of experience with this stuff. To to get this right and and we will.

Michael Vassar: What would getting it right mean?

Cade Metz: Well, I will send our - send you a link whenever, whenever the time comes,

Michael Vassar: No, I don't mean, "what will you do?" I'm saying what - what, okay. That that the link, whenever the time comes, would be a link to what you did. If getting it right means "whatever you end up doing", then it's a recursive definition and therefore provides no information about what you're going to do. The fact that you're going to get it right becomes a non-fact.

Cade Metz: Right. All right. Well... pause let me put it this way. We are journalists with a lot of experience with these things. And, and that is -

Michael Vassar: Who's "we"?

Cade Metz: Okay, all right. You know, I don't think we're gonna reach common ground on this. So I might just have to, to, to beg off on this. But honestly, I really appreciate all your help on this. I do appreciate it. And I'll send you a copy of this recording. As I said, and I really appreciate you taking all the time. It's, it's been helpful.

One notes that there was no separate piece, and even in OP's interview of Metz 4 years later about a topic that he promised Vassar he was going to have "thought long and hard about" and which caused Metz a good deal of trouble, Metz appears to struggle to provide any rationale beyond the implied political activism one. Here Metz struggles to even think of what the justification could be or even who exactly is the 'we' making the decision to dox Scott. This is not some dastardly gotcha but would seem to be a quite straightforward question and easy answer: "I and my editor at the NYT on this story" would not seem to be a hard response! Who else could be involved? The Pope? Pretty sure it's not, like, NYT shareholders like Carlos Slim who are gonna make the call on it... But Metz instead speaks ex cathedra in the royal we, and signs off in an awful hurry after he says "once I gather all the information that I need, I will write a story" and Vassar starts asking pointed questions about that narrative and why it seems to presuppose doxing Scott while unable to point to some specific newsworthy point of his true name like "his dayjob turns out to be Grand Wizard of the Ku Klux Klan".

(This interview is also a good example of the value of recordings. Think how useful this transcript is and how much less compelling some Vassar paraphrases of their conversation would be.)

comment by Jiro · 2024-03-27T03:07:33.273Z · LW(p) · GW(p)

"Outperform at talking about epistemics" doesn't mean "perform better at being epistemically correct", it means "perform better at getting what he wants when epistemics are involved".

Replies from: Hazard
comment by Hazard · 2024-03-27T03:39:39.185Z · LW(p) · GW(p)

I might be misunderstanding, but I understood the comment I was responding to as saying that Zack was helping Cade do a better job of disguising himself as someone who cared about good epistemics. Something like "if Zack keeps talking, Cade will learn the surface-level features of a good convo about epistemology and thus, even if he still doesn't know shit, he'll be able to trick more people into thinking he's someone worth talking to."

In response to that claim, I shared an older interview of Cade to demonstrate that he's been exposed to people who talk about epistemology for a while; he did not do a convincing job of pretending to be in good faith then, and in this interview with Zack I don't think he's doing any better a job of seeming like he's acting in good faith.

And while there can still be plenty of reasons to not talk to journalists, or Cade in particular, I really don't think "you'll enable them to mimic us better" is remotely plausible.

Replies from: DPiepgrass, Jiro
comment by DPiepgrass · 2024-03-27T10:55:23.175Z · LW(p) · GW(p)

I agree, except for the last statement. I've found that talking to certain people with bad epistemology about epistemic concepts will, instead of teaching them concepts, teach them a rhetorical trick that (soon afterward) they will try to use against you as a "gotcha" (related [LW · GW])... as a result of them having a soldier mindset and knowing you have a different political opinion.

While I expect most of them won't ever mimic rationalists well, (i) mimicry per se doesn't seem important and (ii) I think there are a small fraction of people (tho not Metz) who do end up fostering a "rationalist skin" ― they talk like rationalists, but seem to be in it mostly for gotchas, snipes and sophistry.

comment by Jiro · 2024-03-27T14:11:15.049Z · LW(p) · GW(p)

I understood the comment I was responding to as saying that Zack was helping Cade do a better job of disguising himself as someone who cared about good epistemics.

Yes, but disguising himself as someone who cares about good epistemics doesn't require using good epistemics. Rather it means saying the right things to get the rationalist to let his guard down. There are plenty of ways to convince people about X that don't involve doing X.

Replies from: Hazard
comment by Hazard · 2024-03-27T17:29:53.214Z · LW(p) · GW(p)

I agree that disguising one's self as "someone who cares about X" doesn't require being good at X, at least when you only have short contained contact with them.

I'm trying to emphasize that I don't think Cade has made any progress in learning to "say the right things". I think he has probably learned more individual words that are more frequent in a rationalist context than not (like the word "priors"), but it seems really unlikely that he's gotten any better at even the grammar of rationalist communication.

Like, I'd be mediumly surprised if he, when talking to a rat, said something like "so what's your priors on XYZ?" I'd be incredibly surprised if he said something like "there's clearly a large inferential distance between your world model and the public's world model, so maybe you could help point me towards what you think the cruxes might be for my next article?"

That last sentence seems like a v clear example of something that doesn't actually require understanding or caring about epistemology to utter, yet if I heard it I'd assume a certain orientation to epistemology and someone could falsely get me to "let my guard down". I don't think Cade can do things like that. And based on Zack's convo and Vassar's convo with him, and the amount of time and exposure he's had to learn between the two convos, I don't think that's the sort of thing he's capable of.

Replies from: Jiro
comment by Jiro · 2024-03-28T18:32:31.061Z · LW(p) · GW(p)

it seems really unlikely that he’s gotten any better at even the grammar of rationalist communication.

You don't need to use rationalist grammar to convince rationalists that you like them. You just need to know what biases of theirs to play upon, what assumptions they're making, how to reassure them, etc.

The skills for pretending to be someone's friend are very different from the skills for acting like them.

comment by Ben Pace (Benito) · 2024-04-08T03:10:13.931Z · LW(p) · GW(p)

That interview is hysterically funny. 

I think as a 15-year-old I could've had as hard a time as Metz did there. I mean, not if it were a normal conversation; then I would have been far more curious about the questions being brought up. But if it was a first connection and I was trying to build a relationship and interview someone for information in a high-stakes situation, then I can imagine being very confused and just trying to placate (and at that age I may have given a false pretense of understanding my interlocutor).

Yet from my current position everything Vassar said was quite straightforward (though the annotations were helpful).

comment by Chris_Leong · 2024-03-27T00:10:24.279Z · LW(p) · GW(p)

What did he say that was dishonest in the China article? (It's paywalled).

comment by matto · 2024-03-29T18:40:37.821Z · LW(p) · GW(p)

How would you label Metz's approach in the dialogue with Zack? To me it's clear that Zack is engaging in truth-seeking--questioning maps, seeking where they differ, trying to make sense of noisy data.

But Metz is definitely not doing that. Plenty of Dark Arts techniques there, and his immediate goal is pretty clear (defend position & extract information), but I can't quite put a finger on his larger goal.

If Zack is doing truth-seeking, then Metz is doing...?

comment by garrison · 2024-03-28T22:38:54.779Z · LW(p) · GW(p)

I only skimmed the NYT piece about China and AI talent, but didn't see evidence of what you said (dishonestly angle-shooting the AI safety scene).

comment by romeostevensit · 2024-03-26T22:51:31.180Z · LW(p) · GW(p)

I hope these responses help people understand that the hostility to journalists is not based on hyperbole. They really are like this. They really are competing to wreck the commons for a few advertising dollars.

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2024-03-27T02:51:45.134Z · LW(p) · GW(p)

If you're looking to convince without hyperbole, drawing the link from "Cade Metz" to "Journalists" would be nice, as would explaining any obvious cutouts that make someone an Okay Journalist.

Replies from: Jiro
comment by Jiro · 2024-03-27T03:12:51.080Z · LW(p) · GW(p)

His behavior is clearly accepted by the New York Times, and the Times is big and influential enough among mainstream journalists that this reflects on the profession in general.

explaining any obvious cutouts that make someone an Okay Journalist.

Not lying (by non-Eliezer standards [LW · GW]) would be a start.

Replies from: tomcatfish, Silver_Swift
comment by Alex Vermillion (tomcatfish) · 2024-03-28T17:00:42.754Z · LW(p) · GW(p)

Hm. I think we like Slate Star Codex in this thread, so let's enjoy a throwback:

It was wrong of me to say I hate poor minorities. I meant I hate Poor Minorities! Poor Minorities is a category I made up that includes only poor minorities who complain about poverty or racism.

No, wait! I can be even more charitable! A poor minority is only a Poor Minority if their complaints about poverty and racism come from a sense of entitlement. Which I get to decide after listening to them for two seconds. And if they don't realize that they're doing something wrong, then they're automatically a Poor Minority.

I dedicate my blog to explaining how Poor Minorities, when they’re complaining about their difficulties with poverty or asking why some people like Paris Hilton seem to have it so easy, really just want to steal your company’s money and probably sexually molest their co-workers. And I’m not being unfair at all! Right? Because of my new definition! I know everyone I’m talking to can hear those Capital Letters. And there’s no chance whatsoever anyone will accidentally misclassify any particular poor minority as a Poor Minority. That’s crazy talk! I’m sure the “make fun of Poor Minorities” community will be diligently self-policing against that sort of thing. Because if anyone is known for their rigorous application of epistemic charity, it is the make-fun-of-Poor-Minorities community!

I’m not even sure I can dignify this with the term “motte-and-bailey fallacy”. It is a tiny Playmobil motte on a bailey the size of Russia. (from https://slatestarcodex.com/2014/08/31/radicalizing-the-romanceless/)

Can your use of "Journalism" pass this test? Can we really say "the hostility to journalists is not based on hyperbole. They really are like this. They really are competing to wreck the commons for a few advertising dollars." and expect everyone to pay close attention to check that the target is a Bad Journalist Who Lies first?

Replies from: Jiro
comment by Jiro · 2024-03-28T18:41:05.268Z · LW(p) · GW(p)

The reason that I can make a statement about journalists based on this is that the New York Times really is big and influential in the journalism profession. On the other hand, Poor Minorities aren't representative of poor minorities.

Not only that, the poor minorities example is wrong in the first place. Even the restricted subset of poor minorities don't all want to steal your company's money. The motte-and-bailey statement isn't even true about the motte. You never even get to the point of saying something that's true about the motte but false about the bailey.

comment by Douglas_Knight · 2024-03-26T19:42:06.968Z · LW(p) · GW(p)

That they have a "real names" policy is a blatant lie.

They withhold "real names" every day, even ones so "obvious" as to be in Wikipedia. If they hate the subject, such as Virgil Texas, they assert that it is a pen name. Their treatment of Scott is off-the-charts hostile.

If we cannot achieve common knowledge of this, what is the point of any other detail?

comment by habryka (habryka4) · 2024-03-26T17:25:12.125Z · LW(p) · GW(p)

They're critical questions, but one of the secret-lore-of-rationality things is that a lot of people think criticism is bad, because if someone criticizes you, it hurts your reputation.

I am so confused about this statement. Is this a joke?

But that seems like a weird joke to make with a journalist who almost certainly doesn't get it. 

Or maybe I am failing to parse this. I read this as saying "people in the rationality community believe that criticism is bad because that hurts your reputation, but I am a contrarian within that community and think it's good" which would seem like a kind of hilarious misrepresentation of where most people in the community are at on the topic of criticism, so maybe you meant something else here.

Replies from: Seth Herd, Unnamed, sumguysr, sinclair-chen
comment by Seth Herd · 2024-03-26T18:00:13.791Z · LW(p) · GW(p)

I thought he just meant "criticism is good, actually; I like having it done to me so I'm going to do it to you", and was saying that rationalists tend to feel this way.

Replies from: habryka4
comment by habryka (habryka4) · 2024-03-26T18:02:57.422Z · LW(p) · GW(p)

Yeah, that would make sense to me, but the text does seem at least unclear on whether that's what Zack did try to say (and seems good to clarify).

Replies from: Zack_M_Davis, SkinnyTy, abramdemski
comment by Zack_M_Davis · 2024-03-27T05:38:45.109Z · LW(p) · GW(p)

I affirm Seth's interpretation in the grandparent. Real-time conversation is hard; if I had been writing carefully rather than speaking extemporaneously, I probably would have managed to order the clauses correctly. ("A lot of people think criticism is bad, but one of the secret-lore-of-rationality things is that criticism is actually good.")

Replies from: Jiro, habryka4
comment by Jiro · 2024-03-30T04:36:19.261Z · LW(p) · GW(p)

I think some kinds of criticism are good and some are not. Criticizing you because I have some well-stated objection to your ideas is good. Criticizing you by saying "Zack posts in a place which contains fans of Adolf Hitler" is bad. Criticizing you by causing real-life problems to happen to you (i.e. analogous to doxing Scott) is also bad.

comment by habryka (habryka4) · 2024-03-27T07:01:17.048Z · LW(p) · GW(p)

Cool, makes sense. Transcript editing is hard, as I know from experience.

comment by HiddenPrior (SkinnyTy) · 2024-03-26T19:09:40.084Z · LW(p) · GW(p)

This may be an example of one of those things where the meaning is clearer in person, when assisted by tone and body language.

comment by abramdemski · 2024-03-26T19:45:32.946Z · LW(p) · GW(p)

The "one of those" phrasing makes me think there was prior conversational context about this before the start of the interview. From my own prior knowledge of Zack, my guess is that it is a tragedy of the green rationalist [LW · GW] type sentiment. But it doesn't exactly fit.

comment by Unnamed · 2024-03-27T01:36:36.930Z · LW(p) · GW(p)

They're critical questions, but one of the secret-lore-of-rationality things is that a lot of people think criticism is bad, because if someone criticizes you, it hurts your reputation. But I think criticism is good, because if I write a bad blog post, and someone tells me it was bad, I can learn from that, and do better next time.

I read this as saying 'a common view is that being criticized is bad because it hurts your reputation, but as a person with some knowledge of the secret lore of rationality I believe that being criticized is good because you can learn from it.'

And he isn't making a claim about to what extent the existing LW/rationality community shares his view.

comment by sumguysr · 2024-03-27T13:14:27.280Z · LW(p) · GW(p)

I think you're parsing it very literally. This was an in person conversation with much less strict rules of construction. 

I took it to mean: there's a lore in the rationality community that criticism is good because it helps you improve, contrary to the general feeling that it's bad because it hurts your reputation. 

It's presented out of order because there's a conversation going on where the speaker only has a rough idea they want to communicate and they're putting it into words as they go, and there's non-verbal feedback going on in the process we can't see.

When I imagine myself in Metz's position I expect he would take this same meaning, and I therefore think it's likely a lot of other readers would take the same meaning. I think the only major ambiguity exists when readers parse it as something different than a transcript.

comment by Sinclair Chen (sinclair-chen) · 2024-03-27T08:22:03.882Z · LW(p) · GW(p)

My literal interpretation of Zack:

- The secret lore of the Rationalist movement is that some specific kinds of criticism make Rationalists hate you, such as criticizing someone for being too friendly to racists.
- The secret truth of rationality is that all "criticism" is at least neutral and possibly good for a perfectly rational agent, including criticizing the agent for being too friendly to racists.

My thoughts

- Reputation is real, but less real than you think or hope. And reputation is asymmetrically fact-favored - just speak the truth and keep being you, and your reputation will follow.
  - Slander may cause dumb or mean people to turn against you, but wise people will get it and kind people will forgive you, and those people are who really matters.
  - Bad press is good press. It helps you win at the attention economy.
- The Rationalists are better at accepting criticism, broadly construed, than average. 
- The Rationalists are better at handling culture-war stuff than average, but mostly because they are more autistic and more based than average.
- The average sucks. Seek perfection.
- I understand on an emotional level being afraid of cancel culture. I used to be. For me it's tied up with fear of social isolation, loneliness, rejection. I overreacted to this and decided to "not care what other people think" (while still actually caring that people saw me as clever, contrarian, feminine, etc.; I just mean I decided to be egotistical). This led to the opposite failure of not listening enough to others. But it was a lot of fun. I think the right identity-stance is in between.

On a personal level, Crocker's rule made me happier and made me believe more true things. Even activating, unfair, or false criticism is a gift of feedback. The last time someone said something super triggering to me, it caused me to open up my feelings and love people more. The time before that, I became more accepting of embarrassing kinks I had - and that time it was from some quite off-base, trolly criticism.
It's related to "staring into the void" or considering the worst possible scenarios - literally as in "what if I lose my job" but also spiritually like "what if people stop loving me." Kinda like how you're supposed to think of death five times a day to be happy. Or like being a D&D monk, I imagine. Either they're right and you deserve it, or they're wrong and it doesn't matter.

 

comment by LGS · 2024-03-26T19:01:07.018Z · LW(p) · GW(p)

I think for the first objection about race and IQ I side with Cade. It is just true that Scott thinks what Cade said he thinks, even if that one link doesn't prove it. As Cade said, he had other reporting to back it up. Truth is a defense against slander, and I don't think anyone familiar with Scott's stance can honestly claim slander here.

This is a weird hill to die on because Cade's article was bad in other ways.

Replies from: pktechgirl, wilkox, cubefox
comment by Elizabeth (pktechgirl) · 2024-03-26T22:52:42.023Z · LW(p) · GW(p)

Let's assume that's true: why bring Murray into it? Why not just say the thing you think he believes, and give whatever evidence you have for it? That could include the way he talks about Murray, but "Scott believes X, and there's evidence in how he talks about Y" is very different from "Scott is highly affiliated with Y".

Replies from: LGS
comment by LGS · 2024-03-27T00:14:11.796Z · LW(p) · GW(p)

I assume it was hard to substantiate.

Basically it's pretty hard to find Scott saying what he thinks about this matter, even though he definitely thinks this. Cade is cheating with the citations here but that's a minor sin given the underlying claim is true.

It's really weird to go HOW DARE YOU when someone says something you know is true about you, and I was always unnerved by this reaction from Scott's defenders. It reminds me of a guy I know who was cheating on his girlfriend, and she suspected this, and he got really mad at her. Like, "how can you believe I'm cheating on you based on such flimsy evidence? Don't you trust me?" But in fact he was cheating.

Replies from: tomcatfish, frankybegs, DPiepgrass, pktechgirl
comment by Alex Vermillion (tomcatfish) · 2024-03-27T02:56:37.901Z · LW(p) · GW(p)

I don't think "Giving fake evidence for things you believe are true" is in any way a minor sin of evidence presentation

Replies from: interstice, LGS
comment by interstice · 2024-03-27T05:39:41.192Z · LW(p) · GW(p)

How is Metz's behavior here worse than Scott's own behavior defending himself? After all, Metz doesn't explicitly say that Scott believes in racial IQ differences; he just mentions Scott's endorsement of Murray in one post and his account of Murray's beliefs in another, in a way that suggests a connection. Similarly, Scott doesn't explicitly deny believing in racial IQ differences in his response post; he just lays out the context of the posts in a way that suggests that the accusation is baseless. (Perhaps you think Scott's behavior is locally better? But he's following a strategy of covertly communicating his true beliefs while making any individual instance look plausibly deniable, so he's kind of optimizing against "locally good behavior" tracking truth here, and it seems perverse to give him credit for this.)

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2024-03-28T16:47:43.555Z · LW(p) · GW(p)

I don't (and shouldn't) care what Scott Alexander believes in order to figure out whether what Cade Metz said was logically valid. You do not need to figure out how many bones a cat has to say that "The moon is round, so a cat has 212 bones" is not valid.

Replies from: Kalciphoz
comment by Cornelius Dybdahl (Kalciphoz) · 2024-03-29T03:19:18.309Z · LW(p) · GW(p)

The issue at hand is not whether the "logic" was valid (incidentally, you are disputing the logical validity of an informal insinuation whose implication appears to be factually true, despite the hinted connection — that Scott's views on HBD were influenced by Murray's works — being merely probable)

The issues at hand are:

1. whether it is a justified "weapon" to use in a conflict of this sort

2. whether the deed is itself immoral beyond what is implied by "minor sin"

comment by LGS · 2024-03-27T04:36:37.828Z · LW(p) · GW(p)

The evidence wasn't fake! It was just unconvincing. "Giving unconvincing evidence because the convincing evidence is confidential" is in fact a minor sin.

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2024-03-28T16:46:06.217Z · LW(p) · GW(p)

The evidence offered ("Scott agrees with the The Bell Curve guy") is of the same type and strength as that needed to link him to Hitler, Jesus Christ, Eliezer Yudkowsky, Cade Metz, and so on. There was absolutely nothing special about the evidence that tied it to the people offered; it could have been recast without loss of accuracy to fit any leaning.

As we all know around here, an observation that can prove anything is not evidence of anything.

comment by frankybegs · 2024-03-28T10:17:39.065Z · LW(p) · GW(p)

So despite it being "hard to substantiate", or to "find Scott saying" it, you think it's so certainly true that a journalist is justified in essentially lying in order to convey it to his audience?

comment by DPiepgrass · 2024-03-27T11:23:10.859Z · LW(p) · GW(p)

he definitely thinks this

He definitely thinks what, exactly?

Anyway, the situation is like: X is writing a summary about author Y who has written 100 books, but pretty much ignores all those books in favor of digging up some dirt on what Y thinks about a political topic Z that Y almost never discusses (and then instead of actually mentioning any of that dirt, X says Y "aligned himself" with a famously controversial author on Z.)

It's really weird to go HOW DARE YOU when someone says something you know is true about you, and I was always unnerved by this reaction from Scott's defenders

It's not true though. Perhaps what he believes is similar to what Murray believes, but he did not "align himself" with Murray on race/IQ. Like, if an author in Alabama reads the scientific literature and quietly comes to a conclusion that humans cause global warming, it's wrong for the Alabama News to describe this as "author has a popular blog, and he has aligned himself with Al Gore and Greta Thunberg!" (which would tend to encourage Alabama folks to get out their pitchforks 😉) (Edit: to be clear, I've read SSC/ACX for years and the one and only time I saw Scott discuss race+IQ, he linked to two scientific papers, didn't mention Murray/Bell Curve, and I don't think it was the main focus of the post―which makes it hard to find it again.)

Replies from: LGS
comment by LGS · 2024-03-27T20:20:31.229Z · LW(p) · GW(p)

Scott thinks very highly of Murray and agrees with him on race/IQ. Pretty much any implication one could reasonably draw from Cade's article regarding Scott's views on Murray or on race/IQ/genes is simply factually true. Your hypothetical author in Alabama has Greta Thunberg posters in her bedroom here.

Replies from: frankybegs, DPiepgrass
comment by frankybegs · 2024-03-28T13:16:28.086Z · LW(p) · GW(p)

Scott thinks very highly of Murray and agrees with him on race/IQ. 


This is very much not what he's actually said on the topic, which I've quoted in another reply to you. Could you please support that claim with evidence from Scott's writings? And then could you consider that by doing so, you have already done more thorough journalism on this question than Cade Metz did before publishing an incredibly inflammatory claim on it in perhaps the world's most influential newspaper?

Replies from: xpym
comment by xpym · 2024-04-03T11:25:48.062Z · LW(p) · GW(p)
comment by DPiepgrass · 2024-04-04T01:53:14.364Z · LW(p) · GW(p)

Strong disagree based on the "evidence" you posted for this elsewhere in this thread. It consists of one-half some dude on Twitter asserting that "Scott is a racist eugenics supporter" and retweeting other people's inflammatory rewordings of Scott, and one-half a private email from Scott saying things like:

HBD is probably partially correct or at least very non-provably not-correct

It seems gratuitous for you to argue the point with such biased commentary. And what Scott actually says sounds like his judgement of ... I'm not quite sure what, since HBD is left without a definition, but it sounds a lot like the evidence he mentioned years later from 

(yes, I found the links I couldn't find earlier thanks to a quote by frankybegs [LW(p) · GW(p)] from this post which―I was mistaken!―does mention Murray and The Bell Curve because he is responding to Cade Metz and other critics).

This sounds like his usual "learn to love scientific consensus" stance, but it appears [LW(p) · GW(p)] you refuse to acknowledge a difference between Scott privately deferring to expert opinion, on one hand, and having "Charles Murray posters on his bedroom wall".

Almost the sum total of my knowledge of Murray's book comes from Shaun's rebuttal of it, which sounded quite reasonable to me. But Shaun argues that specific people are biased and incorrect, such as Richard Lynn and (duh) Charles Murray. Not only does Scott never cite these people, what he said about The Bell Curve was "I never read it". And why should he? Murray isn't even a geneticist!

So it seems the secret evidence matches the public evidence, does not show that "Scott thinks very highly of Murray", doesn't show that he ever did, doesn't show that he is "aligned" with Murray etc. How can Scott be a Murray fanboy without even reading Murray?

You saw this before:

I can't find any expert surveys giving the expected result that they all agree this is dumb and definitely 100% environment and we can move on (I'd be very relieved if anybody could find those, or if they could explain why the ones I found were fake studies or fake experts or a biased sample, or explain how I'm misreading them or that they otherwise shouldn't be trusted. If you have thoughts on this, please send me an email). I've vacillated back and forth on how to think about this question so many times, and right now my personal probability estimate is "I am still freaking out about this, go away go away go away". And I understand I have at least two potentially irresolveable biases on this question: one, I'm a white person in a country with a long history of promoting white supremacy; and two, if I lean in favor then everyone will hate me, and use it as a bludgeon against anyone I have ever associated with, and I will die alone in a ditch and maybe deserve it.

You may just assume Scott is lying (or as you put it, "giving a maximally positive spin on his own beliefs"), but again I think you are conflating. Supposing that experts in a field have expertise in that field isn't merely different from "aligning oneself" with a divisive conservative political scientist whose book one has never read ― it's really obviously different; how are you not getting this??

Replies from: LGS
comment by LGS · 2024-04-08T08:40:01.088Z · LW(p) · GW(p)

Weirdly aggressive post.

I feel like maybe what's going on here is that you do not know what's in The Bell Curve, so you assume it is some maximally evil caricature? Whereas what's actually in the book is exactly Scott's position, the one you say is "his usual "learn to love scientific consensus" stance".

If you'd stop being weird about it for just a second, could you answer something for me? What is one (1) position that Murray holds about race/IQ and Scott doesn't? Just name a single one, I'll wait.

Or maybe what's going on here is that you have a strong "SCOTT GOOD" prior as well as a strong "MURRAY BAD" prior, and therefore anyone associating the two must be on an ugly smear campaign. But there's actually zero daylight between their stances and both of them know it!

comment by Elizabeth (pktechgirl) · 2024-03-27T18:09:33.369Z · LW(p) · GW(p)

What Metz did is not analogous to a straightforward accusation of cheating. Straightforward accusations are what I wish he had made. What he did is the equivalent of angrily complaining to mutual friends that your boyfriend liked an Instagram post (of a sunset, but you leave that part out) by someone known to cheat (or who is maybe just polyamorous, and you don't consider there to be a distinction). If you made a straightforward accusation, your boyfriend could give a factual response. He's not well incentivized to do so, but it's possible. But if you're very angry he liked an innocuous Instagram post, what the hell can he say?

Replies from: LGS
comment by LGS · 2024-03-27T20:28:48.801Z · LW(p) · GW(p)


What Metz did is not analogous to a straightforward accusation of cheating. Straightforward accusations are what I wish he did.

 

It was quite straightforward, actually. Don't be autistic about this: anyone reasonably informed who is reading the article knows what Scott is accused of thinking when Cade mentions Murray. He doesn't make the accusation super explicit, but (a) people here would be angrier if he did, not less angry, and (b) that might actually pose legal issues for the NYT (I'm not a lawyer).

What Cade did reflects badly on Cade in the sense that it is very embarrassing to cite such weak evidence. I would never do that because it's mortifying to make such a weak accusation.

However, Scott has no possible gripe here. Cade's article makes embarrassing logical leaps, but the conclusion is true and the reporting behind the article (not featured in the article) was enough to show it true, so even a claim of being Gettier Cased does not work here.

Replies from: frankybegs
comment by frankybegs · 2024-03-28T13:08:08.296Z · LW(p) · GW(p)

This is reaching Cade Metz levels of slippery justification.

He doesn't make the accusation super explicit, but (a) people here would be angrier if he did, not less angry

How is this relevant? As Elizabeth says, it would be more honest and epistemically helpful if he made an explicit accusation. People here might well be angry about that, but a) that's not relevant to what is right and b) that's because, as you admit, that accusation could not be substantiated. So how is it acceptable to indirectly insinuate that accusation instead? 

(Also c), I think you're mistaken in that prediction).

(b) that might actually pose legal issues for the NYT (I'm not a lawyer).

Relatedly, if you cannot outright make a claim because it is potentially libellous, you shouldn't use vague insinuation to imply it to your massive and largely-unfamiliar-with-the-topic audience.

However, Scott has no possible gripe here.

You have yourself outlined several possible gripes. I'd have a gripe with someone dishonestly implying an enormously inflammatory accusation to their massive audience without any evidence for it, even if it were secretly true (which I still think you need to do more work to establish).

I think there are multiple further points to be made about why it's unacceptable, outside of the dark side epistemology angle above. Here's Scott's direct response to exactly your accusation: that despite Metz having been dishonest in making it, Scott does truly believe what Metz implied:

This is far enough from my field that I would usually defer to expert consensus, but all the studies I can find which try to assess expert consensus seem crazy. A while ago, I freaked out upon finding a study that seemed to show most expert scientists in the field agreed with Murray's thesis in 1987 - about three times as many said the gap was due to a combination of genetics and environment as said it was just environment. Then I freaked out again when I found another study (here is the most recent version, from 2020) showing basically the same thing (about four times as many say it’s a combination of genetics and environment compared to just environment). I can't find any expert surveys giving the expected result that they all agree this is dumb and definitely 100% environment and we can move on (I'd be very relieved if anybody could find those, or if they could explain why the ones I found were fake studies or fake experts or a biased sample, or explain how I'm misreading them or that they otherwise shouldn't be trusted. If you have thoughts on this, please send me an email). I've vacillated back and forth on how to think about this question so many times, and right now my personal probability estimate is "I am still freaking out about this, go away go away go away". And I understand I have at least two potentially irresolveable biases on this question: one, I'm a white person in a country with a long history of promoting white supremacy; and two, if I lean in favor then everyone will hate me, and use it as a bludgeon against anyone I have ever associated with, and I will die alone in a ditch and maybe deserve it. So the best I can do is try to route around this issue when considering important questions. This is sometimes hard, but the basic principle is that I'm far less sure of any of it than I am sure that all human beings are morally equal and deserve to have a good life and get treated with respect regardless of academic achievement.

I sort of agree that it's quite plausible to infer from this that he does believe there are some between-group average differences that are genetic in origin. But I think it allows Scott several gripes with the Metz' dishonest characterisation:

  • First of all, this is already significantly different, more careful and qualified than what Metz implied, and that's after we read into it more than what Scott actually said. Does that count as "aligning yourself"?
  • Relatedly, even if Scott did truly believe exactly what Charles Murray does on this topic, which again I don't think we can fairly assume, he hasn't said that, and that's important. Secretly believing something is different from openly espousing it, and morally it can be much different if one believes that openly espousing it could lead to it being used in harmful ways (which from the above, Scott clearly does, even in the qualified form which he may or may not believe). Scott is going to some lengths and being very careful not to espouse it openly and without qualification, and clearly believes it would be harmful to do so, so it's clearly dishonest and misleading to suggest that he "aligns himself" with Charles Murray on this topic. Again, this is even after granting the very shaky proposition that he secretly does align with Charles Murray, which I think we have established is a claim that cannot be substantiated.
  • Further, Scott, unlike Charles Murray, is very emphatic about the fact that, whatever the answer to this question, this should not affect our thinking on important issues or our treatment of anyone. Is this important addendum not elided by the idea that he 'aligned himself' with Charles Murray? Would that not be a legitimate "gripe"?

And in case you or Metz would argue that those sentiments post-date the article in question, here's an earlier Scott quote from 'In Favor of Civilisation':

Having joined liberal society, they can be sure that no matter what those researchers find, I and all of their new liberal-society buddies will fight tooth and nail against anyone who uses any tiny differences those researchers find to challenge the central liberal belief that everyone of every gender has basic human dignity. Any victory for me is going to be a victory for feminists as well; maybe not a perfect victory, but a heck of a lot better than what they have right now.

He's talking there about feminism and banning research into between-gender differences, but that passage and many of Scott's other writings make it very clear that he supports equal treatment and moral consideration for all. Is this not an important detail for a journalist to include when making such an inflammatory insinuation, one that could so easily be interpreted as implying the opposite?

Your position seems to amount to the epistemic equivalent of 'yes, the trial was procedurally improper, and yes the prosecutor deceived the jury with misleading evidence, and no the charge can't actually be proven beyond a reasonable doubt - but he's probably guilty anyway, so what's the issue'. I think the issue is journalistic malpractice. Metz has deliberately misled his audience in order to malign Scott on a charge which you agree cannot be substantiated, because of his own ideological opposition (which he admits). To paraphrase the same SSC post quoted above, he has locked himself outside of the walled garden. And you are "Andrew Cord", arguing that we should all stop moaning because it's probably true anyway so the tactics are justified.

Replies from: LGS
comment by LGS · 2024-03-29T01:09:42.189Z · LW(p) · GW(p)

Relatedly, if you cannot outright make a claim because it is potentially libellous, you shouldn't use vague insinuation to imply it to your massive and largely-unfamiliar-with-the-topic audience.

 

Strong disagree. If I know an important true fact, I can let people know in a way that doesn't cause legal liability for me.

Can you grapple with the fact that the "vague insinuation" is true? Like, assuming it's true and that Cade knows it to be true, your stance is STILL that he is not allowed to say it?

Your position seems to amount to the epistemic equivalent of 'yes, the trial was procedurally improper, and yes the prosecutor deceived the jury with misleading evidence, and no the charge can't actually be proven beyond a reasonable doubt - but he's probably guilty anyway, so what's the issue'. I think the issue is journalistic malpractice. Metz has deliberately misled his audience in order to malign Scott on a charge which you agree cannot be substantiated, because of his own ideological opposition (which he admits). To paraphrase the same SSC post quoted above, he has locked himself outside of the walled garden. And you are "Andrew Cord", arguing that we should all stop moaning because it's probably true anyway so the tactics are justified.

It is not malpractice, because Cade had strong evidence for the factually true claim! He just didn't print the evidence. The evidence was of the form "interview a lot of people who know Scott and decide who to trust", which is a difficult type of evidence to put into print, even though it's epistemologically fine (in this case IT LED TO THE CORRECT BELIEF so please give it a rest with the malpractice claims).


Here is the evidence of Scott's actual beliefs:

https://twitter.com/ArsonAtDennys/status/1362153191102677001

As for your objections:

  • First of all, this is already significantly different, more careful and qualified than what Metz implied, and that's after we read into it more than what Scott actually said. Does that count as "aligning yourself"?

This is because Scott is giving a maximally positive spin on his own beliefs! Scott is agreeing that Cade is correct about him! Scott had every opportunity to say "actually, I disagree with Murray about..." but he didn't, because he agrees with Murray just like Cade said. And that's fine! I'm not even criticizing it. It doesn't make Scott a bad person. Just please stop pretending that Cade is lying.


Relatedly, even if Scott did truly believe exactly what Charles Murray does on this topic, which again I don't think we can fairly assume, he hasn't said that, and that's important. Secretly believing something is different from openly espousing it, and morally it can be much different if one believes that openly espousing it could lead to it being used in harmful ways (which from the above, Scott clearly does, even in the qualified form which he may or may not believe). Scott is going to some lengths and being very careful not to espouse it openly and without qualification, and clearly believes it would be harmful to do so, so it's clearly dishonest and misleading to suggest that he "aligns himself" with Charles Murray on this topic. Again, this is even after granting the very shaky proposition that he secretly does align with Charles Murray, which I think we have established is a claim that cannot be substantiated.

Scott so obviously aligns himself with Murray that I knew it before that email was leaked or Cade's article was written, as did many other people. At some point, Scott even said that he will talk about race/IQ in the context of Jews in order to ease the public into it, and then he published this. (I can't find where I saw Scott saying it though.)

  • Further, Scott, unlike Charles Murray, is very emphatic about the fact that, whatever the answer to this question, this should not affect our thinking on important issues or our treatment of anyone. Is this important addendum not elided by the idea that he 'aligned himself' with Charles Murray? Would that not be a legitimate "gripe"?


Actually, this is not unlike Charles Murray, who also says this should not affect our treatment of anyone. (I disagree with the "thinking on important issues" part, which Scott surely does think it affects.)

Replies from: Jiro, DPiepgrass
comment by Jiro · 2024-03-30T04:45:25.844Z · LW(p) · GW(p)

The vague insinuation isn't "Scott agrees with Murray", the vague insinuation is "Scott agrees with Murray's deplorable beliefs, as shown by this reference". The reference shows no such thing.

Arguing "well, Scott believes that anyway" is not an excuse for fake evidence.

Replies from: Muireall
comment by Muireall · 2024-03-30T18:38:37.171Z · LW(p) · GW(p)

That section is framed with

Part of the appeal of Slate Star Codex, faithful readers said, was Mr. Siskind’s willingness to step outside acceptable topics. But he wrote in a wordy, often roundabout way that left many wondering what he really believed.

More broadly, part of the piece's thesis is that the SSC community is the epicenter of a creative and influential intellectual movement, some of whose strengths come from a high tolerance for entertaining weird or disreputable ideas.

Metz is trying to convey how Alexander makes space for these ideas without staking his own credibility on them. This is, for example, what Kolmogorov Complicity is about; it's also what Alexander says he's doing with the neoreactionaries in his leaked email. It seems clear that Metz did enough reporting to understand this.

The juxtaposition of "Scott aligns himself with Murray [on something]" and "Murray has deplorable beliefs" specifically serves that thesis. It also pattern-matches to a very clumsy smear, which I get the impression is triggering readers before they manage to appreciate how it relates to the thesis. That's unfortunate, because the “vague insinuation” is much less interesting and less defensible than the inference that Alexander is being strategic in bringing up Murray on a subject where it seems safe to agree with him.

Replies from: Jiro, SaidAchmiz
comment by Jiro · 2024-03-30T23:02:46.338Z · LW(p) · GW(p)

It also pattern-matches to a very clumsy smear, which I get the impression is triggering readers before they manage to appreciate how it relates to the thesis.

It doesn't just pattern match to a clumsy smear. It's also not the only clumsy smear in the article. You're acting as though that's the only questionable thing Metz wrote and that taken in isolation you could read it in some strained way to keep it from being a smear. It was not published in isolation.

comment by Said Achmiz (SaidAchmiz) · 2024-03-30T20:01:26.275Z · LW(p) · GW(p)

What does it mean to “make space for” some idea(s)?

Replies from: DPiepgrass, Muireall
comment by DPiepgrass · 2024-04-04T04:05:47.189Z · LW(p) · GW(p)

I think about it differently. When Scott does not support an idea, but discusses or allows discussion of it, it's not "making space for ideas" as much as "making space for reasonable people who have ideas, even when they are wrong". And I think making space for people to be wrong sometimes is good, important and necessary. According to his official (but confusing IMO) rules, saying untrue things is a strike against you, but insufficient for a ban.

Also, strong upvote because I can't imagine why this question should score negatively.

comment by Muireall · 2024-03-30T20:28:16.937Z · LW(p) · GW(p)

It's just a figure of speech for the sorts of thing Alexander describes in Kolmogorov Complicity. More or less the same idea as "Safe Space" in the NYT piece's title—a venue or network where people can have the conversations they want about those ideas without getting yelled at or worse.

Mathematician Andrey Kolmogorov lived in the Soviet Union at a time when true freedom of thought was impossible. He reacted by saying whatever the Soviets wanted him to say about politics, while honorably pursuing truth in everything else. As a result, he not only made great discoveries, but gained enough status to protect other scientists, and to make occasional very careful forays into defending people who needed defending. He used his power to build an academic bubble where science could be done right and where minorities persecuted by the communist authorities (like Jews) could do their work in peace...

But politically-savvy Kolmogorov types can’t just build a bubble. They have to build a whisper network...

They have to serve as psychological support. People who disagree with an orthodoxy can start hating themselves – the classic example is the atheist raised religious who worries they’re an evil person or bound for Hell – and the faster they can be connected with other people, the more likely they are to get through.

They have to help people get through their edgelord phase as quickly as possible. “No, you’re not allowed to say this. Yes, it could be true. No, you’re not allowed to say this one either. Yes, that one also could be true as best we can tell. This thing here you actually are allowed to say still, and it’s pretty useful, so do try to push back on that and maybe we can defend some of the space we’ve still got left.”

They have to find at-risk thinkers who had started to identify holes in the orthodoxy, communicate that they might be right but that it could be dangerous to go public, fill in whatever gaps are necessary to make their worldview consistent again, prevent overcorrection, and communicate some intuitions about exactly which areas to avoid. For this purpose, they might occasionally let themselves be seen associating with slightly heretical positions, so that they stand out to proto-heretics as a good source of information. They might very occasionally make calculated strikes against orthodox overreach in order to relieve some of their own burdens. The rest of the time, they would just stay quiet and do good work in their own fields.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2024-03-30T20:48:48.141Z · LW(p) · GW(p)

So, basically, allowing the ideas in question to be discussed on one’s blog/forum/whatever, instead of banning people for discussing them?

Replies from: Muireall
comment by Muireall · 2024-03-30T21:05:01.584Z · LW(p) · GW(p)

Yeah, plus all the other stuff Alexander and Metz wrote about it, I guess.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2024-03-30T21:28:19.672Z · LW(p) · GW(p)

Could you (or someone else) summarize the other stuff, in the context of my question? I mean, I read it, there’s various things in there, but I’m not sure which of it is supposed to be a definition of “making space for” an idea.

comment by DPiepgrass · 2024-04-04T02:05:20.634Z · LW(p) · GW(p)

Scott had every opportunity to say "actually, I disagree with Murray about..." but he didn't, because he agrees with Murray

[citation needed] for those last four words. In the paragraph before the one frankybegs quoted, Scott said:

Some people wrote me to complain that I handled this in a cowardly way - I showed that the specific thing the journalist quoted wasn’t a reference to The Bell Curve, but I never answered the broader question of what I thought of the book. They demanded I come out and give my opinion openly. Well, the most direct answer is that I've never read it.

Having never read The Bell Curve, it would be uncharacteristic of him to say "I disagree with Murray about [things in The Bell Curve]", don't you think?

Replies from: andrew-burns
comment by Andrew Burns (andrew-burns) · 2024-04-04T13:54:45.053Z · LW(p) · GW(p)

Replies from: tailcalled
comment by tailcalled · 2024-04-04T15:44:10.184Z · LW(p) · GW(p)

Actually one does need to read The Bell Curve to know what's in it. There's a lot of slander going around about it.

comment by wilkox · 2024-03-28T05:10:43.866Z · LW(p) · GW(p)

It seems like you think what Metz wrote was acceptable because it all adds up to presenting the truth in the end, even if the way it was presented was 'unconvincing' and the evidence 'embarrassing[ly]' weak. I don't buy the principle that 'bad epistemology is fine if the outcome is true knowledge', and I also don't buy that this happened in this particular case, nor that this is what Metz intended.

If Metz's goal was to inform his readers about Scott's position, he failed. He didn't give any facts other than that Scott 'aligned himself with' and quoted somebody who holds a politically unacceptable view. The majority of readers will glean from this nothing but a vague association between Scott and racism, as the author intended. More sophisticated readers will notice what Metz is doing, and assume that if there was substantial evidence that Scott held an unpalatable view Metz would have gladly published that instead of resorting to an oblique smear by association. Nobody ends up better informed about what Scott actually believes.

I think trevor is right to invoke the quokka analogy [LW(p) · GW(p)]. Rationalists are tying ourselves in knots in a long comment thread debating if actually, technically, strictly, Metz was misleading. Meanwhile, Metz never cared about this in the first place, and is continuing to enjoy a successful career employing tabloid rhetorical tricks.

Replies from: LGS
comment by LGS · 2024-03-28T07:34:54.683Z · LW(p) · GW(p)

The epistemology was not bad behind the scenes, it was just not presented to the readers. That is unfortunate but it is hard to write a NYT article (there are limits on how many receipts you can put in an article and some of the sources may have been off the record).

Cade correctly informed the readers that Scott is aligned with Murray on race and IQ. This is true and informative, and at the time some people here doubted it before the one email leaked. Basically, Cade's presented evidence sucked but someone going with the heuristic "it's in the NYT so it must be true" would have been correctly informed.

I don't know if Cade had a history of "tabloid rhetorical tricks" but I think it is extremely unbecoming to criticize a reporter for giving true information that happens to paint the community in a bad light. Also, the post you linked by Trevor uses some tabloid rhetorical tricks: it says Cade sneered at AI risk but links to an article that literally doesn't mention AI risk at all.

Replies from: frankybegs, wilkox
comment by frankybegs · 2024-03-28T13:28:34.646Z · LW(p) · GW(p)

it is hard to write a NYT article


Clearly. But if you can't do it without resorting to deliberately misleading rhetorical sleights to imply something you believe to be true, the correct response is not to.

Or, more realistically, if you can't substantiate a particular claim with any supporting facts, due to the limitations of the form, you shouldn't include it nor insinuate it indirectly, especially if it's hugely inflammatory. If you simply cannot fit in the "receipts" needed to substantiate a claim (which seems implausible anyway), as a journalist you should omit that claim. If there isn't space for the evidence, there isn't space for the accusation.

comment by wilkox · 2024-03-28T09:07:55.080Z · LW(p) · GW(p)

The epistemology was not bad behind the scenes, it was just not presented to the readers. That is unfortunate but it is hard to write a NYT article (there are limits on how many receipts you can put in an article and some of the sources may have been off the record).

I'd have more trust in the writing of a journalist who presents what they believe to be the actual facts in support of a claim, than one who publishes vague insinuations because writing articles is hard.

Cade correctly informed the readers that Scott is aligned with Murray on race and IQ.

He really didn’t. Firstly, in the literal sense that Metz carefully avoided making this claim (he stated that Scott aligned himself with Murray, and that Murray holds views on race and IQ, but not that Scott aligns himself with Murray on these views). Secondly, and more importantly, even if I accept the implied claim I still don’t know what Scott supposedly believes about race and IQ. I don’t know what ‘is aligned with Murray on race and IQ’ actually means beyond connotatively ‘is racist’. If this paragraph of Metz’s article was intended to be informative (it was not), I am not informed.

comment by cubefox · 2024-03-27T10:20:18.950Z · LW(p) · GW(p)

Imagine you are a philosopher in the 17th century, and someone accuses you of atheism, or says "He aligns himself with Baruch Spinoza". This could easily have massive consequences for you. You may face extensive social and legal punishment. You can't even honestly defend yourself, because the accusation of heresy is an asymmetric discourse situation [LW(p) · GW(p)]. Is your accuser off the hook when you end up dying in prison? He can just say: Sucks for him, but it's not my fault, I just innocently reported his beliefs.

Replies from: LGS
comment by LGS · 2024-03-27T20:17:55.885Z · LW(p) · GW(p)

Wait a minute. Please think through this objection. You are saying that if the NYT encountered factually true criticisms of an important public figure, it would be immoral of them to mention this in an article about that figure?

Does it bother you that your prediction didn't actually happen? Scott is not dying in prison!

This objection is just ridiculous, sorry. Scott made it an active project to promote a worldview that he believes in and is important to him -- he specifically said he will mention race/IQ/genes in the context of Jews, because that's more palatable to the public. (I'm not criticizing this right now, just observing it.) Yet if the NYT so much as mentions this, they're guilty of killing him? What other important true facts about the world am I not allowed to say according to the rationalist community? I thought there was some mantra of like "that which can be destroyed by the truth should be", but I guess this does not apply to criticisms of people you like?

Replies from: cubefox
comment by cubefox · 2024-03-27T21:24:16.402Z · LW(p) · GW(p)

Wait a minute. Please think through this objection. You are saying that if the NYT encountered factually true criticisms of an important public figure, it would be immoral of them to mention this in an article about that figure?

No, not in general. But in the specific case at hand, yes. We know Metz read quite a few of Scott's blog posts, and all the necessary context and careful subtlety with which Scott approaches this topic (e.g. in Against Murderism) is totally lost in an offhand remark in a NYT article. It's like someone in the 17th century writing about Spinoza and mentioning, as a side note, "and oh by the way, he denies the existence of a personal God", and then moving on to something else. Shortening his position like this, where it must seem outrageous and immoral, is in effect defamatory.

If some highly sensitive topic can't be addressed in a short article with the required carefulness, it should simply not be addressed at all. That's especially true for Scott, who wrote about countless other topics. There is no requirement to mention everything. (For Spinoza an argument could be made that his, at the time, outrageous position plays a fairly central role in his work, but that's not the case for Scott.)

Does it bother you that your prediction didn't actually happen? Scott is not dying in prison!

Luckily Scott didn't have to fear legal consequences. But substantial social consequences were very much on the table. We know of other people who lost their job or entire career prospects for similar reasons. Nick Bostrom probably dodged the bullet by a narrow margin.

Replies from: LGS
comment by LGS · 2024-03-27T21:48:00.815Z · LW(p) · GW(p)

What you're suggesting amounts to saying that on some topics, it is not OK to mention important people's true views because other people find those views objectionable. And this holds even if the important people promote those views and try to convince others of them. I don't think this is reasonable.

As a side note, it's funny to me that you link to Against Murderism as an example of "careful subtlety". It's one of my least favorite articles by Scott, and while I don't generally think Scott is racist that one almost made me change my mind. It is just a very bad article. It tries to define racism out of existence. It doesn't even really attempt to give a good definition -- Scott is a smart person, he could do MUCH better than those definitions if he tried. For example, a major part of the rationalist movement was originally about cognitive biases, yet "racism defined as cognitive bias" does not appear in the article at all. Did Scott really not think of it?

Replies from: cubefox, DPiepgrass
comment by cubefox · 2024-03-27T22:24:31.773Z · LW(p) · GW(p)

What you're suggesting amounts to saying that on some topics, it is not OK to mention important people's true views because other people find those views objectionable.

It's okay to mention an author's taboo views on a complex and sensitive topic when they are discussed in a longer format which does justice to how they were originally presented. Just giving a necessarily offensive-sounding short summary is only useful as a weapon to damage the author's reputation.

comment by DPiepgrass · 2024-04-04T04:25:51.176Z · LW(p) · GW(p)

Huh? Who defines racism as cognitive bias? I've never seen that before, so expecting Scott in particular to define it as such seems like special pleading.

What would your definition be, and why would it be better?

Scott endorses this definition:

Definition By Motives: An irrational feeling of hatred toward some race that causes someone to want to hurt or discriminate against them.

Setting aside that it says "irrational feeling" instead of "cognitive bias", how does this "tr[y] to define racism out of existence"?

Replies from: Raemon
comment by Raemon · 2024-04-04T04:54:17.476Z · LW(p) · GW(p)

fyi I think "racism as cognitive bias" was a fairly natural and common way of framing it before I showed up on LessWrong 10 years ago.

comment by Elizabeth (pktechgirl) · 2024-03-28T18:49:55.578Z · LW(p) · GW(p)

ZMD: Looking at "Silicon Valley's Safe Space", I don't think it was a good article. Specifically, you wrote,

In one post, [Alexander] aligned himself with Charles Murray, who proposed a link between race and I.Q. in "The Bell Curve." In another, he pointed out that Mr. Murray believes Black people "are genetically less intelligent than white people."

End quote. So, the problem with this is that the specific post in which Alexander aligned himself with Murray was not talking about race. It was specifically talking about whether specific programs to alleviate poverty will actually work or not.

 

I think Zack's description might be too charitable to Scott. From his description I thought the reference would be strictly about poverty. But the full quote includes a lot about genetics and ability to earn money. The full quote is:

The only public figure I can think of in the southeast quadrant with me is Charles Murray. Neither he nor I would dare reduce all class differences to heredity, and he in particular has some very sophisticated theories about class and culture. But he shares my skepticism that the 55 year old Kentucky trucker can be taught to code, and I don’t think he’s too sanguine about the trucker’s kids either. His solution is a basic income guarantee, and I guess that’s mine too. Not because I have great answers to all of the QZ article’s problems. But just because I don’t have any better ideas.

Scott doesn't mention race, but it's an obvious implication[1], especially when quoting someone the NYT crowd views as anathema. I think Metz could have quoted that paragraph, and maybe given the NYT consensus view on him for anyone who didn't know, and readers would think very poorly of Scott[2]

I bring this up for a couple of reasons: 

  1. it seems in the spirit of Zack's post to point out when he made an error in presenting evidence.
  2. it looks like Metz chose to play stupid symmetric warfare games, instead of the epistemically virtuous thing of sharing a direct quote. The quote should have gotten him what he wanted, so why be dishonest about it? I have some hypotheses, none of which lead me to trust Metz.
  1. ^

    ETA: If you hold the very common assumption that race is a good proxy for genetics. I disagree, but that is the default view.

  2. ^

    To be clear: that paragraph doesn't make me think poorly of Scott. I personally agree with Scott that genetics influences jobs and income. I like UBI for lots of reasons, including this one. If I read that paragraph I wouldn't find any of the views objectionable (although a little eyebrow raise that he couldn't find an example with a less toxic reputation- but I can't immediately think of another example that fits either). 

comment by interstice · 2024-03-26T22:12:16.708Z · LW(p) · GW(p)

ZMD: Looking at “Silicon Valley’s Safe Space”, I don’t think it was a good article. Specifically, you wrote,

In one post, [Alexander] aligned himself with Charles Murray, who proposed a link between race and I.Q. in “The Bell Curve.” In another, he pointed out that Mr. Murray believes Black people “are genetically less intelligent than white people.”

End quote. So, the problem with this is that the specific post in which Alexander aligned himself with Murray was not talking about race. It was specifically talking about whether specific programs to alleviate poverty will actually work or not.

So on the one hand, this particular paragraph does seem like it's misleadingly implying Scott was endorsing views on race/IQ similar to Murray's even though, based on the quoted passages alone, there is little reason to think that. On the other hand, it's totally true that Scott was running a strategy of bringing up or "arguing" with hereditarians with the goal of broadly promoting those views in the rationalist community, without directly being seen to endorse them. So I think it's actually pretty legitimate for Metz to bring up incidents like this or the Xenosystems link in the blogroll. Scott was basically using a strategy of communicating his views in a plausibly deniable way by saying many little things which are more likely if he were a secret hereditarian, but any individual instance of which is not so damning. So I feel it's total BS to then complain about how tenuous the individual instances Metz brought up are -- he's using them as examples of a larger trend, which is inevitable given the strategy Scott was using.

(This is not to say that I think Scott should be "canceled" for these views or whatever, not at all, but at this stage the threat of cancelation seems to have passed and we can at least be honest about what actually happened)

Replies from: interstice, Zack_M_Davis, cubefox
comment by interstice · 2024-03-26T22:28:52.760Z · LW(p) · GW(p)

"For my friends, charitability -- for my enemies, Bayes Rule"

comment by Zack_M_Davis · 2024-03-27T06:19:36.834Z · LW(p) · GW(p)

Just because the defendant is actually guilty, doesn't mean the prosecutor should be able to get away with making a tenuous case! (I wrote more about this in my memoir.)

Replies from: deluks917, interstice
comment by sapphire (deluks917) · 2024-04-01T04:25:58.022Z · LW(p) · GW(p)

It feels like bad praxis to punish people for things they got right! 

comment by interstice · 2024-03-27T16:37:11.403Z · LW(p) · GW(p)

But was Metz acting as a "prosecutor" here? He didn't say "this proves Scott is a hereditarian" or whatever, he just brings up two instances where Scott said things in a way that might lead people to make certain inferences... correct inferences, as it turns out. Like yeah, maybe it would have been more epistemically scrupulous if he said "these articles represent two instances of a larger pattern which is strong Bayesian evidence even though they are not highly convincing on their own", but I hardly think this warrants remaining outraged years after the fact.

comment by cubefox · 2024-03-27T21:58:44.903Z · LW(p) · GW(p)

On the one hand you say

So I think it's actually pretty legitimate for Metz to bring up incidents like this

but also

This is not to say that I think Scott should be "canceled" for these views or whatever, not at all

which seems like a double standard. E.g., assume the NYT article had actually led to Scott's cancellation, which wasn't an implausible thing for Metz to expect.

(On a historical analogy, Scott's case seems quite analogous to the historical case of Baruch Spinoza. Spinoza could be (and was) accused of employing a similar strategy to get, with his pantheist philosophy, the highly taboo topic of atheism into the mainstream philosophical discourse. If so, the strategy was successful.)

Replies from: interstice
comment by interstice · 2024-03-27T23:03:12.263Z · LW(p) · GW(p)

I mean it's epistemically legitimate for him to bring them up. They are in fact evidence that Scott holds hereditarian views.

Now, regarding the "overall" legitimacy of calling attention to someone's controversial views, it probably does have a chilling effect, and threatens Scott's livelihood which I don't like. But I think that continuing to be mad at Metz for his sloppy inference doesn't really make sense here. Sure, maybe at the time it was tactically smart to feign outrage that Metz would dare to imply Scott was a hereditarian, but now that we have direct documentation of Scott admitting exactly that, it's just silly. If you're still worried about Scott getting canceled (seems unlikely at this stage tbh) it's better to just move on and stop drawing attention to the issue by bringing it up over and over.

Replies from: cubefox
comment by cubefox · 2024-03-27T23:28:42.382Z · LW(p) · GW(p)

Beliefs can only be epistemically legitimate, actions can only be morally legitimate. To "bring something up" is an action, not a belief. My point is that this action wasn't legitimate, at least not in this heavily abridged form.

Replies from: interstice
comment by interstice · 2024-03-27T23:50:31.078Z · LW(p) · GW(p)

Statements can be epistemically legit or not. Statements have content, they aren't just levers for influencing the world.

Replies from: cubefox
comment by cubefox · 2024-03-27T23:56:16.133Z · LW(p) · GW(p)

If you mean by "statement" an action (a physical utterance) then I disagree. If you mean an abstract object, a proposition, for which someone could have more or less evidence, or reason to believe, then I agree.

comment by Austin Chen (austin-chen) · 2024-03-26T20:03:03.982Z · LW(p) · GW(p)

So, I love Scott, consider CM's original article poorly written, and also think doxxing is quite rude, but with all the disclaimers out of the way: on the specific issue of revealing Scott's last name, Cade Metz seems more right than Scott here? Scott was worried about a bunch of knock-on effects of having his last name published, but none of that bad stuff happened.[1]

I feel like at this point in the era of the internet, doxxing (at least, in the form of involuntary identity association) is much more of an imagined threat than a real harm. Beff Jezos's more recent doxxing also comes to mind as something that was more notable for the controversy itself than for any factual harm done to Jezos as a result.

  1. ^

    Scott did take a bunch of ameliorating steps, such as leaving his past job -- but my best guess is that none of that would have actually been necessary. AFAICT he's actually in a much better financial position thanks to subsequent transition to Substack -- though crediting Cade Metz for this is a bit like crediting Judas for starting Christianity.

Replies from: habryka4, ricraz, steve2152, JohnBuridan
comment by habryka (habryka4) · 2024-03-26T22:12:23.412Z · LW(p) · GW(p)

Scott was worried about a bunch of knock-on effects of having his last name published, but none of that bad stuff happened.

Didn't Scott quit his job as a result of this? I don't have high confidence on how bad things would have been if Scott hadn't taken costly actions to reduce the costs, but it seems that the evidence is mostly screened off by Scott doing a lot of stuff to make the consequences less bad and/or eating some of the costs in anticipation.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2024-03-26T22:32:14.556Z · LW(p) · GW(p)

I mean, Scott seems to be in a pretty good situation now, in many ways better than before.

And yes, this is consistent with NYT hurting him in expectation.

But one difference between doxxing normal people versus doxxing "influential people" is that influential people typically have enough power to land on their feet when e.g. they lose a job. And so the fact that this has worked out well for Scott (and, seemingly, better than he expected) is some evidence that the NYT was better-calibrated about how influential Scott is than he was.

This seems like an example of the very very prevalent effect that Scott wrote about in "against bravery debates", where everyone thinks their group is less powerful than they actually are. I don't think there's a widely-accepted name for it; I sometimes use underdog bias. My main diagnosis of the NYT/SSC incident is that rationalists were caught up by underdog bias, even as they leveraged thousands of influential tech people to attack the NYT.

Replies from: habryka4, localdeity
comment by habryka (habryka4) · 2024-03-26T22:39:41.487Z · LW(p) · GW(p)

I don't think the NYT thing played much of a role in Scott being better off now. My guess is a small minority of people are subscribed to his Substack because of the NYT thing (the dominant factor is clearly the popularity of his writing). 

My guess is the NYT thing hurt him quite a bit and made the potential consequences of him saying controversial things a lot worse for him. He has tried to do things to reduce the damage of that, but I generally don't believe that "someone seems to be doing fine" is ever much evidence against "this action hurt this person". Competent people often do fine even when faced with substantial adversity; this doesn't mean the adversity is fine.

I do think it's clear the consequences weren't catastrophic, and I also separately actually have a lot of sympathy for giving newspapers a huge amount of leeway to report on whatever true thing they want to report on, so that I overall don't have a super strong take here, but I also think the costs here were probably on-net pretty substantial (and also separately that the evidence of how things have played out since then probably didn't do very much to sway me from my priors of how much the cost would be, due to Scott internalizing the costs in advance a bunch).

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2024-03-27T02:36:06.187Z · LW(p) · GW(p)

I don't think the NYT thing played much of a role in Scott being better off now. My guess is a small minority of people are subscribed to his Substack because of the NYT thing (the dominant factor is clearly the popularity of his writing).

What credence do you have that he would have started the substack at all without the NYT thing? I don't have much information, but probably less than 80%. The timing sure seems pretty suggestive.

(I'm also curious about the likelihood that he would have started his startup without the NYT thing, but that's less relevant since I don't know whether the startup is actually going well.)

My guess is the NYT thing hurt him quite a bit and made the potential consequences of him saying controversial things a lot worse for him.

Presumably this is true of most previously-low-profile people that the NYT chooses to write about in not-maximally-positive ways, so it's not a reasonable standard to hold them to. And so as a general rule I do think "the amount of adversity that you get when you used to be an influential yet unknown person but suddenly get a single media feature about you" is actually fine to inflict on people. In fact, I'd expect that many (or even most) people in this category will have a worse time of it than Scott—e.g. because they do things that are more politically controversial than Scott, have fewer avenues to make money, etc.

Replies from: habryka4
comment by habryka (habryka4) · 2024-03-27T06:58:37.128Z · LW(p) · GW(p)

What credence do you have that he would have started the substack at all without the NYT thing? I don't have much information, but probably less than 80%.

I mean, just because him starting a Substack was precipitated by a bunch of stress and uncertainty does not mean I credit the stress and uncertainty for the benefits of the Substack. Scott always could have started a Substack, and presumably had reasons for not doing so before the NYT thing. As an analogy, if I work at your company and have a terrible time, and then I quit, and then get a great job somewhere else, of course you get no credit for the quality of my new job. 

The Substack situation seems analogous. It approximately does not matter whether Scott would have started the Substack without the NYT thing, so I don't see the relevance of the question when trying to judge whether the NYT thing caused a bunch of harm. 

Replies from: localdeity
comment by localdeity · 2024-03-27T11:46:22.862Z · LW(p) · GW(p)

In general, I would say:

  • Just because someone wasn't successfully canceled, doesn't mean there wasn't a cancellation attempt, nor that most other people in their position would have withstood it
  • Just because they're doing well now, doesn't mean they wouldn't have been doing better without the cancellation attempt
  • Even if the cancellation attempt itself did end up actually benefiting them, because they had the right personality and skills and position, that doesn't mean this should have been expected ex ante
    • (After all, if it's clear in advance to everyone involved that someone is uncancellable, then they're less likely to try)
  • Even if it's factually true that someone has the qualities and position to come out ahead after cancellation, they may not know or believe this, and thus the prospect of cancellation may successfully silence them
  • Even if they're currently uncancellable and know this, that doesn't mean they'll remain so in the future
    • E.g. if they're so good at what they do as to be unfireable, then maybe within a few years they'll be offered a CEO position, at which point any cancel-worthy things they said years ago may limit their career; and if they foresee this, then that incentivizes self-censorship

The point is, cancellation attempts are bad because they create a chilling effect, an environment that incentivizes self-censorship and distorts intellectual discussion.  And arguments of the form "Hey, this particular cancellation attempt wasn't that bad because the target did well" fall to one or more of the above points: the attempts still create chilling effects, and that still makes them bad.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2024-03-27T19:39:28.530Z · LW(p) · GW(p)

But it wasn't a cancellation attempt. The issue at hand is whether a policy of doxxing influential people is a good idea. The benefits are transparency about who is influencing society, and in which ways; the harms include the ones you've listed above, about chilling effects.

It's hard to weigh these against each other, but one way you might do so is by following a policy like "doxx people only if they're influential enough that they're probably robust to things like losing their job". The correlation between "influential enough to be newsworthy" and "has many options open to them" isn't perfect, but it's strong enough that this policy seems pretty reasonable to me.

To flip this around, let's consider individuals who are quietly influential in other spheres. For example, I expect there are people who many news editors listen to, when deciding how their editorial policies should work. I expect there are people who many Democrat/Republican staffers listen to, when considering how to shape policy. In general I think transparency about these people would be pretty good for the world. If those people happened to have day jobs which would suffer from that transparency, I would say "Look, you chose to have a bunch of influence, which the world should know about, and I expect you can leverage this influence to end up in a good position somehow even after I run some articles on you. Maybe you're one of the few highly-influential people for whom this happens to not be true, but it seems like a reasonable policy to assume that if someone is actually pretty influential then they'll land on their feet either way." And the fact that this was true for Scott is some evidence that this would be a reasonable policy.

(I also think that taking someone influential who didn't previously have a public profile, and giving them a public profile under their real name, is structurally pretty analogous to doxxing. Many of the costs are the same. In both cases one of the key benefits is allowing people to cross-reference information about that person to get a better picture of who is influencing the world, and how.)

Replies from: ryan_greenblatt, cubefox, localdeity
comment by ryan_greenblatt · 2024-03-27T20:49:45.490Z · LW(p) · GW(p)

The benefits are transparency about who is influencing society

In this particular case, I don't really see any transparency benefits. If it was the case that there was important public information attached to Scott's full name, then this argument would make sense to me.

(E.g. if Scott Alexander were actually Mark Zuckerberg or some other public figure with information attached to their real full name, then this argument would go through.)

Fair enough if the NYT needs to have an extremely coarse-grained policy where they always dox influential people consistently and can't do any cost-benefit analysis on particular cases.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2024-03-27T22:44:50.771Z · LW(p) · GW(p)

If it was the case that there was important public information attached to Scott's full name, then this argument would make sense to me.

In general having someone's actual name public makes it much easier to find out other public information attached to them. E.g. imagine if Scott were involved in shady business dealings under his real name. This is the sort of thing that the NYT wouldn't necessarily discover just by writing the profile of him, but other people could subsequently discover after he was doxxed.

To be clear, btw, I'm not arguing that this doxxing policy is correct, all things considered. Personally I think the benefits of pseudonymity for a healthy ecosystem outweigh the public value of transparency about real names. I'm just arguing that there are policies consistent with the NYT's actions which are fairly reasonable.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2024-03-28T08:24:42.180Z · LW(p) · GW(p)

Many comments pointed out that the NYT does not in fact have a consistent policy of always revealing people's true names. There's even a news editorial about this, which I point out in case you trust the fact-checking of the NY Post more.

I think that leaves 3 possible explanations of what happened:

  1. NYT has a general policy of revealing people's true names, which it doesn't consistently apply but ended up applying in this case for no particular reason.
  2. There's an inconsistently applied policy, and Cade Metz's (and/or his editors') dislike of Scott contributed (consciously or subconsciously) to insistence on applying the policy in this particular case.
  3. There is no policy and it was a purely personal decision.

In my view, most rationalists seem to be operating under a reasonable probability distribution over these hypotheses, informed by evidence such as Metz's mention of Charles Murray, lack of a public written policy about revealing real names, and lack of evidence that a private written policy exists.

comment by cubefox · 2024-03-27T23:47:41.959Z · LW(p) · GW(p)

But it wasn't a cancellation attempt.

In effect, Cade Metz indirectly accused Scott of racism, which arguably counts as a cancellation attempt.

comment by localdeity · 2024-03-27T21:37:53.211Z · LW(p) · GW(p)

Ok, let's consider the case of shadowy influencers like that.  It would be nice to know who such people were, sure.  If they were up to nefarious things, or openly subscribed to ideologies that justify awful actions, then I'd like to know that.  If there was an article that accurately laid out the nefarious things, that would be nice.  If the article cherry-picked, presented facts misleadingly, made scurrilous insinuations every few paragraphs (without technically saying anything provably false)—that would be bad, possibly quite bad, but in some sense it's par for the course for a certain tier of political writing.

When I see the combination of making scurrilous insinuations every few paragraphs and doxxing the alleged villain, I think that's where I have to treat it as a deliberate cancellation attempt on the person.  If it wasn't actually deliberate, then it was at least "reckless disregard" or something, and I think it should be categorized the same way.  If you're going to dox someone, I figure you accept an increased responsibility to be careful about what you say about them.  (Presumably for similar reasons, libel laws are stricter about non-public figures.  No, I'm not saying it's libel when the statements are "not technically lying"; but it's bad behavior and should be known as such.)

As for the "they're probably robust" aspect... As mentioned in my other comment [LW(p) · GW(p)], even if they predictably "do well" afterwards, that doesn't mean they haven't been significantly harmed.  If their influence is a following of 10M people, and the cancellation attempt reduces their influence by 40%, then it is simultaneously true that (a) "They have an audience of 6M people, they're doing fine", and (b) "They've been significantly harmed, and many people in such a position who anticipated this outcome would have a significant incentive to self-censor".  It remains a bad thing.  It's less bad than doing it to random civilians, sure, but it remains bad.

comment by localdeity · 2024-03-27T15:10:47.105Z · LW(p) · GW(p)

But one difference between doxxing normal people versus doxxing "influential people" is that influential people typically have enough power to land on their feet when e.g. they lose a job.

It may decrease their influence substantially, though.  I'll quote at length from here.  It's not about doxxing per se, but it's about cancellation attempts (which doxxing a heretic enables), and about arguments similar to the above:

If you’re a writer, artist or academic who has strayed beyond the narrow bounds of approved discourse, two consequences will be intimately familiar. The first is that it becomes harder to get a hearing about anything. The second is that if you do manage to say anything publicly — especially if you talk about the silencing — it will be taken as proof that you have not been silenced.

This is the logic of witch-ducking. If a woman drowns, she isn’t a witch; if she floats, she is, and must be dispatched some other way. Either way, she ends up dead.

The only counter to this is specific examples. But censorship is usually covert: when you’re passed over to speak at a conference, exhibit in a gallery or apply for a visiting fellowship, you rarely find out. Every now and then, however, the censors tip their hands.

And so, for everyone who says I can’t have been cancelled because they can still hear me, here’s the evidence.

The first time I know I was censored was even before my book criticising trans ideology came out in mid-2021. I had been asked to talk about it on the podcast of Intelligence Squared, a media company that, according to its website, aims to “promote a global conversation”. We had booked a date and time.

But as the date approached I discovered I had been dropped. When I asked why, the response was surprisingly frank: fear of a social-media pile-on, sponsors getting cold feet and younger staff causing grief. The CEO of Intelligence Squared is a former war correspondent who has written a book about his experiences in Kosovo. But at the prospect of platforming a woman whose main message is that humans come in two sexes, his courage apparently ran out.

Next came the Irish Times, my home country’s paper of record. Soon after my book came out a well-known correspondent rang me, said he had stayed up all night to finish it and wanted to write about it. He interviewed me, filed the piece, checked the quotes — and then silence. When I nudged by email, he said the piece had been spiked as it was going to press.

Sometime around then it was the BBC’s turn. I don’t know the exact date because I only found out months later, when I met a presenter from a flagship news programme. Such a shame you couldn’t come on the show, he said, to which I replied I had never been asked. It turned out that he had told a researcher to invite me on, but the researcher hadn’t, instead simply lying that I wasn’t available. I’ve still never been on the BBC to discuss trans issues.

Next came ABC, the Australian state broadcaster, which interviewed me for a radio show about religion and ethics. This time, when I nudged, I was told there had been “technical glitches” with the recording, but they would “love to revisit this one in the future”. They’ve never been back in touch. [... several more examples ...]

Now, the author has a bestselling book, has been on dozens of podcasts, and works for an advocacy organization that's 100% behind her message (she's not technically listed as a cofounder, but might as well be).  She has certainly landed on her feet and has a decent level of reach; yet, clearly, if not for a bunch of incidents like the above—and, as she says, probably a lot more incidents for which she doesn't have specific evidence—then she would have had much greater reach.

In Scott's case... if we consider the counterfactual where there wasn't an NYT article making such smears against Scott, then, who knows, maybe today some major news organizations (perhaps the NYT itself!) would have approached him for permission to republish some Slate Star Codex articles on their websites, perhaps specifically some of those on AI during the last ~year when AI became big news.  Or offered to interview him for a huge audience on important topics, or something.

So be careful not to underestimate the extent of unseen censorship and cancellation, and therefore the damage done by "naming and shaming" tactics.

comment by Richard_Ngo (ricraz) · 2024-03-26T20:56:15.195Z · LW(p) · GW(p)

+1, I agree with all of this, and generally consider the SSC/NYT incident to be an example of the rationalist community being highly tribalist.

(more on this in a twitter thread, which I've copied over to LW here [LW(p) · GW(p)])

Replies from: nikolas-kuhn
comment by Amalthea (nikolas-kuhn) · 2024-03-27T09:37:26.051Z · LW(p) · GW(p)

What do you mean by example, here? That this is demonstrating a broader property, or that in this situation, there was a tribal dynamic?

comment by Steven Byrnes (steve2152) · 2024-03-26T20:31:00.231Z · LW(p) · GW(p)

There are two issues: what is the cost of doxxing, and what is the benefit of doxxing. I think an equally important crux of disagreement is the latter, not the former. IMO the benefit was zero: it's not newsworthy, it brings no relevant insight, publishing it does not advance the public interest, it's totally irrelevant to the story. Here CM doesn't directly argue that there was any benefit to doxxing; instead he kinda conveys a vibe / ideology that if something is true then it is self-evidently intrinsically good to publish it (but of course that self-evident intrinsic goodness can be outweighed by sufficiently large costs). Anyway, if the true benefit is zero (as I believe), then we don't have to quibble over whether the cost was big or small.

Replies from: tailcalled
comment by tailcalled · 2024-03-26T21:32:11.035Z · LW(p) · GW(p)

Trouble is, the rationalist community tends to get involved with taboo topics and regularly defends itself by saying that it's because it's self-evidently good for the truth to be known. Thus there is (at least apparently) an inconsistency.

Replies from: steve2152, sil-ver, DPiepgrass
comment by Steven Byrnes (steve2152) · 2024-03-26T21:45:37.762Z · LW(p) · GW(p)

There's a fact of the matter about whether the sidewalk on my street has an odd vs even number of pebbles on it, but I think everyone including rationalists will agree that there's no benefit of sharing that information. It's not relevant for anything else.

By contrast, taboo topics generally become taboo because they have important consequences for decisions and policy and life.

Replies from: tailcalled, nikolas-kuhn
comment by tailcalled · 2024-03-27T08:13:05.170Z · LW(p) · GW(p)

This is the "rationalists' sexist and racist beliefs are linked to support for sexist and racist policies" argument, which is something that some of the people who promote taboo beliefs try to avoid. For example, Scott Alexander argues that it can be understood simply as having a curious itch to understand "the most irrelevant orthodoxy you can think of", which sure sounds different from "because they have important consequences for decisions and policy and life".

Replies from: steve2152, Algon
comment by Steven Byrnes (steve2152) · 2024-03-27T13:06:11.602Z · LW(p) · GW(p)

I don’t think I was making that argument.

If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.

Once we’ve established that “X is false” is newsworthy at all, we still need to weigh the cost vs benefits of disseminating that information.

I hope that everyone, including rationalists, is in agreement about all this. For example, prominent rationalists are familiar with the idea of infohazards, reputational risks, picking your battles, simulacra 2, and so on. I've seen a lot of strong disagreement on this forum about what newsworthy information should and shouldn't be disseminated and in what formats and contexts. I sure have my own opinions!

…But all that is irrelevant to this discussion here. I was talking about whether Scott's last name is newsworthy in the first place. For example, it's not the case that lots of people around the world were under the false impression that Scott's true last name was McSquiggles, and now NYT is going to correct the record. (It's possible that lots of people around the world were under the false impression that Scott's true last name is Alexander, but that misconception can be easily corrected by merely saying it's a pseudonym.) If Scott's true last name revealed that he was secretly British royalty, or secretly Albert Einstein's grandson, etc., that would also at least potentially be newsworthy.

Not everything is newsworthy. The pebbles-on-the-sidewalk example I mentioned above is not newsworthy. I think Scott’s name is not newsworthy either. Incidentally, I also think there should be a higher bar for what counts as newsworthy in NYT, compared to what counts as newsworthy when I’m chatting with my spouse about what happened today, because of the higher opportunity cost.

Replies from: tailcalled, xpym
comment by tailcalled · 2024-03-27T14:26:13.518Z · LW(p) · GW(p)

I don’t think I was making that argument.

I agree, I'm just trying to say that the common rationalist theories on this topic often disagree with your take.

If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.

I think this argument would be more transparent with examples. Whenever I think of examples of popular beliefs that it would be reasonable to change one's support of in the light of this, they end up involving highly politicized taboos.

Once we’ve established that “X is false” is newsworthy at all, we still need to weigh the cost vs benefits of disseminating that information.

I hope that everyone, including rationalists, is in agreement about all this. For example, prominent rationalists are familiar with the idea of infohazards, reputational risks, picking your battles, simulacra 2, and so on. I've seen a lot of strong disagreement on this forum about what newsworthy information should and shouldn't be disseminated and in what formats and contexts. I sure have my own opinions!

There are different distinctions when it comes to infohazards. One is non-Bayesian infohazards, where certain kinds of information are thought to break people's rationality; that seems obscure and not so relevant here. Another is recipes for destruction, where you give a small hostile faction the ability to unilaterally cause harm. This could theoretically be applicable if we were talking about publishing Scott Alexander's personal address and his habits of when and where he goes, as that makes it more practical for terrorists to attack him. But that seems less relevant for his real name, when it is readily available and he ends up facing tons of attention regardless.

Reputational risks can at times be acknowledged, but at the same time reputational risks are one of the main justifications for the taboos. Stereotyping is basically reputational risk on a group level; if rationalists dismiss the danger of stereotyping with "well, I just have a curious itch", that sure seems like a strong presumption of truthtelling over reputational risk.

Picking your battles seems mostly justified on pragmatics, so it seems to me that the NYT can just go "this is a battle that we can afford to pick".

Rationalists seem to usually consider simulacrum level 2 to be pathological, on the basis of presumption of the desirability of truth.

…But all that is irrelevant to this discussion here. I was talking about whether Scott's last name is newsworthy in the first place. For example, it's not the case that lots of people around the world were under the false impression that Scott's true last name was McSquiggles, and now NYT is going to correct the record. (It's possible that lots of people around the world were under the false impression that Scott's true last name is Alexander, but that misconception can be easily corrected by merely saying it's a pseudonym.) If Scott's true last name revealed that he was secretly British royalty, or secretly Albert Einstein's grandson, etc., that would also at least potentially be newsworthy.

Not everything is newsworthy. The pebbles-on-the-sidewalk example I mentioned above is not newsworthy. I think Scott's name is not newsworthy either. Incidentally, I also think there should be a higher bar for what counts as newsworthy in NYT, compared to what counts as newsworthy when I'm chatting with my spouse about what happened today, because of the higher opportunity cost.

I think this is a perfectly valid argument for why NYT shouldn't publish it, it just doesn't seem very strong or robust and doesn't square well with the general pro-truth ideology.

Like, if the NYT did go out and count the number of pebbles on your road, then yes there's an opportunity cost to this etc., which makes it a pretty unnecessary thing to do, but it's not like you'd have any good reason to whip out a big protest or anything. This is the sort of thing where at best the boss should go "was that really necessary?", and both "no, it was an accident" or "yes, because of <obscure policy reason>" are fine responses.

If one grants a presumption of the value of truth, and grants that it is permissible, admirable even, to follow the itch to uncover things that people would really rather downplay, then it seems really hard to say that Cade Metz did anything wrong.

Replies from: Jiro, steve2152, cubefox
comment by Jiro · 2024-03-30T04:55:08.350Z · LW(p) · GW(p)

Another is recipes for destruction, where you give a small hostile faction the ability to unilaterally cause harm. ... But that seems less relevant for his real name, when it is readily available and he ends up facing tons of attention regardless.

By coincidence, Scott has written about this subject.

Not being completely hidden isn't "readily available". If finding his name is even a trivial inconvenience [LW · GW], it doesn't cause the damage caused by plastering his name in the Times.

comment by Steven Byrnes (steve2152) · 2024-03-28T16:40:41.115Z · LW(p) · GW(p)

I think this is a perfectly valid argument for why NYT shouldn't publish it, it just doesn't seem very strong or robust… Like, if the NYT did go out and count the number of pebbles on your road, then yes there's an opportunity cost to this etc., which makes it a pretty unnecessary thing to do, but it's not like you'd have any good reason to whip out a big protest or anything.

The context from above is that we’re weighing costs vs benefits of publishing the name, and I was pulling out the sub-debate over what the benefits are (setting aside the disagreement about how large the costs are).

I agree that “the benefits are ≈0” is not a strong argument that the costs outweigh the benefits in and of itself, because maybe the costs are ≈0 as well. If a journalist wants to report the thickness of Scott Alexander’s shoelaces, maybe the editor will say it’s a waste of limited wordcount, but the journalist could say “hey it’s just a few words, and y’know, it adds a bit of color to the story”, and that’s a reasonable argument: the cost and benefit are each infinitesimal, and reasonable people can disagree about which one slightly outweighs the other.

But “the benefits are ≈0” is a deciding factor in a context where the costs are not infinitesimal. Like if Scott asserts that a local gang will beat him senseless if the journalist reports the thickness of his shoelaces, it’s no longer infinitesimal costs versus infinitesimal benefits, but rather real costs vs infinitesimal benefits.

If the objection is “maybe the shoelace thickness is actually Scott’s dark embarrassing secret that the public has an important interest in knowing”, then yeah that’s possible and the journalist should certainly look into that possibility. (In the case at hand, if Scott were secretly SBF’s brother, then everyone agrees that his last name would be newsworthy.) But if the objection is just “Scott might be exaggerating, maybe the gang won’t actually beat him up too badly if the shoelace thing is published”, then I think a reasonable ethical journalist would just leave out the tidbit about the shoelaces, as a courtesy, given that there was never any reason to put it in in the first place.

Replies from: tailcalled
comment by tailcalled · 2024-03-28T18:33:31.181Z · LW(p) · GW(p)

I get that this is an argument one could make. But the reason I started this tangent was because you said:

Here CM doesn’t directly argue that there was any benefit to doxxing; instead he kinda conveys a vibe / ideology that if something is true then it is self-evidently intrinsically good to publish it

That is, my original argument was not in response to the "Anyway, if the true benefit is zero (as I believe), then we don't have to quibble over whether the cost was big or small" part of your post; it was a response to the vibe/ideology part.

Where I was trying to say: it doesn't seem to me that Cade Metz was the one who introduced this vibe/ideology; rather, it seems to have been introduced by rationalists prior to this, specifically to defend tinkering with taboo topics.

Like, you mention that Cade Metz conveys this vibe/ideology that you disagree with, and you didn't try to rebut it directly, I assumed because Cade Metz didn't defend it but just treated it as obvious.

And that's where I'm saying, since many rationalists including Scott Alexander have endorsed this ideology, there's a sense in which it seems wrong, almost rude, to not address it directly. Like a sort of Motte-Bailey tactic.

comment by cubefox · 2024-03-28T00:21:53.229Z · LW(p) · GW(p)

If lots of people have a false belief X, that’s prima facie evidence that “X is false” is newsworthy. There’s probably some reason that X rose to attention in the first place; and if nothing else, “X is false” at the very least should update our priors about what fraction of popular beliefs are true vs false.

I think this argument would be more transparent with examples. Whenever I think of examples of popular beliefs that it would be reasonable to change one's support of in the light of this, they end up involving highly politicized taboos.

It is not surprising when a lot of people holding a false belief is caused by the existence of a taboo. Otherwise the belief would probably already have been corrected or wouldn't have gained popularity in the first place. And giving examples of such beliefs is of course not really possible, precisely because it is taboo to argue that they are false.

Replies from: tailcalled
comment by tailcalled · 2024-03-28T09:07:13.473Z · LW(p) · GW(p)

It's totally possible to say taboo things; I do it quite often.

But my point is more, this doesn't seem to disprove the existence of the tension/Motte-Bailey/whatever dynamic that I'm pointing at.

comment by xpym · 2024-04-03T11:53:22.653Z · LW(p) · GW(p)

I think Scott’s name is not newsworthy either.

Metz/NYT disagree. He doesn't completely spell out why (it's not his style), but, luckily, Scott himself did:

If someone thinks I am so egregious that I don’t deserve the mask of anonymity, then I guess they have to name me, the same way they name criminals and terrorists.

Metz/NYT considered Scott to be bad enough to deserve whatever inconveniences/punishments would come to him as a result of tying his alleged wrongthink to his real name, is the long and short of it.

comment by Algon · 2024-03-27T12:45:43.080Z · LW(p) · GW(p)

Which racist and sexist policies?

Replies from: tailcalled
comment by tailcalled · 2024-03-27T13:00:24.486Z · LW(p) · GW(p)

None, if you buy the "we just have a curious itch to understand the most irrelevant orthodoxy you can think of" explanation. But if that's a valid reason for rationalists to dig into things that are taboo because of their harmful consequences, is it not then also valid for Cade Metz to follow a curious itch to dig into rationalists' private information?

Replies from: Algon, frankybegs
comment by Algon · 2024-03-27T13:58:43.551Z · LW(p) · GW(p)

Well, I don't understand what that position has to do with doxxing someone. What does obsessively pointing out how a reigning orthodoxy is incorrect have to do with revealing someone's private info and making it hard for them to do their job? The former is socially useful because a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever. The latter mostly isn't.

Yes, sometimes the two coincide, e.g. revealing that the church uses heliocentric models to calculate celestial movements, or Watergate, or whatever. But that's quite rare, and I note Metz didn't provide any argument that doxxing Scott is like one of those cases.

Consider a counterfactual where Scott, in his private life, was crusading against DEI policies in a visible way. Then people benefiting from those policies may want to know that "there's this political activist who's advocating for policies that harm you and the scope of his influence is way bigger than you thought". Which would clearly be useful info for a decent chunk of readers. Knowing his name would be useful!

Instead, it's just "we gotta say his name. It's so obvious, you know?" OK. So what? Who does that help? Why's the knowledge valuable? I have not seen a good answer to those questions. Or consider: if Metz for some bizarre reason decided to figure out who "Algon" on LW is and wrote an article revealing that I'm X because "it's true", I'd say that's a waste of people's time and a bit of a dick move.

Yes, he should still be allowed to do so, because regulating free speech well is hard and I'd rather eat the costs than deal with poor regulations. That doesn't change the dickishness of it.

Replies from: tailcalled
comment by tailcalled · 2024-03-27T15:13:20.858Z · LW(p) · GW(p)

The former is socially useful because a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever. The latter mostly isn't.

I think once you get concrete about it in the discourse, this basically translates to "supports racist and sexist policies", albeit from the perspective of those who are pro these policies.

Let's take autogynephilia theory as an example of a taboo belief, both because it's something I'm familiar with and because it's something Scott Alexander has spoken out against, so we're not putting Scott Alexander in any uncomfortable position about it.

Autogynephilia theory has become taboo for various reasons. Some people argue that they should still disseminate it because it's true, even if it doesn't have any particular policy implications, but of course that leads to paradoxes where those people themselves tend to have privacy and reputation concerns where they're not happy about having true things about themselves shared publicly.

The alternate argument is on the basis of "a lot of orthodoxies result in bad policies or cause people to err in their private lives or whatever", but when you get specific about what they mean rather than dismissing it as "or whatever", it's something like "autogynephilia theory is important to know so autogynephiles don't end up thinking they're trans and transitioning", which in other language would mean something like "most trans women shouldn't have transitioned, and we need policies to ensure that fewer transition in the future". Which is generally considered an anti-trans position!

Now you might say, well, that position is a good position. But that's a spicy argument to make, so a lot of the time people fall back on "autogynephilia theory is true and we should have a strong presumption in favor of saying true things".

Now, there's also the whole "the discourse is not real life" issue, where the people who advocate for some belief might not be representative of the applications of that belief.

Replies from: Algon
comment by Algon · 2024-03-27T16:37:48.870Z · LW(p) · GW(p)

I think once you get concrete about it in the discourse, this basically translates to "supports racist and sexist policies", albeit from the perspective of those who are pro these policies.

That seems basically correct? And also fine. If you think lots of people are making mistakes that will hurt themselves/others/you and you can convince people about this by sharing info, that's basically fine to me. 

I still don't understand what this has to do with doxxing someone. I suspect we're talking past each other right now. 

but of course that leads to paradoxes where those people themselves tend to have privacy and reputation concerns where they're not happy about having true things about themselves shared publicly.

What paradoxes, which people, which things? This isn't a gotcha: I'm just struggling to parse this sentence right now. I can't think of any concrete examples that fit. Maybe something like "there are autogynephiles who claim to be trans but aren't really, and they'd be unhappy if this fact was shared because that would harm their reputation"? If that were true, and someone discovered a specific autogynephile who thinks they're not really trans but presents as such and outed them, I would call that a dick move.

So I'm not sure what the paradox is. One stab at a potential paradox: if you spread the hypothetically true info that 99.99% of trans women are autogynephiles, then a rational agent would conclude that any particular trans woman is really a cis autogynephile. Which means you're basically doxxing them by providing info that would, in this world, be relevant to society making decisions about stuff like who's allowed to compete in women's sports.

I guess this is true, but it also seems like an extreme case to me. Most people aren't that rational, and depending on the society, are willing to believe others about kinda-unlikely things about themselves. So in a less extreme hypothetical, say 90% instead of 99.99%, I can see people believing most supposedly trans women aren't trans, but believing any specific person who claims they're a trans woman.

 

EDIT: I believe that a significant fraction of conflicts aren't mostly mistakes. But even there, the costs of attempts to restrict speech are quite high.

Replies from: tailcalled
comment by tailcalled · 2024-03-27T16:58:14.751Z · LW(p) · GW(p)

That seems basically correct? And also fine. If you think lots of people are making mistakes that will hurt themselves/others/you and you can convince people about this by sharing info, that's basically fine to me.

I still don't understand what this has to do with doxxing someone. I suspect we're talking past each other right now.

I mean insofar as people insist they're interested in it for political reasons, it makes sense to distinguish this from the doxxing and say that there's no legitimate political use for Scott Alexander's name.

The trouble is that often people de-emphasize their political motivations, as Scott Alexander did when he framed it around the most irrelevant orthodoxy you can think of, that one is simply interested in out of a curious itch. The most plausible motivation I can think of for making this frame is to avoid being associated with the political motivation.

But regardless of whether that explanation is true: if one says that there's a strong presumption in favor of sharing truth, to the point where it's admirable to dig into inconsequential dogmas that are taboo to question because they cover up moral contradictions people are afraid will cause genocidal harm if unleashed, then it sure seems like this strong presumption in favor of truth also legitimizes mild cases of doxxing.

What paradoxes, which people, which things?

Michael Bailey tends to insist that it's bad to speculate about hidden motives that scientists like him might have for their research, yet when he explains his own research, he insists that he should study people's hidden motivations using only the justification of truth and curiosity.

Replies from: Algon
comment by Algon · 2024-03-27T17:53:05.277Z · LW(p) · GW(p)

OK, now I understand the connection to doxxing much more clearly. Thank you. To be clear, I do not endorse enshrining a no-doxxing rule in law.

I still disagree, because it didn't look like Metz had any reason to doxx Scott beyond "just because". There were no big benefits to readers, nor any story about why no harm was done to Scott in spite of his protests.

Whereas if I'm a journalist and encounter someone who says "if you release information about genetic differences in intelligence, that will cause a genocide", I can give reasons for why that is unlikely. And I can give reasons for why the associated common-bundle-of-beliefs-and-values, i.e. orthodoxy, is not inconsequential: that there are likely large (albeit not genocide-large) harms that this orthodoxy is causing.

Replies from: tailcalled
comment by tailcalled · 2024-03-27T18:31:11.391Z · LW(p) · GW(p)

I mean, I'm not arguing Cade Metz should have doxxed Scott Alexander; I'm just arguing that there is a tension between common rationalist ideology that one should have a strong presumption in favor of telling the truth, and that Cade Metz shouldn't have doxxed Scott Alexander. As far as I can tell, this common rationalist ideology was a cover for spicier views that you have no issue admitting to, so I'm not exactly saying that there's any contradiction in your vibe. More that there's a contradiction in Scott Alexander's (at least at the time of writing Kolmogorov Complicity).

I'm not sure what my own resolution to the paradox/contradiction is. Maybe that the root problem seems to be that people create information to bolster their side in political discourse, rather than to inform their ideology about how to address problems that they care about. In the latter case, creating information does real productive work, but in the former case, information mostly turns into a weapon, which incentivizes creating some of the most cursed pieces of information known to the world.

Replies from: Jiro
comment by Jiro · 2024-03-30T04:59:20.558Z · LW(p) · GW(p)

I’m just arguing that there is a tension between common rationalist ideology that one should have a strong presumption in favor of telling the truth, and that Cade Metz shouldn’t have doxxed Scott Alexander.

His doxing Scott was in an article that also contained lies, lies which made the doxing more harmful. He wouldn't have just posted Scott's real name in a context where no lies were involved.

comment by frankybegs · 2024-03-27T14:30:31.374Z · LW(p) · GW(p)

Your argument rests on a false dichotomy. There are definitely other options than 'wanting to know truth for no reason at all' and 'wanting to know truth to support racist policies'. It is at least plausibly the case that beneficial, non-discriminatory policies could result from knowledge currently considered taboo. It could at least be relevant to other things and therefore useful to know!

What plausible benefit is there to knowing Scott's real name? What could it be relevant to?

Replies from: tailcalled
comment by tailcalled · 2024-03-27T15:22:28.731Z · LW(p) · GW(p)

People do sometimes make the case that knowing more information about sex and race differences can be helpful for women and black people. It's a fine case to make, if one can actually make it work out in practice. My point is just that the other two approaches also exist.

comment by Amalthea (nikolas-kuhn) · 2024-03-27T13:05:57.057Z · LW(p) · GW(p)

You're conflating "have important consequences" with "can be used as weapons in discourse".

comment by Rafael Harth (sil-ver) · 2024-03-27T22:48:34.685Z · LW(p) · GW(p)

I think this is clearly true, but the application is a bit dubious. There's a difference between "we have to talk about the bell curve here even though the object-level benefit is very dubious because of the principle that we oppose censorship" and "let's doxx someone". I don't think it's inconsistent to be on board with the first (which I think a lot of rationalists have proven to be, and which is an example of what you claimed exists) but not the second (which is the application here).

comment by DPiepgrass · 2024-04-04T05:22:52.174Z · LW(p) · GW(p)

Scott tried hard to avoid getting into the race/IQ controversy. Like, in the private email LGS shared, Scott states "I will appreciate if you NEVER TELL ANYONE I SAID THIS". Isn't this the opposite of "it's self-evidently good for the truth to be known"? And yes there's an SSC/ACX community too (not "rationalist" necessarily), but Metz wasn't talking about the community there.

My opinion as a rationalist is that I'd like the whole race/IQ issue to f**k off so we don't have to talk or think about it, but certain people like to misrepresent Scott and make unreasonable claims, which ticks me off, so I counterargue, just as I pushed a video by Shaun once when I thought somebody on ACX sounded a bit racist to me on the race/IQ topic.

Scott and myself are consequentialists. As such, it's not self-evidently good for the truth to be known. I think some taboos should be broached, but not "self-evidently" and often not by us. But if people start making BS arguments against people I like? I will call BS on that, even if doing so involves some discussion of the taboo topic. But I didn't wake up this morning having any interest in doing that.

Replies from: tailcalled
comment by tailcalled · 2024-04-04T16:40:41.549Z · LW(p) · GW(p)

I agree that Scott Alexander's position is that it's not self-evidently good for the truth about his own views to be known. I'm just saying there's a bunch of times he's alluded to or outright endorsed it being self-evidently good for the truth to be known in general, in order to defend himself when criticized for being interested in the truth about taboo topics.

comment by JohnBuridan · 2024-04-02T03:02:45.101Z · LW(p) · GW(p)

I for one am definitely worse off.

  1. I now have to read Scott on Substack instead of SSC.
  2. Scott doesn't write sweet things that could attract nasty flies anymore.
comment by Tenoke · 2024-03-26T17:51:27.274Z · LW(p) · GW(p)

He comes across as pretty unsympathetic and stubborn.

Did any of your views of him change?

Replies from: MichaelDickens, elityre, elityre, Muireall
comment by MichaelDickens · 2024-03-27T02:00:50.563Z · LW(p) · GW(p)

When the NYT article came out, some people discussed the hypothesis that perhaps the article was originally going to be favorable, but the editors at NYT got mad when Scott deleted his blog, so they forced Cade to turn it into a hit piece. This interview pretty much demonstrates that it was always going to be a hit piece (and, as a corollary, that Cade lied to people by saying it was going to be positive in order to get them to do interviews).

So yes this changed my view from "probably acted unethically but maybe it wasn't his fault" to "definitely acted unethically".

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2024-03-27T18:20:17.251Z · LW(p) · GW(p)

I think Metz acted unethically here, for other reasons. But there are lots of cases of people doing newsworthy bad things that can't be covered without journalists lying to interview subjects. If you ban lying to subjects, a swath of important news becomes impossible to cover.

It's not obvious to me what the right way to handle this is, but I wanted to mark this cost. 

Replies from: SaidAchmiz, GWS
comment by Said Achmiz (SaidAchmiz) · 2024-03-27T23:09:08.191Z · LW(p) · GW(p)

If you ban lying to subjects, a swath of important news becomes impossible to cover.

What would be some examples of this?

Replies from: localdeity
comment by localdeity · 2024-03-28T03:56:32.495Z · LW(p) · GW(p)

The ones that come to my mind are "Person or Organization X is doing illegal, unethical, or otherwise unpopular practices which they'd rather conceal from the public."  Lie that you're ideologically aligned or that you'll keep things confidential, use that to gain access.  Then perhaps lie to blackmail them to give up a little more information before finally publishing it all.  There might be an ethical line drawn somewhere, but if it's not at "any lying" then I don't know where it is.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2024-03-28T12:13:46.872Z · LW(p) · GW(p)

I was actually looking for specific examples, precisely so that we could test our intuitions, rather than just stating our intuitions. Do you happen to have any particular ones in mind?

Replies from: localdeity
comment by localdeity · 2024-03-28T13:36:06.557Z · LW(p) · GW(p)

Looking at Wikipedia's undercover journalism article, one that comes to mind is Nellie Bly's Ten Days in a Mad-House.

[Bly] took an undercover assignment for which she agreed to feign insanity to investigate reports of brutality and neglect at the Women's Lunatic Asylum on Blackwell's Island. [...]

Once admitted to the asylum, Bly abandoned any pretense at mental illness and began to behave as she would normally. The hospital staff seemed unaware that she was no longer "insane" and instead began to report her ordinary actions as symptoms of her illness. Even her pleas to be released were interpreted as further signs of mental illness. Speaking with her fellow patients, Bly was convinced that some were as "sane" as she was.

Bly experienced the deplorable conditions firsthand. The nurses behaved obnoxiously and abusively, telling the patients to shut up, and beating them if they did not. The food consisted of gruel broth, spoiled beef, bread that was little more than dried dough, and dirty undrinkable water. The dangerous patients were tied together with ropes. The patients were made to sit for much of each day on hard benches with scant protection from the cold. Waste was all around the eating places. Rats crawled all around the hospital. [...]

The bath water was rarely changed, with many patients bathing in the same filthy water. Even when the water was eventually changed, the staff did not scrub or clean out the bath, instead throwing the next patient into a stained, dirty tub. The patients also shared bath towels, with healthy patients forced to dry themselves with a towel previously used by patients with skin inflammations, boils, or open sores.

Interestingly...

While physicians and staff worked to explain how she had deceived so many professionals, Bly's report prompted a grand jury to launch its own investigation[9] with Bly assisting. The jury's report resulted in an $850,000 increase in the budget of the Department of Public Charities and Corrections. The grand jury also ensured that future examinations were more thorough such that only the seriously ill were committed to the asylum. 

I can't say I'm happy with failure being rewarded with a higher budget.  Still, it may have been true that their budget was insufficient to provide sanitary and humane conditions.  Anyway, the report itself seems to have been important and worthwhile.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2024-03-28T18:03:12.988Z · LW(p) · GW(p)

I agree that this investigation was worthwhile and important.

But is it a case of “lying to interview subjects”? That is what we’re talking about [LW(p) · GW(p)], after all. Did Bly even interview anyone, in the course of her investigation?

Undercover investigative journalism has some interesting ethical conundrums of its own, but it’s not clear what it has to do with interviews, or lying to the subjects thereof…

comment by Stephen Bennett (GWS) · 2024-03-28T05:43:52.618Z · LW(p) · GW(p)

What do you think Metz did that was unethical here?

comment by Eli Tyre (elityre) · 2024-03-26T19:59:18.355Z · LW(p) · GW(p)

Poll option: No, this didn't change my view of CM or the situation around his article on Slate Star Codex.

comment by Eli Tyre (elityre) · 2024-03-26T19:58:31.640Z · LW(p) · GW(p)

Poll option: Yes, this changed my view of CM or the situation around his article on Slate Star Codex.

comment by Muireall · 2024-03-27T01:31:21.914Z · LW(p) · GW(p)

In 2021, I was following these events and already less fond of Scott Alexander than most people here, and I still came away with the impression that Metz's main modes were bumbling and pattern-matching. At least that's the impression I've been carrying around until today. I find his answers here clear, thoughtful, and occasionally cutting, although I get the impression he leaves more forceful versions on the table for the sake of geniality. I'm wondering whether I absorbed some of the community's preconceptions or instinctive judgments about him or journalists in general.

I do get the stubbornness, but I read that mostly as his having been basically proven right (and having put in the work at the time to be so confident).

comment by Seth Herd · 2024-03-26T17:58:17.149Z · LW(p) · GW(p)

Wow, what a bad justification for doxxing someone. I somehow thought the NYT had a slightly better argument.

Replies from: Algon
comment by Algon · 2024-03-26T18:34:57.089Z · LW(p) · GW(p)

Yeah, it is very poor. Knowing Scott Alexander's real name doesn't help their readers make better decisions or understand Scott's influence or know the history of his ideas, or help place him in a larger context. It is gossip. And I imagine they can't understand that the harms of releasing private info can be quite large, so they don't see why what they did was bad. Personally, I feel like a lot of journalists are a particular sort of aggressively conventional-minded people who feel the need to reveal private info because if you have nothing to fear you have nothing to hide.

EDIT: Last sentence is malformed. I meant "if you have done nothing wrong you have nothing to hide".

comment by Askwho · 2024-03-26T18:11:02.375Z · LW(p) · GW(p)

I've produced a multi-voiced ElevenLabs reading of this post, for ease of listening and speaker differentiation.

https://open.substack.com/pub/askwhocastsai/p/my-interview-with-cade-metz-on-his

comment by tailcalled · 2024-03-26T18:29:44.152Z · LW(p) · GW(p)

The New York Times ceases to serve its purpose if we're leaving out stuff that's obvious.

Actually a funny admission. Taken literally, this almost means The New York Times' purpose is to say stuff that's obvious, which of course is a very distinct purpose from informing people about nonobvious important things.

And whatever people think, my job at the Times is to give everyone their due, and to give everyone's point of view a forum and help them get that point of view into any given story.

This of course can't be literally true: most people and most views don't get represented in The New York Times.

Combining these two self-assessments, could one maybe say that the purpose of The New York Times is to comb through the obvious information in order to define the Overton Window for what the "legitimate" views on a topic are? 🤔

Replies from: NathanBarnard, None
comment by NathanBarnard · 2024-03-26T19:00:48.969Z · LW(p) · GW(p)

This reads as a gotcha to me rather than as a comment actually trying to understand the argument being made. 

Replies from: tailcalled
comment by tailcalled · 2024-03-26T19:26:18.021Z · LW(p) · GW(p)

Cade Metz's argument seems to be that what he wrote is basically some true stuff that his readers care about and which wasn't actually that harmful to share. Which seems like a valid argument in favor of writing it.

However, the fact that he has made a valid argument in favor of writing it does not mean that we aren't allowed to be interested in why his readers find certain things interesting to read about. These self-assessments seem like evidence about that to me.

comment by [deleted] · 2024-03-26T21:31:37.954Z · LW(p) · GW(p)

But it's obviously (and rather comically) not to be taken literally; that's the beauty of interpretation: can we interpret the message to readers as 'those to whom the obvious must be shown'? And can we say that their purpose is to inform people who do not grasp it?

comment by frankybegs · 2024-03-27T13:40:45.729Z · LW(p) · GW(p)

This frankly enraged me. Good job on the directness of the opening question. I think you fell back a little quickly, and let him get away with quite a lot of hand-waving, where I would have tried to really nail him down on the details and challenge him to expand on those flimsy justifications. But that's very easy to say, and not so easy to do in the context of a live conversation.

Replies from: frankybegs
comment by frankybegs · 2024-03-27T13:58:36.772Z · LW(p) · GW(p)

One specific thing that I'd definitely have challenged is the 'I don't think the New Yorker article was very fair to my point of view'. What point of view, specifically, and how was it unfair? Again, very much easier from the comfort of my office than in live conversation with him, but I would have loved to see you pin him down on this.

comment by Adam Zerner (adamzerner) · 2024-04-15T05:15:05.223Z · LW(p) · GW(p)

I'll provide a dissenting perspective here. I actually came away from reading this feeling like Metz's position is maybe fine.

Everybody saw it. This is an influential person. That means he's worth writing about. And so once that's the case, then you withhold facts if there is a really good reason to withhold facts. If someone is in a war zone, if someone is really in danger, we take this seriously.

It sounds like he's saying that the Times' policy is that you only withhold facts if there's a "really" good reason to do so. I'm not sure what type of magnitude "really" implies, but I could see the amount of harm at play here falling well below it. If so, then Metz is in a position where his employer has a clear policy and doing his job involves following that policy.

As a separate question, we can ask whether "only withhold facts in warzone-type scenarios" is a good policy. I lean moderately strongly away from thinking it's a good policy. It seems to me that you can apply some judgement and be more selective than that.

However, I have a hard time moving from "moderately strongly" to "very strongly". To make that move, I'd need to know more about the pros and cons at play here, and I just don't have that good an understanding. Maybe it's a "customer support reads off a script" type of situation. Let the employee use their judgement; most of the time it'll probably be fine; once in a while they do something dumb enough to make it not worth letting them use their judgement. Or maybe journalists won't be dumb if they are able to use judgement here, but maybe they'll use that power to do bad things.

I dunno. Just thinking out loud.

Circling back around, suppose hypothetically we assume that the Times does have a "only withhold facts in a warzone-type scenario" policy, that we know that this is a bad and overall pretty harmful policy, and that Metz understands and agrees with all of this. What should Metz do in this hypothetical situation?

I feel unclear here. On the one hand, it's icky to be a part of something unethical and harmful like that, and if it were me I wouldn't want to live my life like that, so I'd want to quit my job and do something else. But on the other hand, there are various personal reasons why quitting your job might be tough. It's also possible that he should take a loss here with the doxing so that he is in a position to do some sort of altruistic thing.

Probably not. He's probably in the wrong in this hypothetical situation if he goes along with the bad policy. I'm just not totally sure.

comment by tailcalled · 2024-03-26T18:05:43.953Z · LW(p) · GW(p)

Recently I've been getting into making "personality types" (after finding an alternative to factor analysis that seems quite interesting and underused - most personality type methods border on nonsense, but I like this one). One type that has come up across multiple sources is a type I call "avoidant" (because my poorly-informed impression is that it's related to something a lot of people call "avoidant attachment").

(Also possibly related to schizoid personality disorder - but personality disorders are kind of weird, so not sure.)

Avoidants are characterized by being introverted, not just in the sense of tending to be alone (which could lead to loneliness for many people, especially socially anxious people), but in the sense of preferring to be alone and tending to hide information about themselves. They tend to try to be independent, and distrust and criticize other people. They do their own research and form their own opinions.

A lot of phenomena in the rationalist community seem to be well-analyzed under the "avoidant" umbrella, including the issue of revealing Scott Alexander's name. (Scott Alexander seems avoidant. I'm avoidant. Aella seems very avoidant. Gwern is avoidant, possibly even more so than Aella. Not sure if significant rationalist figures tend to be avoidant or if this is just a coincidence. Not sure whether you count as avoidant; your brand of introversion also seems to fit the "anxious" type, but you could be both.)

The main take I've seen from non-avoidant people who want to do therapy on avoidants is that avoidants learned to deal with social threats by being disturbing to others. In full generality, this seems like an unfair take to me: non-avoidant people often genuinely do useless or counterproductive things to seem attractive, or say misleading and false things about important matters for propagandistic purposes, and avoidants tend to be opposed to that.

On the other hand, there are some dangers to fall into. Avoidants don't magically become specific in their critiques, so there's a need to be careful about accuracy. Being introverted, avoidants don't have much of a reputation; being disagreeable, the reputation they do have can get quite negative; and their nonconformist tendencies mean they're less involved with the official reputation systems. This leaves avoidants vulnerable in a way that can motivate more bias than other people have. Also, introversion weakens the feedback loops: withdrawing socially means that one is further from "the action" and doesn't get as many signals from other people about how well one is doing.

Replies from: gwern, tailcalled
comment by gwern · 2024-03-26T19:09:44.028Z · LW(p) · GW(p)

As far as journalists go, I'm not 'avoidant'.

I have talked to a number of journalists over the years. I actually agreed to an interview with Cade Metz himself on the subject of SSC before this all came out; Metz just ghosted me at the arranged time (and never apologized).

What I have always done, and what I always advise people who ask me how to handle journalists, is to maintain the unconditional requirement of recording it: preferably text, but an audio recording if necessary.* (I also remind people that you can only say things like "off the record" or "on background" if you explicitly say that before you say anything spicy & the journalist assents in some way.)

I've noticed that when people are unjustly burned by talking to journalists, of the "I never said that quote" sort, it always seems to be in un-recorded contexts. At least so far, this has worked out for me and, as far as I know, for everyone I have given this advice to.

* oddly, trying to do it via text tends to kill a lot of interview requests outright. It doesn't seem to matter to journalists if it's email or Twitter DM or Signal or IRC or Discord, they just hate text, which is odd for a profession which mostly deals in, well, text. Nor do they seem to care that I'm hearing-impaired and so just casually phoning me up may be awesome for them but it's not so awesome for me... My general view is that if a journalist cares so little about interviewing me that wanting to do it in text is a dealbreaker for them, then that interview wasn't worth my time either; and so these days when I get an interview request, I insist on doing it via text (which is, of course, inherently recorded).

Replies from: cmessinger, tailcalled
comment by chanamessinger (cmessinger) · 2024-03-26T21:30:00.334Z · LW(p) · GW(p)

I feel pretty sympathetic to the desire not to do things by text; I suspect you get much more practiced and checked over answers that way.

Replies from: localdeity, antanaclasis
comment by localdeity · 2024-03-26T22:46:53.382Z · LW(p) · GW(p)

I suspect you get much more practiced and checked over answers that way.

In some contexts this would be seen as obviously a good thing.  Specifically, if the thing you're interested in is the ideas that your interviewee talks about, then you want them to be able to consider carefully and double-check their facts before sending them over.

The case where you don't want that would seem to be the case where your primary interest is in the mental state of your interviewee, or where you hope to get them to stumble into revealing things they would want to hide.

comment by antanaclasis · 2024-03-27T01:42:36.858Z · LW(p) · GW(p)

Another big thing is that you can’t get tone-of-voice information via text. The way that someone says something may convey more to you than what they said, especially for some types of journalism.

comment by tailcalled · 2024-03-26T19:36:17.017Z · LW(p) · GW(p)

Good advice, though I should make clear that my discussion about avoidants isn't about journalists, but rather general personality (probably reducible by linear transformation to Big Five, since my method is still based on matrix factorizations of survey data - I'd guess low extraversion, low agreeableness, high emotional stability, high openness, low conscientiousness, though ofc varying from person to person).

comment by tailcalled · 2024-03-28T15:17:33.225Z · LW(p) · GW(p)

Why the downvotes? Because it's an irrelevant/tangential ramble? Or some more specific reason?

Replies from: winstonBosan, DPiepgrass
comment by winstonBosan · 2024-03-29T01:17:51.388Z · LW(p) · GW(p)

I am not everyone else, but the reason I downvoted on the second axis is:

  • I still don't really understand the avoidant/non-avoidant taxonomy. I am confused when avoidant is both "introverted... and prefer to be alone" while "avoidants... being disturbing to others" when Scott never intended to disturb Metz's life? And Scott doesn't owe anyone anything - avoidant or not. And the claim about Scott being low conscientious? Gwern being low conscientious? If it "varying from person to person" so much, is it even descriptive? 
  • You make a claim that Gwern is avoidant, and Gwern said that Gwern is not. It might be the case that Gwern is lying, but that seems far-fetched and not yet substantiated. It also seems the concept was confusing enough that Gwern couldn't tell how widely it applies.
Replies from: tailcalled
comment by tailcalled · 2024-03-29T10:07:08.541Z · LW(p) · GW(p)

I still don't really understand the avoidant/non-avoidant taxonomy. I am confused when avoidant is both "introverted... and prefer to be alone" while "avoidants... being disturbing to others" when Scott never intended to disturb Metz's life?

The part about being disturbing wasn't supposed to refer to Scott's treatment of Cade Metz; it was supposed to refer to rationalists' interest in taboo and disagreeable topics. And as for trying to be disturbing, I said that I think the non-avoidant people were being unfair in their characterization of avoidants, as it's not that simple and often it's a correction to genuine deception by non-avoidants.

And the claim about Scott being low conscientious? Gwern being low conscientious? If it "varying from person to person" so much, is it even descriptive?

My model is an affine transformation applied to Big Five scores, constrained to make the relationship from transformed scores to items linear rather than affine, and optimized to make people's scores sparse.

This is rather technical, but the consequence is that my model is mathematically equivalent to a subspace of the Big Five, and the Big Five has similar issues where it can tend to lump different stuff together. One could just as well turn it around and say that the Big Five lumps my anxious and avoidant profiles together under the label of "introverted". (Well, the Big Five has two more dimensions than my model does, so it lumps fewer things together, but other models have more dimensions than the Big Five, so the Big Five lumps things together relative to those models.)
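
For concreteness, the general recipe described here (a sparse linear factorization of survey items, whose per-person scores are then linearly related to Big Five scores) might look roughly like the sketch below. The use of scikit-learn's DictionaryLearning, the synthetic data, and the hyperparameters are illustrative assumptions, not the actual method:

```python
# A minimal sketch of the general idea: factor survey responses into a small
# number of "type" profiles with a linear type->item map, while optimizing
# for sparse per-person type scores. Data and hyperparameters are made up.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_people, n_items = 500, 50
X = rng.normal(size=(n_people, n_items))   # stand-in for z-scored survey items

model = DictionaryLearning(
    n_components=4,                  # e.g. normative / anxious / wild / avoidant
    alpha=1.0,                       # sparsity penalty on per-person scores
    transform_algorithm="lasso_lars",
    random_state=0,
)
scores = model.fit_transform(X)      # (n_people, 4): sparse "type" scores
profiles = model.components_         # (4, n_items): linear map from types to items

print(scores.shape, profiles.shape)
print("fraction of exactly-zero scores:", (scores == 0).mean())
```

Because Big Five factor scores are themselves (roughly) linear in the items, type scores obtained this way live in a subspace reachable by a linear transformation of Big Five scores, which is the sense in which such a model is "reducible" to the Big Five.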

My model is new, so I'm still experimenting with it to see how much utility I find in it. Maybe I'll abandon it as I get bored and it stops giving results.

You make a claim that Gwern is avoidant, and Gwern said that Gwern is not. It might be the case that Gwern is lying, but that seems far-fetched and not yet substantiated. It also seems the concept was confusing enough that Gwern couldn't tell how widely it applies.

Gwern said that he's not avoidant of journalists, but he's low extraversion, low agreeableness, low neuroticism, high openness, mid conscientiousness, so that definitionally makes him avoidant under my personality model (which as mentioned is just an affine transformation of the Big Five). He also alludes to having schizoid personality disorder, which I think is relevant to being avoidant. As I said, this is a model of general personality profiles, not of interactions with journalists specifically.

Replies from: tailcalled
comment by tailcalled · 2024-03-29T10:15:37.287Z · LW(p) · GW(p)

I guess for reference, here's a slightly more complete version of the personality taxonomy:

  • Normative: Happy, social, emotionally expressive. Respects authority and expects others to do so too.

  • Anxious: Afraid of speaking up, of breaking the rules, and of getting noticed. Tries to be alone as a result. Doesn't trust that others mean well.

  • Wild: Parties, swears, and is emotionally unstable. Breaks rules and supports others (... in doing the same?)

  • Avoidant: Contrarian, intellectual, and secretive. Likes to be alone and doesn't respect rules or cleanliness.

In practice people would be combinations of these archetypes, rather than purely being one of them. In some versions, the Normative type splits into three:

  • Jockish: Parties and avoids intellectual topics.
  • Steadfast: Conservative yet patient and supportive.
  • Perfectionistic: Gets upset over other people's mistakes and tries to take control as a result.

This would make it as fully expressive as the Big Five.

... but there was some mathematical trouble in getting it to be replicable and "nice" if I included 6 profiles, so I'm expecting to be stuck at 4 types unless I discover some new mathematical tricks.

comment by DPiepgrass · 2024-04-04T05:57:10.752Z · LW(p) · GW(p)

Speaking for myself: I don't prefer to be alone or tend to hide information about myself. Quite the opposite; I like to have company but rare is the company that likes to have me, and I like sharing, though it's rare that someone cares to hear it. It's true that I "try to be independent" and "form my own opinions", but I think that part of your paragraph is easy to overlook because it doesn't sound like what the word "avoidant" ought to mean. (And my philosophy is that people with good epistemics tend to reach similar conclusions, so our independence doesn't necessarily imply a tendency to end up alone in our own school of thought, let alone prefer it that way.)

Now if I were in Scott's position? I find social media enemies terrifying and would want to hide as much as possible from them. And Scott's desire for his name not to be broadcast? He's explained it as related to his profession, and I don't see why I should disbelieve that. Yet Scott also schedules regular meetups where strangers can come, which doesn't sound "avoidant". More broadly, labeling famous-ish people who talk frequently online as "avoidant" doesn't sound right.

Also, "schizoid" as in schizophrenia? By reputation, rationalists are more likely to be autistic, which tends not to co-occur with schizophrenia, and the ACX survey is correlated with this reputation. (Could say more but I think this suffices.)

Replies from: tailcalled
comment by tailcalled · 2024-04-04T16:33:11.915Z · LW(p) · GW(p)

Speaking for myself: I don't prefer to be alone or tend to hide information about myself. Quite the opposite; I like to have company but rare is the company that likes to have me, and I like sharing, though it's rare that someone cares to hear it.

Sounds like you aren't avoidant, since introversion-related items tend to be the ones most highly endorsed by the avoidant profile.

Now if I were in Scott's position? I find social media enemies terrifying and would want to hide as much as possible from them. And Scott's desire for his name not to be broadcast? He's explained it as related to his profession, and I don't see why I should disbelieve that. Yet Scott also schedules regular meetups where strangers can come, which doesn't sound "avoidant". More broadly, labeling famous-ish people who talk frequently online as "avoidant" doesn't sound right.

Scott Alexander's MBTI type is INTJ. The INT part is all aligned with avoidant, so I still say he's avoidant. Do you think all the meetups and such mean that he's really ENTJ?

As for wanting to hide from social media enemies, I'd speculate that this causally contributes to avoidant personality.

Also, "schizoid" as in schizophrenia? By reputation, rationalists are more likely to be autistic, which tends not to co-occur with schizophrenia, and the ACX survey is correlated with this reputation. (Could say more but I think this suffices.)

Schizoid as in schizoid.

comment by Review Bot · 2024-03-26T20:49:44.789Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?