[Linkpost] Guardian article covering Lightcone Infrastructure, Manifest and CFAR ties to FTX

post by ROM (scipio ) · 2024-06-17T10:05:51.481Z · LW · GW · 9 comments

This is a link post for https://www.theguardian.com/technology/article/2024/jun/16/sam-bankman-fried-ftx-eugenics-scientific-racism

Comments sorted by top scores.

comment by Jackson Silver (life-is-meaningless) · 2024-06-17T11:27:44.582Z · LW(p) · GW(p)

The response from Habryka points out several factual inaccuracies, but I don't see anything that directly refutes the core issue the article raises. I recognize that engaging with the substance of the allegations might be awkward and difficult, and wouldn't constitute "winning" in the rationalist sense.

My experience and observations of the rationalist community have been completely resonant with this section:

Daniel HoSang, a professor of American studies at Yale University and a part of the Anti-Eugenics Collective at Yale, said: “The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable. They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital.”

 

HoSang added: “From there, they anoint themselves the elite managers of these forces, investing in the ‘winners’ as they see fit.”

“The presence of Stephen Hsu here is particularly alarming,” HoSang concluded. “He’s often been a bridge between fairly explicit racist and antisemitic people like Ron Unz, Steven Sailer and Stefan Molyneux and more mainstream figures in tech, investment and scientific research, especially around human genetics.”

Beyond the predictable focus on dismissing the article on the grounds that the reporter acted in "bad faith," I'd be curious to see if there is any framing whatsoever that would facilitate, or even encourage, some community-wide introspection.

Replies from: pktechgirl, cubefox, habryka4
comment by Elizabeth (pktechgirl) · 2024-06-18T00:29:43.708Z · LW(p) · GW(p)

I'm sad this got voted down to zero (before I strong-upvoted), because I think "how can we have a good version of this discussion?" is a good question to ask. I'm not happy with how LessWrong discusses sensitive topics and would love to see those discussions go better.

I started writing out some specific ideas, and then got overwhelmed by how much work they'd be to write and then deal with the comments. Just writing up the ideas is an afternoon project.

Replies from: life-is-meaningless
comment by Jackson Silver (life-is-meaningless) · 2024-06-19T14:56:52.626Z · LW(p) · GW(p)

Thanks - appreciate the upvote and encouragement to discuss. I'll take this opportunity to point out some observations about rationalist communities:

  • Your usage of "sensitive" points correctly towards an emotional underpinning to why these topics are never addressed well. It's worth asking: sensitive for whom? The crux of the article is rationalist organizations inviting people who are comfortable holding provocative opinions on topics that are extremely sensitive for many people. If discourse in a group is structurally accommodating to the sensitivities of some people but apathetic to the sensitivities of others, how is this not explicitly unempathetic and clearly immoral?
  • Specifically, I notice that shifting the discourse from the object level ("do we have a race problem?") to the meta-level ("are people alleging a race problem in the precisely correct manner according to our framing? no? I guess that's that.") enables avoiding the core issue. I pointed out racism in rationalist communities on Twitter once, and only received a link to Scott Alexander's Black People Likely as a response, which moved the discussion from racial bias in rationalist communities to racism against black people in polyamory circles (an interesting derailment in itself) to whether allegations of racism are being raised correctly.
  • It's not that rationalists are "bad" people; rather, the collective assumption/pretense that all their emotions (and emotional avoidance) in all contexts do and should stem from rational thought, while leading to success in mechanistic avenues like math, programming, and capitalism, and even benefiting others through (eg.) EA work, can cause disproportionate issues when dealing with highly emotionally sensitive topics.
  • None of this implies that the rationalist way of thinking is worse than anything else - clearly, if I presume a group is racist/immoral, I can choose to walk away (I specifically did this many years ago, after attending my first and last SSC meetup) - but this presumption of airtight moral virtue, combined with a conflation of reasoning ability with moral authority and an undervaluing of empathy, seems to be heading towards rationality quite literally facilitating human catastrophe [EA · GW] while amplifying the very worst aspects of humanity.
Replies from: life-is-meaningless
comment by Jackson Silver (life-is-meaningless) · 2024-06-20T02:45:26.864Z · LW(p) · GW(p)

I see a response to my reply above saying "This seems to misunderstand the thing that it argues against". I wasn't arguing against anything specific - this was my attempt to understand why rationalists repeatedly fall into this pattern, but I must have missed something. 

I spent a few difficult hours today reading through the discussion on the Manifest allegations on the EA forum and Twitter (figured it's an appropriate way to spend Juneteenth) and my thoughts have converged to this tweet by Shakeel. 

I'm done with reading or posting on LW (as I mentioned, I've had past in-person experience in this realm), but I'm leaving this suggestion here for any person of color, or anyone firmly opposed to racism, trying to gauge the extent of racism in the rationalist community - RUN!

comment by cubefox · 2024-06-17T18:27:38.428Z · LW(p) · GW(p)

One problem with discussing this is that we here arguably have an asymmetric discourse situation [LW(p) · GW(p)].

Replies from: daniel-glasscock
comment by Daniel (daniel-glasscock) · 2024-06-18T01:49:44.342Z · LW(p) · GW(p)

I don't think we can engage in much "community-wide introspection" without discussing the object-level issues in question, and I can't think of a single instance of an online discussion of that specific issue going particularly well. 

That's why I'm (mostly) okay tabooing these sorts of discussions. It's better to deal with the epistemic uncertainty than to risk converging on a false belief.

comment by habryka (habryka4) · 2024-06-18T00:35:10.135Z · LW(p) · GW(p)

Yeah, to be clear, my response was just trying to point out the easily-verifiable mistakes. I think almost everything related to Lightcone and Lighthaven is also quite off-base, but I agree that the relevant conversation for that hasn't happened. 

I do think this article is quite a bad context in which to discuss those things. It sets a bad tone for the discourse, and I think if anyone wants to talk about this stuff, I would just bring it up in a week or two, when we will still face the same decisions but there won't be a bunch of angry internet mobs waiting around to take sides, and no badly researched article setting a high level of background confusion about what the actual facts are.

I'm not confident of this. I certainly won't stop anyone from wanting to poke at this stuff and discuss it now.

Replies from: evhub
comment by evhub · 2024-06-18T00:43:22.135Z · LW(p) · GW(p)

Fwiw this sort of thing was definitely a component of why I didn't go to Manifest and was initially on the fence about LessOnline. The number one factor was just how busy I am, but I would definitely feel more comfortable going to events like that if there were a stronger policy against racists/fascists/nazis/white supremacists/etc.

Replies from: habryka4
comment by habryka (habryka4) · 2024-06-18T03:27:41.851Z · LW(p) · GW(p)

(To be clear, for LessOnline we didn't invite anyone who I think even remotely fits that description, I think? It's plausible we missed something, but like, actual racism is totally the kind of thing that would have caused me to remove someone from the "blogs we love" list, if it was part of their blogging. 

Manifest runs a much stronger "just invite people who are popular and share interests, with less regards for why they are popular" policy, which I think has a bunch of stuff going for it, but definitely produces a very different selection of speakers as I think is apparent from looking at the invited speaker lists.)