Comments

Comment by Jackson Silver (life-is-meaningless) on [Linkpost] Guardian article covering Lightcone Infrastructure, Manifest and CFAR ties to FTX · 2024-06-20T02:45:26.864Z · LW · GW

I see a response to my reply above saying "This seems to misunderstand the thing that it argues against". I wasn't arguing against anything specific - this was my attempt to understand why rationalists repeatedly fall into this pattern, but I must have missed something. 

I spent a few difficult hours today reading through the discussion on the Manifest allegations on the EA forum and Twitter (figured it's an appropriate way to spend Juneteenth) and my thoughts have converged on this tweet by Shakeel.

I'm done with reading or posting on LW (like I mentioned, I've had past in-person experience in this realm), but I'm leaving this suggestion here for any person of color, or anyone who is firmly opposed to racism, trying to disambiguate the extent of racism in the rationalist community - RUN!

Comment by Jackson Silver (life-is-meaningless) on [Linkpost] Guardian article covering Lightcone Infrastructure, Manifest and CFAR ties to FTX · 2024-06-19T14:56:52.626Z · LW · GW

Thanks - appreciate the upvote and encouragement to discuss. I'll take this opportunity to point out some observations about rationalist communities:

  • Your usage of "sensitive" points correctly towards an emotional underpinning to why these topics are never addressed well. It's worth asking - sensitive for whom? The crux of the article is rationalist organizations inviting people who are comfortable holding provocative opinions on topics that are extremely sensitive for many people. If discourse in a group is structurally accommodating to the sensitivities of some people but apathetic to the sensitivities of others - how is this not explicitly unempathetic and clearly immoral?
  • Specifically, I notice that shifting the discourse from the object level ("do we have a race problem?") to the meta-level ("are people alleging a race problem in the precisely correct manner according to our framing? no? I guess that's that.") enables avoiding the core issue. I pointed out racism in rationalist communities on Twitter once, and only received a link to Scott Alexander's Black People Likely as a response, which moved the discussion from racial bias in rationalist communities to racism against black people in polyamory circles (an interesting derailment in itself) to whether allegations of racism are being brought up correctly.
  • It's not that rationalists are "bad" people; rather, the collective assumption/pretense that all their emotions (and emotional avoidance) in all contexts do and should stem from rational thought - while leading to success in mechanistic avenues like math, programming, and capitalism, and even benefiting others through (e.g.) EA work - can cause disproportionate issues when dealing with highly emotionally sensitive topics.
  • None of this implies that the rationalist way of thinking is worse than anything else - clearly, if I presume a group is racist/immoral, I can choose to walk away (I specifically did this many years ago, after attending my first and last SSC meetup). But this presumption of airtight moral virtue, coalescence around reasoning ability, and undervaluing of empathy seem to be heading towards rationality quite literally facilitating human catastrophe while amplifying the very worst aspects of humanity.
Comment by Jackson Silver (life-is-meaningless) on [Linkpost] Guardian article covering Lightcone Infrastructure, Manifest and CFAR ties to FTX · 2024-06-17T11:27:44.582Z · LW · GW

The response from Habryka points out several factual inaccuracies, but I don't see anything that directly refutes the core issue the article brings up. I recognize that engaging with the substance of the allegations might be awkward and difficult, not constituting "winning" in the rationalist sense. 

My experience and observations of the rationalist community have been completely resonant with this section:

Daniel HoSang, a professor of American studies at Yale University and a part of the Anti-Eugenics Collective at Yale, said: “The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable. They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital.”

HoSang added: “From there, they anoint themselves the elite managers of these forces, investing in the ‘winners’ as they see fit.”

“The presence of Stephen Hsu here is particularly alarming,” HoSang concluded. “He’s often been a bridge between fairly explicit racist and antisemitic people like Ron Unz, Steven Sailer and Stefan Molyneux and more mainstream figures in tech, investment and scientific research, especially around human genetics.”

Beyond the predictable focus on dismissing the article on the basis of the reporter acting in "bad faith," I'd be curious to see whether there is any framing whatsoever that would facilitate, or even encourage, some community-wide introspection.

Comment by Jackson Silver (life-is-meaningless) on William_S's Shortform · 2024-05-04T21:35:04.045Z · LW · GW

At least one of them has explicitly indicated they left because of AI safety concerns, and this thread seems to be insinuating some concern - Ilya Sutskever's conspicuous silence has become a meme, and Altman recently expressed that he is uncertain of Ilya's employment status. There still hasn't been any explanation for the boardroom drama last year.

If it was indeed run-of-the-mill office politics and all was well, then something to the effect of "our departures were unrelated, don't be so anxious about the world ending, we didn't see anything alarming at OpenAI" would obviously help a lot of people and also be a huge vote of confidence for OpenAI.

It seems more likely that there is some (vague?) concern but it's been overridden by tremendous legal/financial/peer motivations.

Comment by Jackson Silver (life-is-meaningless) on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2024-02-10T18:16:04.174Z · LW · GW

I've been thinking about these allegations often in the context of Altman's firing circus a few months ago. I've known multiple people who suffered early childhood abuse/sexual trauma - and even dated one for a few tumultuous years a decade ago. I had a perfectly normal, happy childhood myself, and eventually came to learn that the disconnect between who they were most of the time versus in times of high stress was tremendously unintuitive (and initially intriguing) for me. It also seemed to facilitate a certain meticulousness in duplicity/compartmentalization - presenting the required image and confidently saying whatever needed to be said - which often yielded great success in many situations.

Elon Musk, as another example, has been quite public about his difficult childhood and how it might have helped him professionally, and there is ample corroboration for this. There are also definite allusions to some psycho-sexual aspects.

I cannot help but see patterns of Extreme Disconnection with Sam and consequently with OpenAI. There seems to be a clear division between people who are on his side and people who aren't. He was quite literally fired for not being candid with OpenAI's board, and his initial reaction was completely contradictory to the tone and messaging of "benefit for all mankind". The (mostly) seamless transition from a relentlessly vocalized emphasis on the "open" benevolent non-profit with an all-powerful board to whatever OpenAI is now, the selective silence of the board and especially Ilya Sutskever (presumably in the face of legal and financial muscle-flexing), Geoffrey Irving's tweet - all seem to speak to this idea of a world in which many well-meaning, intelligent people who have never been in actual conflict with him, and have massive aligned incentives, would readily believe him to be a certain kind of "good" person X who would never extrapolate to be a kind of "bad" person Y, not accounting for the unconscious-level disconnection that undergirds this.

I guess I'm wondering if I'm being unreasonably concerned about this in regard to the "future of humanity", or just projecting my own biases and experiences.