Abuse in LessWrong and rationalist communities in Bloomberg News
post by whistleblower67 · 2023-03-07T18:45:39.017Z · LW · GW · 72 comments
This is a link post for https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried#xj4y7vzkg
Try non-paywalled link here.
Damning allegations; but I expect this forum to respond with minimization and denial.
A few quotes:
At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.
Of the subgroups in this scene, effective altruism had by far the most mainstream cachet and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.
Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes.
Even leading EAs have doubts about the shift toward AI. Larissa Hesketh-Rowe, chief operating officer at Leverage Research and the former CEO of the Centre for Effective Altruism, says she was never clear how someone could tell their work was making AI safer. When high-status people in the community said AI risk was a vital research area, others deferred, she says. “No one thinks it explicitly, but you’ll be drawn to agree with the people who, if you agree with them, you’ll be in the cool kids group,” she says. “If you didn’t get it, you weren’t smart enough, or you weren’t good enough.” Hesketh-Rowe, who left her job in 2019, has since become disillusioned with EA and believes the community is engaged in a kind of herd mentality.
In extreme pockets of the rationality community, AI researchers believed their apocalypse-related stress was contributing to psychotic breaks. MIRI employee Jessica Taylor had a job that sometimes involved “imagining extreme AI torture scenarios,” as she described it in a post on LessWrong [LW · GW]—the worst possible suffering AI might be able to inflict on people. At work, she says, she and a small team of researchers believed “we might make God, but we might mess up and destroy everything.” In 2017 she was hospitalized for three weeks with delusions that she was “intrinsically evil” and “had destroyed significant parts of the world with my demonic powers,” she wrote in her post. Although she acknowledged taking psychedelics for therapeutic reasons, she also attributed the delusions to her job’s blurring of nightmare scenarios and real life. “In an ordinary patient, having fantasies about being the devil is considered megalomania,” she wrote. “Here the idea naturally followed from my day-to-day social environment and was central to my psychotic breakdown.”
Taylor’s experience wasn’t an isolated incident. It encapsulates the cultural motifs of some rationalists, who often gathered around MIRI or CFAR employees, lived together, and obsessively pushed the edges of social norms, truth and even conscious thought. They referred to outsiders as normies and NPCs, or non-player characters, as in the tertiary townsfolk in a video game who have only a couple things to say and don’t feature in the plot. At house parties, they spent time “debugging” each other, engaging in a confrontational style of interrogation that would supposedly yield more rational thoughts. Sometimes, to probe further, they experimented with psychedelics and tried “jailbreaking” their minds, to crack open their consciousness and make them more influential, or “agentic.” Several people in Taylor’s sphere had similar psychotic episodes. One died by suicide in 2018 and another in 2021.
Within the group, there was an unspoken sense of being the chosen people smart enough to see the truth and save the world, of being “cosmically significant,” says Qiaochu Yuan, a former rationalist.
Yuan started hanging out with the rationalists in 2013 as a math Ph.D. candidate at the University of California at Berkeley. Once he started sincerely entertaining the idea that AI could wipe out humanity in 20 years, he dropped out of school, abandoned the idea of retirement planning, and drifted away from old friends who weren’t dedicating their every waking moment to averting global annihilation. “You can really manipulate people into doing all sorts of crazy stuff if you can convince them that this is how you can help prevent the end of the world,” he says. “Once you get into that frame, it really distorts your ability to care about anything else.”
That inability to care was most apparent when it came to the alleged mistreatment of women in the community, as opportunists used the prospect of impending doom to excuse vile acts of abuse. Within the subculture of rationalists, EAs and AI safety researchers, sexual harassment and abuse are distressingly common, according to interviews with eight women at all levels of the community. Many young, ambitious women described a similar trajectory: They were initially drawn in by the ideas, then became immersed in the social scene. Often that meant attending parties at EA or rationalist group houses or getting added to jargon-filled Facebook Messenger chat groups with hundreds of like-minded people.
The eight women say casual misogyny threaded through the scene. On the low end, Bryk, the rationalist-adjacent writer, says a prominent rationalist once told her condescendingly that she was a “5-year-old in a hot 20-year-old’s body.” Relationships with much older men were common, as was polyamory. Neither is inherently harmful, but several women say those norms became tools to help influential older men get more partners. Keerthana Gopalakrishnan, an AI researcher at Google Brain in her late 20s, attended EA meetups where she was hit on by partnered men who lectured her on how monogamy was outdated and nonmonogamy more evolved. “If you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men” who are sometimes in positions of influence or are directly funding the movement, she wrote on an EA forum about her experiences. Her post was strongly downvoted, and she eventually removed it.
The community’s guiding precepts could be used to justify this kind of behavior. Many within it argued that rationality led to superior conclusions about the world and rendered the moral codes of NPCs obsolete. Sonia Joseph, the woman who moved to the Bay Area to pursue a career in AI, was encouraged when she was 22 to have dinner with a 40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him. Joseph says he also argued that it was normal for a 12-year-old girl to have sexual relationships with adult men and that such relationships were a noble way of transferring knowledge to a younger generation. Then, she says, he followed her home and insisted on staying over. She says he slept on the floor of her living room and that she felt unsafe until he left in the morning.
On the extreme end, five women, some of whom spoke on condition of anonymity because they fear retribution, say men in the community committed sexual assault or misconduct against them. In the aftermath, they say, they often had to deal with professional repercussions along with the emotional and social ones. The social scene overlapped heavily with the AI industry in the Bay Area, including founders, executives, investors and researchers. Women who reported sexual abuse, either to the police or community mediators, say they were branded as trouble and ostracized while the men were protected.
In 2018 two people accused Brent Dill, a rationalist who volunteered and worked for CFAR, of abusing them while they were in relationships with him. They were both 19, and he was about twice their age. Both partners said he used drugs and emotional manipulation to pressure them into extreme BDSM scenarios that went far beyond their comfort level. In response to the allegations, a CFAR committee circulated a summary of an investigation it conducted into earlier claims against Dill, which largely exculpated him. “He is aligned with CFAR’s goals and strategy and should be seen as an ally,” the committee wrote, calling him “an important community hub and driver” who “embodies a rare kind of agency and a sense of heroic responsibility.” (After an outcry, CFAR apologized for its “terribly inadequate” response, disbanded the committee and banned Dill from its events. Dill didn’t respond to requests for comment.)
Rochelle Shen, a startup founder who used to run a rationalist-adjacent group house, heard the same justification from a woman in the community who mediated a sexual misconduct allegation. The mediator repeatedly told Shen to keep the possible repercussions for the man in mind. “You don’t want to ruin his career,” Shen recalls her saying. “You want to think about the consequences for the community.”
One woman in the community, who asked not to be identified for fear of reprisals, says she was sexually abused by a prominent AI researcher. After she confronted him, she says, she had job offers rescinded and conference speaking gigs canceled and was disinvited from AI events. She says others in the community told her allegations of misconduct harmed the advancement of AI safety, and one person suggested an agentic option would be to kill herself.
For some of the women who allege abuse within the community, the most devastating part is the disillusionment. Angela Pang, a 28-year-old who got to know rationalists through posts on Quora, remembers the joy she felt when she discovered a community that thought about the world the same way she did. She’d been experimenting with a vegan diet to reduce animal suffering, and she quickly connected with effective altruism’s ideas about optimization. She says she was assaulted by someone in the community who at first acknowledged having done wrong but later denied it. That backpedaling left her feeling doubly violated. “Everyone believed me, but them believing it wasn’t enough,” she says. “You need people who care a lot about abuse.” Pang grew up in a violent household; she says she once witnessed an incident of domestic violence involving her family in the grocery store. Onlookers stared but continued their shopping. This, she says, felt much the same.
The paper clip maximizer, as it’s called, is a potent meme about the pitfalls of maniacal fixation.
Every AI safety researcher knows about the paper clip maximizer. Few seem to grasp the ways this subculture is mimicking that tunnel vision. As AI becomes more powerful, the stakes will only feel higher to those obsessed with their self-assigned quest to keep it under rein. The collateral damage that’s already occurred won’t matter. They’ll be thinking only of their own kind of paper clip: saving the world.
72 comments
Comments sorted by top scores.
comment by ChristianKl · 2023-03-08T03:04:27.115Z · LW(p) · GW(p)
“5-year-old in a hot 20-year-old’s body.”
40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him.
To me, two of the stories look like they are about the same person and that person has been banned from multiple rationalist spaces without the journalist considering it important to mention that.
↑ comment by habryka (habryka4) · 2023-03-08T04:02:24.303Z · LW(p) · GW(p)
Yeah, this seems very likely to be about Michael Vassar. Also, HPMOR spoiler:
I also think him "bragging" about this is quite awkward, since modeling literal Voldemort after you is generally not a compliment. I also wouldn't believe that "bragging" has straightforwardly occurred.
comment by Vanessa Kosoy (vanessa-kosoy) · 2023-03-08T06:10:22.374Z · LW(p) · GW(p)
FWIW, I'm a female AI alignment researcher and I never experienced anything even remotely adjacent to sexual misconduct in this community. (To be fair, it might be because I'm not young and attractive; more likely the Bloomberg article is just extremely biased.)
↑ comment by Portia (Making_Philosophy_Better) · 2023-09-24T14:05:53.060Z · LW(p) · GW(p)
That unfortunately implies nothing. Abusers will rarely abuse everyone they encounter, but pick vulnerable and isolated victims purposefully, and often also purposefully cultivate a public persona that covers their abuse. It is entirely possible and common to work with abusers daily and experience them as charming and lovely while they are absolutely awful to others. I believe you had a great time, but that does not make me believe the victims less in any way, and I would hope this is true for other readers, too.
comment by Vladimir_Nesov · 2023-03-07T20:48:45.470Z · LW(p) · GW(p)
Relevant: Cardiologists and Chinese Robbers.
↑ comment by trevor (TrevorWiesinger) · 2023-03-07T22:38:21.978Z · LW(p) · GW(p)
I read it; that was a pretty good one, and also short. It reminds me of Gell-Mann amnesia.
comment by Vaniver · 2023-03-09T23:09:34.457Z · LW(p) · GW(p)
Damning allegations; but I expect this forum to respond with minimization and denial.
One quoted section is about Jessica Taylor's post on LW, which was controversial but taken seriously. (I read a draft of the post immediately preceding it and encouraged her to post it on LW.) Is that minimization or denial?
Out of the other quoted sections (I'm not going to click through), allegations are made against only one named person: Brent Dill. We took that seriously at the time [LW(p) · GW(p)] and I later banned him from LessWrong [LW · GW]. Is that minimization or denial?
To be clear, I didn't ban him directly for the allegations, but for related patterns of argumentation and misbehavior. I think the risks of online spaces are different from the risks of in-person spaces; like the original Oxford English Dictionary, I think Less Wrong the website should accept letters from murderers in asylums, even if those people shouldn't be allowed to walk the streets. I think it's good for in-person events and organizations to do their part to keep their local communities welcoming and safe, while it isn't the place of the whole internet to try to adjudicate those issues; we don't have enough context to litigate them in a fair and wise way.
[I do not hold any positions of power in my local Berkeley rationalist scene, but am nevertheless open to hear people's worries and try to pass them on to people who are in the appropriate positions of power to do something about it.]
comment by titotal (lombertini) · 2023-03-08T14:41:09.297Z · LW(p) · GW(p)
A lot of the defenses here seem to be relying on the fact that one of the accused individuals was banned from several rationalist communities a long time ago. While this definitely should have been included in the article, I think the overall impression they are giving is misleading.
In 2020, the individual was invited to give a talk for an unofficial SSC online meetup (Scott Alexander was not involved, and does ban the guy from his own events). The talk was announced on LessWrong with zero pushback, and went ahead.
Here is a comment from Anna Salamon 2 years ago, discussing him, and stating that his ban on meetups should be lifted:
I hereby apologize for the role I played in X's ostracism from the community, which AFAICT was both unjust and harmful to both the community and X. There's more to say here, and I don't yet know how to say it well. But the shortest version is that in the years leading up to my original comment X was criticizing me and many in the rationality and EA communities intensely, and, despite our alleged desire to aspire to rationality, I and I think many others did not like having our political foundations criticized/eroded, nor did I and I think various others like having the story I told myself to keep stably “doing my work” criticized/eroded. This, despite the fact that attempting to share reasoning and disagreements is in fact a furthering of our alleged goals and our alleged culture. The specific voiced accusations about X were not “but he keeps criticizing us and hurting our feelings and/or our political support” — and nevertheless I’m sure this was part of what led to me making the comment I made above (though it was not my conscious reason), and I’m sure it led to some of the rest of the ostracism he experienced as well. This isn’t the whole of the story, but it ought to have been disclosed clearly in the same way that conflicts of interest ought to be disclosed clearly. And, separately but relatedly, it is my current view that it would be all things considered much better to have X around talking to people in these communities, though this will bring friction.
There’s broader context I don’t know how to discuss well, which I’ll at least discuss poorly:
Should the aspiring rationality community, or any community, attempt to protect its adult members from misleading reasoning, allegedly manipulative conversational tactics, etc., via cautioning them not to talk to some people? My view at the time of my original (Feb 2019) comment was “yes”. My current view is more or less “heck no!”; protecting people from allegedly manipulative tactics, or allegedly misleading arguments, is good — but it should be done via sharing additional info, not via discouraging people from encountering info/conversations. The reason is that more info tends to be broadly helpful (and this is a relatively fool-resistant heuristic even if implemented by people who are deluded in various ways), and trusting who can figure out who ought to restrict their info-intake how seems like a doomed endeavor (and does not degrade gracefully with deludedness/corruption in the leadership). (Watching the CDC on covid helped drive this home for me. Belatedly noticing how much something-like-doublethink I had in my original beliefs about X and related matters also helped drive this home for me.)
Should some organizations/people within the rationality and EA communities create simplified narratives that allow many people to pull in the same direction, to feel good about each others’ donations to the same organizations, etc.? My view at the time of my original (Feb 2019) comment was “yes”; my current view is “no — and especially not via implicit or explicit pressures to restrict information-flow.” Reasons for updates same as above.
It is nevertheless the case that X has had a tendency to e.g. yell rather more than I would like. For an aspiring rationality community’s general “who is worth ever talking to?” list, this ought to matter much less than the above. Insofar as a given person is trying to create contexts where people reliably don’t yell or something, they’ll want to do whatever they want to do; but insofar as we’re creating a community-wide include/exclude list (as in e.g. this comment on whether to let X speak at SSC meetups), it is my opinion that X ought to be on the “include” list.
Here is Scott Alexander, talking about the same guy a year ago, after discussing a pattern of very harmful behaviour perpetrated by X:
I want to clarify that I don't dislike X, he's actually been extremely nice to me, I continue to be in cordial and productive communication with him, and his overall influence on my life personally has been positive. He's also been surprisingly gracious about the fact that I go around accusing him of causing a bunch of cases of psychosis. I don't think he does the psychosis thing on purpose, I think he is honest in his belief that the world is corrupt and traumatizing (which at the margin, shades into values of "the world is corrupt and traumatizing" which everyone agrees are true) and I believe he is honest in his belief that he needs to figure out ways to help people do better. There are many smart people who work with him and support him who have not gone psychotic at all. I don't think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people. My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.
I don't think X is still part of the rationalist community, but these definitely make it look like he is welcome and respected within your community, despite the many many allegations against him. I'll end with a note that "Being ostracised from a particular subculture" is not actually a very severe punishment, and that maybe you should consider raising your standards somewhat?
↑ comment by habryka (habryka4) · 2023-03-08T18:20:04.627Z · LW(p) · GW(p)
I personally think the current relationship the community has to Michael feels about right in terms of distance.
I also want to be very clear that I have not investigated the accusations against Michael and don't currently trust them hugely for a bunch of reasons, though they seem credible enough that I would totally investigate them if I thought that Michael would pose a risk to more people in the community if the accusations were true.
As it is, given the current level of distance, I don't see it as hugely my, or the rationality community's, responsibility to investigate them, though if I had more time and were less crunched, I might.
comment by Joseph Miller (Josephm) · 2023-03-09T23:10:05.877Z · LW(p) · GW(p)
Several things can be true simultaneously:
- This article is similar to much other mainstream coverage of EA/rationality and paints the community in an unfairly negative light.
- The specific claims in the article have been previously addressed.
- There is no good evidence that the LW / rationalist community has higher than average levels of abuse.
- It is worthwhile putting effort into finding out whether the community has higher than average levels of abuse, which does not seem to have been done by people in the community. Given the gender imbalance, our prior should be that higher than average levels of abuse are somewhat likely.
- We can and should have much lower than average levels of abuse.
- This community strives to exceed the rest of society in many domains. It is anomalous that people are quite uninterested in optimizing this as it seems clearly important.
To be clear, I'm not at all confident that all of the empirical claims above are true. But it seems that people are using the earlier points as an excuse to ignore the later ones.
↑ comment by lc · 2023-03-10T01:10:03.009Z · LW(p) · GW(p)
Agreed. I think while we're at it, we should also investigate the DNC for child sex trafficking. After all:
- There is no good evidence that DNC staffers abuse children at any higher-than-average rate, but:
- We can and should lower average levels of child sexual abuse.
- It is worthwhile putting effort into finding out if a community has higher than average levels of child sexual abuse, which does not seem to have been done by DNC staffers.
- The DNC strives to exceed the rest of society in many domains. It's anomalous that such people seem quite disinterested in optimizing this problem, as preventing child sexual abuse is clearly important.
↑ comment by Joseph Miller (Josephm) · 2023-03-10T02:42:05.748Z · LW(p) · GW(p)
I don't think there is grounds for a high profile external investigation into the rationalist community.
But yes, we should try to be better than the rest of society in every way. I think the risk of sexual abuse is high enough that this would be a profitable use of resources whereas my prior is that the risk of child abuse (at least child sex trafficking) does not merit spending effort to investigate.
Idk anything about the DNC so I don't know what would be worth their effort to do.
I think you are suggesting that I am committing the fallacy of privileging the hypothesis, but I think the stories in the article and associated comment sections are sufficient to raise this to our attention.
↑ comment by lc · 2023-03-10T07:52:55.311Z · LW(p) · GW(p)
I think you are suggesting that I am committing the fallacy of privileging the hypothesis...
No, I am accusing you of falling for a naked political trap. Internet accusations of pedophilia by DNC staffers are not made in good faith, and in fact the content of the accusation (dems tend to be pedophiles) is selected to be maximally f(hard to disprove, disturbing). If the DNC took those reports seriously and started to "allocate resources toward the problem", it would first be a waste of resources, but second (and more importantly) it would lend credibility to the initial accusations no matter what their internal investigation found or what safeguards they put in place. There's no basic reason to believe the DNC contains a higher number of sexual predators than e.g. a chess club, so the review itself is unwarranted and is an example of selective requirements.
In the DNC's case, no one actually expects them to be stupid enough to litigate the claim in public by going over every time someone connected to the DNC touched a child and debating whether or not it's a fair example. I think that's a plausible outcome for rationalists, though, who are not as famously sensible as DNC staffers.
↑ comment by Portia (Making_Philosophy_Better) · 2023-09-24T14:24:40.177Z · LW(p) · GW(p)
You don't think that picture ought to change in the hypothetical parallel scenario of multiple children independently saying that they were sex trafficked by DNC staffers, and also notably saying that they were given reasons for why this was normal and unfixable and in fact probably an average and hence acceptable rate of sex trafficking, reasons and arguments that were directly derived from Democratic positions?
This is not a random outside accusation meant to frame the rationalist community. It comes from people drawn to the community by the promise of rationality and ethics, and then horribly disillusioned; people who are referencing not just abuse, but abuse specifically tied to rationalist content. The story of the girl who committed suicide was horrifying: she had literally been led to believe that this community was the only place to be rational, and that being repeatedly sexually assaulted in it, in ways she found unbearable, was utterly inevitable. She wasn't just assaulted; she was convinced that it was irrational to expect humane treatment as a woman, to the degree that she might as well commit suicide if she was committed to rationality. That speaks to a tremendous systematic problem. How can the first response to that be "I bet it is this bad in other communities, too, so we needn't do anything, not even investigate whether it actually is equally bad elsewhere or whether that is just a poor justification for doing nothing"?
↑ comment by Joseph Miller (Josephm) · 2023-03-10T11:37:51.756Z · LW(p) · GW(p)
Oh okay, I misunderstood. I forgot about that whole DNC scandal.
I agree that a public investigation would probably hurt the rationalists' reputation.
However, reputation is only one consideration, and the key disanalogy is still the level of evidence. Also, a discreet investigation may be possible.
↑ comment by Kenny · 2023-03-28T02:11:19.127Z · LW(p) · GW(p)
It is anomalous that people are quite uninterested in optimizing this as it seems clearly important.
I have the opposite sense. Many people seem very interested in this.
"This community" is a nebulous thing and this site is very different than any of the 'in-person communities'.
But I don't think there's strong evidence that the 'communities' don't already "have much lower than average levels of abuse". I have an impression that, among the very-interested-in-this people, any abuse is too much.
comment by lc · 2023-03-08T18:39:02.150Z · LW(p) · GW(p)
Damning allegations; but I expect this forum to respond with minimization and denial.
Minimization and denial are appropriate when you're being slandered.
↑ comment by Noosphere89 (sharmake-farah) · 2023-03-08T21:53:30.755Z · LW(p) · GW(p)
I don't agree with this take, though I do think there's a common error that almost everyone makes, including LWers: ignoring base rates and favoring special, tailored explanations over general ones. This article seems to commit that error.
I do think that the verifiable facts are correct, but I don't believe the framing is right.
comment by whistleblower67 · 2023-03-08T01:00:34.527Z · LW(p) · GW(p)
The minimization and denial among these comments is horrifying.
I am a female AI researcher. I come onto this forum for Neel Nanda's interpretability research which has recently been fire. I've experienced abuse in these communities which makes the reaction here all the more painful.
I don't want to come onto this forum anymore.
This is how women get driven out of AI.
↑ comment by Daniel (daniel-glasscock) · 2023-03-08T01:54:06.367Z · LW(p) · GW(p)
It is appropriate to minimize things which are in fact minimal. The majority of these issues have been litigated (metaphorically) before. The fact that they are being brought up over and over again in media articles does not ipso facto mean that the incident has not been adequately dealt with. You can make the argument that these incidents are part of a larger culture problem, but you have to actually make the argument. We're all Bayesians here, so look at the base rates.
The one piece of new information which seems potentially important is the part where Sonia Joseph says, "he followed her home and insisted on staying over." I would like to see that incident looked into a bit more.
↑ comment by pmk · 2023-03-10T03:49:28.718Z · LW(p) · GW(p)
Given the gender ratio in EA and rationality, it would be surprising if women in EA/rationality didn’t experience more harassment than women in other social settings with more even gender ratios. Consider a simplified case: suppose 1% of guys harass women and EA/rationality events are 10% women. Then in a group of 1000 EAs/rationalists there would be 9 harassers targeting 100 women. But if the gender ratio was even, then there would be 5 harassers targeting 500 women. So the probability of each woman being targeted by a harasser is lower in a group with more even gender ratio. For it to be the case that women in EA/rationality experience the same amount of harassment as women in other social settings the men in EA/rationality would need to be less likely to harass women than the average man in other social settings. It is also possible that the average man in EA/rationality is more likely to harass women than the average man in other social settings. I can think of some reasons for this (being socially clumsy, open to breaking social norms etc) and some against (being too shy to make advances, aspiring to high moral standards in EA etc).
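For readers who want to check the arithmetic, here is a minimal sketch in Python of the toy model above. The 1% harasser rate, the group size of 1,000, and the assumption that the per-man rate is the same across communities are the commenter's illustrative assumptions, not measured figures.

```python
# Toy model from the comment above: with a fixed fraction of men who harass,
# a more male-skewed group means more harassers per woman.

def harassers_per_woman(group_size: int, fraction_women: float,
                        harasser_rate_among_men: float = 0.01) -> float:
    """Expected number of harassing men per woman in the group (toy model)."""
    women = group_size * fraction_women
    men = group_size - women
    harassers = men * harasser_rate_among_men
    return harassers / women

# 10% women (the commenter's EA/rationality example) vs. an even gender ratio
print(harassers_per_woman(1000, 0.10))  # 9 harassers / 100 women = 0.09
print(harassers_per_woman(1000, 0.50))  # 5 harassers / 500 women = 0.01
```

Under these assumptions, per-woman exposure scales roughly with the men-to-women ratio, which is the commenter's point; whether the actual per-man rate differs between communities is the separate empirical question the rest of the comment raises.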
↑ comment by whistleblower67 · 2023-03-09T02:09:28.913Z · LW(p) · GW(p)
While many of these claims are "old news" to those communities, many of these claims are fresh. The baseline rate reasoning is flawed because a) sexual assault remains the most underreported crime, so there is likely instead an iceberg effect, and b) women who were harassed/assaulted have left the movement which changes your distribution, and c) women who would enter your movement otherwise now stay away due to whisper networks and bad vibes.
↑ comment by Daniel (daniel-glasscock) · 2023-03-09T18:08:41.254Z · LW(p) · GW(p)
While many of these claims are "old news" to those communities, many of these claims are fresh.
Can you clarify which specific claims are new? A claim which hasn’t been previously reported in a mainstream news article might still be known to people who have been following community meta-drama.
The baseline rate reasoning is flawed because a) sexual assault remains the most underreported crime, so there is likely instead an iceberg effect,
I’m not sure how this refutes the base rate argument. The iceberg effect exists for both the rationalist community and for every other community you might compare it to (including the ones used to compute the base rates). These should cancel out unless you have reason to believe the iceberg effect is larger for the rationalist community than for others. (For all you know, the iceberg effect might be lower than baseline due to norms about speaking clearly and stating one’s mind.)
b) women who were harassed/assaulted have left the movement which changes your distribution,
Maybe? This seems more likely to confound the data than a) or c), but again there are reasons to suppose the effect might lean the other way. (Women might be more willing to tolerate bad behavior if they think it's important to work on alignment than they would be at, say, their local Magic: The Gathering group.)
c) women who would enter your movement otherwise now stay away due to whisper networks and bad vibes.
Even if true, I don’t see how that would be relevant here? Women who enter the movement, get harassed, and then leave would make the harassment rate seem lower because their incidents don’t get counted. Women who never entered the movement in the first place wouldn’t affect the rate at all.
↑ comment by the gears to ascension (lahwran) · 2023-03-08T03:05:03.184Z · LW(p) · GW(p)
Strong upvote. As another female ai researcher: yeah, it's bad here, as it is everywhere to some degree.
To other commenters, especially ones hesitant to agree that there have been problems due to structural issues, claiming otherwise doesn't make this situation look better - the local network of human connections can only look good to the outer world of humans by being precise about what problems have occurred and what actual knowledge and mechanisms can prevent them. you're not gonna retain your looking good points to the public by groveling about it, nor by claiming there's no issue; you'll retain looking good points by actually considering the problem each time it comes up, discussing the previous discussions, etc. (though, of course, like, efficiently, according to taste. Not everyone has to write huge braindumps like I find myself often wanting to.) nobody can tell you how to be a good person; just be one. put your zines in the slot in the door and we'll copy em and print em out. but dont worry about making a fuss apologizing; make a fuss explaining a verifiable understanding.
Some local fragments of social network are somewhat well protected by local emotional habits; but many subgroups have unhealthy patterns even now, years and years after the event in the article.
So, to minimizing commenters, don't dismiss the messenger; people being upset are not a reason for you to be upset too, it's a reason for you to have sympathy and take time to think. You're safe from this criticism with me, and I will argue for you to be safe from it for the most part even if you've caused harm, because I believe in reparative and restorative justice; victims aren't safe until there's a network of solidarity that prevents the harm in the first place and the process of getting there can be accelerated by frequent sources of harm getting to [pretend to not be causing harm]{comment 7mo later: ehh, I mean more like, others won't apply current typical retribution; not hiding of the harm} if they'll help as hard as they can with preventing anyone from ever harming another again - starting with themselves.
(Though it's understandable if you're sympathetically upset or worried about being a victim. These things are less gendered than popular portrayal implies.)
In the local words: Simulacrum 3 is not optional, you just gotta ensure you can promise to be on Simulacrum 1 when discussing issues and your views about how to deal with them.
To whistleblower67, some commentary on why I'm here: very few communities live like the future we should have, though some institutions are better at preventing abuse than others - I imagine there are institutions the reader thinks of as good, maybe better than the lesswrong crowd or maybe not, and I would caution that many of those institutions also have abuse patterns. eg, universities come to mind as probably having a slightly lower rate of abusers but still very much not zero, except, now that I say that I'm not even confident. I initially hoped that it would be less terrible here in the rat community; it hasn't been. it's the same problem as every other online network I've been in, and this one has it about as bad as a typical online network of emotional connections, maybe worse sometimes, maybe better others. Which is to say, fuckin awful, not to be minimized, we don't heal all communities magically with a snap of our fingers. I want to feel comfy here - and I don't.
And also, I'm not leaving the research field just because nobody agentically abusive wants to fess up to having been an agentic abuser. If human-scale community abusers are enough to stop our research on inter-agent coprotective alignment, that seems bad, especially if some of the people working on inter-agent coprotective alignment don't care to actually work on it because they themselves are misaligned with decency! It seems to me that these situations are issues of the interpersonal alignment problem not being solved: how do we design a system where the members of that system can learn to trust each other when needed and distrust when it isn't valid, so that if someone is abused, the victim can speak out against the abuser safely, and yet the abuser cannot use this same system against the victim.
If you have suggestions for other forums I should also visit to discuss the mechanical details of coprotective ai design as an independent researcher, I'd love to hear them, ideally in DM so the annoying boys don't join too quick. but there are plenty of ladies roughing it here in the hope of making a difference, plenty of "people of gender" so to speak, in general. We can protect each other, but doing so is not trivial; don't join this community blindly, but don't join any community blindly. To build solidarity, one must be ready to defend those in need.
and a quick demo of what this looks like when I don't feel so agitated by the topic:
Cults are bad actually, and I often say so with links to cult resistance education [LW(p) · GW(p)] on this forum when people fret about cults or even giggle about cults being fun, because of the issues that have occurred around here from people thinking trusting authorities is a good idea.
The links from my referenced comment, pasted here for others passing by:
- https://www.microsolidarity.cc/articles/cults - this site is in general my favorite site on the internet right now; it discusses how to create healthy co-supportive, solidarity-heavy social groups without accepting domination or cutting off friendships. Domination and isolation being a key component of cults.
- https://medium.com/@zelphontheshelf/10-signs-youre-probably-in-a-cult-1921eb5a3857
- https://www.culteducation.com/warningsigns.html
- http://www.ex-cult.org/fwbo/CofC.htm
- https://www.lesswrong.com/tag/cults [? · GW] (of course)
- https://faenrandir.github.io/a_careful_examination/asking-if-its-a-cult-is-wrong-question/
- https://cultrecover.com/cultdef
- https://cultrecovery101.com/faq/
- https://cultrecovery101.com/cult-recovery-readings/checklist-of-cult-characteristics/
it is the system of unfairness that must end, not the people in it, even on top. no death penalty, no penalty of forced labor, no penalty of total confinement, for causing harm to others, in my view. Temporary confinement and fair representation of harms to the social network instead. We need to change how these systems of justice work fundamentally in order to make them durable to the degree needed to protect against the coming era.
↑ comment by Liron · 2023-03-08T01:58:51.813Z · LW(p) · GW(p)
It seems from your link like CFAR has taken responsibility, taken corrective action, and states how they’ll do everything in their power to avoid a similar abuse incident in the future.
I think in general the way to deal with abuse situations within an organization is to identify which authority should be taking appropriate disciplinary action regarding the abuser’s role and privileges. A failure to act there, like CFAR’s admitted process failure that they later corrected, would be concerning if we thought it was still happening.
If every abuse is being properly disciplined by the relevant organization, and the rate of abuse isn’t high compared to the base rate in the non-rationalist population, then the current situation isn’t a crisis - even if some instances of abuse unfortunately involve the perpetrator referencing rationality or EA concepts.
↑ comment by Ebenezer Dukakis (valley9) · 2023-03-08T11:54:52.773Z · LW(p) · GW(p)
Sorry you experienced abuse. I hope you will contact the CEA Community Health Team and make a report: https://forum.effectivealtruism.org/posts/hYh6jKBsKXH8mWwtc/contact-people-for-the-ea-community [EA · GW]
comment by David_Kristoffersson · 2023-03-08T21:31:29.810Z · LW(p) · GW(p)
I think the healthy and compassionate response to this article would be to focus on addressing the harms victims have experienced. So I find myself disappointed by much of the voting and comment responses here.
I agree that the Bloomberg article doesn't acknowledge that most of the harms that they list have been perpetrated by people who have already mostly been kicked out of the community, and uses some unfair framings. But I think the bigger issue is that of harms experienced by women that may not have been addressed: that of unreported cases, and of insufficient measures taken against reported ones. I don't know if enough has been done, so it seems unwise to minimize the article and people who are upset about the sexual misconduct. And even if enough has been done in terms of responses and policy, I would prefer seeing more compassion.
↑ comment by Kenny · 2023-03-28T03:28:47.217Z · LW(p) · GW(p)
I think some empathy and sympathy is warranted to the users of the site that had nothing to do with any of the alleged harms!
It is pretty tiresome to be accused-by-association. I'm not aware of any significant problems with abuse "in LessWrong". And, from what I can tell, almost all of the alleged abuse happened in one particular 'rationalist community', not all, most, or even many of them.
I'm extremely skeptical that the article or this post were inspired by compassion towards anyone.
comment by trevor (TrevorWiesinger) · 2023-03-07T21:59:36.014Z · LW(p) · GW(p)
I read it, wish I hadn't. It's the usual thing with very large amounts of smart-sounding words and paragraphs, and a very small amount of thought being used to generate them.
↑ comment by Ben Pace (Benito) · 2023-03-07T22:01:35.743Z · LW(p) · GW(p)
Thanks for saying. Sounds like another piece I will skip!
While I am generally interested in justice around these parts, I generally buy the maxim that if the news is important, I will hear the key info in it directly from friends (this was true both for covid and for Russia-nukes stuff), and that otherwise the news media spend enough effort on narrative-control that I'd much rather not even read the media's account of things.
↑ comment by ojorgensen · 2023-03-08T16:44:36.475Z · LW(p) · GW(p)
This seems like a bad rule of thumb. If your social circle is largely composed of people who have chosen to remain within the community, ignoring information from "outsiders" seems like a bad strategy for understanding issues with the community.
↑ comment by Ben Pace (Benito) · 2023-03-08T18:11:22.045Z · LW(p) · GW(p)
Yeah, but that doesn't sound like my strategy. I've many times talked to people who are leaving or have left, and interviewed them about why, what they didn't like, and their reasons for leaving.
↑ comment by ojorgensen · 2023-03-08T18:21:14.493Z · LW(p) · GW(p)
Didn't get that impression from your previous comment, but this seems like a good strategy!
comment by Kenny · 2023-03-28T04:05:05.898Z · LW(p) · GW(p)
Damning allegations; but I expect this forum to respond with minimization and denial.
This is so spectacularly bad faith that it makes me think the reason you posted this is pretty purely malicious.
Out of all of the LessWrong and 'rationalist' "communities" that have existed, how many are ones for which any of the alleged bad acts occurred? One? Two?
Out of all of the LessWrong users and 'rationalists', how many have been accused of these alleged bad acts? Mostly one or two?
Having observed extremely similar dynamics around, e.g., sexual harassment in several different online and in-person 'communities', I've found the 'communities' of or affiliated with 'rationality', LessWrong, and EA to be, far and away, the most diligent about actually effectively mitigating, preventing, and (reasonably) punishing bad behavior.
It is really unclear what standards the 'communities' are failing to meet and that makes me very suspicious that those standards are unreasonable.
comment by nim · 2023-03-07T22:36:47.567Z · LW(p) · GW(p)
I read the first half and kind of zoned out -- I wish that the author had shown any examples of communities lacking such problems, to contrast EA against.
↑ comment by Celer · 2023-03-08T01:49:34.828Z · LW(p) · GW(p)
How do you expect journalism to work? The author is trying to contribute one specific story, in detail. Readers have other experiences to compare and draw from. If this was an academic piece, I might be more sympathetic.
↑ comment by habryka (habryka4) · 2023-03-08T05:02:11.863Z · LW(p) · GW(p)
I feel confused by this argument.
The core thesis of the post seems to rely on the level of abuse in this community being substantially higher than in other communities (the last sentence seems to make that pretty explicit). I think if you want to compellingly argue for your thesis you should provide the best evidence you have for that thesis. Journalism commonly being full of fallacious reasoning doesn't mean that it's good or forgivable for journalism to reason fallaciously.
I do think it's good for journalists to summarize and distill concrete data from time to time, but in that case people still clearly benefit if the data is presented in a relatively unbiased way that doesn't distort the underlying truth a lot, or omit crucial pieces of information that the journalist very likely knew but that didn't contribute to their narrative. I think journalists not doing that is condemnable, and the resulting articles are rarely worth reading.
↑ comment by Ben (ben-lang) · 2023-03-08T10:55:25.078Z · LW(p) · GW(p)
I don't think the core thesis is "the level of abuse in this community is substantially higher than in others". Even if we (very generously) just assumed that the level of abuse in this community was lower than that in most places, these incidents would still be very important to bring up and address.
When an abuse of power arises, the organisation/community in which it arises has roughly two possible approaches: clamping down on it or covering it up. The purpose of the first is to solve the problem; the purpose of the second is to maintain the reputation of the organisation. (How many of those Catholic Church child abuse stories were covered up because they were worried about the reputational damage to the church?) By focusing on the relative abuse level it seems like you are seeing these stories (primarily) as an attack on the reputation of your tribe ("A blue abused someone? No he didn't, it's Green propaganda!"). Does it matter whether the number of children abused in the Catholic Church was higher than the number abused outside it?
If that is the case, then there is nothing wrong with that emotional response. If you feel a sense of community with a group and you yourself have never experienced the problem, it can just feel like an attack on something you like. The journalist might even be badly motivated (e.g. they think an editorial line against EA will go down well). But I still think it's a fairly unhelpful response.
Of course, one could argue that the position "Obviously deal with these issues, but also they are very rare and our tribe is actually super good" is perfectly logically consistent. And it is. But the language is doing extra work: by putting "us good" next to the issue, it sounds like minimising or dismissing the issue. Put another way, claims of "goodness" could be made in one post and left out of the sex abuse discussion. The two are not very linked.
↑ comment by Noosphere89 (sharmake-farah) · 2023-03-08T14:21:18.382Z · LW(p) · GW(p)
Does it matter whether the number of children abused in the catholic church was higher than the number abused outside it?
Yes, it does matter here, since base rates matter in general.
Honestly, one of my criticisms that I want to share as a post later on is that LW ignores base rates and focuses too much on the inside view over the outside view. In this case it matters because the analogous claim would be that the church is uniquely bad on sexual assault, and if it turned out that it wasn't uniquely bad, then we don't have to panic.
That's the importance of base rates: they give you a solid number to compare against. Things are rarely as unprecedented or new as they seem the first time you encounter them.
↑ comment by Ben (ben-lang) · 2023-03-08T16:38:24.242Z · LW(p) · GW(p)
The base-rates post sounds like an interesting one; I look forward to it. But, unless I am very confused, the base rates are only ever going to help answer questions like "Is this group of people better than society in general by metric X?" (You can bring a choice Hollywood producer and Prince out as part of the control group.) My point was that I think a more useful question might be something like "Why was the response to this specific incident inadequate?"
↑ comment by Noosphere89 (sharmake-farah) · 2023-03-08T17:04:08.567Z · LW(p) · GW(p)
That might be the problem here, since there seem to be two different conversations, going by the article:
- Why was this incident not responded to adequately?
- Is our group meaningfully worse or better, compared to normal society? And why is it worse or better?
↑ comment by Quadratic Reciprocity · 2023-03-08T16:07:49.359Z · LW(p) · GW(p)
I can see how the article might be frustrating for people who know the additional context that the article leaves out (where some of the additional context is simply having been in this community for a long time and having more insight into how it deals with abuse). From the outside though, it does feel like some factors would make abuse more likely in this community: how salient "status" feels, mixing of social and professional lives, gender ratios, conflicts of interests everywhere due to the community being small, sex positivity and acceptance of weirdness and edginess (which I think are great overall!). There are also factors pushing in the other direction of course.
I say this because it seems very reasonable for someone who is new to the community to read the article and the tone in the responses here and feel uncomfortable interacting with the community in the future. A couple of women in the past have mentioned to me that they haven't engaged much with the in-person rationalist community because they expect the culture to be overly tolerant of bad behaviour, which seems sad because I expect them to enjoy hanging out in the community.
I can see the reasons behind not wanting to give the article more attention if it seems like a very inaccurate portrayal of things. But it does feel like that makes this community feel more unwelcoming to some newer people (especially women) who would otherwise like to be here and who don't have the information about how the things mentioned in the article were responded to in the past.
↑ comment by habryka (habryka4) · 2023-03-08T18:15:12.797Z · LW(p) · GW(p)
Yeah, I might want to write a post that tries to actually outline the history of abuse that I am aware of, without doing weird rhetorical tricks or omitting information. I've recently been on a bit of a "let's just put everything out there in public" spree, and I would definitely much prefer for new people to be able to get an accurate sense of the risk of abuse and harm, which, to be clear, is definitely not zero and feels substantial enough that people should care about it.
I do think the primary reason why people haven't written up stuff in the past is exactly because they are worried their statements will get ripped out of context and used as ammunition in hit pieces like this, so I actually think articles like this make the problem worse, not better, though I am not confident of this, and the chain of indirect effects is reasonably long here.
↑ comment by Quadratic Reciprocity · 2023-03-09T22:32:19.352Z · LW(p) · GW(p)
I would be appreciative if you do end up writing such a post.
Sad that sometimes the things that seem good for creating a better, more honest, more accountable community for the people in it also give outsiders ammunition. My intuitions point strongly in the direction of doing things in this category anyway.
↑ comment by LVSN · 2023-03-08T13:19:46.159Z · LW(p) · GW(p)
I don't disagree with the main thrust of your comment, but,
I just wanna point out that 'fallacious' is often a midwit objection, and either 'fallacious' is not the true problem or it is the true problem but the stereotypes about what is fallacious do not align with reality: A Unifying Theory in Defense of Logical Fallacies
↑ comment by habryka (habryka4) · 2023-03-08T18:10:54.357Z · LW(p) · GW(p)
Yeah, that's fair. I was mostly using it as a synonym for "badly reasoned and inaccurate" here. Agree that there are traps around policing speech by trying to apply rhetorical fallacies, which I wasn't trying to do here.
↑ comment by TAG · 2023-03-08T00:36:26.880Z · LW(p) · GW(p)
Mainstream academia?
↑ comment by ChristianKl · 2023-03-08T02:42:47.996Z · LW(p) · GW(p)
A bit of searching brings me to https://elephantinthelab.org/sexual-harassment-in-academia/ :
Is Sexual Harassment in the Academy a Problem?
Yes. Research on sexual harassment in the academy suggests that it remains a prevalent problem. In a 2003 study examining incidences of sexual harassment in the workplace across private, public, academic, and military industries, Ilies et al (2003) found academia to have the second highest rates of harassment, second only to the military. More recently, a report by the The National Academies of Sciences, Engineering, and Medicine (NASEM) summarized the persistent problem of sexual harassment in academia with regard to faculty-student harassment, as well as faculty-faculty harassment. To find more evidence of this issue, one can also turn to Twitter – as Times Higher Education highlighted in their 2019 blog.
Another paper suggests:
In 2019, the Association of American Universities surveyed 33 prominent research universities and found 13% of all students experienced a form of sexual assault and 41.8% experienced sexual harassment (Cantor et al., 2020).
Mainstream academia is not free from sexual abuse.
↑ comment by MSRayne · 2023-03-08T01:24:53.794Z · LW(p) · GW(p)
Whataboutism is a fallacy.
Replies from: gjm, Kenny↑ comment by gjm · 2023-03-08T01:52:54.243Z · LW(p) · GW(p)
It is. But if someone is saying "this group of people is notably bad" then it's worth asking whether they're actually worse than other broadly similar groups of people or not.
I think the article, at least to judge from the parts of it posted here, is arguing that rationalists and/or EAs are unusually bad. See e.g. the final paragraph about paperclip-maximizers.
Replies from: MSRayne↑ comment by MSRayne · 2023-03-08T14:14:49.859Z · LW(p) · GW(p)
I fail to see why it matters what other broadly similar groups of people do. Rationalists ought to predict and steer the future better than other kinds of people, and so should be held to a higher standard. Deflecting with "but all the other kids are equally abusive!" is just really stupid.
As for the article, I'm not concerned with the opinion of a journalist either; they can be confused or bombastic about the exact extent of the problem if they want, which is rather standard for journalists. But I don't doubt that the problem is real and wasn't preemptively addressed, which bothers me, because the founders of this community are more than smart enough to have at least made an attempt to do so.
Replies from: gjm↑ comment by gjm · 2023-03-08T21:32:29.909Z · LW(p) · GW(p)
Whether it matters what other broadly similar groups do depends on what you're concerned with and why.
If you're, say, a staff member at an EA organization, then presumably you are trying to do the best you could plausibly do, and in that case the only significance of those other groups would be that if you have some idea how hard they are trying to do the best they can, it may give you some idea of what you can realistically hope to achieve. ("Group X has such-and-such a rate of sexual misconduct incidents, but I know they aren't really trying hard; we've got to do much better than that." "Group Y has such-and-such a rate of sexual misconduct incidents, and I know that the people in charge are making heroic efforts; we probably can't do better.")
So for people in that situation, I think your point of view is just right. But:
If you're someone wondering whether you should avoid associating with rationalists or EAs for fear of being sexually harassed or assaulted, then you probably have some idea of how reluctant you are to associate with other groups (academics, Silicon Valley software engineers, ...) for similar reasons. If it turns out that rationalists or EAs are pretty much like those, then you should be about as scared of rationalists as you are of them, regardless of whether rationalists should or could have done better.
If you're a Less Wrong reader wondering whether these are Awful People that you've been associating with and you should be questioning your judgement in thinking otherwise, then again you probably have some idea of how Awful some other similar groups are. If it turns out that rationalists are pretty much like academics or software engineers, then you should feel about as bad for failing to shun them as you would for failing to shun academics or software engineers.
If you're a random person reading a Bloomberg News article, and wondering whether you should start thinking of "rationalist" and "effective altruist" as warning signs in the same way as you might think of some other terms that I won't specify for fear of irrelevant controversy, then once again you should be calibrating your outrage against how you feel about other groups.
For the avoidance of doubt, I should say that I don't know how the rate of sexual misconduct among rationalists / EAs / Silicon Valley rationalists in particular / ... compares with the rate in other groups, nor do I have a very good idea of how high it is in other similar groups. It could be that the rate among rationalists is exceptionally high (as the Bloomberg News article is clearly trying to make us think). It could be that it's comparable to the rate among, say, Silicon Valley software engineers and that that rate is horrifyingly high (as plenty of other news articles would have us think). It could be that actually rationalists aren't much different from any other group with a lot of geeky men in it, and that groups with a lot of geeky men in them are much less bad than journalists would have us believe. That last one is the way my prejudices lean ... but they would, wouldn't they?, so I wouldn't put much weight on them.
[EDITED to add:] Oh, another specific situation one could be in that's relevant here: If you are contemplating Reasons Why Rationalists Are So Bad (cf. the final paragraph quoted in the OP here, which offers an explanation for that), it is highly relevant whether rationalists are in fact unusually bad. If rationalists or EAs are just like whatever population they're mostly drawn from, then it doesn't make sense to look for explanations of their badness in rationalist/EA-specific causes like alleged tunnel vision about AI.
[EDITED again to add:] To whatever extent the EA community and/or the rationalist community claims to be better than others, of course it is fair to hold them to a higher standard, and take any failure to meet it as evidence against that claim. (Suppose it turns out that the rate of child sex abuse among Roman Catholic clergy is exactly the same as that in some reasonably chosen comparison group. Then you probably shouldn't see Roman Catholic clergy as super-bad, but you should take that as evidence against any claim that the Roman Catholic Church is the earthly manifestation of a divine being who is the source of all goodness and moral value, or that its clergy are particularly good people to look to for moral advice.) How far either EAs or rationalists can reasonably be held to be making such a claim seems like a complicated question.
Replies from: nim↑ comment by nim · 2023-03-08T23:10:57.911Z · LW(p) · GW(p)
I am a pessimist who works from the assumption that humans are globally a bit terrible. Thus, I don't consider the isolated data point of "humans in group x have been caught being terrible" to be particularly novel or useful.
Reporting that I would find useful would ultimately take the form "humans in group x trend toward differently terrible from humans in other groups", whether that's claiming that they're worse, differently bad, or better.
Whenever someone claims that a given group is better than most of society, the obvious next question is "better at being excellent to each other, or better at covering it up when they aren't?".
The isolated data point of "people in power are accused of using that power to harm others" is like... yes, and? That's kind of baseline for our species.
And as a potential victim, reporting on misconduct is only useful to me if it updates the way I take precautions against it, by pointing out that the misconduct in a given community is notably different from that in the world at large.
comment by Portia (Making_Philosophy_Better) · 2023-09-24T14:09:16.806Z · LW(p) · GW(p)
That article had me horrified. But I was hoping the reactions would point to empathy and a commitment to concrete improvement.
The opposite happened: the defensive and at times dismissive or demanding comments made it worse. It was the responses here and on the EA Forum that had me reassess EA-related groups as likely unsafe to work for.
This sounds like a systemic problem related to the way this community is structured, and the community response seems aimed not at fixing the problem but at justifying why it isn't getting fixed, abusing rationality to frame abuse as normal and inevitable.
comment by Templarrr (templarrr) · 2023-03-08T12:18:50.330Z · LW(p) · GW(p)
At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him.
Like... an actual evil amoral BBEG wizard? Is this something true rationalists now brag about?
Just because someone uses the rationality toolset doesn't make them a role model :(
↑ comment by habryka (habryka4) · 2023-03-08T18:21:13.883Z · LW(p) · GW(p)
I wouldn't trust the news articles to report this accurately. Separately, HPMOR was also not over back then, so it is plausible that Michael owned himself by bragging about this before the professor's real identity was revealed.
Replies from: gwern↑ comment by gwern · 2023-03-08T20:34:59.124Z · LW(p) · GW(p)
It didn't have to be revealed. That Quirrell was Voldemort was obvious almost from the first chapter introducing him (e.g. [LW · GW] already taken for granted in the earliest top-level discussion page in 2010), to the extent that fan debate over 'Quirrelmort' was mostly "he's so obviously Voldemort, just like in canon - but surely Yudkowsky would never make it that easy, so could he possibly be someone or something else? An impostor? Some sort of diary-like upload? A merger of Quirrell/Voldemort? Harry from the future?" Bragging about being the inspiration for a character so nakedly amoral, at best, that the defense in fan debates is "he's too evil to be Voldemort!" is not a great look no matter when in the series he was doing that bragging.
comment by Søren Elverlin (soren-elverlin-1) · 2023-03-07T20:12:58.942Z · LW(p) · GW(p)
This post may be more relevant on EAForum than here.
Replies from: ivy-mazzola↑ comment by Ivy Mazzola (ivy-mazzola) · 2023-03-08T03:41:02.680Z · LW(p) · GW(p)
As an EA, please don't try and pin this on us. The claims are more relevant to the rationality community than the EA community. On the other hand, if you think the piece is irrelevant or poorly written and doesn't belong anywhere, then be epistemically brave and say that. You are totally allowed to think and express that, but don't try and push it off on the EA Forum. If this piece doesn't belong here, it doesn't belong there.
(fwiw as things stand it's already crossposted there!)
Replies from: soren-elverlin-1, Kenny↑ comment by Søren Elverlin (soren-elverlin-1) · 2023-03-08T10:14:09.574Z · LW(p) · GW(p)
It was crossposted after I commented, and did find a better reception on the EA Forum.
I did not mean my comment to imply that the community here does not need to be less wrong. However, I do think that there is a difference between what is appropriate to post here and what is appropriate to post on the EA Forum.
I reject a norm that I ought to be epistemically brave and criticise the piece in any detail. It is totally appropriate to just downvote bad posts and move on. Writing a helpful meta-comment to the poster is a non-obligatory prosocial action.
Replies from: ivy-mazzola↑ comment by Ivy Mazzola (ivy-mazzola) · 2023-03-08T15:09:58.849Z · LW(p) · GW(p)
I reject a norm that I ought to be epistemically brave and criticise the piece in any detail.
Fine. It's fine to say nothing and just downvote, sure. Also fine to have said, "this looks like it should go on the EA Forum too".
It... did find a better reception on the EA Forum... I do think that there is a difference between what is appropriate to post here and what is appropriate to post on the EA Forum.
Sigh. I suspect you have missed my point, or you are missing an understanding of what the EA Forum really is. The EA Forum is where people can ask questions and discuss effective causes, interventions, and ways of being. The latter is similar to LW rationality but with a broader scope that extends into moral behavior. If something does not have the expected effect of increasing good done and solving problems, it should not be posted on the EA Forum. Yes, the EA Forum has community news, but so does LW, as LW also has community meetups and posts discussing community-building tactics. And LW is where some of the original discussion about the cases in the Bloomberg piece occurred (re: Vassar, Brent Dill, CFAR).
So it is not fine to imply that this clearly rat-focused piece goes there and not here. If it is appropriate for the EA Forum, it is appropriate for here. Period. Fine, it is getting more interest there. But just because it is getting a better reception on the EA Forum doesn't mean it should be. That is still not an argument that it should go there, if you think it doesn't belong on LW (having factored in the paragraph above this one) and you care about the epistemics of the EA Forum (which it would be very rude not to!).
Yes, the EA Forum has a problem with being more drama-hungry and shaming. But this is a problem, not something users on LW should lean into by throwing the forum under the bus. A piece that the forum might "go for" does not necessarily belong there: that the audience might go for it does not imply that it is in line with the actual purpose of the EA Forum, which is to increase effectiveness. For example: theoretically, I could post a salacious three-page story full of misquotes and false claims and lacking context and scale, and theoretically the EA Forum audience might make that salacious post of mine the most upvoted and most discussed post in all the forum's history. BUT that doesn't mean my three-page story should have gone on the EA Forum. Even if people might drink it up, it would not belong there, by definition of being a bad, irrelevant piece.
And again, if this post is true and relevant, it is more relevant to the rationality community. Flat out. It might belong in both places, but it definitely does not belong only on the EA Forum.
I apologize for my bluntness, but come on. Some of us are trying to keep the EA Forum from falling into chaos and undue self-flagellation, and trying to make it the best forum it can be. Maybe you never hung out there, but it used to be epic, and community drama has significantly worsened it. It's honestly hard to convey how frustrating it is to see a LW user try to throw the EA Forum under the bus and typecast it, when EA Forum users and mods are working hard to undo the over-focus on community that never should have happened there. And it honestly reads like you don't respect the EA Forum and are essentially treating it with snobbery: that the LW audience is either too epistemically sound for this piece, or just too good for it... but not the EA Forum audience, who should handle all the community gruntwork you don't like, I guess. (Newsflash: we don't like it either.)
A final, different point: have you considered that perhaps some of the reason the EA Forum can be typecast as so "community focused" is that users here on LW happily throw all the drama they aren't willing to handle head-on over to the EA Forum? To the extent this Bloomberg piece has to do with the rationality community (almost totally), the rationality community should 100% own it as part of its past and acknowledge it (which it looks like other LW users are somewhat doing), so the EAs are not unfairly forced to host and handle rationalist-caused problems (whether the resulting valid concerns or the resulting invalid PR disasters) again.
↑ comment by Søren Elverlin (soren-elverlin-1) · 2023-03-08T16:32:21.027Z · LW(p) · GW(p)
I strongly support your efforts to improve the EA Forum, and I can see your point that using upvotes as a proxy for appropriateness fails when there is a deliberate effort to push the forum in a better direction.
Replies from: ivy-mazzola↑ comment by Ivy Mazzola (ivy-mazzola) · 2023-03-08T17:39:49.403Z · LW(p) · GW(p)
Thank you so much for understanding, and for sticking with my lengthy comment long enough to understand.
↑ comment by Kenny · 2023-03-28T03:34:58.675Z · LW(p) · GW(p)
Please don't pin the actions of others on me!
Replies from: ivy-mazzola↑ comment by Ivy Mazzola (ivy-mazzola) · 2023-04-02T17:33:26.081Z · LW(p) · GW(p)
Wasn't doing that.
comment by MSRayne · 2023-03-08T01:23:36.855Z · LW(p) · GW(p)
I now see why people call LessWrong a cult. From inside it's not obvious, particularly since I interact with the community only through the internet and don't live anywhere near in-person rationalist communities, and don't work at any EA organization; but from this outside perspective, yeah, it kinda looks like a cult. The inevitable outcome of putting one idea above everything else is gradual trashing of everything else in life, including basic decency - particularly when heterosexual males are involved. Now, if rationalists truly aim for rationality, they will coordinate to fix the problem; but... I have my doubts about how likely that is.
Replies from: FireStormOOO↑ comment by FireStormOOO · 2023-03-09T20:26:12.134Z · LW(p) · GW(p)
I very much understand the frustration, but I'll ask, as someone also not directly adjacent to any of this: what would you have me and others like me do? There's no shortage of anger and frustration in any discussion like this, but I don't see any policy suggestions floating around that sound like they might work and aren't already being tried (or at least any suggestion that there are countermeasures that should be deployed and haven't been).
comment by Disgraceful55 · 2023-03-08T00:55:16.978Z · LW(p) · GW(p)
Seeing the comments here ignore and downplay the abuse in this article that includes this very forum gives me no hope for this community. Its evident rape culture is real here with disgusting comments downplaying these abuses. Its no wonder why people now look down on this community when you have 4Chan comments attempting to excuse sexual abuse and rape. From a PR perspective it won't look nice to be tied to here when more stories like these make the frontpage and further comments are made excusing this behavior. At least the EA forum is making an attempt to address this.