Decentralized Exclusion
post by jefftk (jkaufman) · 2023-03-13T15:50:01.710Z · LW · GW · 19 comments
I'm part of several communities that are relatively decentralized. For example, anyone can host a contra dance, rationality meetup, or effective altruism dinner. Some have central organizations (contra has CDSS, EA has CEA), but their influence is mostly informal. This structure has some benefits (lower overhead, more robustness), but one drawback is in handling bad behavior. If several people reported very bad experiences with someone at my local dance, we'd kick them out, but that wouldn't keep them from harming others at, say, any of the hundreds of other events run by other organizations.
I have seen cases, though, where someone was fully removed from a decentralized community. Looking at why these cases succeeded and others failed, I think it took:
1. Clear misbehavior, of a kind nearly everyone would agree was unacceptable if they looked into it.
2. Detailed public accusations, so people can look into it themselves if they doubt the consensus.
The combination of these means that you can have an initial burst of 'drama' in which lots of people learn about the situation and agree that the person should be kicked out, and then this can be maintained whenever they show up again. For example:
2016: Gleb Tsipursky from the EA community, for a range of shady things and a pattern of apologizing and then continuing (details [EA · GW]).
2017: Jordy Williams from contra dance, after accusations of grooming and rape (details).
2018: Brent Dill from the rationality community, after accusations of sexual abuse, gaslighting, and more (details).
Unfortunately this approach relies on people making public accusations, which is really hard. We should support people when they do and recognize their bravery, but people will often have valid reasons why they won't: fear of retaliation, unwillingness to face that level of public scrutiny, risk of legal action. In those cases it's still possible to make some progress privately, and we definitely need to try, but you keep bumping into the limitations of decentralization and defamation law.
To clarify why I think (1) and (2) are key for community-wide exclusion, let's look at two cases where this approach has been tried but was only partially successful. Within EA I've seen people apply it to Jacy Reese, but because the details of what he was apologizing for were never made clear, he's only mostly kicked out and people aren't sure how to view him.
The second case is Michael Vassar, in the rationality community (with some overlap into EA). He's been mostly expelled, but not as clearly as Tsipursky/Williams/Dill. His case had (1) and (2) but not entirely:
Some alleged misbehavior was clearly bad (sexual assault) but some was strange and hard to evaluate (inducing psychosis).
The sexual assault allegation was public, but it was in a hard-to-follow Twitter thread, included accusations against other people, and mixed accusations of many levels of severity (assault, distasteful behavior, being a bad partner). I don't fault the accuser for any of this, but as a "here's a link that explains the problems" resource it didn't work as well as the three more successful cases above.
The psychosis allegations were even harder to link, scattered across [LW(p) · GW(p)] multiple [LW(p) · GW(p)] threads [LW(p) · GW(p)] with people changing their minds.
Vassar was widely banned (REACH, SSC), but there were still holes: for example, he was invited to speak at an online SSC meetup. With the publication of the Bloomberg article on abuse in the rationalist community, however, which contained additional allegations and provided something clear to link to, I think we now have (1) and (2) and the expulsion will stick.
Disclosure: while I'm on the BIDA Safety Team I'm speaking only for myself. My wife is on the Community Health Team at CEA, but I haven't run this post by her and don't know her views.
19 comments
comment by Raemon · 2023-03-13T18:02:43.688Z · LW(p) · GW(p)
FYI this was sort of an edge-case for frontpage – the post itself is a pretty timeless point, articulated in a clear, gearsy way, which normally means frontpage. But the examples are largely from the rationalist/EA community, and we tend to leave internal-drama posts as personal. We left this one on personal-blog, but wanted to flag edge cases when they come up so people have some sense of where our line is.
comment by tailcalled · 2023-03-13T21:51:30.446Z · LW(p) · GW(p)
I have been getting interested in Vassarism recently, but this post makes me think that is a bad decision. I was otherwise just about to set up a meeting where he would teach me stuff while I pay him $250/h. This now seems like a bad idea.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-13T23:13:27.631Z · LW(p) · GW(p)
What is “Vassarism”, anyway? Could you (or anyone else) give an “executive summary”?
↑ comment by tailcalled · 2023-03-14T07:45:13.354Z · LW(p) · GW(p)
The one-sentence summary as I understand it would be "The forms of discourse considered to be civilized/nice by elites (in the education system, workplace, and politics) work by obscuring and suppressing information; we need to talk about this so we can stop it".
An example that I'm not personally familiar with but which seems broadly accepted by non-Vassarite rationalists would be how tech startups appeal to funders: https://www.lesswrong.com/posts/3JzndpGm4ZgQ4GT3S/parasitic-language-games-maintaining-ambiguity-to-hide [LW · GW] (this is not written by a Vassarite)
Another example that is not written by a Vassarite but which seems relevant: https://slatestarcodex.com/2017/06/26/conversation-deliberately-skirts-the-border-of-incomprehensibility/
Vassar didn't like my recent Substack post, but he did really like White Fragility. From what I've heard (not from a Vassarite), this blog post, which I also linked in my Substack post, contains the important part of White Fragility: https://thehumanist.com/magazine/july-august-2015/fierce-humanism/the-part-about-black-lives-mattering-where-white-people-shut-up-and-listen/
According to Michael Vassar, the core schtick of rationalism is that we want truth-promoting discourse. So the follow-up implication, if the Vassarites are right, is that Vassarism is the proper continuation of rationalism.
↑ comment by the gears to ascension (lahwran) · 2023-03-14T08:14:19.746Z · LW(p) · GW(p)
yeah I mean makes sense. my question is whether his style also obscures things. rotation can cut up a shape, if the lens isn't lined up to the types.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-14T12:01:22.041Z · LW(p) · GW(p)
I see, thank you.
↑ comment by tailcalled · 2023-03-14T12:57:58.509Z · LW(p) · GW(p)
I guess I should add, the Vassarites are especially concerned with this phenomenon when it acts to protect corrupt people in power, and a lot of the controversy between the Vassarites and rationalist institutions such as MIRI/CEA/CFAR is about the Vassarites arguing that those institutions are guilty of this too.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-03-14T13:19:50.568Z · LW(p) · GW(p)
Are there any institutions, according to Vassarites, that are not guilty of this?
↑ comment by tailcalled · 2023-03-14T14:59:49.384Z · LW(p) · GW(p)
Dunno, maybe Quakers. But the point is not that rationalism is especially egregious about how much it does this, but rather that the promise of rationalism is to do better.
(And! Some of the key rationalist concerns are bottlenecked on information-suppression. Like, a lot of people deploy these information-suppression strategies against AI x-risk.)
comment by Dagon · 2023-03-13T18:12:02.422Z · LW(p) · GW(p)
It's disturbing to me that these examples are "community" problems rather than "individual" problems, actionable in a legal framework that applies to most of civil society. Why is it OK to keep someone out of all Contra groups but not worry too much if they switch to Square? I get that each group must prune its garden, and has the right to decide who to include/exclude, as well as a right and duty to share some information (some clubs require references or introductions from other clubs; most modern ones do not).
This is similar to the Mastodon federation problem - local servers are expected to have "community norms", but almost all of the actual value is federated across servers, making local server choice kind of irrelevant.
↑ comment by Viliam · 2023-03-13T20:22:40.437Z · LW(p) · GW(p)
It's disturbing to me that these examples are "community" problems rather than "individual" problems, actionable in a legal framework that applies to most of civil society. Why is it OK to keep someone out of all Contra groups but not worry too much if they switch to Square?
The problem is that someone has to enforce the bans. They are not going to enforce themselves.
It is a good idea to create a ban list, and share it with your friends, but the problem is that this does not scale well. Your friends may trust you, but what about the friends of your friends, etc.?
Would you exclude people from your events just because some stranger put their names on the list? If yes, this has a potential for abuse. Someone in the chain will be an asshole, and will put people on the list for wrong reasons (personal issues, because they have different political opinions, whatever). If no, then by the same logic, reasonable strangers will refuse to use your list.
It is easier if you can link evidence from the list, but not all evidence can be shared. What if the victim does not want to press charges? Or it is something not strictly illegal, just super annoying?
↑ comment by Dagon · 2023-03-13T21:04:15.736Z · LW(p) · GW(p)
I should clarify that "disturbing to me" is because our societal and legal systems haven't kept up with the decentralized large-scale nature of communities, not because I think the communities involved don't care. It really sucks that laws against rape, fraud, and unsafe drug pushing are unenforceable, and it's left to individuals to avoid predators as best they can rather than actually using the state's monopoly on violence to deter/remove the perpetrators.
Sure, there's always a huge gap between what's officially actionable and what's important to address informally. That sucks.
↑ comment by Viliam · 2023-03-14T09:45:50.941Z · LW(p) · GW(p)
I spent some time trying to think about a solution, but all solutions I imagined were obviously wrong, and I am not sure there exists a good one.
Problem 1: Whatever system you design, someone needs to put information in it. That person could be a liar. You cannot fully solve it by a majority vote or whatever, because some information is by its nature only known to a few people, and that information may be critical in your evaluation of someone's character.
For example: Two people alone in a room. One claims to have been raped by the other. The other either denies that it happened, or claims that it was consensual. Both enter their versions of the event into the database. What happens next? One possibility is to ignore the information for now, and wait for more data (and maybe one day conclude "N people independently claim that X did something to them privately, so it probably happened"). But in the meantime, you have a serious accusation, potentially libelous, in your database -- are you going to share it publicly?
Problem 2: People can punish others in the real world for entering (true) data into the system. In the example above, the accused could sue for libel (even if the accusation is true, but unprovable in court). People providing unpleasant information about high-status people can be punished socially. People can punish those who report on their friends or on their political allies.
If you allow anonymous accusations, this again incentivizes false accusations against one's enemies. (Also, some accusations cannot in principle be made anonymously, because if you say what happened, when and where, the identity of the person can be figured out.)
A possible solution against libel is to provide an unspecific accusation, something like "I say that X is seriously a bad person and should be avoided, but I refuse to provide any more details; you have to either trust my judgment, or take the risk". But this would work only among sufficiently smart and honest people, because I would expect instant retaliation (if you flag me, I have nothing to lose by flagging you in turn, especially if the social norm is that I do not have to explain), the bad actor providing their own version of what "actually happened", and bad actors in general trying to convince gullible people to also flag their enemies. (Generally, if gullible people use the system, it is hopeless.) Flagging your boss would still be a dangerous move.
At the very minimum, a good prestige-tracking system would require some basic rationality training of all participants. Like to explain the difference between "I have observed a behavior X" and "my friend told me about X, and I absolutely trust my friend", between "X actually helped me" and "X said a lot of nice words, but that was all", between "dunno, X seems weird, but never did anything bad to me" and "X did many controversial things, but always had a good excuse", etc. If people do not use the same flags to express the same things, the entire system collapses into "a generic like" and "a generic dislike", with social consequences for voting differently from a majority.
So maybe it should not be individuals making entries in the database, but communities. Such as local LW meetups. "X is excommunicated from our group; no more details are publicly provided". This provides some level of deniability: X cannot sue the group; if the group informally provides information about X, X doesn't know which member did it. On the other hand, the list is maintained by a group, so an individual cannot simply add there their personal enemies. Just distinguish between "X is on our banlist" and "X is banned from our activities, because they are on a banlist of a group we trust", where each group makes an individual decision about which groups to trust.
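A minimal sketch of how this group-level scheme could look, in Python. The group names, the data layout, and the deliberately shallow trust rule are my own illustrative assumptions, not an existing system:

```python
# Illustrative sketch only: hypothetical groups and a simple trust rule.
from dataclasses import dataclass, field

@dataclass
class Group:
    name: str
    banlist: set = field(default_factory=set)    # people this group banned directly
    trusted: list = field(default_factory=list)  # groups whose banlists we honor

    def excludes(self, person: str) -> bool:
        # Banned here, or banned by a group we explicitly trust.
        # Trust is one level deep on purpose: we do NOT inherit the
        # trust decisions of the groups we trust, so one careless
        # group far down a chain can't poison everyone's list.
        if person in self.banlist:
            return True
        return any(person in g.banlist for g in self.trusted)

nyc = Group("NYC meetup", banlist={"X"})
boston = Group("Boston meetup", trusted=[nyc])

print(boston.excludes("X"))  # True: banned via a trusted group's list
print(boston.excludes("Y"))  # False: on no relevant list
```

Keeping trust non-transitive is the point of the "each group makes an individual decision about which groups to trust" rule: a bad entry only spreads as far as groups that directly chose to trust its source.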
↑ comment by Darmani · 2023-03-15T06:35:34.686Z · LW(p) · GW(p)
A possible solution against libel is to provide an unspecific accusation, something like "I say that X is seriously a bad person and should be avoided, but I refuse to provide any more details; you have to either trust my judgment, or take the risk"
FYI, this doesn't actually work. https://www.virginiadefamationlawyer.com/implied-undisclosed-facts-as-basis-for-defamation-claim/
↑ comment by Viliam · 2023-03-15T09:45:05.356Z · LW(p) · GW(p)
Damn. Okay, what about "person X is banned from our activities, we do not explain why"?
↑ comment by Darmani · 2023-03-18T02:49:00.741Z · LW(p) · GW(p)
You're probably safe so long as you restrict distribution to the minimum group with an interest. There is conditional privilege if the sender has a shared interest with the recipient. It can be lost through overpublication, malice, or reliance on rumors.
comment by ChristianKl · 2023-03-13T21:34:57.921Z · LW(p) · GW(p)
If you take the Twitter thread making allegations against Vassar, and against other people as well, the obvious question is why no actions were taken against the other people who stand accused while actions were taken against Vassar.
If you would ask Vassar he would say something like: "It's because I criticized the EA and the rationality community and people wanted to get rid of me."
Anna Salamon's comment [LW(p) · GW(p)] seems like an admission that this is central to what was going on. In a decentralized environment, that makes it very hard to know whether to copy the decisions of others to ban people, especially if it comes without public reasoning.
Besides that Twitter thread you linked, there are also the accusations that Brent Dill made against Vassar and others (which I currently don't believe to be true). Interestingly, at the SSC online meetup Vassar hinted that those allegations are the reason Brent Dill was thrown out of the community.
There are also allegations about Vassar lying, of which the most consequential one was told to me under confidentiality. I think it's a failure of EA institutions that they don't share that one.
↑ comment by jefftk (jkaufman) · 2023-03-14T00:02:30.979Z · LW(p) · GW(p)
If you take the Twitter thread making allegations against Vassar, and against other people as well, the obvious question is why no actions were taken against the other people who stand accused while actions were taken against Vassar.
Are there any allegations in that thread against other people that you'd consider assault?