ialdabaoth is banned

post by Vaniver · 2019-12-13T06:34:41.756Z · LW · GW · 68 comments

Contents

  Some background context:
  Some frameworks and reasoning:
  Plan for including content of banned users
68 comments

ialdabaoth [LW · GW] is banned from LessWrong, because I think he is manipulative in ways that will predictably make the epistemic environment worse. This ban is unusual in several respects: it relies somewhat heavily on evidence from in-person interactions and material posted to other sites as well as posts on LessWrong, and the user in question has been active on the site for a long time. While this decision was made in the context of other accusations, I think it can be justified solely on epistemic concerns. I also explain some of the reasons for the delay below.

However, in the interests of fairness, and because we believe ideas from questionable sources can be valid, we’ll make edits he suggests to his post Affordance Widths [LW · GW] so that it can fully participate in the 2018 Review. My hope is that announcing this now will cause the discussion on that post to be focused solely on the post rather than on social coordination about whether he should or should not be banned. (Commentary on this decision should happen here.)

Some background context:

Back in September of 2018, I posted this comment [LW(p) · GW(p)] about a discussion involving allegations of serious misconduct, and said LW was not the place to conduct investigations, but that it would be appropriate to link to findings once the investigation concluded.

As far as I'm aware, he has made no claims of either guilt or innocence, and went into exile, ceasing to post or comment on LessWrong. To the best of my knowledge, none of the panels that conducted investigations posted findings, primarily for reasons of legal liability, and so there was never an obvious time to publicly and transparently settle his status (until now).

One of the primary benefits of courts is that they allow for a cognitive specialization of labor, where a small number of people can carefully collect information, come to a considered judgment, and then broadcast that judgment. Though a number of groups have run their own investigations and made calls about whether ialdabaoth is welcome in their spaces, generally choosing no, there has been no transparent and accountable process that has made a public pronouncement on the allegations brought against him.

About six months ago, ialdabaoth messaged Raemon, asking if he was banned. Raemon replied that the team was considering banning him but had multiple conflicting lines of thought that hadn’t been worked through yet, and that if he commented Raemon or someone else would respond with another comment making that state of affairs transparent.

I think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations. This ban is not intended to provide a ruling either way on other allegations, as we have not conducted any investigation of our own into those allegations, nor do we plan to, nor do we think we have the necessary resources for such work.

It does seem important to point out that some of the standards used to assess that risk stem from processing what happened in the wake of the allegations. A brief characterization is that I think the community started to take more seriously not just the question of "will I be better off adopting this idea?" but also the question "will this idea mislead someone else, or does it seem designed to?". If I had my current standards in 2017, I think they would have sufficed to ban ialdabaoth then, or at least do more to identify the need for arguing against the misleading parts of his ideas.

This processing was gradual, and ialdabaoth going into exile meant there wasn't any time pressure. I think it's somewhat awkward that we became comfortable with the status quo and didn't notice when a month and then a year had passed without us making this state transparent, or having the discussion necessary to prepare this post. However, with one of his posts nominated for the 2018 Review, this post became urgent as well as important.

Some frameworks and reasoning:

In moderating LessWrong, I don’t want to attempt to police the whole world or even the whole internet. If someone comes to LessWrong with an accusation that a LW user mistreated them someplace else, the response is generally “handle it there instead of here.” This is part of a desire to keep LW free of politics and factionalism, and instead focused on the development of shared tools and culture, as well as a desire to have issues settled in contexts that have the necessary information. That said, it also seems to me like sensible Bayesianism to keep evidence from the rest of the world in mind when judging behavior on the site, and to pay more attention to users who we expect to be problematic in one way or another.

But what does it mean that ideas from questionable sources can be valid? Argument screens off authority [LW · GW], but authority (positive or negative) has some effect. Consider these cases:

Suppose you are running a physics journal, and a convicted murderer sends you a paper draft; you might feel some disgust at handling the paper, but it seems to me that the correct thing to do is handle the paper like any other, and accept it if the science checks out and reject it if it doesn’t. If your primary goal is getting the best physics, blinded review seems useful; you don’t care very much whether or not the author is violent, and you care a lot about whether the thing they said was true. If, instead, the person was convicted of manufacturing data or the other sorts of scientific misconduct that are difficult to detect with peer review, it seems justified to simply reject the submission. You also might not want them to give a talk at your conference.

Suppose instead you are running a trading fund, and someone previously convicted of fraud sends you an idea for a new financial instrument. Here, it seems like you should be much more suspicious, not just of the idea but also of your ability to successfully notice the trap if there is one. It seems relevant now to check both whether the idea is true and whether or not it is manipulative. Rather than just performing a process that catches simple mistakes or omissions, one needs to perform a process that's robust to active attempts to mislead the judging process.

Suppose instead you’re running an entertainment business like a sports team, and someone affiliated with the team does something unpopular. Since the primary goal you’re maximizing is not anything epistemic, but instead how popular you are, it seems efficient to primarily act based on how the affiliation affects your reputation.

I think the middle case is closest to the situation we’re in now, for reasons like those discussed in comments by jimrandomh [LW(p) · GW(p)] and by Zack_M_Davis [LW(p) · GW(p)]. Much of ialdabaoth's output is claims about social dynamics and reasoning systems that seem, at least in part, designed to manipulate the reader, either by making them more vulnerable to predation or more likely to ignore him / otherwise give him room to operate.

While we can’t totally ignore reputation costs, I currently think LessWrong can and should treat reputation costs as much less important than epistemic costs. I don’t think we should ban people simply for having bad reputations or committing non-epistemic crimes, but I think we should act vigorously to maintain a healthy epistemic environment, which means both being open and having an active immune system. This, of course, is not meant to be a commentary on how in-person gatherings should manage who is and isn't welcome, as the dynamics of physical meetups and social communities are quite different from those of websites. When the two intersect, we do take seriously our duty of care towards our users and people in general.

Plan for including content of banned users

In general, LessWrong does not remove the posts or comments of banned users, with the exception of spam. It seems worth sharing our rough plan of what to do if a post by a banned user passes through an annual review, but it seems to me like the standard mechanisms we have in place for the review will handle this possibility gracefully.

As with all posts, the post will only be included with the consent of the author. If a post is controversial for any reason, we may decide that inclusion requires some sort of editor's commentary or inclusion of user comments or reviews, which would be shared with the author before they make their decision to consent or not to inclusion.

68 comments

Comments sorted by top scores.

comment by Zack_M_Davis · 2019-12-13T08:05:44.406Z · LW(p) · GW(p)

We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban

I'm having trouble convincing myself that this is the real reason [LW · GW]. Imagine an alternate universe where their local analogue of Ialdabaoth was just as manipulative, wrote almost all the same things about status, power, social reality, &c., but was definitely not guilty of any sex crimes for reasons that had nothing to do with his moral character. (Perhaps imagine him having some kind of exotic sexual orientation that would be satisfied without human contact, like a statue fetish.) Would we really ban such a person on the grounds of manipulative epistemic tactics?

Your "fraudster designs a financial instrument" scenario explains why one should definitely be suspicious of the value of such a user's contributions—but frankly, I'm suspicious of a lot of people in that way: how do you decide who to ban?

It occurs to me that the "reputation vs. merit" framing completely fails to address the reasons many would assign such bad reputation. (The function of bad reputation is to track things that are actually bad!) Maybe we just have a "moral taste" for not working with people who (are believed to) have committed sufficiently bad crimes, and we're willing to pay the cost of forgoing positive contributions from them?

If that's actually the psychology driving the decision, it would be better to fess up to it rather than making up a fake reason [LW · GW]. Better for the physics journal editor to honestly say, "Look, I just don't want to accept a paper from a murderer, okay?", rather than claiming to be impartial and then motivatedly subjecting the paper to whatever isolated demands for rigor were necessary to achieve the appearance of having rejected it on the merits.

Replies from: Vaniver, quanticle, jimrandomh
comment by Vaniver · 2019-12-13T09:16:04.618Z · LW(p) · GW(p)

Would we really ban such a person on the grounds of manipulative epistemic tactics?

One of the big updates that I made over the course of this affair was the value of having a community-wide immune system, rather than being content with not getting sick myself. I think this [LW(p) · GW(p)] is an example of what that sort of update looks like. Michael isn't banned from LessWrong, but also hasn't posted here in a year [LW · GW] in a way that makes that question seem somewhat irrelevant. (Appropriately enough, his last comment was about suggesting that someone who is losing mental ground to their model of HPMOR!Quirrell talk to him to get the decision procedures from the source.) [Edit: I forgot about his more recent account [LW · GW], which is still fairly inactive.] [Edit2: I think it was probably a mistake to write the bits of this paragraph after the first sentence, because the example is unclear and mentioning users in the context of bans can have a chilling effect that I didn't want to have here.]

So far, it seems like lots of things have been of the form: person (or group) has a mixed reputation, but is widely held in low regard (without the extent of that opinion being common knowledge), the generator of the low regard causes an explosion that makes them an outcast, and then after the fact people go "well, we saw that coming individually but didn't know how to do anything about it socially." It would be nice if we knew how to do things about it socially; when this happened a year ago, I made a list of "the next ialdabaoth" and one of the top 3 of that list is at the center of current community drama.

[This seems especially important given that normal coordination mechanisms of this form--gossip, picking up on who's 'creepy' and who isn't--rely on skills many rationalists don't have, and sometimes have deliberately decided not to acquire.]

Replies from: Zack_M_Davis, Zack_M_Davis, Zack_M_Davis, riceissa
comment by Zack_M_Davis · 2019-12-14T00:00:30.410Z · LW(p) · GW(p)

the value of having a community-wide immune system, rather than being content with not getting sick myself

I'd be very interested if you could elaborate on what observations make you think "the community" is doing the kind of information-processing that would result in "immune system" norms actually building accurate maps, rather than accelerating our decline into a cult [LW · GW].

It seems to me that what actually helped build common knowledge in the Ialdabaoth case was the victims posting their specific stories online, serving a role analogous to transcripts of witness testimony in court.[1]

In contrast, the conjunction of the "immune system" metaphor and your mention of Anna's comment about Michael makes me imagine social norms that make it easier for high-ranking community members to silence potential rivals or whistleblowers by declaring them to be bad thinkers and therefore not worth listening to.

That is, I perceive a huge difference between, "Witnesses A, B, and C testified that X committed a serious crime and no exculpatory evidence has emerged, therefore I'm joining the coalition for ostracizing X" (analogous to a court) vs. "The mods declared that X uses manipulative epistemic tactics, therefore I'm going to copy that 'antibody' and not listen to anything X says" (analogous to an immune system).

But, maybe I'm completely misunderstanding what you meant by "immune system"? It would be great if you could clarify what you're thinking here.

It would certainly be nice to have a distributed intellectual authority I could trust. I can imagine that such a thing could exist. But painful personal experience has me quite convinced that, under present conditions, there really is just no substitute for thinking for yourself ("not getting sick [one]self").


  1. Thanks to Michael Vassar for teaching me about the historical importance of courts! ↩︎

Replies from: Vaniver
comment by Vaniver · 2019-12-14T02:17:39.107Z · LW(p) · GW(p)

It seems to me that what actually helped build common knowledge in the Ialdabaoth case was the victims posting their specific stories online, serving a role analogous to transcripts of witness testimony in court.[1] [LW · GW]

I think the effects of that (on my beliefs, at least) were indirect. The accusations themselves didn't move me very much, but caused a number of private and semi-public info-sharing conversations that did move me substantially.

That is, I perceive a huge difference between, "Witnesses A, B, and C testified that X committed a serious crime and no exculpatory evidence has emerged, therefore I'm joining the coalition for ostracizing X" (analogous to a court)

I do want to stress the ways in which the exile of Ialdabaoth does not match my standards for courts (altho I agree it is analogous). The main issue, in my mind at least, is that no one had the clear mandate within the community to 'try the case', and those that stepped forward didn't have broader social recognition of even their limited mandate. (No one could sue a judge or jury for libel if they found ialdabaoth guilty, but the panels that gathered evidence could be sued for libel for publishing their views on ialdabaoth.) And this is before we get to the way in which 'the case' was tried in multiple places with varying levels of buy-in from the parties involved.

But, maybe I'm completely misunderstanding what you meant by "immune system"? It would be great if you could clarify what you're thinking here.

The thing that's missing, in my mind, is the way in which antibodies get developed and amplified. That is, I'm less concerned with people deciding whether or not to copy a view, and more concerned with the view being put in public in the first place. My sense is that, by default, people rarely publicly share their worries about other people, and this gets worse instead of better if they suspect the person in question is adversarial. (If I think Bob is doing shady things, including silencing his enemies, this makes it harder to ask people what they think of Bob, whereas if Carol is generally incompetent and annoying, this makes it easier to ask people what they think of Carol.)

If you suspect there's adversarial optimization going on, the default strategies seem to be ignoring it and hoping it goes away, or letting it develop until it destroys itself, and the exceptional case is one where active countermeasures are taken. This is for a handful of reasons, one of which is that attempting to take such active countermeasures is generally opposed by default, unless clear authority or responsibility has been established beforehand.

Replies from: ChristianKl
comment by ChristianKl · 2019-12-18T14:40:12.986Z · LW(p) · GW(p)

When it comes to putting views in public, it seems to me like posts like the OP or Anna's post about Vassar do note concerns, but they leave the actual meat of the issue unsaid.

Michael Vassar, for example, spent a good portion of this year in Berlin, and I had decisions to make about to what extent I wanted to try to integrate him into the local community or avoid doing that.

Without the links in the comments I wouldn't have had a good case for making decisions should ialdabaoth appear in Berlin.

I don't know where ialdabaoth went into exile, but there's a good chance that he will interact with other local rationality groups who will have to make decisions and who would benefit from getting this information.

comment by Zack_M_Davis · 2019-12-13T20:37:49.395Z · LW(p) · GW(p)

I think this [LW(p) · GW(p)] is an example of what that sort of update looks like. Michael isn't banned from LessWrong

Interesting that you should mention this. I've hugely benefited from collaborating with Michael recently. I think the linked comment is terrible, and I've argued with Anna about it several times. I had started drafting a public reply several months ago, but I had set it aside because (a) it's incredibly emotionally painful to write because I simultaneously owe eternal life-debts of eternal loyalty to both Michael and Anna,[1] and (b) it isn't even the most important incredibly-emotionally-painful high-community-drama-content piece of writing I have to do. The fact that you seem to take it this seriously suggests that I should prioritize finishing and posting my reply, though I must ask for your patience due to (b).


  1. Like a robot in an Isaac Asimov story forced to choose between injuring a human being or, through inaction, allowing a human being to come to harm, I briefly worried that my behavior isn't even well-defined in the event of a Michael–Anna conflict. (For the same reason, I assume it's impossible to take more than one Unbreakable Vow in the world of Harry Potter and the Methods.) Then I remembered that disagreeing with someone's blog comment isn't an expression of disloyalty. If I were to write a terrible blog comment (and I've written many), then I should be grateful if Anna were to take the time to explain what she thinks I got wrong. ↩︎

comment by Zack_M_Davis · 2019-12-15T17:59:10.269Z · LW(p) · GW(p)

You know, this is a really lame cheap shot—

(Appropriately enough, his last comment was about suggesting that someone who is losing mental ground to their model of HPMOR!Quirrell talk to him to get the decision procedures from the source.)

If we're going to play this frankly puerile game of bringing up who partially inspired what fictional characters, do I at least get to bring up "The Sword of Good"?

The Lord of Dark stared at Hirou as though he were the crazy one. "The Choice between Good and Bad," said the Lord of Dark in a slow, careful voice, as though explaining something to a child, "is not a matter of saying 'Good!' It is about deciding which is which."

Dolf uttered a single bark of laughter. "You're mad!" his voice boomed. "Can you truly not know that you are evil? You, the Lord of Dark?"

"Names," said the Lord of Dark quietly.

[...]

Hirou staggered, and was distantly aware of the Lord of Dark catching him as he fell, to lay him gently on the ground.

In a whisper, Hirou said "Thank you—" and paused.

"My name is Vhazhar."

"You didn't trust yourself," Hirou whispered. "That's why you had to touch the Sword of Good."

Hirou felt Vhazhar's nod, more than seeing it.

The air was darkening, or rather Hirou's vision was darkening, but there was something terribly important left to say. "The Sword only tests good intentions," Hirou whispered. "It doesn't guide your steps. That which empowers a hero does not make us wise—desperation strengthens your hand, but it strikes with equal force in any direction—"

"I'll be careful," said the Lord of Dark, the one who had mastered and turned back the darkness. "I won't trust myself."

"You are—" Hirou murmured. "Than me, you are—"

I should have known. I should have known from the beginning. I was raised in another world. A world where royal blood is not a license to rule, a world whose wizards do more than sneer from their high towers, a world where life is not so cheap, where justice does not come as a knife in the night, a world where we know that the texture of a race's skin shouldn't matter—

And yet for you, born in this world, to question what others took for granted; for you, without ever touching the Sword, to hear the scream that had to be stopped at all costs—

"I don't trust you either," Hirou whispered, "but I don't expect there's anyone better," and he closed his eyes until the end of the world.

Replies from: philh
comment by philh · 2019-12-17T15:35:28.861Z · LW(p) · GW(p)

I confess I don't know what you're trying to say here. I have a few vague hypotheses, but none that stand out as particularly likely based on either the quoted text or the context. (E.g. one of them is "remember that something that looks/is called evil, may not be"; but only a small part of the text deals with that, and even if you'd said it explicitly I wouldn't know why you'd said it. The rest are all on about that level.)

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-12-18T04:42:29.621Z · LW(p) · GW(p)

Vaniver mentioned that Michael Vassar was one of the partial inspirations for a supervillain in one of Eliezer Yudkowsky's works of fiction. I'm saying that, firstly, I don't think that's germane in a discussion of moderation policies that aspires to impartiality, even as a playful "Appropriately enough [...]" parenthetical. But secondly, if such things are somehow considered to be relevant, then I want to note that Michael was also the explicit namesake of a morally-good fictional character ("Vhazhar") in another one of Yudkowsky's stories.

The fact that the latter story is also about the importance of judging things on their true merits rather than being misled by shallow pattern-matching (e.g., figuring that a "Lord of Dark" must be evil, or using someone's association with a fictional character [LW · GW] to support the idea that they might be worth banning) made it seem worth quoting at length.

comment by riceissa · 2019-12-13T09:25:12.680Z · LW(p) · GW(p)

Michael isn't banned from LessWrong, but also hasn't posted here in 5 years

He seems to have a different account [LW · GW] with more recent contributions.

Replies from: Vaniver
comment by Vaniver · 2019-12-13T18:46:54.075Z · LW(p) · GW(p)

Thanks, fixed.

comment by quanticle · 2019-12-13T08:49:38.522Z · LW(p) · GW(p)

Imagine an alternate universe where their local analogue of Ialdabaoth was just as manipulative, wrote almost all the same things about status, power, social reality, &c., but was definitely not guilty of any sex crimes for reasons that had nothing to do with his moral character.

The post is arguing that the things ialdabaoth writes regarding social dynamics, power, manipulation, etc. are the result of their presumed guilt. In other words, if ialdabaoth had a different fetish, they wouldn't write the things that they do about social reality, etc. and we wouldn't even be having this discussion in the first place. The argument, which I'm not sure I endorse, is that a world in which ialdabaoth writes exactly what he writes without being guilty is as logically coherent as a world in which matches don't light, but cells still use ATP.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-12-24T06:24:55.036Z · LW(p) · GW(p)

I see the argument, but I don't buy it empirically. Understanding social dynamics, power, manipulation, &c. is useful for acquiring the funds to buy the best statues.

comment by jimrandomh · 2019-12-13T20:06:13.673Z · LW(p) · GW(p)

I participated in the LW team discussion about whether to ban, but not in the details of this announcement. I agree that, in your hypothetical, we probably wouldn'tve banned. In another hypothetical where he were accused of sex crimes but everyone was fine with his epistemic tactics, we probably wouldn'tve banned either.

Replies from: frontier64
comment by frontier64 · 2019-12-14T06:04:50.142Z · LW(p) · GW(p)

In another hypothetical where he were accused of sex crimes but everyone was fine with his epistemic tactics, we probably wouldn'tve banned either

It seems that the issues with ialdabaoth's argumentation only appear in comments made after the allegations related to other behaviors of his. Therefore the argument that he is being banned for his epistemic tactics rather than his misconduct is just moving the issue up one step:

Ialdabaoth is being banned for his poor epistemic tactics, not his conduct.
But his epistemic tactics are manipulative because of his conduct. 

So the hypothetical where he was accused of sex crimes[1] and people didn't mind his epistemic tactics isn't a hypothetical. It was actuality. What we've observed is that after the accusations, certain people went from being fine with his epistemic tactics to not fine with them.


  1. Which I don't believe is actually true. I have read the relevant literature and no post makes an accusation that a crime has been committed, only manipulative sexual behavior. I will hedge this statement by acknowledging that I do not know the full situation. ↩︎

Replies from: orthonormal, Vaniver
comment by orthonormal · 2019-12-15T08:33:54.746Z · LW(p) · GW(p)

I mistrusted ialdabaoth from the start, though it's worth saying that I judged him to be a dangerous manipulator and probable abuser from in-person interactions long before the accusations came out, so it's not just his LessWrong content.

In any case, I found it impossible to argue on his own terms (not because he'd make decent counterarguments, but because he'd try to corrupt the frame of the conversation instead of making counterarguments). So instead I did things like write this post [LW · GW] as a direct rebuttal to something he'd written (maybe on LessWrong, maybe on Facebook) about how honesty and consent were fake concepts used to disguise the power differentials of popularity (which ultimately culminates in an implied "high status people sometimes get away with bad behavior X, so don't condemn me when I do X".)

comment by Vaniver · 2019-12-14T22:14:44.096Z · LW(p) · GW(p)

I agree with this point, and it's what originally motivated this paragraph of the OP:

It does seem important to point out that some of the standards used to assess that risk stem from processing what happened in the wake of the allegations. A brief characterization is that I think the community started to take more seriously not just the question of "will I be better off adopting this idea?" but also the question "will this idea mislead someone else, or does it seem designed to?". If I had my current standards in 2017, I think they would have sufficed to ban ialdabaoth then, or at least do more to identify the need for arguing against the misleading parts of his ideas.

One nonobvious point from this is that 2017 is well before the accusations were made, but a point at which I think there was sufficient community unease that a consensus could have been built if we had the systems to build that consensus absent accusations. 

Replies from: Benquo
comment by Benquo · 2019-12-15T14:15:05.774Z · LW(p) · GW(p)

OK but what's actually being done is a one-off ban of someone with multiple credible public rape allegations against them. The specific policy goal of developing better immune responses to epistemic corruption is just not relevant to that and I don't see how anyone on the mod team is doing anything best explained by an attempt to solve that problem.

Replies from: Vaniver
comment by Vaniver · 2019-12-15T17:40:34.234Z · LW(p) · GW(p)

OK but what's actually being done is a one-off ban of someone with multiple credible public rape allegations against them.

Also, what's actually being done is a one-off ban of a user whose name starts with 'i.'  That is, yes, I agree with the facts you present, and contest the claim of relevance / the act of presenting an interpretation as if it were a brute fact.

There is a symmetry to the situation, of course, where I am reporting what I believe my intentions are / the interpretation I was operating under while I made the decision, but no introspective access is perfect, and perhaps there are counterfactuals where our models predict different things and it would have gone the way you predict instead of the way I predict. Even so, I think it would be a mistake to not have the stated motivation as a hypothesis in your model to update towards or against as time goes on.

The specific policy goal of developing better immune responses to epistemic corruption is just not relevant to that

According to me, the relevance is that this action was taken to further that policy goal; I agree it is only weak evidence [LW(p) · GW(p)] that we will succeed at that goal or even successfully implement policies that work towards that goal. I view this as a declaration of intent, not success, and specifically the intent that "next time, we will act against people who are highly manipulative and deceitful before they have clear victims" instead of the more achievable but less useful "once there's consensus you committed crimes, not posting on LW is part of your punishment".

comment by Tenoke · 2019-12-13T08:54:35.201Z · LW(p) · GW(p)

I read this post where you keep claiming you are banning him for 'epistemic concerns' but then link to 0 examples and mostly talk about some unrelated real-life thing which you also give 0 real explanation for.

The comments here mention a sex crime, but the OP doesn't. If that's what happened, why vaguebook, stay silent for a year, and lie that the ban's for 'epistemic concerns'? Who else have you banned for 'epistemic concerns' - nobody?

Honestly, after reading everything here I do have major concerns about ialdabaoth's character, but the main epistemic concerns I have are about OP presenting this dishonestly after a year of silence.

Replies from: clone of saturn, Vaniver
comment by clone of saturn · 2019-12-14T00:39:15.307Z · LW(p) · GW(p)

My understanding is that the epistemic concern is "after writing the affordance widths post, he would tell young women he needed to do BDSM stuff they weren't comfortable with in order to stay within his affordance width." And similar things for some of his other posts. I'm not sure why the OP was so vague about this.

Edit: and he also managed to fool members of CFAR with a similar line.

We believe that Brent is fundamentally oriented towards helping people grow to be the best versions of themselves. In this way he is aligned with CFAR’s goals and strategy and should be seen as an ally.

In particular, Brent is quite good at breaking out of standard social frames and making use of unconventional techniques and strategies. This includes things that have Chesterton’s fences attached, such as drug use, weird storytelling, etc. A lot of his aesthetic is dark, and this sometimes makes him come across as evil or machiavellian.

Brent also embodies a rare kind of agency and sense of heroic responsibility. This has caused him to take the lead in certain events and be an important community hub and driver. The flip side of this is that because Brent is deeply insecure, he has to constantly fight urges to seize power and protect himself. It often takes costly signalling for him to trust that someone is an ally, and even then it’s shaky.

Brent is a controversial figure, and disliked by many. This has led to him being attacked by many and held to a higher standard than most. In these ways his feelings of insecurity are justified. He also has had a hard life, including a traumatic childhood. Much of the reason people don’t like him comes from a kind of intuition or aesthetic feeling, rather than his actions per se.

Replies from: Viliam
comment by Viliam · 2019-12-16T01:15:36.269Z · LW(p) · GW(p)

Reading the ACDC conclusion, it feels like:

"Everything bad that happened as a result of Brent's actions is someone else's fault. He only wants the best things to happen, but to allow that, everyone else needs to trust him first. Because bad people not trusting him is one of those things that make him do the things he regrets. Seriously, stop pointing out how Brent's actions hurt people; don't you see the pain this causes him?"

I knew a guy who did things similar to what Brent is accused of, and his PR was pretty similar. He also had a blog with complaints about his suffering and the unfairness of the world. None of his written thoughts hinted at the possibility of him changing his own behavior; it was always about how other people should accommodate him better.

My opinion on the articles is that they are a "fruit of the poisonous mind". No matter how true or insightful, the text is optimized for some purpose. If you believe there is something valuable there, the only safe way is to throw the text away and try expressing the valuable idea using your own words, your own metaphors, and your own examples, as if the original text had never existed. Then you might notice, e.g., that sometimes the space of action is not one-dimensional, but people are often blind to some of the options they have.

comment by Vaniver · 2019-12-13T09:32:33.889Z · LW(p) · GW(p)

jimrandomh's comment [LW(p) · GW(p)], linked in the OP, is the current best explanation of the epistemic concerns. Most of my impression comes from in-person discussions with ialdabaoth, which I obviously can't link to, and Facebook posts, of which relatively few individual posts would be damning in isolation but the whole of them adds up to a particular impression. I also want to stress that this is not just my opinion; I currently know of no person who would describe ialdabaoth as a good influence, merely people who think he is redeemable. [I'd be happy to edit this if anyone wants to defend him privately; I have tried to counteract the effect where people might be unwilling to defend him publicly because of the unpopularity of defending a man accused of sexual assault.]

I currently don't have an interest in writing a post with this level of detail [EA · GW] explaining what I'm concerned about, because it has massive costs and unclear benefits. One of ialdabaoth's virtues is that he was cooperative about leaving; rather than us having to figure out whether or not we wanted to ban him, he stopped commenting and posting to not force the issue, which is why this is happening now instead of before. If I were going to invest that level of effort into something, it would be people who I think are actively making things worse now.

Replies from: Tenoke, Zack_M_Davis
comment by Tenoke · 2019-12-13T10:31:22.677Z · LW(p) · GW(p)

jimrandomh's comment [LW(p) · GW(p)], linked in the OP, is the current best explanation of the epistemic concerns.

Excluding the personal stuff, this comment is just a somewhat standard LW critique of a LW post (which has less karma than the original post fwiw). If this is the criterion for an 'epistemic concerns' ban, then you must've banned hundreds of people. If you haven't, you are clearly banning him for the other reasons; I don't know why you insist on being dishonest about it.

Replies from: gjm
comment by gjm · 2019-12-13T16:24:23.823Z · LW(p) · GW(p)

Why would it make sense to "exclude the personal stuff"? Isn't the personal stuff the point here?

Replies from: Tenoke
comment by Tenoke · 2019-12-13T16:37:20.203Z · LW(p) · GW(p)

He is using this comment to show the 'epistemic concerns' side specifically, and claiming the personal stuff was separate.

This is the specific claim.

We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations.

Replies from: gjm
comment by gjm · 2019-12-13T17:37:03.092Z · LW(p) · GW(p)

Maybe I'm confused about what you mean by "the personal stuff". My impression is that what I would consider "the personal stuff" is central to why ialdabaoth is considered to pose an epistemic threat: he has (allegedly) a history of manipulation which makes it more likely that any given thing he writes is intended to deceive or manipulate. Which is why jimrandomh said:

The problem is, I think this post may contain a subtle trap, and that understanding its author, and what he was trying to do with this post, might actually be key to understanding what the trap is.

and, by way of explanation of why "this post may contain a subtle trap", a paragraph including this:

So he created narratives to explain why those conversations were so confusing, why he wouldn't follow the advice, and why the people trying to help him were actually wronging him, and therefore indebted. This post is one such narrative.

Unless I'm confused, (1) this is not "a somewhat standard LW critique of a LW post" because most such critiques don't allege that the thing critiqued is likely to contain subtle malignly-motivated traps, and (2) the reason for taking it seriously is "the personal stuff".

Who's saying, in what sense, that "the personal stuff were separate"?

Replies from: Tenoke, Vaniver
comment by Tenoke · 2019-12-13T17:45:22.873Z · LW(p) · GW(p)

Vaniver is saying that the personal stuff wasn't taken into account when banning him and that the epistemic concerns were enough. From the OP:

We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations.

but then the epistemic concerns seem to be purely based on stuff from the "other allegations" part.

And honestly, the quality of that post is (subjectively) higher than the quality of > 99% of current LW posts, yet the claim is that content is what he is banned for, which is a bit ridiculous. What I am asking is, why pretend it is the content and that the "other allegations" have no part?

Replies from: Vaniver
comment by Vaniver · 2019-12-13T19:33:04.851Z · LW(p) · GW(p)

What I am asking is, why pretend it is the content and that the "other allegations" have no part?

As mentioned in a sibling comment [LW(p) · GW(p)], I am trying to establish the principle that 'promoting reasoning styles in a way we think is predatory' can be a bannable offense, independent of whether or not predation has obviously happened, in part because I think that's part of having a well-kept garden [LW · GW] and in part so that the next person in ialdabaoth's reference class can be prevented from doing significant harm. Simply waiting until someone has been exiled doesn't do that.

Replies from: SaidAchmiz, Pattern
comment by Said Achmiz (SaidAchmiz) · 2019-12-13T23:14:39.951Z · LW(p) · GW(p)

the principle that ‘promoting reasoning styles in a way we think is predatory’ can be a bannable offense

I don’t think I have the slightest idea what this means, even having read the OP and everything else about ialdabaoth’s actions. That’s a problem.

Replies from: Vaniver
comment by Vaniver · 2019-12-14T02:30:04.970Z · LW(p) · GW(p)

I don’t think I have the slightest idea what this means, even having read the OP and everything else about ialdabaoth’s actions. That’s a problem.

I am not worried about you in this regard; if anyone is interested in whether or not they should be worried for themselves, please reach out to me.

Many sorts of misbehavior can be adequately counteracted by clear rules; a speed limit is nicely quantitative and clearly communicable. Misbehavior on a higher level of abstraction like "following the letter of the law but not the spirit" cannot be adequately counteracted in the same sort of way. If one could letter out the spirit of the law, they would have done that the first time. Similarly, if I publish my sense of "this is how I detect adversarial reasoners," then an adversarial reasoner has an easier time getting past my defenses.

I will put some thought into whether I can come up with a good discussion of what I mean by 'predatory', assuming that's where the confusion is; if instead it's in something like "promoting reasoning styles" I'd be happy to attempt to elaborate that.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-12-14T05:04:04.776Z · LW(p) · GW(p)

If one could letter out the spirit of the law, they would have done that the first time.

Some sort of typo / word substitution here, I think? Well, I think I get the sense of what you meant here from context. Anyhow…

Similarly, if I publish my sense of “this is how I detect adversarial reasoners,” then an adversarial reasoner has an easier time getting past my defenses.

I appreciate that, but this was not the point of my comment. Rather, I was saying that it’s entirely unclear to me what you are even trying to detect. (“Adversarial reasoners”, yes, but that is obviously far too broad a category… isn’t it? Maybe I don’t know what you mean by that, either…)

I will put some thought into whether I can come up with a good discussion of what I mean by ‘predatory’, assuming that’s where the confusion is; if instead it’s in something like “promoting reasoning styles” I’d be happy to attempt to elaborate that.

Both, in fact. I would appreciate some elaboration on both counts!

Replies from: Vaniver
comment by Vaniver · 2019-12-15T06:48:03.588Z · LW(p) · GW(p)

Some sort of typo / word substitution here, I think? Well, I think I get the sense of what you meant here from context. Anyhow…

Yeah, I tried to shorten "If one could capture the spirit of the law in letters, they would have done that the first time." and I don't think it worked very well.

Both, in fact. I would appreciate some elaboration on both counts!

So thinking about this response [LW(p) · GW(p)] to Zack_M_Davis made me realize that there's a bit of my model that I might be able to explain easily.

I often think of things like LessWrong as having a 'gravity well', where people come close and then get drawn in by some features that they want more of. Gravity is obviously disanalogous in several ways, as it ignores the role played by preferences (on seeing something, some people become more interested whereas others become more bored), but I think that doesn't affect the point too much; the main features that I want are something like "people near the center feel a stronger pull to stay in than people further away, and the natural dynamics tend to pull people closer over time." I think often a person is in many such wells at once, and there's some dynamic equilibrium between how all their different interests want to pull them.

Sometimes, people want to have a gravity well around themselves to pull other people closer to them. This class contains lots of ordinary things; someone starting a company and looking to build a team does this, someone trying to find apprentices or otherwise mentor people does this, someone attempting to find a romantic partner does a narrower version of this.

Whether this is 'predatory' seems to be a spectrum that for me mostly depends on a few factors, primarily how relative benefit is being prioritized and how information asymmetries are being handled. A clearly non-predatory case is Gallant attempting to start a company where it will succeed and all founders / early employees / investors will share in the profits and whose customers will be satisfied, and a clearly predatory case is Goofus attempting to start a company where the main hope is to exit before things collapse, and plans to pay very little by promising lots of equity while maintaining sufficient control that all of the early employees can be diluted out of profits when the time comes.

One way to have a gravity well around you is to create something new and excellent that people flock to. Another way to have a gravity well pulling other people towards you is moving towards the center of a pre-existing well. Imagine someone getting good at a martial art so that they can teach that martial art to others and feel knowledgeable and helpful.

A third option is becoming a 'fringe master', where you create something new at the boundary of something pre-existing. You don't have to pay the costs of being part of the 'normal system' and dealing with its oversight, but do gain many of the benefits of its advertising / the draw of the excellence at the center of the pre-existing thing. This is basically the category that I am most worried about / want to be able to act against, where someone will take advantage of new people drawn in by LessWrong / the rationality community who don't know about the missing stairs to watch out for or who is held in low regard. A general feature of this case is that the most unsavory bits will happen in private, or only be known through rumor, or assessments of technical skill or correctness that seem obvious to high-skill individuals will not be widely held.

Put another way, it seems to me like the rationality community is trying to draw people in, so that they can get the benefits of rationality / they can improve rationality / they can contribute to shared projects, and it would be good to devote attention to the edges of the community and making sure that we're not losing marginal people we attract to traps set up on the edge of the community.

Furthermore, given the nature of our community, I expect those traps to look a lot like reasoning styles or models that contain traps that the people absorbing the model are unable to see. One of the main things that rationalists do is propose ways of thinking things through that could lead to better truth-tracking or superior performance in some way.

The primary thing I'm worried about there is when the superior performance is on a metric like "loyalty to the cause" or "not causing problems for ialdabaoth." On a top-level comment [LW(p) · GW(p)], Isnasene writes:

Using Affordance Widths to manipulate people into doing things for you is basically a fancy pseudo-rationalist way of manipulating people into doing things for you by making them feel guilty and responsible.

Which is basically my sense; the general pattern was that ialdabaoth would insist on his frame / interpretation that he was being oppressed by society / the other people in the conversation, and would use a style of negotiation that put the pressure on the other person to figure out how to accommodate him / not be guilty of oppressing him instead of on him to not violate their boundaries. There's also a general sense that he was aware of the risk to him from open communication or clear thinking about him, and thus would work against both when possible.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-12-15T18:15:37.838Z · LW(p) · GW(p)

becoming a 'fringe master', where you create something new at the boundary of something pre-existing. You don't have to pay the costs of being part of the 'normal system' and dealing with its oversight, but do gain many of the benefits of its advertising / the draw of the excellence at the center of the pre-existing thing. This is basically the category that I am most worried about / want to be able to act against, where someone will take advantage of new people drawn in by LessWrong / the rationality community who don't know about the missing stairs to watch out for or who is held in low regard.

Thanks for explaining this part; this is really helpful. This model seems to assume that the "oversight" of the "normal system" at the center of the gravity well is trustworthy. I'm currently most worried[1] about the scenario where the "normal system" is corrupt: that new people are getting drawn in by the awesomeness of the Sequences and Harry Potter and the Methods,[2] only to get socialized into a community whose leadership and dominant social trend pays lip service to "rationality", but is not actually interested in using reasoning (at least, not in public) when using reasoning would be socially inconvenient (whether due to the local area's political environment [LW · GW], the asymmetric incentives faced by all sufficiently large organizations [LW · GW], the temptation to wirehead on our own "rationality" and "effectiveness" marketing promises, or many other possible reasons) and therefore require a small amount of bravery [LW · GW].

As Michael Vassar put it in 2013 [LW(p) · GW(p)]:

The worst thing for an epistemic standard is not the person who ignores or denies it, but the person who tries to mostly follow it when doing so feels right or is convenient while not acknowledging that they aren't following it when it feels weird or inconvenient, as that leads to a community of people with such standards engaging in double-think WRT whether their standards call for weird or inconvenient behavior.

Have you thought at all about how to prevent the center of the gravity well from becoming predatory? Obviously, I'm all in favor of having systems to catch missing-stair rapists. But if you're going to build an "immune system" to delegitimize anyone "held in low regard" without having to do the work of engaging with their arguments—without explaining why an amorphous mob that holds something in "low regard" can be trusted to reach that judgement for reliably good reasons—then you're just running a cult [LW · GW]. And if enough people who remember the spirit of the Sequences notice their beloved rationality community getting transformed into a cult, then you might have a rationalist civil war on your hands.

(Um, sorry if that's too ominous or threatening of a phrasing. I think we mostly want the same thing, but have been following different strategies and exposed to different information, and I notice myself facing an incentive to turn up the rhetoric and point menacingly at my BATNA in case that helps with actually being listened to, because recent experiences have trained my brain to anticipate that even high-ranking "rationalists" are more interested in avoiding social threat than listening to arguments. As I'm sure you can also see, this is already a very bad sign of the mess we're in.)


  1. "Worried" is an understatement. It's more like panicking continuously all year with many hours of lost sleep, crying fits, pacing aimlessly instead of doing my dayjob, and eventually doing enough trauma processing to finish writing my forthcoming 20,000-word memoir explaining in detail (as gently and objectively as possible while still telling the truth about my own sentiments and the world I see) why you motherfuckers are being incredibly intellectually dishonest (with respect to a sense of "intellectual dishonesty" that's about behavior relative to knowledge [LW · GW], not conscious verbal "intent"). ↩︎

  2. Notably, written at a time Yudkowsky and "the community" had a lower public profile and therefore faced less external social pressure. This is not a coincidence because nothing is ever a coincidence. ↩︎

Replies from: Vaniver, Wei_Dai
comment by Vaniver · 2019-12-16T01:24:32.589Z · LW(p) · GW(p)

This model seems to assume that the "oversight" of the "normal system" at the center of the gravity well is trustworthy.

On the core point, I think you improve / fix problems with the normal system in the boring, hard ways, and do deeply appreciate you championing particular virtues even when I disagree on where the balance of virtues lies.

I find something offputting here about the word "trustworthy," because I feel like it's a 2-place word [LW · GW]; I think of oversight as something like "good enough to achieve standard X", whereas "trustworthy" alone seems to imply there's a binary standard that is met or not (and has been met). It seems like we could easily have very different standards for trustworthiness that cause us to not disagree on the facts while disagreeing on the implications.

(Somehow, it reminds me of this post [LW · GW] and Caledonian's reaction to it.)

Have you thought at all about how to prevent the center of the gravity well from becoming predatory?

Yes. Mostly this has focused on recruitment work for MIRI, where we really don't want to guilt people into working on x-risk reduction (as not only is it predatory, it also is a recipe for them burning out instead of being productive, and so morality and efficiency obviously align), and yet most of the naive ways to ask people to consider working on x-risk reduction risk guilting them, and you need a more sophisticated way to remove that failure mode than just saying "please don't interpret this as me guilting you into it!". This [LW · GW] is a thing that I've already written that's part of my longer thoughts here.

And, obviously, when I think about moderating LessWrong, I think about how to not become corrupt myself, and what sorts of habits and systems lower the chances of that, or make it more obvious if it does happen.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-12-16T04:57:42.412Z · LW(p) · GW(p)

it's a 2-place word [...] It seems like we could easily have very different standards for trustworthiness that cause us to not disagree on the facts while disagreeing on the implications.

Right, I agree that we don't want to get into a pointless pseudo-argument where everyone agrees that x = 60, and yet we have a huge shouting match over whether this should be described using the English word "large" or "small."

Maybe a question that would lead to a more meaningful disagreement would be, "Should our culture become more or less centralized?"—where centralized is the word I'm choosing to refer to a concept I'm going to try to describe extensionally [LW · GW]/ostensively in the following two paragraphs.[1]

A low-centralization culture has slogans like, "Nullis in verba" or "Constant vigilance!". If a fringe master sets up shop on the outskirts of town, the default presumption is that (time permitting) you should "consider it open-mindedly and then steal only the good parts [...] [as] an obvious guideline for how to do generic optimization" [LW · GW], not because most fringe masters are particularly good (they aren't), but because thinking for yourself actually works and it's not like our leaders in the town center know everything already.

In a high-centralization culture, there's a stronger presumption that our leaders in the town center come closer to knowing everything already, and that the reasoning styles or models being hawked by fringe masters are likely to "contain traps that the people absorbing the model are unable to see": that is, thinking for yourself doesn't work. As a result, our leaders might talk up "the value of having a community-wide immune system" so that they can "act against people who are highly manipulative and deceitful before they have clear victims." If a particular fringe master starts becoming popular, our leaders might want to announce that they are "actively hostile to [the fringe master], and make it clear that [we] do not welcome support from those quarters."

You seem to be arguing that we should become more centralized. I think that would be moving our culture in the absolute wrong direction. As long as we're talking about patterns of adversarial optimization, I have to say that, to me, this kind of move looks optimized for "making it easier to ostracize and silence people who could cause trouble for MIRI and CfAR (e.g., Vassar or Ziz), either by being persistent critics or by embarrassing us in front of powerful third parties who are using guilt-by-association heuristics", rather than improving our collective epistemics.

This seems like a substantial disagreement, rather than a trivial Sorites problem about how to use the word "trustworthy".

do deeply appreciate you championing particular virtues even when I disagree on where the balance of virtues lies

Thanks. I like you, too.


  1. I just made this up, so I'm not at all confident this is the right concept [LW · GW], much like how I didn't think contextualizing-vs.-decoupling was the right concept [LW · GW]. ↩︎

Replies from: Vaniver
comment by Vaniver · 2019-12-16T08:43:06.971Z · LW(p) · GW(p)

not because most fringe masters are particularly good (they aren't), but because thinking for yourself actually works and it's not like our leaders in the town center know everything already.

I think the leaders in the town center do not know everything already. I think different areas have different risks when it comes to "thinking for yourself." It's one thing to think you can fly and jump off a roof yourself, and another thing to think it's fine to cook for people when you're Typhoid Mary, and I worry that you aren't drawing a distinction here between those cases. 

I have thought about this a fair amount, but am not sure I've discovered the right conceptual lines here, and would be interested in how you would distinguish between the two cases, or if you think they are fundamentally equivalent, or that one of them isn't real.

You seem to be arguing that we should become more centralized. I think that would be moving our culture in the absolute wrong direction.

In short, I think there are some centralizing moves that are worth it, and others that aren't, and that we can choose policies individually instead of just throwing the lever on "centralization: Y/N". Well-Kept Gardens Die by Pacifism [LW · GW] is ever relevant; here, the thing that seems relevant to me is that there are some basic functions that need to happen (like, say, the removal of spam), and fulfilling those functions requires tools that could also be used for nefarious functions (as we could just mark criticisms of MIRI as 'spam' and they would vanish). But the conceptual categories that people normally have for this are predicated on the interesting cases; sure, both Nazi Germany and WWII America imprisoned rapists, but the interesting imprisonments are of political dissidents, and we might prefer WWII America because it had many fewer such political prisoners, and further prefer a hypothetical America that had no political prisoners. But this spills over into the question of whether we should have prisons or justice systems at all, and I think people's intuitions on political dissidents are not very useful for what should happen with the more common sort of criminal.

Like, it feels almost silly to have to say this, but I like it when people put forth public positions that are critical of an idea I favor, because then we can argue about it and it's an opportunity for me to learn something, and I generally expect the audience to be able to follow it and get things right. Like, I disagreed pretty vociferously with The AI Timelines Scam [LW · GW], and yet I thought the discussion it prompted was basically good. It did not ping my Out To Get You [LW · GW] sensors in the way that ialdabaoth does. To me, this feels like a central example of the sort of thing you see in a less centralized culture where people are trying to think things through for themselves and end up with different answers, and is not at risk from this sort of moderation.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-12-17T02:42:02.166Z · LW(p) · GW(p)

I don't think this conversation is going to make any progress at this level of abstraction and in public. I might send you an email.

Replies from: Vaniver
comment by Vaniver · 2019-12-17T02:45:09.581Z · LW(p) · GW(p)

I look forward to receiving it.

comment by Wei Dai (Wei_Dai) · 2019-12-16T04:26:51.268Z · LW(p) · GW(p)

“Worried” is an understatement. It’s more like panicking continuously all year with many hours of lost sleep, crying fits, pacing aimlessly instead of doing my dayjob, and eventually doing enough trauma processing to finish writing my forthcoming 20,000-word memoir explaining in detail (as gently and objectively as possible while still telling the truth about my own sentiments and the world I see) why you motherfuckers are being incredibly intellectually dishonest (with respect to a sense of “intellectual dishonesty” that’s about behavior relative to knowledge, not conscious verbal “intent”).

I think I'm someone who might be sympathetic to your case, but just don't understand what it is [LW(p) · GW(p)], so I'm really curious about this "memoir". (Let me know if you want me to read your current draft.) Is this guess [LW · GW], i.e., you're worried about the community falling prey to runaway virtue signaling, remotely close?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-12-16T05:27:49.320Z · LW(p) · GW(p)

Close! I'll PM you.

comment by Pattern · 2019-12-13T20:47:46.607Z · LW(p) · GW(p)

This comment of yours does a better job of explaining that than the post.

comment by Vaniver · 2019-12-13T19:15:35.442Z · LW(p) · GW(p)

The separation I'm hoping to make is between banning him because "we know he committed sex crimes" and banning him because "he's promoting reasoning styles in a way we think is predatory." We do not know the first to the standard of legal evidence; ialdabaoth has not been convicted in a court of law, and while I think the community investigations were adequate for the questions of whether or not he should be allowed at particular events or clubs, my sense is that his exile was decided by amorphous community-wide processing in a way that I'm reluctant to extend further. 

I'm making the additional claim that "The Bay Area community has a rough consensus to kick this guy out" by itself does not meet my bar for banning someone from LessWrong, given the different dynamics of in-person and online interactions. (As a trivial example, suppose someone's body has a persistent and horrible smell; it could easily be the utilitarian move to not allow them at any physical meetups while giving them full freedom to participate online.) I think this is the bit Tenoke is finding hardest to swallow; it's one thing to say "yep, this guy is exiled and we're following the herd" and another thing to say "we've exercised independent judgment here, despite obvious pressures to conform." The latter is a more surprising claim, and correspondingly would require more evidence. 

and (2) the reason for taking it seriously is "the personal stuff".

I think this is indirectly true. That is, there's a separation between expected harm and actual harm, and I'm trying to implement procedures that reduce expected harm. Consider the difference between punishing people for driving drunk and just punishing people for crashing. It's one thing to just wait until someone accumulates a cloud of 'unfortunate events' around them that leads to them finally losing their last defenders, and another to take active steps to assess risks and reduce them. Note that this requires a good model of how 'drunkenness' leads to 'crashes', and I do not see us as having presented a convincing model of that in this case.

Of course, this post isn't an example of that; as mentioned, this post is years late, and the real test of whether we can do the equivalent of punishing people for driving drunk is whether we can do anything about people currently causing problems [in expectation]. But my hope is that this community slowly moves from a world where 'concerns about X' are published years after they've become mutual knowledge among people in the know to one where corrosive forces are actively cleaned up before they make things substantially worse.

comment by Zack_M_Davis · 2019-12-13T19:00:54.709Z · LW(p) · GW(p)

I currently know of no person who would describe ialdabaoth as a good influence

"Good influence" in what context? I remember finding some of his Facebook posts/comments insightful, and I'm glad I read them. It shouldn't be surprising that someone could have some real insights (and thereby constitute a "good influence" in the capacity of social-studies blogging), while also doing a lot of bad things (and thereby constituting a very bad influence in the capacity of being a real-life community member), even if the content of the insights is obviously related to the bad things. (Doing bad things and getting away with them for a long time requires skills that might also lend themselves to good social-studies blogging.)

Less Wrong is in the awkward position of being a public website (anyone can submit blog posts about rationality under a made-up name), and also being closely associated with a real-life community with dense social ties, group houses, money, &c. If our actual moderation algorithm is, "Ban people who have been justly ostracized from the real-life community as part of their punishment, even if their blog comments were otherwise OK", that's fine, but we shouldn't delude ourselves about what the algorithm is.

Replies from: Vaniver
comment by Vaniver · 2019-12-13T19:43:49.506Z · LW(p) · GW(p)

I remember finding some of his Facebook posts/comments insightful, and I'm glad I read them.

If you learned that someone in the rationality community had taken on ialdabaoth as a master (like in the context of zen, or a PhD advisor, or so on), would you expect them to grow in good directions or bad directions? [Ideally, from the epistemic state you were in ~2 years ago, rather than the epistemic state you're in now.]

I acknowledge that this is quite different from the "would you ever appreciate coming across their thoughts?" question; as it happens, I upvoted Affordance Widths when I first saw it, because it was a neat simple presentation of a model of privilege, and wasn't taking responsibility for his collected works or thinking about how it fit into them. I mistakenly typical-minded on which parts of his work people were listening to, and which they were safely ignoring.

If our actual moderation algorithm is, "Ban people who have been justly ostracized from the real-life community as part of their punishment, even if their blog comments were otherwise OK", that's fine, but we shouldn't delude ourselves about what the algorithm is.

I agree that algorithm could be fine, for versions of LessWrong that are focused primarily on the community instead of on epistemic progress, but am trying to help instantiate the version of LessWrong that is primarily focused on epistemic progress.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-12-13T20:30:52.946Z · LW(p) · GW(p)

If you learned that someone in the rationality community had taken on ialdabaoth as a master (like in the context of zen, or a PhD advisor, or so on), would you expect them to grow in good directions or bad directions?

Bad directions. The problem is that I also think this of other users who we aren't banning, which suggests that our standard for "allowed to post on Less Wrong" is lower than "would be a good master."

[Ideally, from the epistemic state you were in ~2 years ago, rather than the epistemic state you're in now.]

Okay, right, I'm much less sure that I would have said "bad directions" as confidently 2 years ago.

am trying to help instantiate the version of LessWrong that is primarily focused on epistemic progress.

Thanks for this!! (I think I'm much more worried than you about the failure mode where something claiming to make intellectual progress is actually doing something else, which makes me more willing to tolerate pragmatic concessions of principle that are explicitly marked as such, when I'm worried that the alternative is the concession being made anyway with a fake rationale attached.)

Replies from: Vaniver
comment by Vaniver · 2019-12-14T01:24:41.526Z · LW(p) · GW(p)

The problem is that I also think this of other users who we aren't banning, which suggests that our standard for "allowed to post on Less Wrong" is lower than "would be a good master."

I interpreted ialdabaoth as trying to be a master, in a way that I do not interpret most of the people who fail my "would be a good master" check. (Most of them are just not skilled in that sort of thing and don't seek it out.) If there are people you think would both predictably mislead their students and appear to be trying to recruit students, I'm interested in hearing about it.

I think I'm much more worried than you about the failure mode where something claiming to make intellectual progress is actually doing something else

This is possible; my suspicion is that we have similar levels of dispreference for that and different models of how attempts to make intellectual progress go astray. I try to be upfront about when I make pragmatic concessions, as I think you have some evidence of, in part so that when I am not making such a concession it's more believable. [Of course, for someone without observations of that sort of thing, I don't expect me claiming that I do it to be much evidence.]

comment by Wei Dai (Wei_Dai) · 2019-12-16T04:02:17.285Z · LW(p) · GW(p)

I'm confused that there's not an explanation in the post, or a link to an explanation, of what ialdabaoth's manipulative epistemic tactics were, either in general terms or how ialdabaoth specifically used those tactics in his posts or comments. (ETA: In fact I'm not even sure what concept you're referring to by the words "manipulative epistemic tactics" because this post seems to be the first time it's ever been used on LW, or even on the entire searchable net.) Maybe you can't talk about the investigations into the more personal allegations, but surely legal considerations don't rule out explaining this? Aside from convincing people that you made the right decision or satisfying their curiosity about this case, it would also help people defend themselves from similar tactics in other circumstances.

Given that ialdabaoth's "Affordance Widths" is at 141 points and received two nominations, if it does constitute manipulation it seemingly is something that even LWers are extremely vulnerable to, which probably deserves a full post or maybe even a sequence to help immunize people against. My questions now are:

  1. Why is there not a full official explanation from the mod team about the manipulative epistemic tactics?
  2. Can you give one now or in the near future?
Replies from: ChristianKl, Vaniver
comment by ChristianKl · 2019-12-16T14:36:41.589Z · LW(p) · GW(p)

https://medium.com/@mittenscautious/warning-3-8097bb6747b1 seems to contain a few very concrete suggestions about bad epistemic tactics by ialdabaoth.

Early, Brent explicitly emphasised the power of belief as a thing independent of epistemology. That it could be important to choose what to believe, or believe false things, if that worked better. I didn’t like that at all. It didn’t sit well, and I refused, vocally. For the next week, the conversations were direct: [...]
“You need to just acknowledge me as the wizard and trust I know what I’m doing”

I think I see how "Affordance Widths" might be abused and it would likely be good to have a longer post about how it's problematic.

comment by Vaniver · 2019-12-17T02:06:38.019Z · LW(p) · GW(p)

Why is there not a full official explanation from the mod team about the manipulative epistemic tactics?

I can tell you why I didn't write one; I believe other people on the mod team are interested in writing one, and so there may be one soon.

First, cost. This post already went through something like 10 human-hours of effort (maybe 20 depending on how you count meeting time that preceded writing it), and I think a recounting of what specifically went wrong and why we think it was wrong might take something like an additional 20 human-hours of effort.

Second, incompleteness. Suppose one of the big updates that I made came from a private conversation with ialdabaoth, and I ask him if I can quote him on a post here, and he says no. My private beliefs don't change, but my public case changes significantly, and someone who is trying to infer how I'm reasoning from my public case might end up with less accurate beliefs. For example, one of the reasons why I didn't link to the allegations is because if I said "epistemic concerns and [linked allegations]", the weight of the text would lean much more towards the allegations.

Is it better to say 20% than 5%? Unclear. Both are definitely less satisfying for an observer than 100%, but my current sense is that those are the numbers we're picking between, instead of 20% and 80%. [If I thought I could say 80% of my evidence, I might feel very differently about this; as is, it feels like I'll be able to give detailed labels but not sufficient reason for you to think those labels correspond to reality, in a way that just passes the buck on where the 'trust me' weight lands.]

Third, scope. There's just so much to say, in a way that gives similar incompleteness concerns. A much simpler standard is an explanation of his manipulative epistemic goals, like "he acted in a way that seems intended to give him room to operate" (by which I mean, encouraged people who could resist his tactics or point them out to leave him alone with those who couldn't or didn't), but this is a subjective interpretation that might or might not follow from the observed tactics. And what tactics I personally observed is a function of how I behaved around him and what his plans were for me; one of the elements of the semi-public conversation that moved my estimate significantly was one of ialdabaoth's defenders realizing that ialdabaoth had been carefully not behaving in a particular way around the defender, such that the defender's estimate of how ialdabaoth behaved would be substantially different from everyone else's. What evidence filtered evidence, and all that.

Fourth, costs of inaction. The decision to implement a ban was made on Monday, and yet the post didn't go up until Thursday, and it seemed better to delay the ban and finish the post than to ban him on Monday and have an unannounced ban for however many days. I think several previous moderation scenarios have gone poorly because the team thought they had longer to contemplate the issue than they actually had.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2020-01-17T03:55:18.140Z · LW(p) · GW(p)

I'm less worried about "cancel culture" becoming a thing on LW than in EA (because we seem naturally more opposed to that kind of thing), but I'm still a bit worried. I think having mods be obligated to explain all non-trivial banning decisions (with specifics instead of just pointing to broad categories like "manipulative") would be a natural Schelling fence [LW · GW] to protect against a potential slippery slope, so the costs involved may be worth paying from that perspective.

comment by Isnasene · 2019-12-14T06:21:30.676Z · LW(p) · GW(p)

As a guy on the internet, I mostly agree with this post, in the sense that I think the points you bring up warrant a ban. That said...

Suppose instead you are running a trading fund, and someone previously convicted of fraud sends you an idea for a new financial instrument. Here, it seems like you should be much more suspicious, not just of the idea but also of your ability to successfully notice the trap if there is one. It seems relevant now to check both whether the idea is true and whether or not it is manipulative. Rather than just performing a process that catches simple mistakes or omissions, one needs to perform a process that's robust to active attempts to mislead the judging process.
...
I think the middle case is closest to the situation we’re in now, for reasons like those discussed in comments by jimrandomh [LW(p) · GW(p)] and by Zack_M_Davis [LW(p) · GW(p)]. Much of ialdabaoth's output is claims about social dynamics and reasoning systems that seem, at least in part, designed to manipulate the reader, either by making them more vulnerable to predation or more likely to ignore him / otherwise give him room to operate.

Having read Affordance Widths and seen the way it may be used to justify awful behavior, I don't see the risks of these kinds of posts as being much higher than those of a lot of other Less Wrong and Rationalist-style writing. Less Wrong and Rationalist-style writing by nature talks in the abstract about a lot of really broad ideas that can have significant implications for how someone should make decisions in real life and, unless you're already a very skilled rationalist, you can botch those implications in really damaging ways (personally speaking, reading Eliezer's meta-ethics sequence when I was 14 was a mistake. But scrupulosity in general can also be a minefield). Also, epistemic learned helplessness is a thing, and it's especially a rationalist thing.

So, regarding the above justification:

  • from an epistemic point of view, Ialdabaoth's post (Affordance Widths) does not strike me as intrinsically more harmful than other posts
  • from a manipulation point of view, Ialdabaoth's post (Affordance Widths) does not strike me as intrinsically more manipulator-friendly than a lot of other posts
  • while Affordance Widths is more manipulator-friendly than a lot of other posts in the sense that at least one manipulator (Ialdabaoth) knows that it can be used for manipulation, I do not think this is very relevant because

[Epistemological Status: Maybe the rationalist community dynamic is unusual and I'm mis-gauging things here].

    • 1. Using Affordance Widths to manipulate people into doing things for you is basically a fancy pseudo-rationalist way of manipulating people into doing things for you by making them feel guilty and responsible. This is such a common way for people to get manipulated that, from a pragmatic perspective, I'm skeptical that Affordance Widths allows manipulators to be more dangerous than they would have otherwise been just engaging in direct emotional manipulation.
    • 2. Even the epistemics used in Affordance Widths already exist. People who have been disenfranchised in various ways do use their own personal struggles as a way to convey an implicit duty to those around them pretty frequently. In my circles, memetic immune systems have even built up against these sorts of things (ie the phrase "mental health is not an excuse"). Affordance Widths strikes me as epistemically superfluous in the context of the world's current epistemic environment. Moreover, I could imagine good people who don't do things like Ialdabaoth did writing a post very similar to Affordance Widths and, if someone else wrote this post, I really doubt that it would be banned.
    • 3. As you note, posts that may be both manipulative but also epistemically useful (as Affordance Widths is) merit consideration if you believe that the post's safety can be screened. Less Wrong has a uniquely intelligent community and a pretty well-regarded comments section so my expectation would be that someone here should be around to identify epistemological traps in posts in general. If this expectation is appropriate, then accepting this kind of post shouldn't be considered risky. If it isn't appropriate, yall have bigger problems.
      • As a caveat, it's of course possible for someone being manipulated to gloss over the comments. But the set of people who get manipulated if and only if they are subjected to epistemologically manipulative (rather than emotionally manipulative) traps, and who also gloss over the comments on an article, is probably smaller than the set of such people who would read the comments and update away from Ialdabaoth's claims. Of course, someone could be emotionally manipulated enough to gloss over the comments while somehow remaining resistant to the kind of manipulation that can only be achieved through abstract epistemic posts, but this is a really specific trajectory compared to a lot of others.

Of course, despite my dislike of the analogy and its focus on potentially harmful Less Wrong posts, I still support the ban. It's important to have an epistemic immune system, and:

#1. Less Wrong, and any community focused on self-analysis and improvement, requires a high trust environment

#2. Ialdabaoth has demonstrated clearly manipulative behaviors in real life, causing a lot of harm

#3. We cannot separate Ialdabaoth's real life manipulative behavior from manipulative behavior on Less Wrong

#4. Ialdabaoth should therefore be banned on Less Wrong for the sake of maintaining a high-trust environment

While posts like Affordance Widths are supporting evidence of #3, I think that, given things Ialdabaoth has done, claim #3 should really be treated as the default assumption even sans that kind of supporting evidence. And this is even more true in this particular context where Less Wrong's community apparently overlaps so much with his real life community. We shouldn't give people the benefit of the doubt about compartmentalizing bad behavior just to areas that don't affect us and we definitely shouldn't give them the benefit of the doubt when the areas with and without bad behavior aren't mutually distinct.

Replies from: Vaniver
comment by Vaniver · 2019-12-14T22:07:28.810Z · LW(p) · GW(p)

I think instead of 'high trust environment' I would want a phrase like 'intention to cooperate' or 'good faith', where I expect that the other party is attempting to seek the truth and is engaging in behaviors that will help move us towards the truth, but could be deeply confused themselves and so it makes sense to 'check their work' and ensure that, if I'm confused on a particular point, they can explain it to me or otherwise demonstrate that it is correct instead of just trusting that they have it covered.

To be clear, I would not ban someone for only writing Affordance Widths; I think it is one element of a long series of deceit and manipulation, and it is the series that is most relevant to my impression.

Replies from: Isnasene
comment by Isnasene · 2019-12-15T01:10:55.641Z · LW(p) · GW(p)
I think instead of 'high trust environment' I would want a phrase like 'intention to cooperate' or 'good faith', where I expect that the other party is attempting to seek the truth and is engaging in behaviors that will help move us towards the truth

I agree -- I think 'intention to cooperate' or 'good faith' are much more appropriate terms that get more at the heart of things. To move towards the truth or improve yourself or what-have-you, you don't necessarily need to trust people in general but you do need to be willing to admit some forms of vulnerability (ie "I could be wrong about this" or "I could do better"). And the expectation or presence of adversarial manipulation (ie "I want you to do X for me but you don't want to so I'll make you feel wrong about what you want") heavily disincentivizes these forms of vulnerability.

To be clear, I would not ban someone for only writing Affordance Widths; I think it is one element of a long series of deceit and manipulation, and it is the series that is most relevant to my impression.

Thanks for clarifying -- and I think this point is also borne out by many statements in your original post. My response was motivated less by Affordance Widths specifically and more by the trading firm analogy. To me, the problem with Ialdabaoth isn't that his output may pose epistemic risks (which would be analogous to the fraud-committer's output posing legal risks); it's that Ialdabaoth being in a good-faith community would hurt the community's level of good faith.

This is an important distinction because the former problem would isolate Ialdabaoth's manipulativeness just to Bayesian updates about the epistemic risks of his output on Less Wrong (which I'm skeptical about being that risky), while the latter problem would consider Ialdabaoth's general manipulativeness in the context of community impact (which I think may be more serious, and does definitely take into consideration things like sex crimes, to address Zack_M_Davis's comments a little bit).

comment by ChristianKl · 2019-12-13T07:39:45.695Z · LW(p) · GW(p)

Reading this post, I feel curiosity about the exact allegations against him. I can understand if you don't want to rehearse them for legal reasons but if that's the case I think it would be great if you are explicit about that decision.

(if anybody else has insights, I'm happy about getting the information in a non-public form)

comment by moridinamael · 2019-12-13T16:21:19.900Z · LW(p) · GW(p)

If you're looking for feedback ...

On one level I appreciate this post as it provides delicious juicy social drama that my monkey brain craves and enjoys on a base, voyeuristic level. (I recognize this as being a moderately disgusting admission, considering the specific subject matter; but I'm also pretty confident that most people feel the same, deep down.) I also think there is a degree of value to understanding the thought processes behind community moderation, but I also think that value is mixed.

On another level, I would rather not know about this. I am fine with Less Wrong being moderated by a shadowy cabal. If the shadowy cabal starts making terrible moderation decisions, for example banning everyone who is insufficiently ideologically pure, or just going crazy in some general way, it's not like there's anything I can do about it anyway. The good/sane/reasonable moderator subjects their decisions to scrutiny, and thus stands to be perpetually criticized. The bad/evil moderator does whatever they want, doesn't even try to open up a dialogue, and usually gets away with it.

Fundamentally you stand to gain little and lose much by making posts like this, and now I've spent my morning indulging myself reading up on drama that has not improved my life in any way.

Replies from: Zack_M_Davis, Tenoke
comment by Zack_M_Davis · 2019-12-13T18:02:31.965Z · LW(p) · GW(p)

it's not like there's anything I can do about it anyway.

Not having control of a website isn't the same thing as "nothing I can do." If an authority who you used to trust goes subtly crazy in a way that you can detect, but you see other people still trusting the authority, you could do those other people a favor by telling them, "Hey, I think the authority has gone crazy, which conclusion I came to on the basis of this-and-such evidence. Maybe you should stop trusting the authority!"

The good/sane/reasonable moderator subjects their decisions to scrutiny, and thus stands to be perpetually criticized.

Right, and then they update on the good criticism (that contains useful information about how to be a better moderator) and ignore the bad criticism (that does not contain useful information). That's how communication works. Would you prefer to not be perpetually criticized?!

Fundamentally you stand to gain little and lose much by making posts like this

In a system of asymmetric justice [LW · GW] where people are mostly competing to avoid being personally blamed for anything, sure. Maybe a website devoted to discovering and mastering the art of systematically correct reasoning should aspire to a higher standard [LW · GW] than not getting personally blamed for anything?!

Replies from: moridinamael, habryka4
comment by moridinamael · 2019-12-13T18:22:44.928Z · LW(p) · GW(p)

For the record, I view the fact that I commented in the first place, and that I now feel compelled to defend my comment, as being Exhibit A of the thing that I'm whining about. We chimps feel compelled to get in on the action when the fabric of the tribe is threatened. Making the banning of a bad guy the subject of a discussion rather than an act of unremarked moderator fiat basically sucks everybody nearby into a vortex of social wagon-circling, signaling, and reading a bunch of links to figure out which chimps are on the good guy team and which chimps are on the bad guy team. It's a significant cognitive burden to impose on people, a bit like an @everyone in a Discord channel, in that it draws attention and energy in vastly disproportionate scope relative to the value it provides.

If we were talking about something socio-emotionally neutral like changing the color scheme or something, cool, great, ask the community. I have no opinion on the color scheme, and I'm allowed to have no opinion on the color scheme. But if you ask me what my opinion is on Prominent Community Abuser, I can't beg off. That's not an allowed social move. Better not to ask, or if you're going to ask, be aware of what you're asking.

Sure, you can pull the "but we're supposed to be Rationalists(tm)" card, as you do in your last paragraph, but the Rationalist community has pretty consistently failed to show any evidence of actually being superior, or even very good, at negotiating social blow-ups.

Replies from: Zack_M_Davis, habryka4
comment by Zack_M_Davis · 2019-12-13T19:07:24.040Z · LW(p) · GW(p)

I agree that the attention sinkhole is a problem.

comment by habryka (habryka4) · 2019-12-13T18:29:39.582Z · LW(p) · GW(p)

nods I do agree with this to a significant degree. Note that one of the reasons for the frontpage/personal distinction is to allow people to opt-out of a lot of social-drama stuff, and generally create a space (the frontpage) in which you don't have to keep track of a lot of this social stuff, and can focus on the epistemic content of the posts. 

comment by habryka (habryka4) · 2019-12-13T18:19:59.830Z · LW(p) · GW(p)

I agree with most of this, and do think that it's very clearly worth it for us to continue announcing and publicly communicating anything in the reference class of the OP (as well as the vast majority of things less large than that). 

comment by Tenoke · 2019-12-13T16:54:43.312Z · LW(p) · GW(p)
it's not like there's anything I can do about it anyway.

It's sad it's gotten that bad with the current iteration of LW. Users here used to think they had a chance at influencing how things are done, and plenty of things were heavily community-influenced despite having a benevolent dictator for life.

Replies from: Vaniver, moridinamael, habryka4
comment by Vaniver · 2019-12-13T18:44:09.405Z · LW(p) · GW(p)

Users here used to think they had a chance at influencing how things are done, and plenty of things were heavily community-influenced despite having a benevolent dictator for life.

That is... not how I would characterize the days when Eliezer was the primary moderator [LW(p) · GW(p)].

Replies from: Tenoke
comment by Tenoke · 2019-12-13T21:19:09.249Z · LW(p) · GW(p)

I mean, he uses the exact same phrase I do here but yes, I see your point.

comment by moridinamael · 2019-12-13T17:27:12.568Z · LW(p) · GW(p)

I wasn’t really intending to criticize the status quo. Social consensus has its place. I’m not sure moderation decisions like this one require social consensus.

comment by habryka (habryka4) · 2019-12-13T17:41:28.810Z · LW(p) · GW(p)

I think you are misunderstanding the comment above. As moridinamael says, this is about the counterfactual in which the moderation team goes crazy for some reason, which I think mostly bottoms out in where the actual power lies. If Eliezer decides to ban everyone tomorrow, he was always able to do that, and I don't think anyone would really have the ability to stop him now (since MIRI still owns the URL and a lot of the data). This has always been the case, and if anything is less the case now, but in either case is a counterfactual I don't think we should optimize too much for. 

comment by leggi · 2019-12-14T05:59:00.954Z · LW(p) · GW(p)

Honestly, as someone reading this with no personal knowledge of the situation, for whom any evidence is 'internet hearsay', I say:

The post by the author in question isn't worth this interest or drama. (Debating principles is good but this is clearly too personal for some.)

It coins a catchy phrase, but the concept of different people having to conform to different standards is not new. (the "bad man" has sent a representation of the human visual spectrum to a physics journal and called it a rainbow.)

"one rule for one and one for another" and many other adages through the ages sums up the idea.