Zoe Curzi's Experience with Leverage Research

post by Ilverin · 2021-10-13T04:44:49.020Z · LW · GW · 177 comments

This is a link post for https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b

Comments sorted by top scores.

comment by RyanCarey · 2021-10-13T11:36:49.041Z · LW(p) · GW(p)

Thanks for your courage, Zoe!

Personally, I've tried to maintain anonymity in online discussion of this topic for years. I dipped my toe into openly commenting last week [LW(p) · GW(p)], and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post". Firstly, I very much don't appreciate my ability to maintain anonymity being narrowed like this. Rather, anonymity is a helpful defense in any sensitive online discussion, not least this one. But secondly, yes, I am throwaway/anonymoose - I posted anonymously because I didn't want to suffer adverse consequences from friends who got more involved than me. But I'm not throwaway2,  anonymous, or BayAreaHuman - those three are bringing evidence that is independent from me at least.

I only visited Leverage for a couple months, back in 2014. One thing that resonated strongly with me about your post is that the discussion is badly confused by lack of public knowledge and strong narratives, about whether people are too harsh on Leverage, what biases one might have, and so on. This is why I think we often retreat to just stating "basic" [EA · GW] or "common knowledge" [LW · GW] facts; the facts cut through the spin.

Continuing in that spirit, I personally can attest that much of what you have said is true, and the rest congruent with the picture I built up there. They dogmatically viewed human nature as nearly arbitrarily changeable. Their plan was to study how to change their psychology, to turn themselves into Elon Musk type figures, to take over the world. This was going to work because Geoff was a legendary theoriser, Connection Theory had "solved psychology", and the resulting debugging tools were exceptionally powerful. People "worked" for ~80 hours a week - which demonstrated the power of their productivity coaching.

Power asymmetries and insularity were present to at least some degree. I personally didn't encounter an NDA, or talk of "demons" etc. Nor did I get a solid impression of the psychological effects on people from that short stay, though of course there must have been some.

What's frustrating about still hearing noisy debate on this topic, so many years later, is that Leverage being a really bad org seems overdetermined at this point. On the one hand, if I ranked MIRI, CFAR, CEA, FHI, and several startups I've visited, in terms of how reality-distorting they can be, Leverage would score ~9, while no other would surpass ~7. (It manages to be nontransparent and cultlike in other ways too!) While on the other hand, their productive output was... also like a 2/10? It's indefensible. But still only a fraction of the relevant information is in the open.

As you say, it'll take time for people to build common understanding, and to come to terms with what went down. I hope the cover you've offered will lead some others to feel comfortable sharing their experiences, to help advance that process.

Replies from: Linch, Evan_Gaensbauer
comment by Linch · 2021-10-13T23:23:09.885Z · LW(p) · GW(p)

What's frustrating about still hearing noisy debate on this topic, so many years later, is that Leverage being a really bad org seems overdetermined at this point. On the one hand, if I ranked MIRI, CFAR, CEA, FHI, and several startups I've visited, in terms of how reality-distorting they can be, Leverage would score ~9, while no other would surpass ~7. (It manages to be nontransparent and cultlike in other ways too!) While on the other hand, their productive output was... also like a 2/10? It's indefensible. But still only a fraction of the relevant information is in the open.

One thing to note is that if you "read the room" instead of only looking at the explicit arguments, it's noticeable that a lot of people left Leverage and the new org ("Leverage 2.0") completely switched research directions, which to me seems like tacit acknowledgement that their old methods etc aren't as good.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-14T09:19:31.403Z · LW(p) · GW(p)

As far as people leaving organizations I'd love to have good data for MIRI, CFAR, CEA and FHI.

Replies from: habryka4, ozziegooen
comment by habryka (habryka4) · 2021-10-14T23:24:36.693Z · LW(p) · GW(p)

I think I could write down a full history of employment for all of these orgs (except maybe FHI, which I've kept fewer tabs on) in an hour or two of effort. It's somewhat costly for me (in terms of time), but if lots of people are interested, I would be happy to do it.

Replies from: David Hornbein, kohaku-none
comment by David Hornbein · 2021-10-15T00:38:12.925Z · LW(p) · GW(p)

I'm personally interested, and also I think having information like this collected in one place makes it much easier for everyone to understand the history and shape of the movement. IMO an employment history of those orgs would make for a very valuable top-level post.

comment by ozziegooen · 2021-10-14T14:42:45.482Z · LW(p) · GW(p)

As someone who's been close to these orgs, I'd say some had a few related issues, but Leverage seemed much more extreme along many of these dimensions to me.

However, now there are like 50 small EA/rationalist groups out there, and I am legitimately worried about quality control.

Replies from: Viliam, ChristianKl
comment by Viliam · 2021-10-15T08:03:44.210Z · LW(p) · GW(p)

I generally worry about all kinds of potential bad actors associating themselves with EA/rationalists.

There seems to be a general pattern where new people come to an EA/LW/ACX/whatever meetup or seminar, trusting the community, and there they meet someone who abuses this trust and tries to extract free work, recruit them for their org, or abuse them sexually. The new person trusts them as representatives of the EA/rationalist community (which they can easily pretend to be), while the actual representatives of the EA/rationalist community probably don't even notice that this happens, or maybe feel like it's not their job to go around reminding everyone, "hey, don't blindly trust everyone you meet here".

I assume the illusion of transparency plays a big role here, where the existing members generally know who is important and who is a nobody, who plays a role in the movement and who is just hanging out there, what kind of behavior is approved and what kind is not... but the new member has no idea about anything, and may assume that if someone acts high-status then the person actually is high-status in the movement, and that whatever such a person does has the approval of the community.

To put it bluntly, EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously -- from the perspective of a potential abuser, this is ripe fruit ready to be taken, it is even obvious what sales pitch you should use on them.

Not sure what exactly to do about this, but perhaps the first step could be to write some warnings about this, and read them publicly at the beginning of every public event where new people come. Preferably with specific examples of things that happened in the past; like, not the exact name and place, but the pattern, like "hey, I have a startup that aims to improve the world, wanna code this app for me for free, I will totally donate something to some effective charity, pinky swear".

Replies from: ozziegooen, ozziegooen, farp
comment by ozziegooen · 2021-10-16T06:29:06.396Z · LW(p) · GW(p)

I very much agree about the worry. My original comment was to make the easiest case quickly, but I think more extensive cases apply too. For example, I'm sure there have been substantial problems even in the other notable orgs, and in expectation we should expect there to continue to be so. (I'm not saying this based on particular evidence about these orgs; more that the base rate for similar projects seems bad, and these orgs don't strike me as absolutely above these issues.)

One solution (of a few) that I’m in favor of is to just have more public knowledge about the capabilities and problems of orgs.

I think it’s pretty easy for orgs of about any quality level to seem exciting to new people and recruit them or take advantage of them. Right now, some orgs have poor reputations among those “in the know” (generally for producing poor quality output), but this isn’t made apparent publicly.[1] One solution is to have specialized systems that actually present negative information publicly; this could be public rating or evaluation systems.

This post by Nuno was partially meant as a test for this:

https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations [EA · GW]

Another thing to do, of course, would be to just do some amounts of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have. I think that in the case of Leverage, there really should have been some deep investigation a few years ago, perhaps after a separate setup to flag possible targets of investigation. Back then things were much more disorganized and more poorly funded, but now we’re in a much better position for similar efforts going forward.

[1] I don’t particularly blame them, consider the alternative.

Replies from: agrippa
comment by agrippa · 2021-10-17T06:44:32.536Z · LW(p) · GW(p)

[1] I don’t particularly blame them, consider the alternative.

I think the alternative is actually much better than silence!

For example I think the EA Hotel is great and that many "in the know" think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information to the EA Hotel and to the community. I wish that more would be surfaced. 

Simply put, if you are actually trying to make a good org, being silently blackballed by those "in the know" is actually not so fun. Of course there are other considerations, such as backlash, but IDK, I think transparency is good from all sorts of angles. The opinions of those "in the know" matter; they lead, and I think it's better for everyone if that leadership happens in the light.

Another thing to do, of course, would be to just do some amounts of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have. 

I think this is more than warranted at this point, yeah. I wonder who might be trusted enough to lead something like that.

Replies from: ozziegooen
comment by ozziegooen · 2021-10-17T15:48:36.226Z · LW(p) · GW(p)

I agree that it would have been really nice for grantmakers to communicate more with the EA Hotel, and with other orgs, about their issues. This is often a really challenging conversation to have ("we think your org isn't that great, for these reasons"), and we currently have very few grantmaker hours for the scope of the work, so I think grantmakers don't have much time to spend on this. However, there does seem to be a real gap here to me. I represent a small org and have been around other small orgs, and the lack of communication from grantmakers to small orgs is a big issue. (And I probably have it much easier than most groups, knowing many of the individuals responsible.)

I think the fact that we have so few grantmakers right now is a big bottleneck that I'm sure basically everyone would love to see improved. (The situation isn't great for current grantmakers, who often have to work long hours). But "figuring out how to scale grantmaking" is a bit of a separate discussion. 

Around making the information public specifically, that's a whole different matter. Imagine the value proposition: "If you apply to this grant and get turned down, we'll write about why we don't like it publicly for everyone to see." Fewer people would apply, and many would complain a whole lot when it happens. The LTFF already gets flak for writing somewhat-candid information on the groups they do fund.

(Note: I was a guest manager on the LTFF for a few months, earlier this year)

Replies from: agrippa
comment by agrippa · 2021-10-17T16:46:28.520Z · LW(p) · GW(p)

"If you apply to this grant and get turned down, we'll write about why we don't like it publicly for everyone to see."

I feel confident that Greg of EA Hotel would very much prefer this in the case of EA Hotel. It can be optional, maybe.

Replies from: ozziegooen
comment by ozziegooen · 2021-10-17T18:38:38.263Z · LW(p) · GW(p)

That's good to know. 

I imagine grantmakers would be skeptical about people who say "yes" to an optional form. Like, they say they're okay with the information being public, but when it actually goes out, some of them will complain about it, costing a lot of extra time.

However, some of our community seems unusually reasonable, so perhaps there's some way to make it viable.

comment by ozziegooen · 2021-10-16T06:32:27.108Z · LW(p) · GW(p)

To put it bluntly, EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously -- from the perspective of a potential abuser, this is ripe fruit ready to be taken, it is even obvious what sales pitch you should use on them.

For what it’s worth, I think this is true for basically all intense and moral communities out there. The EA/rationalist groups generally seem better than many religious and intense political groups in these areas, to me. However, even “better” is probably not at all good enough.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-16T08:54:37.439Z · LW(p) · GW(p)

What are “intense” and/or “moral” communities? And, why is it (or is it?) a good thing for a community to be “moral” and/or “intense”?

Replies from: ChristianKl, ozziegooen
comment by ChristianKl · 2021-10-16T09:53:14.023Z · LW(p) · GW(p)

There are certain goals for which having a moral or intense community is helpful. Whether or not I want to live in such a community, I consider it okay for other people to build those communities. On the other hand, building cults is not okay in the same sense.

Intense communities also generally focus on something where otherwise there's not much focus in society, increase cognitive diversity and are thus able to produce certain kinds of innovations that wouldn't happen with less cognitive diversity.

comment by ozziegooen · 2021-10-16T15:32:44.656Z · LW(p) · GW(p)

I was just thinking of the far right-wing and left-wing in the US; radical news organizations and communities. Q-anon, some of the radical environmentalists, conspiracy groups of all types. Many intense religious communities. 

I'm not making a normative claim about the value of being "moral" and/or "intense", just saying that I'd expect moral/intense groups to have some of the same characteristics and challenges.

comment by farp · 2021-10-17T06:35:47.110Z · LW(p) · GW(p)

while the actual representatives of EA/rationalist community probably don't even notice that this happens

I think it matters a lot whether this is true, and there is widely known evidence that it isn't true. For example, Brent Dill and (if you are willing to believe victims) Robert Lecnik.

Your post is well said and I am also very worried about EA/rat spaces as a fruitful space for predatory actors. 

Replies from: AnnaSalamon, Viliam
comment by AnnaSalamon · 2021-10-17T18:44:16.427Z · LW(p) · GW(p)

Which thing are you claiming here? I am a bit confused by the double negative (you're saying there's "widely known evidence that it isn't true that representatives don't even notice when abuse happens", I think; might you rephrase?).

I've made stupid and harmful errors at various times, and e.g. should've been much quicker on the uptake about Brent, and asked more questions when Robert brought me info about his having been "bad at consent", as he put it. I don't wish to be, and don't think I should be, one of the main people trying to safeguard victims' rights; I don't think I have the needed eyes/skill for it. (Separately, I am not putting in the time and effort required to safeguard a community of many hundreds, nor is anyone that I know of, nor do I know if we know how, or if there's much agreement on what kinds of 'safeguarding' are even good ideas, so there are whole piles of technical debt and gaps in common knowledge and so on here.)

Nonetheless, I don't and didn't view abuse as acceptable, nor did I intend to tolerate serious harms. Parts of Jay's account of the meeting with me are inaccurate (differ from what I'm really pretty sure I remember, and also from what Robert and his husband said when I asked them for their separate recollections). (From your perspective, I could be lying, in coordination with Robert and his husband who also remember what I remember. But I'll say my piece anyhow. And I don't have much of a reputation for lying.) If you want details on how the me/Robert/Jay interaction went as far as I can remember, they're discussed in a closed FB group with ~130 members that you might be able to join if you ask the mods; I can also paste them in here I guess, although it's rather personal/detailed stuff about Robert to have on the full-on public googleable internet so maybe I'll ask his thoughts/preferences first, or I'm interested in others' thoughts on how the etiquette of this sort of thing ought to go. Or could PM them or something, but then you skip the "group getting to discuss it" part. We at CFAR brought Julia Wise into the discussion last time (not after the original me/Robert/Jay conversation, but after Jay's later allegations plus Somni's made it apparent that there was something more serious here), because we figured she was trustworthy and had a decent track record at spotting this kind of thing.

Replies from: farp
comment by farp · 2021-10-17T20:25:11.895Z · LW(p) · GW(p)

Which thing are you claiming here?

I'm claiming that CFAR representatives did in fact notice bad things happening, and that the continuation of bad things happening was not for lack of noticing. I think that you are pretty familiar with this view.

I don't wish to be and don't think I should be one of the main people trying to safeguard victims' rights; I don't think I have needed eyes/skill for it. (Separately, I am not putting in the time and effort required to safeguard a community of many hundreds,

I want to point out what is in my mind a clear difference between taking on a major role as a safeguard, and failing people who trust you when the accused confesses to you. You can dispute whether that happened, but it's not as though I am asking you to be held liable for all harms.

I can also paste them in here I guess, although it's rather personal/detailed stuff about Robert to have on the full-on public googleable internet so maybe I'll ask his thoughts/preferences first

If you think this guy raped people (with 80% credence or whatever) then you should probably warn people about him (in a public googleable way). If you don't think so then you can just say so. Basically, it seems like your willingness to publish this stuff should mostly just depend on how harmful you think this person was. 

I'm personally not aware of anything you did with respect to Robert that demonstrates intolerance for serious harms. Allowing somebody to continue to be an organizer for something after they confess to rape qualifies as tolerance of serious harms to me.


Of course my comment here seems litigious -- I am not really trying to litigate. 

In very plain terms: It has been alleged that CFAR leadership knew that Brent and Robert were committing serious harms and at the very least tolerated it. I take these allegations seriously. Anyone who takes these allegations seriously would obviously be troubled by it being taken for granted that community leaders do not even notice harms taking place.

Replies from: Duncan_Sabien
comment by Duncan_Sabien · 2021-10-18T01:23:42.194Z · LW(p) · GW(p)

"It has been alleged" strikes me as not meeting the bar that LW should strive to clear, when dealing with such high stakes, with this much uncertainty.

Allegations come with an alleger attached.  If that alleger is someone else (i.e. if you don't want to tie your own credibility to their account) then it's good to just ... link straight to the source.

If that alleger is you (including if you're repeating someone else's allegations because you found them credible enough that you're adopting them, and repeating them on your own authority), you should be able to state them directly and concretely.

"It has been alleged" is a vague, passive-voice, miasmatic phrase that is really difficult to work with, or think clearly around.

It also implies that these allegations have not been, or cannot be, settled, as questions of fact, or at least probability.  It perpetuates a sort of un-pin-downable quality, because as long as the allegations are mist and fog, just floating around absent anyone who's taking ownership of them, they can't be conclusively settled or affirmed, and can be repeated forever.

I think it's pretty bad to lean into a dynamic like that.

In very plain terms: it is the explicit and publicly stated position of CFAR leadership that they were unaware of Brent's abuses, and that as soon as they became aware of them, they took quick and final action.

In that very statement, you can also find CFAR's mea culpas re: places where CFAR feels it should have become aware, prior to the moment it did become aware.  CFAR does not claim that it did a good job with Brent.  CFAR explicitly acknowledges pretty serious failures.

No one is asking anyone to take for granted that community leaders either [always see], or [never wrongly ignore], harms.  That was a strawman.  Obviously it is a valid hypothesis that community leaders can fail to see harms, or fail in their response to them.  You can tell it's a valid hypothesis because CFAR is an existence proof of community leaders outright admitting to just such a mistake.

It seems to me that Anna is trying pretty hard, in her above reply, to be open, and legible, and give-as-much-as-she-can without doing harm, herself.  I read in Anna's reply something analogous to the CFAR Brent statement: that, with hindsight, she wishes she had done some things differently, and paid more attention to some concerning signals, but that she did not suppress information, or ignore or downplay evidence of harm once it came clearly to her attention (I say "evidence of harm" rather than "harm" because it's important to be clear about my epistemic status with regards to this question, which is that I have no idea).  

I furthermore see in Anna's comment evidence that there are non-CFAR-leadership people looking at the situation, and taking action, albeit in a venue that you and I cannot see.  It doesn't sound like anything is being ignored or suppressed.

So insofar as "things that have been alleged" are concerned, I think it boils down to something like:

Either one believes CFAR (in the Brent case) or Anna (above), or one explicitly registers (whether publicly or privately) a claim that they're lying, or somehow blind or incompetent to a degree tantamount to lying.

Which is a valid hypothesis to hold, to be clear.  Right now the whole point of the broader discussion is "are these groups and individuals good or bad, and in what ways?"  It's certainly reasonable to think "I do not believe them."

But that's different from "it has been alleged," and the implication that no response has been given.  To the allegation that CFAR leadership ignored Brent, there's a clear, on-the-record answer from CFAR.  To the allegation that CFAR leadership ignored Robert or other similar situations, there's a clear, on-the-record answer from Anna above (that, yes, is not fully forthright, but that's because there are other groups already involved in trying to answer these questions and Anna is trying not to violate those conversations nor the involved parties' privacy).

I think that you might very well have further legitimate beef, à la your statement that "I'm claiming that CFAR representatives did in fact notice bad things happening, and that the continuation of bad things happening was not for lack of noticing."

But I think we're at a point where it's important to be very clear, and to own one's accusations clearly (or, if one is not willing to own them clearly, because e.g. one is pursuing them privately, to not leave powerful insinuations in places where they're very difficult to responsibly answer).

The answer, in both cases given above, seems to me to be, unambiguously:

"No, we did not knowingly tolerate harm."

If you believe CFAR and/or Anna are lying, then please proceed with that claim, whether publicly or privately.

If you believe CFAR and/or Anna are confused or incompetent, then please proceed with that claim, whether publicly or privately.

But please ... actually proceed? Like, start assembling facts, and presenting them here, or presenting them to some trusted third-party arbiter, or whatever.  In particular, please do not imply that no answer to the allegations has been given (passive voice).  I don't think that repeating sourceless substanceless claims—

(especially in the Brent case, where all of the facts are in common knowledge and none of them are in dispute at this point)

—after Anna's already fairly in-depth and doing-its-best-to-cooperate reply, is doing good for anybody in either branch of possibility.  It feels like election conspiracy theorists just repeating their allegations for the sake of the power the repetition provides, and never actually getting around to making a legible case.

EDIT: For the record, I was a CFAR employee from 2015 to 2018, and left (for entirely unrelated reasons) right around the same time that the Brent stuff was being resolved.  The linked document was in part written with my input, and sufficiently speaks for me on the topic.

comment by Viliam · 2021-10-17T12:10:13.511Z · LW(p) · GW(p)

I think it matters a lot whether this is true, and there is widely known evidence that it isn't true.

If that's so, then it's very bad, and I feel like some people should receive a wake-up slap. I live on the opposite side of the planet, and I usually only learn about things after they have already exploded. Sometimes I wonder if anything would be different if I lived where most of the action happens. Generally, it seems like they should import some adults into the Bay Area.

As far as I know, in the Vienna community we do not tolerate this type of behavior. (Anyone feel free to correct me if I am wrong, publicly or privately at your choice.)

comment by ChristianKl · 2021-10-15T14:57:31.217Z · LW(p) · GW(p)

It seems to me that quality control has always been an issue with some groups, no matter how many groups there were.

Replies from: ozziegooen
comment by ozziegooen · 2021-10-16T15:30:39.268Z · LW(p) · GW(p)

Agreed, though I think that the existence of many groups makes it a more obvious problem, and a more complicated problem.

comment by Evan_Gaensbauer · 2021-10-14T23:11:30.386Z · LW(p) · GW(p)

I dipped my toe into openly commenting last week [LW(p) · GW(p)], and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post".

Leverage Research hosted a virtual open house and AMA a couple weeks ago for their relaunch as a new kind of organization that has been percolating for the last couple years. I attended. One subject Geoff and I talked about was the debacle that was the article in The New York Times (NYT) on Scott Alexander from several months ago. I expressed my opinion that:

  1. Scott Alexander could have managed his online presence much better than he did, on and off, for a number of years.
  2. Scott Alexander and the rationality community in general could have handled the situation much better than they did.
  3. Those are parts of this whole affair that too few in the rationality community have been willing to face, acknowledge, or discuss in terms of what can be learned from the mistakes made.
  4. Nonetheless, NYT was the instigating party in whatever part of the situation constituted a conflict between NYT and Scott Alexander and his supporters, and NYT is the party that should be held more accountable, and is more blameworthy, if anyone wants to make it about blame.

Geoff nodded, mostly in agreement, and shared his own perspective on the matter that I won't share. Yet if Geoff considers NYT to have done one or more things wrong in that case, then by the same standard he should consider it wrong to de-anonymize his own critics.

You yourself, Ryan, never made the mistake of posting your comments online in a way that might make it easier for someone else to de-anonymize you. If you made any mistake, it's that you didn't anticipate how adeptly Geoff would apparently infer or discern your identity. I expect it wouldn't have been hard for Geoff to figure out it was you, because you shared information about internal activities at Leverage Research that only a small number of people would have had access to.

Yet that's not something you should have had to anticipate. A presumption of good faith in a community or organization entails a common assumption that nobody would do that to their peers. Whatever Geoff himself has been thinking about you as the author of those posts, he understands that to de-anonymize you, or whoever the author is, would be considered a serious violation of a commonly respected norm.

Based on how you wrote your comment, it seems that the email you received may have come across as intimidating. Obviously I don't expect you to disclose anything else about it, and would respect and understand if you don't, but it seems the email may have been meant as a well-intended warning. If so, there is also a chance Geoff had discerned that you were the account-holder for 'throwaway' (at least at the time of the posts in question) but hasn't even considered the possibility of de-anonymizing you beyond a private setting. Yet either way, Geoff has begun responding in a way that, if acted upon further, would only become more disrespectful to you, your privacy, and your anonymity.

Of course, if it's not already obvious, I am not someone who has an impersonal relationship with Leverage Research as an organization. I'm writing this comment with the anticipation that Geoff may read it himself and may not be comfortable with what I've disclosed above. Yet what I've shared was not from a particularly private conversation; it was from an AMA Leverage Research hosted that was open to the public. As I've explained above, I could have disclosed more in this comment, like what Geoff himself personally said, but I haven't. I mention that to show that I am trying to come at this with good faith toward Geoff as well.

During the Leverage AMA, I also asked a question that Geoff called the kind of 'hard-hitting journalistic' question he wanted more people to have asked. If that's something he respected during the AMA, I expect this comment is one he would be willing to accept being in public as well. 

Replies from: Viliam
comment by Viliam · 2021-10-15T17:24:45.776Z · LW(p) · GW(p)

Based on how you wrote your comment, it seems that the email you received may have come across as intimidating.

I think the important information here is how did Geoff / Leverage Research handle similar criticism in the past. (I have no idea. I assume both you and Ryan probably know more about this.) As they say, past behavior is the best predictor of future behavior. The wording of the e-mail is not so important.

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2021-10-15T20:18:00.189Z · LW(p) · GW(p)

I previously wasn't as aware that so many people have experienced this pattern in how Geoff and Leverage respond to criticism. 

comment by Beth Barnes (beth-barnes) · 2021-10-13T17:39:35.948Z · LW(p) · GW(p)

Many of these things seem broadly congruent with my experiences at Pareto, although significantly more extreme. Especially: ideas about psychology being arbitrarily changeable, Leverage having the most powerful psychology/self-improvement tools, Leverage being approximately the only place you could make real progress, extreme focus on introspection and other techniques to 'resolve issues in your psyche', (one participant's 'research project' involved introspecting about how they changed their mind for 2 months) and general weird dynamics (e.g. instructors sleeping with fellows; Geoff doing lectures or meeting individually with participants in a way that felt very loaded with attempts to persuade and rhetorical tricks), and paranoia (for example: participants being concerned that the things they said during charting/debugging would be used to blackmail or manipulate them; or suspecting that the private slack channels for each participant involved discussion of how useful the participants were in various ways and how to 'make use of them' in future). On the other hand, I didn't see any of the demons/objects/occult stuff, although I think people were excited about 'energy healers'/'body work', not actually believing that there was any 'energy' going on, but thinking that something interesting in the realm of psychology/sociology was going on there. Also, I benefitted from the program in many ways, many of the techniques/attitudes were very useful, and the instructors generally seemed genuinely altruistic and interested in helping fellows learn.

comment by Unreal · 2021-10-18T01:54:29.432Z · LW(p) · GW(p)

Edit: (One person reading this reports below that this made them more reluctant to come forward with their story, and so that seems bad to me. I have mentally updated as a result. More relevant discussion below.) 

I notice that there's not that much information public about what Geoff actually Did and Did Not Do. Or what he instigated and what he did not. Or what he intended or what he did not intend. 

Um, I would like more direct evidence of what he actually did and did not do. This is cruxy for me in terms of what should happen next. 

Right now, based just on the Medium post, one plausible take is that the people in Geoff's immediate circle may have been taking advantage of their relative power in the hierarchy to abuse the people under them. 

See this example from Zoe:

A few weeks after this big success, this person told me my funding was in question — they had done all they could do to train me and thought I might be too blocked to sufficiently progress into a Master on the project. They and Geoff were questioning my commitment to and understanding of the project, and they had concerns about my debugging trajectory.

"They and Geoff" makes it sound like Zoe's supervisor basically name-dropped Geoff as a way to add weight to a scare tactic. Like "better watch out cuz the boss thinks you're not committed enough..." But it's not really clear what the boss actually said or did not say... This supervisor might just be using a move. (I welcome additional clarity.) 

The most directly 'damning' thing, as far as I can tell, is Geoff pressuring people to sign NDAs. 

A lot of the other stuff seems like it's due to the people around Geoff elevating him to an unreasonable pedestal and treating him like a savior. Maybe Geoff should have done more to stop this from escalating / done more to make people chill out about him and his supposed specialness. But him failing to control his flock is a different failure from him feeding them lies or requiring worship. I'm not seeing any statements about this. I welcome more information and clarity. 

I am wanting clarity here because I am very aware of people's strong desire for a [cult] leader. It can be pretty severe. And this is very much a co-participation between leaders and followers. 

I know what it's like from the inside to want someone to be my cult leader, god, or parent figure. And I have low tolerance for narratives that try to take my personal agency away from me—that claim I was a victim of mind control or whatever, rather than someone who bottom-level gave up my power to them. 

Even if I didn't consciously give away my power and it just sort of happened, I think it's still wrong to write a narrative where I merely blame the other person and absolve myself of all responsibility or agency. This sounds unhealthy to hold onto, as a story. 

I'm def not trying to absolve Geoff (or anyone) of responsibility, accountability, or agency. But also ew scapegoating is gross? 

My main desire is for more information, or for people to realize that we might not be meeting relevant cruxes for how to move forward, and that we should continue to investigate and hold off on taking heavy actions. 

Replies from: Unreal
comment by Unreal · 2021-10-18T13:55:09.140Z · LW(p) · GW(p)

Another thing I want to mentally watch out for: 

It might be tempting for some ex-Leverage people to use Geoff as the primary scapegoat rather than implicating themselves fully. So as more stories come out, I plan to be somewhat delicate with the evidence. The temptation to scapegoat a leader is pretty high and may even seem justifiable in an "ends justify the means" kind of thinking. 

I don't seem to personally be OK with using misleading information or lies to bolster a case against a person, even if this ends up "saving" a lot of people. (I don't think it actually saves them... people should come to grips with their own errors, not hide behind a fallback person.) 

So... Leverage, I'm looking at you as a whole community! You're not helpless peons of Geoff Anders. 

When spiritual gurus go out of control, it's not a one-man operation; there are collaborators, enablers, people who hid information, yes-men and sycophants, those too afraid to do the right thing or speak out against wrongdoing, those too protective of the personal benefits they may be receiving (status, friends, food, housing), etc. 

There's stages of 'coming to terms' with something difficult. And a very basic model would be like 

  1. Defensive / protective stage. I am still blended and identified with a problematic pattern or culture, so I defend it. It feels like my own being is at stake or on the line. It's hard to see what's true, and I am partially in denial or in dissociation—although I myself may not realize it. 
  2. Mitosis stage. I am in the process of a painful identity-level separation from the pattern or culture. I start feeling anger towards it, some grief, horror, etc. It's likely I feel victimized. For the sake of gaining clarity, a victim narrative is more helpful than the previous narrative of "the thing is actually good though" or whatever fog of denial I was in. 
  3. Grief stage. Even more open and full realization of what happened and its problematic nature. Realizing my own personal part in it and the extent to which my actions were my own and also contributed to harm. This can be a very difficult stage, and may come with shame, guilt, remorse, and immense sadness. 
  4. Letting go and integration stage. Happy relief comes when all the disparate parts are integrated and all is forgiven. I hold myself to a new, higher standard, and I hold others also to a higher standard. I feel good about where I stand now, with more clarity and compassion. I see clearly what the mistakes were and how to avoid them. I can guide or warn others from making similar mistakes. There's no emotional or trauma residue left in me. My capacity has expanded to hold more complexity and diversity. I am more accepting of the past, can see from many perspectives, and ready to live fully present. 

Stage 2 is a dangerous stage, and it is one I have been in, and where I was most volatile, angry, and likely to cause damage. Kind of wanting more common knowledge about this as a Thing so that we are collectively aware that damage is best minimized. Although I imagine disagreements with this. 

Replies from: Spiracular, Spiracular, Spiracular, weft
comment by Spiracular · 2021-10-18T14:37:26.305Z · LW(p) · GW(p)

I basically agree with this.

But also, I think pretty close to ZERO people who were deeply affected (aside from Zoe, who hasn't engaged beyond the post) have come forward in this thread. And I... guess we should talk about that.

I know firsthand that there were some pretty bad experiences in the incident that tore Leverage 1.0 apart, which nobody appears to feel able to talk about.

I am currently not at all optimistic that we're managing to balance this correctly? I also want this to go right. I'm not quite sure how to do it.

Replies from: Unreal
comment by Unreal · 2021-10-18T15:44:31.991Z · LW(p) · GW(p)

That's pretty fair. I am open to taking down this comment, or other comments I've made. (Not deleting them forever, I'll save them offline or something.) Your feedback is helpful here and revealing to me, and I feel myself updating because of it. 

I have commented somewhere else that I do not like LessWrong for this discussion... because a) it seems a bad venue for justice to be served, b) it strips out a bunch of context that I personally think is super relevant (including the emotional and physical layers), and c) LW is absolutely not a place designed for healing or reconciliation... and it also seems only 'okay' for sense-making as a community. It is maybe better for sense-making at the individual intellectual level. So... I guess LW isn't my favorite place for this discussion to be happening... I wonder what you think. 

Replies from: Unreal, elityre, Spiracular, Spiracular
comment by Unreal · 2021-10-18T15:46:32.618Z · LW(p) · GW(p)

(Separately) I care about folks from Leverage. I am very fond of the ones I've met. Zoe charted me once, and I feel fondly about that. I've been charted a number of times at Leverage, and it was good, and I personally love CT charting / Belief Reporting and use, reference, and teach it to others to this day. Although it's my own version now. I went to a Paradigm workshop once, as well as several parties or gatherings. 

My felt sense of my time at the workshop (especially during more casual hang-out-y parts of it) is like a sense of sad distance... like, oh I would like to be friends with these people... but mentally / emotionally they seem "hard to access." 

I'm feeling compassion towards the ones who have suffered and are suffering. I don't need to be personal friends with anyone, but ... if there's a way I can be of service, I am interested. 

Open and free invitation: If anyone involved in the Leverage stuff in some way wants someone to hold space for you as you process things, I am open to offer that, over Zoom, in a confidential manner. (I am not very involved in the community normally, as I am committed to being at the Monastic Academy in Vermont for a long while, and I don't engage in divisive / gossipy speech. It is wrong speech :P) Cat would probably vouch for me. But basically uhh, even if what you want to say would normally be totally crazy to most rationalists or even most Westerners, I have ventured so far outside the Overton window that I doubt I'll be taken aback. If that helps. :P 

You can FB msg me or gmail me (unrealeel). 

comment by Eli Tyre (elityre) · 2021-10-19T00:31:02.465Z · LW(p) · GW(p)

Do you have a suggestion for another forum that you think would be better? 

In particular, do you have pointers to online forums that do incorporate the emotional and physical layers ("in a non-toxic way", he adds, thinking of Twitter)? Or do you think that the best way to do this is just not online at all?

Replies from: Unreal
comment by Unreal · 2021-10-19T02:25:33.671Z · LW(p) · GW(p)

CFAR's recent staff reunion seemed to do all right. It wasn't, like, optimized for safety or making sure everyone was heard equally or something like that, but such features could be added if desired. Having skilled third-party facilitators seemed good. 

Oh you said 'online'. Uhhh. 

Online fishbowl Double Cruxes would get us like ... 30% of the way there maybe? Private / invite only ones? 

One could run an online Group Process like thing too. Invite a group of people into a Zoom call, and facilitate certain breakout sessions? Ideally with facilitation in each breakout group? 

I am not thinking very hard about it. 

We need a lot of skill points in the community to make such things go well. I'm not sure how many skill points we're at. 

comment by Spiracular · 2021-10-18T17:05:08.195Z · LW(p) · GW(p)

Since it's mostly just pointers to stuff I've already said/implied... I'll throw out a quick comment.

I would like it if somebody started something like a carefully-moderated private Facebook group, mostly of core people who were there, to come to grips with their experiences? I think this could be good.

I am slightly concerned that people who are still in the grips of "Leverage PR campaigning" tendencies, will start trying to take it over or otherwise poison the well? (Edit: Or conversely, that people who still feel really hurt or confused about it might lash out more than I'd wish. I personally, am more worried about the former.) I still think it might be good, overall.

Be sure to be clear EARLY about who you are inviting, and who you are excluding! It changes what people are willing to talk about.

...I am not personally the right person to do this, though.

(It is too easy to "other" me, if that makes sense.)


I feel like one of the only things the public LW thread could do here?

Is ensuring public awareness of some of the unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms, and showing a public ramp-down of opportunities to do so in the future.

Along with doing what we can, to signal that we generally stand against people over-simplistically demonizing the people and organizations involved in this.

Replies from: Unreal
comment by Unreal · 2021-10-18T18:58:01.848Z · LW(p) · GW(p)

... unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms ... 

Hmm. This seems worth highlighting. 

The NDAs (plus pressure to sign) point to this. 

... 

( The rest of this might be triggering to anyone who's been through gaslighting / culty experiences. Blunt descriptions of certain forms of control and subjugation. ) 

...

The rest of the truth-suppressive measures I can only speculate. Here's a list of possible speculative mechanisms that come to mind, some of which were corroborated by Zoe's report but not all:

  • Group hazing or activities that cause collective shame, making certain things hard to admit to oneself and others (plus, inserting a bucket error where 'shameful activity' is bucketed with 'the whole project' or something)
    • This could include implanting group delusions that are shameful to admit. 
  • Threats to one's physical person or loved ones for revealing things
  • Threats to one's reputation or ability to acquire resources for revealing things
  • Deprivation used to negatively / positively reinforce certain behaviors or stories ("well, if you keep talking like that, we're gonna have to take your phone / food / place to sleep") 
  • Gaslighting specific individuals or subgroups ("what you're experiencing is in your own head; look at other people, they are doing fine, stop being crazy / stop ruining the vibe / stop blocking the project")
    • A lot of things could fit into this category. 
  • Causing dissociation. (Thus disconnecting a person from their true yes/no or making it harder for them to discern truth from fiction.) This is very common among modern humans, though, and doesn't seem as evil-sounding as the other examples. Modern humans are already very dissociated afaict. 
    • It would become more evil if it was intentionally exploited or amplified.
    • Dissociation could be generalized or selective. Selective seems more problematic because it could be harder to detect. 
  • Pretending there is common knowledge or an obvious norm around what should be private / confidential, when there is not. (There is some of this going around rationalist spaces already.) "Don't talk about X behind their back, that's inappropriate." or "That's their private business, stay out of it." <-- Said in situations where it's not actually inappropriate or when claims of it being someone's 'private business' is overreaching.
  • Deliberately introducing and enforcing a norm of privacy or confidentiality that breaks certain normal and healthy social accountability structures. (Compassionate gossip is healthy in groups, especially those living in residential community. Rationalists seem to not get this though and tend to break Chesterton's fence on this, but I attribute this to hubris. It seems worse to me if these norms are introduced out of self-serving fear.)
  • Sexual harassment, molestation, or assault. (This tends to result in silencing pretty effectively.) 
  • Creating internal jockeying, using an artificial scarcity around status or other resources. A culture of oneupmanship. A culture of having to play 'loyal'. People getting way too sucked into this game and having their motives hijacked. They internally align themselves with the interests of certain leaders or the group, leading to secrecy being part of their internal motivation system.
  • This one is really speculative, but if I imagine buying into the story that Geoff is like, a superintelligence basically, and can somehow predict my own thoughts and moves before I can, then ... maybe I get paranoid about even having thoughts that go against (my projection of) his goals. 
    • Basically, if I thought someone could legit read my mind and they were not-compassionate or if I thought that they could strategically outmaneuver me at every turn due to their overwhelming advantage, that might cause some fucked up stuff in my head that stays in there for a while. 
    • If this resonates with you, I am very sorry. 

I welcome additions to this list. 

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-18T22:01:49.485Z · LW(p) · GW(p)
  • "You can't rely on your perspective / Everything is up for grabs." All of your mental content--ideas, concepts, motions, etc.--are potentially good (and should be leaned more heavily on, overriding others) / bad (and should be ignored / downvoted / routed around / destroyed / pushed against), and more openness to change is better, and there's no solid place from which you can stand and see things. Of course, this is in many ways true and useful; but leaning into this creates much more room for others to selectively up/downvote stuff in you to avoid you reaching conclusions they don't want you to reach; or more likely, up/downvote conclusions, and have you rearrange yourself to harmonize with those judgements.
  • Trolling Hope [LW · GW] placed in the project / leadership. Like: I care deeply that things go well in the world; the only way I concretely see that might happen, is through this project; so if this project is doomed, then there's no Hope; so I may as well bet everything on worlds where the project isn't doomed; so worlds where the project is doomed are irrelevant; so I don't see / consider / admit X if X implies that the project is doomed, since X is entirely about irrelevant worlds.
  • Emotional reward conditioning. (This one is simple or obvious, but I think it's probably actually a significant portion of many of these sorts of situations.) When you start to say information I don't like, I'm angry at you, annoyed, frustrated, dismissive, scornful, derisive, insulting, blank-faced, uninterested, condescending, disgusted, creeped out, pained, hurt, etc. When you start to hide information I don't like, or expound the opposite, I'm pleasant, endeared, happy, admiring, excited, etc. etc. Conditioning shades into + overlaps other tactics like stonewalling (blank-faced, aiming at learned helplessness), shaming, and running interference (changing the subject), but conditioning has a particular systematic effect of making you "walk on eggshells" about certain things and feeling relief / safety when you stick to appropriate narratives. And this systematic effect can be very strong and persist even when you're away from the people who put it there, if you didn't perfectly compartmentalize how-to-please-them from everything else in your mind.
comment by Spiracular · 2021-10-18T17:24:08.268Z · LW(p) · GW(p)

Meta: I think it makes some good points. I do not think it was THAT bad, and I think the discussion was good. I would keep it up, but it's your call. Possibly adding an "Edit: (further complicated thoughts)" at the top? (Respect for thinking about it, though.)

comment by Spiracular · 2021-10-18T14:39:23.430Z · LW(p) · GW(p)

I see what you're doing? And I really appreciate that you are doing it.

...but simultaneously? You are definitely making me feel less safe to talk about my personal shit.

(My position on this is, and has always been: "I got a scar from Leverage 1.0. I am at least somewhat triggered; on both that level, and by echoes from a past experience. I am scared that me talking about my stuff, rather than doing my best to make and hold space, will scare more centrally-affected people off. And I know that some of those people, had an even WORSE experience than I did. In what was, frankly, a surreal and really awful experience for me.")

comment by Spiracular · 2021-10-18T15:16:39.558Z · LW(p) · GW(p)

My current sense? Is that both Unreal and I are basically doing a mix of "taking an advocate role" and "using this as an opportunity to get some of what the community got wrong last time (with our own trauma) right." But for different roles, and for different traumas.

It seemed worth being explicit and calling this out. (I don't necessarily think this is bad? I also think both of us seem to have done a LOT of "processing our own shit' already, which helps.)

But doing this is... exhausting for me, all the same. I also, personally, feel like I've taken up too much space for a bit. It's starting to wear on me in ways I don't endorse.

I'm going to take a step back from this for a week, and get myself to focus on living the rest of my life. After a week, I will circle back. In fact, I COMMIT to circling back.


And honestly? I have told several people about the exact nature of my Leverage trauma. I will tell at least several more people about it, before all of this is over.

It's not going to vanish. I've already ensured that it can't. I can't quite commit to "going full public," because that might be the wrong move? But I will not rest on this until I have done something broadly equivalent.

I am a little bit scared of some sort of attempts to undermine me emerging as a consequence, because there's a trend in even the casual reports that leans in this direction? But if it happens, I will go public about THAT fact.

I am a lot less scared of the repercussions than almost anyone else would be. So, fuck it.

(But also? My experience doesn't necessarily rule out "most of the bad that happened here was a total lack of guard-rails + culty death-spirals." It would take some truly awful negligence to have that few guard-rails, and I would not want that person running a company again? But still, just fyi. Yeah, I know, I know, it undercuts the drama of my last statement.)


But if anyone wonders why I vanished? I'm taking a break. That is what I'm doing.

comment by weft · 2021-10-18T17:17:24.651Z · LW(p) · GW(p)

Multiple times on this thread I've seen you make the point about figuring out what responsibility should fall on Geoff, and what should be attributed to his underlings.

I just want to point out that it is a pattern for powerful bad actors to be VERY GOOD at never explicitly giving a command for a bad thing to happen, while still managing to get all their followers on board and doing the bad thing that they only hinted at/ set up incentive structures for, etc.

Replies from: Unreal
comment by Unreal · 2021-10-18T17:38:46.276Z · LW(p) · GW(p)

I wanted to immediately agree. Now I'm pausing...

It seems good to try to distinguish between:

  • Well-meaning but flawed leader sets up a system or culture that has blatant holes that allow abuse to happen. This was unintentional but they were careless or blind or ignorant, and this resulted in harm. (In this case, the leader should be held accountable, but there's decent hope for correction.) 
    • Of course, some of the 'flawed' thing might be shadow stuff, in which case it might be slippery and difficult to see, and the leader may have various coping mechanisms that make accountability difficult. I think this is often the case with leaders, and as far as I can tell, most leaders have shadow stuff, and it negatively impacts their groups, to varying degrees. (I'm worried about Geoff in this case because I think intelligence + competence + shadow stuff is a lot more difficult. The more intelligent and powerful you are, the longer you can keep outmaneuvering attempts to get you to see your own shadow; I've seen this kind of thing, it's bad.) 
  • The leader is not well-meaning and is deliberately exploitative in an intentional way. They created a system that was designed to exploit people systematically, and they lack care in their body or soul for the beings they hurt. They internally applaud when they come up with clever systems that avoid accountability or responsibility while gaining personal benefit. They hope they can keep this up forever. They have a deep-seated fear of failure, and they will do whatever it takes to avoid failure. (This feels more like Jeffrey Epstein.) 
    • You could try to argue that this is also 'shadow stuff', but I think the intention matters. If the leader's goal and desire was to create healthy and wholesome community and failed, this is different from the goal and plan being to exploit people. 

But anyway, point is: I am wanting discernment on this level of detail. For the sake of knowing best interventions and moves. 

I am not interested in putting blame on particular individuals. I am not interested in the epistemic question of who's more or less responsible. I am interested in group dynamics without the question of who's more or less responsible. 

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-18T22:28:42.374Z · LW(p) · GW(p)

I'm not sure about this, and I don't think you were trying to say this, but, I doubt that the two categories you gave usefully cover the space, even at this level of abstraction. Someone could be "well-meaning" in the sense of all their explicit, and even all their conscious, motives being compassionate, life-oriented, etc., while still systematically agentically cybernetically motivatedly causing and amplifying harm. I think you were getting at this in the sub-bullet-point, but the sort of person I'm describing would both meet the description "well-meaning; unintentional harm" and also this from your second bullet-point:

They created a system that was designed to exploit people systematically, and they lack care in their body or soul for the beings they hurt. They internally applaud when they come up with clever systems that avoid accountability or responsibility while gaining personal benefit. They hope they can keep this up forever. They have a deep-seated fear of failure, and they will do whatever it takes to avoid failure.

Maybe I'm just saying, I don't know what you (or I, or anyone) mean by "well-meaning": I don't know what it is to be well-meaning, and I don't know how we would know, and I don't know what predictions to make if someone is well-meaning or not. (I'm not saying it's not a thing, it's very clearly a thing; it's just that I want to develop our concepts more, because at least my concepts are pushed past the breaking point in abusive situations.) For example, someone might both (1) have never once consciously explicitly worked out any strategy or design to make it easier to harm people, and (2) across contexts, take actions that reliably develop/assemble a social field where people are being systematically harmed, and not update on information about how to not do that.

Maybe it would help to distinguish "categories of essence" from "categories of treatment". Like, if someone is so drowning in their shadow that they reliably, proactively, systematically harm people, then a category-of-essence question is like, "in principle is there information that could update them to stop doing this", and a category-of-treatment question is like, "regardless of what they really are, we are going to treat them exactly like we'd treat a conscious, malevolent, deliberate exploiter".

Replies from: Unreal
comment by Unreal · 2021-10-19T02:44:30.339Z · LW(p) · GW(p)

I appreciate the added discernment here. This is definitely the kind of conversation I'd like to be having! 

someone might both (1) have never once consciously explicitly worked out any strategy or design to make it easier to harm people, and (2) across contexts, take actions that reliably develop/assemble a social field where people are being systematically harmed, and not update on information about how to not do that.

Agree. I was including that in 'shadow stuff'. 

The main difference between well-meaning and not, I think for me, is that the well-meaning person is willing to start engaging in conversations or experimenting with new systems in order to lessen the problems. Even though it's in their shadow and they cannot see it and it might take a lot to convince them, after some time period (which could be years!), they are game enough to start making changes, trying to see it, etc. 

I believe Anna S is an example of such a well-meaning person, but also I think it took her a pretty long time to come to grips with the patterns? I think she's still in the process of discerning it? But this seems normal. Normal human level thing. Not sociopathic Epstein thing. 

More controversially perhaps, I think Brent Dill has the potential to see and eat his shadow (cuz I think he actually cares about people and I've seen his compassion), but as you put it, he is "so drowning in his shadow that he reliably, systematically harms people." And I actually think it's the compassionate thing to do to prevent him from harming more people. 

So where does Geoff fall here? I am still in that inquiry. 

comment by AnnaSalamon · 2021-10-13T03:25:15.689Z · LW(p) · GW(p)

More thoughts:

I really care about the conversation that’s likely to ensue here, like probably a lot of people do.

I want to speak a bit to what I hope happens, and to what I hope doesn’t happen, in that conversation. Because I think it’s gonna be a tricky one.

What I hope happens:

  • Curiosity
  • Caring,
  • Compassion,
  • Interest in understanding both the specifics of what happened at Leverage, and any general principles it might hint at about human dynamics, or human dynamics in particular kinds of groups.

What I hope doesn’t happen:

  • Distancing from uncomfortable data.
  • Using blame and politics to distance from uncomfortable data.
  • Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.

This is LessWrong; let’s show the world how curiosity/compassion/inquiry is done!

Replies from: Ruby, Spiracular, rohinmshah, farp
comment by Ruby · 2021-10-13T04:30:37.375Z · LW(p) · GW(p)

Thanks, Anna!

As a LessWrong mod, I've been sitting and thinking about how to make the conversation go well for days now and have been stuck on what exactly to say.  This intention setting is a good start.

I think to your list I would add judging each argument and piece of data on its merits, i.e., updating on evidence even if it pushes against the position we currently hold.

Phrased alternatively, I'm hoping we don't treat arguments as soldiers: accepting bad arguments because they favor our preferred conclusion, rejecting good arguments because they don't support our preferred conclusion. I think there's a risk in cases like this of knowing which side you're on and then accepting and rejecting all evidence accordingly.

comment by Spiracular · 2021-10-15T17:48:09.719Z · LW(p) · GW(p)

I hesitated a bit before saying this? I thought it might add a little bit of clarity, so I figured I'd bring it up.

(Sorry it got long; I'm still not sure what to cut.)

There are definitely some needs-conflicts. Between (often distant) people who, in the face of this, feel the need to cling to the strong reassurance that "this could not possibly happen to them"/"they will definitely be protected from this," and would feel reassured at seeing Strong Condemning Action as soon as possible...

...and "the people who had this happen." Who might be best-served, if they absorbed that there is always some risk of this sort of shit happening to people. For them, it would probably be best if they felt their truth was genuinely heard, and took away some actionable lessons about what to avoid, without updating their personal identity to "victim" TOO much. And, going forward, embraced connections that made them more robust against attaching to this sort of thing.

("Victim" is just not a healthy personal identity in the long-term, for most people.)


Sometimes, these needs are so different, that it warrants having different forums of discussion. But there is some overlap in these needs (working out what happened, improving reporting, protecting people from cultish reprisals), and I'm not sure that separation is always necessary.

My read of the direction Anna seems to be trying to steer this is "do everything she can to clearly hear out people's stories carefully First." Only later, after people have really really listened, use that to formulate carefully considered harm-reducing actions.

Understanding the issue, in all its complexity, before working on coming up with solutions? I feel pretty on-board with that.


...I admit, I initially chafed a bit? I have some memories of times Anna has leaned a bit more into the former group's needs. Some of her attempts to aim differently this time, have felt a little awkward.

I did also get an "ordering other people to ignore politics and be vulnerable" vibe off this, which put my armor up to around my ears. (Something with more of a feel of... "showing own vulnerability to elicit other's vulnerability," would have generally felt more natural to me? I think her later responses cycled to this, a little).

...but I'm starting to think that even the awkwardness, is its own sort of evidence? Of someone who is used to wielding frame control, trying to put it aside to listen. And I feel a lot of affection, in seeing it show that she's working on this.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-15T18:05:25.820Z · LW(p) · GW(p)

There's also the need to learn from what happened, so that when designing organizations in the future the same mistakes aren't repeated. 

comment by rohinmshah · 2021-10-13T22:53:15.288Z · LW(p) · GW(p)

Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.

Are you somehow guaranteeing or confidently predicting that others will not take them in a politicized way, use them as an excuse for false judgments, or otherwise cause harm to those sharing the true relevant facts? If not, why are you asking people not to refrain from sharing such facts?

(My impression is that it is sheer optimism, bordering on wishful thinking, to expect such a thing, that those who have such a fear are correct to have such a fear, and so I am confused that you are requesting it anyway.)

Replies from: AnnaSalamon, clone of saturn, TekhneMakre
comment by AnnaSalamon · 2021-10-14T03:16:37.131Z · LW(p) · GW(p)

Thanks for the clarifying question, and the push-back. To elaborate my own take: I (like you) predict that some (maybe many) will take shared facts in a politicized way, will use them as an excuse for false or uncareful judgments, etc. I am not guaranteeing, nor predicting, that this won’t occur.

I am intending to myself do inference and conversation in a way that tries to avoid these “politicized speech” patterns, even if it turns out politically costly or socially awkward for me to do so. I am intending to make some attempt (not an infinite amount of effort, but some real effort, at some real cost if needed) to try to make it easier for others to do this too, and/or to ask it of others who I think may be amenable to being asked this, and/or to help coalesce a conversation in what I take to be a better pattern if I can figure out how to do so. I also predict, independently of my own efforts, that a nontrivial number of others will be trying this.

If “reputation management” is a person’s main goal, then the small- to medium-sized efforts I can hope to contribute toward a better conversation, plus the efforts I’m confident in predicting independently of mine, would be insufficient to mean that a person’s goals would be well-served in the short run by following my request to avoid “refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.”

However, I’m pretty sure most people in this ecosystem, maybe everyone, would deep down like to figure out how to actually see what kinds of fucked up individual and group and inter-group dynamics we’ve variously gotten into, and why and how, so that we can have a realer shot at things going forward. And I'm pretty sure we want this (in the deeper, long-run sense) much more than we want short-run reputation management. Separately, I suspect most people’s reputation-management will be kinda fucked in the long run if we don’t figure out how to make actual progress on the world (vs creating local illusions of the same), although this last sentence is more controversial and less obvious. So, yeah, I’m asking people to try to engage in real conversation with me and others even though it’ll probably mess up parts of their/our reputation in the short run, and even though probably many won't manage to join this in the short run. And I suspect this effort will be good for many people’s deeper goals despite the political dynamics you mention.

Here’s to trying.

Replies from: rohinmshah
comment by rohinmshah · 2021-10-14T08:00:48.519Z · LW(p) · GW(p)

It sounds like you are predicting that the people who are sharing true relevant facts have values such that the long-term benefits to the group overall outweigh the short-term costs to them. In particular, it's a prediction about their values (alongside a prediction of what the short-term and long-term effects are).

I'll just note that, on my view of the short-term and long-term effects, it seems pretty unclear whether by my values I should share additional true relevant information, and I lean towards it being negative. Of course, you have way more private info than me, so perhaps you just know a lot more about the short-term and long-term effects.

I'm also not a fan of requests that presume that the listener is altruistic, and willing to accept personal harm for group benefit. I'm not sure if that's what you meant -- maybe you think in the long term sharing of additional facts would help them personally, not just help the group.

Fwiw I don't have any particularly relevant facts. I once tagged along with a friend to a party that I later (i.e. during or after the party) found out was at Leverage. I've occasionally talked briefly with people who worked at Leverage / Paradigm / Reserve. I might have once stopped by a poster they had at EA Global. I don't think there have been other direct interactions with them.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-14T09:00:21.678Z · LW(p) · GW(p)

I'm also not a fan of requests that presume that the listener ...

From my POV, requests, and statements of what I hope for, aren't advice. I think they don't presume that the listener will want to do them or will be advantaged by them, or anything much else about the listener except that it's okay to communicate my request/hope to them. My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people, and I guess I'm also assuming that my words won't be assumed to be a trustworthy voice of authorities that know where the person's own interests are, or something. That I can be just some person, hoping and talking and expecting to be evaluated by equals.

Is it that you think these assumptions of mine are importantly false, such that I should try to follow some other communication norm, where I more try to only advocate for things that will turn out to be in the other party's interests, or to carefully disclaim if I'm not sure what'll be in their interests? That sounds tricky; I'm not peoples' parents and they shouldn't trust me to do that, and I'm afraid that if I try to talk that way I'll make it even more confusing for anyone who starts out confused like that.

I think I'm missing part of where you're coming from in terms of what good norms are around requests, or else I disagree about those norms.

you have way more private info than me, so perhaps...

I don't have that much relevant-info-that-hasn't-been-shared, and am mostly not trying to rely on it in whatever arguments I'm making here. Trying to converse more transparently, rather.

Replies from: rohinmshah, TekhneMakre
comment by rohinmshah · 2021-10-14T11:47:48.202Z · LW(p) · GW(p)

I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people,

I feel like this assumption seems false. I do predict that (at least in the world where we didn't have this discussion) your statement would create a social expectation for the people to report true, relevant facts, and that this social expectation would in fact move people in the direction of reporting true, relevant facts.

I immediately made the inference myself on reading your comment. There was no choice in the matter, no execution of a deliberate strategy on my part, just an inference that Anna wants people to give the facts, and doesn't think that fear of reprisal is particularly important to care about. Well, probably, it's hard to remember exactly what I thought, but I think it was something like this. I then thought about why this might be, and how I might have misunderstood. In hindsight, the explanation you gave above should have occurred to me, that is the sort of thing that people who speak literally would do, but it did not.

I think there are lots of LWers who, like me, make these sorts of inferences automatically. (And I note that these kinds of inferences are excellent for believing true things about people outside of LW.) I think this is especially true of people in the same reference class as Zoe, and that such people will feel particularly pressured by it. (There are a sadly large number of people in this community who have a lot of self-doubt / not much self-esteem, and are especially likely to take other people's opinions seriously, and as a reason for them to change their behavior even if they don't understand why.) This applies to both facts that are politically-pro-Leverage and facts that are politically-anti-Leverage.

So overall, yes, I think your words would lead people to infer that it would be better for them to report true relevant facts and that any fear they have is somehow misplaced, and to be pressured by that inference into actually doing so, i.e. it coerces them.

I don't have a candidate alternative norm. (I generally don't think very much about norms, and if I made one up now I'm sure it would be bad.) But if I wanted to convey something similar in this particular situation, I would have said something like "I would love to know additional true relevant facts, but I recognize there is a risk that others will take them in a politicized way, or will use them as an excuse to falsely judge you, so please only do this if you think the benefits are worth it".

(Possibly this is missing something you wanted to convey, e.g. you wish that the community were such that people didn't have to fear political judgment?)

(I also agree with TekhneMakre's response about authority.)

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2021-10-15T03:05:09.728Z · LW(p) · GW(p)

Those making requests for others to come forward with facts in the interest of a long(er)-term common good could find norms that serve as assurance or insurance that someone will be protected against potential retaliation against their own reputation. I can't claim to know much about setting up effective norms for defending whistleblowers though.

comment by TekhneMakre · 2021-10-14T09:57:53.176Z · LW(p) · GW(p)
My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people, and I guess I'm also assuming that my words won't be assumed to be a trustworthy voice of authorities that know where the person's own interests are, or something.
these assumptions of mine are importantly false

If someone takes you as an authority, then they're likely to take your wishes as commands. Imagine a CEO saying to her employees, "What I hope happens: ... What I hope doesn't happen: ...", and the (vocative/imperative mood) "Let's show the world...". That's only your responsibility insofar as you're somehow collaborating with them to have them take you as an authority; but it could happen without your collaboration.

such that I should try to follow some other communication norm

IMO no, but you could, say, ask LW to make a "comment signature" feature, and then have every comment you make link, in small font, to the comment you just made.

comment by clone of saturn · 2021-10-14T18:17:45.734Z · LW(p) · GW(p)

I read Anna's request as an attempt to create a self-fulfilling prophecy. It's much easier to bully a few individuals than a large crowd.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-14T18:57:38.640Z · LW(p) · GW(p)

Yeah, I also read Anna as trying to create/strengthen local norms to the effect of 'whistleblowers, truth-tellers, and people-saying-the-emperor-has-no-clothes are good community members and to-be-rewarded/protected'. That doesn't make reprisals impossible, but I appreciated the push (as I interpreted it).

I also interpreted Anna as leading by example to some degree -- a lot of orgs wouldn't have their president join a public conversation like this, given the reputational risks. If I felt like Anna was taking on zero risk but was asking others to take on lots of risk, I may have felt differently.

Saying this publicly also (in my mind) creates some accountability for Anna to follow through. Community leaders who advocate value X and then go back on their word are in much more hot water than ones who quietly watch bad things happen.

E.g., suppose this were happening on the EA Forum. People might assume by default that CEA or whoever is opposed to candor about this topic, because they're worried hashing things out in public could damage the EA-brand (or whatever). This creates a default pressure against open and honest truth-seeking. Jumping in to say 'no, actually, having this conversation here is good, and it seems good to try to make it as real as we can' can relieve a lot of that perceived pressure, even if it's not a complete solution. I perceive Anna as trying to push in that direction on a bunch of recent threads (e.g., here [LW(p) · GW(p)]).

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-14T19:07:59.141Z · LW(p) · GW(p)

I'm not sure what I think of Rohin's interpretation. My initial gut feeling is that it's asking too much social ownership of the micro, or asking community leaders to baby the community too much, or spend too much time carefully editing their comments to address all possible errors (with the inevitable result that community leaders say very little and the things they say are more dead and safe).

It's not that I particularly object to the proposed rephrasings, more just that I have a gut-level sense that this is in a reference class of a thousand other similarly-small ways community leaders can accidentally slightly nudge folks in the wrong direction. In this particular case, I'd rather expect a little more from the community, rather than put this specific onus on Anna.

I agree there's an empirical question of how socially risky it actually is to e.g. share negative stuff about Leverage in this thread. I'm all in favor of a thread to try to evaluate that question (which could also switch to PMs as needed if some people don't feel safe participating), and I see the argument for trying to do that first, since resolving that could make it easier to discuss everything else. I just think people here are smart and independent enough to not be 'coerced' by Anna if she doesn't open the conversation with a bunch of 'you might suffer reprisals' warnings (which does have a bit of a self-fulfilling-prophecy ring to it, though I think there are skillful ways to pull it off).

Replies from: rohinmshah
comment by rohinmshah · 2021-10-14T23:41:58.367Z · LW(p) · GW(p)

You're reading too much into my response. I didn't claim that Anna should have this extra onus. I made an incorrect inference, was confused, asked for clarification, was still confused by the first response (honestly I'm still confused by that response), understood after the second response, and then explained what I would have said if I were in her place when she asked about norms.

(Yes, I do in fact think that the specific thing said had negative consequences. Yes, this belief shows in my comments. But I didn't say that Anna was wrong/bad for saying the specific thing, nor did I say that she "should" have done something else. Assuming for the moment that the specific statement did have negative consequences, what should I have done instead?)

(On the actual question, I mostly agree that we probably have too many demands on public communication, such that much less public communication happens than would be good.)

I just think people here are smart and independent enough to not be 'coerced' by Anna if she doesn't open the conversation with a bunch of 'you might suffer reprisals' warnings

I also would have been fine with "I hope people share additional true, relevant facts". The specific phrasing seemed bad because it seemed to me to imply that the fear of reprisal was wrong. See also here [LW(p) · GW(p)].

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-15T00:03:22.706Z · LW(p) · GW(p)

OK, thanks for the correction! :]

comment by TekhneMakre · 2021-10-13T23:15:05.255Z · LW(p) · GW(p)

Of course there's also the possibility that it's worth it. E.g. because people could then notice who is doing a rush-to-judgement thing or confirmation-bias-y thing. (This even holds if there's threat of personal harm to fact-sharers, though personal harm looks like something you added to the part you quoted.)

Replies from: rohinmshah
comment by rohinmshah · 2021-10-14T07:44:12.729Z · LW(p) · GW(p)

I agree that's possible, but then I'd say something like "I would love to know additional true relevant facts, but I recognize there are real risks to this and only recommend people do this if they think the benefits are worth it".

Analogy: it could be worth it for an employee to publicly talk about the flaws of their company / manager (e.g. because then others know not to look for jobs at that company), even though it might get them fired. In such a situation I would say something like "It would be particularly helpful to know about the flaws of company X, but I recognize there are substantial risks involved and only recommend people do this if they feel up to it". I would not say "I hope people don't refrain from speaking up about the flaws of company X out of fear that they might be fired", unless I had good reason to believe they wouldn't be fired, or good reason to believe that it would be worth it on their values (though in that case presumably they'd speak up anyway).

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-14T10:14:08.630Z · LW(p) · GW(p)

Thanks. I'm actually still not sure what you're saying.

Hypothesis 1: you're saying, stating "I hope person A does X" implies a non-dependence on person A's information, which implies the speaker has a lot of hidden evidence (enough to make their hope unlikely to change given A's evidence). And, people might infer that there's this hidden evidence, and update on it, which might be a mistake.

Hypothesis 2: you're pointing at something about how "do X, even if you have fear" is subtly coercive / gaslighty, in the sense of trying to insert an external judgement to override someone's emotion / intuition / instinct. E.g. "out of fear" might subtly frame an aversion as a "mere emotion".

(Maybe these are the same...)

Replies from: rohinmshah
comment by rohinmshah · 2021-10-14T11:17:24.966Z · LW(p) · GW(p)

Hypothesis 2 feels truer than hypothesis 1.

(Just to state the obvious: it is clearly not as bad as the words "coercion" and "gaslighting" would usually imply. I am endorsing the mechanism, not the magnitude-of-badness.)

I agree that hypothesis 1 could be an underlying generator of why the effect in hypothesis 2 exists.

I think I am more confident in the prediction that these sorts of statements do influence people in ways-I-don't-endorse, than in any specific mechanism by which that happens.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-14T17:17:15.295Z · LW(p) · GW(p)

Okay.

comment by farp · 2021-10-13T22:17:23.425Z · LW(p) · GW(p)

I would like it if we showed the world how accountability is done, and given your position, I find it disturbing that you have omitted this objective. That is, if I wanted to deflect the conversation away from accountability, I think I would write a post similar to yours. 

Replies from: AnnaSalamon, deluks917
comment by AnnaSalamon · 2021-10-14T03:34:34.832Z · LW(p) · GW(p)

I would like it if we showed the world how accountability is done

So would I. But to do accountability (as distinguished from scapegoating, less-epistemic blame), we need to know what happened, and we need to accurately trust each other (or at least most of each other) to be able to figure out what happened, and to care what actually happened.

The “figure out what happened” and “get in a position where we can have a non-fucked conversation” steps come first, IMO.

I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.

Though, accountability is admittedly a weak point of mine, so I might be missing/omitting something. Maybe spell it out if so?

Replies from: farp
comment by farp · 2021-10-15T02:21:39.125Z · LW(p) · GW(p)

I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.

To clarify: goal divergence between whom? Geoff and Zoe? Zoe and me? Me and you?

comment by deluks917 · 2021-10-13T22:19:26.344Z · LW(p) · GW(p)

This reaction has been predictable for years IMO. As usual, a reasonable response required people to go public. There is no internal accountability process. Luckily things have been made public.

comment by Geoff_Anders · 2021-10-17T11:21:31.301Z · LW(p) · GW(p)

Zoe - I don’t know if you want me to respond to you or not, and there’s a lot I need to consider, but I do want to say that I’m so, so sorry. However this turns out for me or Leverage, I think it was good that you wrote this essay and spoke out about your experience.

It’s going to take me a while to figure out everything that went wrong, and what I did wrong, because clearly something really bad happened to you, and it is in some way my fault. In terms of what went wrong on the project, one throughline I can see was arrogance, especially my arrogance, which affected so many of the choices we made. We dismissed a lot of the actually useful advice and tools and methods from more typical sources, and it seems that blocking out society made room for extreme and harmful narratives that should have been tempered by a lot more reality. It’s terrible that you felt like your funding, or ability to rest, or take time off, or choose how to interact with your own mind were compromised by Leverage’s narratives, including my own. I totally did not expect this, or the negative effects you experienced after leaving, though maybe I would have, had I not narrowed my attention and basically gotten way too stuck in theoryland.

I agree with you that we shouldn’t skip steps. I’ve updated accordingly. Again I’m truly sorry. I really wanted your experience on the project to be good.

Replies from: Spiracular, Spiracular, Spiracular, homosexuallover22poopoo
comment by Spiracular · 2021-10-17T15:51:57.597Z · LW(p) · GW(p)

Edit: I got a request to cut the chaff and boil this down to discrete actionables. Let me do that.

  1. Will you release everyone from any NDAs?

  2. Will you step down from any management roles (e.g. Leverage and Paradigm)?

  3. Will you state, for the record, that you commit to not threaten* anyone who comes forward with reports that you do not like, in the course of this process?

I get the sense that you have made people afraid to stand against you, historically. Engaging in any further threats seems likely to impede all of our ability to make sense of, and come to terms with, whatever happened. It could also be quite incriminating on its own.

* For full points, commit to also not make any strong stealthy attempts to socially discredit people.

Replies from: Unreal, Spiracular
comment by Unreal · 2021-10-17T20:22:44.054Z · LW(p) · GW(p)

There's good ways to do this kind of thing and bad ways. I feel that this is a bad way? Unless I'm missing a lot of context about what's happening here. 

Other ways to go about this:

  • Hire a third-party mediator to connect aggrieved parties with Geoff
  • Have a mutual trusted friend mediate conversations between aggrieved parties and Geoff
  • Geoff and ex-Leverage staff do a postmortem of some kind
  • Leverage creates an accountability system through which it collects data and feedback

I want to suggest that Geoff doesn't need to respond to Spiracular's requests because they contain a lot of assumptions, in the same way the question "Where were you on the night of the murder" contains a lot of assumptions. And this is a bad way to go about justice. Unless, again, I'm missing a bunch of context. 

Replies from: Spiracular, Spiracular
comment by Spiracular · 2021-10-17T21:03:19.172Z · LW(p) · GW(p)

For whatever it's worth, I think "No" is a pretty acceptable answer to some of these.


"No, for reasons X, Y, Z" is a pretty ordinary answer to the NDA concern. I'd still like to see that response.

"Leverage 2.0 was deliberately structured to avoid a lot of the drawbacks of Leverage 1.0" is something I actually think is TRUE. The fact that Leverage 1.0 was sunsetted deliberately, is something that I thought actually reflected well on both Geoff and the people there.

I think from that, an argument could be made that stepping down is not necessary. I can't say I would necessarily agree with it, but I think the argument could be made.


Most of my stance is that currently most people are too SCARED to talk. And this is actually really worrying to me.

I don't think "introducing a mediator," who would be spending about half of their time with Geoff --the epicenter of a lot of that fear-- would actually completely solve all of that problem. It would surprise me a lot if it worked here.


My #1 most desired commitment, right now? Is actually #3, and I maybe should have put it first.

A commitment to, in the future, not go after people and especially not to threaten them, for talking about their experiences.

That by itself, would be quite meaningful to me.

Replies from: Unreal
comment by Unreal · 2021-10-17T21:37:44.381Z · LW(p) · GW(p)

Well, I am at least gonna name a fraction of the assumptions that are implied by this set of requests. I am not asking you to do anything about this, but I am going to name them out loud, in the hopes that people come away more conscious of what other assumptions might be present. 

  • Geoff was the center of the problem and, by himself, should be held accountable 
  • If Geoff agrees to this, he is also agreeing on behalf of Leverage itself, including current members and potentially even past members. Meaning that if not-Geoff people break or violate these commitments, Geoff himself should be held responsible
  • Geoff has a meaningful degree of control over what other people do or do not do / say or do not say
  • The people who are scared of retaliation of some kind are mostly afraid of Geoff in particular
  • People's views of Geoff's willingness and ability to retaliate are basically correct / their fears are justified
  • The aggrieved parties should put the mass of the blame on Geoff
  • They should feel better if Geoff agrees to these requests
  • Geoff is totally free to say "no" to these requests on a public internet forum, and this won't cause a bunch of misunderstanding / assumption of guilt if he does 
  • Zoe's account is more or less the full picture of what happened
  • Geoff shouldn't be in a position of leadership 
    • Geoff is bad in a way that cannot easily be corrected through a postmortem, accountability, or feedback process
    • Geoff meant ill-will toward individuals or sort-of knowingly abused people or used them for ego-inflating grandiose aims
    • Geoff knew the basics of Zoe's post / experience

(If you or others basically agree that the above list is true, that would be illuminating and help me understand where you're coming from.) 

It seems bad that people are scared to talk. I appreciate your and other people's efforts to make it easier for them to make sense of their experience and to process it out loud. I suspect Zoe's account has helped a lot and created space for bravery, and I feel trust that others will come forward when it's time. 

Spiracular, I sense good faith from you and appreciate the thoughtfulness you seem to be bringing. I am wanting to have this conversation in the open so that people aren't left with weird impressions about who knows what, what's true, and what we have collectively agreed is true.

I'd like to suggest not moving too fast past the "processing what happened" phase by jumping ahead from "observe, orient" to "decide, act." Even if for the sake of helping people, I think it's important to "slow our roll" when it comes to deciding what a person should do and engaging them with a leading set of questions or demands. I don't like the things Geoff would subtly be 'agreeing to' by saying 'yes' or 'no' to any of these requests. Him engaging them at all seems like a trap. (He might choose to do so anyway. I am not trying to protect HIM in particular. I am protecting against 'doing justice in a way that doesn't serve'. For that reason, I don't want to see him respond to your requests until there's more 'orientation'.)

LessWrong is not the ideal medium for handling justice, as far as I can tell, and so I also generally feel like we shouldn't be trying to handle this on LW. 

Replies from: Duncan_Sabien, Spiracular
comment by Duncan_Sabien · 2021-10-17T22:22:19.664Z · LW(p) · GW(p)

(In the Duncan-culture version of LW, comments like the above are both commonplace and highly appreciated.  I mention this because Unreal has mentioned having a tough time with LW, and imo the above comment demonstrates solidly central LW virtue.)

comment by Spiracular · 2021-10-18T01:02:40.507Z · LW(p) · GW(p)

I appreciate this too. I think this form of push-back, is a potentially highly-productive one.

I may need to think for a bit about how to respond? But it seemed worth expressing my appreciation for it, first.

comment by Spiracular · 2021-10-17T21:11:36.658Z · LW(p) · GW(p)

Meta-note: I tried the longer-form gentler one? But somebody ELSE complained about that structure.

(A piece of me recognizes that I can't make everybody happy here, but it's a little annoying.)

comment by Spiracular · 2021-10-17T16:40:05.389Z · LW(p) · GW(p)

That last sub-point is a little vague, so let me clarify my personal cut-off on this. Others may disagree.

I wouldn't object to seeing the occasional brief overt statement coming directly from Geoff that his recollection doesn't match someone else's interpretation.

I would object to any further encouragement of things that resemble the "strong, repeated pressure by someone close to Geoff to have the post marked as flawed" that Ruby described [LW(p) · GW(p)].

Consistently denouncing the latter going forward, would be very helpful.

Replies from: Ruby
comment by Ruby · 2021-10-17T17:24:02.605Z · LW(p) · GW(p)

I want to clarify that using the word "threat" in my case would cause one to overestimate the severity by 5-20x or something of the pressure I experienced (more so than "strong pressure"). Not that the word is strictly wrong, but the connotations of it are too strong. I might end up listing the actual behaviors in a bit, maybe after more dialog with the person in question.

Replies from: Spiracular
comment by Spiracular · 2021-10-17T19:58:16.878Z · LW(p) · GW(p)

When I said "last sub-point?"

I was referring to "make any strong stealthy attempts to socially discredit people," not "threaten" (by which I mean, "threaten").

I was deliberately treating "no threats" as minimum, and "no strong social pressure" as extra-credit.

Replies from: Ruby
comment by Ruby · 2021-10-17T20:34:06.038Z · LW(p) · GW(p)

Ah, gotcha. I misunderstood the meaning of "sub-point".

comment by Spiracular · 2021-10-17T14:51:26.591Z · LW(p) · GW(p)

I recognize it took some courage to talk about this in the first place, and I don't want to discount that. I am glad that you said something.

...but I also don't want to lose track of this thread [LW(p) · GW(p)].

Edit: I got a request to boil this down, so I separated it to that thread.

And reading the room? I think there is, broadly speaking, a lot of fear of you. And I think part of why that is true, is because you cultivated that.

You have noticed that you made some errors which blinded you to the consequences of some of your actions, and I think that's a good start? I hope you might be able to agree with me that this attitude of fear, is probably blinding you to the reporting of any further harms.

I recognize processing takes time, and there hasn't been a lot of time yet. But also, I think somebody needed to say this to your face, and it might as well be me.

How do you want to help wind down this aura of fear, which I think is still blinding not just most of us, but also YOU, to a lot of the full reality of what happened?

(And it might well be, that you will help with this by saying almost nothing and going after no-one. But if so? I think it would help, if you briefly committed to that outright.)

comment by Spiracular · 2021-10-17T14:51:11.248Z · LW(p) · GW(p)

I appreciate hearing from you about some of what you probably got wrong.

I'm pretty sure that a lot of this started out relatively benignly, and spiraled?

I agree with your impression that arrogance was at least one of several pressures that made it hard to see that things were going in a bad direction. A lot of invisible guard-rails were dropped or traded away over time, and the absence of a certain amount of reality-checking made it very hard to fix after things had veered off the rails.

I hope your account contributes to making people less likely to make similar errors in the future.

(I would also be very unhappy, if I ever saw you having a substantial amount of power over people again though, fwiw.)

comment by konstell (parsley) · 2021-10-13T19:05:03.391Z · LW(p) · GW(p)

Epistemic status: I have not been involved with Leverage Research in any way, and have no knowledge of what actually happened beyond what's been discussed on LessWrong. This comment is an observation I have after reading the post.

I had just finished reading Pete Walker's Complex PTSD before coming across this post. In the book, the author describes a list of calm, grounded thoughts to respond to inner critic attacks. A large part of healing is for the survivor to internalize these thoughts so they can psychologically defend themselves.

I see a stark contrast between what the CPTSD book tries to instill and the ideas Leverage Research tried to instill, per Zoe's account. It's as if some of the programs at Leverage Research were trying to unravel almost all of one's sense of self.

A few examples:

Perfectionism

From the CPTSD book:

I do not have to be perfect to be safe or loved in the present. I am letting go of relationships that require perfection. I have a right to make mistakes. Mistakes do not make me a mistake.

From the post:

We might attain his level of self-efficacy, theoretical & logical precision, and strategic skill only once we were sufficiently transformed via the use of our debugging techniques. The overarching objective was to discover and “update” deep irrationalities and eventually become a sort of Musk-level super-person (“attain Mastery” of a world-saving-relevant domain).

All-or-None & Black-and-White Thinking

From the CPTSD book:

I reject extreme or overgeneralized descriptions, judgments or criticisms [...] Statements that describe me as “always” or “never” this or that, are typically grossly inaccurate.

From the post:

Another supervisor spoke wonderingly about Geoff’s presence in our lives, “It’s hard to make sense of the fact that this guy exists at all, and then on top of it, for some reason our lives have intersected with his, at this moment in history. It’s almost impossible to believe that we are the only people who have ever lived with access to the one actual theory of psychology.”

Micromanagement/Worrying/Obsessing/Looping/Over-Futurizing

From the CPTSD book:

I will not repetitively examine details over and over. I will not jump to negative conclusions. I will not endlessly second-guess myself. I cannot change the past. I forgive all my past mistakes. I cannot make the future perfectly safe. I will stop hunting for what could go wrong. I will not try to control the uncontrollable. I will not micromanage myself or others. I work in a way that is “good enough”, and I accept the existential fact that my efforts sometimes bring desired results and sometimes they do not.

From the post:

I sat in many meetings in which my progress as a “self-debugger” was analyzed or diagnosed, pictographs of my mental structure put on a whiteboard. What were my bottlenecks? Was I just not trying hard enough, did I need to be pushed out of the nest? Did I just need support in fixing that one psych issue? Or were there ten, and this wasn’t going to work out? If I was introspectively blocked for too long, I’d have to come up with something else I could do for the project, like operations or sociology, and quick. And if I wasn’t any good at those, I was out. I couldn’t be out, so I doubled down on trying to mold my mind in the “right” direction.

Unfair/Devaluing Comparisons

From the CPTSD book:

I refuse to compare myself unfavorably to others. I will not compare “my insides to their outsides”. I will not judge myself for not being at peak performance all the time. In a society that pressures us into acting happy all the time, I will not get down on myself for feeling bad.

From the post:

No one else, with one exception being the head of sociology, was considered to have any theories that could possibly be good enough to significantly further the plan like Geoff’s could.

Overproductivity/Workaholism/Busyholism

From the CPTSD book:

I am a human being not a human doing. I will not choose to be perpetually productive. I am more productive in the long run, when I balance work with play and relaxation. I will not try to perform at 100% all the time. I subscribe to the normalcy of vacillating along a continuum of efficiency.

From the post:

I was regularly left with the feeling that I was low status, uncommitted, and kind of useless for wanting to socialize on the weekends or in the evenings. Geoff was known to sleep only about 5–6 hours. Multiple people in leadership had themselves booked from 7am until past midnight every single day of the week.

comment by AnnaSalamon · 2021-10-14T20:08:08.951Z · LW(p) · GW(p)

I imagine a lot of people want to say a lot of things about Leverage and the dynamics around it, except it’s difficult or costly/risky or hard-to-imagine-being-heard-about or similar.

If anyone is up for saying a bit about how that is for you personally (about what has you reluctant to try to share stuff to do with Leverage, or with EA/Leverage dynamics or whatever, that in some other sense you wish you could share — whether you had much contact with Leverage or not), I think that would be great and would help open up space.

I’d say err on the side of including the obvious.

Replies from: Ruby, mingyuan, habryka4, BayAreaHuman, Spiracular, Evan_Gaensbauer, Linch
comment by Ruby · 2021-10-16T05:21:12.070Z · LW(p) · GW(p)

I interacted with Leverage some over the years. I felt like they had useful theory and techniques, and was disappointed that it was difficult to get access to their knowledge. I enjoyed their parties. I did a Paradigm workshop. I knew people in Leverage to a casual degree.

What's live for me now is that when the other recent post about Leverage was published, I was subjected to strong, repeated pressure by someone close to Geoff to have the post marked as flawed, and asked to lean on BayAreaHuman to approximately retract the post or acknowledge its flaws. (This request was made of me in my new capacity as head of LessWrong.) "I will make a fuss" is what I was told. I agreed that the post has flaws (I commented to that effect in the thread) and this made me feel the pressure wasn't illegitimate despite being unpleasant. Now it seems to be part of a larger concerning pattern.

Further details seem pertinent, but I find myself reluctant to share them (and already apprehensive that this more muted description will have the feared effect) because I just don't want to damage the relationship I have with the person who was pressuring me. I'm unhappy about it, but I still value that relationship. Heck, I haven't named them. I should note that this person updated (or began reconsidering their position) after Zoe's post and has since stopped applying any pressure on me/LessWrong. 

With Geoff himself (with whom I personally have had a casual positive relationship) I feel more actual fear of being critical or in anyway taking the side against Leverage. I predict that if I do so, I'll be placed on the list of adversaries. And something like, just based on the reaction to the Common knowledge post, Leverage is very agenty when it comes to their reputation. Or I don't know, I don't fear any particularly terrible retribution myself, but I loathe to make "enemies".

I'd like to think that I've got lots of integrity and will say true things despite pressures and incentives otherwise, but I'm definitely not immune to them. 

Replies from: RobbBB, TekhneMakre
comment by Rob Bensinger (RobbBB) · 2021-10-16T05:55:36.701Z · LW(p) · GW(p)

With Geoff himself (with whom I personally have had a casual positive relationship) I feel more actual fear of being critical or in anyway taking the side against Leverage. I predict that if I do so, I'll be placed on the list of adversaries. And something like, just based on the reaction to the Common knowledge post, Leverage is very agenty when it comes to their reputation. Or I don't know, I don't fear any particularly terrible retribution myself, but I loathe to make "enemies".

If you do make enemies in this process, in trying to help us make sense of the situation: count me among the people you can call on to help.

Brainstorming more concrete ideas: if someone makes a GoFundMe to try to offset any financial pressure/punishment Leverage-adjacent people might experience from sharing their stories, I'll be very happy to contribute.

comment by TekhneMakre · 2021-10-16T05:44:03.020Z · LW(p) · GW(p)
I'm unhappy about it, but I still value that relationship

Positive reinforcement for finding something you could say that (1) protects this sort of value at least somewhat and (2) opens the way for aggregation of the metadata, so to speak; like without your comment, and other hypothetical comments that haven't happened yet for similar reasons, the pattern could go unnoticed.


I wonder if there's an extractable social norm / conceptual structure here. Something like separating [the pattern which your friend was participating in] from [your friend as a whole, the person you have a relationship]. Those things aren't separate exactly, but it feels like it should make sense to think of them separately, e.g. to want to be adversarial towards one but not the other. Like, if there's a pattern of subtly suppressing certain information or thoughts, that's adversarial, and we can be agnostic about the structure/location of the agency behind that pattern while still wanting to respond appropriately in the adversarial frame.

comment by mingyuan · 2021-10-15T18:44:55.363Z · LW(p) · GW(p)

My contact with Leverage over the years was fairly insignificant, which is part of why I don’t feel like it’s right for me to participate in this discussion. But there are some things that have come to mind, and since Anna’s made space for that, I’ll note them now. I still think it’s not really my place to say anything, but here’s my piece anyway. I’m speaking only for myself and my own experience.

I interviewed for an ops position at Leverage/Paradigm in early 2017, when I was still in college. The process took maybe a couple months, and the in-person interview happened the same week as my CFAR workshop; together these were my first contact with the Bay community. Some of the other rationalists I met that week warned me against Leverage in vague terms; I discussed their allegations with the ops team at my interview and came away feeling satisfied that both sides had a point.

I had a positive experience at the interview and with the ops team and their hiring process in general. The ops lead seemed to really believe in me and recommended me to other EA orgs after I didn’t get hired at Paradigm, and that was great. My (short-term) college boyfriend had a good relationship with Leverage and later worked at Paradigm. In mid-2017 I met a Leverage employee in a non-Leverage context and we went on a couple dates; that ended amicably. All that’s just to say that at that point, I thought I had a fairly positive relationship with them.

Then, Leverage/Paradigm put on EA Summit in the summer of 2018. I applied to attend and was rejected. My boyfriend, who I think attended a Paradigm workshop around that time, managed to get that decision reversed, but told me that I was rejected because I was on a list of people who might speak ill of Leverage. That really rubbed me the wrong way. I didn’t think I had ever acted in a way to deserve that, and it seemed bad to me that they were so paranoid about their reputation that they would reject large swaths of people from a conference that’s supposed to bring together EAs from around the world, just because of vague suspicion. Ironically that’s the personal experience that led me to distrust Leverage the most. 

The bottom line being that discussions around Leverage’s reputation have always been really fraught and murky, and it’s totally understandable to me that people would fear unknown repercussions for discussing Leverage in public. Many other people in these threads have said that in various ways, but there’s my concrete example.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-15T19:17:15.632Z · LW(p) · GW(p)

The obsession with reputation control is super concerning to me, and I wonder how this connects up with Leverage's poor reputation over the years.

Like, I could imagine two simplified stories...

Story 1:

  • Leverage's early discoveries and methods were very promising, but the inferential gap was high -- they really needed a back-and-forth with someone to properly communicate, because everyone had such different objections and epistemic starting points. (This is exactly the trouble MIRI had in its early comms -- if you try to anticipate which objections will be salient to the reader, you'll usually miss the mark. And if you do this a lot, you miss the mark and are long-winded.)
  • Because of this inferential gap, Leverage acquired a very bad reputation with a bunch of people who (a) misunderstood its reasoning, and then (b) didn't get why Leverage wasn't investing more into public comms.
  • Leverage then responded by sharing less and trying to reset its public reputation to 'normal'. It wasn't trying to become super high-status, just trying to undo the damage already done / prevent things from further degrading as rumors mutated over time. Unfortunately, its approach was heavy-handed and incompetent, and backfired.

Story 2:

  • Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers.
  • This was one of the causes of Leverage's bad reputation, from an early date. Through some combination of 'people noticing when Leverage bungles a PR operation' and 'humans are pretty good at detecting other humans' character, and picking up on subtle cues that someone is manipulative'.

To what extent is one or the other true? (Another possibility is that there isn't much of a causal tie between Leverage's PR obsession and its bad reputation, and they just both occurred for other reasons.)

comment by habryka (habryka4) · 2021-10-14T23:20:47.341Z · LW(p) · GW(p)

My current feelings are a mixture of the following: 

  • I disagree with a lot of the details of what many people have said (both people who had bad experiences and people defending their Leverage experiences and giving positive testimonials), and feel like expressing my take has some chance of making those people feel like their experiences are invalidated, or at least spark some conflict of some type
  • I know that Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand, and that makes me both feel like I can trust many fewer things in the discussion, and makes me personally more hesitant to share some things (while also feeling like that's kind of cowardly, but I haven't yet had the time to really work through my feelings here, which in itself has some chilling effects that I feel uncomfortable with, etc.)
  • On the other side, there have been a lot of really vicious and aggressive attacks to anyone saying anything pro-leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It's also been more of a crowd-driven phenomenon, which makes it less predictable and more scary.
  • I feel like it's going to be really hard to say anything without people pigeonholing me into belonging to some group that is trying to rewrite the rationality social and political landscape some way, and that makes me feel like I have to actively think about how to phrase what I am saying in a way that avoids that pigeonholing effect (as a concrete example, one person approached me who read Ben's initial comment on the "BayAreaHuman" post that said "I confirm that this is a real person in good standing" as an endorsement of the post, when the comment was really just intended as confirming some facts about the identity of the poster, with basically complete independence from the content of the post)
  • I myself have access to some sensitive and somewhat confidential information, and am struggling with navigating exactly which parts are OK to share and which ones are not. 
Replies from: RobbBB, farp
comment by Rob Bensinger (RobbBB) · 2021-10-15T00:13:45.915Z · LW(p) · GW(p)

Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand

I assume there isn't a public record of this anywhere? Could I hear more details about what was said? This sounds atrocious to me.

I similarly feel that I can't trust the exculpatory or positive evidence about Leverage much so long as I know there's pressure to withhold negative information. (Including informal NDAs and such.)

On the other side, there have been a lot of really vicious and aggressive attacks to anyone saying anything pro-leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It's also been more of a crowd-driven phenomenon, which makes it less predictable and more scary.

I agree with this too, and think it's similarly terrible, but harder to blame any individual for (and harder to fix).

I assume it's to a large extent an extreme example of the 'large inferential gaps + true beliefs that sound weird' afflicting a lot of EA orgs, including MIRI. Though if Leverage has been screwed up for a long time, some of that public reaction may also have been watered over the years by true rumors spreading about the org.

comment by farp · 2021-10-15T03:52:56.028Z · LW(p) · GW(p)

Let's stand up for the truth regardless of threats from Geoff/Leverage, and let's stand up for the truth regardless of the mob. 

I feel like it's going to be really hard to say anything without people pigeonholing me into belonging to some group that is trying to rewrite the rationality social and political landscape some way.

Let's stand up for the truth! Maintaining some aura of neutrality or impartiality at the expense of the truth would be IMO quite obviously bad. 

I myself have access to some sensitive and somewhat confidential information, and am struggling with navigating exactly which parts are OK to share and which ones are not.

I think that it is seen as not very normative on LW to say "I know things, confidential things I will not share, and because of that I have a very [bad/good] impression of this person or group". But IMO its important to surface. Vouching is an important social process. 

Replies from: ChristianKl, Ruby
comment by ChristianKl · 2021-10-15T08:58:32.137Z · LW(p) · GW(p)

Let's stand up for the truth regardless of threats from Geoff/Leverage, and let's stand up for the truth regardless of the mob. 

It seems that your account was registered just to participate in this discussion, and you withhold your personal identity. 

If you sincerely believe that information should be shared, why are you withholding your own identity while telling other people to take risks?

Replies from: farp
comment by farp · 2021-10-15T21:59:14.679Z · LW(p) · GW(p)

I have no private information to share. I think there is an obvious difference between asking powerful people in the community to stand up for the truth, and asking some rando commentator to de-anonymize. 

comment by Ruby · 2021-10-15T04:37:39.265Z · LW(p) · GW(p)

Anna is attempting to make people comfortable having this difficult conversation about Leverage by first inviting them just to share what factors are affecting their participation. Oliver is kindly obliging and saying what's going through his mind. 

This seems like a good approach to me for getting the conversation going. Once people have shared what's going through their minds–and probably these need to be received with limited judgmentality–the group can then understand the dynamics at play and figure out how to proceed having a productive discussion.

All that to say, I think it's better to hold off on pressuring people or saying their reactions aren't normative [1] in this sub-thread. Generally, I think having this whole conversation well requires a gentleness and patience in the face of the severe, hard-to-talk-about situation.  Or to be direct, I think your comments in this thread have been brusque/pushy in a way that's hurting the conversation (others feel free to chime in if that seems wrong to them).

[1] For what it's worth, I think disclosing that your stance is informed by private info is good and proper.

Replies from: mayleaf
comment by mayleaf · 2021-10-16T00:42:33.421Z · LW(p) · GW(p)

I think your comments in this thread have been brusque/pushy in a way that's hurting the conversation (others feel free to chime in if that seems wrong to them).

I mentioned in a different comment that I've appreciated some of farp's comments here for pushing back against what I see as a missing mood in this conversation (acknowledgment that the events described in Zoe's account are horrifying, as well as reassurance that people in leadership positions are taking the allegations seriously and might take some actions in response). I also appreciate Ruby's statement that we shouldn't pressure or judge people who might have something relevant to say.

The unitofcaring post on mediators and advocates seems relevant here. I interpret farp (edit: not necessarily in the parent comment, but in various other comments in this thread) as saying that they'd like to see more advocacy in this thread instead of just mediation. I am not someone who has any personal experiences to share about Leverage, but if I imagine how I'd personally feel if I did, I think I agree.

Replies from: Spiracular, RobbBB, farp
comment by Spiracular · 2021-10-16T01:42:06.851Z · LW(p) · GW(p)

On mediators and advocates: I think order-of-operations MATTERS.

You can start seeking truth, and pivot to advocate, as UOC says.

What people often can't do easily is start with advocate, and pivot to truth.

And with something like this? What you advocated early can do a lot to color both what and who you listen to, and who you hear from.

Replies from: farp
comment by farp · 2021-10-17T06:18:48.023Z · LW(p) · GW(p)

You can start seeking truth, and pivot to advocate, as UOC says.

The entire thesis of the post is that you want a mixture of advocacy and mediation in the community. So if your proposal is that we all mediate, and then pivot to advocacy, I think that is not at all what UOC says. 

Not that I super endorse the prescription / dichotomy that the post makes to begin with.

comment by Rob Bensinger (RobbBB) · 2021-10-16T01:05:54.515Z · LW(p) · GW(p)

I liked Farp's "Let's stand up for the truth" comment, and thought it felt appropriate. (I think for different reasons than "mediators and advocates" -- I just like people bluntly stating what they think, saying the 'obvious', and cheerleading for values that genuinely deserve cheering for. I guess I didn't expect Ollie to feel pressured-in-a-bad-way by the comment, even if he disagrees with the implied advice.)

Replies from: farp
comment by farp · 2021-10-17T06:06:19.393Z · LW(p) · GW(p)

Thanks. Your comments and mayleaf's do mean a lot to me. Also, I was surprised by negative reaction to that comment and didn't really expect it to come off as admonishment or pressure. Love 2 cheerlead \o/

comment by farp · 2021-10-17T07:59:52.595Z · LW(p) · GW(p)

I have thought about this UOC post and it has grown on me.

The fact is that I believe Zoe and I believe her experience is not some sort of anomaly. But I am happy to advocate for her just on principle.

Geoff has much more resources and much at stake. Zoe just has (IMO) the truth and bravery and little to gain but peace. Justice for Geoff just doesn't need my assistance, but justice for Zoe might. 

So I am happy to blindly ally with Zoe and any other victims. And yes I would like others to do the same, and broadcast that we will fight for them. Otherwise they are entering a potentially shitty looking fight with little to gain against somebody with everything to lose.

I don't demand that no mediation take place, but if I want to plant my flag, that's my business. It's not like I am doing anything dishonest in the course of my advocacy.

And to be completely frank, as an advocate for the victims, I don't really want AnnaSalamon to be one of the major mediators here. I don't think she's got a good track record with CFAR stuff at all -- I have mentioned Robert Lecnik a few times already. 

I think Kelsey's post is right -- mediators need to seem impartial. For me, Anna can't serve this role. I couldn't say how representative I am.

Replies from: Viliam
comment by Viliam · 2021-10-17T12:13:33.196Z · LW(p) · GW(p)

I will be happy to contribute financially to Zoe's legal defense, if Geoff decides to take revenge.

In the meanwhile, I am curious about what actually happened. The more people talk, the better.

comment by BayAreaHuman · 2021-10-16T02:24:59.474Z · LW(p) · GW(p)

I appreciate this invitation. I'll re-link to some things I already said on my own stance: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QKKnepsMoZmmhGSe [LW(p) · GW(p)]

Beyond what I laid out there:

  • It was challenging being aware of multiple stories of harm, and feeling compelled to warn people interacting with Geoff, but not wanting to go public with surprising new claims of harm. (I did mention awareness of severe harm very understatedly in the post. I chose instead to focus on "already known" properties that I feel substantially raise the prior on the actually-observed type of harm, and to disclose in the post that my motivation in cherry-picking those statements was to support pattern-matching to a specific template of harm).

  • After posting, it was emotionally a bit of a drag to receive comments that complained that the information-sharing attempt was not done well enough, and comparatively few comments grateful for attempting to share what I could, as best I could, to the best of my ability at the time, although the upvote patterns felt encouraging. I was pretty much aware that that was what was going to happen. In general, "flinching in anticipation of a high criticism-to-gratitude ratio" is an overall feeling I have when I imagine posting anything on LessWrong.

  • I was told by friends before posting that I ought to consider the risk to myself and to my contacts of tangible real-world retribution. I don't have any experience with credible risk of real-world retribution. It feels mind-numbing.

  • Meta: I haven't felt fully comfortable describing retribution concerns, including in the post, because I haven't been able to rule out that revealing the tactical landscape of why I'm sharing or avoiding certain details is simply more information that can be used by Geoff and associates to make life harder for people pursuing clarity. This is easier now that Zoe has written firsthand about specific retribution concerns.

  • Meta-meta: It doesn't feel great to talk about all this paranoid adversarial retribution thinking, because I don't want to contribute to the spread of paranoia and adversarial thinking. It feels contagious. Zoe describes a very paranoid atmosphere within Leverage and among those who left, and I feel that attesting to a strategically-aware disclosure pattern carries that toxic vibe into new contexts.

Replies from: Spiracular, TekhneMakre
comment by Spiracular · 2021-10-16T17:50:26.010Z · LW(p) · GW(p)

Since it sounds like just-upvotes might not be as strong a signal of endorsement as positive engagement...

I want to say that I really appreciate and respect that you were willing to come forward, with facts that were broadly-known in your social graph, but had been systematically excluded from most people's models.

And you were willing to do this, in a pretty adversarial environment! You had to deal with a small invisible intellectual cold-war that ensued, almost alone, without backing down. This counts for even more.


I do have a little bit of sensitive insider information, and on the basis of that: Both your posts and Zoe's have looked very good-faith to me.

In a lot of places, they accord with or expand on what I know. There are a few parts I was not close enough to confirm, but they have broadly looked right to me.

Replies from: Spiracular
comment by Spiracular · 2021-10-16T17:51:21.616Z · LW(p) · GW(p)

I also have a deep appreciation, for Zoe calling out that different corners of Leverage had very different experiences with it. Because they did! Not all time-slices or sub-groups within it experienced the same problems.

This is probably part of why it was so easy, to systematically play people's personal experiences against each other: Since Geoff knew the context through which each person experienced Leverage, he or others could systematically bias whose reports were heard.

(Although I think it will be harder in the future to engage in this kind of bullshit, now that a lot of people are aware of the pattern.)


To those who had one of the better firsthand experiences of Leverage:

I am still interested in hearing your bit! But if you are only engaging with this due to an inducement that probably includes a sampling-bias, I appreciate you including that detail.

(And I am glad to see people in this broader thread, being generally open about that detail.)

comment by TekhneMakre · 2021-10-16T02:41:40.708Z · LW(p) · GW(p)
Meta-meta: It doesn't feel great to talk about all this paranoid adversarial retribution thinking, because I don't want to contribute to the spread of paranoia and adversarial thinking. It feels contagious. Zoe describes a very paranoid atmosphere within Leverage and among those who left, and I feel that attesting to a strategically-aware disclosure pattern carries that toxic vibe into new contexts.

I don't have anything to add, but I just want to say I felt a pronounced pang of warmth/empathy towards you reading this part. Not sure why, something about fear/bravery/aloneless/fog-of-war.

comment by Spiracular · 2021-10-15T16:24:21.913Z · LW(p) · GW(p)

I will talk about my own bit with Leverage later, but I don't feel like it's the right time to share it yet.

(But fwiw: I do have some scars, here. I have a little bit of skin in this one. But most of what I'm going to talk about, comes from analogizing this with a different incident.)

A lot of the position I naturally slide into around this, which I have... kind of just embraced, is of trying to relate hard to the people who:

  • WERE THERE
  • May have received a lot of good along with the bad
  • May have developed a very complicated and narratively-unsatisfying opinion because of that, which feels hard to defend
  • Are very sensitized to condemning mob-speak. Because they've been told, again and again, that anything good they got out of the above, will be swept out with the bathwater if the bad comes to light.
    • This sort of thing only stays covered up for this long, if there was a lot of pressure and plausible-sounding arguments pointing in the direction of "say nothing." The particular forms of that, will vary.
    • Core Leverage seems pretty willing to resort to manipulation and threats? And despite me generally trying so hard to avoid this vibe: I want to condemn that outright.
    • Also, in any other circumstance: Most people are very happy to condemn people who break strong secrecy agreements that they've made. If you feel like you've made one, I recognize that this is not easy to defy.
      • (My own part in this story is small. The only reason I'm semi-comfortable with sharing it, is because I got all of my own "vaguely owning the fact that I broke a very substantial secrecy agreement, publicly, to all my friends" out of the way EARLY. It would be bogging me down like crazy, otherwise. I respect Zoe, and others, for defying comparable pulls, or even worse ones.)
        • If you're stuck on this bit, I would like to say: This is an exceptional circumstance. You should maybe talk to somebody, eventually. Maybe only once your own processing has settled down. Publicly might not be the right call for you, and I won't push for it. Please take care for yourself, and try to be careful to pick someone who is not especially prone to demonizing things.
  • People can feel their truth drowned out by mobs of uninvested people, condemning it from afar.
    • The people who know what happened here, are in the minority. They have the most knowledge of what actually happened, and the most skin in this. They are also the people with the most to fear, and the most to lose.

People often don't appreciate, how much the sheer numbers game can weigh on you. It can come to feel like the chorus is looming over you, in this sort of circumstance; poised, always ready to condemn you and yours from afar. Each individual member is only "speaking-their-truth" once, but in aggregate, they can feel like an army.

It's hard to keep appropriate sight of the fact that the weight of the people who were there, and their story, are probably worth 1000x as much as even the most coherent but distant and un-invested condemning statement. They will not get as many shares. It might not even qualify as a story! But their contributions are worth a lot more, at least in my mind. Because they were THERE.

And I... want to stick up for them where relevant? Because this one wasn't my incident, but I know how hard it might be for them to do it for themselves. I can't swear I will do a good job of it? But the desire is there.


I do think a more-private forum, that is enriched for people who were closer to the event, might be a more comfortable place for some people to recount. It's part of why I tried to talk up that possibility, in another thread.

...it is unfortunately not my place to make this, though. For various reasons, which feel quite solid, to me.

(And after Ryan's account? I honestly have some concerns about it getting infiltrated by one of the more manipulative people around Leverage. I don't want to discount that fear! I still think it might be a good idea?)

I do think we could stand to have a clearer route for things to be shared anonymously, because I suspect at least some people would be more comfortable that way.

EDIT: Aella is operating a route to publish anonymous accounts about Leverage to Medium. If you are interested, it is described here.

(Since "attempts at deanonymization" appears to be a known issue, it may be worth having a flag for "only share as numeric aggregations of >1, using my recounting as a data-point.")

Replies from: Spiracular, TekhneMakre
comment by Spiracular · 2021-10-15T16:25:24.065Z · LW(p) · GW(p)

I was once in a similar position, due to my proximity to a past (different) thing. I kinda ended up excruciatingly sensitive, to how some things might read or feel to someone who was close, got a lot of good out of it (with or without the bad), and mostly felt like there was no way their account wouldn't be twisted into something unrecognizable. And who may be struggling, with processing an abrupt shift in their own personal narrative --- although I sincerely hope the 2 years of processing helped to make this less of a thing? But if you are going through it anyway, I am sorry.

And... I want this to go right. It didn't go right then; not entirely. I think I got yelled at by someone I respect, the first time I opened up about it. I'm not quite sure how to make this less scary for them? But I want it to be.

The people I know who got swept up in this includes some exceptionally nice people. There is at least one of them, who I would ordinarily call exceptionally sane. Please don't feel like you're obligated to identify as a bad person, or as a victim, because you were swept up in this. Just because some people might say it about you, doesn't make it who you are.

comment by TekhneMakre · 2021-10-16T01:03:07.271Z · LW(p) · GW(p)

An abstract note: putting stock in anonymous accounts potentially opens wider a niche for false accounts, because anonymity prevents doing induction about trustworthiness across accounts by one person. (I think anonymity is a great tool to have, and don't know if this is practically a problem; I just want to track the possibility of this dynamic, and appreciate the additional value of a non-anonymous account.)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-16T01:07:51.898Z · LW(p) · GW(p)

One tool here is for a non-anonymous person to vouch for the anonymous person (because they know the person, and/or can independently verify the account).

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-16T01:15:32.923Z · LW(p) · GW(p)

True. A maybe not-immediately-obvious possibility: someone playing Aella's role of posting anonymous accounts could offer the following option: if you give an account and take this option, then if the poster later finds out that you seriously lied, they have the option to de-anonymize you. The point being, in the hypothetical where the account is egregiously false, the accounter's reputation still takes a hit; and so, these accounts can be trusted more. If there's no possibility of de-anonymization, then the account can only be trusted insofar as you trust the poster's ability to track accounters' trustworthiness. Which seems like a more complicated+difficult task. (This might be a terrible thing to do, IDK.)

Replies from: Spiracular, Viliam
comment by Spiracular · 2021-10-16T01:57:40.728Z · LW(p) · GW(p)

I get VERY creepy vibes from this proposal, and want to push back hard on it.

Although, hm... I think "lying" and "enemy action" are different?

Enemy action occasionally warrants breaking contracts back, after they didn't respect yours.

Whereas if there is ZERO lying-through-negligence in accounts of PERSONAL EXPERIENCES, we can be certain we set the bar-of-entry far too high.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-16T02:22:40.918Z · LW(p) · GW(p)

(Downvoted. I'd have strong downvoted but -5 seems too harsh. Sounds like you're responding to something other than what I said, and if that's right, I don't like that you said "VERY creepy" about the proposal, rather than about whatever you took from it.)

Replies from: Spiracular
comment by Spiracular · 2021-10-16T16:09:23.759Z · LW(p) · GW(p)

I was very up-front about the role I am attempting to embody in this: Relating to, and trying to serve, people with complicated opinions who are finding it hard to talk about this.

I feel we needed someone to take this role. I wish someone had done it for me, when my stuff happened.


You seem to not understand that I am making this statement, from that place and in that capacity.

Try seeing it through the lens of that, rather than thinking that I'm making confident statements about your epistemic creepiness.

Hopefully this helps to resolve your confusion.

comment by Viliam · 2021-10-16T20:30:01.525Z · LW(p) · GW(p)

Depends on the algorithm used to determine whether "you seriously lied".

Imagine a hypothetical situation where telling the truth puts you in danger, but you read this offer, think "well, I am telling the truth, so they will protect my anonymity", and truthfully describe your version. Unluckily for you, your opponent lied, and was more convincing than you. Afterwards, because your story contradicts the accepted version of events, it seems that you were lying, unfairly accusing people who are deemed innocent. As a punishment for "seriously lying", your identity is exposed.

If people with sensitive information suspect that something like this could happen, then it defeats the purpose of the proposal.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-16T21:03:24.861Z · LW(p) · GW(p)

Yeah, that seems like a big potential flaw. (Which could just mean, no one should stick their neck out like that.) I'm imagining that there's only potential benefit here in cases where the accounter also has strong trust in the poster, such that they think the poster almost certainly won't be falsely convinced that a truth is an egregious lie.

In particular, the agreement isn't about whether the court of public opinion decides it was a lie, just the poster's own opinion. (The poster can't be held accountable to that by the public, unless the public changes its mind again, but the poster can at least be held accountable by the accounter.) (We could also worry that this option would only be taken by accounters with accounts that are infeasible to ever reveal as egregious lies, which would be a further selection bias, though this is sort of going down a hypothetical rabbit hole.)

comment by Evan_Gaensbauer · 2021-10-15T03:56:37.885Z · LW(p) · GW(p)

In the past, I've been someone who has found it difficult and costly to talk about Leverage and the dynamics around it, or about organizations that are or have been affiliated with effective altruism, though the times I've spoken up I've done more than others. I would have done it more, but the costs were that some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general, and discouraged me from speaking up again, sometimes with what amounted to nothing more than peer pressure.

That was a few years ago. For lots of reasons, speaking is now easier, less costly, and less risky for me, and I feel less fear. I don't know yet what I'll say regarding any or all of this related to Leverage, because I don't have any sense of how I might be prompted or provoked to respond. Yet I expect I'll have more to say, and I don't have any particular feelings yet about what I might share as relevant. I'm sensitive to how my statements might impact others, but for myself personally I feel almost indifferent.

comment by Linch · 2021-10-14T23:51:37.203Z · LW(p) · GW(p)

My general feeling about this is that the information I know is either well-known or otherwise "not my story to tell." 

I've had very few direct interactions with Leverage except applying to Pareto, a party or two, and some interactions with Leverage employees (not Geoff) and volunteers.  As is common with human interactions, I appreciated many but not all of my interactions.

Like many people in the extended community, I've been exposed to a non-overlapping subset of accounts/secondhand rumors of varying degrees of veracity. For some things it's been long enough that I can't track the degree of confidences I'm supposed to keep, and under which conditions, so it seems better to err on the side of silence. 

At any rate, it's ultimately not my story/tragedy. My own interactions with Leverage have not been personally noticeably harmful or beneficial.

comment by Viliam · 2021-10-17T13:25:35.877Z · LW(p) · GW(p)

Some thoughts related to this topic:

*

For someone familiar with Scientology, the similarities are quite funny. There is a unique genius who develops a new theory of human mind called [Dianetics | Connection Theory]. For people familiar with psychology, it's mostly a restatement of some dubious existing theories, with huge simplifications and little [LW(p) · GW(p)] evidence [LW · GW]. But many people have their minds blown.

The genius starts a group with the goal of providing almost-superpowers such as [perfect memory | becoming another Elon Musk] to his followers, with the ultimate goal of saving the planet. The followers believe this is the only organization capable of achieving such a goal. They must regularly submit to having their thoughts checked at [auditing | debugging], where their sincerity is verified using [e-meter | Belief Reporting]. When the leader runs out of useful or semi-useful ideas to teach, there is always the unending task of exorcising the [body thetans | demons].

The former members are afraid of consequences if they speak about their experience in the organization.

*

Some people expressed epistemic frustration about a situation that seems important to understand correctly, but where information is scarce. Please note that from one party's perspective, this is a feature, not a bug! The whole situation was intentionally designed to be difficult to figure out.

When you are provided filtered evidence [? · GW], it makes sense to assume that your ignorance plays in favor of the party who censors the information. That means that the actual reality is worse than you assume based on the information you already have. (Maybe even worse than when you take this into account, because that party still has an opportunity to stop the information embargo if public suspicion goes too far... and yet they chose not to.)

When reading Geoff's comment [LW(p) · GW(p)], please also notice the part that is missing: revoking the NDA, or promising not to take legal or other action against Zoe (or anyone else who talks about their experience at Leverage).

Also, it's more of an excuse than an apology. "It’s terrible that you felt like [...]. I totally did not expect this". Says the guy whose alleged superpower is modelling other people's minds, about the one who regularly had to submit to having her thoughts inspected by a supervisor. (Also, notice that the terrible thing is how she felt, not what she was subjected to. It's kinda her fault for being so irrationally sensitive, am I right? /s)

comment by Geoff_Anders · 2021-10-14T01:33:09.398Z · LW(p) · GW(p)

Hi everyone. I wanted to post a note to say first, I find it distressing and am deeply sorry that anyone had such bad experiences. I did not want or intend this at all.

I know these events can be confusing or very taxing for all the people involved, and that includes me. They draw a lot of attention, both from those with deep interest in the matter, where there may be very high stakes, and from onlookers with lower context or less interest in the situation. To hopefully reduce some of the uncertainty and stress, I wanted to share how I will respond.

My current plan (after writing this note) is to post a comment about the above-linked post. I have to think about what to write, but I can say now that it will be brief and positive. I’m not planning to push back or defend. I think the post is basically honest and took incredible courage to write. It deserves to be read.

Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.

It may be useful to address the Leverage/Rationality relation or the Leverage/EA relation as well, but discussion of that might distract us from what is most important right now.

Replies from: ChristianKl, Freyja, throwaway2456
comment by ChristianKl · 2021-10-14T08:59:16.221Z · LW(p) · GW(p)

Given what the post said about the NDA that people signed when leaving, it seems to me like explicitly releasing people from that NDA (maybe with a provision to anonymize names of other people) would be very helpful for having a productive discussion that can integrate the experiences of many people into public knowledge and create a shared understanding of what happened.

comment by Freyja · 2021-10-16T17:03:59.138Z · LW(p) · GW(p)

Hi Geoff—have you posted the brief response comment anywhere yet?

Replies from: Benito, Geoff_Anders
comment by Ben Pace (Benito) · 2021-10-17T00:36:46.268Z · LW(p) · GW(p)

I would also be interested in knowing a timeline for the response.

comment by Geoff_Anders · 2021-10-17T11:23:08.521Z · LW(p) · GW(p)

Yes, here: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=3gMWA8PjoCnzsS7bB

comment by throwaway2456 · 2021-10-15T20:19:25.622Z · LW(p) · GW(p)
  • For Geoff or Reserve: What is the relationship between Leverage and Reserve, and related individuals and entities?
  • For everyone: Under what conditions does restitution to ex-Leveragers make sense? Under what conditions does it make sense for leadership to divest themselves of resources?
  • For everyone: Arguendo, what could restitution or divestment concretely look like?

Edit: I was going to leave the original comment, to provide context to Vaniver’s reply. But it started receiving upvotes that brought it above “-1", making it a more prominent bad example of community norms. I think the upvotes indicate that the essence of the questions is important, but their form was ill-considered and rushed to judgement. In compromise, I've tried to rewrite them more neutrally and respectfully to all involved. I may revisit them a few more times.

Replies from: Vaniver, farp
comment by Vaniver · 2021-10-15T21:33:48.458Z · LW(p) · GW(p)

I wanted to note that I think this comment both a) raises a good point (should Leverage pay restitution to people that were hurt by it? Why and how much?) and b) does so in a way that I think is hostile and assumes way more buy-in than it has (or would need to get support for its proposal).

First, I think most observers are still in "figuring out what's happened" mode. Was what happened with Zoe unusually bad or typical, predictable or a surprise? I think it makes sense to hear more stories before jumping to judgment, because the underlying issue isn't that urgent and the more context, the wiser a decision we can make.

Second, I think a series of leading questions asked to specific people in public looks more like norm enforcement than it does like curious information-gathering, and I think the natural response is suspicion and defensiveness. [I think we should go past the defensiveness and steelman.]

Third, I do think that it makes sense for people to make things right with money when possible; I think that this should be proportional to damages done and expectations of care, rather than just 'who has the money.' Suppose, pulling these numbers out of a hat, the total damage done to Leverage employees (as estimated by them) was $1M and the total value of Geoff's tokens are $10M; the presumption that the tokens should all go to the victims (i.e. that the value of his tokens is equal to the amount of damage done) seems about as detached from reality to me as the assumption that the correct amount of restitution is 0. On a related note, some large amount of the Leverage experience appears to have been self-experimentation; I think the amount we should expect Geoff to be responsible should take into account how much responsibility the participants thought they were taking for themselves (while not just assuming that they were making an informed call and their initial estimate should be our final one). 

Replies from: throwaway2456, farp
comment by throwaway2456 · 2021-10-15T22:32:24.613Z · LW(p) · GW(p)

In retrospect, I apologize for the strident tone and questions in my original comment. I am personally worried about further harm, in uses of money or power by Anders, and from Zoe's post it seems like anywhere from a handful to many more people were hurt. If money or tokens are possibly causally downstream of harm, restitution might reduce further harm and address harm that's already taken place. The community is doing ongoing information gathering, though, and my personal rush to judgement isn't keeping pace with that. I'll leave my above comment as is, since it's already received a constructive reply.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-10-17T00:37:42.363Z · LW(p) · GW(p)

I appreciate you addressing Vaniver's concerns about your comment.

comment by farp · 2021-10-17T07:09:26.856Z · LW(p) · GW(p)

Suppose, pulling these numbers out of a hat, the total damage done to Leverage employees (as estimated by them) was $1M and the total value of Geoff's tokens are $10M; the presumption that the tokens should all go to the victims (i.e. that the value of his tokens is equal to the amount of damage done) seems about as detached from reality to me as the assumption that the correct amount of restitution is 0.

The counter argument would be:

Suppose we do not think it should be profitable to start a cult and get rich. If we enforce the norm "if we find out you started a cult and got rich off it, you only get to be 90% rich instead of 100% rich", well, that is not very powerful. Maybe the rest should go to actually-effective charity or something.

That said, a norm where we say "you don't get to be rich anymore" is sort of moot when ultimately Geoff has all the Leverage 🥁💥

comment by farp · 2021-10-17T07:01:43.392Z · LW(p) · GW(p)

I am sad that you have deleted your original comment because it was my favorite comment in this whole page! Your updated version, by comparison, is much worse (no offense). 

Look, I think once you are trying to express the idea "I think you should pay millions of dollars to the people you have very badly harmed", you should not be so concerned about whether you are doing so in a "hostile" way. I hope we can all appreciate the comedy in this even if you think neutrality is ultimately better.

I agree that your new version is more norm-conformant, but I am curious if you think it is an equally thought-provoking / persuasive / useful presentation of the ideas.

I also think that your new version is inadequate for leaving out the important context that Reserve probably made a lot of money.

comment by alyssavance · 2021-10-14T03:52:09.228Z · LW(p) · GW(p)

EDIT: This comment described a bunch of emails between me and Leverage that I think would be relevant here, but I misremembered something about the thread (it was from 2017) and I'm not sure if I should post the full text so people can get the most accurate info (see below discussion), so I've deleted it for now. My apologies for the confusion.

Replies from: Aella, RobbBB
comment by Aella · 2021-10-14T05:50:18.843Z · LW(p) · GW(p)

Would you happen to have/be willing to share those emails?

Replies from: alyssavance
comment by alyssavance · 2021-10-14T06:51:35.947Z · LW(p) · GW(p)

I have them, but I'm generally hesitant to share emails as they normally aren't considered public. I'd appreciate any arguments on this, pro or con

Replies from: habryka4, cata
comment by habryka (habryka4) · 2021-10-14T07:14:06.277Z · LW(p) · GW(p)

I generally feel reasonably comfortable sharing unsolicited emails, unless the email makes some kind of implicit request to not be published that I judge at least vaguely valid. In general I am against "default confidentiality" norms, especially for requests or things that might be kind of adversarial. I feel like I've seen those kinds of norms weaponized in the past in ways that seemed pretty bad. While there is a generally broad default expectation that unsolicited private communication will be kept confidential, it's not a particularly sacred protection in my mind. (If confidentiality is explicitly or implicitly requested, I would talk to the person first to get a more comprehensive understanding of why they requested it, and would generally err on the side of not publishing, though I would feel comfortable overcoming that barrier given sufficiently adversarial action.)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-14T08:22:39.667Z · LW(p) · GW(p)

unless the email makes some kind of implicit request to not be published

What does "implicit request" mean here? There are a lot of email conversations where no one writes a single word that's alluding to 'don't share this', but where it's clearly discussing very sensitive stuff and (for that reason) no one expects it to be posted to Hacker News or whatever later.

Without having seen the emails, I'm guessing Leverage would have viewed their conversation with Alyssa as 'obviously a thing we don't want shared and don't expect you to share', and I'm guessing they'd confirm that now if asked?

I do think that our community is often too cautious about sharing stuff. But I'm a bit worried about the specific case of 'normalizing big infodumps of private emails where no one technically said they didn't want the emails shared'.

(Maybe if you said more about why it's important in this specific case? The way you phrased it sort of made it sound like you think this should be the norm even for sensitive conversations where no one did anything terrible, but I assume that's not your view.)

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-15T01:58:15.743Z · LW(p) · GW(p)

What does "implicit request" mean here?

I don't know, kind of complicated, enough that I could probably write a sequence on it, and not even sure I would have full introspective access into what I would feel comfortable labeling as an "implicit request".

I could write some more detail, but it's definitely a matter of degree, and the weaker the level of implicit request, the weaker the reason for sharing needs to be, with some caveats about adjusting for people's communication skills, adversarial nature of the communication, adjusting for biases, etc.

Replies from: Spiracular
comment by Spiracular · 2021-10-17T13:38:13.144Z · LW(p) · GW(p)

I want to throw out that while I am usually SUPER on team "explicit communication norms", the rule-nuances of the hardest cases might sometimes work best if they are a little chaotic & idiosyncratic.

I personally think there might be something mildly-beneficial and protective, about having "adversarial case detected" escape-clauses that vary considerably from person-to-person.

(Otherwise, a smart lawful adversary can reliably manipulate the shit out of things.)

comment by cata · 2021-10-14T08:28:27.590Z · LW(p) · GW(p)

I would just ask the other party whether they are OK to share rather than speculating about what the implicit expectation is.

comment by Rob Bensinger (RobbBB) · 2021-10-14T04:12:00.820Z · LW(p) · GW(p)

?!?!?!?!?!?!?!?!?!

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-14T08:35:34.151Z · LW(p) · GW(p)

Update: Looks like the thing I was surprised by didn't happen. Confusion noticed, I guess!

comment by cousin_it · 2021-10-13T08:59:36.344Z · LW(p) · GW(p)

Good stuff. Very similar to DeMille's interview about Hubbard. As an aside, I love how the post rejects the usual positive language about "openness to experience" and calls the trait what it is: openness to influence.

comment by Raemon · 2021-10-16T01:22:05.206Z · LW(p) · GW(p)

My own experience is somewhat like Linch's here [LW(p) · GW(p)], where mostly I'm vaguely aware of some things that aren't my story to tell.

For most of the past 9ish years I'd found Leverage "weird/sometimes-offputting, but not obviously moreso than other rationality orgs." I have gotten personal value out of the Leverage suite of memes and techniques (Belief Reporting was a particularly valuable thing to have in my toolkit). 

I've received one bit of secondhand info about "An ex-Leverage employee (not Zoe) had an experience that seemed reasonable to describe as 'the bad kind of cult that was actually harmful'." I was told this as part of a decisionmaking process where it seemed relevant, and asked not to share it further in the past couple years. I think it makes sense to share this much meta-data in this context.

comment by Dustin · 2021-10-13T23:27:38.384Z · LW(p) · GW(p)

While I'm not hugely involved, I've been reading OB/LW since the very beginning. I've likely read 75% of everything that's ever been posted here.

So, I'm way more clued-in to this and related communities than your average human being and...I don't recall having heard of Leverage until a couple of weeks ago.

I'm not exactly sure what that means with regard to PR-esque type considerations.

However.  Fair or not, I find that, having read the recent stuff, I've got an ugh field that extends slightly to include LW.  (I'm not sure what it means to "include LW"...it's just a website.  My first stab at an explanation is it's more like "people engaged in community type stuff who know IRL lots of other people who communicate on LW", but that's not exactly right either.)

I think it'd be good to have some context on why any of this is relevant to LessWrong. The whole thing is generating a ton of activity and it feels like it just came out of nowhere. 

Replies from: vanessa-kosoy, RobbBB, ozziegooen, agc
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-14T09:34:02.517Z · LW(p) · GW(p)

Personally I think this story is an important warning about how people with a LW-adjacent mindset can death spiral [? · GW] off the deep end. This is something that happened around this community multiple times, not just in Leverage (I know of at least one other prominent example and suspect there are more), so we should definitely watch out for this and/or think how to prevent this kind of thing.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-14T10:17:03.716Z · LW(p) · GW(p)

What's the other prominent example you have in mind?

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-14T10:46:50.067Z · LW(p) · GW(p)

I am referring to the cause of this incident. This seems like a possibly good source for more information, but I only skimmed it so don't vouch for the content.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-14T17:15:34.939Z · LW(p) · GW(p)

Thanks.

comment by Rob Bensinger (RobbBB) · 2021-10-13T23:49:44.209Z · LW(p) · GW(p)

Leverage has always been at least socially adjacent to LW and EA (the earliest discussion I find is in 2012 [LW · GW]), and they hosted the earliest EA summits in 2013-2014 (before CEA started running EA Global).

Replies from: Dustin
comment by Dustin · 2021-10-14T00:03:27.736Z · LW(p) · GW(p)

Having seen it, I have a very vague recollection of maybe having read that at the time.  Still, the amount of activity on the recent posts about Leverage seems to me all out of proportion with previous mentions/discussions.  

Replies from: Freyja
comment by Freyja · 2021-10-14T00:07:05.036Z · LW(p) · GW(p)

Also, for the extended Leverage diaspora and people who are somehow connected, LessWrong is probably the most obvious place to have this discussion, even if people familiar with Leverage make up only a small proportion of people who normally contribute here.

There are other conversations happening on Facebook and Twitter but they are all way more fragmented than the ones here.

Replies from: BayAreaHuman
comment by BayAreaHuman · 2021-10-14T01:06:55.178Z · LW(p) · GW(p)

I originally chose LessWrong, instead of some other venue, to host the Common Knowledge post primarily because (1) I wanted to create a publicly-linkable document pseudonymously, and (2) I expected high-quality continuation of information-sharing and collaborative sense-making in the comments.

comment by ozziegooen · 2021-10-14T14:32:14.528Z · LW(p) · GW(p)

As someone part of the social communities, I can confirm that Leverage was definitely a topic of discussion for a long time around Rationalists and Effective Altruists. That said, often the discussion went something like, "What's up with Leverage? They seem so confident, and take in a bunch of employees, but we have very little visibility." I think I experienced basically the same exact conversation about them around 10 times, along these lines.

As people from Leverage have said, several Rationalists/EAs were very hostile around the topic of Leverage, particularly in the last ~4 years or so. (I've heard stories of people getting shouted at just for saying they worked at Leverage at a conference). On the other hand, they definitely had support by a few rationalists/EA orgs and several higher-ups of different kinds.

They've always been secretive, and some of the few public threads didn't go well for them, so it's not too surprising to me that they've had a small LessWrong/EA Forum presence.

I've personally very much enjoyed mostly staying away from the controversy, though very arguably I made a mistake there.

(I should also note that I had friends who worked at or worked close to Leverage, I attended like 2 events there early on, and I applied to work there around 6 years ago)

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2021-10-15T04:01:59.379Z · LW(p) · GW(p)

For what it's worth, my opinion is that you sharing your perspective is the opposite of making a mistake.

Replies from: ozziegooen
comment by ozziegooen · 2021-10-15T04:21:26.900Z · LW(p) · GW(p)

Sorry, edited. I meant that it was a mistake for me to keep away before, not now.

(That said, this post is still quite safe. It's not like I have scandalous information, more that, technically I (or others) could do more investigation to figure out things better.)

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2021-10-15T04:27:26.210Z · LW(p) · GW(p)

Yeah. At this point, everyone coming together to sort this out, building a virtuous spiral where speaking up feels safe enough that it doesn't even need to be a courageous thing to do, is the kind of thing I think your comment also represents and what I was getting at. 

comment by agc · 2021-10-14T13:32:49.852Z · LW(p) · GW(p)

A 2012 CFAR workshop [LW · GW] included "Guest speaker Geoff Anders presents techniques his organization has used to overcome procrastination and maintain 75 hours/week of productive work time per person." He was clearly connected to the LW-sphere if not central to it.

comment by farp · 2021-10-15T05:52:08.684Z · LW(p) · GW(p)

Re: @Ruby on my brusqueness

LW/EA has more "world saving" orgs than just Leverage. Implicit to "world saving" orgs, IMO, is that we should tolerate some impropriety for the greater good. Or that we should handle things quietly in order to not damage the greater mission. 

I think that our "world saving" orgs ask a lot of trust from the broader community -- MIRI is a very clear example. I'm not really trying to condemn secrecy; I am just pointing out that trust is asked of us.

I recognize that this is inflammatory but I don't see a reason to beat around the bush:
Leverage really seems like a cult. It seems like an unsafe institution doing harmful things. I am not sure how much this stuff about Leverage is really news to people involved in our other "world saving" orgs. I think probably not much. I don't want "world saving" orgs to have solidarity. If you want my trust you have to sell out the cult leaders, the rapists, etcetera, regardless of whether it might damage your "world saving" mission. I'm not confident that that's occurring.

Replies from: Ruby, farp, Dustin
comment by Ruby · 2021-10-15T15:51:22.988Z · LW(p) · GW(p)

IMO, is that we should tolerate some impropriety for the greater good.

I agree!

I am just pointing out that trust is asked of us.

I agree!

Leverage really seems like a cult. It seems like an unsafe institution doing harmful things.

Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0 (remote team, focus on science history rather than psychology, 4 people).

I am not sure how much this stuff about Leverage is really news to people involved in our other "world saving" orgs.

The information in Zoe's Medium post was significant news to me and others I've spoken to. 

(saying the below for general clarity, not just in response to you)

I think everyone (?) in this thread is deeply concerned, but we're hoping to figure out what exactly happened, what went wrong and why (and what maybe to do about it). To do that investigation and postmortem, we can't skip to sentencing (forgive me if that's not your intention, but it reads a bit to me that that's what you want to be happening), nor would it be epistemically virtuous or just to do so. 

Some major new information came to light, people need time to process it, surface other relevant information, and make statements. The matter is complicated by forces inhibiting people from speaking both in favor and against Leverage. If there's any reluctance to "sell out" Leverage, it's because people want to have the full conversation first, not because of any sense of solidarity that we're all "world saving" orgs.

Replies from: mayleaf, farp
comment by mayleaf · 2021-10-15T23:44:51.515Z · LW(p) · GW(p)

To do that investigation and postmortem, we can't skip to sentencing (forgive me if that's not your intention, but it reads a bit to me that that's what you want to be happening), nor would it be epistemically virtuous or just to do so. 

I super agree with this, but also want to note that I feel appreciation for farp's comments here. The conversation on this page feels to me like it has a missing mood: I found myself looking for comments that said something like "wow, this account is really horrifying and tragic; we're taking these claims really seriously, and are investigating what actions we should take in response". Maybe everyone thinks that that's obvious, and so instead is emphasizing the part where we're committed to due process and careful thinking and avoiding mob dynamics. But I think it's still worth stating explicitly, especially from those in leadership positions in the community. I found myself relieved just reading Ruby's response here that "everyone in this thread is deeply concerned".

Replies from: Ruby, RobbBB
comment by Ruby · 2021-10-16T00:40:46.923Z · LW(p) · GW(p)

I super agree with this, but also want to note that I feel appreciation for farp's comments here.

Fair!

I found myself looking for comments that said something like "wow, this account is really horrifying and tragic; we're taking these claims really seriously, and are investigating what actions we should take in response"

My models of most of the people I know in this thread feel that way. I can say on my own behalf that I found Zoe's account shocking. I found it disturbing to think that was going on with people I knew and interacted with.  I find it disturbing that if this really is true, how did it not surface until now? (Or how was it ignored until now?)  I'm disturbed that Leverage's weirdness (and usually I'm quite okay with weirdness) turned out to enable and hide terrible things, at least for one person and likely more. I'm saddened that it happened, because based on the account, it seems like Leverage were trying to accomplish some ambitious, good things and I wish we lived in a world where the "red flags" (group-living, mental experimentation, etc) were safely ignored in the service of great things. 

Suddenly I am in a world more awful than the one I thought I was in, and I'm trying to reorient. Something went wrong and something different needs to happen now. Though I'm confident it will, it's just a matter of ensuring we pick the right different thing. 

Replies from: mayleaf
comment by mayleaf · 2021-10-16T00:47:26.949Z · LW(p) · GW(p)

Thank you, I really appreciate this response. I did guess that this was probably how you and others (like Anna, whose comments have been very measured) felt, but it is really reassuring to have it explicitly verbally confirmed, and not just have to trust that it's probably true.

comment by farp · 2021-10-15T22:18:31.588Z · LW(p) · GW(p)

The information in Zoe's Medium post was significant news to me and others I've spoken to. 

That's a good thing to assert. 
It seems preeeetty likely that some leaders in the community knew more or less what was up. I want people to care about whether that is true or not.

To do that investigation and postmortem, we can't skip to sentencing

I get this sentiment, but at the same time I think it's good to be clear about what is at stake. It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us. 

Simply put, if I were a victim, I would want to speak up for the sake of accountability, not shared examination and learning. If I spoke up and found that everyone agreed the behavior was bad, but we all learned from it and are ready to move on, I would be pretty upset by that. And my understanding is that this is how the community's leaders have handled other episodes of abuse (based on 0 private information, only public / second hand information).

But I am coming into this with a lot of assumptions as an outsider. If these assumptions don't resonate with any people who are closer to the situation then I apologize. Regardless, sorry for stirring shit up with not much concrete to say. 

Replies from: Viliam, Ruby
comment by Viliam · 2021-10-16T21:05:28.013Z · LW(p) · GW(p)

It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us.

Given my high priors on "past behavior is the best predictor of future behavior", I would assume that the greatest difference will be better OPSEC and PR [LW · GW]. Also, more resources to silence critics.

comment by Ruby · 2021-10-15T22:49:35.889Z · LW(p) · GW(p)

It seems preeeetty likely that some leaders in the community knew more or less what was up. I want people to care about whether that is true or not.

I would be quite surprised if the people I would call leaders knew of things that were as severe as Zoe's account and "did nothing". I care a lot whether that's true.

It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us. 

My intention was to say that we don't have reason to believe there is harm actively occurring right now that we need to intervene on immediately. A day or two to figure things out is fine.

Simply put, if I were a victim, I would want to speak up for the sake of accountability, not shared examination and learning.

Based on what Zoe said plus general models of these situations, I believe how victims feel is likely complicated. I'm hesitant to make assumptions here. (Btw, see here for where some people are trying to set up an anonymous database of experiences at Leverage).

And my understanding is that this is how the community's leaders have handled other episodes of abuse (based on 0 private information, only public / second hand information).

I might suggest creating another post (so as to not interfere too much with this one) detailing what you believe to be the case so that we can discuss and figure out any systematic issues.

Replies from: farp, ChristianKl
comment by farp · 2021-10-17T05:54:28.123Z · LW(p) · GW(p)

I might suggest creating another post (so as to not interfere too much with this one) detailing what you believe to be the case so that we can discuss and figure out any systematic issues.

Look uhhh I believe at the very least the most basic claims about how Anna handled Robert Lecnik.

I would be quite surprised if the people I would call leaders knew of things that were as severe as Zoe's account and "did nothing". I care a lot whether that's true.

👍 (non sarcastic)

Replies from: philh
comment by philh · 2021-10-17T23:33:17.895Z · LW(p) · GW(p)

ὄD

(This renders on my phone as an o with a not-umlaut-but-similar over it followed by a D, and I don't know whether that's what it was intended to look like and I just don't know what it means, or if it's intended to look different than that.)

Replies from: farp
comment by farp · 2021-10-17T23:45:50.573Z · LW(p) · GW(p)

it's a thumbsup emoji on mac OS. 👍

comment by ChristianKl · 2021-10-18T21:59:11.194Z · LW(p) · GW(p)

Based on what Zoe said plus general models of these situations, I believe how victims feel is likely complicated. I'm hesitant to make assumptions here. (Btw, see here for where some people are trying to set up an anonymous database of experiences at Leverage).

Having a database run by an anonymous person for that purpose seems to be very questionable. Zoe's edited her post to reference Aella as a point person for people who want to share their stories, so that's likely the best place.

Replies from: Ruby
comment by Ruby · 2021-10-18T22:16:19.607Z · LW(p) · GW(p)

That is the database run by Aella. By anonymous I meant it's anonymous for the posters.

comment by farp · 2021-10-15T06:04:58.835Z · LW(p) · GW(p)

That's my context. However I agree that my contributions haven't been very high EV in that I'm very far on the outside of a delicate situation and throwing my weight around. So I won't keep trying to intervene / subtextually post.

comment by Dustin · 2021-10-15T21:44:58.635Z · LW(p) · GW(p)

we should tolerate some impropriety for the greater good

 

On one level I think this is correct, but...I also think it's possibly a little naïve.  

In a world consisting only of "us", the people who think this world saving needs doing and who think like "we" do, your statement becomes more true. In a world wherein the vast majority of people think the world saving we're talking about is unimportant, or bad, or evil, your statement requires closer and closer to perfect secrecy and insularity to remain true.