Here is an example:
- Zoe's report says of the information-sharing agreement: "I am the only person from Leverage who did not sign this, according to Geoff who asked me at least three times to do so, mentioning each time that everyone else had (which read to me like an attempt to pressure me into signing)."
- I have spoken to another Leverage member who was asked to sign, and did not.
- The email from Matt Fallshaw says the document "was only signed by just over half of you". Note that the recipients list includes people (such as Kerry Vaughan) who were probably never asked to sign because they were not present, but I would believe that such people are in the minority; so this isn't strict confirmation, but just increased likelihood, that Geoff was lying to Zoe.
This is lying to someone within the project. I would subjectively anticipate higher willingness to lie to people outside the project, but I don't have anything tangible I can point to about that.
I am more confident that what I heard was "Geoff exhibits willingness to lie". I also wouldn't be surprised if what I heard was "Geoff reports being willing to lie". I didn't tag the information very carefully.
Based on broad-strokes summaries said to me by ex-Leveragers (though admittedly not first-hand experience), I would say that the statement "Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers" rings true to what I have heard.
Some things mentioned to me by Leverage people as typical/archetypal of Geoff's attitude include being willing to lie to people outside Leverage, feeling attacked or at risk of being attacked, and viewing adjacent non-Leverage groups within the broader EA sphere as enemies.
Thanks, this all helps. At the time, I felt that writing this with the meta-disclosures you're describing would've been a tactical error. But I'll think on this more; I appreciate the input, it lands better this time.
I did write both "I know former members who feel severely harmed" and "I don't want to become known as someone saying things this organization might find unflattering". But those are both very, very understated, and purposefully de-emphasized.
I appreciate this invitation. I'll re-link to some things I already said on my own stance: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QKKnepsMoZmmhGSe
Beyond what I laid out there:
- It was challenging being aware of multiple stories of harm, and feeling compelled to warn people interacting with Geoff, but not wanting to go public with surprising new claims of harm. (I did mention awareness of severe harm very understatedly in the post. I chose instead to focus on "already known" properties that I feel substantially raise the prior on the actually-observed type of harm, and to disclose in the post that my motivation in cherry-picking those statements was to support pattern-matching to a specific template of harm.)
- After posting, it was emotionally a bit of a drag to receive comments complaining that the information-sharing attempt was not done well enough, and comparatively few comments grateful that I attempted to share what I could, as best I could at the time, although the upvote patterns felt encouraging. I was pretty much aware that that was what was going to happen. In general, "flinching in anticipation of a high criticism-to-gratitude ratio" is an overall feeling I have when I imagine posting anything on LessWrong.
- I was told by friends before posting that I ought to consider the risk to myself and to my contacts of tangible real-world retribution. I don't have any experience with credible risk of real-world retribution. It feels mind-numbing.
- Meta: I haven't felt fully comfortable describing retribution concerns, including in the post, because I haven't been able to rule out that revealing the tactical landscape of why I'm sharing or avoiding certain details is simply more information that can be used by Geoff and associates to make life harder for people pursuing clarity. This is easier now that Zoe has written firsthand about specific retribution concerns.
- Meta-meta: It doesn't feel great to talk about all this paranoid adversarial retribution thinking, because I don't want to contribute to the spread of paranoia and adversarial thinking. It feels contagious. Zoe describes a very paranoid atmosphere within Leverage and among those who left, and I feel that attesting to a strategically-aware disclosure pattern carries that toxic vibe into new contexts.
I originally chose LessWrong, instead of some other venue, to host the Common Knowledge post primarily because (1) I wanted to create a publicly-linkable document pseudonymously, and (2) I expected high-quality continuation of information-sharing and collaborative sense-making in the comments.
I appreciate hearing clearly what you'd prefer to engage with.
I also feel that this response doesn't adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people's desire for privacy.
( ... which makes me feel sad, discouraged, and frustrated. It comes across as "why didn't you just say X", when there are in fact strong reasons why I couldn't "just" say X.)
By "tactically adversarial", I mean that Geoff has an incredibly strong incentive to suppress clarity, and make life harder for people contributing to clarity. Zoe's post goes into more detail about specific fears.
By "desire for privacy", I mean I can't publicly lay out a legible map of where I got information from, or even make claims that are specific enough that they could've only come from one person, because the first-hand sources do not want to be identifiable.
Unlike former members, Pareto fellows, workshop attendees, and other similar commenters here, I did not personally experience anything first-hand that is "truly mine to share".
It was very difficult for me to create a document that I felt comfortable making public, without feeling I was compromising the identity of any primary source. I had to stick to statements that were so generic and "commonly known" that they could not be traced back to any one person without that person's express permission.
I agree it's really hard to engage with such statements. In general it's really hard to make epistemic headway in an environment in which people fear serious personal repercussions and direct retribution for contributing to clarity.
I, too, find the whole epistemic situation frustrating. Frustration was my personal motivation for creating this document: people I spoke to, who were interacting with Geoff in the present day, were entirely unaware of any yellow flags around Geoff.
My hope is that inch by inch, step by step, more and more truth and clarity can come out, as more and more people become comfortable sharing their personal experience.
Thanks for this. I think these distinctions are important.
Let me clarify: In this post when I say "Common knowledge among people who spent time socially adjacent to Leverage", what I mean is:
- I heard these directly from multiple different Leverage members.
- When I said these to others, they shared they had also heard the same things directly from other Leverage members, including members other than the ones I had spoken to.
- I was in groups of people where we all discussed that we had all heard these things directly from Leverage members. Some of these discussions included Leverage members who affirmed these things.
I believe there are several dozen people in the set of people this is true of.
So I did mean "People in my circles all know that we all know these things", and by "know" I meant "believe, with sourcing to multiple independent first-hand witnesses".
I do not count you as being in the "common knowledge" set, since your self-report is that you lightly believed these things based on third-hand information that was "widely rumored", rather than having been directly told by a member, witnessing others being directly told by members, or having people tell you they were directly told by members.
It also seems that yet other Leverage members, quite possibly separate from the ones we all spoke to, are publicly claiming that some of these things aren't true to their own experience.
My current understanding is that members' experiences differed by subgroup they were part of, at particular points in time. (See e.g. in another comment "(Hedge: there were two smaller training groups where I believe it was a norm for members of the group to train each other. I wasn’t part of those groups and can’t speak to them.)"). So, it's likely that the social circle I'm speaking about had an understanding that was specific to a particular time period, based on reports from members involved in a particular slice of the organization.
Now that Zoe's Medium post is public, there exists for the first time a public first-hand report of many of these statements. So the indirection required to make the claims in this post is no longer quite as necessary. But in the absence of any member yet willing to attest publicly to these first-hand, making the most {defensible x useful} second-hand claims I was able to seemed like a productive step.
Completely fair. I've removed "facts" from the title, and changed the sub-heading "Facts I'd like to be common knowledge" (which in retrospect is too pushy a framing) to "Facts that are common knowledge among people I know".
I totally and completely endorse and co-sign "if people bring forward their personal impressions as different to the OP, this should in large part be treated as more data, and not a challenge."
Thank you for this.
In retrospect, I could've done more in my post to emphasize:
- Different members report very different experiences of Leverage.
- Just because these bullets enumerate what is "known" (and "we all know that we all know") among "people who were socially adjacent to Leverage when I was around", does not mean it is 100% accurate or complete. People can "all collectively know" something that ends up being incomplete, misleading, or even basically false.
I think my experience really mismatched the picture of Leverage described by OP.
I fully believe this.
It's also true that I had at least 3 former members, plus a large handful of socially-adjacent people, look over the post, and they all affirmed that what I had written was true to their experience; fairly obvious or uncontroversial; and they expected would be held to be true by dozens of people. Comments on this post attest to this, as well.
I don't advocate for an epistemic standard under which a single person, short of a full investigative-journalism dive, is expected to do more verification than that before sharing their current understanding publicly and soliciting more information in the comments.
Saying the same thing a different way: The post summarizes an understanding that dozens of people all share. If we're all collectively wrong, I don't advocate for a posting standard that requires the poster to somehow determine we're wrong, by some method other than soliciting more information in a public forum, before bringing our best current understanding to that forum.
I am glad that this post is leading to a broader and more transparent conversation, and more details coming to light. That's exactly what I wanted to happen. It feels like the path forward, in coming to a better collective understanding.
Thank you again for your clear and helpful contribution.
I have now made an even more substantial edit to that bullet point.
Hi Larissa -
Dangers and harms from psychological practices
Please consider that the people who most experienced harms from psychological practices at Leverage may not feel comfortable emailing that information to you. Given what they experienced, they might reasonably expect the organization to use any provided information primarily for its own reputational defense, and to discredit the harmed parties.
Dating policies
Thank you for the clarity here.
Charting/debugging was always optional
This is not my understanding. My impression is that individual trainers established a strong expectation with their trainees, and that charting was generally done during the hiring process, even if the stated policy was that it was not mandatory.
I would offer that "normal charting" as offered to external clients was being done in a different incentive landscape than "normal charting" as conducted on trainees within the organization. I mean both incentives on the trainer, and incentives on the trainee.
Concretely, incentives-wise:
- The organization has an interest in ensuring that the trainee updates their mind and beliefs to accord with what the organization thinks is right/good/true, what the organization thinks makes a person "effective", and what the organization needs from the member.
- The trainee may reasonably believe they could be de-funded, or at least reduced in status/power in the org, if they do not go along.
I added a sub-bullet to the main post, to clarify my epistemic status on that point.
This is also a useful resource, and the pingbacks link to other resources.
I want to gesture at "The Plan" (best viewed as a PDF), linked from Gregory Lewis's comment (https://forum.effectivealtruism.org/posts/qYbqX3jX4JnTtHA5f/leverage-research-reviewing-the-basic-facts?commentId=8goitqWAZfEmEDrBT), as supporting evidence for the explicit "take over the world" vibe, in terms of how exactly beneficial outcomes for humanity were meant to result from the project.