Common knowledge about Leverage Research 1.0

post by BayAreaHuman · 2021-09-24T06:56:14.729Z · LW · GW · 212 comments

Contents

  Facts that are common knowledge among people I know:
  Why these particular facts?
  How I know these things
  Focus on structural properties, not impacts or on-net "worth-it-ness".
  Going forward

I've spoken to people recently who were unaware of some basic facts about Leverage Research 1.0; facts that are more-or-less "common knowledge" among people who spent time socially adjacent to Leverage, and are not particularly secret or surprising in Leverage-adjacent circles, but aren't attested publicly in one place anywhere.

Today, Geoff Anders and Leverage 2.0 are moving into the "Progress Studies" space, and seeking funding in this area (see: Geoff recently got a small grant from Emergent Ventures). This seems like an important time to contribute to common knowledge about Leverage 1.0.

You might conclude that I'm trying to discredit people who were involved, but that's not my aim here. My friends who were involved in Leverage 1.0 are people whom I respect greatly. Rather, I just keep being surprised that people haven't heard certain specific, more-or-less legible facts about the past that seem well-known or obvious to me, and that I feel should be taken into account when evaluating Leverage as a player in the current landscape. I would like to create here a publicly-linkable document containing these statements.

Facts that are common knowledge among people I know:

Why these particular facts?

One reason I feel it is important to make these particular facts more legibly known is that they pertain to the characteristics of a "high-demand group" (which is a more specific term than "cult", since people claim all kinds of subcultures and ideologies are a "cult").

You can compare some of the above bullets with the ICSA checklist of characteristics: https://www.icsahome.com/articles/characteristics.

There are many good reasons to structure groups in ways that have some of these characteristics, and to get involved in groups that have these characteristics. But it alarms me if the presence of these characteristics is simply not known by people interacting with Geoff or with Leverage 2.0 in its new and updated mission, and so this information is not taken into account in an evaluation.

How I know these things

Between 2016 and 2018 I became friends with a few Leverage members. I do not feel I was harmed by Leverage in any substantive way. None of the facts above are things that I got from a single point-of-contact; everything I state above is largely already known among people who were socially adjacent to Leverage when I was around.

Focus on structural properties, not impacts or on-net "worth-it-ness".

I try to focus my points above on structural facts about how the organization was set up, rather than what the result was.

I know former members who feel severely harmed by their participation in Leverage 1.0. I also know former members who view Leverage 1.0 as having been a deeply worthwhile experiment in world-improving. I don't think it's even remotely clear how "good" or "bad" the on-net impact of Leverage 1.0 was, and I don't aim here to speak to that. Nor do I aim to judge whether that organization structure was, or was not, "worth trying" because of the potential of "enormous upside".

I do worry about "ends justify the means" reasoning when evaluating whether a person or project was or wasn't "good for the world" or "worth supporting". This seems especially likely when using an effective-altruism-flavored lens in which only a few people/organizations/interventions matter orders of magnitude more than others. If one believes that a project is one of very few projects that could possibly matter, and the future of humanity is at stake - and also believes the project is doing something new/experimental that current civilization is inadequate for - there is a risk of using that belief to extend unwarranted tolerance of structurally-unsound organizational decisions, including those typical of "high-demand groups" (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on), without proportionate concern for the risks of structuring an organization in that way.

Going forward

I'm posting this anonymously because, at the moment, this is all I have to say and I don't want to discuss the topic at length. Also, I don't want to become known as someone saying things this organization might find unflattering. If you happen to know who wrote this post, please don't spread that knowledge. I have asked in advance for a LW moderator to vouch in a comment that I'm someone known to them, who they broadly trust to be epistemically reasonable, and to have written good posts in the past.

If anyone would like to share other information about Leverage 1.0, feel free to do so in the comments section.

212 comments

Comments sorted by top scores.

comment by Aella · 2021-10-13T01:25:54.342Z · LW(p) · GW(p)

Here's a long, detailed account of a Leverage experience which, to me, reads as significantly more damning than the above post: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b
 

Replies from: RobbBB, AnnaSalamon
comment by Rob Bensinger (RobbBB) · 2021-10-13T08:16:36.484Z · LW(p) · GW(p)

Miscellaneous first-pass thoughts:

Geoff had everyone sign an unofficial NDA upon leaving agreeing not to talk badly about Leverage

I really don't like this. Could I see the NDA somehow? If the wording equally forbids sharing good and bad stuff about Leverage, then I'm much less bothered by this. Likewise if the wording forbids going into certain details, but lets former staff criticize Leverage at a sufficient level of abstraction.

Otherwise, this seems very epistemically distorting to me, and in a direction that things already tend to be distorted (there's pressure against people saying bad stuff about their former employer). How am I supposed to form accurate models of Leverage if former employees can't even publicly say 'yeah, I didn't like working at Leverage'??

One of my supervisors would regularly talk about this as a daunting but inevitable strategic reality (“obviously we’ll do it, and succeed, but seems hard”).

"It" here refers to 'taking over the US government', which I assume means something like 'have lots of smart aligned EAs with very Leverage-y strategic outlooks rise to the top decision-making ranks of the USG'. If I condition on 'Leverage staff have a high probability of succeeding here', then I could imagine that a lot of the factors justifying confidence are things that I don't know about (e.g., lots of people already in high-ranking positions who are quietly very Leverage-aligned). But absent a lot of hidden factors like that, this seems very overconfident to me, and I'm surprised if this really was a widespread Leverage view.

In retrospect, the guy clearly needed help (he was talking to G-d, believed he was learning from Kant himself live across time, and felt the project was missing the importance of future contact w/ aliens — this was not a joke)

???? I'm so confused about what happened here. The aliens part (as stated) isn't a red flag for me, but the Kant thing seems transparently crazy to me. I have to imagine there's something being lost in translation here, and missing context for why people didn't immediately see that this person was having a mental breakdown?

For example, it wasn’t uncommon to hear “Connection Theory is the One True Theory of Psychology,” “Geoff is the best philosopher who has ever lived,” “Geoff is maybe the only mind who has ever existed who is capable of saving the world,” or “Geoff’s theoretical process is world-historical.”

A crux for me is that I don't think Geoff's philosophy heuristics are that great. He's a very smart and nimble reasoner, but I'm not aware of any big cool philosophy things from him, and I do think he's very wrong about the hard problem of consciousness (though admittedly I think this is one of the hardest tests of philosophy-heuristics, and breaks a lot of techniques that normally work elsewhere).

If I updated toward 'few if any Leveragers really believed Geoff was that amazing a philosopher', or toward 'Geoff really is that amazing of a philosopher', I'd put a lot less weight on the hypothesis 'Leverage 1.0 was systematically bad-at-epistemics on a bunch of core things they spent lots of time thinking about'.

Replies from: AnnaSalamon, ChristianKl, ChristianKl, ChristianKl, PhilGoetz
comment by AnnaSalamon · 2021-10-13T17:30:08.968Z · LW(p) · GW(p)

???? I'm so confused about what happened here. The aliens part (as stated) isn't a red flag for me, but the Kant thing seems transparently crazy to me. I have to imagine there's something being lost in translation here, and missing context for why people didn't immediately see that this person was having a mental breakdown?

FWIW, my own experience is that people often miss fairly blatant psychotic episodes; so I'm not sure how Leverage-specific the explanation needs to be for this one. For example, once I came to believe that an acquaintance was having a psychotic episode and suggested he see a psychiatrist; the psychiatrist agreed. A friend who'd observed most of the same data I had asked me how I'd known. I said it was several things, but that the bit where our acquaintance said God was talking to him through his cereal box was one of the tip-offs from my POV. My friend's response was "oh, I thought that was a metaphor." I know several different stories like this one, including a later instance where I was among those who missed what in hindsight was fairly blatant evidence that someone was psychotic, none of which involved weird group-level beliefs or practices.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2021-10-14T10:11:23.041Z · LW(p) · GW(p)

I'd guess that the people in question had a mostly normal air to them during the episode, just starting to say weird things?

Most people's conception of a psychotic episode probably involves a sense of the person acting like a stereotypical obviously crazy person on the street. Whereas if it's someone they already know and trust, just acting slightly more eccentric than normal, people seem likely to filter everything the person says through a lens of "my friend's not crazy so if they do sound crazy, it's probably a metaphor or else I'm misunderstanding what they're trying to say".

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-14T15:26:11.390Z · LW(p) · GW(p)

Yes.

comment by ChristianKl · 2021-10-13T09:18:08.801Z · LW(p) · GW(p)

???? I'm so confused about what happened here. The aliens part (as stated) isn't a red flag for me, but the Kant thing seems transparently crazy to me. I have to imagine there's something being lost in translation here, and missing context for why people didn't immediately see that this person was having a mental breakdown?

I would imagine that other people saw his relationship to Kant as something like Kant being a Shoulder Advisor, maybe with additional steps to make it feel more real.

In an environment where some people do seances and use crystals to cleanse negative energy, they might have thought that believing in the realness of rituals makes them more effective. So someone who managed to reach the point of literally believing they were talking to Kant, instead of just to some abstraction of Kant in their own mind, would have seemed more powerful.

I do think they messed up here by not understanding why truth is valuable, but I can see how things played out that way.

comment by ChristianKl · 2021-10-13T11:32:45.158Z · LW(p) · GW(p)

If I condition on 'Leverage staff have a high probability of succeeding here', then I could imagine that a lot of the factors justifying confidence are things that I don't know about (e.g., lots of people already in high-ranking positions who are quietly very Leverage-aligned). But absent a lot of hidden factors like that, this seems very overconfident to me, and I'm surprised if this really was a widespread Leverage view.

They seem to have believed that they could raise people to Musk-level competence. A hundred people with Musk-level competence might execute a plan like the one Cummings proposed to successfully take over the US government.

If they really could transform people in that way, that might be reasonable. Stories like Zoe's, however, suggest that they didn't really have the ability to do that, and that instead their experiments dissolved into strange infighting and losing touch with reality.

comment by ChristianKl · 2021-10-13T08:52:31.336Z · LW(p) · GW(p)

I really don't like this. Could I see the NDA somehow? If the wording equally forbids sharing good and bad stuff about Leverage, then I'm much less bothered by this. Likewise if the wording forbids going into certain details, but lets former staff criticize Leverage at a sufficient level of abstraction.

Interestingly, my comment further down that asks for details about the information sharing practices has very few upvotes ( https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=qqAFyqZrfAdHsuBz4 [LW(p) · GW(p)])

It seems like most people reading this thread are more interested in upvoting judgements than in requests for information.

comment by PhilGoetz · 2021-10-14T17:38:36.125Z · LW(p) · GW(p)

To me, saying that someone is a better philosopher than Kant seems less crazy than saying that saying that someone is a better philosopher than Kant seems crazy.

Replies from: DanielFilan, Linch
comment by DanielFilan · 2021-10-14T20:32:20.110Z · LW(p) · GW(p)

Isn't the thing Rob is calling crazy that someone "believed he was learning from Kant himself live across time", rather than believing that e.g. Geoff Anders is a better philosopher than Kant?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-14T21:20:36.682Z · LW(p) · GW(p)

Yeah, I wasn't talking about the 'better than Kant' thing.

Regarding the 'better than Kant' thing: I'm not particularly in awe of Kant, so I'm not shocked by the claim that lots of random people have better core philosophical reasoning skills than Kant (even before we factor in the last 240 years of philosophy, psychology, etc. progress, which gives us a big unfair advantage vs. Kant).

The part I'm (really quite) skeptical of is "Geoff is the best philosopher who’s ever lived". What are the major novel breakthroughs being gestured at here?

comment by Linch · 2021-10-14T17:40:44.010Z · LW(p) · GW(p)

It's more crazy after you load in the context that people at Leverage think Kant is more impressive than e.g. Jeremy Bentham.

comment by AnnaSalamon · 2021-10-13T03:24:29.172Z · LW(p) · GW(p)

CFAR recently hosted a “Speaking for the Dead” event, where a bunch of current and former staff got together to try to name as much as we could of what had happened at CFAR, especially anything that there seemed to have been (conscious or unconscious) optimization to keep invisible.

CFAR is not dead, but we took the name anyhow from Orson Scott Card’s novel by the same name, which has quotes like:

“...and when their loved ones died, a believer would arise beside the grave to be the Speaker for the Dead, and say what the dead one would have said, but with full candor, hiding no faults and pretending no virtues.”

“A strange thing happened then. The Speaker agreed with her that she had made a mistake that night, and she knew when he said the words that it was true, that his judgment was correct. And yet she felt strangely healed, as if simply saying her mistake were enough to purge some of the pain of it. For the first time, then, she caught a glimpse of what the power of speaking might be. It wasn’t a matter of confession, penance, and absolution, like the priests offered. It was something else entirely. Telling the story of who she was, and then realizing that she was no longer the same person. That she had made a mistake, and the mistake had changed her, and now she would not make the mistake again because she had become someone else, someone less afraid, someone more compassionate.”

“... there were many who decided that their life was worthwhile enough, despite their errors, that when they died a Speaker should tell the truth for them.”

CFAR’s “speaking for the dead” event seemed really good to me. Healing, opening up space for creativity. I hope the former members of Leverage are able to do something similar. I really like and appreciate Zoe sharing all these details, and I hope folks can meet her details with other details, all the details, whatever they turn out to have been.

I don't know what context permits that kind of conversation, but I hope all of us on the outside help create whatever kind of context it is that allows truth to be shared and heard.

Replies from: Duncan_Sabien, Spiracular, TekhneMakre
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-10-13T04:56:30.211Z · LW(p) · GW(p)

I felt strong negative emotions reading the above comment.

I think that the description of CFAR’s recent speaking-for-the-dead leaves readers feeling positive and optimistic and warm-fuzzy about the event, and about its striving for something like whole truth.

I do believe Anna's report that it was healing and spacious for those who were there, and I share Anna's hope that something similarly good can happen re: a Leverage conversation.

But I think I see the description of the event as trying to say something like “here’s an example of the sort of good thing that is possible.”

And I wanted anyone updating on that particular example to know that I was invited to the event, and declined the invitation, explaining that I genuinely could not cause myself to believe that I was actually welcome, or that it would be safe for me to be there.

This is a fact about me, not about the event.  But it seems relevant, and I believe it changes the impression left by the above comment to be more accurate in a way that feels important.

(I was not the only staff alumnus absent, to be clear.)

I ordinarily would not have left this comment at all, because it feels dangerously ... out of control, or something, in that I do not know what the-act-of-having-written-it will do.  I do not understand and have no idea how to navigate the social currents here, and am not going to try.  I will probably not contribute anything further to this thread unless directly asked by someone like Anna or a moderator.

What caused me to speak up anyway, despite feeling scared and in-over-my-head, was the bit in Anna’s other comment, where she said that she hopes people will not “refrain from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments."

EDIT: for context, I worked at CFAR from October of 2015 to October of 2018, and was its curriculum director and head-of-workshops for two of those three years.

Replies from: DPiepgrass
comment by DPiepgrass · 2021-11-10T12:45:27.766Z · LW(p) · GW(p)

The former curriculum director and head-of-workshops for the Center For Applied Rationality would not be welcome or safe at a CFAR event?

What the **** is going on?

It sounds to me like mission failure, but I suppose it could also just be eccentric people not knowing how to get along (which isn't so much different?) 😕

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-11-10T19:38:22.892Z · LW(p) · GW(p)

It's not just people not knowing how to get along.

I am trying to navigate between Scylla and Charybdis, here; trying to adhere to normal social norms of live-and-let-live and employers and employees not badmouthing each other without serious justification and so forth.  Trying to be honest and candid without starting social wars.

But it's not just people not knowing how to get along.  It's something much closer to the gestalt of this comment [LW(p) · GW(p)], although please note that I directly replied to that comment with a lot of disagreements on the level of fact.

comment by Spiracular · 2021-10-14T18:39:16.561Z · LW(p) · GW(p)

I had to read this a few times before I pieced it together, so I wanted to make sure to clarify this publicly.

You are NOT saying this public forum is the place for that. Correct?

You are proposing that it might be nice, if someone else pulled this together?

Perhaps as something like a carefully-moderated facebook group, or an event.

(I think this would require a good moderator, or it will generate more drama than it solves. It would have to be someone who does NOT have "Leverage PR firm vibes," and needs a lot of early clarity about who will not be invited. Also? Work out early what your privacy policy is! And be clear about how much it intends to be reports-oriented or action-oriented, and do not change that status later. People sometimes make these mistakes, and it's awful.)

Because on the off-chance that you didn't mean that...

I did have some contact with the Leverage strangeness here. But despite that, I have remarkably few social ties that would keep me from "saying what I think about it." I still feel seriously reluctant to get into it, on a public forum like this. I imagine that some others would have an even harder time.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-14T19:59:47.197Z · LW(p) · GW(p)

That's right; I am daydreaming of something very difficult being brought together somehow, in person or in writing (probably slightly less easily-visible-across-the-whole-internet writing, if in writing). I’d be interested in helping but don’t have the know-how on my own to pull it off. I agree with you there’re lots of ways to try this and make things worse; I expect it's key to have very limited ambitions and to be clear about how very much one is not attempting/promising.

comment by TekhneMakre · 2021-10-13T11:09:40.903Z · LW(p) · GW(p)

I hope folks can meet her details with other details, all the details, whatever they turn out to have been.

This is an agreeable target, and also, it seems like we have to keep open hypotheses under which many kinds of detail are systematically not shared. E.g., if someone spent some years self-flagellating for remembering details that would contradict a narrative, those details might have not fully crystallized into verbalizable memories. So more detail is better, of course, but assuming that the ("default") asymptote of more detail will be sufficient for anything is fraught, not that anyone made that assumption.

comment by Ben Pace (Benito) · 2021-09-24T06:53:47.905Z · LW(p) · GW(p)

I vouch that this person is both a LW user who has written IMO some good posts and a member of in-person rationalist/longtermist/EA communities who is in good standing.

Edit: This comment is not meant as an endorsement (nor is this a disendorsement) of the content of the post. I generally support LWers and rationalists being able to post pseudonymously and have their identity as longstanding members of the various communities verified.

comment by Beth Barnes (beth-barnes) · 2021-09-28T20:46:57.395Z · LW(p) · GW(p)

Using psychological techniques to experiment on one another, and on the "sociology" of the group itself, was a main purpose of the group. It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one's belief structure, and experimental group dynamics.

The Pareto program felt like it had substantial components of this type of social/psychological experimentation, but participants were not aware of this in advance and did not give informed consent. Some (maybe most?) Pareto fellows, including me, were not even aware that Leverage was involved in any way in running the program until they arrived, and found out they were going to be staying in the Leverage house.

Replies from: juliawise, ChristianKl
comment by juliawise · 2021-09-30T18:08:23.643Z · LW(p) · GW(p)

CEA regards it as one of our mistakes that, although the Pareto Fellowship was a CEA program, our senior management didn't provide enough oversight of how it was being run. To Beth and other participants or applicants who found it misleading or harmful in some way - we're sorry.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-21T14:22:33.656Z · LW(p) · GW(p)

Why doesn't the mistakes page say anything about Leverage being involved with the Pareto Fellowship? Is that a statement that this part wasn't seen as a mistake?

Replies from: juliawise
comment by juliawise · 2021-10-27T17:24:58.145Z · LW(p) · GW(p)

Sorry I missed this - we're working on a couple of updates to the mistakes page, including about this. I can let you know once the new text is up.

Replies from: juliawise
comment by juliawise · 2022-01-14T21:51:17.773Z · LW(p) · GW(p)

The new text is finally up: https://www.centreforeffectivealtruism.org/our-mistakes

comment by ChristianKl · 2021-09-28T21:08:14.436Z · LW(p) · GW(p)

Do you have a link for more description of the Pareto program?

Replies from: beth-barnes, beth-barnes
comment by Beth Barnes (beth-barnes) · 2021-09-29T01:20:25.922Z · LW(p) · GW(p)

The basic outline is:

• There were ~20 Fellows, mostly undergrad-aged, with one younger and a few older.

• Fellows stayed in the Leverage house for ~3 months in summer 2016 and did various trainings, followed by a project with mentorship to apply things learnt from the trainings.

• Training was mostly based on Leverage ideas but also included fast-forward versions of the CFAR workshop and the 80k workshop. Some of the content was taught by Leverage staff and some by CEA staff who were very 'in Leverage's orbit'.

• I think most fellows felt that it was really useful in various ways but also weird and sketchy and maybe harmful in various other ways.

• Several fellows ended up working for Leverage afterwards; the whole thing felt like a bit of a recruiting drive.

comment by prevlev-anon · 2021-10-04T20:50:57.849Z · LW(p) · GW(p)

Hi all, former Leverage 1.0 employee here. 

The original post and some of the comments seem epistemically low quality to me compared to the typical LessWrong standard. In particular, on top of a lot of insinuations, there are some false facts. This seems especially problematic given that the post is billed as common knowledge. 

There’s a lot of dispute and hate directed towards Leverage, which, frankly, has made me hesitant to defend it online. However, a friend of mine in the community recently said something to the effect of, “Well, no former Leverage employee has ever defended it on the attack posts, which I take as an indication of silent agreement.”

That rattled me and so I’ve decided to weigh in. I typically stay quiet about Leverage online because I don’t know how to say nuanced or positive things without fear of that blowing back on me personally. For now, I’d ask to remain anonymous, but if it ever seems like people are willing to approach the Leverage topic differently, I intend to put my name on this post. I don’t expect my opinion alone (especially anonymously) to substantially change anything, but I hope it will be considered and incorporated into a coherent and more complete picture. 

At a macro level, I had a really positive experience at Leverage. I didn’t feel pressured to do self-improvement or use experimental psychology techniques, and I appreciated the freedom to do independent research. I felt I could (and did on several occasions) opt out of the group-dynamics experiments and training, and was largely free to do my own thing. I learned a lot, became much more curious about the world and willing to form and defend my own views, and met some really amazing people. If I ever have kids and tell them about my bold younger years, I fully expect wholesome Leverage stories to be on the list (with no cult-undertones). I found the people to be kind and thoughtful, and the organization as a whole to be broadly supportive and respectful of my wishes and boundaries. The intellectual environment was incredible and the best of my life. The worst part of my Leverage experience was the negativity I experienced from the EA and rationality communities (for example, receipt of hate mail), and the distance that put between me and people I respect.

Overall, I think my experience really mismatched the picture of Leverage described by OP. 

That said, I want to second Freyja’s comment that Leverage was large and pretty decentralized and people’s experiences really differed. I know of at least two former employees who I believe had importantly negative experiences, and that speaks to mistakes made by the organization and its participants. Nonetheless, I think claiming that the OP’s picture above represents common knowledge is importantly wrong and a real disservice to future efforts of rationalists to try to understand Leverage. 

On the object level, here are some comments on aspects of the original bullets that didn’t match my experience: 

• I didn’t feel encouraged or pressured to live at the office as a new hire. I lived there initially because it made it easier to relocate from the East Coast. I moved out shortly after and no one seemed bothered. 

• I didn’t find the information policy I signed overly stringent. I’ve signed confidentiality agreements with multiple normal for-profit companies (that aren’t affiliated with Leverage, EA, or rationality), and this policy was less restrictive than those. It allowed for personal blogs as well as sharing Leverage training techniques and research piecemeal (without approval required). It required permission before publishing the organization’s research online or starting an extended training / coaching relationship with anyone. It also prohibited sharing personal information about hires or information a trainer learned about a client during training / coaching. These rules seemed sensible to me. I had two different outside-of-Leverage romantic partners while I worked at Leverage, and I saw an external counselor. I discussed my experiences at Leverage (and Leverage’s research) with both and didn’t feel I was in violation of the information policy. 

• Charting was not the only self-improvement or psychology technique that Leverage researched or used in training. Focusing, IFS, coherence therapy, CBT tools, deliberate practice, TAPs, meditation, and more were also used and incorporated. Individual researchers and trainers also developed and used a number of their own techniques that were not based on charting. The charting technique Geoff initially developed also underwent a number of changes over the years primarily driven by researchers other than Geoff. Leverage’s training and psychology research was not primarily driven by Geoff or predominantly composed of charting. 

• I had a good experience with all the training I did and did not experience any form of mental fragmentation. I had one very positive experience in particular, where my social anxiety was significantly and stably lessened afterward. Otherwise I found the training beneficial and better than various other self-improvement tools I’ve tried, but unflashy. I was initially hopeful about larger or faster training successes, but I mostly didn’t experience these; good tools for thinking about how to solve my problems, improving my models, and relating to my feelings reliably helped me, but there was no magic self-improvement sauce. 

• It is not true that people were expected to undergo training by their manager. My understanding and experience of the policy and norms were that (1) training / debugging / coaching wasn’t required, (2) if you chose to do training you could choose your trainer (or choose to avoid a particular trainer or set of trainers), (3) “trainer” was a particular job role that did not include being a manager (did not include evaluating performance or determining payroll status), and (4) not all members of the org were trainers or expected to train anyone. Over several years, I switched between trainers several times with no problem and chose to avoid working with certain trainers entirely. (Hedge: there were two smaller training groups where I believe it was a norm for members of the group to train each other. I wasn’t part of those groups and can’t speak to them.)

• Six types of bodywork were researched (that I know of): a bodywork style from NYU’s acting school, energy work done by Luminous Awareness, bodywork styles used by two different independent bodywork practitioners whom people recommended, embodiment and movement techniques (for example, the Alexander Technique and Feldenkrais), body-focused introspection (Focusing), and massage (one researcher looked into and pursued massage certification). Touching in all forms I encountered or heard about was minimal and consensual (like a hand on the back), and not all bodywork involved touch. Several researchers thought bodywork was ineffective and overblown, and several thought it was effective and useful (among those, some thought the change was obvious and legible and some felt the impacts were confusing or hard to pin down). This was a big source of internal disagreement. While I tend to prefer interventions that are on my priors more credible than bodywork and energy healing, there are a lot of anecdotal reports of large positive effects from bodywork and energy healing (like curing chronic pain), and I was glad that some people chose to look into it.

• I did not join Leverage to be a guinea pig for psychological experimentation. I joined because I wanted to research self-improvement techniques and I liked the vision of starting a university for people who wanted to run high impact projects. I was disappointed with how little I learned in college, and I was (and still am) excited about research into different versions of higher education. I thought the training techniques Leverage had were interesting and helpful, but “being experimented on” was not my primary purpose in joining nor would I now describe it as a main focus of my time at Leverage. 

• I did not find the group to be overly focused on “its own sociology.” Most people I interacted with were mostly doing research (including research in the field of history and sociology), ops (accounting, facility maintenance), or training (see tools above), rather than focusing on the group itself. Near the end, there was lots of internal conflict between different teams / subgroups of the organization, which did feel self-indulgent and unhealthy to me. My understanding is that this contributed to the organization being shut down. 

• The stated purpose of Leverage 1.0 was not to literally take over the US and/or global governance or “take over the world,” nor did I believe or feel pressured to believe that the organization would do so. I’ve been told that the original mission was fairly classic EA (improve the world via the most effective interventions), but Leverage took a more abstract-reasoning-oriented and less data-driven approach. I am glad they did this, largely for diversification (though I can see why people object to Leverage taking talent that might otherwise have gone to other EA orgs) and because it led to them running the initial EA Summits. By the time I joined, the stated mission was to improve the world through social science, specifically via research and delivery of useful training and effectiveness techniques, research into history / sociology, and coordination. This matched my experience of what the org did day-to-day. For most of the years I was there, there was a training team, a sociology team, etc. Within that broad umbrella there was a lot of diversity in what people worked on and what impact they believed their research and Leverage overall would have; I can’t speak to what other individuals privately believed, but OP’s claim is false.

• I did not believe or feel pressured to believe that Leverage was “the only organization with a plan that could possibly work.” I continued to be involved in and supportive of EA while working at Leverage, including donating to SENS (pre-recent disputes), GFI, and other orgs. I respected Eliezer, loved HPMOR, was optimistic about MIRI, and thought the raising-the-sanity-waterline goal of the rationality community was great. I also think many hospitals, animal shelters, advocacy groups, and other extremely common interventions and institutions succeed at their missions and contribute to improving the world in many straightforward ways. 

• I didn’t believe or feel like I was supposed to believe that Geoff “was among the best and most powerful ‘theorists’ in the world.” 

• I did not find “Geoff’s power and prowess as a leader [to be] a central theme.” For most of my years at Leverage 1.0, I interacted primarily with my research, my team, and my team leader; Geoff / Geoff’s leadership was not a major focus for us. In the year before Geoff shut Leverage 1.0 down, Geoff’s leadership was a central theme insofar as he came under criticism internally for not resolving conflict in the group. I think this was hard for all parties involved, and is not best characterized as “his power being a central theme.” 

• The comment on Geoff’s dating life (even after OP’s edits) still strikes me as misleading. For example, one of the women mentioned was in a long-term relationship with Geoff prior to her joining Leverage. She subsequently applied to work at Leverage and was accepted by a hiring committee in accordance with the recruitment policy at the time; the hiring committee knew she was in a relationship with Geoff which she expected to continue, and considered that in the hiring process. (I communicated with her to make sure she was okay with me posting this bullet, and she also added that she did not consider herself to be a subordinate to Geoff while they were dating.) I believe there’s similar clarifying context in the other cases, though I’m not willing to discuss the details without permission from the others involved. I also want to go on record and apologize for participating in the discussion of someone’s romantic life online, and I’m sorry it’s come to this.

Three final comments: 

- I believe that Leverage was great in many ways and I personally benefited a lot from working there, but I also believe it had real problems and made mistakes. I think the discussion in the comments speaks for itself re: the fact that there were negatives associated with Leverage. I view experimenting with self-improvement tools and non-standard organizational structures as generally risky (but worth having at least some organizations do), and Leverage didn’t handle it delicately in all cases; when I hear of former Leveragers reporting harms, I tend to believe them and find fault with the organization. However, I also think there are generally fewer reports (in the grapevine or formally reported) of harms than are widely believed to exist, and less of a picture of the positives.

- Sometimes I have seen members of the rationalist community hear positive reports about Leverage from ex-Leveragers, insinuate that the ex-Leveragers are basically “still brainwashed,” and then ignore the information. This seems epistemically problematic, because it is very hard to respond to. I don’t know if there’s anything I can do about that here, other than try to convey some nuance, and caution that if all Leveragers’ positive experiences are dismissed as brainwashing or cult-member-positivity, it will be very hard to find out any time Leverage-centered gossip is wrong. I’d desperately like the in-person Bay Area community to form a more coherent view on Leverage that unifies the positives and negatives, and extracts lessons about self-experimentation, psychology and training research, non-standard company structures, and weird ambitious organizations in general. I don’t see how that will happen if the current discourse around Leverage doesn’t improve substantially, including making the environment more palatable for Leveragers to talk about the positives and negatives of their experience.

- Finally, I expect to respond to comments that seem to me like they’re posted in the spirit of genuine inquiry – please avoid vitriol and insinuations. Sorry for the length of this comment, thanks for bearing with me. 

Replies from: BayAreaHuman, anon44, Freyja, RyanCarey, ozziegooen
comment by BayAreaHuman · 2021-10-12T18:39:47.699Z · LW(p) · GW(p)

Thank you for this.

In retrospect, I could've done more in my post to emphasize:

  1. Different members report very different experiences of Leverage.

  2. Just because these bullets enumerate what is "known" (and "we all know that we all know") among "people who were socially adjacent to Leverage when I was around", does not mean it is 100% accurate or complete. People can "all collectively know" something that ends up being incomplete, misleading, or even basically false.

I think my experience really mismatched the picture of Leverage described by OP.

I fully believe this.

It's also true that I had at least 3 former members, plus a large handful of socially-adjacent people, look over the post, and they all affirmed that what I had written was true to their experience, fairly obvious or uncontroversial, and something they expected would be held to be true by dozens of people. Comments on this post attest to this, as well.

I don't advocate for an epistemic standard in which a single person, doing anything less than a singlehanded investigative journalistic dive, is expected to do more than that, epistemic-verification-wise, before sharing their current understanding publicly and soliciting more information in the comments.

Saying the same thing a different way: The post summarizes an understanding that dozens of people all share. If we're all collectively wrong, I don't advocate for a posting standard where the poster is required to somehow determine that we're wrong, via some method other than soliciting more information in a public forum, before coming to a public forum with the best of our current understanding.

I am glad that this post is leading to a broader and more transparent conversation, and more details coming to light. That's exactly what I wanted to happen. It feels like the path forward, in coming to a better collective understanding.

Thank you again for your clear and helpful contribution.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-10-12T20:36:13.553Z · LW(p) · GW(p)

I don't advocate for an epistemic standard in which a single person, doing anything less than a singlehanded investigative journalistic dive, is expected to do more than that, epistemic-verification-wise, before sharing their current understanding publicly and soliciting more information in the comments.

Sure, but you called the post “Common Knowledge Facts”. If you’d called the post “Me and my friends’ beliefs about Leverage 1.0” or “Basic claims I believe about Leverage 1.0” then that would IMO be a better match for the content and less of a claim to universality (that everyone should assume the content of the post as consensus and only question it if strong counter-evidence comes in).

Right now, for someone to disagree with the post, they’re in a position where they’re challenging the “facts” of the situation that “everyone knows”. In contrast I think the reality is that if people bring forward their personal impressions as different to the OP, this should in large part be treated as more data, and not a challenge.

Replies from: BayAreaHuman
comment by BayAreaHuman · 2021-10-13T01:00:06.023Z · LW(p) · GW(p)

Completely fair. I've removed "facts" from the title, and changed the sub-heading "Facts I'd like to be common knowledge" (which in retrospect is too pushy a framing) to "Facts that are common knowledge among people I know".

I totally and completely endorse and co-sign "if people bring forward their personal impressions as different to the OP, this should in large part be treated as more data, and not a challenge."

Replies from: Benito, Ruby
comment by Ben Pace (Benito) · 2021-10-13T01:52:58.729Z · LW(p) · GW(p)

Appreciate you editing the post, that seems like an improvement to me.

comment by Ruby · 2021-10-13T22:39:40.506Z · LW(p) · GW(p)

It feels like the "common knowledge" framing is functioning as some form of evidence claim? "Evidence for the truth of these statements is that lots of people believe them". And if it's true that lots of people believe them, that is legitimate Bayesian evidence.

At the same time, it's kind of hard to engage with, and I think saying "everyone knows" makes it feel harder to argue with.

A framing I like (although I'm not sure it entirely helps here with ease of engagement) is the "this is what I believe and how I came to believe it" approach, as advocated here. [LW · GW] So you'd start off with "I believe Leverage Research 1.0 has many of the properties of a high-demand group such as", proceeding to "I believe this because of X things I observed and Y things that I heard and were corroborated by groups A and B", etc.

Replies from: BayAreaHuman
comment by BayAreaHuman · 2021-10-14T00:39:59.247Z · LW(p) · GW(p)

I appreciate hearing clearly what you'd prefer to engage with.

I also feel that this response doesn't adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people's desire for privacy.

( ... which makes me feel sad, discouraged, and frustrated. It comes across as "why didn't you just say X", when there are in fact strong reasons why I couldn't "just" say X.)

By "tactically adversarial", I mean that Geoff has an incredibly strong incentive to suppress clarity, and make life harder for people contributing to clarity. Zoe's post goes into more detail about specific fears.

By "desire for privacy", I mean I can't publicly lay out a legible map of where I got information from, or even make claims that are specific enough that they could've only come from one person, because the first-hand sources do not want to be identifiable.

Unlike former members, Pareto fellows, workshop attendees, and other similar commenters here, I did not personally experience anything first-hand that is "truly mine to share".

It was very difficult for me to create a document that I felt comfortable making public, without feeling I was compromising the identity of any primary source. I had to stick to statements that were so generic and "commonly known" that they could not be traced back to any one person without that person's express permission.

I agree it's really hard to engage with such statements. In general it's really hard to make epistemic headway in an environment in which people fear serious personal repercussions and direct retribution for contributing to clarity.

I, too, find the whole epistemic situation frustrating. Frustration was my personal motivation for creating this document: namely, that people I spoke to, who were interacting with Geoff in the present day, were totally unaware of any yellow flags around Geoff whatsoever.

My hope is that inch by inch, step by step, more and more truth and clarity can come out, as more and more people become comfortable sharing their personal experience.

Replies from: Ruby
comment by Ruby · 2021-10-16T03:27:45.936Z · LW(p) · GW(p)

I'm very sorry. Despite trying to closely follow this thread, I missed your reply until now.

I also feel that this response doesn't adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people's desire for privacy.

You're right, it doesn't. I wasn't that aware or thinking about those elements as much as I could have been. Sorry for that.

It was very difficult for me to create a document that I felt comfortable making public...

It makes sense now that this is the document you ended up writing. I do appreciate you went to the effort to write up a critical document to bring important concerns. It is valuable and important that people do so.

My hope is that inch by inch, step by step, more and more truth and clarity can come out, as more and more people become comfortable sharing their personal experience.

Hear, hear.

--

If you'll forgive me suggesting again what you should have written, I'm thinking the adversarial context might have been it. If I had read that you were aware of a number of severe harms that weren't publicly known, but that you couldn't say anything more specific because of fears of retribution and the need to protect privacy - that would have been a large and important update to me regarding Leverage. And it might have got a conversation going into the situation to figure out whether and what information was being suppressed.

But it's easier to say that in hindsight.

Replies from: BayAreaHuman
comment by BayAreaHuman · 2021-10-16T04:29:12.607Z · LW(p) · GW(p)

Thanks, this all helps. At the time, I felt that writing this with the meta-disclosures you're describing would've been a tactical error. But I'll think on this more; I appreciate the input, it lands better this time.

I did write both "I know former members who feel severely harmed" and "I don't want to become known as someone saying things this organization might find unflattering". But those are both very, very understated, and purposefully de-emphasized.

comment by anon44 · 2021-10-05T00:21:07.094Z · LW(p) · GW(p)

Another former Leverage employee here. I agree with the bullet points in Prevlev's post. And my experience of Leverage broadly matches theirs.

comment by Freyja · 2021-10-05T08:25:53.243Z · LW(p) · GW(p)

This is great, and straightforward, and I’m glad you joined the conversation. Thank you.

comment by RyanCarey · 2021-10-13T23:59:14.541Z · LW(p) · GW(p)

It would be useful to have a clarification of these points, to know how different an org you actually encountered, compared to the one I did when I (briefly) visited in 2014.

It is not true that people were expected to undergo training by their manager.

OK, but did you have any assurance that the information from charting was kept confidential from other Leveragers? I got the impression Geoff charted people he raised money from, for example, so it at least raises the question of whether information gleaned from debugging might be discussed with that person's manager.

“being experimented on” was not my primary purpose in joining nor would I now describe it as a main focus of my time at Leverage. 

OK, but would you agree that a primary activity of Leverage was to do psych/sociology research, and a major (>=50%) methodology for that was self-experimentation?

I did not find the group to be overly focused on “its own sociology.”

OK, but would you agree that at least ~half of the group spent at least ~half of their time studying psychology and/or sociology, using the group as subjects?

The stated purpose of Leverage 1.0 was not to literally take over the US and/or global governance or “take over the world,”...OP’s claim is false.

OK, but you agree that it was to ensure "global coordination" and "the impossibility of bad governments", per the plan, right? Do you agree that "the vibe was 'take over the world'", per the OP?

I did not believe or feel pressured to believe that Leverage was “the only organization with a plan that could possibly work.”

OK, but would you agree that many staff said this, even if you personally didn't feel pressured to take the belief on?

 I did not find “Geoff’s power and prowess as a leader [to be] a central theme.”

OK, but did you notice staff saying that he was one of the great theorists of our time? Or that a significant part of the hope for the organisation was to adapt and deploy certain ideas of his, like connection theory, which "solved psychology", to deal with cases with multiple individuals, in order to design larger orgs, memes, etc.?

Hopefully, the answers to these questions can be mostly separated from our subjective impressions. This might sound harsh, or resemble a cross-examination. But it seems necessary in order to figure out to what extent we can reach a shared understanding of "common knowledge facts", at least about different moments in LR's history (potentially also differing in our interpretations), versus the facts themselves actually being contested.

comment by ozziegooen · 2021-10-08T17:38:42.140Z · LW(p) · GW(p)

+1 for the detail. Right now there's very little like this explained publicly (or accessible in other ways to people like myself). I found this really helpful.

I agree that the public discussion on the topic has been quite poor.

comment by orthonormal · 2021-09-29T01:31:08.311Z · LW(p) · GW(p)

This is subjective and all, but I met Geoff Anders at our 2012 CFAR workshop, I absolutely had the "this person wants to be a cult leader" vibe from him then, and I've been telling people as much for the entire time since. (To the extent of hurting my previously good friendships with two increasingly-Leverage-enmeshed people, in the mid-2010s.)

I don't know why other people's cult-leader-wannabe-detectors are set so differently from mine, but it's a similar (though less deadly) version of how I quickly felt about a certain person [don't name him, don't summon him] who's been booted from the Berkeley community for good reason.

Replies from: RyanCarey, kerry-vaughan
comment by RyanCarey · 2021-10-07T16:41:50.234Z · LW(p) · GW(p)

He's also told me, deadpan, that he would like to be starting a cult if he wasn't running Leverage.

Replies from: matt
comment by matt · 2021-10-08T04:08:17.369Z · LW(p) · GW(p)

I've read this comment several times, and it seems open to interpretation whether RyanCarey is mocking orthonormal for presenting weak evidence by presenting further obviously weak evidence, or whether RyanCarey is presenting weak evidence believing it to be strong.

Just to lean on the scales a little here, towards readers taking from these two comments (Ryan's and orthonormal's) what I think could (should?) be taken from them…

An available interpretation of orthonormal's comment is that orthonormal:

  1. had a first impression of Geoff that was negative,
  2. then backed that first impression so hard that they "[hurt their] previously good friendships with two increasingly-Leverage-enmeshed people" (which seems to imply: backed that first impression against the contrary opinions of two friends who were in a position to gather increasingly overwhelmingly more information by being in a position to closely observe Geoff and his practices),
  3. while telling people of their first impression "for the entire time since" (for which, absent other information about orthonormal, it is an available interpretation that orthonormal engaged in what could be inferred to be hostile gossip based on very little information and in the face of an increasing amount of evidence (from their two friends) that their first impression was false (assuming that orthonormal's friends were themselves reasonable people)).
  4. (In this later comment [LW(p) · GW(p)]) orthonormal then reports interacting with Geoff "a few times since 2012" (and reports specific memory of one conversation, I infer with someone other than Geoff, about orthonormal’s distrust of Leverage) (for which it is an available interpretation that orthonormal gathered much less information than their "Leverage-enmeshed" friends would have gathered over the same period, stuck to their first impression, and continued to engage in hostile gossip).

Those who know orthonormal may know that this interpretation is unreasonable given their knowledge of orthonormal, or out of character given other information about orthonormal, or may know orthonormal's first impressions to be unusually (spectacularly?) accurate (I think that I often have a pretty good early read on folks I meet, but having as much confidence in my early reads as I infer, from what orthonormal has said in this comment, that orthonormal has in their reads, would seem to require pretty spectacular evidence), or etc., and I hope that readers will use whatever information they have available to draw their own conclusions, but I note that the information presented in orthonormal's comment, taken on its own, seems much more damning of orthonormal than of Geoff.

(And I note that orthonormal has accumulated >15k karma on this site… which… I don't quite know how to marry to this comment, but it seems to me might cause a reasonable person to assume that orthonormal was better than what I have suggested might be inferred from their comment… or, noting that at the time I write this orthonormal has accumulated 30 points of karma from what seems to me to be… unimpressive as presented?… that there may be something going on in the way this community allocates karma to comments (comments that do not seem to me to be very good).)

Then, RyanCarey's comment specifically uses "deadpan", a term strongly associated with intentionally expressionless comedy, to describe Geoff saying something that sounds like what a reasonable person might infer was intentional comedy if said by another reasonable person. So… the reasonable inference, only from what RyanCarey has said, seems to me to be that Geoff was making a deadpan joke.

I think I met Geoff at the same 2012 CFAR workshop that orthonormal did, and I have spent at least hundreds of hours since in direct conversation with Geoff, and in direct conversation with Geoff's close associates. It seems worth saying that I have what seems to me to be overwhelmingly more direct eyewitness evidence (than orthonormal reports in their comment) that Geoff does not seem to me to be someone who wants to be a cult leader. I note further that several comments have been published to this thread by people I know to have had even closer contact over the years with Geoff than I have, and those comments seem to me to be reporting that Geoff does not seem to them to be someone who wants to be a cult leader. I wonder whether orthonormal has other evidence, or whether orthonormal will take this opportunity to reduce their confidence in their first impression… or whether orthonormal will continue to be spectacularly confident that they've been right all along.
And given my close contact with Geoff, I note that it seems only a little out of character for Geoff, in the face of the very persistent accusations he has fielded (on evidence that seems to me to be of similar quality to the evidence that orthonormal presents here) that he is, or is felt to be tending towards, or reminiscent of, a cult leader, to deliver a deadpan joke “that he would like to be starting a cult if he wasn't running Leverage”. RyanCarey doesn't report their confidence in the accuracy of their memory of this conversation, but given what I know, and what RyanCarey and orthonormal report only in these comments, I invite readers to be both unimpressed and unconvinced by this presentation of evidence that Geoff is a "cult-leader-wannabe".

(I want to note that while readers may react negatively to me characterising orthonormal’s behaviour as “hostile gossip”, I am in the process of drafting a more comprehensive discussion of the OP and other comments here, in which I will try to make a clear case that my use of that term is justified. If you are, based on the information you currently have, highly confident that I am being inappropriately rude in my responses here, to a post that I will attempt to demonstrate is exceedingly rude, exceedingly poorly researched and exceedingly misleading, then you are, of course, welcome to downvote this comment. If you do, I invite you to share feedback for me, so that I can better learn the standards and practices that have evolved on this site since my team first launched it.)
(If your criticism is that I did not take the time to write a shorter letter… then I’ll take those downvotes on the chin. 😁)

Instead of however we might characterise the activity we’re all engaging in here, I wonder whether we might ask Geoff directly? @Geoff_Anders, with my explicit apology for this situation, and the recognition that (given the quality of discourse exhibited here), it would be quite reasonable for you to ignore this and continue carrying on with your life, would you care to comment?

(A disclosure that some readers may conclude is evidence of collusion or conspiracy, and others might conclude is merely the bare minimum amount of research required before accusing someone of activities such as those this post accuses Geoff of (not denotationally, but very obviously connotationally): In the time between the OP being posted and this comment, I have communicated with Geoff and several ex-Leverage staff and contributors.)

Replies from: RyanCarey, Vladimir_Nesov, Duncan_Sabien
comment by RyanCarey · 2021-10-08T16:40:05.587Z · LW(p) · GW(p)

As in, 5+ years ago, around when I'd first visited the Bay, I remember meeting up 1:1 with Geoff in a cafe. One of the things I asked, in order to understand how he thought about EA strategy, was what he would do if he wasn't busy starting Leverage. He said he'd probably start a cult, and I don't remember any indication that he was joking whatsoever. I'd initially drafted my comment as "he told me, unjokingly", except that it's a long time ago, so I don't want to give the impression that I'm quite that certain.

comment by Vladimir_Nesov · 2021-10-08T11:28:26.985Z · LW(p) · GW(p)

accumulated 30 points of karma from what seems to me to be… unimpressive as presented?

I upvoted on the value of the comment as additional source data (IIRC when the comment had much lower karma). This value shouldn't be diminished by questionable interpretation/attitude bundled with it, since the interpretation can be discarded, but the data can't be magicked up.

This is a general consideration that applies to communications that provoke a much stronger urge to mute them, for example those that defend detestable positions. If such communications bring you new relevant data, even data that doesn't significantly change your understanding of the situation, they are still precious, the effects of processing them and not ignoring them sum up over all such instances. (I think the comment to this post most rich in relevant data is prevlev-anon's, which I strong-upvoted.)

Replies from: Duncan_Sabien, matt
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-10-08T14:49:51.605Z · LW(p) · GW(p)

This makes sense to me in my first pass of thinking about it, and I agree.  

There's something subtle and extremely hard to pull off (perhaps impossible) in: "in the wishing world, what do we think a shared voting policy should be, such that the aggregate of everyone voting consistently according to that policy leaves all comments in approximately the same order that a single extremely perceptive and high-quality reasoner would rank them?"

As opposed to comments just trending toward infinities.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-10-08T15:36:51.048Z · LW(p) · GW(p)

This works out for the earlier top-level comments (which see similar voter turnout); the absolute numbers just scale with the popularity of the post. If something is not in its place in your ideal ranking, it's possible to use your vote to move it that way. Vote weights do a little to try to improve the quality (or value lock-in) of the ranking.

One issue with the system is the zero equilibrium on controversial things, with the last voters randomly winning irrespective of the actual distribution of opinion. It's unclear how to get something more informative for such situations, but this should be kept in mind as a use case for any reform.
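To make that zero-equilibrium failure mode concrete, here is a minimal toy simulation (my own construction, not a description of the actual LessWrong voting code): each arriving voter holds a ±1 opinion, and votes only when the visible score disagrees with it. With an evenly split electorate, the score hovers around zero and the final sign is decided by whichever voters happen to arrive last.

```python
import random

def simulate(n_voters: int, seed: int) -> int:
    """Toy model of karma on a controversial comment.

    Each arriving voter holds opinion +1 or -1 (split 50/50) and votes
    only if the running score currently disagrees with that opinion,
    nudging the score back toward their side.
    """
    rng = random.Random(seed)
    score = 0
    for _ in range(n_voters):
        opinion = rng.choice([+1, -1])
        if score * opinion <= 0:  # the visible score misrepresents my view
            score += opinion
    return score

# Final scores sit near zero; their sign depends only on arrival order,
# not on the actual 50/50 distribution of opinion.
print([simulate(1000, seed) for seed in range(8)])
```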

comment by matt · 2021-10-11T11:00:58.179Z · LW(p) · GW(p)

I'm trying to apply the ITT [? · GW] to your position, and I'm pretty sure I'm failing (and for the avoidance of doubt I believe that you are generally very well informed, capable and are here engaging in good faith, so I anticipate that the failing is mine, not yours). I hope that you can help me better understand your position:

My background assumptions (not stated or endorsed by you):
Conditional on a contribution (a post, a comment) being all of (a) subject to a reasonably clear interpretation (for the reader alone, if that is the only value the reader is optimising for, or otherwise for some (weighted?) significant portion of the reader community), (b) with content that is relevant and important to a question that the reader considers important (most usually the question under discussion), and (c) that is substantially true, and it is evident that it is true from the content as it is presented (for the reader alone, or the reader community), then…

My agreement with the value that I think you're chasing:
… I agree that there is at least an important value at stake here, and the reader upvoting a contribution that meets those conditions may serve that important value.

Further elaboration of my background assumptions:
If (a) (clear interpretation) is missing, then the reader won't know there's value there to reward, or must (should?) at least balance that value against the harms that I think clearly follow from the reader or others misinterpreting the data offered.
If (b) (content is relevant) is missing, then… perhaps you like rewarding random facts? I didn't eat breakfast this morning. This is clear and true, but I really don't expect to be rewarded for sharing it.
If (c) (evident truth) is missing, then either (not evident) you don't know whether to reward the contribution or not, or (not true) surely the value is negative?

My statement of my confusion:
Now, you didn't state these three conditions, so you obviously get to reject my claim of their importance… yet I've pretty roundly convinced myself that they're important, and that (absent some very clever but probably nit-picky edge case, which I've been around Lesswrong long enough to know is quite likely to show up) you're likely to agree (other readers should note just how wildly I'm inferring here, and if Vladimir_Nesov doesn't respond, please don't assume that they actually implied any of this). You also report that you upvoted orthonormal's comment (I infer orthonormal's comment instead of RyanCarey's, because you quoted "30 points of karma", which didn't apply to RyanCarey's comment). So I'm trying to work out what interpretation you took from orthonormal's comment (and the clearest interpretation I managed to find is the one I detailed in my earlier comment: that orthonormal based their opinion overwhelmingly on their first impression and didn't update on subsequent data), whether you think the comment shared relevant data (did you think orthonormal's first impression was valuable data pertaining to whether Leverage and Geoff were bad? did you think the data relevant to some other valuable thing you were tracking, that might not be what other readers would take from the comment?), and whether you think that orthonormal's data was self-evidently true (do you have other reason to believe that orthonormal's first impressions are spectacular? did you see some other flaw in the reasoning in my earlier comment?).

So, I'm confused. What were you rewarding with your upvote? Were you rewarding (orthonormal's) behaviour, that you expect will be useful to you but misleading for others, or rewarding behaviour that you expect would be useful on balance to your comment's readers (if so, what and how)? If my model is just so wildly wrong that none of these questions make sense to answer, can you help me understand where I fell over?


(To the inevitable commenter who would, absent this addition, jump in and tell me that I clearly don't know what an ITT is: I know that what I have written here is not what it looks like to try to pass an ITT — I did try, internally, to see whether I could convince myself that I could pass Vladimir_Nesov's ITT, and it was clear to me that I could not. This is me identifying where I failed — highlighting my confusion — not trying to show you what I did.)

Edit 6hrs after posting: formatting only (I keep expecting Github Flavoured Markdown, instead of vanilla Markdown).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-10-11T13:14:19.568Z · LW(p) · GW(p)

There is an important class of claims detailed enough to either be largely accurate or intentional lies, their distortion can't be achieved with mere lack of understanding or motivated cognition. These can be found even in very strange places, and still be informative when taken out of context.

The claim I see here is that orthonormal used a test for dicey character with reasonable precision. The described collateral damage of just one positive reading signals that it doesn't trigger all the time, and there was at least one solid true positive. The wording also vaguely suggests that there aren't too many other positive readings, in which case the precision is even higher than the collateral damage signals.

Since base rate is lower than the implied precision, a positive reading works as evidence. For the opposite claim, that someone has an OK character, evidence of this form can't have similar strength, since the base rate is already high and there is no room for precision to get significantly higher.

It's still not strong evidence, and directly it's only about character in the sense of low-level intuitive and emotional inclinations. This is in turn only weak evidence of actual behavior, since people often live their lives "out of character", it's the deliberative reasoning that matters for who someone actually is as a person. Internal urges are only a risk factor and a psychological inconvenience for someone who disagrees with their own urges and can't or won't retrain them, it's not an important defining characteristic and not relevant in most contexts. This must even be purposefully disregarded in some contexts to prevent discrimination.

Edit: I managed to fumble terminology in the original version of this comment and said "specificity" instead of "precision" or "positive predictive value", which is what I actually meant. It's true that specificity of the test is also not low (much higher even), and for basically the same reasons, but high specificity doesn't make a positive reading positive evidence.
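To spell out the base-rate arithmetic with a worked example (the 10% base rate and both hit rates below are illustrative assumptions of mine, not figures from the thread): let B be "dicey character" and + be "the detector fires".

```latex
% Illustrative numbers only; none of these rates come from the thread.
\[
  P(B) = 0.10, \qquad P(+\mid B) = 0.5, \qquad P(+\mid \neg B) = 0.1
\]
\[
  \frac{P(B\mid +)}{P(\neg B\mid +)}
    = \frac{P(B)}{P(\neg B)} \cdot \frac{P(+\mid B)}{P(+\mid \neg B)}
    = \frac{0.10}{0.90} \cdot \frac{0.5}{0.1}
    \approx 0.56
  \quad\Longrightarrow\quad
  P(B\mid +) \approx 0.36
\]
```

Under these assumed numbers, a positive reading lifts the probability from 10% to roughly 36%, a precision well above the base rate; a negative reading, by contrast, can only lift P(¬B) from 90% to about 94%, which is why evidence of this form is necessarily much weaker in the "OK character" direction.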

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2021-10-08T06:38:59.879Z · LW(p) · GW(p)

The culture of Homo Sabiens often clashes pretty hard with the culture of LessWrong, so I can't speak to how this will shake out overall.

But in the culture of Homo Sabiens, and in the-version-of-LessWrong-built-and-populated-by-Duncans, this is an outstanding comment, exhibiting several virtues, and also explicitly prosocial in its treatment of orthonormal and RyanCarey in the process of disagreement (being careful and explicit, providing handholds, preregistering places where you might be wrong, distinguishing between claims about the comments and about the overall people, being honest about hypotheses and willing to accept social disapproval for them, etc.)

I have strong-upvoted and hope further interaction with RyanCarey and orthonormal and other commenters both a) happens, and b) goes well for all involved.  I would try to engage more substantively, but I'm currently trying to kill a motte-and-bailey elsewhere.

comment by Kerry Vaughan (kerry-vaughan) · 2021-09-29T02:52:18.640Z · LW(p) · GW(p)

What an incredibly rude thing to say about someone. I hope no one ever posts their initial negative impressions upon meeting you online for everyone to see.

Geoff Anders is a real person. Stop treating him like he's not.

Added: This comment was too harsh given the circumstance. My apologies to orthonormal for overreacting.

Replies from: drethelin, Freyja
comment by drethelin · 2021-10-13T01:23:29.764Z · LW(p) · GW(p)

Real people can be, and often are, extremely dangerous, and it is not rude to describe dangerous people as acting in dangerous ways; or if it is, then it is a valuable form of rudeness.

comment by Freyja · 2021-09-29T07:54:05.742Z · LW(p) · GW(p)

I have a sincere question for you, Kerry, because you seem to be upset by the approach commenters here are taking to talking about this issue and the people involved, and people here are openly discussing the character of your employer, which I can imagine to be really painful.

If your sister or brother or your significant other had become enmeshed in a controlling group and you believed the group and in particular its leader had done them serious psychological harm, how would you want people to talk about the group and its leader in public, after the fact? What sorts of discussions, comments or questions would you consider reasonable or necessary under such circumstances, and what would you consider off the table?

(Specifically, I’m not focused on whether you believe Leverage 1.0 had those characteristics, but how you would respond towards a group and its leader that you personally believed -did- have these characteristics)

Replies from: kerry-vaughan
comment by Kerry Vaughan (kerry-vaughan) · 2021-09-29T20:10:43.834Z · LW(p) · GW(p)

Assuming something like this represents your views, Freyja, I think you’ve handled the situation quite well.

I hope you can see how that is quite different from the comment I was replying to which is someone who appears to have met Geoff once. I'm sure you can similarly imagine how you would feel if people made comments like the one from orthonormal about friends of yours without knowing them.

Replies from: orthonormal, ChristianKl
comment by orthonormal · 2021-10-03T07:07:21.869Z · LW(p) · GW(p)
  1. Thank you for scaling back your initial response.
  2. I've interacted with Geoff a few times since 2012, and continued to have that bad feeling about him. 
  3. I wanted to let people know that these impressions started even prior to Leverage, and that I know I'm not retconning my memory, because I remember a specific conversation in summer 2014 about my distrust of Leverage (and I believe that wasn't the first such conversation). This post would not have surprised 2012!me; the signs may have been subjective but they were there.
  4. Without getting to the object level, it's very fair to discuss the personality of someone who wields power and authority over people, especially if one mechanism of influence is telling those people that the world is at stake. 
    1. The rationalist community did in fact have to have such conversations about Eliezer over the years, and (IMO) mostly concluded that he actively wants to just sit in a comfortable cave and produce FAI progress with his team, and so he delegates any social authority/power he gains to trusted others, making him a safer weirdo leader figure than most.
Replies from: hg00
comment by hg00 · 2021-10-16T01:40:47.013Z · LW(p) · GW(p)

The rationalist community did in fact have to have such conversations about Eliezer over the years, and (IMO) mostly concluded that he actively wants to just sit in a comfortable cave and produce FAI progress with his team, and so he delegates any social authority/power he gains to trusted others, making him a safer weirdo leader figure than most.

Was this conversation held publicly on a non-Eliezer-influenced online forum?

I think there's a pretty big difference -- from accounts I've read about Leverage, the "Leverage community" had non-public conversations about Geoff as well, and they concluded he was a great guy.

comment by ChristianKl · 2021-09-29T20:36:26.961Z · LW(p) · GW(p)

He said that he had significant discussions about Geoff with people near Leverage afterwards that damaged those relationships. That suggests that the sense was very strong and that he had talked about it with people who actually knew Geoff more deeply.

Replies from: kerry-vaughan
comment by Kerry Vaughan (kerry-vaughan) · 2021-09-30T00:15:58.643Z · LW(p) · GW(p)

This is a good point. I think I reacted too harshly. I've added an apology to orthonormal to the original comment.

comment by anon2021a · 2021-09-28T04:30:09.612Z · LW(p) · GW(p)

> some of the people who don’t like us
https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-facts-about-leverage-research-1-0-1?commentId=jSCFY2ypMpvAZr8sy [LW(p) · GW(p)]

> However, I would also like to note that Leverage 1.0 has historically been on the receiving end of substantial levels of bullying, harassment, needless cruelty, public ridicule, and more by people who were not engaged in any legitimate epistemic activity. I do not think this is OK. I intend to call out this behavior directly when I see it. I would ask that others do so as well.
https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-facts-about-leverage-research-1-0-1?commentId=hqDXAtk6cnqDStkGC [LW(p) · GW(p)]

 

It would be sad if people came away with the idea that the OP was motivated by hate, jealousy, or tribalism. I think the OP is motivated out of deep compassion for the wider community.

Leverage keeps coming up because Geoff Anders (and associates) emit something epistemically and morally corrosive and are gaslighting the commons about it. And Geoff keeps trying to disingenuously hit the reset button and hide it, to exploit new groups of people. That’s what people are responding to and trying to counteract in posts like the OP.

-

Edit: Removed a paragraph.

Replies from: kerry-vaughan
comment by Kerry Vaughan (kerry-vaughan) · 2021-09-28T23:02:46.855Z · LW(p) · GW(p)

Leverage keeps coming up because Geoff Anders (and associates) emit something epistemically and morally corrosive and are gaslighting the commons about it. And Geoff keeps trying to disingenuously hit the reset button and hide it, to exploit new groups of people. That’s what people are responding to and trying to counteract in posts like the OP.

This seems pretty unfair to me and I believe we’re trying quite hard to not hide the legacy of Leverage 1.0. For example, we (1) specifically chose to keep the Leverage name; (2) are transparent about our intention to stand up for Leverage 1.0 [LW · GW]; and (3) Geoff’s association with Leverage 1.0 is quite clear from his personal website. Additionally, given the state of Leverage’s PR after Leverage 1.0 ended, the decision to keep the name was quite costly and stemmed from a desire to preserve the legacy of Leverage 1.0.

comment by Davis_Kingsley · 2021-10-06T01:29:10.028Z · LW(p) · GW(p)

You know, I'm not necessarily a great backer of Leverage Research, especially some of its past projects, but I feel the level of criticism that it has faced relative to other organizations in the space is a bit bizarre. Many of the things that Leverage is criticized for (such as being secretive, seeing themselves at least in part as saving the world, investing in projects that look crazy to intelligent outsiders, etc.) in my view apply to many rationalist/EA organizations. This is not to say that those other organizations are necessarily wrong to do these things, just that it's weird to me that people go after Leverage-in-particular for reasons that often don't seem to be consistently applied to other projects in the space.

(I have never been an employee of Leverage Research, though at one point they were potentially interested in recruiting me and I was not interested; at another point I checked in re: potentially working there but didn't like the sound of the projects they seemed to be recruiting for at the time.)


EDIT 10/13: My original comment was written before the Medium post from Zoe Curzi. The contents of that Medium post are very concerning to me and seem very unlike what I've encountered in other rationalist or EA organizations.

Replies from: Ruby, orthonormal
comment by Ruby · 2021-10-13T22:05:56.173Z · LW(p) · GW(p)

The new Medium post does imply that Leverage cannot be simply lumped with other EA/Rationalist orgs (I too haven't heard anything that concerning reported of any other org), but I don't think that invalidates your original point that the criticisms in this post, as written, could be levelled at many orgs. (I actually wrote such a damning-sounding list for LessWrong/Lightcone).

Replies from: Davis_Kingsley
comment by Davis_Kingsley · 2021-10-14T04:37:48.088Z · LW(p) · GW(p)

I agree, but I wanted to be clear that my original comment was largely in reply to the original post and in my view does not much apply to the Medium post, which I consider much more specific and concerning criticism.

Replies from: Ruby
comment by Ruby · 2021-10-14T05:14:40.572Z · LW(p) · GW(p)

Entirely fair!

comment by orthonormal · 2021-10-13T19:44:46.642Z · LW(p) · GW(p)

My own strong agreement with the content makes it hard to debias my approval here, but I want to generally massively praise edits that explicitly cross out the existing comment, and state that they've changed their minds, and why they've done so.

(There are totally good reasons to retract without comment, of course, and I'm glad that LW now offers this option. I'm just giving Davis credit for putting his update out there like this.)

comment by Aella · 2021-09-26T05:39:05.807Z · LW(p) · GW(p)

Wanna +1: all these things are points I've also heard from people who were at Leverage. I also have a more negative opinion of Leverage than might be implied by the points alone, for the record.

comment by Michael Edward Johnson (michael-edward-johnson) · 2021-09-27T00:23:54.118Z · LW(p) · GW(p)

Speaking personally, based on various friendships with people within Leverage, attending a Leverage-hosted neuroscience reading group for a few months, and having attended a Paradigm Academy weekend workshop.

I think Leverage 1.0 was a genuine good-faith attempt at solving various difficult coordination problems. I can’t say they succeeded or failed; Leverage didn’t obviously hit it out of the park, but I feel they were at least wrong in interesting, generative ways that were uncorrelated with the standard and more ‘boring’ ways most institutions are wrong. Lots of stories I heard sounded weird to me, but most interesting organizations are weird and have fairly strict IP protocols so I mostly withhold judgment.

The stories my friends shared did show a large focus on methodological experimentation, which has benefits and drawbacks. Echoing some of the points, I do think when experiments are done on people, and they fail, there can be a real human cost. I suspect some people did have substantially negative experiences from this. There’s probably also a very large set of experiments where the result was something like, “I don’t know if it was good, or if it was bad, but something feels different.”

There’s quite a lot about Leverage that I don’t know and can’t speak to, for example the internal social dynamics.

One item that my Leverage friends were proud to share is that Leverage organized the [edit: precursor to the] first EA Global conference. I was overall favorably impressed by the content in the weekend workshop I did, and I had the sense that to some degree Leverage 1.0 gets a bad rap simply because they didn’t figure out how to hang onto credit for the good things they did do for the community (organizing EAG, inventing and spreading various rationality techniques, making key introductions). That said I didn’t like the lack of public output.

I’ve been glad to see Leverage 2.0 pivot to progress studies, as it seems to align more closely with Leverage 1.0’s core strength of methodological experimentation, while avoiding the pitfalls of radical self-experimentation.

Would the world have been better if Leverage 1.0 hadn’t existed? My personal answer is a strong no. I’m glad it existed and was unapologetically weird and ambitious in the way it was and I give its leadership serious points for trying to build something new. 

Replies from: ChristianKl, alex-k-chen
comment by ChristianKl · 2021-09-27T08:38:26.768Z · LW(p) · GW(p)

inventing and spreading various rationality techniques

Besides belief reporting, which rationality techniques did they invent and spread into the community for which they should get credit?

Replies from: michael-edward-johnson
comment by Michael Edward Johnson (michael-edward-johnson) · 2021-09-27T18:10:11.731Z · LW(p) · GW(p)

Goal factoring is another that comes to mind, but people who worked at CFAR or Leverage would know the ins and outs of the list better than I.

Replies from: ESRogs
comment by ESRogs · 2021-09-29T04:19:36.420Z · LW(p) · GW(p)

My understanding is that Geoff Anders and Andrew Critch each independently invented goal factoring, and had even been using the same diagramming software to do it! (I'm not sure which one of them first brought it to CFAR.)

Replies from: David Hornbein, raj-thimmiah
comment by David Hornbein · 2021-09-29T06:36:07.400Z · LW(p) · GW(p)

Geoff Anders was the first one to teach it at CFAR workshops, I think in 2013. This is the first time I've heard claims of independent invention, at the time all the CFAR people who mentioned it were synced on the story that Anders was a guest instructor teaching a technique that Leverage had developed. (Andrew Critch worked at CFAR at the time. I don't specifically remember whether or not I heard anything about goal factoring from him.)

Replies from: Unnamed, TekhneMakre
comment by Unnamed · 2021-10-13T19:09:52.055Z · LW(p) · GW(p)

Anna & Val taught goal factoring at the first CFAR workshop (May 2012). I'm not sure if they used the term "goal factoring" at the workshop (the title on the schedule was "Microeconomics 1: How to have goals"), but that's what they were calling it before the workshop including in passing on LW [LW(p) · GW(p)]. Geoff attended the third CFAR workshop as a participant and first taught goal factoring at the fourth workshop (November 2012), which was also the first time the class was called "Goal Factoring". Geoff was working on similar stuff before 2012, but I don't know enough of the pre-2012 history to know if there was earlier cross-pollination between Geoff & CFAR folks.

Critch developed aversion factoring.

comment by TekhneMakre · 2021-09-29T12:08:16.523Z · LW(p) · GW(p)

In this video from March 2014 https://www.youtube.com/watch?v=k255UjGEO_c Andrew Critch says he developed "Aversion factoring".

Replies from: lincolnquirk
comment by lincolnquirk · 2021-10-05T15:56:17.196Z · LW(p) · GW(p)

I believe this. Aversion factoring is a separate insight from goal factoring.

comment by Raj Thimmiah (raj-thimmiah) · 2021-10-03T19:22:31.821Z · LW(p) · GW(p)

Do you have a link to more info on how they do goal factoring/what software they were using?

Replies from: lincolnquirk
comment by lincolnquirk · 2021-10-05T15:57:32.286Z · LW(p) · GW(p)

When I learned it from Geoff in 2011, they were recommending yEd Graph Editor. The process is to generally write things you do or want to do as nodes, and then connect them to each other using "achieves or helps to achieve" edges (i.e., if you go to work, that achieves making money, which achieves other things you want).
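For readers who want to try that node-and-edge structure without yEd, here is a minimal sketch (the goals and edges are my own invented example, assuming Python with the networkx library):

```python
import networkx as nx

# Nodes are things you do or want; a directed edge means
# "achieves or helps to achieve".
g = nx.DiGraph()
g.add_edge("go to work", "make money")
g.add_edge("make money", "pay rent")
g.add_edge("go to work", "see colleagues")
g.add_edge("see colleagues", "feel socially connected")

# Goal factoring then asks, for each activity, everything it feeds into:
for goal in sorted(nx.descendants(g, "go to work")):
    print("going to work helps achieve:", goal)
```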

comment by Alex K. Chen (parrot) (alex-k-chen) · 2023-05-14T16:43:26.180Z · LW(p) · GW(p)

When was the precursor to the first EAG? Before 2015?

comment by MalcolmOcean (malcolmocean) · 2021-09-25T05:04:24.401Z · LW(p) · GW(p)

facts that are more-or-less "common knowledge" among people who spent time socially adjacent to Leverage


Yup, sounds right. As someone who visited the rationality community in the bay a bunch in 2013-2018, almost nothing listed in the bullet points was a surprise to me, and off-hand I can think of dozens of other people who I would assume also know almost everything written above. (I'm sure there are more such people, that I haven't met or wouldn't remember.)

I don't have anything in particular to say about the implications of these facts, just seemed worth mentioning this thing re common knowledge.

(The main thing I hadn't heard about was the sexual relationships bullet point.)

Replies from: BrienneYudkowsky
comment by LoganStrohl (BrienneYudkowsky) · 2021-10-13T05:13:53.440Z · LW(p) · GW(p)

man, i'm kinda mad about something going on with this "knowledge" word. i'd really like to insert some space in here between "lots of people believe a thing" and "lots of people know a thing".

i believed most of the bullet points in a low-confidence, easy-to-change-my-mind kind of way. the real thing is that all the bullet points have been widely rumored. it's not the case that all those rumoring people had justified true belief that everyone else had justified true belief about the bullet points, or whatever. if you announce a bunch of rumors with the word "knowledge" attached, it increases people's confidence and a bunch of switches in their mind flip from "here's a hypothesis i'm holding lightly because it came from the rumor mill" over to "yeah i wasn't surprised to hear those things, yet now i'm even more sure of them".

and like, i do recognize that in the vernacular, "common knowledge" (everyone knows everyone knows) isn't really distinguished from a weaker thing that might be called "common belief" (everyone at-least-somewhat-believes everyone at-least-somewhat-believes). but that doesn't mean we should go around conflating such things all to hell like normal people do.

ugh blerg grump. i am kind of exasperated. i guess i really want the top level post to own a bunch more of its shit, epistemically.

and i didn't really mean to direct all of that right at you, Malcolm, your comment just helped the blergness snap into place in my head enough that i ended up typing things.

Replies from: BayAreaHuman, malcolmocean
comment by BayAreaHuman · 2021-10-13T19:34:52.348Z · LW(p) · GW(p)

Thanks for this. I think these distinctions are important.

Let me clarify: In this post when I say "Common knowledge among people who spent time socially adjacent to Leverage", what I mean is:

  • I heard these directly from multiple different Leverage members.
  • When I said these to others, they shared they had also heard the same things directly from other Leverage members, including members other than the ones I had spoken to.
  • I was in groups of people where we all discussed that we had all heard these things directly from Leverage members. Some of these discussions included Leverage members who affirmed these things.

I believe there are several dozen people in the set of people this is true of.

So I did mean "People in my circles all know that we all know these things", and by "know" I meant "believe, with sourcing to multiple independent first-hand witnesses".

I do not count you as being in the "common knowledge" set, as your self-report is that you lightly believed these based on third-hand information that was "widely rumored", rather than having been directly told by a member, witnessing others being directly told by members, or having people tell you they were directly told by members.

It also seems that still other Leverage members, quite possibly separate from the ones we all spoke to, are publicly claiming some of these things aren't true to their own experience.

My current understanding is that members' experiences differed by subgroup they were part of, at particular points in time. (See e.g. in another comment "(Hedge: there were two smaller training groups where I believe it was a norm for members of the group to train each other. I wasn’t part of those groups and can’t speak to them.)"). So, it's likely that the social circle I'm speaking about had an understanding that was specific to a particular time period, based on reports from members involved in a particular slice of the organization.

Now that Zoe's Medium post is public, there exists for the first time a public first-hand report of many of these statements. So the indirection required to make the claims in this post is no longer quite as necessary. But in the absence of any member yet willing to attest publicly to these first-hand, making the most {defensible x useful} second-hand claims I was able to seemed like a productive step.

comment by MalcolmOcean (malcolmocean) · 2021-10-13T05:29:47.753Z · LW(p) · GW(p)

Glad to have helped your blergness snap into place—not taking it personally. I share your concerns here in the specific case and in the general case re the word "knowledge"! And that people understanding the difference between "common knowledge" and other things is important.

More accurately maybe I could say "this matches what I understand to be the widespread model of Leverage known by dozens of people to be held among those dozens"

Some of it I observed directly or was told it by Leverage folks myself though, so "rumor" doesn't feel like an adequate descriptor from my vantage point.

comment by OlliPayne · 2021-10-04T06:24:30.400Z · LW(p) · GW(p)

Hi, I'm Olli Payne. I first encountered Leverage in person during the summer of 2018 and worked at Paradigm from August 2019 through April 2020.

I moved to the Bay from NYC in April 2018, after hearing about communities there (EA, Rationality, Leverage, Futurism, etc) that are focused on thinking long-term and having a large positive impact, something that resonated with me and my goals. After attending several EA meetups, I went to a few EA Global afterparties, including one at Leverage's Lake Merritt apartment.

I'd already started to hang out with Leverage employees who I'd met at the afterparty when I requested to be invited to a Paradigm workshop. I attended the workshop in June of 2018 and after finding the tools incredibly useful, I began to pursue a job at Leverage.

During the year before I was hired at Paradigm, I became friends with many employees of both Paradigm and Leverage. We went bouldering, saw movies, played video games, tried to perfect the baking of pies... I'm very happy to say that I'm still close with many of these friends.

This was my take-away from being around Leverage 1.0:

The organization and its members did have the stated goal of "world-saving," but that phrase was used within the group's culture as an intentionally-campy blanket term for world improvement, since it was known and accepted within the ecosystem of projects that no two people completely agreed on how that should be done, and everyone was there to figure it out together. It was a complex group effort, NOT a one-minded initiative, and I enjoyed being around people who were very intellectually diverse but had the same big-picture desire to see a much better world than exists today. This is an ambitious and tough environment to exist in, and it's challenging to avoid personal conflict that can lead to interpersonal misconduct when so much diversity is present in a group with such high-stakes goals. I understand that people were negatively affected by this. But call it what it is.

In 2019, I was (finally) hired at Paradigm as a trainer. This was at the beginning of the Leverage 2.0 era. Paradigm's main focus was to make the tools more accessible to a wider audience. Because I thought the tools were great, and after becoming familiar with the issues that adjacent communities had with the level of transparency around them, I was excited to help with that goal. While I was employed at Paradigm, I was never told that I had to participate in training. I also streamed on Twitch with 1.5k followers during my first few months employed there. My content included gaming as well as talking about self-improvement and mental exercises like Focusing. Co-workers watched my stream and no one asked or told me to stop.

I see great potential in the tools, and I'm confident that they will be more than useful in the self-improvement journeys of those looking to fix critical problems in the world.

I'll also add that I don't normally share personal information online, especially in public forums like this, but I'm making an exception here because it adds some perspective:

Today, I live in Louisville, KY with my boyfriend who had never even heard of LessWrong before I showed him this post. I occasionally do bodywork with him, and he'd actually already experienced it at a workshop during his Undergrad program for Opera & Voice at WKU before we even met. We have 2 cats, our own condo, and too many jalapenos growing in our garden to know what to do with. I run a Branding and Design studio focused on helping impactful projects succeed in the memetic space - feel free to reach out, I'm excited to work with anyone whose goals are aligned with my studio's.

Lastly, I'd like to mention that I'm grateful to Leverage. After my time at Paradigm ended they supplied me with the work and contacts that got my studio off the ground. I apply much of what I learned from Leverage 1.0 and Paradigm to develop the frameworks and theories that I use on a daily basis to help cool people and projects succeed.

comment by ChristianKl · 2021-09-25T10:56:34.987Z · LW(p) · GW(p)

Participation in the project involved secrecy / privacy / information-management agreements. 

How strong were those agreements? How much were the participants allowed to share privately with friends, family or outside therapists?

comment by Freyja · 2021-09-27T00:30:54.161Z · LW(p) · GW(p)

Yup. I have known all of these things since 2018-2019, and know or know of maybe a few dozen people who also know these things. I’m glad this bare minimum is being discussed openly, publicly.

Secondhand, I have a very negative view of at least some parts of what happened in Leverage 1.0. My best guess is that the relationships and events that some people have (mostly privately) described as controlling or abusive were not evenly distributed across the whole organisation. So it would have been straightforward for someone to be working at Leverage and never see or get deeply involved with situations that a handful of people have, in private or in semi-public conversations, described clearly as cultic abuse. It seems like there are on the order of dozens of people who probably had a roughly fine time being involved in Leverage for many years, and at least a handful of people who report much more negative experiences.

(I’m @utotranslucence on Twitter; never officially had a LessWrong account before but been around the Bay Area community since 2017. I attended one Paradigm training weekend in early 2018 and some parties at the Lake Merritt building but most of my knowledge comes from conversations with friends who did work there, and there are plenty of things I still don’t know with great clarity.)

Replies from: LarissaRowe
comment by LarissaRowe · 2021-09-28T00:44:44.387Z · LW(p) · GW(p)

Hi Freyja,

I just wanted to reply to this to let you know that it is totally plausible to me that some people who were involved in Leverage 1.0 or any affiliated organizations might have had pretty bad experiences, especially towards the end. I haven't heard any specific cases personally, but by all accounts, there were some pretty intense group dynamics and I can very much imagine that could have been quite harmful to people. I’m not saying this is the same and I don’t want to speak for anyone else’s experience, but I’ve been involved in intense ideological work cultures in the past myself. When everyone involved cares deeply about something, it can be really horrible when it goes wrong. This is why it's very plausible to me that something similar might have happened here.

I really don’t want anyone's negative experiences to get lost or overlooked because of the tribal fight taking place between Leverage and some of the people who don’t like us. I said in my post [LW · GW] that I want to defend the people in Leverage 1.0 who feel like they’ve been constantly harassed and maligned over the years. But I want to defend them from disingenuous attacks; that category does not include anyone from Leverage 1.0 sharing a genuine negative experience. I want to ensure that people who had negative experiences can have their voices heard, that any wrongdoings and harms are addressed, and that we as an organization learn and improve.

I’m going to send you a private message on LessWrong in case you would like to talk about any of this. I understand if you decide you don’t want to spend the emotional energy or don’t feel comfortable talking to me, but if there is anything I can do that would make it okay for you, or people you’re in touch with, to have a conversation with me, I’d like to try.

comment by Montaigne · 2021-09-26T06:31:49.845Z · LW(p) · GW(p)

It seems plausible that in the future, if there aren't already, there will be many groups that use the language and terminology of rationality to serve more self-interested and orthogonal objectives.

comment by Sniffnoy · 2021-09-26T07:02:22.003Z · LW(p) · GW(p)

I do worry about "ends justify the means" reasoning when evaluating whether a person or project was or wasn't "good for the world" or "worth supporting". This seems especially likely when using an effective-altruism-flavored lens that only a few people/organizations/interventions will matter orders of magnitude more than others. If one believes that a project is one of very few projects that could possibly matter, and the future of humanity is at stake - and also believes the project is doing something new/experimental that current civilization is inadequate for - there is a risk of using that belief to extend unwarranted tolerance of structurally-unsound organizational decisions, including those typical of "high-demand groups" (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on) without proportionate concern for the risks of structuring an organization in that way.

There is (roughly) a sequences post [LW · GW] for that. :P

comment by ChristianKl · 2021-10-10T17:07:54.435Z · LW(p) · GW(p)

For those who think the above description reads like one of a typical cult, it's worth reading how a description of an actual cult reads.

There's currently a cult trying to take over a place that hosts personal development seminars (and I know people personally who went there for seminars unrelated to the cult).

https://metamoderna.org/how-a-psychedelic-sex-cult-infiltrated-a-german-ecovillage/

comment by joecorabi · 2021-10-09T05:20:20.149Z · LW(p) · GW(p)

My name is Joe Corabi. I am a philosophy professor at Saint Joseph’s University in Philadelphia and a longtime friend of Geoff Anders.  I have known Geoff since we were grad students together at Rutgers.  We have also collaborated over the years on a number of philosophical projects, both related to and separate from Leverage Research. 

I have been a volunteer off and on at Leverage since its founding and I wanted to share my experience of Leverage in the hopes that it provides some unique evidence about the organization and its history.  I was troubled by the recent Less Wrong post and I spoke to Geoff about the possibility of writing something that can hopefully provide some additional context for those looking to evaluate the situation. 

I was initially drawn to Leverage by the enthusiasm of its members and Geoff’s vision for the organization.  In my view, which is from someone who has spent over 20 years studying philosophy in an intensive way, Geoff is a highly skilled philosopher who has both an expert knowledge of the field and a sensitivity to methodological concerns, the combination of which is quite rare.  In my view, professional analytic philosophers nowadays often tend to substitute questions that are readily answerable for questions that have deep significance, and they also use methods that encourage small, clever moves in argumentation over insights that will integrate well with a bigger picture of human knowledge and understanding.  I found Geoff to be unique among philosophers I had met in several ways.  Most notably, he was very well steeped in the history of philosophy without, as far as I could tell, succumbing to any tendency to fetishize that history.  He had a broad appreciation for the way that historical philosophers had different methodological approaches for addressing philosophical problems than most current philosophers, and he had a keen awareness of ways that the methods those philosophers employed could lead to improvements in philosophical understanding. I was someone trained in philosophy in a very ahistorical way and Geoff did a fantastic job of answering objections I had to emphasizing the study of historical figures.  He provided me with new approaches for reading historical figures that helped me to get much more out of them.  (I should also mention that I have never found him to be anything but a highly respectful interlocutor who addresses disagreement with charity and a spirit of rational dialogue.  I have never seen him treat anyone else who disagreed in any way different from this. The view of him insinuated in the post—that he was an authoritarian or master manipulator—just does not fit the picture of him I have assembled over the years.)

Based on my conversations with Geoff about philosophy and seeing his own philosophical work, I judged that he was someone who had unusual insights and potentially the ability to lead an organization that would make dramatic improvements to the world.   I started volunteering time with Leverage in whatever ways I could, occasionally visiting their original headquarters in New York City to make presentations and collaborate with Leverage employees.  I continued this practice after the organization moved to California. 

The projects I have volunteered for with Leverage have been diverse.  My involvement with the organization has waxed and waned over the years depending on my other commitments and on my sense of where my efforts would be most valuable.  I have had many positive experiences with Leverage—times when I felt that I was learning about theories and even whole fields that I knew little about and which seemed to promise significant insights.  At other times, I had the sense that there were aspects of Leverage that reproduced things that I did not like about academia.  For example, in one case, I spent significant time on a research project on intelligence amplification, submitted it, then felt that it got filed away without ever having much impact, or even being read by almost anyone.  (It turns out that I may have been wrong about this, but it was my impression at the time.  This is a fear that I and many other academics also have about our published academic work.)  In another case, I pledged to help a Google engineer (who I believe was connected with the rationalist community) with developing an online intelligence testing platform that a number of people at Leverage had an interest in testing and using. The original plan as I understood it was that I would provide significant theoretical consultation, but in the end he had a more or less complete vision for the project (an impressive one, honestly) and my role wound up being much more that of a grunt worker. I judged this to be a less valuable use of my time than other potential things I could be doing, so I did not continue with the project beyond the first few months.

In my debriefs with Geoff, I never felt any pressure to volunteer at the organization more than was comfortable for me, and I never felt any pressure to engage in charting or other more cutting-edge experimental techniques that were popular there.  I knew about at least some of these practices (probably most), and Geoff always good-naturedly invited me to try them out if I thought they would be helpful or I was interested in providing feedback on them. I did try some of these things out, particularly charting.  I found the results fairly impressive in my own case.  I was not fully convinced of all the underlying theoretical claims because I felt that there was not yet comprehensive enough evidence and I held some background philosophical positions that were in tension with the approach, but I was certainly impressed and wanted to learn more. 

Although I never felt pressure from Geoff, I also never felt that I was being “shut out” because I did not live in California or dedicate myself full-time to Leverage’s projects.  I found that Geoff was more available for casual conversations and informal brainstorming sessions early in the organization’s history than he was later (where our conversations tended to be shorter and more efficiently focused on official business), but I judged that to be a virtually inevitable consequence of the fact that the organization had ambitious goals and he had many demands on his time.  I always found him to be the same friendly and reasonable person, and I never saw any indication of increasing demand for agreement with him or any official organizational platform.  He was always welcoming to me without being pushy.  Interestingly, I did sometimes have the impression that I was being treated as an underling or subordinate by a couple of early employees (or perhaps volunteers) of Leverage, perhaps because I was not a member of the rationalist community or because I worked only part-time on Leverage-related projects. But those individuals have to the best of my knowledge not been associated with Leverage in any way since at least 2013.  All my other interactions with Leverage employees, while typically brief, have been very positive.

I have not been physically present at Leverage for several years, so I can’t comment on any interpersonal conflicts at the organization or how (or whether) the culture of Leverage 1.0 changed late in its history.  There does seem to be evidence of conflict and dysfunction there towards the end.  I can only reiterate that my own visits over the years were intellectually invigorating. While some of the projects at times seemed a bit hokey to me, much of what happened seemed exciting and was done as far as I could tell in a spirit of open inquiry and sensitivity to evidence. I was always struck by the decentralized feel of the organization when I visited—the fact that there were many different projects going on simultaneously that seemed to be controlled largely by individuals other than Geoff, with visions somewhat different from his.  So it seems especially ironic to me that Geoff has been accused of having an authoritarian presence or of leading some sort of personality cult.  All I can say is that this does not fit at all with my experiences of Leverage.

comment by jfalexander · 2021-10-07T02:29:20.108Z · LW(p) · GW(p)

Figured I'd chime in too—I'm Jordan Alexander and I was one of the Pareto Fellows back in 2016. I've been involved with the EA and rationality community to various degrees since then (Stanford EA, internship at CHAI, active GWWC pledge) so I thought I'd give my account of the program. I recognize that other people may have had different experiences during the program and that there may have been issues that I was not personally aware of as a participant in the program.  

As for my relationship with Leverage: I have a few friends at Leverage, though we're not in close contact. I participated in Paradigm Coaching (essentially a combination of personal and professional one-on-one coaching) for a few months at the end of 2019 and found it incredibly helpful while working on the mundane problem known as "job-hunting". Finally, one of my friends at Leverage reached out and asked me if I was interested in sharing my experience at the Pareto Fellowship after this post popped up. Frankly I'm annoyed that I have to do this but it seems unfair that these sorts of posts reappear every year. I work as a software engineer and have no professional or financial ties to Leverage. 

Here's an overall account of the program structure, going off of the archived Pareto Fellowship website:

  • The program was indeed split into training and project phases. The trainings were given by folks at Leverage or CEA, though the CEA folks had very close ties to Leverage and similar ways of thinking (some of the CEA folks also worked at Leverage following the program). This program was incubated at CEA but I'd personally judge it as accurate if one were to say that this was basically a Leverage-run and CEA-funded program.
  • The training topics were as described on the Pareto website. I thought the training was great, though there were definitely a few fellows who were familiar with the content beforehand, and if I were in their place I would've wanted it to be compressed further.
  • The project phase lacked strong mentorship and wasn't quite able to satisfy every Fellow's interests. I'd say that the Pareto Fellowship—in terms of material goods—mainly provided food, housing and a fancy title that made it easier for fellows to find their own mentors.

With all this said, then, in terms of immediate professional growth the Pareto Fellowship wasn't that great. What the Pareto Fellowship was great for, in my eyes, was personal growth. This was echoed among a few fellows, usually along the lines of "I didn't do anything incredibly useful in the near-term but I experienced the most personal growth ever during that summer." This was true for myself as well and I still consider it a good investment that I'd make again. Again, though, this is not meant to be a universal judgement of the program. 

Still, it's odd to see these posts popping up again and again because what went best for me in the Pareto Fellowship was the content that came from Leverage. In my eyes it worked incredibly well as an environment for young professionals to come together and spend a couple months deeply and rigorously examining their beliefs and what they'd focus on throughout their career. 

Now, the EA/rationalist community has produced a few other programs like this (SPARC, CFAR) that are generally subjected to far less scrutiny and name-calling than Leverage has been. As such, here's a defensive account of the program that addresses some of the concerns in the above post: 

  • The day-to-day classes of the Pareto Fellowship were hosted at the Leverage building on Lake Merritt. Most of the Fellows lived there, though alternative housing was provided in a Berkeley apartment. I lived in the Berkeley housing for a month or so during the program but later moved to the Leverage house because it had more amenities. To my knowledge, everyone who wanted to live off-site was able to do so.
  • I kept a public personal blog at the time and briefly mentioned the Pareto Fellowship in it. I mentioned it once and someone suggested I run it by the program managers, who said it was fine. This was a completely unremarkable exchange. I am aware of information-management agreements with regards to content that's originally designed by Leverage. They do not seem meaningfully different from an NDA or a confidentiality agreement that one might sign in similar contexts (trade secrets, distribution rights, etc.).
  • Leverage has indeed developed unique introspective techniques. I requested to participate in a charting session during the program and found it helpful. At no point did I feel pressured to do so. I also find Leverage's frameworks generally helpful for thinking and continue to use them occasionally. I think these introspective techniques can be compared reasonably well to therapeutic techniques, though Leverage doesn't claim to do the same work licensed therapists do. Still, I've used professional therapy services (Cognitive-Behavioral Therapy and psychoanalysis in particular, with two different certified mental health professionals completely disconnected from Leverage) and found them to be complementary to the techniques taught at Leverage.
  • Leverage did not try to actively recruit me nor do I recall any explicit opportunities to apply for a job at Leverage (perhaps this was unique to my situation, I was 18 and a rising college freshman with no inclinations to drop out).
  • I read "the Leverage plan" around that summer. Emphatically, the plan did not state that Leverage would "literally take over US and/or global governance"; rather, it would train skilled and value-aligned individuals to occupy positions of power. Importantly: there was no explicit coordination with Leverage and no top-down hierarchy. The idea was to equip individuals with skills and have them do their own thing in various important roles. This is pretty much what any skill-building or movement-building organization does; the folks at Leverage just happen to have had a very detailed plan.
  • The "vibe" within Leverage was overall pessimistic about existential risk. This was the case for most Berkeley rationalist-adjacent orgs I encountered at the time. There was not an explicit "narrative" or stated Leverage position.

Now, here's a positive account of the program that addresses the impact it had on me personally:

  • As a newcomer to the Bay Area EA community I was able to live in a house of thoughtful people and foster deep connections with other Pareto Fellows that I'm still in touch with. In purely professional terms, I made between 5 and 10 strong contacts that I'd be happy to work with on future projects (and have done so already on a few occasions). This is an insanely high hit-rate for a program of roughly 20 people, and I doubt it would have been possible without the co-living setup. In non-professional terms, I made lots of friends that I still care about today.
  • As a rising college freshman I was given time for intellectual exploration with a group of smart, independent thinkers that I respected. The program was relatively unstructured, which was probably less helpful for some of the older fellows but gave me ample time to explore my own interests. By the end of summer, and after an amazing fireside chat with Nate Soares, I'd decided to take a stab at doing AI Safety research. It's not what I work on these days, for reasons entirely disconnected from Leverage, but, especially in 2016, "AI Safety" was a far less established field, and it would've been difficult to find a group of people who took it seriously in the way Leverage and Leverage-adjacent folks did (even while Leverage itself was focused on global coordination problems).
  • Again, as a rising college freshman I found it incredibly valuable to have a set of structured introspective practices that I could rely on during college. I'm glad that the folks at Leverage created a space where it was OK to have deep conversations about the structure of one's beliefs and values. I never felt "directed" towards any particular train of thought (though perhaps this was purely coincidental). These conversations are hard to have, and it can be quite off-putting to re-examine something one has held on to for a while (at the time, for example, I was questioning my belief in Catholicism), but the Pareto/Leverage staff handled all of these conversations with me with a great deal of care. It's difficult to market these kinds of things in advance, and frankly I also expected the Pareto Fellowship to be more directly career-oriented than it ended up being, but I'd be disappointed if we threw out this kind of space entirely under the misguided idea that emotional depth and introspection equate to "cultishness" and ought to be avoided in any kind of "official" CEA-sponsored summer program.

Finally, I'm generally annoyed that one of these posts surfaces every year in what seems like an attempt to unearth some deep secret at the heart of Leverage. I genuinely think that these are well-intentioned people working on an ambitious and unique project. Building these projects is hard and I wish this community had more respect for that. 

Replies from: jfalexander, Ruby
comment by jfalexander · 2021-10-14T06:05:35.713Z · LW(p) · GW(p)

Given the comments that have surfaced, it sounds like my annoyance at these posts was unjustified: 1) I underestimated how long it takes for structural weaknesses to surface and have effects that are clearly visible to outsiders, and 2) I underrated how valuable it was to open a space for people to share their experiences with Leverage. I'm glad that the original post was able to do this in a way that preserved anonymity for the people who understandably needed it.

I also want to highlight that while I still stand by my personally positive experience at the Pareto Fellowship in 2016, this is not meant to be a universal account of events [and certainly not of Leverage Research]; a proper judgement of the program itself would involve polling a representative sample of former Pareto Fellows.

Finally, I recognize that it's especially difficult to recount experiences when someone has experienced deep trauma, so thanks to Zoe Curzi for the courage involved in telling her story, and to anyone else sharing their experiences, anonymously or otherwise.

comment by Ruby · 2021-10-13T21:57:12.204Z · LW(p) · GW(p)

Thanks for taking the time to recount your experiences there.

I do want to register that I expect the experience afforded to fellows as part of a few-month program to be different from, and milder than, what long-term employees would experience.

comment by mingyuan · 2021-09-24T20:45:43.609Z · LW(p) · GW(p)

Am I crazy or was something really similar to this, with the same thing of asking for a LW moderator to vouch, posted like a year ago? I didn't immediately find it by searching.

Replies from: mr-hire, Sniffnoy
comment by Matt Goldenberg (mr-hire) · 2021-09-25T17:14:59.897Z · LW(p) · GW(p)

https://forum.effectivealtruism.org/posts/qYbqX3jX4JnTtHA5f/leverage-research-reviewing-the-basic-facts

Replies from: BayAreaHuman
comment by BayAreaHuman · 2021-09-25T18:00:22.700Z · LW(p) · GW(p)

This is also a useful resource, and the pingbacks link to other resources.

I want to gesture at "The Plan" (best viewed as a PDF), linked from Gregory Lewis's comment (https://forum.effectivealtruism.org/posts/qYbqX3jX4JnTtHA5f/leverage-research-reviewing-the-basic-facts?commentId=8goitqWAZfEmEDrBT [EA(p) · GW(p)]), as supporting evidence for the explicit "take over the world" vibe, in terms of how exactly beneficial outcomes for humanity were meant to result from the project.

comment by Sniffnoy · 2021-09-26T07:05:21.276Z · LW(p) · GW(p)

I'm not involved with the Bay Area crowd but I remember seeing things about how Leverage is a scam/cult years ago; I was surprised to learn it's still around...? I expected most everyone would have deserted it after that...

Replies from: cousin_it
comment by cousin_it · 2021-09-28T22:02:37.893Z · LW(p) · GW(p)

This reminds me of the focusing/circling/NVC discussions, one group (to which I belonged) was like "this is obviously culty mindfuckery, can't you see" and the other group couldn't see, and arguments couldn't bridge that gap. It's like how some people can recognize bullying and others will say "boys will be boys", while looking at the exact same situation.

comment by iceman · 2021-09-26T21:10:14.850Z · LW(p) · GW(p)

I can verify that I saw some version of their document The Plan[1] (linked in the EA Forum post below) in either 2018 or 2019, while discussing Leverage IRL with someone rationalist-adjacent that I don't want to doxx. While I don't have first-hand knowledge (so you might want to treat this as hearsay), my interlocutor did, and told me that Leverage members believed they were the only ones with a workable plan, along with describing the veneration of Geoff.

[1]: I don't remember all of the exact details, but I do remember the shape of the flowchart and that looks like it. It's possible that my interlocutor also got it from Gregory Lewis, but I don't think so.

comment by TekhneMakre · 2021-09-24T12:00:08.525Z · LW(p) · GW(p)

I'm not sure what the meaning, if any, of the following fact is, but: I notice that I would feel very positively about Leverage as it's portrayed here if there weren't relationships with multiple younger subordinates (e.g. if the leader had been monogamously married), and as it is I feel mildly negative about it on net.

Replies from: lsusr, Viliam, BayAreaHuman
comment by lsusr · 2021-09-26T19:04:29.789Z · LW(p) · GW(p)

That wasn't necessary evidence for me. The secrecy + "charting/debugging" + "the only organization with a plan that could possibly work, and the only real shot at saving the world" is (if true) adequate to label the organization a cult (in the colloquial sense). These are all ideas/systems/technologies that are consistently and systematically used by manipulative organizations to break a person's ability to think straight. Any two of these might be okay if used extremely carefully (psychiatry uses secrecy + debugging), but having all three brings it solidly into cult territory. Also, psychiatry has lots of rules to prevent abuse, including public, well-established ethical standards.

Are Leverage's standard operating procedures auditable knowledge to outsiders? If not, this is the mother of all red flags and we should default to "cult".

Edit: LarissaRowe didn't reply to this comment because Leverage doesn't have a leg to stand on.

Edit ×2: Shaming someone into a response violates the norms of Less Wrong. The first edit was a mistake. I apologize.

Replies from: ChristianKl, TekhneMakre
comment by ChristianKl · 2021-09-27T08:38:10.456Z · LW(p) · GW(p)

psychiatry uses secrecy 

In psychiatry, there's no secrecy around treatment protocols, and there are no secrecy rules preventing patients from sharing their experience.

Replies from: lsusr
comment by lsusr · 2021-09-27T08:40:07.085Z · LW(p) · GW(p)

That's a good point. The psychiatrist (who has power) is sworn to secrecy but the patient (who is vulnerable) isn't.

comment by TekhneMakre · 2021-09-27T05:45:03.692Z · LW(p) · GW(p)

>"the only organization with a plan that could possibly work, and the only real shot at saving the world"

It's definitely a warpy sort of belief. The issue to me, and why I could still feel positively about such an organization, is that the strong default for people and organizations might be a false lack of hope [LW · GW]. In which case, it might be correct to have what seems like a delusional bubble of exceptionalism. It still seems to have some significant bad effects, and is still probably partly delusional, but if we don't know how to get the magic of Hope without some delusion, I don't think that means we should throw away Hope.

>Are Leverage's standard operating procedures auditable knowledge to outsiders?

It would be nice to live in a world where this standard were good and feasible, but I don't think we do. Not holding this standard does open us up for the possibility of all sorts of abuse hiding in relative secrecy, but unfortunately I don't see how to avoid that risk without becoming ineffective.


I think the things you point out are big risk factors, but to me don't seem to indicate a "poison" in the telos of the organization. Whereas sexual/romantic stuff seems like significant evidence towards "poison", in the sense of "it would actually be bad if these people were in power".

Replies from: ChristianKl, ESRogs, Vladimir_Nesov
comment by ChristianKl · 2021-09-27T08:30:46.394Z · LW(p) · GW(p)

The real problem is holding the belief that you are the only organization with a plan that might work, while at the same time requiring secrecy that cuts participants off from outside feedback that might make them doubt that this is the case. If you then add strong self-modification techniques that also strengthen the belief, that's no good environment.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-09-27T09:10:04.459Z · LW(p) · GW(p)

I'm not sure how to pinpoint disagreement here.

I think it's bad, possibly very bad, to have delusional beliefs like this. But I think by default we don't already know how to decouple belief from intention. Saying "we're the only ones with a plan to save the world that might work" is part belief (e.g., it implies that you expect to always find fatal flaws in others' world-saving plans), and part intention (as in, I'm going to make myself have a plan that might work). We also can't by default decouple belief from caring. Specialization can be interpreted as a belief that being a certain way is the best way for you to be; it's not true, objectively, but it results in roughly the same actions. The intention to make your plans work, and caring about the worlds in which you can possibly succeed, are good, and if we can't decouple these things, it might be worth having false beliefs (though of course it's also extremely worth becoming able to decouple belief from caring and intention, and ameliorating the negative effects on the margin by forming separate beliefs about things that you are able to decouple, e.g. using explicit reason to figure out whether someone else's plan might work, even if intuitively you're "sure" that no one else's plan could work).

I think it's clearly bad to prevent feedback for the sake of protecting "beliefs". But secrecy makes sense for other reasons. (Intentions matter because they affect many details of the implementation, which can add up to large overall effects on the outcomes.)

Replies from: ChristianKl
comment by ChristianKl · 2021-09-30T17:16:46.724Z · LW(p) · GW(p)

I think there are two kinds of secrecy. One is about not answering every question that outsiders have. The other is about forbidding insiders from sharing information with the outside.

Power easily corrupts processes. Playing around with strong self-modification is playing with a lot of power.

Secrecy has a lot of easily visible benefits because you reduce your attack surface. But it has its costs, and it's generally wise to be skeptical of versions of it that prevent insiders from sharing non-personal information when doing radical projects.

comment by ESRogs · 2021-09-29T04:28:59.307Z · LW(p) · GW(p)

>"the
>Are

Formatting note — if you put a space between the '>' and the next character, it'll format correctly as a proper block quote.
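
For example (a minimal illustration, assuming LessWrong's standard Markdown rendering):

> There is rarely a reason to prefer a particular step of a large journey over all other steps.

With the space after the '>', the line renders as a proper block quote; without it, the '>' may be treated as literal text.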

comment by Vladimir_Nesov · 2021-09-27T11:37:00.221Z · LW(p) · GW(p)

if we don't know how to get the magic of Hope without some delusion

Planning for success doesn't require knowledge of success, and doesn't get better if you believe things that can't be known. Hope is a good concept for this situation: a risk of success where the probability of success needn't be significant; it's the value of success that makes hope relevant.

Hope makes sense as a concept of curiosity more than as one of decision making, so that you are not vulnerable to misleading expected utility calculations, but get some guidance for filling in the chart of possible plans, taking steps towards enacting them.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-09-27T12:05:37.246Z · LW(p) · GW(p)

Yeah, if I follow you, I think I agree that Hope is most essential in the realm of curiosity. It seems like Leverage was/is aimed at realms that are deeply ontologically uncertain (what are the possibilities for using my mind radically more effectively, what really matters for affecting the world), which entails that curiosity and probing possibility-space are nearly permanent, central features of what they're trying to do. To say it more concretely: asking a really weird question and trying out really weird answers might feel intuitively more appealing if you think that you're exceptional, and if you think your social context is exceptionally able to pick up on weird but true/important results.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-09-27T12:32:29.092Z · LW(p) · GW(p)

might feel intuitively more appealing if you think that you're exceptional

More appealing compared to what alternative? Don't stand still, do the work. There is rarely a reason to prefer a particular step of a large journey over all other steps. That's the character of curiosity.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-09-27T12:38:38.044Z · LW(p) · GW(p)

Hm, it seems like you're arguing against the stance I'm describing, where my main point is just that this is a stance many people take. I sometimes find that I've been taking a stance like this; when I reflect on it I've never agreed with it, but that doesn't mean it's not happening. Maybe you're rejecting putting effort into accommodating this stance, rather than unraveling it?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-09-27T12:59:44.147Z · LW(p) · GW(p)

my main point is just that this is a stance many people take [...] putting effort into accommodating this stance, rather than unraveling it

Formulating what might be going on gives something specific to talk about. But then what's the point to settling on an emotional valence? Discussing the error seems interesting, regardless of what attitude that props up. The patch I proposed actually preserves the positive qualities, isn't a demonstration of their absence.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-09-27T13:18:28.995Z · LW(p) · GW(p)

> There is rarely a reason to prefer a particular step of a large journey over all other steps. That's the character of curiosity.

I didn't get the essence of your proposal from this. Could you phrase this as advice to, for example, Elon Musk (taking Elon as an example of someone who's making good use of slightly delusional "beliefs" about his plans, while still remaining very solidly in contact with reality)?

Replies from: ChristianKl, Vladimir_Nesov
comment by ChristianKl · 2021-09-27T14:07:03.366Z · LW(p) · GW(p)

Elon is one of the least delusional people. Not many people start companies like Elon when they believe there's only a ten percent chance of success.

Elon sets goals that often won't be achieved, but that's not the same as having delusional beliefs.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-09-27T14:13:24.439Z · LW(p) · GW(p)

I agree he's exceptionally well in contact with reality. But also part of his "setting goals" involves making "predictions" about timelines. Which are very often wrong, quantitatively (while being correct "in spirit" in the sense that they achieve the goal, just later than "predicted").

Replies from: ChristianKl
comment by ChristianKl · 2021-09-27T15:19:14.220Z · LW(p) · GW(p)

Elon generally is not public about the likelihood of various events in timelines and speaks about his timelines as being optimistic guesses. 

comment by Vladimir_Nesov · 2021-09-27T13:42:25.557Z · LW(p) · GW(p)

When a civilization gets curious, each individual only gets to work on a few observations, and most of these observations are not going to be foreknowably more important than others, or useful in isolation from others that are not even anticipated at the time, yet the whole activity is worthwhile. So absence of a reason to pursue a particular activity compared to other activities is no reason for not taking it seriously. It's only presence of a reason to take up a different activity that warrants change.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-09-27T13:56:02.271Z · LW(p) · GW(p)

What if there's an abundance of specific reasons to take up various activities, and which ones you want to invest in seems to depend heavily on "follow through", i.e. "are people going to keep working on this"?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-09-27T14:15:15.287Z · LW(p) · GW(p)

abundance of specific reasons to take up various activities [...] "are people going to keep working on this"?

With some transitivity of preference and a world that's not perpetually chaotically unsettled, people or organizations should be able to find something to work on for which they have no clearly better alternatives. My point is that this is good and worth doing well even when there is no reason to see what they are currently doing as clearly better than the other things they might've been doing instead. And if not enough people work on something, it won't get done, which is OK if there is no reason to prefer it to other things people are actually working on (assuming that neglectedness is not forgotten as a reason to prefer something).

Replies from: TekhneMakre
comment by TekhneMakre · 2021-09-27T14:26:30.919Z · LW(p) · GW(p)

And if not enough people work on something, it won't get done, which is OK if there is no reason to prefer it to other things people are actually working on

Well, one might prefer that something rather than nothing gets done. In which case it matters whether other people will work on it. In particular, when an organization with multiple people "decides" to do something, that's tied up with believing that they will work on it, which affects motivation to work on it.

even when there is no reason to see what they are currently doing as a clearly better alternative to the other things they might've been doing instead

So, if you believe that you're doing an "objectively" better plan, in particular you think that other people will recognize that your plan is good, and will want to work on it; so your belief is tied up with acting in a way that will be successful if other people will continue your work.

comment by Viliam · 2021-09-25T21:18:41.143Z · LW(p) · GW(p)

It suggests an alternative account of the motivation behind the entire project. More disturbingly, the alternative seems to explain some facts better, such as why, after all that work and money spent, after all the grandiose secret plans, there is still no tangible output.

EDIT: The part about "no tangible output" was not fair; I apologize for that. I am not updating the comment, because it would feel like moving the goalpost.

Replies from: kerry-vaughan, BayAreaHuman, mr-hire
comment by Kerry Vaughan (kerry-vaughan) · 2021-09-28T00:42:16.578Z · LW(p) · GW(p)

I appreciate the edit, Viliam.

I know that it was a meme about Leverage 1.0 that it was impossible to understand, but I think that is pretty unfair today. If anyone is curious, here are some relevant links:

We're no longer engaged with the Rationality community so this information might not have become common knowledge. Hopefully, this helps.

comment by BayAreaHuman · 2021-09-25T22:17:28.916Z · LW(p) · GW(p)

I added a sub-bullet to the main post, to clarify my epistemic status on that point.

Replies from: BayAreaHuman
comment by BayAreaHuman · 2021-09-27T21:16:43.905Z · LW(p) · GW(p)

I have now made an even more substantial edit to that bullet point.

comment by Matt Goldenberg (mr-hire) · 2021-09-26T20:47:49.333Z · LW(p) · GW(p)

I think Bismarck Analysis (consulting), Paradigm Academy (training), and Reserve (cryptocurrency) all came out of Leverage.

comment by Jonah Stephen Swersey (jonah-stephen-swersey) · 2021-09-27T09:44:11.807Z · LW(p) · GW(p)

I should clarify upfront that I am not a rationalist, and am not a fan of LessWrong. 

That said, I have some experience when it comes to... this sort of thing.

So when I was a little younger, I was the figurehead and leader of a sex cult. (Oddly enough, I did this without ever really understanding that it was, in fact, a sex cult. One of my best friends described this as a "Jerry Smith plot", which I found hilarious.) This cult was, in practice, a discord server focused around my erotic hypnosis work. I copied the model from another server that was definitely a sex cult, and tried to strip out all of the culty elements and just leave the aesthetics (because a lot of us liked the aesthetics). But you really can't reconstitute the structure of a high-control group without, in various ways, reconstituting the behavior of a high-control group. It doesn't work - the culty shit works its way back in if you're not extremely careful. And I was not careful, for reasons that may be obvious if you think about the perks one gets as the figurehead of a sex cult. A lot of people got hurt.

Why bring that up? Because hoo boy does this tick a lot of similar boxes. 

A lot of things scream "high control environment", which... okay, not necessarily all red flags. And he's dating subordinates. Again, not always bad. Like, in the traditional business world, a boss sleeping with one of their subordinates at all is considered a red flag - not illegal, but certainly a really bad idea due to the power dynamics in play. 

But given what this post says, it sounds like he's essentially using his star power to specifically attract young women, then grooming them, isolating them within the group, using various forms of psychological manipulation to make them more invested, and connecting their housing to their employment. My brain takes these facts, and then quickly and easily maps them onto my experience as a member of one sex cult and the leader of another. I find the result very concerning.

But even if all of that isn't enough of a red flag, consider that the OP of this post has made it anonymously and seems to find it important that this organization not think ill of them. 

Replies from: orthonormal, cousin_it
comment by orthonormal · 2021-10-13T06:52:15.365Z · LW(p) · GW(p)

There's a lot going on in this comment, but I note with interest that this is the first time I've seen someone weigh in on questions of cultish behavior from the perspective of a former cult leader. 

I'm fascinated with the claim that if you take on the outer facade of a cult, you now have a strong incentive gradient to turn up the cultishness (maybe because you're now drawing in people who are looking for more of that, and driving away anyone who's put off by it [? · GW]). Obviously the claim needs more than one person's testimony, but it makes sense.

I wonder if some early red flags with Leverage (living together with your superiors who also did belief reporting sessions with you, believing Geoff's theories were the word of god, etc) were explicitly laughed off as "oh, haha, we know we're not a cult, so we can chuckle about our resemblances to cults".

comment by cousin_it · 2021-09-27T10:56:19.046Z · LW(p) · GW(p)

I think from a world and historical perspective, dating subordinates is a very common thing. The American cult bundle of traits is much more specific and rare. For me, the first red flag is shared housing for followers of the idea. Any movement that does it is already kind of weird to me (including the rationalist movement). If there's also some kind of group psychological exercise, that takes it all the way to "nope" (again, including some parts of the rationalist movement).

Replies from: agrippa
comment by agrippa · 2021-10-04T03:59:34.886Z · LW(p) · GW(p)

I will say that the EA Hotel, during my 7 months of living there, was remarkably non-cult-like.  You would think otherwise given Greg's forceful, charismatic presence /j

comment by LarissaRowe · 2021-09-27T05:25:52.172Z · LW(p) · GW(p)

Hi BayAreaHuman, 

I just posted an update [LW · GW] on behalf of Leverage Research to LessWrong, along with an invite to an AMA with Leverage Research next weekend, as it seems from the comments that there isn't a lot of common knowledge about our current work or other aspects of our history. I encourage people to read this [LW · GW] for additional context, and I hope the OP will be able to update this post to incorporate some of that.

I also want to briefly address some of the items raised here.

 

Information management policies

Leverage Research has for a long time been concerned about the potential negative consequences of the misuse of knowledge garnered from research. These concerns are widely shared for research in the hard sciences (e.g., nuclear physics), but are valid as well for the social sciences.

Starting in 2012, Leverage Research had an information management policy designed to prevent unintended negative consequences from the premature dissemination of information. Our information policy from 2012-2016 required permission for the release of longform information on the internet. We had an information approval team, with most information release requests being approved. In 2016, the policy was revised, in part to give blanket permission for staff to share longform information on the internet unrelated to their work at Leverage. Based on our ED's recollection, in no case was permission withheld for the online publication of regular personal information.

Our information management policy aimed to balance simplicity and usability with effectiveness, and certainly did not get everything right. One of the negative consequences of our information policy, as we have learned, is the way it made some regular interactions with people outside of the relevant information circles more difficult than intended. We intend to learn from this experience and do better with information management in the future.
 

Dangers and harms from psychological practices

As mentioned in my update post [LW · GW], we are very concerned about potential harms to individuals from experimenting with psychological tools, and so, when we begin to distribute some of these tools to the public, we will include in the release descriptions of the wide variety of potential near-term and long-term dangers from psychological experimentation that we are aware of.

The post mentions “hearing people who did lots of charting within Leverage report that it led to dissociation and fragmentation, that they have found difficult to reverse.” We believe that the tools we will release to the public, including Belief Reporting and basic charting, are generally safe, and will do our best to alert people to the potential dangers.

If anyone has experienced negative effects from psychological experimentation, including with rationality training, meditation, circling, Focusing, IFS, Leverage's charting or Belief Reporting tools (or word-of-mouth copies of these tools), or similar techniques, please do reach out to us at contact@leverageresearch.org. We are keen to gain as much information as possible on the harms and dangers as we prepare to release our psychology research.
 

Dating policies

From 2011–2019, Leverage did not focus on the development of standard professional norms or policies. We had an employee handbook covering equal employment opportunity, sexual and other forms of harassment, company conduct, and complaints procedures, but had no policy (and still have no policy) on who should date whom. It is true that, over the organizations' history, our Executive Director had three long-term consensual relationships with women employed by Leverage Research or affiliated organizations. Managing the potential for abuses by those in positions of power is very important to us. If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at larissa@leverageresearch.org or larissa.e.rowe@gmail.com.

Following 2019, Leverage began to prioritize the development of professional standards. We expect to develop policies on dating in the workplace and other topics as part of this effort. As the HR representative at Leverage Research, developing these standards further is my responsibility.

 

Charting/debugging was always optional

The post claims that “Members who were on payroll were expected to undergo charting/debugging sessions with a supervisory "trainer", and to "train" other members.” This is inaccurate, and I’m not sure how this misunderstanding could have occurred.

Neither charting nor debugging was ever required of any person at any time at Leverage, either as part of their work or, prior to being hired, as part of the hiring process. Many individuals chose to be charted because of their interest in it, either for work or self-improvement-related reasons. But charting, as well as other psychological interventions, was and should remain strictly voluntary. Individuals who were uninterested in charting could have (and did) study other topics.

I hope this, along with my LessWrong forum post [LW · GW], helps to answer some of the questions and concerns raised here. If you have any questions not answered by this comment, my post, or other materials online please feel free to email me (larissa@leverageresearch.org) or join us at our virtual office next weekend.


 

Replies from: beth-barnes, Davidmanheim, BayAreaHuman, lsusr, lsusr
comment by Beth Barnes (beth-barnes) · 2021-09-28T20:56:26.374Z · LW(p) · GW(p)

If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at larissa@leverageresearch.org or larissa.e.rowe@gmail.com

I would suggest that anything in this vein should be reported to Julia Wise, as I believe she is a designated person for reporting concerns about community health, harmful behaviours, abuse, etc. She is unaffiliated with Leverage, and is a trained social worker.

Replies from: guzey
comment by guzey · 2021-09-29T10:48:51.022Z · LW(p) · GW(p)

(deleted)

Replies from: juliawise, beth-barnes
comment by juliawise · 2021-09-30T15:14:41.248Z · LW(p) · GW(p)

This was indeed a big screwup on my part.  Again, I'm really sorry I broke your trust.

Replies from: juliawise, ChristianKl
comment by juliawise · 2021-10-04T17:20:53.655Z · LW(p) · GW(p)

To add detail about my mistake:

When you asked if you could confidentially send me a draft of your post about Will's book to check, I said yes.

The next week you sent me a couple more emails with different versions of the draft. When I saw that the draft was 18 pages of technical material, I realized I wasn't going to be a good person to review it. That's when I forwarded it to someone on Will's team, asking if they could look at it instead of me.

I should never have done that, because your original email asked me not to share it with anyone. For what it’s worth, the way that this happened is that when I was deciding what to do with the last email in the chain, I didn't remember and didn't check that the first email in the chain requested confidentiality. This was careless of me, and I’m very sorry about it.

I think the underlying mistake I made was not having this kind of situation flagged as sensitive in my mind, which contributed to my forgetting the original confidentiality request. If the initial email had been about some more personal situation, I am much more sure it would have been flagged in my mind as confidential. But because this was a critique of a book, I had it flagged as something like “document review” in my mind. This doesn’t excuse my mistake - and any breach of trust is a serious problem given my role - but I hope it helps show that it wasn’t intentional.

I now try to be much more careful about situations where I might make a similar mistake.

Replies from: juliawise, idlenow, ChristianKl
comment by juliawise · 2021-11-16T20:24:15.375Z · LW(p) · GW(p)

I've now added info on this to the post about being a contact person [EA(p) · GW(p)] and to CEA's mistakes page.

comment by idle (idlenow) · 2021-10-13T18:40:27.161Z · LW(p) · GW(p)

Personally, I don't really blame you or think less of you for this screwup. I never got the impression that you are the sort of person who should be sent confidential book review drafts. Maybe you'd disagree, but that seems like a misunderstanding of your role to me.

It seemed clear to me that you made yourself available to confidential reports regarding conflict, abuse, and community health. Not disagreements with a published book. It makes sense that you didn't have a habit of mentally flagging those emails as confidential.

Regardless, I trust that you've been more careful since then, and I appreciate how clearly you own up to this mistake.

I want to offer my +1 that I strongly believe Julia's trustworthy for reports regarding Leverage.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-23T15:44:16.640Z · LW(p) · GW(p)

It makes sense that you didn't have a habit of mentally flagging those emails as confidential.

I would generally expect that if I give someone access to a draft of any kind and they want to forward it to someone else, they put the author of the draft in the CC. Even in the absence of a promise of confidentiality, I consider sharing someone's draft without their permission, and withholding the information that you shared it, to be bad behavior.

comment by ChristianKl · 2021-10-09T08:28:54.429Z · LW(p) · GW(p)

This doesn’t excuse my mistake - and any breach of trust is a serious problem given my role - but I hope it helps show that it wasn’t intentional.

Saying "my mistake wasn't intentional but accidental" doesn't, of course, show that it wasn't intentional. The only thing that would show it wasn't intentional would be accepting consequences meaningful enough that it doesn't look like your mistake benefited CEA.

comment by ChristianKl · 2021-10-09T08:26:53.819Z · LW(p) · GW(p)

Saying "I'm sorry I broke your trust" without accepting any consequences for it feels cheap. To me, such a mistake means you owe something to guzey.

One thing you could have done, if you actually cared, would have been to advocate for guzey in this exchange, even if that went against your personal positions.

Admitting the mistake only in the comments, and not in a more visible manner, also doesn't feel like treating it seriously enough. It likely deserves the same treatment as the mistakes on https://www.centreforeffectivealtruism.org/our-mistakes

Replies from: willbradshaw
comment by willbradshaw · 2021-10-14T01:42:43.572Z · LW(p) · GW(p)

Admitting the mistake only in the comments, and not in a more visible manner, also doesn't feel like treating it seriously enough. It likely deserves the same treatment as the mistakes on https://www.centreforeffectivealtruism.org/our-mistakes

For what it's worth, I do think this is probably a serious enough mistake to go on this page.

comment by Beth Barnes (beth-barnes) · 2021-09-30T03:31:29.341Z · LW(p) · GW(p)

Wow, that is very bad. Personally I'd still trust Julia as someone to report harms from Leverage to, mostly from generally knowing her and knowing her relationship to Leverage, but I can see why you wouldn't.

comment by Davidmanheim · 2021-10-14T16:35:54.306Z · LW(p) · GW(p)

One of the negative consequences of our information policy, as we have learned, is the way it made some regular interactions with people outside of the relevant information circles more difficult than intended.

 

Is Leverage willing to grant a blanket exemption from the NDAs which people evidently signed, to rectify the potential ongoing harms of not having information available? If not, can you share the text of the NDAs?

comment by BayAreaHuman · 2021-09-27T17:52:09.836Z · LW(p) · GW(p)

Hi Larissa -

Dangers and harms from psychological practices

Please consider that the people who most experienced harms from psychological practices at Leverage may not feel comfortable emailing that information to you. Given what they experienced, they might reasonably expect the organization to use any provided information primarily for its own reputational defense, and to discredit the harmed parties.

Dating policies

Thank you for the clarity here.

Charting/debugging was always optional

This is not my understanding. My impression is that a strong expectation was established by individual trainers with their trainees, and that charting was generally done during the hiring process, even if the stated policy was that it was not required/mandatory.

Replies from: ChristianKl
comment by ChristianKl · 2021-09-27T18:12:07.036Z · LW(p) · GW(p)

It seems that Leverage is currently planning to publish a bunch of their techniques, and from Leverage's point of view there are considerations that releasing the techniques could be dangerous for the people using them. To me that does suggest a sincere desire to use provided information in a useful way.

See from https://www.lesswrong.com/posts/3GgoJ2nCj8PiD4FSi/updates-from-leverage-research-history-and-recent-progress [LW · GW] :

If you are interested in being involved in the beta testing of the starter pack, or if you have experienced negative effects from psychological experimentation, including with rationality training, meditation, circling, Focusing, IFS, Leverage’s charting or belief reporting tools (or word-of-mouth copies of these tools), or similar techniques please do reach out to us at contact@leverageresearch.org. We are keen to gain as much information as possible on the harms and dangers as we prepare to release our psychology research.

If there are particular people who feel that they have been harmed, it would be great to still have a way for the information to reach Leverage. Maybe a third party could be found to mediate the conversation?

Is there anything else you could think of that would be a credible signal that Leverage is sincere about seeking the information about harms?

Replies from: ooverthinkk
comment by ooverthinkk · 2021-09-28T20:21:01.054Z · LW(p) · GW(p)

Why is this getting downvotes? It's a constructive comment containing a good idea (mediation to address concerns) and pointing at a source of transparency, which everyone here has been asking for.

I'm not a rationalist, and I'm new to actually saying anything on LW (despite lurking for 4ish years now - and yes, I made this alt today), but I'd expect this community to be more open-minded about a topic than what I'm seeing. By "what I'm seeing" I mean people just throwing rocks and being unwilling to find any way to work with someone who's trying to address the concerns of the OP and commenters.

Replies from: Lukas_Gloor, Vladimir_Nesov, gjm
comment by Lukas_Gloor · 2021-10-01T10:58:53.689Z · LW(p) · GW(p)

I didn't downvote ChristianKl's comment, but I feel like it's potentially a bit naive.

>Is there anything else you could think of that would be a credible signal that Leverage is sincere about seeking the information about harms?

In my view, the question isn't so much about whether they genuinely don't want harms to happen (esp. because harming people psychologically often isn't even good for growing the organization, not to mention the reputational risks). I feel like the sort of thing ChristianKl pointed out is just a smart PR move given what people already think about Leverage, and, conditional on the assumption that Leverage is concerned about their reputation in EA, it says nothing about genuine intentions.

Instead, what I'd be curious to know is whether they have the integrity to be proactively transparent about past mistakes, radically changed course when it comes to potentially harmful practices, and refrain from using any potentially harmful practices in cases where it might be advantageous on a Machiavellian-consequentialist assessment. To ascertain those things, one needs to go beyond looking at stated intentions. "Person/organization says nice-sounding thing, so they seem genuinely concerned about nice aims, therefore stop being so negative" is a really low bar and probably leads to massive over-updating in people who are prone to being too charitable. 

Replies from: ChristianKl, kerry-vaughan
comment by ChristianKl · 2021-10-01T20:57:24.764Z · LW(p) · GW(p)

I feel like the sort of thing ChristianKl pointed out is just a smart PR move given what people already think about Leverage, and, conditional on the assumption that Leverage is concerned about their reputation in EA, it says nothing about genuine intentions.

I didn't argue that it says something about good intentions. My main argument is that it's useful to cooperate with Leverage on releasing their techniques with the safety warnings that are warranted given past problems, instead of not doing so, which increases the chances that people will use the techniques in a way that messes them up.

I do consider belief reporting to be a very valuable invention, and I think it's plausible that this is true for more of what Leverage produced. I do see that a technique like belief reporting allows for scientific experiments that weren't possible before.

Information gathered from the experiments already run can quite plausibly help other people avoid harm when integrated into the starter kit that they develop.

comment by Kerry Vaughan (kerry-vaughan) · 2021-10-01T23:00:59.804Z · LW(p) · GW(p)

Instead, what I'd be curious to know is whether they have the integrity to be proactively transparent about past mistakes, radically changed course when it comes to potentially harmful practices, and refrain from using any potentially harmful practices in cases where it might be advantageous on a Machiavellian-consequentialist assessment.

I think skepticism about nice words without difficult-to-fake evidence is warranted, but I also think some of this evidence is already available.

For example, I think it's relatively easy to verify that Leverage is a radically different organization today. The costly investments we've made in history-of-science research provide the clearest example, as does the fact that we're no longer pursuing any new psychological research.

Replies from: Freyja
comment by Freyja · 2021-10-02T06:08:37.639Z · LW(p) · GW(p)

I think the fact that it is now a four-person remote organization doing mostly research on science, as opposed to an often-live-in organization with dozens of employees doing intimate psychological experiments as well as following various research paths, tells me that you are essentially a different organization; the only commonalities are the name and the fact that Geoff is still the leader.

comment by Vladimir_Nesov · 2021-09-28T20:43:25.563Z · LW(p) · GW(p)

If you hover over the karma counter, you can see that the comment is sitting at -2 with 12 votes, which means that there is a significant disagreement on how to judge it, not agreement that it should go away.

(It makes some sense to oppose somewhat useful things that aren't as useful as they should be, or as safe as they should be; I think that is the reason for this reaction. And then there is the harmful urge to punish people who don't punish others, or who might even dare suggest talking to them.)

comment by gjm · 2021-09-29T12:42:55.310Z · LW(p) · GW(p)

What are your personal connections, if any, to Leverage Research (either "1.0" or "2.0")?

Replies from: ooverthinkk
comment by ooverthinkk · 2021-09-29T23:51:22.979Z · LW(p) · GW(p)

I'd rather not say, for the sake of my anonymity - something which is important to me because this:

However, I would also like to note that Leverage 1.0 has historically been on the receiving end of substantial levels of bullying, harassment, needless cruelty, public ridicule, and more by people who were not engaged in any legitimate epistemic activity. I do not think this is OK. I intend to call out this behavior directly when I see it. I would ask that others do so as well.

is a real concern. I've seen it firsthand - people associated with Leverage being ostracized, bullied, and made to feel very unwelcome and uncomfortable at social events and in online spaces by people in nearby communities, including this one.

It seems like a real risk to me that any amount of personal information I give will be used to discover my identity, and I'll be subject to the same.


Which, by the way, is despicable, and I find it alarming that only one person [LW(p) · GW(p)] (besides Kerry) in this thread has acknowledged this behavior pattern.

I said in another comment that I didn't make an alt to come here and "defend Leverage" - this instance is the exception to that. These people are human beings.

(quote from Kerry's comment: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-facts-about-leverage-research-1-0-1?commentId=hqDXAtk6cnqDStkGC) [LW · GW]

(I'm aware that this comment is intense; @gjm that intensity is not intended to be directed at you, but at the situation as a whole.)

Replies from: gjm
comment by gjm · 2021-09-30T01:31:43.386Z · LW(p) · GW(p)

If people are being bullied, that's extremely bad, and if you see that and call it out you're doing a noble thing.

But all I've seen in this thread -- I can't comment on e.g. what happens in person in the Bay Area, since that's thousands of miles away from where I am -- is people saying negative things about Leverage Research itself and not about individuals associated with it, with the single exception of the person in charge of Leverage, who fairly credibly deserves some criticism if the negative things being said about the organization are correct.

Bullying people is cruel and harmful. I'm not so sure there's anything wrong with "bullying" an organization. Especially if that organization is doing harm, or if there is good reason to think it is likely to do harm in the future.

Replies from: pktechgirl, Vladimir_Nesov, ooverthinkk
comment by Elizabeth (pktechgirl) · 2021-10-04T04:50:48.535Z · LW(p) · GW(p)

I've seen someone from a different org, but with a similar valence in the community, get treated quite poorly at a party when they let their association be known. It was like the questioner stopped seeing them as a person with feelings and only treated them as an extension of the organization. I felt gross watching it and regret not saying anything at the time. 

It seems overwhelmingly likely to me that Leveragers faced the same thing, and also that some members lumped some legitimate criticisms or refusals to play along in with this unacceptable treatment, because that's a human thing to do. 

ETA: I talked to the person in question and they don't remember this, so apparently it made a bigger emotional impression on me than them (they remembered a different convo at the same event that seemed like the same kind of thing, but didn't report it being particularly unpleasant). I maintain that if I were regularly subject to what I saw it would have been quite painful, and imagine that to be true for at least some other people.

comment by Vladimir_Nesov · 2021-09-30T13:52:07.955Z · LW(p) · GW(p)

I'm not so sure there's anything wrong with "bullying" an organization.

There's a pragmatic question of building a reliable theory of what's going on, which requires access to the facts. Even a trivial inconvenience, for those who have the facts, in communicating them does serious damage to this process's ability to understand what's going on.

The most valuable facts are those that contradict the established narrative of the theory, they can actually be relevant for improving it, for there is no improvement without change. Seeing a narrative that contradicts the facts someone has is already disheartening, so everything else that could possibly be done to make sharing easier, and not make it harder, should absolutely be done.

comment by ooverthinkk · 2021-09-30T09:20:54.917Z · LW(p) · GW(p)

Yes, but imagine for a second that you worked at Leverage, and you're reading this thread (noting that I'd be surprised if several people from both 1.0 and 2.0 were not). Do you think that, whether they had a negative experience or a positive experience, they would feel comfortable commenting on that here?

(This is the relevant impact of the things mentioned in my previous comment.)

No. Of course not. Because the overpowering narrative in this thread, regardless of the goals or intentions of the OP, is "Leverage was/is a cult".

No one accused of being in a cult is going to come into the community of their accusers and say a word. Of course, with the exception of two people in 2.0 who have posted here, one of whom is a representative who has been accused of plotting to coerce and manipulate victims, and the other of whom has been falsely accused of trying to hide their identity in the thread.

And this is despite Leverage's efforts to become more legible and transparent.

If someone who worked there had negative experiences as a result, then, of course, they may not want to post publicly in an environment where the initiative that they once put their time, energy, and effort into is being so highly criticized, and in some cases, again, blatantly accused of being a literal cult or what I would call a "strawman's term" for a cult. They also may not want to air their concerns with their ex-employers in this public setting.

And on the other hand, if someone who worked there had positive experiences, they are left to watch as, once again, the discourse of this group disallows them from giving input without figuratively burning them at the stake for supporting something that they personally experienced and had no issue with.

And these are just the first few things that came to mind for me when considering why they may not be present in this conversation.

My main concern here is that this space doesn't allow them to speak AT ALL without serious repercussions, and that is caused by the pattern I mentioned in my comment above. Because of this, the discourse around Leverage Research on this thread (while there has still been new information exchanged, and I do not want to discount that) is doomed to be an echo chamber between people who are degrees (plural) away from whatever the truth may be.

This is my takeaway from this entire thread, and it's a shame.

(Sorry for using the words "of course" "accused"/"accusers" etc so frequently - I am tired.)

Replies from: gjm
comment by gjm · 2021-09-30T22:55:15.912Z · LW(p) · GW(p)

I don't know how comfortable any given person would feel commenting here. I do know that Kerry Vaughan, who is with Leverage now, has evidently felt comfortable enough to comment. I have no idea who you are but it seems fairly apparent that you have some association with Leverage, and you evidently feel comfortable enough to comment.

You say that one of those people (presumably meaning Kerry) "has been accused of plotting to coerce and manipulate victims". I can't find anywhere where anyone has made any such accusation. I can't find any instance of "coerce" or any of its forms other than in your comment above. I find two other instances of "manipulate" and related words; one is specifically about Geoff Anders (who so far as I know is not the same person as Kerry Vaughan) and the other is talking generally about psychological manipulation and doesn't make any accusations about specific people.

You say that the other person (presumably meaning you) "has been falsely accused of trying to hide their identity", but so far as I can make out you are openly trying to hide your identity (on the grounds that if people could tell who you are then you would be mistreated on account of being associated with Leverage).

(I have to say that I'm a bit confused by the anonymity thing. Are you concerned that if you were onymous then people "in real life" would read what you say here, realise that you're associated with Leverage, and mistreat you? Or that if you were onymous then people here would recognize your name, realise that you're associated with Leverage, and mistreat you? Or something else? The first would make sense only if "in real life" you were concealing whatever associations you have with Leverage, which I have to say would itself be a bit concerning; the second would make sense only if knowing your name would make people in this thread think you more closely associated with Leverage than they already do, and unless you're Literal Geoff Anders or something, that seems a little unlikely. And I'm not sure what "something else" might be.)

Saying that someone is in a cult (though I note that most people have been pretty careful not to use quite that terminology) isn't an accusation. Not at the person in question, anyway. For sure it's the sort of thing that many people will find uncomfortable. But what's uncomfortable here is the content of the claim itself, no? So what less-bullying thing would you prefer someone to do, if they are concerned that an organization other people around them might join is worryingly cult-like? Should they just not say anything, because saying "X is cult-like" is bullying? That policy means never being able to give warning to people who might be getting ensnared by an actual cult. What's the alternative?

Replies from: David Hornbein, ooverthinkk
comment by David Hornbein · 2021-09-30T23:26:03.929Z · LW(p) · GW(p)

No comment on your larger point but 

Saying that someone is in a cult (though I note that most people have been pretty careful not to use quite that terminology) isn't an accusation. Not at the person in question, anyway.

"You are in a cult" is absolutely an accusation directed at the person. I can understand moral reasons why someone might wish for a world in which people assigned blame differently, and technical reasons why this feature of the discourse makes purely descriptive discussions unhelpfully fraught, but none of that changes the empirical fact that "You are in a cult" functions as an accusation in practice, especially when delivered in a public forum. I expect you'll agree if you recall specific conversations-besides-this-one where you've heard someone claim that another participant is in a cult.

Replies from: gjm
comment by gjm · 2021-10-01T08:22:33.746Z · LW(p) · GW(p)

Maybe you're right. So, same question as for ooverthinkk: suppose you think some organization that people you know belong to is a cult, or has some of the same bad features as cults. What should you do?

(It seems to me that ooverthinkk feels that at least some of what is being said in this thread about Leverage is morally wrong, and I hope there's some underlying principle that's less overreaching than "never say that anything is cult-like" and less special-pleading than "never say bad things about Leverage" -- but I don't yet understand what that underlying principle is.)

Replies from: ooverthinkk
comment by ooverthinkk · 2021-10-03T17:09:54.747Z · LW(p) · GW(p)

(edit: moved to the correct reply area)

comment by ooverthinkk · 2021-10-03T17:21:51.684Z · LW(p) · GW(p)

The first person was Larissa, the second person was Kerry.

The "anonymity thing" does not fall under the first category. I'd just prefer, as I stated before, not to be targeted "in real life" for my views on this thread.

The "bullying" that I'm referring to happened/happens outside of this thread, and is in no way limited to instances of people being accused of being "in a cult".

Replies from: gjm
comment by gjm · 2021-10-03T19:10:13.449Z · LW(p) · GW(p)

D'oh! I'd forgotten that Larissa had commented here too. My apologies.

As I've said, I have no knowledge of any bullying that may or may not be occurring elsewhere (especially in person in the Bay Area), and if anyone's getting bullied then that's bad. If that isn't common knowledge, then there's a problem. But the things in this thread that you've taken exception to don't seem to me to come close to bullying. (Obviously, though, they could be part of a general pattern of excessive hostility to all things Leverage.)

Do you think OP was wrong to post what they did? If so, is that because you think the things they've said about Leverage are factually wrong, or because you think people who think they see an organization behaving in potentially harmful ways shouldn't say so, or what?

comment by lsusr · 2021-09-28T00:13:10.732Z · LW(p) · GW(p)

If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at larissa@leverageresearch.org.

Bullshit. This is not how you prevent abuse of power. This is how you cover it up.

Replies from: ooverthinkk
comment by ooverthinkk · 2021-09-28T19:00:14.595Z · LW(p) · GW(p)

Have you even read the default comment guidelines? Hint: they're right below where you're typing.

For your reference:

Default comment guidelines:

  • Aim to explain, not persuade
  • Try to offer concrete models and predictions
  • If you disagree, try getting curious about what your partner is thinking
  • Don't be afraid to say 'oops' and change your mind
Replies from: Viliam, Vladimir_Nesov
comment by Viliam · 2021-09-29T18:59:53.085Z · LW(p) · GW(p)

Let's use some common sense here, please. If - hypothetically speaking - some organization abuses people, what is the most likely consequence if the victim e-mails their PR person in confidence?

My model says, the PR person will start working on a story that protects the organization, with the advantage that the PR person can publish their version before the victim does. (There are also other options, such as threatening the victim, which wouldn't be available if the victim told their story to someone else first.)

comment by Vladimir_Nesov · 2021-09-28T21:37:53.163Z · LW(p) · GW(p)

The content of comment guidelines is not a reason to follow them.

comment by lsusr · 2021-09-28T00:19:14.976Z · LW(p) · GW(p)

These concerns are widely shared for research in the hard sciences (e.g., nuclear physics), but are valid as well for the social sciences.

Social science infohazards are not a thing, because they must be implemented by an organization to work, and organizations leak like a sieve [? · GW]. Even nuclear secrets leak. This demand for secrecy is a blatant excuse used to obstruct oversight and to prevent peer review. What you're doing is the opposite of science.

Replies from: kerry-vaughan
comment by Kerry Vaughan (kerry-vaughan) · 2021-09-28T01:05:06.745Z · LW(p) · GW(p)

This demand for secrecy is a blatant excuse used to obstruct oversight and to prevent peer review. What you're doing is the opposite of science.

Interestingly, "peer review" occurs pretty late in the development of scientific culture. It's not something we see in our case studies on early electricity, for example, which currently cover the period between 1600 and 1820. 

What we do see throughout this history is the norm of researchers sharing their findings with others interested in the same topics. It's an open question whether Leverage 1.0 violated this norm. On the one hand, they had a quite vibrant and open culture around their findings internally and did seek out others who might have something to offer to their project. On the other hand, they certainly didn't make any of this easily accessible to outsiders. I'm inclined to think they violated some scientific norms in this regard, but I think the work they were doing is pretty clearly science, albeit early-stage science.

Replies from: lsusr
comment by lsusr · 2021-09-28T03:42:10.299Z · LW(p) · GW(p)

I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. [LW(p) · GW(p)] "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.

If "it's not unscientific because it merely takes science back 200-400 years" is the best defense that LEVERAGE ITSELF can give for its own epistemic standards then any claims it has to scientific rigor are laughable. 1600 was the time of William Shakespeare.

Edit: I'm not saying that science in 1600 was laughable. I'm saying that performing 1600-style science today is laughable.

Replies from: kerry-vaughan, tslarm, ChristianKl, Raemon
comment by Kerry Vaughan (kerry-vaughan) · 2021-09-28T14:11:43.154Z · LW(p) · GW(p)

I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. [LW(p) · GW(p)] "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.

I'm not hiding my connection to Leverage which is why I used my real name, mentioned that I work at Leverage in other comments, and used "we" in connection with a link to Leverage's case studies. I used "they" to refer to Leverage 1.0 since I didn't work at Leverage during that time.

comment by tslarm · 2021-09-28T09:31:37.898Z · LW(p) · GW(p)

I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. [LW(p) · GW(p)] "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.

To be fair, KV was open about that association in both previous comments, using 'we' in the first and including this disclaimer in the second --

(I currently work at Leverage research but did not work at Leverage during Leverage 1.0 (although I interacted with Leverage 1.0 and know many of the people involved). Before working at Leverage I did EA community building at CEA between Summer 2014 and early 2019.)

-- which also seems to explain the use of 'they' in KV's third comment, which referred specifically to "Leverage 1.0".

(I hope this goes without saying on LW, but I don't mean this as a general defense of Leverage or of KV's opinions. I know nothing about either beyond what I've read here, and I haven't even read all the relevant comments. Personally I wouldn't get involved with an organisation like Leverage.)

comment by ChristianKl · 2021-09-29T13:12:28.480Z · LW(p) · GW(p)

The problem is that current academic standards lead to fields like psychology being very unproductive.

Experimenting with going back to scientific norms from before the great stagnation is one way to work toward scientific progress.

comment by Raemon · 2021-09-28T10:10:37.400Z · LW(p) · GW(p)

(This account [LW · GW] is the same Kerry btw; my guess is Kerry happened to try logging in with Google, which doesn't actually connect to existing accounts)

Replies from: kerry-vaughan
comment by Kerry Vaughan (kerry-vaughan) · 2021-09-28T14:08:15.040Z · LW(p) · GW(p)

I don't think that's my account actually. It's entirely possible that I never created a LW account before now.

comment by ChristianKl · 2021-09-24T15:01:24.781Z · LW(p) · GW(p)

When I hear that a few people within Leverage ended up with serious negative consequences because of charting, it's unclear to me from the outside what that means.

It's my understanding that Leverage did a lot of experiments. It could be that some experiments ended up messing up some of the participants. It could also be that "normal charting", without doing any experiments, messed people up.

Replies from: BayAreaHuman
comment by BayAreaHuman · 2021-09-26T17:14:49.503Z · LW(p) · GW(p)

I would offer that "normal charting" as offered to external clients was being done in a different incentive landscape than "normal charting" as conducted on trainees within the organization. I mean both incentives on the trainer, and incentives on the trainee.

Concretely, incentives-wise:

  • The organization has an interest in ensuring that the trainee updates their mind and beliefs to accord with what the organization thinks is right/good/true, what the organization thinks makes a person "effective", and what the organization needs from the member.
  • The trainee may reasonably believe they could be de-funded, or at least reduced in status/power in the org, if they do not go along.
comment by polyphony (tyleralterman) · 2021-10-05T15:38:51.952Z · LW(p) · GW(p)

Hi all,

During my years in the Bay I spent some of my time as an employee of Paradigm, a Leverage 1.0 affiliate. I also spent a good amount of time living and hanging out at the Leverage house/offices.

I'm writing here from a coffeeshop in Berlin because...why? I think because I get frustrated by the balance of coverage that Leverage gets. When I consider what sorts of things produce value, they tend to start off being very high-variance. They tend to have very weird-seeming histories.

For instance: Whispers have it that – before AI X-Risk was a respectable, well-known cause backed by people like Elon Musk – a high school drop-out named "Eliezer Yudkowsky" wrote a Harry Potter fan-fiction to bring hundreds of people into a rationalist movement that might someday save the world from runaway algorithms. Did you know Trump-supporter Peter Thiel was an early funder of one of its main organizations?! Did you know that many rationalists have become affiliated with Neoreaction, an alt-right group with members that support authoritarianism?!! Don't get me started on a different now-respectable org – one staffed by many rationalists – that bootstrapped itself in part through astroturfing and which maintains strict information management policies (what are they hiding???).

I speak above of causes and orgs (ones that I like, by the way) that have since done a decent job of PR...to the point where most people don't know about their strange early history. Leverage 1.0, on the other hand, did a crap job of PR, and so clearly biased "Facts" posts I've seen on here do more damage. They do damage to the reputations of organizations that were – on net, in my opinion – worthwhile endeavors, and to people who continue to be generative (like Geoff, who, full disclaimer, is my friend).

That all said, I don't dispute most of what BayAreaHuman has mentioned above. I'm reacting more to "Facts" posts that read like gossip magazine content but with the stylistic manner of rationalist objectivity. I.e., posts that seem intent on curating bullet-points about Leverage that appear to me to be hand-selected to be off-putting, despite an overall tone and structural properties of objectivity. (My straw-man imitation: "Look, I'm not saying LessWrong is a cult. I'm just stating several facts showing a similarity to 'high-demand groups.' Wink wink.")

For my part, Leverage 1.0 harmed me in some ways. It helped me in some ways. The result was, upon reflection, quite net positive. I'd be happy to say more after eating some dinner.

Replies from: tyleralterman, tyleralterman
comment by polyphony (tyleralterman) · 2021-10-13T09:34:11.915Z · LW(p) · GW(p)

Follow-up: I wanted to acknowledge that some other people who spent time at Leverage had much worse experiences than I did. I don't want to downplay that. My experience may have been unique since I focused on building an external company and since my social circle in the Bay Area was mostly non-Leveragers. 

All that said, I still stand by what I wrote above. I was reacting mainly to the original post wearing a guise of objectivity. I think I would have no gripe with it if the title was, "I have beef with Leverage and so here are some biasing facts I'd like to highlight about them" – though, to be fair, that's a really long title, and also I could be projecting.

Replies from: Ruby
comment by Ruby · 2021-10-13T22:14:24.872Z · LW(p) · GW(p)

I think Leverage is worthy of deep criticisms (and thought so even before yesterday's Medium post) but also what you say about "guise of objectivity" is something that bothered me about this post and I'm glad you voiced it.

comment by polyphony (tyleralterman) · 2021-10-05T15:51:15.046Z · LW(p) · GW(p)

oh ps, I'm sure this has already been mentioned in the 100+ comments I haven't read, but it's weird to call Leverage a "high-demand" group since – during my time there – people were regularly complaining about basically having too much freedom. I can't remember a single day there that anyone demanded I do anything, in the way a manager makes demands of employees or a guru makes demands of disciples. (Actually there might have been a few – e.g., there was a mandatory meeting where we were all told to install a good password manager so we wouldn't get hacked. But often people missed these "mandatory" meetings.) Most days I just did whatever I wanted. Often people felt like they were floating and wanted *more* directives.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-13T23:19:43.236Z · LW(p) · GW(p)

My current model is that this changed around 2017 or so. At least my sense was that people from before that time often had tons of side-projects and free time, and afterwards people seemed to have really full schedules and were constantly busy.

Replies from: tyleralterman
comment by polyphony (tyleralterman) · 2021-10-17T12:28:53.829Z · LW(p) · GW(p)

Oh you might be right. I think around 2017 was when the overall thing started to separate into subgroups, some of which I remember having stronger requirements (eg do one presentation every X weeks or something). Around that time I was off doing Reserve, which largely got started in New York, and wasn't so in touch with the rest of the "ecosystem" in the Bay Area. OK, yeah, I think this makes me not a reliable commentator on the 2017-on period. 

Replies from: tyleralterman
comment by polyphony (tyleralterman) · 2021-10-17T12:36:15.822Z · LW(p) · GW(p)

Maybe one thing worth mentioning on this: If my memory serves, Reserve was founded with the goal of funding existential risk work. This included funding the Leverage ecosystem (since many of the members thought of themselves as working on solutions that would help with X-Risk) but would have also included other people and orgs.

comment by LarissaRowe · 2021-10-01T20:07:56.894Z · LW(p) · GW(p)

Update from Leverage Research: a reminder about our AMA & other ways to get updates
For anyone in this thread who still has questions about Leverage Research, I just wanted to remind you about the AMA we are running at our virtual office tomorrow (Saturday, October 2, at 12 PM PT). 

The event is open to anyone interested in our work and is designed to allow people to ask questions about our history, current work, and future plans. See this comment for further details. [LW(p) · GW(p)]

Beyond that, we're currently exploring different ways to ensure we hear from people who were part of the Leverage 1.0 ecosystem about their experiences, especially before we release some of our psychology tools and as we write our FAQ on our history (see this post for more details on these two initiatives [LW · GW]). This includes looking into neutral third-party moderators and ways of gathering anonymous feedback. If you want to stay up to date on the steps we're taking, or our current work in general, subscribe to our quarterly newsletter or follow us on Twitter.

comment by lsusr · 2021-09-26T07:13:59.905Z · LW(p) · GW(p)

How do you say "this is a cult" without literally saying the words "this is a cult"? (In the common colloquial sense of the word "cult", as opposed to the historical academic sense of the word.)

I'd never heard of this organization until now, and I'd be happy never to hear about them in the future. (This isn't a criticism of OP.)

Replies from: Viliam
comment by Viliam · 2021-09-27T20:53:38.801Z · LW(p) · GW(p)

Once I wrote an article [LW · GW] about how to unpack "cult" into eight more specific behaviors. It wasn't received well. Ironically, one of the objections was that this would also classify Leverage as a cult. ¯\_(ツ)_/¯

Replies from: ChristianKl, lsusr
comment by ChristianKl · 2021-09-28T12:00:22.287Z · LW(p) · GW(p)

No, that was not the objection. My main point was about you asserting a bad binary classification frame. I made no assertion that Leverage fulfilled all the criteria. 

It was rather the opposite. If Leverage did indeed fulfill all the criteria, then a binary classification of them as a cult wouldn't be a problem.

comment by lsusr · 2021-09-27T21:58:06.371Z · LW(p) · GW(p)

Hahaha!

comment by rhys_lindmark · 2021-09-27T03:55:01.107Z · LW(p) · GW(p)

It's been helpful for me to think of three signs of a cult:

1. Transformational experiences that can only be gotten through The Way.

2. Leader has Ring of Power that gives access to The Way.

3. Clear In-Group vs. Out-Group dynamics to achieve The Way.

Leverage 1.0 seems to have all three:

1. Transformational experiences through charting and bodywork.

2. Geoff as Powerful Theorist.

3. In-group with Leverage as "only powerful group".

Given this, I'm most curious about what Geoff has done to reflect/improve and what the ~rationalist community would want to see from him in order to "trust" him again.

Fwiw, I know very little about Leverage. I've never met Geoff but I have heard a few negative stories about emotional ~trauma from some folks there.

Cult Checklist taken from here: https://rebelwisdom.co.uk/18-film-content/general/605-how-to-spot-a-cult-jamie-wheal

Replies from: ChristianKl
comment by ChristianKl · 2021-09-27T08:19:35.434Z · LW(p) · GW(p)

Just because an organization provides transformational experiences doesn't necessarily mean that there's a belief that only the techniques of the organization can provide those experiences. 

If you, for example, ask the Dalai Lama about Christianity, the Dalai Lama will grant that it provides transformational experiences for some people and might be good for some people. That's very different from Scientology, which claims that everything Hubbard didn't develop doesn't really provide transformational change.

Replies from: Viliam
comment by Viliam · 2021-09-29T19:41:57.230Z · LW(p) · GW(p)

The narrative within the group was that they were the only organization with a plan that could possibly work, and the only real shot at saving the world; that there could be no possibility of success at one's goal of saving the world outside the organization.

Many in the group felt that Geoff was among the best and most powerful "theorists" in the world. Geoff's power and prowess as leader was a central theme.

Is this also very different from Scientology?

Replies from: ChristianKl
comment by ChristianKl · 2021-09-29T20:08:43.220Z · LW(p) · GW(p)

In high-level Scientology, saving the world means auditing a lot of thetans in your body so that all thetans can be free. Generally, there's a belief that saving the world can only be achieved via Scientology's methods, because it's inherently about doing auditing on a lot of people.

Leverage, on the other hand, does not advocate that everyone has to do Leverage techniques to be saved, and I expect that I would find the outcome Geoff wants to bring about desirable.

There is a huge difference between claiming that someone is among the best and claiming that someone is the only person who understands how things work, such that everything else is foreign tech to be shunned. If you say someone is among the best, that acknowledges that there are other people worth learning from.

The strategy of Leverage involved doing things like holding EA Global, which is about Leverage having part of its impact through helping other organizations. There's no "you are either with us or against us" vibe, but rather a willingness to interact freely with other organizations and to help them without their needing to buy all of Geoff's beliefs.

comment by Kerry Vaughan (kerry-vaughan) · 2021-09-28T00:55:57.788Z · LW(p) · GW(p)

I think the way the term cult (or euphemisms like “high-demand group”) has been used by the OP and by many commenters in this thread is extremely unhelpful and, I suspect, not in keeping with the epistemic standards of this community.

At its core, labeling a group as a cult is an out-grouping power move used to distance the audience from that group’s perspective. You don’t need to understand their thoughts, explain their behavior, form a judgment on their merits. They’re a cult. 

This might be easier to see when you consider how, from an outside perspective, many behaviors of the Rationality community that are, in fact, fine might seem cultish. Consider, for example, the numerous group houses, hero-worship of Eliezer, the tendency among Rationalists to hang out only with other Rationalists, the literal take-over-the-world plan (AI), the prevalence of unusual psychological techniques (e.g., rationality training, circling), and the large number of other unusual cultural practices that are common in this community. To the outside world, these are cult-like behaviors. They do not seem cultish to Rationalists because the Rationality community is a well-liked ingroup and not a distrusted outgroup.

My understanding is that historically the Rationality community has had some difficulty in protecting itself from parasitic bad actors who have used their affiliation with this community to cause serious harm to others. Given that context, I understand why revisiting the topic of early Leverage might be compelling. I would suggest that the cult/no cult question will not be helpful here because the answer depends so largely on whether people liked or didn’t like Leverage. I think past events should demonstrate that this is not a reliable indicator of parasitic bad actors.

Some questions I would ask instead include: Did this group represent that they were affiliated with Rationality in order to achieve their ends? If so, did they engage in activities that are contrary to the norms of the Rationality community? Were people harmed by this group? If so, was that harm abnormal given the social context? Was that harm individual or institutional? Did those involved act responsibly given the circumstances? Etc. 

Given my knowledge of Leverage 1.0 and my knowledge of the Rationality community, I am quite confident that Leverage was not the parasitic bad actor that you are looking for, but I think this is something the Rationality community should determine for itself and this seems like a fine time to do so.

However, I would also like to note that Leverage 1.0 has historically been on the receiving end of substantial levels of bullying, harassment, needless cruelty, public ridicule, and more by people who were not engaged in any legitimate epistemic activity. I do not think this is OK. I intend to call out this behavior directly when I see it. I would ask that others do so as well.

(I currently work at Leverage research but did not work at Leverage during Leverage 1.0 (although I interacted with Leverage 1.0 and know many of the people involved). Before working at Leverage I did EA community building at CEA between Summer 2014 and early 2019.)

Replies from: tcheasdfjkl, Aella, lsusr
comment by tcheasdfjkl · 2021-09-28T08:09:35.067Z · LW(p) · GW(p)

This might be easier to see when you consider how, from an outside perspective, many behaviors of the Rationality community that are, in fact, fine might seem cultish. Consider, for example, the numerous group houses, hero-worship of Eliezer, the tendency among Rationalists to hang out only with other Rationalists, the literal take-over-the-world plan (AI), the prevalence of unusual psychological techniques (e.g., rationality training, circling), and the large number of other unusual cultural practices that are common in this community. To the outside world, these are cult-like behaviors. They do not seem cultish to Rationalists because the Rationality community is a well-liked ingroup and not a distrusted outgroup.

I think there's actually been a whole lot of discourse and thought about Are Rationalists A Cult, focusing on some of this same stuff? I think the most reasonable and true answers to this are generally along the lines of "the word 'cult' bundles together some weird but neutral stuff and some legitimately concerning stuff and some actually horrifying stuff, and rationalists-as-a-whole do some of the weird neutral stuff and occasionally (possibly more often than population baseline but not actually that often) veer into the legitimately concerning stuff and do not really do the actually horrifying stuff". This post, as I read it, is making the case that Leverage veered far more strongly into the "legitimately concerning" region of cult-adjacent space, and perhaps made contact with "actually horrifying"-space.

Notably, out of your examples, some are actually bad imo? "Hero-worship of Eliezer" is imo bad, and also happily is not really much of a thing in at least the parts of ratspace I hang out in; "the tendency of rationalists to hang out with only other rationalists" is, I think, also not great, and if taken to an extreme would be a pretty worrying sign, but in fact most rationalists I know do maintain social ties (including close ones) outside this group.

Unusual rationalist psychological techniques span a pretty wide range, and I have sometimes heard descriptions of such techniques/practices/dynamics and been wary or alarmed, and talked to other rationalists who had similar reactions (which I say not to invoke the authority of an invisible crowd that agrees with me but to note that rationalists do sometimes have negative "immune" responses to practices invented by other rationalists even if they're not associated with a specific disliked subgroup). Sort of similarly re: "take over the world plan", I do not really know enough about any specific person or group's AI-related aspirations to say how fair a summary that is, but... I think the more a fair summary it is, the more potentially worrying that is?

Which is to say, I do think that there are pretty neutral aspects of rationalist community (the group houses, the weird ingroup jargon, the enthusiasm for making everything a ritual) that may trip people's "this makes me think of cults" flag but are not actually worrying, but I don't think this means that rationalists should turn off their, uh, cult-detectors? Central-examples-of-cults do actually cause harm, and we do actually want to avoid those failure modes.

Replies from: Viliam
comment by Viliam · 2021-09-28T21:41:45.991Z · LW(p) · GW(p)

There is a huge difference between "tendency to hang out with other Rationalists" and having mandatory therapy sessions with your supervisor or having to ask for permission to write a personal blog.

comment by Aella · 2021-09-28T23:09:33.027Z · LW(p) · GW(p)

Yeah, 'cult' is a vague term often overused. Yeah, a lot of rationality norms can be viewed as cultish. 

How would you suggest referring to an 'actual' cult - or, if you prefer not to use that term at all, how would you suggest we describe something like Scientology or NXIVM? Obviously those are quite extreme, but I'm wondering if there is 'any' degree of group-controlling traits that you would be comfortable assigning the word cult to? Or if I refer to Scientology as a cult, do you consider this an out-grouping power move used to distance people from Scientology's perspective?

Replies from: DanielFilan
comment by DanielFilan · 2021-10-13T19:03:13.926Z · LW(p) · GW(p)

This strikes me as an obviously good question and I'm surprised it hasn't been answered.

comment by lsusr · 2021-09-28T04:00:20.151Z · LW(p) · GW(p)

I think the way the term cult (or euphemisms like “high-demand group”) has been used by the OP and by many commenters in this thread is extremely unhelpful and, I suspect, not in keeping with the epistemic standards of this community.

No. As demonstrated by this comment by Viliam [LW(p) · GW(p)], the word "cult" refers to a well-defined set of practices used to break people's ability to think rationally. Leverage does not deny using these practices. To the contrary, it appears flagrantly indifferent [LW(p) · GW(p)] to the abuse potential. Cult techniques of brainwashing are an attractor of human social behavior. Eliezer Yudkowsky warned about this attractor. [LW · GW] Your attempt to redefine cult more broadly is a signal you're bullshitting us [LW · GW].

Replies from: TAG, ChristianKl, ooverthinkk
comment by TAG · 2021-09-28T15:58:29.829Z · LW(p) · GW(p)

It's useful to be able to conceptualise something that is 50% or 90% of the way to becoming a cult, because then you can jump off.

comment by ChristianKl · 2021-09-28T12:01:37.015Z · LW(p) · GW(p)

Leverage is not doing everything that Viliam described in his post.

Your mind belongs to the group: In the description above there's no mention of people needing to confess sins.

A sacred science: Leverage's intellectual environment did allow for doubts.

Map over the territory: There's no assertion of that in the common-knowledge facts, and I doubt it's true of Leverage.

Replies from: Viliam
comment by Viliam · 2021-09-28T21:26:33.525Z · LW(p) · GW(p)

Your mind belongs to the group: In the description above there's no mention of people needing to confess sins.

They call it "Belief Reporting"; it's described in one of the documents that were removed from the Internet Archive. The members are (were?) supposed to do it regularly with their manager. That is like "auditing" in Scientology, except instead of using an e-meter they rely on nerds being pathologically honest.

Replies from: ChristianKl, tcheasdfjkl
comment by ChristianKl · 2021-09-29T08:13:22.908Z · LW(p) · GW(p)

There's no inherent need in belief reporting to confess having violated rules or committed sins.

It's a debugging technique, and while you can use any debugging technique to debug someone having committed sins, no one here with closer information about Leverage has charged that they did that.

Scientology actually does force people to confess sins when they commit what it considers ethics violations (Scientology calls its code of conduct "ethics").

Anyone involved in Scientology would easily classify what Scientology does as including a need to confess sins. On the other hand, that's far from how the participants in belief reporting sessions at Leverage likely thought about it. At the moment there's no source saying that anybody at Leverage got the impression that this is what happened to them.

It's quite toxic for rational discussion to make those accusations instead of focusing on the facts that are actually out in the open.

comment by tcheasdfjkl · 2021-09-29T05:03:40.313Z · LW(p) · GW(p)

What's the content of belief reporting?

Replies from: ChristianKl
comment by ChristianKl · 2021-09-29T08:37:17.126Z · LW(p) · GW(p)

I learned belief reporting from a person who attended a Leverage workshop and haven't had any direct face-to-face exposure to Leverage.

Belief reporting is a debugging technique. You have a personal issue you want to address. Then you look at related beliefs. 

Leverage found that if someone sets an intention of "I will tell the truth" and then speaks a belief aloud, like "I'm a capable person", and they don't believe it (at a deep level), they will feel a physical sensation of resistance.

Afterwards, there's an attempt to trace the belief to its roots. The person can then speak aloud various forms of "I'm not a capable person because X" and "I'm not a capable person because Y". The process is then applied recursively to seek out the root. Often that uncovers some confusion at the base of the belief, and after the confusion has been uncovered it's possible to work back up the tree, get rid of the "I'm not a capable person" belief, and switch it to "I'm a capable person".

This often leads to discovering that one holds beliefs at a deep level that one's system II considers silly, but that still form the base of other beliefs and affect one's actions.

Replies from: Viliam, tcheasdfjkl
comment by Viliam · 2021-09-29T19:33:46.648Z · LW(p) · GW(p)

Thanks for the description!

In my opinion, this sounds interesting as a confidential voluntary therapy, but Orwellian when:

Members who were on payroll were expected to undergo charting/debugging sessions with a supervisory "trainer", and to "train" other members. The role of trainer is something like "manager + therapist": that is, both "is evaluating your job performance" and "is doing therapy on you".

So, your supervisor is debugging your beliefs, possibly related to your job performance, and you are supposed to not only tell the truth, but also "seek for the root"... and yet, in your opinion, this does not imply a need to "confess having violated rules or committed sins"?

What exactly happens when you start having doubts about the organization or the leader, and as a result your job performance drops, and then you are having the session with your manager? Would you admit, truthfully, "you know, recently I started having some doubts about whether we are really doing our best to improve the world, or just using the effective altruist community as a fishing pond for people who are idealistic and willing to sacrifice... and I guess these thoughts distract me from my tasks", and then your therapist/manager is going to say... what?

Replies from: ChristianKl
comment by ChristianKl · 2021-09-29T20:37:04.908Z · LW(p) · GW(p)

Nothing written above suggests that doubt about central strategy would have been seen as a sin, especially when it isn't necessarily system-II-endorsed. It's my understanding that talking about the theories of change through which Leverage was going to have an effect on the world was one of the main activities Leverage engaged in.

Besides, the word "sin" generally refers to actions that violate the norms of an organization. In the Scientology context, for example, it's a sin to watch a documentary about Scientology on normal TV. In Christianity, masturbation would be a sin.

Leverage doesn't have a similar code of conduct that declares certain actions to be sins that must be confessed.

Role conflicts between being a manager and a therapist can easily produce problems, but analysing them through the frame of "confessing sins" is not a useful lens for thinking coherently about the problems involved.

comment by tcheasdfjkl · 2021-09-29T18:25:09.274Z · LW(p) · GW(p)

Interesting, thanks!

comment by ooverthinkk · 2021-09-28T19:28:33.032Z · LW(p) · GW(p)

You missed the part where this person was pointing out that there is Deliberately Vague Language used by the OP. Imo, this language doesn't create enough of a structure for commenters to construct an adequate dialogue about several sub-topics in this thread.

Also, what's "flagrantly indifferent" about Larissa wanting to hear out people who feel wronged?

You seem to be quite upset by all of this; why not reach out and let her know?

Replies from: cousin_it
comment by cousin_it · 2021-09-28T21:05:15.313Z · LW(p) · GW(p)

You seem to be quite upset by all of this

Nah, he's alright. If someone calls a cult a cult, that's not a reason to call them upset. Plus, he writes about plenty of other things; you're the one with the new account made only to defend Leverage.

Replies from: Vladimir_Nesov, ooverthinkk
comment by Vladimir_Nesov · 2021-09-28T21:59:47.623Z · LW(p) · GW(p)

you're the one with the new account made only to defend Leverage

The social pressure against defending Leverage is in the air, so anonymity shouldn't be held against someone who does that; it's already bad enough that there is a reason for anonymity.

comment by ooverthinkk · 2021-09-28T21:17:58.362Z · LW(p) · GW(p)

If questioning the "rationality" of the discourse is defending them, then what do you suppose you're doing?

I just don't see the goals or values of this community reflected here, and it confuses me. That's why I made this account - to get clarity on what seems to me to be a total anomaly in how rationalist community members (at least as far as signaling goes, I guess) conduct themselves.

Because I've only seen what is classifiable as a hysterical response to this topic, the Leverage topic.