Zoe Curzi's Experience with Leverage Research

post by Ilverin the Stupid and Offensive (Ilverin) · 2021-10-13T04:44:49.020Z · LW · GW · 261 comments

This is a link post for https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b


261 comments

Comments sorted by top scores.

comment by RyanCarey · 2021-10-13T11:36:49.041Z · LW(p) · GW(p)

Thanks for your courage, Zoe!

Personally, I've tried to maintain anonymity in online discussion of this topic for years. I dipped my toe into openly commenting last week [LW(p) · GW(p)], and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post". Firstly, I very much don't appreciate my ability to maintain anonymity being narrowed like this. Rather, anonymity is a helpful defense in any sensitive online discussion, not least this one. But yes, throwaway/anonymoose is me - I posted anonymously so as to avoid adverse consequences from friends who got more involved than me. But I'm not throwaway2,  anonymous, or BayAreaHuman - those three are bringing evidence that is independent from me at least.

I only visited Leverage for a couple months, back in 2014. One thing that resonated strongly with me about your post is that the discussion is badly confused by lack of public knowledge and strong narratives, about whether people are too harsh on Leverage, what biases one might have, and so on. This is why I think we often retreat to just stating "basic" [EA · GW] or "common knowledge" [LW · GW] facts; the facts cut through the spin.

Continuing in that spirit, I personally can attest that much of what you have said is true, and the rest congruent with the picture I built up there. They dogmatically viewed human nature as nearly arbitrarily changeable. Their plan was to study how to change their psychology, to turn themselves into Elon Musk type figures, to take over the world. This was going to work because Geoff was a legendary theoriser, Connection Theory had "solved psychology", and the resulting debugging tools were exceptionally powerful. People "worked" for ~80 hours a week - which demonstrated the power of their productivity coaching.

Power asymmetries and insularity were present to at least some degree. I personally didn't encounter an NDA, or talk of "demons" etc. Nor did I get a solid impression of the psychological effects on people from that short stay, though of course there must have been some.

What's frustrating about still hearing noisy debate on this topic, so many years later, is that Leverage being a really bad org seems overdetermined at this point. On the one hand, if I ranked MIRI, CFAR, CEA, FHI, and several startups I've visited, in terms of how reality-distorting they can be, Leverage would score ~9, while no other would surpass ~7. (It manages to be nontransparent and cultlike in other ways too!).  While on the other hand, their productive output was... also like a 2/10? It's indefensible. But still only a fraction of the relevant information is in the open.

As you say, it'll take time for people to build common understanding, and to come to terms with what went down. I hope the cover you've offered will lead some others to feel comfortable sharing their experiences, to help advance that process.

Replies from: Linch, f____, Evan_Gaensbauer
comment by Linch · 2021-10-13T23:23:09.885Z · LW(p) · GW(p)

What's frustrating about still hearing noisy debate on this topic, so many years later, is that Leverage being a really bad org seems overdetermined at this point. On the one hand, if I ranked MIRI, CFAR, CEA, FHI, and several startups I've visited, in terms of how reality-distorting they can be, Leverage would score ~9, while no other would surpass ~7. (It manages to be nontransparent and cultlike in other ways too!).  While on the other hand, their productive output was... also like a 2/10? It's indefensible. But still only a fraction of the relevant information is in the open.

One thing to note is that if you "read the room" instead of only looking at the explicit arguments, it's noticeable that a lot of people left Leverage and the new org ("Leverage 2.0") completely switched research directions, which to me seems like a tacit acknowledgement that their old methods etc. weren't as good.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-14T09:19:31.403Z · LW(p) · GW(p)

As far as people leaving organizations goes, I'd love to have good data for MIRI, CFAR, CEA, and FHI.

Replies from: habryka4, Duncan_Sabien, ozziegooen
comment by habryka (habryka4) · 2021-10-14T23:24:36.693Z · LW(p) · GW(p)

I think I could write down a full history of employment for all of these orgs (except maybe FHI, which I've kept fewer tabs on), in an hour or two of effort. It's somewhat costly for me (in terms of time), but if lots of people are interested, I would be happy to do it.

Replies from: David Hornbein, kohaku-none, NicholasKross
comment by David Hornbein · 2021-10-15T00:38:12.925Z · LW(p) · GW(p)

I'm personally interested, and also I think having information like this collected in one place makes it much easier for everyone to understand the history and shape of the movement. IMO an employment history of those orgs would make for a very valuable top-level post.

comment by Nicholas / Heather Kross (NicholasKross) · 2022-06-17T03:20:12.334Z · LW(p) · GW(p)

I would like to read this very much, as I want to go into technical AI alignment work and such a document would be very helpful.

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-20T06:09:02.344Z · LW(p) · GW(p)

Full-time at CFAR in Oct 2015 when Pete Michaud and I arrived:

Anna Salamon, Val Smith, Kenzi Amodei, Julia Galef, Dan Keys, Davis Kingsley

 

Full-time at one point or another during my tenure:

Morgan Davis, Renshin Lee, Harmanas Chopra, Adom Hartell, Lyra Sancetta

(Kenzi, Julia, Davis, and Val all left while I was there, in that order.)

 

Notable part-timers (e.g. welcome at CFAR's weekly colloquium):

Steph Zolayvar, Qiaochu Yuan, Gail Hernandez

 

At CFAR in Oct 2018 when I left:

Anna Salamon (part time), Tim Telleen-Lawton, Dan Keys, Jack Carroll, Elizabeth Garrett, Adam Scholl, Luke Raskopf, Eli Tyre (part time), Logan Strohl (part time)

 

... I may have missed an Important Person or two but that's a decent initial sketch of those three years.

Replies from: elityre
comment by Eli Tyre (elityre) · 2021-10-20T06:41:39.085Z · LW(p) · GW(p)

I think I should also be in the list of notable part-timers?

 

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-20T16:06:43.216Z · LW(p) · GW(p)

You're listed as part time at CFAR when I left.

Replies from: elityre
comment by Eli Tyre (elityre) · 2021-10-21T06:05:39.290Z · LW(p) · GW(p)

I guess I don't understand your categories. I would guess that I should be on both sub-lists. [shrug]

comment by ozziegooen · 2021-10-14T14:42:45.482Z · LW(p) · GW(p)

As someone who's been close to these orgs: some had a few related issues, but Leverage seemed much more extreme along many of these dimensions to me.

However, now there are like 50 small EA/rationalist groups out there, and I am legitimately worried about quality control.

Replies from: Viliam, ChristianKl
comment by Viliam · 2021-10-15T08:03:44.210Z · LW(p) · GW(p)

I generally worry about all kinds of potential bad actors associating themselves with EA/rationalists.

There seems to be a general pattern where new people come to an EA/LW/ACX/whatever meetup or seminar, trusting the community, and there they meet someone who abuses this trust and tries to extract free work / recruit them for their org / abuse them sexually, and the new person trusts them as representatives of the EA/rationalist community (they can easily pretend to be), while the actual representatives of EA/rationalist community probably don't even notice that this happens, or maybe feel like it's not their job to go reminding everyone "hey, don't blindly trust everyone you meet here".

I assume the illusion of transparency plays a big role here, where the existing members generally know who is important and who is a nobody, who plays a role in the movement and who is just hanging out there, what kind of behavior is approved and what kind is not... but the new member has no idea about anything, and may assume that if someone acts high-status then the person actually is high-status in the movement, and that whatever such person does has an approval of the community.

To put it bluntly, EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously -- from the perspective of a potential abuser, this is ripe fruit ready to be taken, it is even obvious what sales pitch you should use on them.

Not sure what exactly to do about this, but perhaps the first step could be to write some warnings about this, and read them publicly at the beginning of every public event where new people come. Preferably with specific examples of things that happened in the past; like, not the exact name and place, but the pattern, like "hey, I have a startup that aims to improve the world, wanna code for me this app for free, I will totally donate something to some effective charity, pinky swear".

Replies from: ozziegooen, ozziegooen, farp
comment by ozziegooen · 2021-10-16T06:29:06.396Z · LW(p) · GW(p)

I very much agree about the worry. My original comment was to make the easiest case quickly, but I think more extensive cases apply too. For example, I’m sure there have been substantial problems even in the other notable orgs, and in expectation we should expect there to continue to be so. (I’m not saying this based on particular evidence about these orgs, more that the base rate for similar projects seems bad, and these orgs don’t strike me as absolutely above these issues.)

One solution (of a few) that I’m in favor of is to just have more public knowledge about the capabilities and problems of orgs.

I think it’s pretty easy for orgs of about any quality level to seem exciting to new people and recruit them or take advantage of them. Right now, some orgs have poor reputations among those “in the know” (generally for producing poor quality output), but this isn’t made apparent publicly.[1] One solution is to have specialized systems that actually present negative information publicly; this could be public rating or evaluation systems.

This post by Nuno was partially meant as a test for this:

https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations [EA · GW]

Another thing to do, of course, would be to just do some amounts of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have. I think that in the case of Leverage, there really should have been some deep investigation a few years ago, perhaps after a separate setup to flag possible targets of investigation. Back then things were much more disorganized and more poorly funded, but now we’re in a much better position for similar efforts going forward.

[1] I don’t particularly blame them, consider the alternative.

Replies from: agrippa
comment by agrippa · 2021-10-17T06:44:32.536Z · LW(p) · GW(p)

[1] I don’t particularly blame them, consider the alternative.

I think the alternative is actually much better than silence!

For example I think the EA Hotel is great and that many "in the know" think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information to the EA Hotel and to the community. I wish that more would be surfaced. 

Simply put, if you are actually trying to make a good org, being silently blackballed by those "in the know" is actually not so fun. Of course there are other considerations, such as backlash, but IDK, I think transparency is good from all sorts of angles. The opinions of those "in the know" matter; they lead, and I think it's better for everyone if that leadership happens in the light.

Another thing to do, of course, would be to just do some amounts of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have. 

I think this is more than warranted at this point, yeah. I wonder who might be trusted enough to lead something like that.

Replies from: ozziegooen
comment by ozziegooen · 2021-10-17T15:48:36.226Z · LW(p) · GW(p)

I agree that it would have been really nice for grantmakers to communicate with the EA Hotel more, and other orgs more, about their issues. This is often a really challenging conversation to have ("we think your org isn't that great, for these reasons"), and we currently have very few grantmaker hours for the scope of the work, so I think grantmakers don't have much time now to spend on this. However, there does seem to be a real gap here to me. I represent a small org and have been around other small orgs, and the lack of communication from grantmakers is a big issue. (And I probably have it much easier than most groups, knowing many of the individuals responsible.)

I think the fact that we have so few grantmakers right now is a big bottleneck that I'm sure basically everyone would love to see improved. (The situation isn't great for current grantmakers, who often have to work long hours). But "figuring out how to scale grantmaking" is a bit of a separate discussion. 

Around making the information public specifically, that's a whole different matter. Imagine the value proposition, "If you apply to this grant, and get turned down, we'll write about why we don't like it publicly for everyone to see." Fewer people would apply and many would complain a whole lot when it happens. The LTFF already gets flack for writing somewhat-candid information on the groups they do fund.

(Note: I was a guest manager on the LTFF for a few months, earlier this year)

Replies from: ChristianKl, agrippa
comment by ChristianKl · 2021-10-20T07:50:17.957Z · LW(p) · GW(p)

Fewer people would apply and many would complain a whole lot when it happens. The LTFF already gets flack for writing somewhat-candid information on the groups they do fund. 

I think that it would be very interesting to have a fund that has that policy. Yes, that might result in fewer people applying, but being willing to apply anyway might itself be a signal that a project is worth funding.

comment by agrippa · 2021-10-17T16:46:28.520Z · LW(p) · GW(p)

"If you apply to this grant, and get turned down, we'll write about why we don't like it publically for everyone to see."

I feel confident that Greg of EA Hotel would very much prefer this in the case of EA Hotel. It can be optional, maybe.

Replies from: ozziegooen
comment by ozziegooen · 2021-10-17T18:38:38.263Z · LW(p) · GW(p)

That's good to know. 

I imagine grantmakers would be skeptical about people who would say "yes" to an optional form. Like, they say they're okay with the information being public, but when it actually goes out, some of them will complain about it, costing a lot of extra time.

However, some of our community seems unusually reasonable, so perhaps there's some way to make it viable.

comment by ozziegooen · 2021-10-16T06:32:27.108Z · LW(p) · GW(p)

To put it bluntly, EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously -- from the perspective of a potential abuser, this is ripe fruit ready to be taken, it is even obvious what sales pitch you should use on them.

For what it’s worth, I think this is true for basically all intense and moral communities out there. The EA/rationalist groups generally seem better than many religious and intense political groups in these areas, to me. However, even “better” is probably not at all good enough.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-16T08:54:37.439Z · LW(p) · GW(p)

What are “intense” and/or “moral” communities? And, why is it (or is it?) a good thing for a community to be “moral” and/or “intense”?

Replies from: ChristianKl, ozziegooen
comment by ChristianKl · 2021-10-16T09:53:14.023Z · LW(p) · GW(p)

There are certain goals for which having a moral or intense community is helpful. Whether or not I want to live in such a community, I consider it okay for other people to build those communities. On the other hand, building cults is not okay in the same sense.

Intense communities also generally focus on something where otherwise there's not much focus in society, increase cognitive diversity and are thus able to produce certain kinds of innovations that wouldn't happen with less cognitive diversity.

comment by ozziegooen · 2021-10-16T15:32:44.656Z · LW(p) · GW(p)

I was just thinking of the far right-wing and left-wing in the US; radical news organizations and communities. Q-anon, some of the radical environmentalists, conspiracy groups of all types. Many intense religious communities. 

I'm not making a normative claim about the value of being "moral" and/or "intense", just saying that I'd expect moral/intense groups to have some of the same characteristics and challenges.

comment by farp · 2021-10-17T06:35:47.110Z · LW(p) · GW(p)

while the actual representatives of EA/rationalist community probably don't even notice that this happens

I think it matters a lot whether this is true, and there is widely known evidence that it isn't true. For example, Brent Dill and (if you are willing to believe victims) Robert Lecnik.

Your post is well said and I am also very worried about EA/rat spaces as a fruitful space for predatory actors. 

Replies from: AnnaSalamon, Viliam
comment by AnnaSalamon · 2021-10-17T18:44:16.427Z · LW(p) · GW(p)

Which thing are you claiming here? I am a bit confused by the double negative (you're saying there's "widely known evidence that it isn't true that representatives don't even notice when abuse happens", I think; might you rephrase?).

I've made stupid and harmful errors at various time, and e.g. should've been much quicker on the uptake about Brent, and asked more questions when Robert brought me info about his having been "bad at consent" as he put it. I don't wish to be and don't think I should be one of the main people trying to safeguard victims' rights; I don't think I have needed eyes/skill for it. (Separately, I am not putting in the time and effort required to safeguard a community of many hundreds, nor is anyone that I know of, nor do I know if we know how or if there's much agreement on what kinds of 'safeguarding' are even good ideas, so there are whole piles of technical debt and gaps in common knowledge and so on here.)

Nonetheless, I don't and didn't view abuse as acceptable, nor did I intend to tolerate serious harms. Parts of Jay's account of the meeting with me are inaccurate (differ from what I'm really pretty sure I remember, and also from what Robert and his husband said when I asked them for their separate recollections). (From your perspective, I could be lying, in coordination with Robert and his husband who also remember what I remember. But I'll say my piece anyhow. And I don't have much of a reputation for lying.) If you want details on how the me/Robert/Jay interaction went as far as I can remember, they're discussed in a closed FB group with ~130 members that you might be able to join if you ask the mods; I can also paste them in here I guess, although it's rather personal/detailed stuff about Robert to have on the full-on public googleable internet so maybe I'll ask his thoughts/preferences first, or I'm interested in others' thoughts on how the etiquette of this sort of thing ought to go. Or could PM them or something, but then you skip the "group getting to discuss it" part. We at CFAR brought Julia Wise into the discussion last time (not after the original me/Robert/Jay conversation, but after Jay's later allegations plus Somni's made it apparent that there was something more serious here), because we figured she was trustworthy and had a decent track record at spotting this kind of thing.

Replies from: farp
comment by farp · 2021-10-17T20:25:11.895Z · LW(p) · GW(p)

Which thing are you claiming here?

I'm claiming that CFAR representatives did in fact notice bad things happening, and that the continuation of bad things happening was not for lack of noticing. I think that you are pretty familiar with this view.

I don't wish to be and don't think I should be one of the main people trying to safeguard victims' rights; I don't think I have needed eyes/skill for it. (Separately, I am not putting in the time and effort required to safeguard a community of many hundreds,

I want to point out what is in my mind a clear difference between taking a major role as a safeguard, and failing people who trust you when the accused confesses to you. You can dispute whether that happened but it's not as though I am asking you to be held liable for all harms.

I can also paste them in here I guess, although it's rather personal/detailed stuff about Robert to have on the full-on public googleable internet so maybe I'll ask his thoughts/preferences first

If you think this guy raped people (with 80% credence or whatever) then you should probably warn people about him (in a public googleable way). If you don't think so then you can just say so. Basically, it seems like your willingness to publish this stuff should mostly just depend on how harmful you think this person was. 

I'm personally not aware of anything you did with respect to Robert that demonstrates intolerance for serious harms. Allowing somebody to continue to be an organizer for something after they confess to rape qualifies as tolerance of serious harms to me.

 

Of course my comment here seems litigious -- I am not really trying to litigate. 

In very plain terms: It has been alleged that CFAR leadership knew that Brent and Robert were committing serious harms and at the very least tolerated it. I take these allegations seriously. Anyone who takes these allegations seriously would obviously be troubled by it being taken for granted that community leaders do not even notice harms taking place.

Replies from: Duncan_Sabien, ChristianKl, AnnaSalamon
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-18T01:23:42.194Z · LW(p) · GW(p)

"It has been alleged" strikes me as not meeting the bar that LW should strive to clear, when dealing with such high stakes, with this much uncertainty.

Allegations come with an alleger attached.  If that alleger is someone else (i.e. if you don't want to tie your own credibility to their account) then it's good to just ... link straight to the source.

If that alleger is you (including if you're repeating someone else's allegations because you found them credible enough that you're adopting them, and repeating them on your own authority), you should be able to state them directly and concretely.

"It has been alleged" is a vague, passive-voice, miasmatic phrase that is really difficult to work with, or think clearly around.

It also implies that these allegations have not been, or cannot be, settled, as questions of fact, or at least probability.  It perpetuates a sort of un-pin-downable quality, because as long as the allegations are mist and fog, just floating around absent anyone who's taking ownership of them, they can't be conclusively settled or affirmed, and can be repeated forever.

I think it's pretty bad to lean into a dynamic like that.

In very plain terms: it is the explicit and publicly stated position of CFAR leadership that they were unaware of Brent's abuses, and that as soon as they became aware of them, they took quick and final action.

In that very statement, you can also find CFAR's mea culpas re: places where CFAR feels it should have become aware, prior to the moment it did become aware.  CFAR does not claim that it did a good job with Brent.  CFAR explicitly acknowledges pretty serious failures.

No one is asking anyone to take for granted that community leaders either [always see], or [never wrongly ignore], harms.  That was a strawman.  Obviously it is a valid hypothesis that community leaders can fail to see harms, or fail in their response to them.  You can tell it's a valid hypothesis because CFAR is an existence proof of community leaders outright admitting to just such a mistake.

It seems to me that Anna is trying pretty hard, in her above reply, to be open, and legible, and give-as-much-as-she-can without doing harm, herself.  I read in Anna's reply something analogous to the CFAR Brent statement: that, with hindsight, she wishes she had done some things differently, and paid more attention to some concerning signals, but that she did not suppress information, or ignore or downplay evidence of harm once it came clearly to her attention (I say "evidence of harm" rather than "harm" because it's important to be clear about my epistemic status with regards to this question, which is that I have no idea).  

I furthermore see in Anna's comment evidence that there are non-CFAR-leadership people looking at the situation, and taking action, albeit in a venue that you and I cannot see.  It doesn't sound like anything is being ignored or suppressed.

So insofar as "things that have been alleged" are concerned, I think it boils down to something like:

Either one believes CFAR (in the Brent case) or Anna (above), or one explicitly registers (whether publicly or privately) a claim that they're lying, or somehow blind or incompetent to a degree tantamount to lying.

Which is a valid hypothesis to hold, to be clear.  Right now the whole point of the broader discussion is "are these groups and individuals good or bad, and in what ways?"  It's certainly reasonable to think "I do not believe them."

But that's different from "it has been alleged," and the implication that no response has been given.  To the allegation that CFAR leadership ignored Brent, there's a clear, on-the-record answer from CFAR.  To the allegation that CFAR leadership ignored Robert or other similar situations, there's a clear, on-the-record answer from Anna above (that, yes, is not fully forthright, but that's because there are other groups already involved in trying to answer these questions and Anna is trying not to violate those conversations nor the involved parties' privacy).

I think that you might very well have further legitimate beef, à la your statement that "I'm claiming that CFAR representatives did in fact notice bad things happening, and that the continuation of bad things happening was not for lack of noticing."

But I think we're at a point where it's important to be very clear, and to own one's accusations clearly (or, if one is not willing to own them clearly, because e.g. one is pursuing them privately, to not leave powerful insinuations in places where they're very difficult to responsibly answer).

The answer, in both cases given above, seems to me to be, unambiguously:

"No, we did not knowingly tolerate harm."

If you believe CFAR and/or Anna are lying, then please proceed with that claim, whether publicly or privately.

If you believe CFAR and/or Anna are confused or incompetent, then please proceed with that claim, whether publicly or privately.

But please ... actually proceed? Like, start assembling facts, and presenting them here, or presenting them to some trusted third-party arbiter, or whatever.  In particular, please do not imply that no answer to the allegations has been given (passive voice).  I don't think that repeating sourceless substanceless claims—

(especially in the Brent case, where all of the facts are in common knowledge and none of them are in dispute at this point)

—after Anna's already fairly in-depth and doing-its-best-to-cooperate reply, is doing good for anybody in either branch of possibility.  It feels like election conspiracy theorists just repeating their allegations for the sake of the power the repetition provides, and never actually getting around to making a legible case.

EDIT: For the record, I was a CFAR employee from 2015 to 2018, and left (for entirely unrelated reasons) right around the same time that the Brent stuff was being resolved.  The linked document was in part written with my input, and sufficiently speaks for me on the topic.

comment by ChristianKl · 2021-10-20T07:59:54.309Z · LW(p) · GW(p)

If you think this guy raped people (with 80% credence or whatever) then you should probably warn people about him (in a public googleable way). 

In most legal environments, like the US, publicly accusing someone of being a rapist comes with huge legal risks, especially if the relevant evidence only allows 80% credence.

Calling for something like this seems to ignore the complexity of the relevant dynamics.

comment by AnnaSalamon · 2021-10-20T05:15:17.382Z · LW(p) · GW(p)

Allowing somebody to continue to be an organizer for something after they confess to rape

To fill in some details (I asked Robert, he's fine with it):

Robert had not confessed to rape, at least not the way I would use the word. He had told me of an incident where (as he told it to me) [edit: the following text is rot13'd, because it contains explicit descriptions of sexual acts] ur naq Wnl unq obgu chg ba pbaqbzf, Wnl unq chg ure zbhgu ba Eboreg’f cravf, naq yngre Eboreg unq chg uvf zbhgu ba Wnl’f cravf jvgubhg nfxvat, naq pbagvahrq sbe nobhg unys n zvahgr orsber abgvpvat fbzrguvat jnf jebat. Wnl sryg genhzngvmrq ol guvf. Eboreg vzzrqvngryl erterggrq vg, naq ernyvmrq ur fubhyq unir nfxrq svefg, naq fubhyq unir abgvprq rneyvre fvtaf bs qvfpbzsbeg.

Robert asked for my help getting better at consent, and I recommended he do a bunch of sessions on consent with a life coach named Matt Porcelli, which he did (he tells me they did not much help); I also had a bunch of conversations with him about consent across several months, but suspect these did at most a small part of what was needed. I did allow him to continue using CFAR’s community space to run (non-CFAR-affiliated) LW events after he told me of this incident. In hindsight I would do a bunch of things differently around these events, particularly asking Jay more questions about how it went, and asking Robert more questions too probably, particularly since in hindsight there were a number of other signs that Robert didn’t have the right skills and character here (e.g., he found it difficult to believe he could refuse hugs; and he’d told me about a previous more minor incident involving Robert giving someone else “permission” to touch Jay’s hair.) My guess in hindsight is that the incident had more warning signs about it than I noticed at the time. But I don’t think “he confessed to rape” is a good description.

(Separately, Somni and Jay later published complaints about Robert that included more than what’s above, after which CFAR asked Robert not to be in CFAR’s community space. Robert and I remained and remain friends.)

(Robert has since worked with an AltJ group that he says actually helped a lot, if it matters, and has shown me writeups and things that leave me thinking he’s taken things pretty seriously and has been slowly acquiring the skills/character he initially lacked. I am inclined to think he has made serious progress, via serious work. But I am definitely not qualified to judge this on behalf of a community; if CFAR ever readmits Robert to community events it will be on someone else’s judgment who seems better at this sort of judgment, not sure who.)

comment by Viliam · 2021-10-17T12:10:13.511Z · LW(p) · GW(p)

I think it matters a lot whether this is true, and there is widely known evidence that it isn't true.

If that's so, then it's very bad, and I feel like some people should receive a wake-up slap. I live on the opposite side of the planet, and I usually only learn about things after they have already exploded. Sometimes I wonder if anything would be different if I lived where most of the action happens. Generally, it seems like they should import some adults into the Bay Area.

As far as I know, in the Vienna community we do not tolerate this type of behavior. (Anyone feel free to correct me if I am wrong, publicly or privately at your choice.)

comment by ChristianKl · 2021-10-15T14:57:31.217Z · LW(p) · GW(p)

It seems to me that quality control has always been an issue with some groups, no matter how many groups there were.

Replies from: ozziegooen
comment by ozziegooen · 2021-10-16T15:30:39.268Z · LW(p) · GW(p)

Agreed, though I think that the existence of many groups makes it a more obvious problem, and a more complicated problem.

comment by orellanin (f____) · 2022-10-29T02:59:35.208Z · LW(p) · GW(p)

Hi! In the past few months I've been participating in Leverage Research/EA discourse on Twitter. Now there is one Twitter thread discussing your involvement as throwaway/anonymoose: https://twitter.com/KerryLVaughan/status/1585319237018681344 (with a subthread starting at https://twitter.com/ohabryka/status/1586084766020820992 discussing anti-doxxing norms and linking back to EA Forum comments).

One piece of information that's missing is why you used two throwaway accounts instead of one (and in particular, why you used one to reply to the other one, as alleged by Kerry Vaughan in https://twitter.com/KerryLVaughan/status/1585319243985424384 ). Can you tell me about your reasoning behind that decision?

(If that matters, I am not affiliated with any Leverage-adjacent org and I am not a throwaway account for a different EA Forum user.)

Replies from: RyanCarey
comment by RyanCarey · 2022-10-29T04:20:13.053Z · LW(p) · GW(p)

Hi Orellanin,

In the early stages, I had in mind that the more info any individual anon-account revealed, the more easily one could infer what time they spent at Leverage, and therefore their identity. So while I don't know for certain, I would guess that I created anonymoose to disperse this info across two accounts.

When I commented on the Basic Facts post as anonymoose, It was not my intent to contrive a fake conversation between two entities with separate voices. I think this is pretty clear from anonymoose's comment [EA · GW], too - it's in the same bulleted and dry format that throwaway uses, so it's an immediate possibility that throwaway and anonymoose are one and the same. I don't know why I used anonymoose there. Maybe due to carelessness, or maybe because I lost access to throwaway. (I know that at one time, an update to the forum login interface did rob me of access to my anon-account, but not sure if this was when that happened).

comment by Evan_Gaensbauer · 2021-10-14T23:11:30.386Z · LW(p) · GW(p)

I dipped my toe into openly commenting last week [LW(p) · GW(p)], and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post".

Leverage Research hosted a virtual open house and AMA a couple weeks ago for their relaunch as a new kind of organization that has been percolating for the last couple years. I attended. One subject Geoff and I talked about was the debacle that was the article in The New York Times (NYT) on Scott Alexander from several months ago. I expressed my opinion that:

  1. Scott Alexander could have managed his online presence much better than he did on and off for a number of years.
  2. Scott Alexander and the rationality community in general could have handled the situation much better than they did.
  3. Those are parts of this whole affair that too few in the rationality community have been willing to face, acknowledge, or discuss in terms of what can be learned from the mistakes made.
  4. Nonetheless, NYT was the instigating party in whatever part of the situation constituted a conflict between NYT and Scott Alexander and his supporters, and NYT is the party that should be held more accountable and is more blameworthy if anyone wants to make it about blame.

Geoff nodded, mostly in agreement, and shared his own perspective on the matter that I won't share. Yet if Geoff considers NYT to have done one or more things wrong in that case, the same standard would seem to apply to threatening anyone else's anonymity.

You yourself, Ryan, never made the mistake of posting your comments online in a way that might make it easier for someone else to de-anonymize you. If you made any mistake, it's that you didn't anticipate how adeptly Geoff would apparently infer or discern your identity. I expect it wouldn't have been hard for Geoff to figure out it was you, because you would have shared information about the internal activities of Leverage Research that only a small number of people would have had access to.

Yet that's not something you should have had to anticipate. A presumption of good faith in a community or organization entails a common assumption that nobody would do that to their peers. Whatever Geoff himself has been thinking about you as the author of those posts, he understands perfectly well that to de-anonymize you, or whoever did write them, would be considered a serious violation of a commonly respected norm.

Based on how you wrote your comment, it seems that the email you received may have come across as intimidating. Obviously I don't expect you to disclose anything else about it, and would respect and understand if you don't, but it seems the email may have been meant to provide you with a well-intended warning. If so, there is also a chance Geoff had discerned that you were the account-holder for 'throwaway' (at least at the time of the posts in question) but hasn't even considered the possibility of de-anonymizing you, at least in more than a private setting. Yet either way, Geoff has begun responding in a way that, if he were to act on it further, would only become more disrespectful to you, your privacy, and your anonymity.

Of course, if it's not already obvious to anyone, neither am I someone who has an impersonal relationship with Leverage Research as an organization. I'm writing this comment with the anticipation that Geoff may read it himself and may not be comfortable with what I've disclosed above. Yet what I've shared was not from a particularly private conversation. It was during an AMA Leverage Research hosted that was open to the public. I've already explained above as well that in this comment I could have disclosed more, like what Geoff himself personally said, but I haven't. I mention that to also show that I am trying to come at this with good faith toward Geoff as well.

During the Leverage AMA, I also asked a question that Geoff called the kind of 'hard-hitting journalistic' question he wanted more people to have asked. If that's something he respected during the AMA, I expect this comment is one he would be willing to accept being in public as well. 

Replies from: Viliam
comment by Viliam · 2021-10-15T17:24:45.776Z · LW(p) · GW(p)

Based on how you wrote your comment, it seems that the email you received may have come across as intimidating.

I think the important information here is how Geoff / Leverage Research handled similar criticism in the past. (I have no idea. I assume both you and Ryan probably know more about this.) As they say, past behavior is the best predictor of future behavior. The wording of the e-mail is not so important.

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2021-10-15T20:18:00.189Z · LW(p) · GW(p)

I previously was not as aware that this is a pattern, with so many people having experienced this kind of response to criticism from Geoff and Leverage in the past.

comment by Spiracular · 2021-11-01T20:03:23.680Z · LW(p) · GW(p)

Timeline Additions

I rather liked the idea of making a timeline [LW(p) · GW(p)]!

Geoff has a short doc on the timing of changes in org structure, but it currently doesn't include much else.

Depending on how discussion here goes, I might transfer/transform this into its own post in the future. Will link them, if so.


Preamble

Nobody has talked much in public about the most dysfunctional things, yet? I am going to switch strategies out of dark-hinting and anonymity at this point, and put my cards down on the table.

This will be a sketch of the parts of this story that I know about. I do not have exact dates, and these are just broad-strokes of some of the key incidents here.

And not all of these are my story to tell? So sometimes, I will really only feel comfortable providing the broad-strokes.

(If someone has a better 2-3 sentence summary, or the full story, for some of these? Do chime in.)

These are each things I feel pretty solid about believing in. I think these incidents belong somewhere on any good consensus-timeline, but are not the full set of relevant events.

(I only have about 3-6 relevant contacts at the moment, but I've gotten at least 2 points of confirmation on each of these. It was not in this exact wording, though.)


Timeline Pieces

  • Early L1-present: Leverage has always had weird levels of PR-protective secret-keeping stuff, as far back as I can remember (~2015)
    • I believe this may have been true since before they probably had anything worth hiding? Not confident in that, though.
  • Early L1: Leverage runs first EA Global (2013)
    • It brought a lot of important early EA people together, and a lot of them left it on friendlier terms
    • They also ran the 2014 one, but the character was pretty different; less "people crowded in a single house" more "conference + summit"
  • Early-L1: Panel discussion between Geoff & Anna Salamon (2014)
    • Brought up a few of Geoff's non-materialist views
    • Seems to have been some sort of turning-point that deepened a rift between Geoff and Leverage vs the Rationalist and EA philosophies and communities
  • ???: Sense of a developing rift between Leverage and EA
    • Started out relatively friendly, in first EAG era
    • Drifted apart due to a mix of philosophical differences and some growing anti-Leverage reputational dynamics
    • Geoff and other Leveragers were largely blocked from collaborations with EA orgs, and felt pretty shunned
  • Mid-L1: Attempt by 1-2 Leverage-aligned people to use CEA as a Leverage recruitment vehicle. This escalated for a while, and eventually became such a problem that they were rebuffed and fired.
    • Geoff was aware of this
    • Oli was pretty burned out, scared, and depressed during this period. The Leverage drama contributed to that, although it was not the sole reason.
    • I was dating Oli (habryka [LW · GW]). Oli didn't give me very much detail for a long time, but I could pick up on the fact that he was scared by this component of it, and he did tell me some of it eventually.
    • I have not seen Oli get this scared very often? So this does feel almost-personal for me, and I get pretty incensed about this.
  • Late L1: While worried about defunding, different psychology factions did stuff to each other that... bordered on psychological warfare?
    • Some nebulous mix of "actually fucking with each other," "claiming/convincing-self that the other faction fucked with them," and "trying to enforce unreasonable/unhealthy norms on each other."
    • Different factions and splinter-groups often had different degrees and specifics of dysfunction.
    • (I'd like a more Kosher wording for this! But I'm not sure how else to express just how bad it got.)
  • Late L1/post-L1: At least one faction freaked the hell out about social contagion, and engaged in a lot of dysfunctional pressuring of others as a result
    • I happen to think that social contagion is a useful model? But that they failed to factor in enough "it all adds up to normality," and got pretty arrogant about their models and personal competence, in a way that did not end well.
  • End of L1: Leverage 1.0 was dissolved
    • My personal take is that dissolving and restructuring was overall a good move
    • As much as I sympathize with the intent, I am not always a big fan of the legacy and specifics of the information-hiding agreement (I'll pick that fight later, though.)
  • post-L1: A few non-Leverage people who were close to someone in one of the psychology factions experienced some nasty second-degree fallout drama when their ex-Leverager friend started claiming they were {infected with objects, had attacked the Leverager with their aura, etc.}.
    • This is the short version of my story? I experienced one of these second-degree* echoes. See "My Story" below.
    • I reported the 10-second-version of my story to Anna and MattF in an anonymized text snippet, shortly after Zoe posted.
    • I have discovered others who were affected by some second-degree fallout drama, but the exact stories differ.

My Story

I was sworn into an intense secrecy agreement ("do not tell anyone, even if they will get hurt if they don't know"), and told by a friend that I "contained an object" ("object" is basically their confusing term for a psychologically-unhealthy memetic thing). The Ziz/CFAR incident hit the same damn day. I responded by requesting an outside-view sanity-check from Oli**. I told my friend that I'd told someone, as soon as I got back, and shit hit the fan.

I sometimes call my incident the "quarantine-before-quarantine?" I responded to things like "getting told that I'm not allowed to email partner-of-friend because that gave partner an object" and "friend vents that they can't visit friend's-partner, because they caught an object from me, and now they have to wait until someone has an opening for about an hour of bodywork" and "friend says they felt me attack them, while I was just eating breakfast" by generating and following monotonically-increasing explicit rules-of-separation, until we were living in the same house but were not allowed to talk or even be in the same room together. We both moved out a month later. The whole thing is a long story, but basically, friend and I had a gigantic fallout as a result of all of this.

I was mum for a long time about everything except "I broke a major secrecy agreement" and "friend & I are not even able to calmly co-exist in the same house anymore," because the friend had made it clear that talking honestly about any of this would read to them as "throwing them under the bus." I do genuinely still care about their well-being.

If you know who I am talking about, do not reveal it publicly and please be nice to them. What they went through was even worse than what I experienced.

Same goes for friend's-partner, who always struck me as swept-up and misguided, not malicious. They were the route by which this insane frame reached me, but I genuinely believe that they meant well. There were even times when the partner was more charitable towards me, than ex-friend was.

I distantly wish both of them well. I also do not wish to speak privately with either of them about this, at present.

On Centers of Dysfunction

In terms of clusters-of-dysfunction:

  • I think early-L1 was generally less-dysfunctional, although the culture had many of what I would think of as "risk-factors."
  • The worst stuff seems to have reached a head right near the end of L1?
  • Reserve was one of the more-functional corners, and was basically fine.
    • I worked at Reserve for a while; you could pick up on some of the "taste of Leverage" from that distance, but it never appears to have escalated to anything seriously dysfunctional.
    • For example: my worst complaint is that C was weirdly-intense about not granting me access to the #general Slack channel, even though that could get in the way of doing my job sometimes
  • In general, what I've seen seems consistent with some ex-Leveragers getting a substantial amount of good out of the experience. Sometimes with none of the bad, sometimes alongside it.
    • ex: I recognize that bodywork was very helpful to my ex-friend, in working through some of their (unrelated) trauma. Many people have reported good experiences with belief-reporting, and say they found it useful.
  • I also think there were a lot of regrettable actions taken at the time by people who were swept up in this, people who would ordinarily be harmless under normal circumstances.
    • It can be hard to judge this, especially from where I am? But as bad as the things that happened were, I think this is broadly true of most of the people involved.
  • I do not want to put people through the sort of mob-driven invalidation that I once felt.
    • I was once friends with Brent. I still care about his well-being. There are times where I was under a lot of pressure to write that out of my personal narrative, but it was ultimately healthier for me that I chose to keep it.
    • I hope that those with stories about Leverage that are different from mine feel the right to lay claim to the positives of their experiences, as well as the negatives.

Footnotes

* Technically, my friend was dating an ex-Leverager. So I actually got a third-degree burn.

** I told Oli something to the effect of "Mental illness as social contagion theory; claimed to be spread highly-effectively through circling. Not sure if Ziz incident may be an instance? If there's another psychotic break within 1 month, boost likelihood of this being true. If there's not, please update downward on this model.*** Pieces of Leverage's model of social contagion did not match my own theory of social contagion, and I'm not entirely confident who is in the right, here? Also, this one may have come out of Leverage, but they say it was an accident."

(...that is probably roughly everything I said? It was succinct, in part because I was taking the possibility that I had caught something, seriously. In my theory of social contagion, bandwidth really matters. Some Leveragers behaved in a way that implied thinking bandwidth mattered less, and this was one of the first things -of several- that struck me as insane about their lens on it.)

*** Ziz turned out to be already-crazy as a baseline. There were no psychosis episodes that month from anyone else. I asked around, and I do not believe Ziz had any strong connection to Leverage at all, but especially not in that time-period.

Replies from: Spiracular, Viliam, Richard_Kennaway
comment by Spiracular · 2021-11-01T20:04:43.858Z · LW(p) · GW(p)

Threads Roundup

  • Several things under the LW Leverage Tag [? · GW]
  • Leverage Basic Facts EA Post [EA · GW] & comment thread
    • I discovered this one a little late? Still flipping through it.
  • BayAreaHuman LW Post [LW · GW]
    • By now, I have been able to confirm that every single concrete point made in that post seems true or reasonable, to myself or to at least one of my contacts (not always two). The tone is slightly-aggressive, but seems generally truth-seeking, to me.
    • I think it leans more towards characterizing dysfunctional late-L1, than early-L1? But not strictly.
    • Someone, probably Geoff (it's apparently the kind of thing he does, confirmed by 2+ people), sent out emails to friends of Leverage framing it as an unwarranted attack and encouraging flooding the comment thread with people's positive experiences.
      • I do not like that he did this! I know someone else, who intends to write up something more thorough about this. But if they don't, I am likely to comment on it myself, after saving evidence and articulating my thoughts.
      • EDIT: I do think a lot of the positive accounts are honest! I am not accusing any commenter of lying. My concern here is selective reporting, and something of a concentration of force [LW · GW] dynamic that I believe may have been invoked deliberately in a way that I do not trust as truth-seeking.
    • Matt Falshaw's recent email [LW(p) · GW(p)] mentioned non-Zoe people writing "disingenuous and deliberately misleading" posts in the past? If that was meant to implicate BAH, then I think it was being a bit "disingenuous and deliberately misleading."
  • Zoe's Medium Post
    • I buy it! I was willing to chime-in in its favor, from early on
    • Late-Leveragers seem to have conceded that it is a valid personal recounting
    • In case this changes location: LW comment thread on it [LW · GW]
  • Geoff Anders Twitch Streams
    • Stream 1
      • Included some relevant backstory on the rift with EA, which probably also belongs in a timeline.
      • Audio was recovered, and there's a transcript here [LW · GW] of the second half for the less audio-inclined.
      • Geoff's initial twitch-stream (with Anna Salamon) included commentary about how Leverage used to be pretty friendly with EA, and ran the first EAG. Several EA founders felt pretty close after that, and then there was some pretty intense drifting apart (partially over philosophical differences?). There was also some sort of kerfuffle where a lot of people ended up with the frame that "Leverage was poaching donors," which may have been unfair to Leverage. As time went on, Geoff and other Leveragers were largely blocked from collaborations, and felt pretty shunned.
        • Some decent higher-detail text summaries here [LW(p) · GW(p)] and here [LW(p) · GW(p)].
        • TekhneMakre started a thread with some good additional thoughts, here [LW(p) · GW(p)]
  • Some Press Releases from Leverage
    • A letter from the Executive Director on Negative Past Experiences: Sympathy, Transparency, and Support
      • Commits to:
        • "reimburse any employee of any organization in the Leverage research collaboration for expenditures they made on therapy" (w/ details)
        • "we will share information about intention research in the form of essays, talks, podcasts, etc., so as to give the public greater context on this area of our past research"
        • Sets up 4 intermediaries (to ease coming forward with accounts, in cases of distrust)
          • EDIT: Names Anna Salamon, Eli Tyre, Matthew Graves, and Matt Falshaw as several somewhat-intermediary people who can be contacted.
        • "Leverage Research will thus seek to resolve the current conflict as definitively as possible, publicly dissociate from the Rationalist community, and take actions to prevent future conflict"
      • Overall, I found this one pretty heartening
    • Ecosystem Dissolution Agreement
      • The socially-enforced NDA-like from the end of Leverage 1.0
      • EDIT: For a sense of Leverage's information-suppression policy in prior years, here is the Basic Information Management Checklist from 2017
    • Leverage 1.0 Ecosystem information sharing and initial inquiry
      • Email from Matt Falshaw on Oct 19
    • ETA: Essay On Intention Research
      • This essay seemed really well done, overall.
        • Outlined the history of the research clearly. Seemed pretty good at sticking to fairly grounded descriptions, especially given the slipperyness of the subject matter. Tried to provide multiple hypotheses of what could be happening, and remained open to explanations nobody has come up with yet. This has been a tricky topic for people to describe, and I suspect he handled it well.
      • Mostly gives a history of Intention Research, a line of inquiry that started out poking at energywork and bodywork (directing attention with light touch), got increasingly into espousing detailed reads of each other's nonverbals, and eventually fed into some really awful interpersonal dynamics that got so bad that Leverage 1.0 was dissolved to defuse them.
      • Warnings are at the end. My sole complaint with the writing is that I wish they were outlined earlier.
    • ETA: Public Report on Inquiry Findings: Factors and Mistakes that Contributed to a Range of Negative Experiences on Our 2011-2019 Research Collaboration
      • I thought this was quite good. Reading this raised my esteem for Matt Falshaw.
      • I do think this accurately characterized a lot of the structural problems, and leaves me more optimistic that Leverage 2.0 will avoid those. If you are interested in the details of that, I recommend reading it.
        • I don't think all of the problems were structural? But a lot of them were, and the ones that weren't were often exacerbated by structural things. Putting the focus on fixing things at that layer looks like a reasonable choice.
      • 3-5 people with extremely negative experiences and perspectives, out of something like 45 people, does sound plausible to me.
      • Something I felt wasn't handled perfectly: the refusal of people with largely-negative experiences to talk with investigators reads to me as some indicator of a feeling of past loss-of-trust or breach-of-trust. And while their absence is gestured at, I did feel like the significance of this tended to get downplayed more than I would have liked.
comment by Viliam · 2021-11-07T22:53:53.196Z · LW(p) · GW(p)

I have some difficulty understanding the descriptions by former Leverage members. Inferential distance, but even if you tell me what the words refer to, I am not sure I am painting my near-mode picture correctly. Like, when you say "bodywork", now I imagine something like one person giving the other person a massage, where both participants believe that this action not only relaxes the body, but also helps to remove some harmful memes from the mind. -- Is this a strawman? Or is it a reasonable first approximation (which of course misses some important nuance)?

For me, getting these things right feels like I have an insight into how the organization actually works, on social level. Approximate descriptions are okay. If massaging someone's left shoulder helps them overcome political mindkilling, and massaging someone's right shoulder protects them from Roko's Basilisk, don't tell me! You have the NDA, and I don't actually care about this level of detail. Keep your secret tech! I just want to understand the dynamic, like if someone talks to a stranger and later feels like the person may have cast some curse on them, the reasonable response is to schedule a massage.

From all descriptions I have read so far, yours felt the most helpful in this direction. Thank you!

Replies from: Spiracular
comment by Spiracular · 2021-11-08T05:57:50.945Z · LW(p) · GW(p)

My impression is that Leverage's bodywork is something closer to what other people call "energy work," which probably puts it... closer to Reiki than massage?

But I never had it done to me, and I don't super understand it myself! Pretty low confidence in even this answer.

comment by Richard_Kennaway · 2021-11-02T09:27:17.567Z · LW(p) · GW(p)

("object" is basically their confusing term for a psychologically-unhealthy memetic thing)

Cult symptom! Invented terminology for invented, fictitious entities.

Replies from: Ruby
comment by Ruby · 2021-11-02T12:53:47.162Z · LW(p) · GW(p)

The observation might be correct but I don't love the tone. It has some feeling of "haha, got you!" that doesn't feel appropriate to these discussions.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2021-11-02T13:16:21.558Z · LW(p) · GW(p)

Point taken, but I stand by the observation.

comment by habryka (habryka4) · 2021-10-28T21:01:18.715Z · LW(p) · GW(p)

After discussing the matter with some other (non-Leverage) EAs, we've decided to wire $15,000 to Zoe Curzi (within 35 days).

A number of ex-Leveragers seem to be worried about suffering (financial, reputational, etc.) harm if they come forward with information that makes Leverage look bad (and some also seem worried about suffering harm if they come forward with information that makes Leverage look good). This gift to Zoe is an attempt to signal support for people who come forward with accounts like hers, so that people in Zoe's reference class are more inclined to come forward.

We've temporarily set aside $85,000 in case others write up similar accounts -- in particular, accounts where it would be similarly useful to offset the incentives against speaking up. We plan to use our judgment to assess reports on a case-by-case basis, rather than having an official set of criteria. (It's hard to design formal criteria that aren't gameable, and we were a bit wary of potentially setting up an incentive for people to try to make up false bad narratives about organizations, etc.)

Note that my goal isn't to evaluate harms caused by Leverage and try to offset such harms. Instead, it's trying to offset any incentives against sharing risky honest accounts like Zoe's.

Full disclosure: I worked with a number of people from Leverage between 2015 and 2018. I have a pretty complicated, but overall relatively negative view of Leverage (as shown in my comments), though my goal here is to make it less costly for people around Leverage to share important evidence, not to otherwise weigh in on the object-level inquiry into what happened. Also, this comment was co-authored with some EAs who helped get the ball rolling on this, so it probably isn't phrased the way I would have fully phrased it myself.

Replies from: Duncan_Sabien, ozziegooen
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-28T23:33:42.574Z · LW(p) · GW(p)

Note that my goal isn't to evaluate harms caused by Leverage and try to offset such harms. Instead, it's trying to offset any incentives against sharing risky honest accounts like Zoe's.

I like the careful disambiguation here.

FWIW, I independently proposed something similar to a friend in the Lightcone office last week, with an intention that was related to offsetting harm.  My reasoning:

There's often a problem in difficult "justice" situations, where people have only a single bucket for "make the sufferer feel better" and "address the wrong that was done."

This is quite bad—it often causes people to either do too little for victims, or too much to offenders, because they're trying to achieve two goals at once and one goal dominates the calculation.  Not helping someone materially because the harm proved unintentional, or punishing the active party way in excess of what they "deserve" because that's what it takes to make the injured party feel better, that sort of thing.

Separating it out into "we're still figuring out the Leverage situation but in the meantime, let's try to make this person's life a little better" is excellent.

Reiterating that I understand that's not what you are doing, here.  But I think that would separately have also been a good thing.

comment by ozziegooen · 2021-11-02T08:52:37.823Z · LW(p) · GW(p)

A few quick thoughts:

1) This seems great, and I'm impressed by the agency and speed.

2) From reading the comments, it seems like several people were actively afraid of how Leverage could retaliate. I imagine similar for accusations/whistleblowing for other organizations. I think this is both very, very bad, and unnecessary; as a whole, the community is much more powerful than individual groups, so it seems poorly managed when the community is scared of a specific group. Resources should be spent to cancel this out.

In light of this, if more money were available, it seems easy to justify a fair bit more. Or even better could be something like, "We'll help fund lawyers in case you're attacked legally, or anti-harassment teams if you're harassed or trolled". This is similar to how the EFF helps in cases where small people/groups are attacked by big companies.

I don't mean to complain; I think any steps here, especially taken so quickly, are fantastic.

3) I'm afraid this will get lost in this comment section. I'd be excited for a list of "things to keep in mind" like this to be made prominent repeatedly, somehow. For example, I could imagine that at community events or similar, there could be papers like "Know your rights, as a Rationalist/EA", which flag how individuals can report bad actors and behavior.

4) Obviously a cash prize can encourage lying, but I think this can be decently managed. (It's a small community, so if there's good moderation, $15K would be very little compared to the social stigma that would come if you were found out to have destructively lied for $15K.)

comment by Beth Barnes (beth-barnes) · 2021-10-13T17:39:35.948Z · LW(p) · GW(p)

Many of these things seem broadly congruent with my experiences at Pareto, although significantly more extreme. Especially: ideas about psychology being arbitrarily changeable, Leverage having the most powerful psychology/self-improvement tools, Leverage being approximately the only place you could make real progress, extreme focus on introspection and other techniques to 'resolve issues in your psyche', (one participant's 'research project' involved introspecting about how they changed their mind for 2 months) and general weird dynamics (e.g. instructors sleeping with fellows; Geoff doing lectures or meeting individually with participants in a way that felt very loaded with attempts to persuade and rhetorical tricks), and paranoia (for example: participants being concerned that the things they said during charting/debugging would be used to blackmail or manipulate them; or suspecting that the private slack channels for each participant involved discussion of how useful the participants were in various ways and how to 'make use of them' in future). On the other hand, I didn't see any of the demons/objects/occult stuff, although I think people were excited about 'energy healers'/'body work', not actually believing that there was any 'energy' going on, but thinking that something interesting in the realm of psychology/sociology was going on there. Also, I benefitted from the program in many ways, many of the techniques/attitudes were very useful, and the instructors generally seemed genuinely altruistic and interested in helping fellows learn.

comment by Unreal · 2021-10-18T01:54:29.432Z · LW(p) · GW(p)

Edit: (One person reading this reports below that this made them more reluctant to come forward with their story, and so that seems bad to me. I have mentally updated as a result. More relevant discussion below.) 

I notice that there's not that much information public about what Geoff actually Did and Did Not Do. Or what he instigated and what he did not. Or what he intended or what he did not intend. 

Um, I would like more direct evidence of what he actually did and did not do. This is cruxy for me in terms of what should happen next. 

Right now, based just on the Medium post, one plausible take is that the people in Geoff's immediate circle may have been taking advantage of their relative power in the hierarchy to abuse the people under them. 

See this example from Zoe:

A few weeks after this big success, this person told me my funding was in question — they had done all they could do to train me and thought I might be too blocked to sufficiently progress into a Master on the project. They and Geoff were questioning my commitment to and understanding of the project, and they had concerns about my debugging trajectory.

"They and Geoff" makes it sound like Zoe's supervisor basically name-dropped Geoff as a way to add weight to a scare tactic. Like "better watch out cuz the boss thinks you're not committed enough..." But it's not really clear what the boss actually said or did not say... This supervisor might just be using a move. (I welcome additional clarity.) 

The most directly 'damning' thing, as far as I can tell, is Geoff pressuring people to sign NDAs. 

A lot of the other stuff seems like it's due to the people around Geoff elevating him to an unreasonable pedestal and treating him like a savior. Maybe Geoff should have done more to stop this from escalating / done more to make people chill out about him and his supposed specialness. But him failing to control his flock is a different failure from him feeding them lies or requiring worship. I'm not seeing any statements about this. I welcome more information and clarity. 

I am wanting clarity here because I am very aware of people's strong desire for a [cult] leader. It can be pretty severe. And this is very much a co-participation between leaders and followers. 

I know what it's like from the inside to want someone to be my cult leader, god or parent figure. And I have low-tolerance for narratives that try to take my personal agency away from me—that claim I was a victim of mind control or whatever, rather than someone who bottom-level gave up my power to them. 

Even if I didn't consciously give away my power and it just sort of happened, I think it's still wrong to write a narrative where I merely blame the other person and absolve myself of all responsibility or agency. This sounds unhealthy to hold onto, as a story. 

I'm def not trying to absolve Geoff (or anyone) of responsibility, accountability, or agency. But also ew scapegoating is gross? 

My main desire is for more information, or for people to realize that we might not be meeting relevant cruxes for how to move forward, and that we should continue to investigate and hold off on taking heavy actions. 

Replies from: None, Unreal
comment by [deleted] · 2021-10-21T02:43:36.134Z · LW(p) · GW(p)

The most directly 'damning' thing, as far as I can tell, is Geoff pressuring people to sign NDAs.


I received an email from a Paradigm board member on behalf of Paradigm and Leverage that aims to provide some additional clarity on the information-sharing situation here. Since the email specifies that it can be shared, I've uploaded it to my Google Drive (with some names and email addresses redacted). You can view it here.

The email also links to the text of the information-sharing agreement in question with some additional annotations.

[Disclosure: I work at Leverage, but did not work at Leverage during Leverage 1.0. I'm sharing this email in a personal rather than a professional capacity.]

Replies from: ChristianKl, Unreal, RobbBB, RobbBB
comment by ChristianKl · 2021-10-21T13:17:05.243Z · LW(p) · GW(p)

I do applaud explicitly clarifying that people are free to share their own experiences.

comment by Unreal · 2021-10-21T21:15:10.270Z · LW(p) · GW(p)

Thanks for sharing this! 

I believe this is public information if I look for your 990s, but could you or someone list the Board members of Leverage / Paradigm, including changes over time? 

comment by Rob Bensinger (RobbBB) · 2021-10-22T03:05:02.279Z · LW(p) · GW(p)

I don't know how realistic this worry is, but I'm a bit worried about scenarios like:

  1. A signatory doesn't share important-to-share info because they interpret the Information Arrangement doc (even with the added comments) as too constraining.

    My sense is that there's still a lot of ambiguity about exactly how to interpret parts of the agreement? And although the doc says it "is meant to be based on norms of good behavior in society" I don't see a clause explicitly allowing people's personal consciences to supersede the agreement. (I might just have missed it.)
     
  2. Or: A signatory doesn't share important-to-share info because they see the original agreement as binding, not the new "clarifications and perspective today" comments.

    (I don't know how scrupulous ex-Leveragers are about sticking to signed informal agreements, but if the agreement has moral force, I could imagine some people going 'the author can't arbitrarily reinterpret the agreement post facto, when the agreement didn't specify that you have this power'.

    Indeed, signing a document with binding moral force seems pretty risky to me if the author has lots of leeway to later reinterpret what parts of the agreement mean. But maybe I'm misunderstanding the social context or ethical orientation of the Leveragers -- I might be reading the agreement way more strictly than anyone construed it at the time.)

Is there any reason not to just say something like 'to the extent we have the power to void this agreement, the whole agreement is now void'? People could then still listen to their consciences, and your recommendations, about what to do next; but I'd be less worried anyone feels constrained by having signed the thing. I don't know the late-Leverage-1.0 people well, but I currently have more faith in y'alls moral judgment than in your moral-judgment-constrained-by-this-verbal-commitment.

The main reason I could imagine it being bad to say 'this is void now' is if there's an ex-Leverager you think is super irresponsible, but who you convinced to sign the agreement -- someone who you'd expect to make terrible reckless decisions if they weren't bound by the thing.

But in that case I'd still think it makes sense to void the agreement for the people who are basically sensible and well-intentioned, which is hopefully just about everyone.

Replies from: ChristianKl
comment by ChristianKl · 2021-11-08T09:09:16.770Z · LW(p) · GW(p)

I don't see a clause explicitly allowing people's personal consciences to supersede the agreement. (I might just have missed it.)

It seems to me "this is not a legal agreement" is basically such a clause. 

The main reason I could imagine it being bad to say 'this is void now' is if there's an ex-Leverager you think is super irresponsible, but who you convinced to sign the agreement -- someone who you'd expect to make terrible reckless decisions if they weren't bound by the thing.

It seems that at the end of Leverage 1.0 the groups were in conflict. There's a strong interest in that conflict not playing out in a way where different people publish each other's private information and then retaliate in kind.

It might very well be that plenty of the ex-Leveragers don't speak out because they are afraid that private information about them will be openly published in retaliation if they do. 

Or: A signatory doesn't share important-to-share info because they see the original agreement as binding, not the new "clarifications and perspective today" comments.

Given that there's a section, "(10) Expected lessening", it seems strange to me to see the original agreement as indefinitely binding.

• Expect to revisit this in 12 months

• Expect that the overall need for share restrictions will diminish, and that as a result we will wind down share restrictions over time, while still maintaining protection of sensitive information and people’s privacy

• If anyone concludes in the future that stronger information management is required, they should make efforts to educate others themselves, and should expect that that might be covered by some future arrangement, not this one

comment by Rob Bensinger (RobbBB) · 2021-10-22T03:01:40.496Z · LW(p) · GW(p)

[...] The most important thing we want to clarify is that as far as we are concerned, at least, individuals should feel free to share their experiences or criticise Geoff or the organisations.

[... T]his document was never legally binding, was only signed by just over half of you, and almost none of you are current employees, so you are under no obligation to follow this document or the clarified interpretation here. [...]

I'm really happy to see this! Though I was momentarily confused by the "so" here -- why would there be less moral obligation to uphold an agreement, just because the agreement isn't legally binding, some other people involved didn't sign it, and the signatory has switched jobs? Were those stipulated as things that would void the agreement?

My current interpretation is that Matt's trying to say something more like 'We never took this agreement super seriously and didn't expect you to take it super seriously either, given the wording; we just wanted it as a temporary band-aid in the immediate aftermath of Leverage 1.0 dissolving, to avoid anyone taking hasty action while tensions were still high. Here's a bunch of indirect signs that the agreement is no big deal and doesn't have moral force years later in a very different context: (blah).' It's Bayesian evidence that the agreement is no big deal, not a deductive proof that the agreement is ~void. Is that right?

comment by Unreal · 2021-10-18T13:55:09.140Z · LW(p) · GW(p)

Another thing I want to mentally watch out for: 

It might be tempting for some ex-Leverage people to use Geoff as the primary scapegoat rather than implicating themselves fully. So as more stories come out, I plan to be somewhat delicate with the evidence. The temptation to scapegoat a leader is pretty high and may even seem justifiable in an "ends justify the means" kind of thinking. 

I don't seem to personally be OK with using misleading information or lies to bolster a case against a person, even if this ends up "saving" a lot of people. (I don't think it actually saves them... people should come to grips with their own errors, not hide behind a fallback person.) 

So... Leverage, I'm looking at you as a whole community! You're not helpless peons of Geoff Anders. 

When spiritual gurus go out of control, it's not a one-man operation; there are collaborators, enablers, people who hid information, yes-men and sycophants, those too afraid to do the right thing or speak out against wrongdoing, those too protective of personal benefits they may be receiving (status, friends, food, housing), etc. 

There are stages of 'coming to terms' with something difficult. And a very basic model would be like 

  1. Defensive / protective stage. I am still blended and identified with a problematic pattern or culture, so I defend it. It feels like my own being is at stake or on the line. It's hard to see what's true, and I am partially in denial or in dissociation—although I myself may not realize it. 
  2. Mitosis stage. I am in the process of a painful identity-level separation from the pattern or culture. I start feeling anger towards it, some grief, horror, etc. It's likely I feel victimized. For the sake of gaining clarity, a victim narrative is more helpful than the previous narrative of "the thing is actually good though" or whatever fog of denial I was in. 
  3. Grief stage. Even more open and full realization of what happened and its problematic nature. Realizing my own personal part in it and the extent to which my actions were my own and also contributed to harm. This can be a very difficult stage, and may come with shame, guilt, remorse, and immense sadness. 
  4. Letting go and integration stage. Happy relief comes when all the disparate parts are integrated and all is forgiven. I hold myself to a new, higher standard, and I hold others also to a higher standard. I feel good about where I stand now, with more clarity and compassion. I see clearly what the mistakes were and how to avoid them. I can guide or warn others from making similar mistakes. There's no emotional or trauma residue left in me. My capacity has expanded to hold more complexity and diversity. I am more accepting of the past, can see from many perspectives, and ready to live fully present. 

Stage 2 is a dangerous stage, and it is one I have been in, and where I was most volatile, angry, and likely to cause damage. Kind of wanting more common knowledge about this as a Thing so that we are collectively aware that damage is best minimized. Although I imagine disagreements with this. 

Replies from: Spiracular, Spiracular, weft, Spiracular
comment by Spiracular · 2021-10-18T14:37:26.305Z · LW(p) · GW(p)

I basically agree with this.

But also, I think pretty close to ZERO people who were deeply affected (aside from Zoe, who hasn't engaged beyond the post) have come forward in this thread. And I... guess we should talk about that.

I know firsthand that there were some pretty bad experiences in the incident that tore Leverage 1.0 apart, which nobody appears to feel able to talk about.

I am currently not at all optimistic that we're managing to balance this correctly? I also want this to go right. I'm not quite sure how to do it.

Replies from: Unreal
comment by Unreal · 2021-10-18T15:44:31.991Z · LW(p) · GW(p)

That's pretty fair. I am open to taking down this comment, or other comments I've made. (Not deleting them forever, I'll save them offline or something.) Your feedback is helpful here and revealing to me, and I feel myself updating because of it. 

I have commented somewhere else that I do not like LessWrong for this discussion... because a) it seems like a bad venue for justice to be served, b) it removes a bunch of context data that I personally think is super relevant (including emotional and physical layers), and c) LW is absolutely not a place designed for healing or reconciliation... and it also seems only 'okay' for sense-making as a community. It is maybe better for sense-making at the individual intellectual level. So... I guess LW isn't my favorite place for this discussion to be happening... I wonder what you think. 

Replies from: Unreal, Spiracular, elityre, Spiracular
comment by Unreal · 2021-10-18T15:46:32.618Z · LW(p) · GW(p)

(Separately) I care about folks from Leverage. I am very fond of the ones I've met. Zoe charted me once, and I feel fondly about that. I've been charted a number of times at Leverage, and it was good, and I personally love CT charting / Belief Reporting and use, reference, and teach it to others to this day. Although it's my own version now. I went to a Paradigm workshop once, as well as several parties or gatherings. 

My felt sense of my time at the workshop (especially during more casual hang-out-y parts of it) is like a sense of sad distance... like, oh I would like to be friends with these people... but mentally / emotionally they seem "hard to access." 

I'm feeling compassion towards the ones who have suffered and are suffering. I don't need to be personal friends with anyone, but ... if there's a way I can be of service, I am interested. 

Open and free invitation: If anyone involved in the Leverage stuff in some way wants someone to hold space for you as you process things, I am open to offer that, over Zoom, in a confidential manner. (I am not very involved in the community normally, as I am committed to being at the Monastic Academy in Vermont for a long while, and I don't engage in divisive / gossipy speech. It is wrong speech :P) Cat would probably vouch for me. But basically uhh, even if what you want to say would normally be totally crazy to most rationalists or even most Westerners, I have ventured so far outside the overton window that I doubt I'll be taken aback. If that helps. :P 

You can FB msg me or gmail me (unrealeel). 

comment by Spiracular · 2021-10-18T17:05:08.195Z · LW(p) · GW(p)

Since it's mostly just pointers to stuff I've already said/implied... I'll throw out a quick comment.

I would like it if somebody started something like a carefully-moderated private Facebook group, mostly of core people who were there, to come to grips with their experiences? I think this could be good.

I am slightly concerned that people who are still in the grips of "Leverage PR campaigning" tendencies will start trying to take it over or otherwise poison the well? (Edit: Or conversely, that people who still feel really hurt or confused about it might lash out more than I'd wish. I, personally, am more worried about the former.) I still think it might be good, overall.

Be sure to be clear EARLY about who you are inviting, and who you are excluding! It changes what people are willing to talk about.

...I am not personally the right person to do this, though.

(It is too easy to "other" me, if that makes sense.)


I feel like one of the only things the public LW thread could do here?

Is ensuring public awareness of some of the unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms, and showing a public ramp-down of opportunities to do so in the future.

Along with doing what we can, to signal that we generally stand against people over-simplistically demonizing the people and organizations involved in this.

Replies from: Unreal
comment by Unreal · 2021-10-18T18:58:01.848Z · LW(p) · GW(p)

... unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms ... 

Hmm. This seems worth highlighting. 

The NDAs (plus pressure to sign) point to this. 

... 

( The rest of this might be triggering to anyone who's been through gaslighting / culty experiences. Blunt descriptions of certain forms of control and subjugation. ) 

...

The rest of the truth-suppressive measures I can only speculate. Here's a list of possible speculative mechanisms that come to mind, some of which were corroborated by Zoe's report but not all:

  • Group hazing or activities that cause collective shame, making certain things hard to admit to oneself and others (plus, inserting a bucket error where 'shameful activity' is bucketed with 'the whole project' or something)
    • This could include implanting group delusions that are shameful to admit. 
  • Threats to one's physical person or loved ones for revealing things
  • Threats to one's reputation or ability to acquire resources for revealing things
  • Deprivation used to negatively / positively reinforce certain behaviors or stories ("well, if you keep talking like that, we're gonna have to take your phone / food / place to sleep") 
  • Gaslighting specific individuals or subgroups ("what you're experiencing is in your own head; look at other people, they are doing fine, stop being crazy / stop ruining the vibe / stop blocking the project")
    • A lot of things could fit into this category. 
  • Causing dissociation. (Thus disconnecting a person from their true yes/no or making it harder for them to discern truth from fiction.) This is very common among modern humans, though, and doesn't seem as evil-sounding as the other examples. Modern humans are already very dissociated afaict. 
    • It would become more evil if it was intentionally exploited or amplified.
    • Dissociation could be generalized or selective. Selective seems more problematic because it could be harder to detect. 
  • Pretending there is common knowledge or an obvious norm around what should be private / confidential, when there is not. (There is some of this going around rationalist spaces already.) "Don't talk about X behind their back, that's inappropriate." or "That's their private business, stay out of it." <-- Said in situations where it's not actually inappropriate or when claims of it being someone's 'private business' are overreaching.
  • Deliberately introducing and enforcing a norm of privacy or confidentiality that breaks certain normal and healthy social accountability structures. (Compassionate gossip is healthy in groups, especially those living in residential community. Rationalists seem not to get this, though, and tend to break Chesterton's fence on this, but I attribute that to hubris. It seems worse to me if these norms are introduced out of self-serving fear.)
  • Sexual harassment, molestation, or assault. (This tends to result in silencing pretty effectively.) 
  • Creating internal jockeying, using an artificial scarcity around status or other resources. A culture of oneupmanship. A culture of having to play 'loyal'. People getting way too sucked into this game and having their motives hijacked. They internally align themselves with the interests of certain leaders or the group, leading to secrecy being part of their internal motivation system.
  • This one is really speculative, but if I imagine buying into the story that Geoff is like, a superintelligence basically, and can somehow predict my own thoughts and moves before I can, then ... maybe I get paranoid about even having thoughts that go against (my projection of) his goals. 
    • Basically, if I thought someone could legit read my mind and they were not-compassionate or if I thought that they could strategically outmaneuver me at every turn due to their overwhelming advantage, that might cause some fucked up stuff in my head that stays in there for a while. 
    • If this resonates with you, I am very sorry. 

I welcome additions to this list. 

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-18T22:01:49.485Z · LW(p) · GW(p)
  • "You can't rely on your perspective / Everything is up for grabs." All of your mental content--ideas, concepts, motions, etc.--are potentially good (and should be leaned more heavily on, overriding others) / bad (and should be ignored / downvoted / routed around / destroyed / pushed against), and more openness to change is better, and there's no solid place from which you can stand and see things. Of course, this is in many ways true and useful; but leaning into this creates much more room for others to selectively up/downvote stuff in you to avoid you reaching conclusions they don't want you to reach; or more likely, up/downvote conclusions, and have you rearrange yourself to harmonize with those judgements.
  • Trolling Hope [LW · GW] placed in the project / leadership. Like: I care deeply that things go well in the world; the only way I concretely see that might happen, is through this project; so if this project is doomed, then there's no Hope; so I may as well bet everything on worlds where the project isn't doomed; so worlds where the project is doomed are irrelevant; so I don't see / consider / admit X if X implies that the project is doomed, since X is entirely about irrelevant worlds.
  • Emotional reward conditioning. (This one is simple or obvious, but I think it's probably actually a significant portion of many of these sorts of situations.) When you start to say information I don't like, I'm angry at you, annoyed, frustrated, dismissive, scornful, derisive, insulting, blank-faced, uninterested, condescending, disgusted, creeped out, pained, hurt, etc. When you start to hide information I don't like, or expound the opposite, I'm pleasant, endeared, happy, admiring, excited, etc. etc. Conditioning shades into + overlaps other tactics like stonewalling (blank-faced, aiming at learned helplessness), shaming, and running interference (changing the subject), but conditioning has a particular systematic effect of making you "walk on eggshells" about certain things and feeling relief / safety when you stick to appropriate narratives. And this systematic effect can be very strong and persist even when you're away from the people who put it there, if you didn't perfectly compartmentalize how-to-please-them from everything else in your mind.
comment by Eli Tyre (elityre) · 2021-10-19T00:31:02.465Z · LW(p) · GW(p)

Do you have a suggestion for another forum that you think would be better? 

In particular, do you have pointers to online forums that do incorporate the emotional and physical layers ("in a non-toxic way", he adds, thinking of Twitter)? Or do you think that the best way to do this is just not online at all?

Replies from: Unreal
comment by Unreal · 2021-10-19T02:25:33.671Z · LW(p) · GW(p)

CFAR's recent staff reunion seemed to do all right. It wasn't, like, optimized for safety or making sure everyone was heard equally or something like that, but such features could be added if desired. Having skilled third-party facilitators seemed good. 

Oh you said 'online'. Uhhh. 

Online fishbowl Double Cruxes would get us like ... 30% of the way there maybe? Private / invite only ones? 

One could run an online Group Process like thing too. Invite a group of people into a Zoom call, and facilitate certain breakout sessions? Ideally with facilitation in each breakout group? 

I am not thinking very hard about it. 

We need a lot of skill points in the community to make such things go well. I'm not sure how many skill points we're at. 

comment by Spiracular · 2021-10-18T17:24:08.268Z · LW(p) · GW(p)

Meta: I think it makes some good points. I do not think it was THAT bad, and I think the discussion was good. I would keep it up, but it's your call. Possibly adding an "Edit: (further complicated thoughts)" at the top? (Respect for thinking about it, though.)

comment by Spiracular · 2021-10-18T14:39:23.430Z · LW(p) · GW(p)

I see what you're doing? And I really appreciate that you are doing it.

...but simultaneously? You are definitely making me feel less safe to talk about my personal shit.

(My position on this is, and has always been: "I got a scar from Leverage 1.0. I am at least somewhat triggered; on both that level, and by echoes from a past experience. I am scared that me talking about my stuff, rather than doing my best to make and hold space, will scare more centrally-affected people off. And I know that some of those people, had an even WORSE experience than I did. In what was, frankly, a surreal and really awful experience for me.")

comment by weft · 2021-10-18T17:17:24.651Z · LW(p) · GW(p)

Multiple times on this thread I've seen you make the point about figuring out what responsibility should fall on Geoff, and what should be attributed to his underlings.

I just want to point out that it is a pattern for powerful bad actors to be VERY GOOD at never explicitly giving a command for a bad thing to happen, while still managing to get all their followers on board and doing the bad thing that they only hinted at/ set up incentive structures for, etc.

Replies from: Unreal, ChristianKl
comment by Unreal · 2021-10-18T17:38:46.276Z · LW(p) · GW(p)

I wanted to immediately agree. Now I'm pausing...

It seems good to try to distinguish between:

  • Well-meaning but flawed leader sets up a system or culture that has blatant holes that allow abuse to happen. This was unintentional but they were careless or blind or ignorant, and this resulted in harm. (In this case, the leader should be held accountable, but there's decent hope for correction.) 
    • Of course, some of the 'flawed' thing might be shadow stuff, in which case it might be slippery and difficult to see, and the leader may have various coping mechanisms that make accountability difficult. I think this is often the case with leaders, and as far as I can tell, most leaders have shadow stuff, and it negatively impacts their groups, to varying degrees. (I'm worried about Geoff in this case because I think intelligence + competence + shadow stuff is a lot more difficult. The more intelligent and powerful you are, the longer you can keep outmaneuvering attempts to get you to see your own shadow; I've seen this kind of thing, it's bad.) 
  • The leader is not well-meaning and is deliberately exploitative in an intentional way. They created a system that was designed to exploit people systematically, and they lack care in their body or soul for the beings they hurt. They internally applaud when they come up with clever systems that avoid accountability or responsibility while gaining personal benefit. They hope they can keep this up forever. They have a deep-seated fear of failure, and they will do whatever it takes to avoid failure. (This feels more like Jeffrey Epstein.) 
    • You could try to argue that this is also 'shadow stuff', but I think the intention matters. If the leader's goal and desire was to create healthy and wholesome community and failed, this is different from the goal and plan being to exploit people. 

But anyway, point is: I am wanting discernment on this level of detail. For the sake of knowing best interventions and moves. 

I am not interested in putting blame on particular individuals. I am not interested in the epistemic question of who's more or less responsible. I am interested in group dynamics without the question of who's more or less responsible. 

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-18T22:28:42.374Z · LW(p) · GW(p)

I'm not sure about this, and I don't think you were trying to say this, but, I doubt that the two categories you gave usefully cover the space, even at this level of abstraction. Someone could be "well-meaning" in the sense of all their explicit, and even all their conscious, motives being compassionate, life-oriented, etc., while still systematically agentically cybernetically motivatedly causing and amplifying harm. I think you were getting at this in the sub-bullet-point, but the sort of person I'm describing would both meet the description "well-meaning; unintentional harm" and also this from your second bullet-point:

They created a system that was designed to exploit people systematically, and they lack care in their body or soul for the beings they hurt. They internally applaud when they come up with clever systems that avoid accountability or responsibility while gaining personal benefit. They hope they can keep this up forever. They have a deep-seated fear of failure, and they will do whatever it takes to avoid failure.

Maybe I'm just saying, I don't know what you (or I, or anyone) mean by "well-meaning": I don't know what it is to be well-meaning, and I don't know how we would know, and I don't know what predictions to make if someone is well-meaning or not. (I'm not saying it's not a thing, it's very clearly a thing; it's just that I want to develop our concepts more, because at least my concepts are pushed past the breaking point in abusive situations.) For example, someone might both (1) have never once consciously explicitly worked out any strategy or design to make it easier to harm people, and (2) across contexts, take actions that reliably develop/assemble a social field where people are being systematically harmed, and not update on information about how to not do that.

Maybe it would help to distinguish "categories of essence" from "categories of treatment". Like, if someone is so drowning in their shadow that they reliably, proactively, systematically harm people, then a category of essence question is like, "in principle is there information that could update them to stop doing this", and a category of treatment is like, "regardless of what they really are, we are going to treat them exactly like we'd treat a conscious, malevolent, deliberate exploiter".

Replies from: Unreal
comment by Unreal · 2021-10-19T02:44:30.339Z · LW(p) · GW(p)

I appreciate the added discernment here. This is definitely the kind of conversation I'd like to be having. ! 

someone might both (1) have never once consciously explicitly worked out any strategy or design to make it easier to harm people, and (2) across contexts, take actions that reliably develop/assemble a social field where people are being systematically harmed, and not update on information about how to not do that.

Agree. I was including that in 'shadow stuff'. 

The main difference between well-meaning and not, I think for me, is that the well-meaning person is willing to start engaging in conversations or experimenting with new systems in order to help the problems be less. Even though it's in their shadow and they cannot see it and it might take a lot to convince them, after some time period (which could be years!), they are game enough to start making changes, trying to see it, etc. 

I believe Anna S is an example of such a well-meaning person, but also I think it took her a pretty long time to come to grips with the patterns? I think she's still in the process of discerning it? But this seems normal. Normal human level thing. Not sociopathic Epstein thing. 

More controversially perhaps, I think Brent Dill has the potential to see and eat his shadow (cuz I think he actually cares about people and I've seen his compassion), but as you put it, he is "so drowning in his shadow that he reliably, systematically harms people." And I actually think it's the compassionate thing to do to prevent him from harming more people. 

So where does Geoff fall here? I am still in that inquiry. 

comment by ChristianKl · 2021-10-20T07:42:43.371Z · LW(p) · GW(p)

While this is true, cult environments by their nature often allow other bad actors besides the leader to rise into positions of power within them.

I think the Osho community is a good example. Given that Osho himself was open about his community having run the biggest bioterror attack on the US, which otherwise likely wouldn't have been discovered, it doesn't seem to me that he was the person most responsible for it; rather, his right hand at the time was. 

As far as cult dynamics go, it's not only the leader getting his followers to do things; various followers also act in ways that treat the leader as a guru, whether or not the leader wants that to happen, which in turn often affects the mindset and actions of the leader.

At the moment it's unclear to me, for example, to what extent CEA shares part of the responsibility for enabling Leverage. 

comment by Spiracular · 2021-10-18T15:16:39.558Z · LW(p) · GW(p)

My current sense? Is that both Unreal and I are basically doing a mix of "take an advocate role" and "using this as an opportunity to get some of what the community got wrong last time -with our own trauma- right." But for different roles, and for different traumas.

It seemed worth being explicit and calling this out. (I don't necessarily think this is bad? I also think both of us seem to have done a LOT of "processing our own shit" already, which helps.)

But doing this is... exhausting for me, all the same. I also, personally, feel like I've taken up too much space for a bit. It's starting to wear on me in ways I don't endorse.

I'm going to take a step back from this for a week, and get myself to focus on living the rest of my life. After a week, I will circle back. In fact, I COMMIT to circling back.


And honestly? I have told several people about the exact nature of my Leverage trauma. I will tell at least several more people about it, before all of this is over.

It's not going to vanish. I've already ensured that it can't. I can't quite commit to "going full public," because that might be the wrong move? But I will not rest on this until I have done something broadly equivalent.

I am a little bit scared of some sort of attempts to undermine me emerging as a consequence, because there's a trend in even the casual reports that leans in this direction? But if it happens, I will go public about THAT fact.

I am a lot less scared of the repercussions than almost anyone else would be. So, fuck it.

(But also? My experience doesn't necessarily rule out "most of the bad that happened here was a total lack of guard-rails + culty death-spirals." It would take some truly awful negligence to have that few guard-rails, and I would not want that person running a company again? But still, just fyi. Yeah, I know, I know, it undercuts the drama of my last statement.)


But if anyone wonders why I vanished? I'm taking a break. That is what I'm doing.

comment by konstell (parsley) · 2021-10-13T19:05:03.391Z · LW(p) · GW(p)

Epistemic status: I have not been involved with Leverage Research in any way, and have no knowledge of what actually happened beyond what's been discussed on LessWrong. This comment is an observation I have after reading the post.

I had just finished reading Pete Walker's Complex PTSD before coming across this post. In the book, the author describes a list of calm, grounded thoughts to respond to inner critic attacks. A large part of healing is for the survivor to internalize these thoughts so they can psychologically defend themselves.

I see a stark contrast between what the CPTSD book tries to instill and the ideas Leverage Research tried to instill, per Zoe's account. It's as if some of the programs at Leverage Research were trying to unravel almost all of one's sense of self.

A few examples:

Perfectionism

From the CPTSD book:

I do not have to be perfect to be safe or loved in the present. I am letting go of relationships that require perfection. I have a right to make mistakes. Mistakes do not make me a mistake.

From the post:

We might attain his level of self-efficacy, theoretical & logical precision, and strategic skill only once we were sufficiently transformed via the use of our debugging techniques. The overarching objective was to discover and “update” deep irrationalities and eventually become a sort of Musk-level super-person (“attain Mastery” of a world-saving-relevant domain).

All-or-None & Black-and-White Thinking

From the CPTSD book:

I reject extreme or overgeneralized descriptions, judgments or criticisms [...] Statements that describe me as “always” or “never” this or that, are typically grossly inaccurate.

From the post:

Another supervisor spoke wonderingly about Geoff’s presence in our lives, “It’s hard to make sense of the fact that this guy exists at all, and then on top of it, for some reason our lives have intersected with his, at this moment in history. It’s almost impossible to believe that we are the only people who have ever lived with access to the one actual theory of psychology.”

Micromanagement/Worrying/Obsessing/Looping/Over-Futurizing

From the CPTSD book:

I will not repetitively examine details over and over. I will not jump to negative conclusions. I will not endlessly second-guess myself. I cannot change the past. I forgive all my past mistakes. I cannot make the future perfectly safe. I will stop hunting for what could go wrong. I will not try to control the uncontrollable. I will not micromanage myself or others. I work in a way that is “good enough”, and I accept the existential fact that my efforts sometimes bring desired results and sometimes they do not.

From the post:

I sat in many meetings in which my progress as a “self-debugger” was analyzed or diagnosed, pictographs of my mental structure put on a whiteboard. What were my bottlenecks? Was I just not trying hard enough, did I need to be pushed out of the nest? Did I just need support in fixing that one psych issue? Or were there ten, and this wasn’t going to work out? If I was introspectively blocked for too long, I’d have to come up with something else I could do for the project, like operations or sociology, and quick. And if I wasn’t any good at those, I was out. I couldn’t be out, so I doubled down on trying to mold my mind in the “right” direction.”

Unfair/Devaluing Comparisons

From the CPTSD book:

I refuse to compare myself unfavorably to others. I will not compare “my insides to their outsides”. I will not judge myself for not being at peak performance all the time. In a society that pressure us into acting happy all the time, I will not get down on myself for feeling bad.

From the post:

No one else, with one exception being the head of sociology, was considered to have any theories that could possibly be good enough to significantly further the plan like Geoff’s could.

Overproductivity/Workaholism/Busyholism

From the CPTSD book:

I am a human being not a human doing. I will not choose to be perpetually productive. I am more productive in the long run, when I balance work with play and relaxation. I will not try to perform at 100% all the time. I subscribe to the normalcy of vacillating along a continuum of efficiency.

From the post:

I was regularly left with the feeling that I was low status, uncommitted, and kind of useless for wanting to socialize on the weekends or in the evenings. Geoff was known to sleep only about 5–6 hours. Multiple people in leadership had themselves booked from 7am until past midnight every single day of the week.

comment by AnnaSalamon · 2021-10-13T03:25:15.689Z · LW(p) · GW(p)

More thoughts:

I really care about the conversation that’s likely to ensue here, like probably a lot of people do.

I want to speak a bit to what I hope happens, and to what I hope doesn’t happen, in that conversation. Because I think it’s gonna be a tricky one.

What I hope happens:

  • Curiosity
  • Caring,
  • Compassion,
  • Interest in understanding both the specifics of what happened at Leverage, and any general principles it might hint at about human dynamics, or human dynamics in particular kinds of groups.

What I hope doesn’t happen:

  • Distancing from uncomfortable data.
  • Using blame and politics to distance from uncomfortable data.
  • Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.

This is LessWrong; let’s show the world how curiosity/compassion/inquiry is done!

Replies from: Ruby, rohinmshah, Spiracular, farp
comment by Ruby · 2021-10-13T04:30:37.375Z · LW(p) · GW(p)

Thanks, Anna!

As a LessWrong mod, I've been sitting and thinking about how to make the conversation go well for days now and have been stuck on what exactly to say.  This intention setting is a good start.

I think to your list I would add judging each argument and piece of data on its merits, i.e., updating on evidence even if it pushes against the position we currently hold.

Phrased alternatively, I'm hoping we don't treat arguments as soldiers: accepting bad arguments because they favor our preferred conclusion, rejecting good arguments because they don't support our preferred conclusion. I think there's a risk in cases like this of knowing which side you're on and then accepting and rejecting all evidence accordingly.

comment by Rohin Shah (rohinmshah) · 2021-10-13T22:53:15.288Z · LW(p) · GW(p)

Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.

Are you somehow guaranteeing or confidently predicting that others will not take them in a politicized way, use them as an excuse for false judgments, or otherwise cause harm to those sharing the true relevant facts? If not, why are you asking people not to refrain from sharing such facts?

(My impression is that it is sheer optimism, bordering on wishful thinking, to expect such a thing, that those who have such a fear are correct to have such a fear, and so I am confused that you are requesting it anyway.)

Replies from: AnnaSalamon, clone of saturn, TekhneMakre
comment by AnnaSalamon · 2021-10-14T03:16:37.131Z · LW(p) · GW(p)

Thanks for the clarifying question, and the push-back. To elaborate my own take: I (like you) predict that some (maybe many) will take shared facts in a politicized way, will use them as an excuse for false or uncareful judgments, etc. I am not guaranteeing, nor predicting, that this won’t occur.

I am intending, myself, to do inference and conversation in a way that tries to avoid these "politicized speech" patterns, even if it turns out politically costly or socially awkward for me to do so. I am intending to make some attempt (not an infinite amount of effort, but some real effort, at some real cost if needed) to try to make it easier for others to do this too, and/or to ask it of others who I think may be amenable to being asked this, and/or to help coalesce a conversation in what I take to be a better pattern if I can figure out how to do so. I also predict, independently of my own efforts, that a nontrivial number of others will be trying this.

If “reputation management” is a person’s main goal, then the small- to medium-sized efforts I can hope to contribute toward a better conversation, plus the efforts I’m confident in predicting independently of mine, would be insufficient to mean that a person’s goals would be well-served in the short run by following my request to avoid “refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.”

However, I’m pretty sure most people in this ecosystem, maybe everyone, would deep down like to figure out how to actually see what kinds of fucked up individual and group and inter-group dynamics we’ve variously gotten into, and why and how, so that we can have a realer shot at things going forward. And I'm pretty sure we want this (in the deeper, long-run sense) much more than we want short-run reputation management. Separately, I suspect most peoples’ reputation-management will be kinda fucked in the long run if we don’t figure out how to get enough right to do actual progress on the world (vs creating local illusions of the same), although this last sentence is more controversial and less obvious. So, yeah, I’m asking people to try to engage in real conversation with me and others even though it’ll probably mess up parts of their/our reputation in the short run, and even though probably many won't manage to joint this in the short run. And I suspect this effort will be good for many peoples’ deeper goals despite the political dynamics you mention.

Here’s to trying.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2021-10-14T08:00:48.519Z · LW(p) · GW(p)

It sounds like you are predicting that the people who are sharing true relevant facts have values such that the long-term benefits to the group overall outweigh the short-term costs to them. In particular, it's a prediction about their values (alongside a prediction of what the short-term and long-term effects are).

I'll just note that, on my view of the short-term and long-term effects, it seems pretty unclear whether by my values I should share additional true relevant information, and I lean towards it being negative. Of course, you have way more private info than me, so perhaps you just know a lot more about the short-term and long-term effects.

I'm also not a fan of requests that presume that the listener is altruistic, and willing to accept personal harm for group benefit. I'm not sure if that's what you meant -- maybe you think in the long term sharing of additional facts would help them personally, not just help the group.

Fwiw I don't have any particularly relevant facts. I once tagged along with a friend to a party that I later (i.e. during or after the party) found out was at Leverage. I've occasionally talked briefly with people who worked at Leverage / Paradigm / Reserve. I might have once stopped by a poster they had at EA Global. I don't think there have been other direct interactions with them.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-14T09:00:21.678Z · LW(p) · GW(p)

I'm also not a fan of requests that presume that the listener ...

From my POV, requests, and statements of what I hope for, aren't advice. I think they don't presume that the listener will want to do them or will be advantaged by them, or anything much else about the listener except that it's okay to communicate my request/hope to them. My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people, and I guess I'm also assuming that my words won't be taken as the trustworthy voice of an authority that knows where the person's own interests lie, or something. That I can be just some person, hoping and talking and expecting to be evaluated by equals.

Is it that you think these assumptions of mine are importantly false, such that I should try to follow some other communication norm, where I try more to only advocate for things that will turn out to be in the other party's interests, or to carefully disclaim if I'm not sure what'll be in their interests? That sounds tricky; I'm not people's parents and they shouldn't trust me to do that, and I'm afraid that if I try to talk that way I'll make it even more confusing for anyone who starts out confused like that.

I think I'm missing part of where you're coming from in terms of what good norms are around requests, or else I disagree about those norms.

you have way more private info than me, so perhaps...

I don't have that much relevant-info-that-hasn't-been-shared, and am mostly not trying to rely on it in whatever arguments I'm making here. Trying to converse more transparently, rather.

Replies from: rohinmshah, TekhneMakre
comment by Rohin Shah (rohinmshah) · 2021-10-14T11:47:48.202Z · LW(p) · GW(p)

I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people,

I feel like this assumption seems false. I do predict that (at least in the world where we didn't have this discussion) your statement would create a social expectation for the people to report true, relevant facts, and that this social expectation would in fact move people in the direction of reporting true, relevant facts.

I immediately made the inference myself on reading your comment. There was no choice in the matter, no execution of a deliberate strategy on my part, just an inference that Anna wants people to give the facts, and doesn't think that fear of reprisal is particularly important to care about. Well, probably; it's hard to remember exactly what I thought, but I think it was something like this. I then thought about why this might be, and how I might have misunderstood. In hindsight, the explanation you gave above (that is the sort of thing that people who speak literally would do) should have occurred to me, but it did not.

I think there are lots of LWers who, like me, make these sorts of inferences automatically. (And I note that these kinds of inferences are excellent for believing true things about people outside of LW.) I think this is especially true of people in the same reference class as Zoe, and that such people will feel particularly pressured by it. (There are a sadly large number of people in this community who have a lot of self-doubt / not much self-esteem, and are especially likely to take other people's opinions seriously, and as a reason for them to change their behavior even if they don't understand why.) This applies to both facts that are politically-pro-Leverage and facts that are politically-anti-Leverage.

So overall, yes, I think your words would lead people to infer that it would be better for them to report true relevant facts and that any fear they have is somehow misplaced, and to be pressured by that inference into actually doing so, i.e. it coerces them.

I don't have a candidate alternative norm. (I generally don't think very much about norms, and if I made one up now I'm sure it would be bad.) But if I wanted to convey something similar in this particular situation, I would have said something like "I would love to know additional true relevant facts, but I recognize there is a risk that others will take them in a politicized way, or will use them as an excuse to falsely judge you, so please only do this if you think the benefits are worth it".

(Possibly this is missing something you wanted to convey, e.g. you wish that the community were such that people didn't have to fear political judgment?)

(I also agree with TekhneMakre's response about authority.)

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2021-10-15T03:05:09.728Z · LW(p) · GW(p)

Those making requests for others to come forward with facts in the interest of a long(er)-term common good could find norms that serve as assurance or insurance that someone will be protected against potential retaliation against their reputation. I can't claim to know much about setting up effective norms for defending whistleblowers, though.

comment by TekhneMakre · 2021-10-14T09:57:53.176Z · LW(p) · GW(p)
My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people, and I guess I'm also assuming that my words won't be assumed to be a trustworthy voice of authorities that know where the person's own interests are, or something.
these assumptions of mine are importantly false

If someone takes you as an authority, then they're likely to take your wishes as commands. Imagine a CEO saying to her employees, "What I hope happens: ... What I hope doesn't happen: ...", and the (vocative/imperative mood) "Let's show the world...". That's only your responsibility insofar as you're somehow collaborating with them to have them take you as an authority; but it could happen without your collaboration.

such that I should try to follow some other communication norm

IMO no, but you could, say, ask LW to make a "comment signature" feature, and then have every comment you make link, in small font, to the comment you just made.

comment by clone of saturn · 2021-10-14T18:17:45.734Z · LW(p) · GW(p)

I read Anna's request as an attempt to create a self-fulfilling prophecy. It's much easier to bully a few individuals than a large crowd.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-14T18:57:38.640Z · LW(p) · GW(p)

Yeah, I also read Anna as trying to create/strengthen local norms to the effect of 'whistleblowers, truth-tellers, and people-saying-the-emperor-has-no-clothes are good community members and to-be-rewarded/protected'. That doesn't make reprisals impossible, but I appreciated the push (as I interpreted it).

I also interpreted Anna as leading by example to some degree -- a lot of orgs wouldn't have their president join a public conversation like this, given the reputational risks. If I felt like Anna was taking on zero risk but was asking others to take on lots of risk, I might have felt differently.

Saying this publicly also (in my mind) creates some accountability for Anna to follow through. Community leaders who advocate value X and then go back on their word are in much hotter water than ones who quietly watch bad things happen.

E.g., suppose this were happening on the EA Forum. People might assume by default that CEA or whoever is opposed to candor about this topic, because they're worried hashing things out in public could damage the EA-brand (or whatever). This creates a default pressure against open and honest truth-seeking. Jumping in to say 'no, actually, having this conversation here is good, and it seems good to try to make it as real as we can' can relieve a lot of that perceived pressure, even if it's not a complete solution. I perceive Anna as trying to push in that direction on a bunch of recent threads (e.g., here [LW(p) · GW(p)]).

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-14T19:07:59.141Z · LW(p) · GW(p)

I'm not sure what I think of Rohin's interpretation. My initial gut feeling is that it's asking for too much ownership of micro-level social effects, or asking community leaders to baby the community too much, or to spend too much time carefully editing their comments to address all possible errors (with the inevitable result that community leaders say very little and the things they say are more dead and safe).

It's not that I particularly object to the proposed rephrasings, more just that I have a gut-level sense that this is in a reference class of a thousand other similarly-small ways community leaders can accidentally slightly nudge folks in the wrong direction. In this particular case, I'd rather expect a little more from the community, rather than put this specific onus on Anna.

I agree there's an empirical question of how socially risky it actually is to e.g. share negative stuff about Leverage in this thread. I'm all in favor of a thread to try to evaluate that question (which could also switch to PMs as needed if some people don't feel safe participating), and I see the argument for trying to do that first, since resolving that could make it easier to discuss everything else. I just think people here are smart and independent enough to not be 'coerced' by Anna if she doesn't open the conversation with a bunch of 'you might suffer reprisals' warnings (which does have a bit of a self-fulfilling-prophecy ring to it, though I think there are skillful ways to pull it off).

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2021-10-14T23:41:58.367Z · LW(p) · GW(p)

You're reading too much into my response. I didn't claim that Anna should have this extra onus. I made an incorrect inference, was confused, asked for clarification, was still confused by the first response (honestly I'm still confused by that response), understood after the second response, and then explained what I would have said if I were in her place when she asked about norms.

(Yes, I do in fact think that the specific thing said had negative consequences. Yes, this belief shows in my comments. But I didn't say that Anna was wrong/bad for saying the specific thing, nor did I say that she "should" have done something else. Assuming for the moment that the specific statement did have negative consequences, what should I have done instead?)

(On the actual question, I mostly agree that we probably have too many demands on public communication, such that much less public communication happens than would be good.)

I just think people here are smart and independent enough to not be 'coerced' by Anna if she doesn't open the conversation with a bunch of 'you might suffer reprisals' warnings

I also would have been fine with "I hope people share additional true, relevant facts". The specific phrasing seemed bad because it seemed to me to imply that the fear of reprisal was wrong. See also here [LW(p) · GW(p)].

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-15T00:03:22.706Z · LW(p) · GW(p)

OK, thanks for the correction! :]

comment by TekhneMakre · 2021-10-13T23:15:05.255Z · LW(p) · GW(p)

Of course there's also the possibility that it's worth it. E.g. because people could then notice who is doing a rush-to-judgement thing or confirmation-bias-y thing. (This even holds if there's a threat of personal harm to fact-sharers, though personal harm looks like something you added to the part you quoted.)

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2021-10-14T07:44:12.729Z · LW(p) · GW(p)

I agree that's possible, but then I'd say something like "I would love to know additional true relevant facts, but I recognize there are real risks to this and only recommend people do this if they think the benefits are worth it".

Analogy: it could be worth it for an employee to publicly talk about the flaws of their company / manager (e.g. because then others know not to look for jobs at that company), even though it might get them fired. In such a situation I would say something like "It would be particularly helpful to know about the flaws of company X, but I recognize there are substantial risks involved and only recommend people do this if they feel up to it". I would not say "I hope people don't refrain from speaking up about the flaws of company X out of fear that they might be fired", unless I had good reason to believe they wouldn't be fired, or good reason to believe that it would be worth it on their values (though in that case presumably they'd speak up anyway).

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-14T10:14:08.630Z · LW(p) · GW(p)

Thanks. I'm actually still not sure what you're saying.

Hypothesis 1: you're saying, stating "I hope person A does X" implies a non-dependence on person A's information, which implies the speaker has a lot of hidden evidence (enough to make their hope unlikely to change given A's evidence). And, people might infer that there's this hidden evidence, and update on it, which might be a mistake.

Hypothesis 2: you're pointing at something about how "do X, even if you have fear" is subtly coercive / gaslighty, in the sense of trying to insert an external judgement to override someone's emotion / intuition / instinct. E.g. "out of fear" might subtly frame an aversion as a "mere emotion".

(Maybe these are the same...)

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2021-10-14T11:17:24.966Z · LW(p) · GW(p)

Hypothesis 2 feels truer than hypothesis 1.

(Just to state the obvious: it is clearly not as bad as the words "coercion" and "gaslighting" would usually imply. I am endorsing the mechanism, not the magnitude-of-badness.)

I agree that hypothesis 1 could be an underlying generator of why the effect in hypothesis 2 exists.

I think I am more confident in the prediction that these sorts of statements do influence people in ways-I-don't-endorse, than in any specific mechanism by which that happens.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-14T17:17:15.295Z · LW(p) · GW(p)

Okay.

comment by Spiracular · 2021-10-15T17:48:09.719Z · LW(p) · GW(p)

I hesitated a bit before saying this? I thought it might add a little bit of clarity, so I figured I'd bring it up.

(Sorry it got long; I'm still not sure what to cut.)

There are definitely some needs-conflicts. Between (often distant) people who, in the face of this, feel the need to cling to the strong reassurance that "this could not possibly happen to them"/"they will definitely be protected from this," and would feel reassured at seeing Strong Condemning Action as soon as possible...

...and "the people who had this happen." Who might be best-served, if they absorbed that there is always some risk of this sort of shit happening to people. For them, it would probably be best if they felt their truth was genuinely heard, and took away some actionable lessons about what to avoid, without updating their personal identity to "victim" TOO much. And in the future, embraced connections that made them more robust against attaching to this sort of thing in the future.

("Victim" is just not a healthy personal identity in the long-term, for most people.)


Sometimes, these needs are so different that it warrants having different forums of discussion. But there is some overlap in these needs (working out what happened, improving reporting, protecting people from cultish reprisals), and I'm not sure that separation is always necessary.

My read of the direction Anna seems to be trying to steer this is "do everything she can to clearly hear out people's stories carefully First." Only later, after people have really really listened, use that to formulate carefully considered harm-reducing actions.

Understanding the issue, in all its complexity, before working on coming up with solutions? I feel pretty on-board with that.


...I admit, I initially chafed a bit? I have some memories of times Anna has leaned a bit more into the former group's needs. Some of her attempts to aim differently this time have felt a little awkward.

I did also get an "ordering other people to ignore politics and be vulnerable" vibe off this, which put my armor up to around my ears. (Something with more of a feel of... "showing one's own vulnerability to elicit others' vulnerability" would have generally felt more natural to me? I think her later responses cycled to this, a little).

...but I'm starting to think that even the awkwardness is its own sort of evidence? Of someone who is used to wielding frame control, trying to put it aside to listen. And I feel a lot of affection in seeing signs that she's working on this.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-15T18:05:25.820Z · LW(p) · GW(p)

There's also the need to learn from what happened, so that when designing organizations in the future the same mistakes aren't repeated. 

comment by farp · 2021-10-13T22:17:23.425Z · LW(p) · GW(p)

I would like it if we showed the world how accountability is done, and given your position, I find it disturbing that you have omitted this objective. That is, if I wanted to deflect the conversation away from accountability, I think I would write a post similar to yours. 

Replies from: AnnaSalamon, deluks917
comment by AnnaSalamon · 2021-10-14T03:34:34.832Z · LW(p) · GW(p)

I would like it if we showed the world how accountability is done

So would I. But to do accountability (as distinguished from scapegoating, less-epistemic blame), we need to know what happened, and we need to accurately trust each other (or at least most of each other) to be able to figure out what happened, and to care what actually happened.

The “figure out what happened” and “get in a position where we can have a non-fucked conversation” steps come first, IMO.

I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.

Though, accountability is admittedly a weak point of mine, so I might be missing/omitting something. Maybe spell it out if so?

Replies from: farp
comment by farp · 2021-10-15T02:21:39.125Z · LW(p) · GW(p)

I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.

To clarify: goal divergence between whom? Geoff and Zoe? Zoe and me? Me and you?

comment by sapphire (deluks917) · 2021-10-13T22:19:26.344Z · LW(p) · GW(p)

This reaction has been predictable for years IMO. As usual, a reasonable response required people to go public. There is no internal accountability process. Luckily things have been made public.

comment by Viliam · 2021-10-17T13:25:35.877Z · LW(p) · GW(p)

Some thoughts related to this topic:

*

For someone familiar with Scientology, the similarities are quite funny. There is a unique genius who develops a new theory of human mind called [Dianetics | Connection Theory]. For people familiar with psychology, it's mostly a restatement of some dubious existing theories, with huge simplifications and little [LW(p) · GW(p)] evidence [LW · GW]. But many people have their minds blown.

The genius starts a group with the goal of providing almost-superpowers such as [perfect memory | becoming another Elon Musk] to his followers, with the ultimate goal of saving the planet. The followers believe this is the only organization capable of achieving such a goal. They must regularly submit to having their thoughts checked at [auditing | debugging], where their sincerity is verified using [e-meter | Belief Reporting]. When the leader runs out of useful or semi-useful ideas to teach, there is always the unending task of exorcising the [body thetans | demons].

The former members are afraid of consequences if they speak about their experience in the organization.

*

Some people expressed epistemic frustration about a situation that seems important to understand correctly, but where information is scarce. Please note that from one party's perspective, this is a feature, not a bug! The whole situation was intentionally designed to be difficult to figure out.

When you are provided filtered evidence [? · GW], it makes sense to assume that your ignorance plays in favor of the party who censors the information. That means that the actual reality is worse than you assume based on the information you already have. (Maybe even worse than when you take this into account, because that party still has an opportunity to stop the information embargo if public suspicion goes too far... and yet they chose not to.)

When reading Geoff's comment [LW(p) · GW(p)], please also notice the part that is missing: revoking the NDA, or promising not to take legal or other action against Zoe (or anyone else who talks about their experience at Leverage).

Also, it's more of an excuse than an apology. "It’s terrible that you felt like [...]. I totally did not expect this". Says the guy whose alleged superpower is modelling other people's minds, about the one who regularly had to submit to having her thoughts inspected by a supervisor. (Also, notice that the terrible thing is how she felt, not what she was subjected to. It's kinda her fault for being so irrationally sensitive, am I right? /s)

comment by AnnaSalamon · 2021-10-20T02:49:12.580Z · LW(p) · GW(p)

I wish there were more facts about Leverage out in actual common knowledge.

One thing I’d find really helpful, and that I suspect might be helpful broadly for untangling what happened and making parts of it obvious / common knowledge, is if I/someone/a group could assemble a Leverage timeline that included:

  • Who worked there in different years. When they came and left.
  • Who was dating whom in different years, in cases where both parties worked at Leverage and at least one was within leadership.
  • Funding cycles: when funding from different sources was applied for and/or received; what the within-Leverage narrative was for what was needed to get the funding.
  • Maybe anything else broad and simple/factual/obvious about that time period.

If anyone wants to give me any of this info, either anonymously or with your name attached, I’d be very glad to help assemble this into a timeline. I’m also at least as enthusiastic about anyone else doing this, and would be glad to pay a small amount for someone’s time if that would help. Maybe it could also be cobbled together in common here, if anyone is willing to contribute some of these basic facts.

Is anyone up for collaborating toward this in some form? I’m hoping it might be easier than some kinds of sorting-through, and like it might make some of the harder stuff easier once done.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-21T03:01:37.861Z · LW(p) · GW(p)

I would be happy to contribute my part of this, with the memory I have. I think I could cover a decent number of the questions above, though I would also likely get some things wrong, so I wouldn't be a totally dependable observer.

Replies from: elityre
comment by Eli Tyre (elityre) · 2021-10-21T06:06:59.877Z · LW(p) · GW(p)

Same for me.

comment by Geoff_Anders · 2021-10-17T11:21:31.301Z · LW(p) · GW(p)

Zoe - I don’t know if you want me to respond to you or not, and there’s a lot I need to consider, but I do want to say that I’m so, so sorry. However this turns out for me or Leverage, I think it was good that you wrote this essay and spoke out about your experience.

It’s going to take me a while to figure out everything that went wrong, and what I did wrong, because clearly something really bad happened to you, and it is in some way my fault. In terms of what went wrong on the project, one throughline I can see was arrogance, especially my arrogance, which affected so many of the choices we made. We dismissed a lot of the actually useful advice and tools and methods from more typical sources, and it seems that blocking out society made room for extreme and harmful narratives that should have been tempered by a lot more reality. It’s terrible that you felt like your funding, or ability to rest, or take time off, or choose how to interact with your own mind were compromised by Leverage’s narratives, including my own. I totally did not expect this, or the negative effects you experienced after leaving, though maybe I would have, had I not narrowed my attention and basically gotten way too stuck in theoryland.

I agree with you that we shouldn’t skip steps. I’ve updated accordingly. Again I’m truly sorry. I really wanted your experience on the project to be good.

Replies from: Spiracular, Spiracular, Spiracular, homosexuallover22poopoo
comment by Spiracular · 2021-10-17T15:51:57.597Z · LW(p) · GW(p)

Edit: I got a request to cut the chaff and boil this down to discrete actionables. Let me do that.

  1. Will you release everyone from any NDAs?

  2. Will you step down from any management roles (e.g. Leverage and Paradigm)?

  3. Will you state, for the record, that you commit to not threaten* anyone who comes forward with reports that you do not like, in the course of this process?

I get the sense that you have made people afraid to stand against you, historically. Engaging in any further threats seems likely to impede all of our ability to make sense of, and come to terms with, whatever happened. It could also be quite incriminating on its own.

* For full points, commit to also not make any strong stealthy attempts to socially discredit people.

Replies from: Unreal, Spiracular
comment by Unreal · 2021-10-17T20:22:44.054Z · LW(p) · GW(p)

There's good ways to do this kind of thing and bad ways. I feel that this is a bad way? Unless I'm missing a lot of context about what's happening here. 

Other ways to go about this:

  • Hire a third-party mediator to connect aggrieved parties with Geoff
  • Have a mutual trusted friend mediate conversations between aggrieved parties and Geoff
  • Geoff and ex-Leverage staff do a postmortem of some kind
  • Leverage creates an accountability system through which it collects data and feedback

I want to suggest that Geoff doesn't need to respond to Spiracular's requests because they contain a lot of assumptions, in the same way the question "Where were you on the night of the murder" contains a lot of assumptions. And this is a bad way to go about justice. Unless, again, I'm missing a bunch of context. 

Replies from: Spiracular, Spiracular
comment by Spiracular · 2021-10-17T21:03:19.172Z · LW(p) · GW(p)

For whatever it's worth, I think "No" is a pretty acceptable answer to some of these.


"No, for reasons X, Y, Z" is a pretty ordinary answer to the NDA concern. I'd still like to see that response.

"Leverage 2.0 was deliberately structured to avoid a lot of the drawbacks of Leverage 1.0" is something I actually think is TRUE. The fact that Leverage 1.0 was sun-setted deliberately, is something that I thought actually reflected well on both Geoff and the people there.

I think from that, an argument could be made that stepping down is not necessary. I can't say I would necessarily agree with it, but I think the argument could be made.


Most of my stance is that currently most people are too SCARED to talk. And this is actually really worrying to me.

I don't think "introducing a mediator," who would be spending about half of their time with Geoff --the epicenter of a lot of that fear-- would actually completely solve all of that problem. It would surprise me a lot if it worked here.


My #1 most desired commitment, right now? Is actually #3, and I maybe should have put it first.

A commitment to not go after people in the future, and especially not to threaten them, for talking about their experiences.

That, by itself, would be quite meaningful to me.

Replies from: Unreal
comment by Unreal · 2021-10-17T21:37:44.381Z · LW(p) · GW(p)

Well, I am at least gonna name a fraction of the assumptions that are implied by this set of requests. I am not asking you to do anything about this, but I am going to name them out loud, in the hopes that people come away more conscious of what other assumptions might be present. 

  • Geoff was the center of the problem and, by himself, should be held accountable 
  • If Geoff agrees to this, he is also agreeing on behalf of Leverage itself, including current members and potentially even past members. Meaning that if not-Geoff people break or violate these commitments, Geoff himself should be held responsible
  • Geoff has a meaningful degree of control over what other people do or do not do / say or do not say
  • The people who are scared of retaliation of some kind are mostly afraid of Geoff in particular
  • People's views of Geoff's willingness and ability to retaliate are basically correct / their fears are justified
  • The aggrieved parties should put the mass of the blame on Geoff
  • They should feel better if Geoff agrees to these requests
  • Geoff is totally free to say "no" to these requests on a public internet forum, and this won't cause a bunch of misunderstanding / assumption of guilt if he does 
  • Zoe's account is more or less the full picture of what happened
  • Geoff shouldn't be in a position of leadership 
    • Geoff is bad in a way that cannot easily be corrected through a postmortem, accountability, or feedback process
    • Geoff meant ill-will toward individuals or sort-of knowingly abused people or used them for ego-inflating grandiose aims
    • Geoff knew the basics of Zoe's post / experience

(If you or others basically agree that the above list is true, that would be illuminating and help me understand where you're coming from.) 

It seems bad that people are scared to talk. I appreciate your and other people's efforts to make it easier for them to make sense of their experience and to process it out loud. I suspect Zoe's account has helped a lot and created space for bravery, and I feel trust that others will come forward when it's time. 

Spiracular, I sense good faith from you and appreciate the thoughtfulness you seem to be bringing. I am wanting to have this conversation in the open so that people aren't left with weird impressions about who knows what, what's true, and what we have collectively agreed is true. 

I'd like to suggest not moving too fast past the "processing what happened" phase by jumping ahead from "observe, orient" to "decide, act." Even if it's for the sake of helping people, I think it's important to "slow our roll" when it comes to deciding what a person should do and engaging them with a leading set of questions or demands. I don't like the things Geoff would subtly be 'agreeing to' by saying 'yes' or 'no' to any of these requests. Him engaging them at all seems like a trap. (He might choose to do so anyway. I am not trying to protect HIM in particular. I am protecting against 'doing justice in a way that doesn't serve'. For that reason, I don't want to see him respond to your requests until there's more 'orientation'.) 

LessWrong is not the ideal medium for handling justice, as far as I can tell, and so I also generally feel like we shouldn't be trying to handle this on LW. 

Replies from: Duncan_Sabien, Spiracular
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-17T22:22:19.664Z · LW(p) · GW(p)

(In the Duncan-culture version of LW, comments like the above are both commonplace and highly appreciated.  I mention this because Unreal has mentioned having a tough time with LW, and imo the above comment demonstrates solidly central LW virtue.)

comment by Spiracular · 2021-10-18T01:02:40.507Z · LW(p) · GW(p)

I appreciate this too. I think this form of push-back is a potentially highly productive one.

I may need to think for a bit about how to respond? But it seemed worth expressing my appreciation for it, first.

comment by Spiracular · 2021-10-17T21:11:36.658Z · LW(p) · GW(p)

Meta-note: I tried the longer-form gentler one? But somebody ELSE complained about that structure.

(A piece of me recognizes that I can't make everybody happy here, but it's a little annoying.)

comment by Spiracular · 2021-10-17T16:40:05.389Z · LW(p) · GW(p)

That last sub-point is a little vague, so let me clarify my personal cut-off on this. Others may disagree.

I wouldn't object to seeing the occasional brief overt statement coming directly from Geoff that his recollection doesn't match someone else's interpretation.

I would object to any further encouragement of things that resemble the "strong, repeated pressure by someone close to Geoff to have the post marked as flawed" that Ruby described [LW(p) · GW(p)].

Consistently denouncing the latter going forward would be very helpful.

Replies from: Ruby
comment by Ruby · 2021-10-17T17:24:02.605Z · LW(p) · GW(p)

I want to clarify that using the word "threat" in my case would cause one to overestimate the severity of the pressure I experienced by 5-20x or something (more so than "strong pressure"). Not that the word is strictly wrong, but the connotations of it are too strong. I might end up listing the actual behaviors in a bit, maybe after more dialog with the person in question.

Replies from: Spiracular
comment by Spiracular · 2021-10-17T19:58:16.878Z · LW(p) · GW(p)

When I said "last sub-point?"

I was referring to "make any strong stealthy attempts to socially discredit people," not "threaten" (by which I mean, "threaten").

I was deliberately treating "no threats" as minimum, and "no strong social pressure" as extra-credit.

Replies from: Ruby
comment by Ruby · 2021-10-17T20:34:06.038Z · LW(p) · GW(p)

Ah, gotcha. I misunderstood the meaning of "sub-point".

comment by Spiracular · 2021-10-17T14:51:26.591Z · LW(p) · GW(p)

I recognize it took some courage to talk about this in the first place, and I don't want to discount that. I am glad that you said something.

...but I also don't want to lose track of this thread [LW(p) · GW(p)].

Edit: I got a request to boil this down, so I separated it to that thread.

And reading the room? I think there is, broadly speaking, a lot of fear of you. And I think part of why that is true is because you cultivated that.

You have noticed that you made some errors which blinded you to the consequences of some of your actions, and I think that's a good start? I hope you might be able to agree with me that this attitude of fear is probably blinding you to the reporting of any further harms.

I recognize processing takes time, and there hasn't been a lot of time yet. But also, I think somebody needed to say this to your face, and it might as well be me.

How do you want to help wind down this aura of fear, which I think is still blinding not just most of us, but also YOU, to a lot of the full reality of what happened?

(And it might well be, that you will help with this by saying almost nothing and going after no-one. But if so? I think it would help, if you briefly committed to that outright.)

comment by Spiracular · 2021-10-17T14:51:11.248Z · LW(p) · GW(p)

I appreciate hearing from you about some of what you probably got wrong.

I'm pretty sure that a lot of this started out relatively benignly, and spiraled?

I agree with your impression that arrogance was at least one of several pressures that made it hard to see that things were going in a bad direction. A lot of invisible guard-rails were dropped or traded away over time, and the absence of a certain amount of reality-checking made it very hard to fix after things had veered off the rails.

I hope your account contributes to making people less likely to make similar errors in the future.

(I would also be very unhappy if I ever saw you having a substantial amount of power over people again, though, fwiw.)

comment by Raemon · 2021-11-07T23:22:10.721Z · LW(p) · GW(p)

I had thought about saying this earlier, for fairness/completeness, but didn't get around to it. I've heard of some people feeling wary of speaking positively of Leverage out of vague worry of reprisal.

So... I do want to note 

a) I got a lot of personal value from interacting with Geoff personally. In some sense I'm an agent who tries to do ambitious things because of him. He looked at my early projects (Solstice in particular), he understood them, and told me he thought they were valuable. This was an experience that would later feed into my thoughts in this post [LW · GW].

b) I also have gotten some good techniques from the Leverage ecosystem. I'm not 100% sure which ideas came from where, but Belief Reporting in particular has been a valuable tool in my toolkit.

(none of this is meant to be evidence about a bunch of other claims in this thread. Just wanted to somewhat offset the arguments-are-soldiers default)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-08T02:31:52.221Z · LW(p) · GW(p)

Piggybacking with additional accurate (albeit somewhat-tangential) positive statements, with a hope of making it seem more possible to say true positive and negative things about Leverage (since I've written mostly negative things, and am writing another negative thing as we speak):

The 2014 EA Retreat, run by Leverage, is still by far the best multi-org EA or rationalist event I've ever been to, and I think it had lots of important positive effects on EA.

comment by AnnaSalamon · 2021-10-14T20:08:08.951Z · LW(p) · GW(p)

I imagine a lot of people want to say a lot of things about Leverage and the dynamics around it, except it’s difficult or costly/risky or hard-to-imagine-being-heard-about or similar.

If anyone is up for saying a bit about how that is for you personally (about what has you reluctant to try to share stuff to do with Leverage, or with EA/Leverage dynamics or whatever, that in some other sense you wish you could share — whether you had much contact with Leverage or not), I think that would be great and would help open up space.

I’d say err on the side of including the obvious.

Replies from: Ruby, mingyuan, habryka4, BayAreaHuman, Spiracular, Evan_Gaensbauer, Linch
comment by Ruby · 2021-10-16T05:21:12.070Z · LW(p) · GW(p)

I interacted with Leverage some over the years. I felt like they had useful theory and techniques, and was disappointed that it was difficult to get access to their knowledge. I enjoyed their parties. I did a Paradigm workshop. I knew people in Leverage to a casual degree.

What's live for me now is that when the other recent post about Leverage was published, I was subjected to strong, repeated pressure by someone close to Geoff to have the post marked as flawed, and asked to lean on BayAreaHuman to approximately retract the post or acknowledge its flaws. (This request was made of me in my new capacity as head of LessWrong.) "I will make a fuss" is what I was told. I agreed that the post has flaws (I commented to that effect in the thread) and this made me feel the pressure wasn't illegitimate despite being unpleasant. Now it seems to be part of a larger concerning pattern.

Further details seem pertinent, but I find myself reluctant to share them (and already apprehensive that this more muted description will have the feared effect) because I just don't want to damage the relationship I have with the person who was pressuring me. I'm unhappy about it, but I still value that relationship. Heck, I haven't named them. I should note that this person updated (or began reconsidering their position) after Zoe's post and has since stopped applying any pressure on me/LessWrong. 

With Geoff himself (with whom I personally have had a casual positive relationship) I feel more actual fear of being critical or in any way taking the side against Leverage. I predict that if I do so, I'll be placed on the list of adversaries. And something like, just based on the reaction to the Common knowledge post, Leverage is very agenty when it comes to their reputation. Or I don't know, I don't fear any particularly terrible retribution myself, but I am loath to make "enemies".

I'd like to think that I've got lots of integrity and will say true things despite pressures and incentives otherwise, but I'm definitely not immune to them. 

Replies from: RobbBB, TekhneMakre
comment by Rob Bensinger (RobbBB) · 2021-10-16T05:55:36.701Z · LW(p) · GW(p)

With Geoff himself (with whom I personally have had a casual positive relationship) I feel more actual fear of being critical or in any way taking the side against Leverage. I predict that if I do so, I'll be placed on the list of adversaries. And something like, just based on the reaction to the Common knowledge post, Leverage is very agenty when it comes to their reputation. Or I don't know, I don't fear any particularly terrible retribution myself, but I am loath to make "enemies".

If you do make enemies in this process, in trying to help us make sense of the situation: count me among the people you can call on to help.

Brainstorming more concrete ideas: if someone makes a GoFundMe to try to offset any financial pressure/punishment Leverage-adjacent people might experience from sharing their stories, I'll be very happy to contribute.

comment by TekhneMakre · 2021-10-16T05:44:03.020Z · LW(p) · GW(p)
I'm unhappy about it, but I still value that relationship

Positive reinforcement for finding something you could say that (1) protects this sort of value at least somewhat and (2) opens the way for aggregation of the metadata, so to speak; like without your comment, and other hypothetical comments that haven't happened yet for similar reasons, the pattern could go unnoticed.


I wonder if there's an extractable social norm / conceptual structure here. Something like separating [the pattern which your friend was participating in] from [your friend as a whole, the person you have a relationship with]. Those things aren't separate exactly, but it feels like it should make sense to think of them separately, e.g. to want to be adversarial towards one but not the other. Like, if there's a pattern of subtly suppressing certain information or thoughts, that's adversarial, and we can be agnostic about the structure/location of the agency behind that pattern while still wanting to respond appropriately in the adversarial frame.

comment by mingyuan · 2021-10-15T18:44:55.363Z · LW(p) · GW(p)

My contact with Leverage over the years was fairly insignificant, which is part of why I don’t feel like it’s right for me to participate in this discussion. But there are some things that have come to mind, and since Anna’s made space for that, I’ll note them now. I still think it’s not really my place to say anything, but here’s my piece anyway. I’m speaking only for myself and my own experience.

I interviewed for an ops position at Leverage/Paradigm in early 2017, when I was still in college. The process took maybe a couple months, and the in-person interview happened the same week as my CFAR workshop; together these were my first contact with the Bay community. Some of the other rationalists I met that week warned me against Leverage in vague terms; I discussed their allegations with the ops team at my interview and came away feeling satisfied that both sides had a point.

I had a positive experience at the interview and with the ops team and their hiring process in general. The ops lead seemed to really believe in me and recommended me to other EA orgs after I didn’t get hired at Paradigm, and that was great. My (short-term) college boyfriend had a good relationship with Leverage and later worked at Paradigm. In mid-2017 I met a Leverage employee in a non-Leverage context and we went on a couple dates; that ended amicably. All that’s just to say that at that point, I thought I had a fairly positive relationship with them.

Then, Leverage/Paradigm put on EA Summit in the summer of 2018. I applied to attend and was rejected. My boyfriend, who I think attended a Paradigm workshop around that time, managed to get that decision reversed, but told me that I was rejected because I was on a list of people who might speak ill of Leverage. That really rubbed me the wrong way. I didn’t think I had ever acted in a way to deserve that, and it seemed bad to me that they were so paranoid about their reputation that they would reject large swaths of people from a conference that’s supposed to bring together EAs from around the world, just because of vague suspicion. Ironically that’s the personal experience that led me to distrust Leverage the most. 

The bottom line is that discussions around Leverage’s reputation have always been really fraught and murky, and it’s totally understandable to me that people would fear unknown repercussions for discussing Leverage in public. Many other people in these threads have said that in various ways, but there’s my concrete example.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-15T19:17:15.632Z · LW(p) · GW(p)

The obsession with reputation control is super concerning to me, and I wonder how this connects up with Leverage's poor reputation over the years.

Like, I could imagine two simplified stories...

Story 1:

  • Leverage's early discoveries and methods were very promising, but the inferential gap was high -- they really needed a back-and-forth with someone to properly communicate, because everyone had such different objections and epistemic starting points. (This is exactly the trouble MIRI had in its early comms -- if you try to anticipate which objections will be salient to the reader, you'll usually miss the mark. And if you do this a lot, you miss the mark and are long-winded.)
  • Because of this inferential gap, Leverage acquired a very bad reputation with a bunch of people who (a) misunderstood its reasoning, and then (b) didn't get why Leverage wasn't investing more into public comms.
  • Leverage then responded by sharing less and trying to reset its public reputation to 'normal'. It wasn't trying to become super high-status, just trying to undo the damage already done / prevent things from further degrading as rumors mutated over time. Unfortunately, its approach was heavy-handed and incompetent, and backfired.

Story 2:

  • Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers.
  • This was one of the causes of Leverage's bad reputation, from an early date. Through some combination of 'people noticing when Leverage bungles a PR operation' and 'humans are pretty good at detecting other humans' character, and picking up on subtle cues that someone is manipulative'.

To what extent is one or the other true? (Another possibility is that there isn't much of a causal tie between Leverage's PR obsession and its bad reputation, and they just both occurred for other reasons.)

Replies from: BayAreaHuman
comment by BayAreaHuman · 2021-10-20T06:12:30.678Z · LW(p) · GW(p)

Based on broad-strokes summaries said to me by ex-Leveragers (though admittedly not first-hand experience), I would say that the statement "Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers" rings true to what I have heard.

Some things mentioned to me by Leverage people as typical/archetypal of Geoff's attitude include being willing to lie to people outside Leverage, feeling attacked or at risk of being attacked, and viewing adjacent non-Leverage groups within the broader EA sphere as enemies.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-20T06:29:16.309Z · LW(p) · GW(p)

Thanks! To check: did one or more of the ex-Leveragers say Geoff said he was willing to lie? Do you have any detail you can add there? The lying one surprises me more than the others, and is something I'd want to know.

Replies from: BayAreaHuman, BayAreaHuman
comment by BayAreaHuman · 2021-10-21T16:48:09.864Z · LW(p) · GW(p)

Here is an example:

  • Zoe's report says of the information-sharing agreement "I am the only person from Leverage who did not sign this, according to Geoff who asked me at least three times to do so, mentioning each time that everyone else had (which read to me like an attempt to pressure me into signing)."

  • I have spoken to another Leverage member who was asked to sign, and did not.

  • The email from Matt Fallshaw says the document "was only signed by just over half of you". Note the recipients list includes people (such as Kerry Vaughan) who were probably never asked to sign because they were not present, but I would believe that such people are in the minority; so this isn't strict confirmation, but just increased likelihood, that Geoff was lying to Zoe.

This is lying to someone within the project. I would subjectively anticipate higher willingness to lie to people outside the project, but I don't have anything tangible I can point to about that.

comment by BayAreaHuman · 2021-10-20T07:16:04.171Z · LW(p) · GW(p)

I am more confident that what I heard was "Geoff exhibits willingness to lie". I also wouldn't be surprised if what I heard was "Geoff reports being willing to lie". I didn't tag the information very carefully.

comment by habryka (habryka4) · 2021-10-14T23:20:47.341Z · LW(p) · GW(p)

My current feelings are a mixture of the following: 

  • I disagree with a lot of the details of what many people have said (both people who had bad experiences and people defending their Leverage experiences and giving positive testimonials), and feel like expressing my take has some chance of making those people feel like their experiences are invalidated, or at least spark some conflict of some type
  • I know that Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand, and that makes me both feel like I can trust many fewer things in the discussion, and makes me personally more hesitant to share some things (while also feeling like that's kind of cowardly, but I haven't yet had the time to really work through my feelings here, which in itself has some chilling effects that I feel uncomfortable with, etc.)
  • On the other side, there have been a lot of really vicious and aggressive attacks on anyone saying anything pro-Leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It's also been more of a crowd-driven phenomenon, which makes it less predictable and more scary.
  • I feel like it's going to be really hard to say anything without people pigeonholing me into belonging to some group that is trying to rewrite the rationality social and political landscape some way, and that makes me feel like I have to actively think about how to phrase what I am saying in a way that avoids that pigeonholing effect (as a concrete example, one person approached me who read Ben's initial comment on the "BayAreaHuman" post that said "I confirm that this is a real person in good standing" as an endorsement of the post, when the comment was really just intended as confirming some facts about the identity of the poster, with basically complete independence from the content of the post)
  • I myself have access to some sensitive and somewhat confidential information, and am struggling with navigating exactly which parts are OK to share and which ones are not. 
Replies from: RobbBB, farp
comment by Rob Bensinger (RobbBB) · 2021-10-15T00:13:45.915Z · LW(p) · GW(p)

Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand

I assume there isn't a public record of this anywhere? Could I hear more details about what was said? This sounds atrocious to me.

I similarly feel that I can't trust the exculpatory or positive evidence about Leverage much so long as I know there's pressure to withhold negative information. (Including informal NDAs and such.)

On the other side, there have been a lot of really vicious and aggressive attacks on anyone saying anything pro-Leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It's also been more of a crowd-driven phenomenon, which makes it less predictable and more scary.

I agree with this too, and think it's similarly terrible, but harder to blame any individual for (and harder to fix).

I assume it's to a large extent an extreme example of the 'large inferential gaps + true beliefs that sound weird' afflicting a lot of EA orgs, including MIRI. Though if Leverage has been screwed up for a long time, some of that public reaction may also have been watered over the years by true rumors spreading about the org.

comment by farp · 2021-10-15T03:52:56.028Z · LW(p) · GW(p)

Let's stand up for the truth regardless of threats from Geoff/Leverage, and let's stand up for the truth regardless of the mob. 

I feel like it's going to be really hard to say anything without people pigeonholing me into belonging to some group that is trying to rewrite the rationality social and political landscape some way.

Let's stand up for the truth! Maintaining some aura of neutrality or impartiality at the expense of the truth would be IMO quite obviously bad. 

I myself have access to some sensitive and somewhat confidential information, and am struggling with navigating exactly which parts are OK to share and which ones are not.

I think that it is seen as not very normative on LW to say "I know things, confidential things I will not share, and because of that I have a very [bad/good] impression of this person or group". But IMO it's important to surface. Vouching is an important social process.

Replies from: ChristianKl, Ruby
comment by ChristianKl · 2021-10-15T08:58:32.137Z · LW(p) · GW(p)

Let's stand up for the truth regardless of threats from Geoff/Leverage, and let's stand up for the truth regardless of the mob. 

It seems that your account was registered just to participate in this discussion, and you withhold your personal identity.

If you sincerely believe that information should be shared, why are you withholding your own identity while telling other people to take risks?

Replies from: farp
comment by farp · 2021-10-15T21:59:14.679Z · LW(p) · GW(p)

I have no private information to share. I think there is an obvious difference between asking powerful people in the community to stand up for the truth, and asking some rando commentator to de-anonymize. 

comment by Ruby · 2021-10-15T04:37:39.265Z · LW(p) · GW(p)

Anna is attempting to make people comfortable having this difficult conversation about Leverage by first inviting them just to share what factors are affecting their participation. Oliver is kindly obliging and saying what's going through his mind. 

This seems like a good approach to me for getting the conversation going. Once people have shared what's going through their minds–and probably these need to be received with limited judgmentality–the group can then understand the dynamics at play and figure out how to proceed having a productive discussion.

All that to say, I think it's better to hold off on pressuring people or saying their reactions aren't normative [1] in this sub-thread. Generally, I think having this whole conversation well requires a gentleness and patience in the face of the severe, hard-to-talk-about situation.  Or to be direct, I think your comments in this thread have been brusque/pushy in a way that's hurting the conversation (others feel free to chime in if that seems wrong to them).

[1] For what it's worth, I think disclosing that your stance is informed by private info is good and proper.

Replies from: mayleaf
comment by mayleaf · 2021-10-16T00:42:33.421Z · LW(p) · GW(p)

I think your comments in this thread have been brusque/pushy in a way that's hurting the conversation (others feel free to chime in if that seems wrong to them).

I mentioned in a different comment that I've appreciated some of farp's comments here for pushing back against what I see as a missing mood in this conversation (acknowledgment that the events described in Zoe's account are horrifying, as well as reassurance that people in leadership positions are taking the allegations seriously and might take some actions in response). I also appreciate Ruby's statement that we shouldn't pressure or judge people who might have something relevant to say.

The unitofcaring post on mediators and advocates seems relevant here. I interpret farp (edit: not necessarily in the parent comment, but in various other comments in this thread) as saying that they'd like to see more advocacy in this thread instead of just mediation. I am not someone who has any personal experiences to share about Leverage, but if I imagine how I'd personally feel if I did, I think I agree.

Replies from: Spiracular, RobbBB, farp
comment by Spiracular · 2021-10-16T01:42:06.851Z · LW(p) · GW(p)

On mediators and advocates: I think order-of-operations MATTERS.

You can start seeking truth, and pivot to advocate, as UOC says.

What people often can't do easily is start with advocate, and pivot to truth.

And with something like this? What you advocated early can do a lot to color both what and who you listen to, and who you hear from.

Replies from: farp
comment by farp · 2021-10-17T06:18:48.023Z · LW(p) · GW(p)

You can start seeking truth, and pivot to advocate, as UOC says.

The entire thesis of the post is that you want a mixture of advocacy and mediation in the community. So if your proposal is that we all mediate, and then pivot to advocacy, I think that is not at all what UOC says. 

Not that I super endorse the prescription / dichotomy that the post makes to begin with.

comment by Rob Bensinger (RobbBB) · 2021-10-16T01:05:54.515Z · LW(p) · GW(p)

I liked Farp's "Let's stand up for the truth" comment, and thought it felt appropriate. (I think for different reasons than "mediators and advocates" -- I just like people bluntly stating what they think, saying the 'obvious', and cheerleading for values that genuinely deserve cheering for. I guess I didn't expect Ollie to feel pressured-in-a-bad-way by the comment, even if he disagrees with the implied advice.)

Replies from: farp
comment by farp · 2021-10-17T06:06:19.393Z · LW(p) · GW(p)

Thanks. Your comments and mayleaf's do mean a lot to me. Also, I was surprised by negative reaction to that comment and didn't really expect it to come off as admonishment or pressure. Love 2 cheerlead \o/

comment by farp · 2021-10-17T07:59:52.595Z · LW(p) · GW(p)

I have thought about this UOC post and it has grown on me.

The fact is that I believe Zoe and I believe her experience is not some sort of anomaly. But I am happy to advocate for her just on principle.

Geoff has far more resources and much at stake. Zoe just has (IMO) the truth and bravery and little to gain but peace. Justice for Geoff just doesn't need my assistance, but justice for Zoe might.

So I am happy to blindly ally with Zoe and any other victims. And yes I would like others to do the same, and broadcast that we will fight for them. Otherwise they are entering a potentially shitty looking fight with little to gain against somebody with everything to lose.

I don't demand that no mediation take place, but if I want to plant my flag, that's my business. It's not like I am doing anything dishonest in the course of my advocacy.

And to be completely frank, as an advocate for the victims, I don't really want Anna Salamon to be one of the major mediators here. I don't think she's got a good track record with CFAR stuff at all -- I have mentioned Robert Lecnik a few times already.

I think Kelsey's post is right -- mediators need to seem impartial. For me, Anna can't serve this role. I couldn't say how representative I am.

Replies from: Viliam
comment by Viliam · 2021-10-17T12:13:33.196Z · LW(p) · GW(p)

I will be happy to contribute financially to Zoe's legal defense, if Geoff decides to take revenge.

In the meanwhile, I am curious about what actually happened. The more people talk, the better.

comment by BayAreaHuman · 2021-10-16T02:24:59.474Z · LW(p) · GW(p)

I appreciate this invitation. I'll re-link to some things I already said on my own stance: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QKKnepsMoZmmhGSe [LW(p) · GW(p)]

Beyond what I laid out there:

  • It was challenging being aware of multiple stories of harm, and feeling compelled to warn people interacting with Geoff, but not wanting to go public with surprising new claims of harm. (I did mention awareness of severe harm very understatedly in the post. I chose instead to focus on "already known" properties that I feel substantially raise the prior on the actually-observed type of harm, and to disclose in the post that my motivation in cherry-picking those statements was to support pattern-matching to a specific template of harm).

  • After posting, it was emotionally a bit of a drag to receive comments that complained that the information-sharing attempt was not done well enough, and comparatively few comments grateful for attempting to share what I could, as best I could, to the best of my ability at the time, although the upvote patterns felt encouraging. I was pretty much aware that that was what was going to happen. In general, "flinching in anticipation of a high criticism-to-gratitude ratio" is an overall feeling I have when I imagine posting anything on LessWrong.

  • I was told by friends before posting that I ought to consider the risk to myself and to my contacts of tangible real-world retribution. I don't have any experience with credible risk of real-world retribution. It feels mind-numbing.

  • Meta: I haven't felt fully comfortable describing retribution concerns, including in the post, because I haven't been able to rule out that revealing the tactical landscape of why I'm sharing or avoiding certain details is simply more information that can be used by Geoff and associates to make life harder for people pursuing clarity. This is easier now that Zoe has written firsthand about specific retribution concerns.

  • Meta-meta: It doesn't feel great to talk about all this paranoid adversarial retribution thinking, because I don't want to contribute to the spread of paranoia and adversarial thinking. It feels contagious. Zoe describes a very paranoid atmosphere within Leverage and among those who left, and I feel that attesting to a strategically-aware disclosure pattern carries that toxic vibe into new contexts.

Replies from: Spiracular, TekhneMakre
comment by Spiracular · 2021-10-16T17:50:26.010Z · LW(p) · GW(p)

Since it sounds like just-upvotes might not be as strong a signal of endorsement as positive engagement...

I want to say that I really appreciate and respect that you were willing to come forward with facts that were broadly known in your social graph, but had been systematically excluded from most people's models.

And you were willing to do this in a pretty adversarial environment! You had to deal with a small invisible intellectual cold war that ensued, almost alone, without backing down. This counts for even more.


I do have a little bit of sensitive insider information, and on the basis of that: Both your posts and Zoe's have looked very good-faith to me.

In a lot of places, they accord with or expand on what I know. There are a few parts I was not close enough to confirm, but they have broadly looked right to me.

Replies from: Spiracular
comment by Spiracular · 2021-10-16T17:51:21.616Z · LW(p) · GW(p)

I also have a deep appreciation for Zoe calling out that different corners of Leverage had very different experiences with it. Because they did! Not all time-slices or sub-groups within it experienced the same problems.

This is probably part of why it was so easy to systematically play people's personal experiences against each other: since he knew the context through which Leverage was experienced, Geoff or others could systematically bias whose reports were heard.

(Although I think it will be harder in the future to engage in this kind of bullshit, now that a lot of people are aware of the pattern.)


To those who had one of the better firsthand experiences of Leverage:

I am still interested in hearing your bit! But if you are only engaging with this due to an inducement that probably includes a sampling-bias, I appreciate you including that detail.

(And I am glad to see people in this broader thread, being generally open about that detail.)

comment by TekhneMakre · 2021-10-16T02:41:40.708Z · LW(p) · GW(p)
Meta-meta: It doesn't feel great to talk about all this paranoid adversarial retribution thinking, because I don't want to contribute to the spread of paranoia and adversarial thinking. It feels contagious. Zoe describes a very paranoid atmosphere within Leverage and among those who left, and I feel that attesting to a strategically-aware disclosure pattern carries that toxic vibe into new contexts.

I don't have anything to add, but I just want to say I felt a pronounced pang of warmth/empathy towards you reading this part. Not sure why, something about fear/bravery/aloneness/fog-of-war.

comment by Spiracular · 2021-10-15T16:24:21.913Z · LW(p) · GW(p)

I will talk about my own bit with Leverage later, but I don't feel like it's the right time to share it yet.

(But fwiw: I do have some scars, here. I have a little bit of skin in this one. But most of what I'm going to talk about, comes from analogizing this with a different incident.)

A lot of the position I naturally slide into around this, which I have... kind of just embraced, is of trying to relate hard to the people who:

  • WERE THERE
  • May have received a lot of good along with the bad
  • May have developed a very complicated and narratively-unsatisfying opinion because of that, which feels hard to defend
  • Are very sensitized to condemning mob-speak. Because they've been told, again and again, that anything good they got out of the above will be swept out with the bathwater if the bad comes to light.
    • This sort of thing only stays covered up for this long, if there was a lot of pressure and plausible-sounding arguments pointing in the direction of "say nothing." The particular forms of that, will vary.
    • Core Leverage seems pretty willing to resort to manipulation and threats? And despite me generally trying so hard to avoid this vibe: I want to condemn that outright.
    • Also, in any other circumstance: Most people are very happy to condemn people who break strong secrecy agreements that they've made. If you feel like you've made one, I recognize that this is not easy to defy.
      • (My own part in this story is small. The only reason I'm semi-comfortable with sharing it, is because I got all of my own "vaguely owning the fact that I broke a very substantial secrecy agreement, publicly, to all my friends" out of the way EARLY. It would be bogging me down like crazy, otherwise. I respect Zoe, and others, for defying comparable pulls, or even worse ones.)
        • If you're stuck on this bit, I would like to say: This is an exceptional circumstance. You should maybe talk to somebody, eventually. Maybe only once your own processing has settled down. Publicly might not be the right call for you, and I won't push for it. Please take care for yourself, and try to be careful to pick someone who is not especially prone to demonizing things.
  • People can feel their truth drowned out by mobs of uninvested people, condemning it from afar.
    • The people who know what happened here, are in the minority. They have the most knowledge of what actually happened, and the most skin in this. They are also the people with the most to fear, and the most to lose.

People often don't appreciate, how much the sheer numbers game can weigh on you. It can come to feel like the chorus is looming over you, in this sort of circumstance; poised, always ready to condemn you and yours from afar. Each individual member is only "speaking-their-truth" once, but in aggregate, they can feel like an army.

It's hard to keep appropriate sight of the fact that the weight of the people who were there, and their story, is probably worth 1000x as much as even the most coherent but distant and un-invested condemning statement. They will not get as many shares. It might not even qualify as a story! But their contributions are worth a lot more, at least in my mind. Because they were THERE.

And I... want to stick up for them where relevant? Because this one wasn't my incident, but I know how hard it might be for them to do it for themselves. I can't swear I will do a good job of it? But the desire is there.


I do think a more-private forum, that is enriched for people who were closer to the event, might be a more comfortable place for some people to recount. It's part of why I tried to talk up that possibility, in another thread.

...it is unfortunately not my place to make this, though. For various reasons, which feel quite solid, to me.

(And after Ryan's account? I honestly have some concerns about it getting infiltrated by one of the more manipulative people around Leverage. I don't want to discount that fear! I still think it might be a good idea?)

I do think we could stand to have a clearer route for things to be shared anonymously, because I suspect at least some people would be more comfortable that way.

(Since "attempts at deanonymization" appears to be a known issue, it may be worth having a flag for "only share as numeric aggregations of >1, using my recounting as a data-point.")

EDITEDIT: This press release names Anna Salamon, Eli Tyre, Matthew Graves, and Matt Falshaw as several somewhat-intermediary people who can be contacted. I feel fewer misgivings around contacting them, than I did around the proposal of contacting Geoff and Larissa to handle this internally.

Replies from: Spiracular, Spiracular, TekhneMakre
comment by Spiracular · 2021-10-15T16:25:24.065Z · LW(p) · GW(p)

I was once in a similar position, due to my proximity to a past (different) thing. I kinda ended up excruciatingly sensitive to how some things might read or feel to someone who was close, got a lot of good out of it (with or without the bad), and mostly felt like there was no way their account wouldn't be twisted into something unrecognizable. And who may be struggling with processing an abrupt shift in their own personal narrative --- although I sincerely hope the 2 years of processing helped to make this less of a thing? But if you are going through it anyway, I am sorry.

And... I want this to go right. It didn't go right then; not entirely. I think I got yelled at by someone I respect, the first time I opened up about it. I'm not quite sure how to make this less scary for them? But I want it to be.

The people I know who got swept up in this includes some exceptionally nice people. There is at least one of them, who I would ordinarily call exceptionally sane. Please don't feel like you're obligated to identify as a bad person, or as a victim, because you were swept up in this. Just because some people might say it about you, doesn't make it who you are.

comment by Spiracular · 2021-11-01T20:31:47.633Z · LW(p) · GW(p)

While I realize I've kinda de-facto "taken a side" by this point (and probably limited who will talk to me as a result)? I was mispronouncing Geoff's name, before this hit; this is pretty indicative of how little I knew him personally. I started out mostly caring about having the consequences-for-him be reached based off of some kind of reasonable assessment, and not caring too much about having it turn out one way or another. I still feel more invested in there being a good process, and in what will generate the best outcomes for the people who worked under him (or will ever work under him), than anything else.

Compared to Brent's end-result of "homeless with health-problems in Hawaii" **? The things I've asked for have felt mild. But I also knew that if I wasn't handling mentioning them, somebody else probably would. In my eyes, we probably needed someone outside of the Leverage ecosystem who knew a lot of the story (despite the substantial information-hiding efforts) to be handling this part of the response.

Pushing for people to publish the information-hiding agreement, and proposing that Geoff maybe shouldn't have a position with a substantial amount of power over others (at least while we sort this out), felt to me like fairly weaksauce requests. I am still a bit surprised that Geoff may have taken this as a convincing audition for a "prosecutor" role? I am angry and clued-in enough to sincerely fill the role, if somebody has to and if nobody else will touch it. But it still surprised me, because it is not what I see as my primary responsibility here.

**Despite all his flaws and vices? I was close to Brent. I do care about Brent, and I wouldn't have wished that for him.

comment by TekhneMakre · 2021-10-16T01:03:07.271Z · LW(p) · GW(p)

An abstract note: putting stock in anonymous accounts potentially opens wider a niche for false accounts, because anonymity prevents doing induction about trustworthiness across accounts by one person. (I think anonymity is a great tool to have, and don't know if this is practically a problem; I just want to track the possibility of this dynamic, and appreciate the additional value of a non-anonymous account.)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-16T01:07:51.898Z · LW(p) · GW(p)

One tool here is for a non-anonymous person to vouch for the anonymous person (because they know the person, and/or can independently verify the account).

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-16T01:15:32.923Z · LW(p) · GW(p)

True. A maybe not-immediately-obvious possibility: someone playing Aella's role of posting anonymous accounts could offer the following option: if you give an account and take this option, then if the poster later finds out that you seriously lied, they have the option to de-anonymize you. The point being, in the hypothetical where the account is egregiously false, the accounter's reputation still takes a hit; and so, these accounts can be trusted more. If there's no possibility of de-anonymization, then the account can only be trusted insofar as you trust the poster's ability to track accounters' trustworthiness. Which seems like a more complicated+difficult task. (This might be a terrible thing to do, IDK.)

Replies from: Spiracular, Viliam
comment by Spiracular · 2021-10-16T01:57:40.728Z · LW(p) · GW(p)

I get VERY creepy vibes from this proposal, and want to push back hard on it.

Although, hm... I think "lying" and "enemy action" are different?

Enemy action occasionally warrants breaking contracts back, after they didn't respect yours.

Whereas if there is ZERO lying-through-negligence in accounts of PERSONAL EXPERIENCES, we can be certain we set the bar-of-entry far too high.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-16T02:22:40.918Z · LW(p) · GW(p)

(Downvoted. I'd have strong downvoted but -5 seems too harsh. Sounds like you're responding to something other than what I said, and if that's right, I don't like that you said "VERY creepy" about the proposal, rather than about whatever you took from it.)

Replies from: Spiracular
comment by Spiracular · 2021-10-16T16:09:23.759Z · LW(p) · GW(p)

I was very up-front about the role I am attempting to embody in this: Relating to, and trying to serve, people with complicated opinions who are finding it hard to talk about this.

I feel we needed someone to take this role. I wish someone had done it for me, when my stuff happened.


You seem to not understand that I am making this statement, from that place and in that capacity.

Try seeing it through the lens of that, rather than thinking that I'm making confident statements about your epistemic creepiness.

Hopefully this helps to resolve your confusion.

comment by Viliam · 2021-10-16T20:30:01.525Z · LW(p) · GW(p)

Depends on the algorithm to determine whether "you seriously lied".

Imagine a hypothetical situation where telling the truth puts you in danger, but you read this offer, think "well, I am telling the truth, so they will protect my anonymity", and truthfully describe your version. Unluckily for you, your opponent lied, and was more convincing than you. Afterwards, because your story contradicts the accepted version of events, it seems that you were lying, unfairly accusing the people who are deemed innocent. As a punishment for "seriously lying", your identity is exposed.

If people with sensitive information suspect that something like this could happen, then it defeats the purpose of the proposal.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-16T21:03:24.861Z · LW(p) · GW(p)

Yeah, that seems like a big potential flaw. (Which could just mean, no one should stick their neck out like that.) I'm imagining that there's only potential benefit here in cases where the accounter also has strong trust in the poster, such that they think the poster almost certainly won't be falsely convinced that a truth is an egregious lie.

In particular, the agreement isn't about whether the court of public opinion decides it was a lie, just the poster's own opinion. (The poster can't be held accountable to that by the public, unless the public changes its mind again, but the poster can at least be held accountable by the accounter.) (We could also worry that this option would only be taken by accounters with accounts that are infeasible to ever reveal as egregious lies, which would be a further selection bias, though this is sort of going down a hypothetical rabbit hole.)

comment by Evan_Gaensbauer · 2021-10-15T03:56:37.885Z · LW(p) · GW(p)

In the past, I've been someone who has found it difficult and costly to talk about Leverage and the dynamics around it, or about organizations that are or have been affiliated with effective altruism, though the times I've spoken up I've done more than others. I would have done it more, but the costs were that some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general, and discouraged me from speaking up again with what sometimes amounted to nothing more than peer pressure.

That was a few years ago. For lots of reasons, it's now easier, less costly, and less risky for me to speak up, and easier not to feel fear. I don't know yet what I'll say regarding any or all of this related to Leverage because I don't have any sense of how I might be prompted or provoked to respond. Yet I expect I'll have more to say, though I don't yet have any particular feelings about what I might share as relevant. I'm sensitive to how my statements might impact others, but for myself personally I feel almost indifferent.

comment by Linch · 2021-10-14T23:51:37.203Z · LW(p) · GW(p)

My general feeling about this is that the information I know is either well-known or otherwise "not my story to tell." 

I've had very few direct interactions with Leverage except applying to Pareto, a party or two, and some interactions with Leverage employees (not Geoff) and volunteers.  As is common with human interactions, I appreciated many but not all of my interactions.

Like many people in the extended community, I've been exposed to a non-overlapping subset of accounts/secondhand rumors of varying degrees of veracity. For some things it's been long enough that I can't track which confidences I'm supposed to keep, and under which conditions, so it seems better to err on the side of silence.

At any rate, it's ultimately not my story/tragedy. My own interactions with Leverage have not been personally noticeably harmful or beneficial.

comment by Avi (Avi Weiss) · 2021-10-22T07:53:43.901Z · LW(p) · GW(p)

FYI - Geoff will be talking about the history of Leverage and related topics on Twitch tomorrow (Saturday, October 23rd 2021) starting at 10am PT (USA West Coast Time). Apparently Anna Salamon will be joining the discussion as well.

Geoff's Tweet

Text from the Tweet (for those who don't use Twitter):

"Hey folks — I'm going live on Twitch, starting this Saturday. Join me, 10am-1pm PT:
twitch.tv/geoffanders
This first stream will be on the topic of the history of my research institute, Leverage Research, and the Rationality community, with @AnnaWSalamon as a guest."

Replies from: AnnaSalamon, Unreal
comment by AnnaSalamon · 2021-10-22T08:32:57.793Z · LW(p) · GW(p)

Yep. I hope this isn’t bad to do, but I am doing it.

Replies from: Avi Weiss, Benito
comment by Avi (Avi Weiss) · 2021-10-22T08:38:27.437Z · LW(p) · GW(p)

I'm sure it'll be fine :-)

I'm not involved in this in any way, but from the comments I've seen of yours in these threads you've shown great honesty and openness with everything.

comment by Ben Pace (Benito) · 2021-10-22T08:42:00.854Z · LW(p) · GW(p)

I’d be more inclined to join if I could ask questions. I’ve not twitched before; my sense is the chat is a bit ephemeral to the video. Is the intention that it is mostly going to be you two talking?

Edit: Slightly changed tone of comment to avoid potentially sounding flippant.

Replies from: LarissaRowe
comment by LarissaRowe · 2021-10-23T17:35:15.164Z · LW(p) · GW(p)

Geoff is answering chat questions at the moment (at least until 11 AM PT) so if you have any questions you should consider joining.

comment by Unreal · 2021-10-24T15:39:55.990Z · LW(p) · GW(p)

Unfortunately for me, there is apparently no video recording available on Twitch for this stream? (There are two short clips, but not the full broadcast.) 

If anyone has a link to it, please include it here; that'd be great!

Replies from: AnnaSalamon, TekhneMakre
comment by AnnaSalamon · 2021-10-24T23:09:44.178Z · LW(p) · GW(p)

Alas, no. I'm pretty bummed about it, because I thought the conversation was rather good, but Geoff pushed the "save recording" button after it was started and that didn't work.

Replies from: Lulie
comment by Lulie · 2021-11-03T16:08:38.255Z · LW(p) · GW(p)

Based on the fact Twitch is counter-intuitive about recording (it's caught me out before too) and the technical issues at the start, I made a backup recording just in case – only audio but hope it helps!:

https://drive.google.com/file/d/1Af1dl-v7Q7uJhdX8Al9FsrJDBc4BqM_f/view?usp=sharing

Replies from: habryka4, RobbBB, Lulie
comment by habryka (habryka4) · 2021-11-03T17:57:46.380Z · LW(p) · GW(p)

Thank you so much! This is great!

comment by Rob Bensinger (RobbBB) · 2021-11-03T17:55:24.278Z · LW(p) · GW(p)

!!! BRILLIANT. I thought the conversation was quite important, and your foresight has saved it for the community's memory. Thank you so much. :)

This should be signal-boosted for all the people who missed the stream.

comment by Lulie · 2021-11-04T00:20:52.236Z · LW(p) · GW(p)

Update: I’ve disabled public access by request. Geoff said (here) he’s going to post the recording to his website.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-04T01:56:23.929Z · LW(p) · GW(p)

Audio linked on Geoff's website: https://www.dropbox.com/s/p0nah8ulohypexe/Geoff%20Anders%20-%20Twitch%20-%20Leverage%20History%201%20with%20Anna%20Salamon.m4a?dl=0 

Video link on Geoff's website, corresponding to ten seconds of dead air plus the first 20:35 of the audio: https://www.dropbox.com/s/pt3q5xejglsgrcr/1st%20Twitch%20Stream%20-%20Leverage%20History%20-%20Beginning.mp4?dl=0# 

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-11-04T05:18:26.970Z · LW(p) · GW(p)

I re-listened to Anna and Geoff's conversation, which is the main part of the audio that I found interesting. Timestamps for that conversation:

1:57:57 - Early EA history, the EA Summit, and early EA/Leverage interactions
2:13:34 - Narrative addiction and leaders being unable to talk to each other
2:17:20 - Early Geoff cooperativeness
2:19:58 - Possible causes for EA becoming more narrative-addicted
2:22:35 - Conflict causing group insularity
2:24:50 - Anna on narrative businesses, narrative pyramid schemes, and disagreements
2:28:28 - Geoff on narratives, morale and the epistemic sweet spot
2:30:08 - Anna on trying to block out things that would weaken the narrative, and external criticism of Leverage
2:36:30 - More on early Geoff cooperativeness
2:41:44 - 'Stealing donors', Leverage's weird vibe (non-materialism?), Anna/Geoff's early interactions, 'writing off' on philosophical grounds, and keeping weird things at arm's length
2:50:00 - The value of looking at historical details, and narrative addiction collapse
2:52:30 - Geoff wants out of the rationality community; PR and associations; and disruptive narratives

comment by TekhneMakre · 2021-10-25T06:07:08.482Z · LW(p) · GW(p)

Shoot. They did try at the beginning and thought they were recording. A few other points, additional to my other comment (these are half-remembered, rephrased, and presumably missing parts and context):

  • Anna hypothesizes that Geoff was selecting who he talked with and worked with and hired in part based on them being "not too big", so that he could intellectually dominate them. She tells a story where she and Nate went (in 2017? 2019?) to talk with Geoff, and Anna+Nate thought talking was good or fruitful or something but Geoff seemed uninterested, and maybe that's because Anna and Nate are "too big".
  • Geoff describes trying, 2010±2, to get EA orgs to team up / combine, but finding lack of interest. I got a (speculative) bit of a sense of frame control battles going on in the shadows, like Geoff said something like "well when you think in detail about ambitious plans, you tend to see ways in which other people could fit into them", and I could imagine his overtures having a subtle sense of trying to capture or define a frame, like some proportion of "here's how you fit into my plan" rather than "here's a common goal we're aiming at, here's synergies between our strategies, also let's continuously double crux about crucial things". (It would be bad to punish people for having ambitious plans that involve other people; it would be good to understand how to navigate "provisional plans" that can go in a direction and gain from coordination, while also remaining deeply open to members doing surprising things that upturn the plans, as well as not taking over their soul etc.) ETA: Anna's comment here [LW(p) · GW(p)] seems to be counterevidence: [Anna] was like “yes, that matches my memory and perception; I remember you [Geoff] and Leverage seeming unusually interested in getting specific collaborations or common projects that might support your goals + other groups’ goals at once, going, and more than other groups, and trying to support cooperation in this way”[...].
  • Geoff describes feeling very shut out unfairly by EA; there was a 2012(?) week-long EA summit run by, and at, Leverage, after which there was EA(+x-risk/rationality?) camaraderie, but then Leverage wasn't allowed to have a table at some later EA event (maybe a later EA summit).
  • Geoff describes wanting to have an "exit strategy" from the rationality community, and describes getting "mixed messages", like "please stay" and also something else like "you're bad" or something.
  • Anna described "narrative addiction", i.e. addiction to narratives. Used a speculative example/metaphor of an anorexic, who is somehow addicted to the control offered by a narrative, and so rejects information that pushes against the narrative. (I didn't quite get this; something like, there's a narrative in which being {good, attractive, healthy, non-selfish...?} can be controlled by not eating, and even if that's false, it's nice to think you can achieve those things? Where the analogy is like, thinking [the thing that I'm doing helps with X-risk / life-saving / etc.] is a narrative that one could be addicted to in the same way.) Anna hypothesizes that Leverage (as well as other EA/x-risk orgs) had narrative addiction. I'm curious what Leverage's narratives were, in part because that seemed to play a significant role in Zoe's experience.
comment by Geoff_Anders · 2021-10-14T01:33:09.398Z · LW(p) · GW(p)

Hi everyone. I wanted to post a note to say first, I find it distressing and am deeply sorry that anyone had such bad experiences. I did not want or intend this at all.

I know these events can be confusing or very taxing for all the people involved, and that includes me. They draw a lot of attention, both from those with deep interest in the matter, where there may be very high stakes, and from onlookers with lower context or less interest in the situation. To hopefully reduce some of the uncertainty and stress, I wanted to share how I will respond.

My current plan (after writing this note) is to post a comment about the above-linked post. I have to think about what to write, but I can say now that it will be brief and positive. I’m not planning to push back or defend. I think the post is basically honest and took incredible courage to write. It deserves to be read.

Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.

It may be useful to address the Leverage/Rationality relation or the Leverage/EA relation as well, but discussion of that might distract us from what is most important right now.

Replies from: ChristianKl, Freyja, BlueMarlin, throwaway2456
comment by ChristianKl · 2021-10-14T08:59:16.221Z · LW(p) · GW(p)

Given what the post said about the NDA that people signed when leaving, it seems to me like explicitly releasing people from that NDA (maybe with a provision to anonymize names of other people) would be very helpful for having a productive discussion that can integrate the experiences of many people into public knowledge and create a shared understanding of what happened.

Replies from: BlueMarlin
comment by BlueMarlin · 2021-11-07T22:21:19.952Z · LW(p) · GW(p)

Geoff, in relation to your recent livestream, which was on the topic of helping to craft good incentives so people can speak up, could you comment on the state of the NDAs, and, if people have not yet been released from them, whether or not you will explicitly release people from them in order to facilitate discussion about Leverage? And if not, why not?

Given your emphasis on symmetry (of incentivizing both positive and negative accounts), it would seem obviously necessary to release people from an agreement to "be generally positive about each other" (the very first agreement in the document), which they may still feel bound by, in order to unbias incentives. Cf. the questions and concerns raised in Rob's comment [LW(p) · GW(p)], which remain pertinent.

comment by Freyja · 2021-10-16T17:03:59.138Z · LW(p) · GW(p)

Hi Geoff—have you posted the brief response comment anywhere yet?

Replies from: Benito, Geoff_Anders
comment by Ben Pace (Benito) · 2021-10-17T00:36:46.268Z · LW(p) · GW(p)

I would also be interested in knowing a timeline for the response.

comment by Geoff_Anders · 2021-10-17T11:23:08.521Z · LW(p) · GW(p)

Yes, here: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=3gMWA8PjoCnzsS7bB

comment by BlueMarlin · 2021-11-07T22:23:16.129Z · LW(p) · GW(p)

Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.

Geoff, has this letter been published yet? And if not, when will it be published?

Replies from: Geoff_Anders, Freyja
comment by Geoff_Anders · 2021-11-13T07:52:10.873Z · LW(p) · GW(p)

It was published this evening. Here is a link to the letter, and here is the announcement on Twitter.

Replies from: Freyja
comment by Freyja · 2021-11-13T22:47:48.845Z · LW(p) · GW(p)

Thank you for keeping that promise; I imagine it wasn’t easy to write.

comment by Freyja · 2021-11-08T15:40:02.868Z · LW(p) · GW(p)

I would also be very interested in a timeline for this.

comment by throwaway2456 · 2021-10-15T20:19:25.622Z · LW(p) · GW(p)
  • For Geoff or Reserve: What is the relationship between Leverage and Reserve, and related individuals and entities?
  • For everyone: Under what conditions does restitution to ex-Leveragers make sense? Under what conditions does it make sense for leadership to divest themselves of resources?
  • For everyone: In arguendo, what could restitution or divestment concretely look like?

Edit: I was going to leave the original comment, to provide context to Vaniver’s reply. But it started receiving upvotes that brought it above “-1", making it a more prominent bad example of community norms. I think the upvotes indicate importance in the essence of the questions, but their form was ill-considered and rushed to judgment. In compromise, I've tried to rewrite them more neutrally and respectfully to all involved. I may revisit them a few more times.

Replies from: Vaniver, farp
comment by Vaniver · 2021-10-15T21:33:48.458Z · LW(p) · GW(p)

I wanted to note that I think this comment both a) raises a good point (should Leverage pay restitution to people that were hurt by it? Why and how much?) and b) does so in a way that I think is hostile and assumes way more buy-in than it has (or would need to get support for its proposal).

First, I think most observers are still in "figuring out what's happened" mode. Was what happened with Zoe unusually bad or typical, predictable or a surprise? I think it makes sense to hear more stories before jumping to judgment, because the underlying issue isn't that urgent and the more context, the wiser a decision we can make.

Second, I think a series of leading questions asked to specific people in public looks more like norm enforcement than it does like curious information-gathering, and I think the natural response is suspicion and defensiveness. [I think we should go past the defensiveness and steelman.]

Third, I do think that it makes sense for people to make things right with money when possible; I think that this should be proportional to damages done and expectations of care, rather than just 'who has the money.' Suppose, pulling these numbers out of a hat, the total damage done to Leverage employees (as estimated by them) was $1M and the total value of Geoff's tokens are $10M; the presumption that the tokens should all go to the victims (i.e. that the value of his tokens is equal to the amount of damage done) seems about as detached from reality to me as the assumption that the correct amount of restitution is 0. On a related note, some large amount of the Leverage experience appears to have been self-experimentation; I think the amount we should expect Geoff to be responsible for should take into account how much responsibility the participants thought they were taking for themselves (while not just assuming that they were making an informed call and their initial estimate should be our final one).

Replies from: throwaway2456, farp
comment by throwaway2456 · 2021-10-15T22:32:24.613Z · LW(p) · GW(p)

In retrospect, I apologize for the strident tone and questions in my original comment. I am personally worried about further harm from uses of money or power by Anders, and from Zoe's post it seems like anywhere from a handful to many more people were hurt. If money or tokens are possibly causally downstream of harm, restitution might reduce further harm and address harm that's already taken place. The community is doing ongoing information gathering, though, and my personal rush to judgement isn't keeping pace with that. I'll leave my above comment as is, since it's already received a constructive reply.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-10-17T00:37:42.363Z · LW(p) · GW(p)

I appreciate you addressing Vaniver's concerns about your comment.

comment by farp · 2021-10-17T07:09:26.856Z · LW(p) · GW(p)

Suppose, pulling these numbers out of a hat, the total damage done to Leverage employees (as estimated by them) was $1M and the total value of Geoff's tokens are $10M; the presumption that the tokens should all go to the victims (i.e. that the value of his tokens is equal to the amount of damage done) seems about as detached from reality to me as the assumption that the correct amount of restitution is 0.

The counter argument would be:

Suppose we do not think it should be profitable to start a cult and get rich. If we enforce the norm "if we find out you started a cult and got rich off it, you only get to be 90% rich instead of 100% rich", well, that is not very powerful. Maybe the rest should go to actually-effective charity or something.

That said, a norm where we say "you don't get to be rich anymore" is sort of moot when ultimately Geoff has all the Leverage 🥁💥

comment by farp · 2021-10-17T07:01:43.392Z · LW(p) · GW(p)

I am sad that you have deleted your original comment because it was my favorite comment on this whole page! Your updated version, by comparison, is much worse (no offense).

Look, I think once you are trying to express the idea "I think you should pay millions of dollars to the people you have very badly harmed", you should not be so concerned about whether you are doing so in a "hostile" way. I hope we can all appreciate the comedy in this even if you think neutrality is ultimately better.

I agree that your new version is more norm-conformant, but I am curious if you think it is an equally thought-provoking / persuasive / useful presentation of the ideas.

I also think that your new version is inadequate for leaving out the important context that Reserve probably made a lot of money.

comment by Aella · 2021-11-22T22:01:27.215Z · LW(p) · GW(p)

Here's an anonymous submission of Leverage's Basic Information Acknowledgement Checklist document. The submitter said "The text of this document has been copied word for word from the original, except with names redacted."

https://we.tl/t-KaDXP3vrW3

Replies from: LarissaRowe
comment by LarissaRowe · 2021-11-23T00:17:05.024Z · LW(p) · GW(p)

I can confirm that this document is legitimate as I've seen a more recent version of the same checklist.

Leverage Research is planning to review and revise its information management policy, as soon as we have time.

Relatedly, a LessWrong user recently reached out to us directly for information about our information management policies and agreements. During the conversation, it became clear that it was difficult for them, as someone seeking information, to formulate which questions to ask and difficult for us as an organization to determine what answers they might find useful, given the differences in information and context. As a result of this conversation, we concluded it might be useful to figure out how to help people request the information that they are looking for, while at the same time protecting the institute’s time, ownership of research, and ability to carry out its mission. 

As part of this, we have now set up a request form on our website where it is possible to make information requests of the organization. We expect to respond to genuine inquiries with answers, updates to our FAQ (forthcoming), the release of documents, and more, as our other responsibilities permit.

comment by alyssavance · 2021-10-14T03:52:09.228Z · LW(p) · GW(p)

EDIT: This comment described a bunch of emails between me and Leverage that I think would be relevant here, but I misremembered something about the thread (it was from 2017) and I'm not sure if I should post the full text so people can get the most accurate info (see below discussion), so I've deleted it for now. My apologies for the confusion.

Replies from: Aella, RobbBB
comment by Aella · 2021-10-14T05:50:18.843Z · LW(p) · GW(p)

Would you happen to have/be willing to share those emails?

Replies from: alyssavance
comment by alyssavance · 2021-10-14T06:51:35.947Z · LW(p) · GW(p)

I have them, but I'm generally hesitant to share emails as they normally aren't considered public. I'd appreciate any arguments on this, pro or con

Replies from: habryka4, cata
comment by habryka (habryka4) · 2021-10-14T07:14:06.277Z · LW(p) · GW(p)

I generally feel reasonably comfortable sharing unsolicited emails, unless the email makes some kind of implicit request to not be published that I judge at least vaguely valid. In general I am against "default confidentiality" norms, especially for requests or things that might be kind of adversarial. I feel like I've seen those kinds of norms weaponized in the past in ways that seem pretty bad, and think that while there is a generally broad default expectation of unsolicited private communication being kept confidential, it's not a particularly sacred protection in my mind (unless explicitly or implicitly requested, in which case I think I would talk to the person first to get a more fully comprehensive understanding of why they requested confidentiality, and would generally err on the side of not publishing, though would feel comfortable overcoming that barrier given sufficient adversarial action).

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-14T08:22:39.667Z · LW(p) · GW(p)

unless the email makes some kind of implicit request to not be published

What does "implicit request" mean here? There are a lot of email conversations where no one writes a single word that's alluding to 'don't share this', but where it's clearly discussing very sensitive stuff and (for that reason) no one expects it to be posted to Hacker News or whatever later.

Without having seen the emails, I'm guessing Leverage would have viewed their conversation with Alyssa as 'obviously a thing we don't want shared and don't expect you to share', and I'm guessing they'd confirm that now if asked?

I do think that our community is often too cautious about sharing stuff. But I'm a bit worried about the specific case of 'normalizing big infodumps of private emails where no one technically said they didn't want the emails shared'.

(Maybe if you said more about why it's important in this specific case? The way you phrased it sort of made it sound like you think this should be the norm even for sensitive conversations where no one did anything terrible, but I assume that's not your view.)

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-15T01:58:15.743Z · LW(p) · GW(p)

What does "implicit request" mean here?

I don't know, kind of complicated, enough that I could probably write a sequence on it, and not even sure I would have full introspective access into what I would feel comfortable labeling as an "implicit request".

I could write some more detail, but it's definitely a matter of degree, and the weaker the level of implicit request, the weaker the reason for sharing needs to be, with some caveats about adjusting for people's communication skills, adversarial nature of the communication, adjusting for biases, etc.

Replies from: Spiracular
comment by Spiracular · 2021-10-17T13:38:13.144Z · LW(p) · GW(p)

I want to throw out that while I am usually SUPER on team "explicit communication norms", the rule-nuances of the hardest cases might sometimes work best if they are a little chaotic & idiosyncratic.

I personally think there might be something mildly-beneficial and protective, about having "adversarial case detected" escape-clauses that vary considerably from person-to-person.

(Otherwise, a smart lawful adversary can reliably manipulate the shit out of things.)

comment by cata · 2021-10-14T08:28:27.590Z · LW(p) · GW(p)

I would just ask the other party whether they are OK to share rather than speculating about what the implicit expectation is.

comment by Rob Bensinger (RobbBB) · 2021-10-14T04:12:00.820Z · LW(p) · GW(p)

?!?!?!?!?!?!?!?!?!

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-14T08:35:34.151Z · LW(p) · GW(p)

Update: Looks like the thing I was surprised by didn't happen. Confusion noticed, I guess!

comment by TekhneMakre · 2021-10-23T21:15:04.706Z · LW(p) · GW(p)

Off the cuff thoughts from me listening to the Twitch conversation between Anna and Geoff:

  • I think Geoff, more than he's seeing clearly, disagrees or at least in the past disagreed with the claim that using narratives to boost morale--specifically, deemphasizing information that contradicts a narrative plan--is basically just bad in the long run. It would be better to have a deeper understanding of what morale is.
  • Geoff describes being harmed by some sort of initial rejection by the rationality/EA community (around 2011? 2010?). This suggests, to me, a (totally conjectural!) story where he got into an escalating narrative cold war with the rationality community: first he perceives (possibly correctly) that the community rejects him, and thereby cuts off his ability to work with people for projects he thinks are good; then, he corrects for this with narrative pushback--basically, firmly reemphasizing his positive vision or whatever. Then people in the community sense this as narrative distortion / deception, and react (more or less consciously) with further counter-distortion. (Where the mechanism is like, they sense something fishy but don't know how to say "Geoff is slightly distorting things about Leverage's plans", so instead they want people to just not work with Geoff; but they can't just tell people to do that, so they distort facts about Geoff/Leverage to cause others to take their prefered actions; etc.)
  • [ETA: sorry for all the caveats... specifically, I do use judgy language, but don't endorse the judgements, but don't want to change the language.] [The following if taken as a judgement is very harsh and basically unfair, and it would suck to punish Geoff for having conversations like this. So please don't take it as a judgement. I want to get a handle on what's up with Geoff, so I want to describe his behavior. Maybe this is bad, LMK if you think so.] It was often hard to listen to Geoff. He seemed to talk in long, apparently low content sentences with lots of hemming and hawing and attention to appearance, and lots of very general statements that seemed to not address precisely the topic. (Again this is unfairly harsh if taken as a judgement, and also he was talking in front of 50 people, sort of.)
  • Anna says there were in the early 2010s rumors that Leverage was trying to fundraise from "other people's donors". And that Leverage/Geoff was trying to recruit, whether ideologically or employfully, employees of other EA/rationality orgs.
  • I didn't hear anything that strongly confirms or denies adversarial hypotheses like "Geoff was fairly actively doing something pretty distortiony in Leverage that caused harm, and is sort of hiding this by downplaying / redirecting attention / etc.".
  • Broadly it would be really good to understand better how to have world-saving narratives and such, especially ones that can recruit and retain political will if they really ought to, without narrative fraud / information cascades / etc.
Replies from: AnnaSalamon, Vaniver, Unreal, BlueMarlin
comment by AnnaSalamon · 2021-10-25T04:14:56.707Z · LW(p) · GW(p)

Thanks! I would love follow-up on LW to the twitch stream, if anyone wants to. There were a lot of really interesting things being said in the text chat that we didn’t manage to engage with, for example. Although the recording was lost, which is unfortunate because IMO it was a great conversation.

TekhneMakre writes:

This suggests, to me, a (totally conjectural!) story where [Geoff] got into an escalating narrative cold war with the rationality community: first he perceives (possibly correctly) that the community rejects him…

This seems right to me

Anna says there were in the early 2010s rumors that Leverage was trying to fundraise from "other people's donors". And that Leverage/Geoff was trying to recruit, whether ideologically or employfully, employees of other EA/rationality orgs.

Yes. My present view is that Geoff’s reaching out to donors here was legit, and my and others’ complaints were not; donors should be able to hear all the pitches, and it’s messed up to think of “person reached out to donor X to describe a thingy X might want to donate to” as a territorial infringement.

This seems to me like an example of me and others escalating the “narrative cold war” that you mention.

[Geoff] seemed to talk in long, apparently low content sentences with lots of hemming and hawing and attention to appearance…

I noticed some of this, though less than I might’ve predicted from the background context in which Geoff was, as you note, talking to 50 people, believing himself to be recorded, and in an overall social context in which a community he has long been in a “narrative cold war” with (under your hypothesis, and mine) was in the midst of trying to decide whether to something-like scapegoat him.

I appreciate both that you mentioned your perception (brought it into text rather than subtext, where we can reason about it, and can try to be conscious of all the things together), and that you’re trying to figure out how to incentivize and not disincentivize Geoff’s choice to do the video (which IMO shared a bunch of good info).

I’d like to zoom in on an example that IMO demonstrates that the causes of the “hemming and hawing” are sometimes (probably experience-backed) mistrust of the rationalist community as a [context that is willing to hear and fairly evaluate his actual evidence], rather than, say, desire for the truth to be hidden:

At one point toward the end of the twitch, Geoff was responding to a question about how we got from a pretty cooperative state in ~2013, and said something kinda like “… I’m trying to figure out how to say this without sounding like I’m being unfair your side of things,” or something, and I was like “maybe just don’t, and I or others can disagree if we think you’re wrong,” and then he sort of went “okay, if you’re asking for it” and stopped hemming and hawing and told a simple and direct story about how in the early days of 2011-2014, Leverage did a bunch of things to try to cause specific collaborations that would benefit particular other groups (THINK, the original EA leaders gathering in the Leverage house in 2013, the 2014 retreat + summit, a book launch party for ‘Our Final Invention’ co-run with SingInst, some general queries about what kind of collaborations folks might want, early attempts to merge with SingInst and with 80k), and how he would’ve been interested in and receptive to other bids for common projects if I or others had brought him some. And I was like “yes, that matches my memory and perception; I remember you and Leverage seeming unusually interested in getting specific collaborations or common projects that might support your goals + other groups’ goals at once, going, and more than other groups, and trying to support cooperation in this way” and he seemed surprised that I would acknowledge this.

So, I think part of the trouble is that Geoff didn’t have positive expectations of us as a context in which to truth-seek together.

One partial contributor to this expectation of Geoff’s, I would guess, is the pattern via which (in my perception) the rationalist community sometimes decides peoples’ epistemics/etc. are “different and bad” and then distances from them, punishes those who don’t act as though we need to distance from them, etc., often in a manner that can seem kinda drastic and all-or-nothing, rather than docking points proportional to what it indicates about a person’s likely future ability to share useful thoughts in a milder-mannered fashion. For example, during a panel discussion at the (Leverage-run) 2014 EA Summit, in front of 200 people, I asked Geoff aloud whether he in fact thought that sticking a pole though someone’s head (a la Phineas Gage) would have no effect on their cognition except via their sense-perception. Geoff answered “yes”, as I expected since he’d previously mentioned this view. And… there was a whole bunch of reaction. E.g., Habryka, in the twitch chat, mentioned having been interning with Leverage at the time of that panel conversation, and said “[that bit of panel conversation] caused me nightmares… because I was interning at Leverage at the time, and it made me feel very alienated from my environment. And felt like some kind of common ground was pulled out from under me.”

I for many years often refrained from sharing some of the positive views/data/etc. I had about Leverage, for fear of being [judged or something] for it. (TBC, I had both positive and negative views, and some error bars. But Leverage looked to me like well-meaning people who were trying a hard-core something that might turn out cool, and that was developing interesting techniques and models via psychological research, and I mostly refrained from saying this because I was cowardly about it in response to social pressure. … in addition to my usual practice of sometimes refraining from sharing some of my hesitations about the place, as about most places, in a flinchy attempt to avoid conflict.)

I didn't hear anything that strongly confirms or denies adversarial hypotheses like "Geoff was fairly actively doing something pretty distortiony in Leverage that caused harm, and is sort of hiding this by downplaying / redirecting attention / etc.".

My guess is that he was and is at least partially doing some of this, in addition to making an earnest (and better than I’d expected on generic-across-people priors) effort to share true things. Re: the past dynamics, I and IMO others were also doing actively distortionary stuff, and I think Geoff’s choices, and mine and others’, need to be understood together, as similar responses to a common landscape.

As I mentioned in the twitch that alas didn’t get recorded, in ~2008-2014, ish, somehow a lot of different EA and rationality and AI risk groups felt like allies and members of a common substantive community, at least in my perception (including my perception of the social context that I imagined lots of other people were in). And later on, most seemed to me to kinda give up on most of the others, opting still for a social surface of cooperation/harmony, but without any deep hope in anyone else of the sort that might support building common infrastructure, really working out any substantive disagreements (with tools of truth-seeking rather than only truce-seeking/surface-harmony-preservation, etc.). (With some of the together-ness getting larger over time in the early years, and then with things drifting apart again.) I’m really interested in whether that transition matches others’ perceptions, and, if so, what y’all think the causes were. IMO it was partly about what I’ve been calling “narrative addiction” and “narrative pyramid schemes,” which needs elaboration rather than a set of phrases (I tried this a bit in the lost twitch video) but I need to go now so may try it later.

Replies from: ChristianKl, TekhneMakre
comment by ChristianKl · 2021-10-25T16:01:34.667Z · LW(p) · GW(p)

I have video of the first 22 minutes, but at the end I switched into my password manager (not showing passwords on screen, but a series of sites where I'm registered), so I wouldn't want to publicly post the video. I'm open to sharing it with individual people if someone wants to write something referencing it.

I wish I had been clearer about how to do screen recording in a way that only captures one browser window...

Replies from: TekhneMakre, elityre
comment by TekhneMakre · 2021-10-25T16:19:51.751Z · LW(p) · GW(p)

How about posting the audio?

Replies from: ChristianKl
comment by ChristianKl · 2021-10-25T23:06:36.895Z · LW(p) · GW(p)

Geoff asked me to leave public publication to him. I sent him my video with the last minute (where I had personal information) cut off. Given that I do think Geoff made a good effort to be cooperative, and there's no claim that something happened during the stream other than as has been described, I see no reason to unilaterally publish it publicly.

Replies from: BlueMarlin, habryka4, RobbBB
comment by BlueMarlin · 2021-11-03T19:30:48.411Z · LW(p) · GW(p)

Noting that it has been 9 days and Geoff has not yet followed through on publishing the 22-minute video. Thankfully, however, a complete audio recording [LW(p) · GW(p)] has been made available by another user.

Replies from: ChristianKl
comment by ChristianKl · 2021-11-03T22:15:50.442Z · LW(p) · GW(p)

On https://www.geoffanders.com/ there's the link to https://www.dropbox.com/s/pt3q5xejglsgrcr/1st%20Twitch%20Stream%20-%20Leverage%20History%20-%20Beginning.mp4?dl=0# so he did follow through.

Replies from: BlueMarlin
comment by BlueMarlin · 2021-11-04T02:54:24.957Z · LW(p) · GW(p)

I notice that my comment score above is now zero. I would like others to know that I visited Geoff's website prior to posting my comment to ensure my comment was accurate, and that these links appeared after my above comment.

Replies from: habryka4, ChristianKl
comment by habryka (habryka4) · 2021-11-04T07:10:53.056Z · LW(p) · GW(p)

I did indeed misunderstand that! I didn't downvote, but my misunderstanding did cause me to not upvote. 

comment by ChristianKl · 2021-11-04T14:33:27.214Z · LW(p) · GW(p)

Geoff wrote me six days ago that he put it on his website. 

Replies from: BlueMarlin
comment by BlueMarlin · 2021-11-04T18:23:25.662Z · LW(p) · GW(p)

It is possible that I missed the link, in which case I apologize, although I am surprised because I did check the website. It doesn't seem that the web archive can verify timestamps.

I am glad I wrote my comments anyway, so that the links have now been shared here on LW (which I don't think they were before), especially since Lulie's recording that I linked above seems to have been taken down.

comment by habryka (habryka4) · 2021-10-26T19:06:29.546Z · LW(p) · GW(p)

Geoff was interested in publishing a transcript and a video, so I think Geoff would be happy with you publishing the audio from the recording you have.

comment by Rob Bensinger (RobbBB) · 2021-10-25T23:24:45.968Z · LW(p) · GW(p)

Hope to see this posted soon! I missed the first hour of the Twitch video. (Though I'm guessing the part I saw, Geoff and Anna talking, was the most valuable part.)

comment by Eli Tyre (elityre) · 2021-10-29T11:22:32.502Z · LW(p) · GW(p)

Is that to say that you have audio of the whole conversation, and video of the first 20 minutes?

Replies from: ChristianKl
comment by ChristianKl · 2021-10-29T12:12:48.307Z · LW(p) · GW(p)

I have a recording of 22 minutes. The last minute includes me switching into my password manager and thus I cut it off from the video that I passed on.

Replies from: TurnTrout
comment by TurnTrout · 2021-10-29T12:49:34.087Z · LW(p) · GW(p)

I think the question is: Why not send the audio from after the 22 minute mark? Then we won't be able to see the password manager.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-29T13:13:25.443Z · LW(p) · GW(p)

I don't have anything after the 22 minute mark. I have a recording of 22 minutes and passed on 21 minutes of it.

At the time, I didn't want to focus my cognitive resources on figuring out recording but on the actual content (and you can actually see me writing my comment ;) in the video).

Replies from: TurnTrout
comment by TurnTrout · 2021-10-29T13:33:27.091Z · LW(p) · GW(p)

Makes sense, thanks for clarifying and for sharing what you have.

comment by TekhneMakre · 2021-10-25T08:05:55.637Z · LW(p) · GW(p)

A few more half-remembered notes from the conversation: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=e8vL8nyTGwDLGnR3r#Yrk2375Jt5YTs2CQg [LW(p) · GW(p)]

but without any deep hope in anyone else of the sort that might support building common infrastructure, really working out any substantive disagreements

If this is true, it does strike me as important and interesting.

what y’all think the causes were

Speaking from a very abstract viewpoint not strongly grounded in observations, I'll speculate:

any deep hope

One contributor, naturally, would be fear of false hope. One is (correctly) afraid of hope because hope somewhat entails investment and commitment. Fear of false hope could actually make hope be genuinely false, even when there could have been true hope. This happens because hope is to some extent a decision, so *expecting* you and others in the future to not collaborate in some way, also *constitutes a decision* to not collaborate in that way. If you will in the future behave in accordance with a plan, then it's probably correct to behave now in accordance with the plan; and if you will not, then it's probably correct to not now. (I tried to meditate on this in the footnotes to my post Hope and False Hope [LW · GW].) (Obviously most things aren't very subject to this belief-plan mixing, and things where we can separate beliefs from plans are very useful for building foundations, but some non-separable things are important, e.g. open-ended collaboration.)

This feels maybe related to a comment you, Anna, made in the conversation about Geoff seeming somewhat high on a dimension of manic-ness or something, and he said others have said he seems hypomanic. The story being: Geoff is more hopeful and hope-based in general, which explains why he sought collaboration, caused collective hope in EA, and ended up feeling he had to defend his org's hope against hope-destroyers (which hope he referred to as "morale").

working out any substantive disagreements

I kind of get the impression, based on public conversations, that some people (e.g. Eliezer) get stuck with disagreements because the real reasons for their beliefs are ideas that they don't want to spread, e.g. ideas about how intelligence works. I'm thinking, for example, of Yudkowsky-Christiano-Hanson-Drexler disagreements, and also of disagreements about likely timelines. Is that a significant part of it?

truce-seeking/surface-harmony-preservation

I guess this is an obvious hypothesis, but worth stating: to the extent that people viewed things as zero-sum around recruiting mind-share, and other things beholden to third parties like funding or relationships to non-EA/x-risk orgs, there's an incentive to avoid public fights (which would be negative sum for the combatants), but also to avoid updating on core beliefs (which would "hurt" the updater, in terms of mind-share). Related to the thing about fundraising to "our donors" and poaching employees. It'd be nice to be clearer on who's lying to whom in this scenario. Org leaders are lying to donors, to employees, to other orgs, to themselves... basically everyone I guess...

I imagine (even more speculatively) there being a sort of deep ambiguity about supposedly private conversations aimed at truth-seeking, where there's a lot of actual intended truth seeking, but also there's the specter of "If I update too much about these background founding assumptions of my strategy, I'll have to start from scratch and admit to everyone I was deeply mistaken", as well as "If I can get the other person to deeply update, that makes the environment more hospitable to my strategy", which might lead one to direct attention away from one's own cruxes.

(I also feel like there's something about specialization or commitment that's maybe playing in to all this. On the one hand, people with something to protect want to deeply update and do something else if their foundational strategic beliefs are wrong; on the other hand, throwing out your capital is maybe bad policy. E.g., Elon Musk didn't drop his major projects upon realizing about AI risk, and that's not obviously a mistake?)

comment by Vaniver · 2021-10-25T06:11:26.220Z · LW(p) · GW(p)

Geoff describes being harmed by some sort of initial rejection by the rationality/EA community (around 2011? 2010?).

One of the interesting things about that timeframe is that a lot of the stuff is online; here's the 2012 discussion (Jan 9th [LW(p) · GW(p)], Jan 10th [? · GW], Sep 19th [LW · GW]), for example. (I tried to find his earliest comment that I remembered, but I don't think it was with the Geoff_Anders [LW · GW] account or it wasn't on LessWrong; I think it was before Leverage got started, and people responded pretty skeptically then also?)

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-25T06:43:35.223Z · LW(p) · GW(p)

Thanks!

One takeaway: Eliezer's interaction with Geoff [LW(p) · GW(p)] does seem like Eliezer was making some sort of mistake. Not sure what the core is, but, one part is like conflating [evidence, the kind that can be interpersonally verified] with [evidence, the kind that accumulates subconsciously as many abstract percepts and heuristics, which can be observably useful while still pre-theoretic, pre-legible]. Like, maybe Eliezer wants to only talk with people where either (1) they already have enough conceptual overlap that abstract cutting-edge theories also always cash out as perceivable predictions, or (2) aren't trying to share pre-legible theories. But that's different from Geoff making some terrible incurable mistake of reasoning. (Terrible incurable mistakes are certainly correlated with illegibility, but that's not something to Goodhart.)

Replies from: Vaniver
comment by Vaniver · 2021-10-25T17:04:20.895Z · LW(p) · GW(p)

I'm sort of surprised that you'd interpret that as a mistake. It seems to me like Eliezer is running a probabilistic strategy, which has both type I and type II errors, and so a 'mistake' is something like "setting the level wrong to get a bad balance of errors" instead of "the strategy encountered an error in this instance." But also I don't have the sense that Eliezer was making an error.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-25T18:01:40.893Z · LW(p) · GW(p)
It seems to me like Eliezer is running a probabilistic strategy

It sounds like this describes every strategy? I guess you mean, he's explicitly taking into account that he'll make errors, and playing the probabilities to get good expected value. So this makes sense, like I'm not saying he was making a strategic mistake by not, say, working with Geoff. I'm saying:

(internally) Well this is obviously wrong. Minds just don't work by those sorts of bright-line psychoanalytic rules written out in English, and proposing them doesn't get you anywhere near the level of an interesting cognitive algorithm.[...]
(out loud) What does CT say I should experience seeing, that existing cognitive science wouldn't tell me to expect?
Geoff: (Something along the lines of "CT isn't there yet"[...])
(out loud) Okay, then I don't believe in CT because without evidence there's no way you could know it even if it was true.

sounds like he's conflating shareable and non-shareable evidence. Geoff could have seen a bunch of stuff and learned heuristics that he couldn't articulately express other than with silly-seeming "bright-line psychoanalytic rules written out in English". Again, it can make sense to treat this as "for my purposes, equivalent to being obviously wrong". But like, it's not really equivalent, you just *don't know* whether the person has hidden evidence.

Replies from: Taran
comment by Taran · 2021-10-26T08:54:46.984Z · LW(p) · GW(p)

Even if all you have is a bunch of stuff and learned heuristics, you should be able to make testable predictions with them.  Otherwise, how can you tell whether they're any good or not?

Whether the evidence that persuaded you is sharable or not doesn't affect this.  For example, you might have a prior that a new psychotherapy technique won't outperform a control because you've read like 30 different cases where a leading psychiatrist invented a new therapy technique, reported great results, and then couldn't train anyone else to get the same results he did.  That's my prior, and I suspect it's Eliezer's, but if I wanted to convince you of it I'd have a tough time because there's not really a single crux, just those 30 different cases that slowly accumulated.  And yet, even though I can't share the source of my belief, I can use it to make concrete testable predictions: when they do an RCT for the 31st therapy technique, it won't outperform the control.

Geoff-in-Eliezer's-anecdote has not reached this point.  This is especially bad for a developing theory: if Geoff makes a change to CT, how will he tell if the new CT is better or worse than the old one?  Geoff-replying-to-Eliezer takes this criticism seriously, and says he can make concrete, if narrow, predictions about specific people he's charted.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-26T09:51:58.471Z · LW(p) · GW(p)
you should be able to make testable predictions with them

Certainly. But you might not be able to make testable predictions for which others will readily agree with your criteria for judgement. In the exchange, Geoff gives some "evidence", and in other places he gives additional "evidence". It's not really convincing to me, but it at least has the type signature of evidence. Eliezer responds:

Which sounds a lot like standard cognitive dissonance theory

This is eliding that Geoff probably has significant skill in identifying more detail of how beliefs and goals interact, beyond just what someone would know if they heard about cognitive dissonance theory. Like basically I'm saying that if Eliezer sat with Geoff for a few hours through a few sessions of Geoff doing his thing with some third person, Eliezer would see Geoff behave in a way that suggests falsifiable understanding that Eliezer doesn't have. (Again, not saying he should have done that or anything.)

comment by Unreal · 2021-10-25T10:36:56.224Z · LW(p) · GW(p)

Well, the video is lost. But my friend Ben Pace (do you know him? he is great) was kind enough to take notes on what he said specifically in response to my question. 

My question was something like: "Why do you think some people are afraid of retaliation from you? Have you made any threats? Have you ever retaliated against a Leverage associate?" This is not the exact wording but close enough. I used the words "spiteful, retaliatory, or punishing" so he repeats that in his answer. 

I also explicitly told him he didn't have to answer any of these questions, like I wasn't demanding that he answer them. 

I am pasting Geoff's response below. 

  • Great questions.
  • Um.
  • Off the top of my head I don’t recall spiteful retaliatory or punishing actions. Um. I do think that I… There’s gotta be some other category of actions taken in anger where… I can think of angry remarks that I’ve made, absolutely. I can think of some actions that don’t pertain to Leverage associates that after thinking about for a while I realized there was something I was pretty angry about. In general I try to be really constructive, there’s definitely, let’s see, so… There’s definitely a mode that, it’s like, I like to think of all sorts of different possibilities of things to do, for example this was for EAG a while back, we were going to go and table at EAG and see if there’s anyone who is good to hire, we received word from CEA that we weren’t allowed to table there, super mad about that because we created the EA Summit series and handed it to CEA, being disinvited from the thing we started, I was really mad “let’s go picket and set up in front of EAG and tell people about this”, y’know a number of people responded to that suggestion really negatively, and… Maybe the thing I want to say is I think there’s something like appropriate levels of response, and the thing I really want to do is to have appropriate levels of response. It’s super hard to never get mad or never be insulted but the thing I try super hard to do is to get to the point where there’s calibrated response. So maybe y’know there’s something in there… I have been in fact really surprised when people talked about extreme retaliation, I’m like “What.” (!).
  • There’s definitely a line of thought I’ve seen around projects where they deal with important things, where people are like “The project is so important we must do anything to protect it” which I don’t agree with, y’know, I shut down Leverage because I talked to someone who was suffering too much and I was like “No” and then that was that.
Replies from: Unreal
comment by Unreal · 2021-10-25T11:08:13.006Z · LW(p) · GW(p)

Anna asked a relevant follow-up question. She said something like: I expect picketing to be [a more balanced response] because it's a public action. What about [non-public] (hidden) acts of retaliation? 

I saw some of his reaction to this before my internet cut out again. (I think he could have used a hug in that moment... or maybe just me, maybe I could use a hug right now.) 😣

From the little glimpses I got (pretty much only during the first hour Q&A section), I got this sense (these are my own feelings and intuitions speaking):

  • I did not sense him being 'in cooperate mode' on the object level, but he seemed to be 'picking cooperate' on a meta level. He was trying to act according to good principles. E.g. by doing the video at all, and the way he tried to answer Qs by saying only true things. He tried not to come from a defensive place.
  • He seemed to keep to his own 'side of the street'. Did not try to make claims about others, did not really offer models of others, did not speculate. I think he may have also been doing the same thing with the people in the chat? (I dunno tho, I didn't see 90%.) Seems 'cleaner' to do it this way and avoids a lot of potential issues (like saying something that's someone else's to say). But meh, it's also too bad we didn't get to see his models about the people. 
comment by BlueMarlin · 2021-10-23T22:08:50.649Z · LW(p) · GW(p)
  • [ETA: sorry for all the caveats... specifically, I do use judgy language, but don't endorse the judgements, but don't want to change the language.] [The following, if taken as a judgement, is very harsh and basically unfair, and it would suck to punish Geoff for having conversations like this. So please don't take it as a judgement. I want to get a handle on what's up with Geoff, so I want to describe his behavior. Maybe this is bad, LMK if you think so.] It was often hard to listen to Geoff. He seemed to talk in long, apparently low-content sentences with lots of hemming and hawing and attention to appearance, and lots of very general statements that seemed to not address precisely the topic. (Again this is unfairly harsh if taken as a judgement, and also he was talking in front of 50 people, sort of.)


I don't think it's bad of you. It seemed to me that he was deflecting or redirecting many of the points Anna was trying to get at.

comment by cousin_it · 2021-10-13T08:59:36.344Z · LW(p) · GW(p)

Good stuff. Very similar to DeMille's interview about Hubbard. As an aside, I love how the post rejects the usual positive language about "openness to experience" and calls the trait what it is: openness to influence.

comment by Dustin · 2021-10-13T23:27:38.384Z · LW(p) · GW(p)

While I'm not hugely involved, I've been reading OB/LW since the very beginning. I've likely read 75% of everything that's ever been posted here.

So, I'm way more clued-in to this and related communities than your average human being and...I don't recall having heard of Leverage until a couple of weeks ago.

I'm not exactly sure what that means with regard to PR-esque type considerations.

However.  Fair or not, I find that, having read the recent stuff, I've got an ugh field that extends to slightly include LW.  (I'm not sure what it means to "include LW"...it's just a website.  My first stab at an explanation is that it's more like "people engaged in community type stuff who know IRL lots of other people who communicate on LW", but that's not exactly right either.)

I think it'd be good to have some context on why any of this is relevant to LessWrong. The whole thing is generating a ton of activity and it feels like it just came out of nowhere. 

Replies from: vanessa-kosoy, RobbBB, ozziegooen, agc
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-14T09:34:02.517Z · LW(p) · GW(p)

Personally I think this story is an important warning about how people with a LW-adjacent mindset can death spiral [? · GW] off the deep end. This is something that has happened around this community multiple times, not just in Leverage (I know of at least one other prominent example and suspect there are more), so we should definitely watch out for this and/or think about how to prevent this kind of thing.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-14T10:17:03.716Z · LW(p) · GW(p)

What's the other prominent example you have in mind?

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-14T10:46:50.067Z · LW(p) · GW(p)

I am referring to the cause of this incident. This seems like a possibly good source for more information, but I only skimmed it so don't vouch for the content.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-14T17:15:34.939Z · LW(p) · GW(p)

Thanks.

comment by Rob Bensinger (RobbBB) · 2021-10-13T23:49:44.209Z · LW(p) · GW(p)

Leverage has always been at least socially adjacent to LW and EA (the earliest discussion I find is in 2012 [LW · GW]), and they hosted the earliest EA summits in 2013-2014 (before CEA started running EA Global).

Replies from: Dustin
comment by Dustin · 2021-10-14T00:03:27.736Z · LW(p) · GW(p)

Having seen it, I have a very vague recollection of maybe having read that at the time.  Still, the amount of activity on the recent posts about Leverage seems to me all out of proportion with previous mentions/discussions.  

Replies from: Freyja
comment by Freyja · 2021-10-14T00:07:05.036Z · LW(p) · GW(p)

Also, for the extended Leverage diaspora and people who are somehow connected, LessWrong is probably the most obvious place to have this discussion, even if people familiar with Leverage make up only a small proportion of people who normally contribute here.

There are other conversations happening on Facebook and Twitter but they are all way more fragmented than the ones here.

Replies from: BayAreaHuman
comment by BayAreaHuman · 2021-10-14T01:06:55.178Z · LW(p) · GW(p)

I originally chose LessWrong, instead of some other venue, to host the Common Knowledge post primarily because (1) I wanted to create a publicly-linkable document pseudonymously, and (2) I expected high-quality continuation of information-sharing and collaborative sense-making in the comments.

comment by ozziegooen · 2021-10-14T14:32:14.528Z · LW(p) · GW(p)

As someone part of the social communities, I can confirm that Leverage was definitely a topic of discussion for a long time around Rationalists and Effective Altruists. That said, often the discussion went something like, "What's up with Leverage? They seem so confident, and take in a bunch of employees, but we have very little visibility." I think I experienced basically the same exact conversation about them around 10 times, along these lines.

As people from Leverage have said, several Rationalists/EAs were very hostile around the topic of Leverage, particularly in the last ~4 years or so. (I've heard stories of people getting shouted at just for saying they worked at Leverage at a conference). On the other hand, they definitely had support from a few rationalist/EA orgs and several higher-ups of different kinds.

They've always been secretive, and some of the few public threads didn't go well for them, so it's not too surprising to me that they've had a small LessWrong/EA Forum presence.

I've personally very much enjoyed mostly staying away from the controversy, though very arguably I made a mistake there.

(I should also note that I had friends who worked at or worked close to Leverage, I attended like 2 events there early on, and I applied to work there around 6 years ago)

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2021-10-15T04:01:59.379Z · LW(p) · GW(p)

For what it's worth, my opinion is that you sharing your perspective is the opposite of making a mistake.

Replies from: ozziegooen
comment by ozziegooen · 2021-10-15T04:21:26.900Z · LW(p) · GW(p)

Sorry, edited. I meant that it was a mistake for me to keep away before, not now.

(That said, this post is still quite safe. It's not like I have scandalous information, more that, technically I (or others) could do more investigation to figure out things better.)

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2021-10-15T04:27:26.210Z · LW(p) · GW(p)

Yeah. At this point, everyone coming together to sort this out, as a way of building a virtuous spiral where speaking up feels safe enough that it doesn't even need to be a courageous thing to do or whatever, is the kind of thing I think your comment also represents and what I was getting at. 

comment by agc · 2021-10-14T13:32:49.852Z · LW(p) · GW(p)

A 2012 CFAR workshop [LW · GW] included "Guest speaker Geoff Anders presents techniques his organization has used to overcome procrastination and maintain 75 hours/week of productive work time per person." He was clearly connected to the LW-sphere if not central to it.

comment by Raemon · 2021-10-16T01:22:05.206Z · LW(p) · GW(p)

My own experience is somewhat like Linch's here [LW(p) · GW(p)], where mostly I'm vaguely aware of some things that aren't my story to tell.

For most of the past 9ish years I'd found Leverage "weird/sometimes-offputting, but not obviously moreso than other rationality orgs." I have gotten personal value out of the Leverage suite of memes and techniques (Belief Reporting was a particularly valuable thing to have in my toolkit). 

I've received one bit of secondhand info about "An ex-leverage employee (not Zoe) had an experience that seemed reasonable to describe as 'the bad kind of cult that was actually harmful'." I was told this as part of a decisionmaking process where it seemed relevant, and asked not to share it further in the past couple years. I think it makes sense to share this much meta-data in this context.

comment by farp · 2021-10-15T05:52:08.684Z · LW(p) · GW(p)

Re: @Ruby on my brusqueness

LW/EA has more "world saving" orgs than just Leverage. Implicit in "world saving" orgs, IMO, is that we should tolerate some impropriety for the greater good. Or that we should handle things quietly in order to not damage the greater mission. 

I think that our "world saving" orgs ask a lot of trust from the broader community -- MIRI is a very clear example. I'm not really trying to condemn secrecy I am just pointing out that trust is asked of us.

I recognize that this is inflammatory but I don't see a reason to beat around the bush:
Leverage really seems like a cult. It seems like an unsafe institution doing harmful things. I am not sure how much this stuff about Leverage is really news to people involved in our other "world saving" orgs. I think probably not much. I don't want "world saving" orgs to have solidarity. If you want my trust you have to sell out the cult leaders, the rapists, etcetera, regardless of whether it might damage your "world saving" mission. I'm not confident that that's occurring.

Replies from: Ruby, farp, Dustin
comment by Ruby · 2021-10-15T15:51:22.988Z · LW(p) · GW(p)

IMO, is that we should tolerate some impropriety for the greater good.

I agree!

I am just pointing out that trust is asked of us.

I agree!

Leverage really seems like a cult. It seems like an unsafe institution doing harmful things.

Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0 (remote team, focus on science history rather than psychology, 4 people).

I am not sure how much this stuff about Leverage is really news to people involved in our other "world saving" orgs.

The information in Zoe's Medium post was significant news to me and others I've spoken to. 

(saying the below for general clarity, not just in response to you)

I think everyone (?) in this thread is deeply concerned, but we're hoping to figure out what exactly happened, what went wrong and why (and what maybe to do about it). To do that investigation and postmortem, we can't skip to sentencing (forgive me if that's not your intention, but it reads a bit to me that that's what you want to be happening), nor would it be epistemically virtuous or just to do so. 

Some major new information came to light, people need time to process it, surface other relevant information, and make statements. The matter is complicated by forces inhibiting people from speaking both in favor and against Leverage. If there's any reluctance to "sell out" Leverage, it's because people want to have the full conversation first, not because of any sense of solidarity that we're all "world saving" orgs.

Replies from: mayleaf, farp
comment by mayleaf · 2021-10-15T23:44:51.515Z · LW(p) · GW(p)

To do that investigation and postmortem, we can't skip to sentencing (forgive me if that's not your intention, but it reads a bit to me that that's what you want to be happening), nor would it be epistemically virtuous or just to do so. 

I super agree with this, but also want to note that I feel appreciation for farp's comments here. The conversation on this page feels to me like it has a missing mood: I found myself looking for comments that said something like "wow, this account is really horrifying and tragic; we're taking these claims really seriously, and are investigating what actions we should take in response". Maybe everyone thinks that that's obvious, and so instead is emphasizing the part where we're committed to due process and careful thinking and avoiding mob dynamics. But I think it's still worth stating explicitly, especially from those in leadership positions in the community. I found myself relieved just reading Ruby's response here that "everyone in this thread is deeply concerned".

Replies from: Ruby, RobbBB
comment by Ruby · 2021-10-16T00:40:46.923Z · LW(p) · GW(p)

I super agree with this, but also want to note that I feel appreciation for farp's comments here.

Fair!

I found myself looking for comments that said something like "wow, this account is really horrifying and tragic; we're taking these claims really seriously, and are investigating what actions we should take in response"

My models of most of the people I know in this thread feel that way. I can say on my own behalf that I found Zoe's account shocking. I found it disturbing to think that was going on with people I knew and interacted with.  I find it disturbing that, if this really is true, it did not surface until now (or was ignored until now).  I'm disturbed that Leverage's weirdness (and usually I'm quite okay with weirdness) turned out to enable and hide terrible things, at least for one person and likely more. I'm saddened that it happened, because based on the account, it seems like Leverage were trying to accomplish some ambitious, good things and I wish we lived in a world where the "red flags" (group-living, mental experimentation, etc.) could be safely ignored in the pursuit of great things. 

Suddenly I am in a world more awful than the one I thought I was in, and I'm trying to reorient. Something went wrong and something different needs to happen now. Though I'm confident it will, it's just a matter of ensuring we pick the right different thing. 

Replies from: mayleaf
comment by mayleaf · 2021-10-16T00:47:26.949Z · LW(p) · GW(p)

Thank you, I really appreciate this response. I did guess that this was probably how you and others (like Anna, whose comments have been very measured) felt, but it is really reassuring to have it explicitly verbally confirmed, and not just have to trust that it's probably true.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-20T02:18:51.619Z · LW(p) · GW(p)

Sorry, only just now saw that I was mentioned by name here. I agree that Zoe's experiences were horrifying and sad, and that it's worth quite a bit to try to spare others that kind of thing. Not mangling peoples' souls matters, rather a lot, both intrinsically (because people matter) and instrumentally (because we need integrity if we want to do anything real and sustained).

comment by farp · 2021-10-15T22:18:31.588Z · LW(p) · GW(p)

The information in Zoe's Medium post was significant news to me and others I've spoken to. 

That's a good thing to assert. 
It seems preeeetty likely that some leaders in the community knew more or less what was up. I want people to care about whether that is true or not.

To do that investigation and postmortem, we can't skip to sentencing

I get this sentiment, but at the same time I think it's good to be clear about what is at stake. It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us. 

Simply put, if I were a victim, I would want to speak up for the sake of accountability, not shared examination and learning. If I spoke up and found that everyone agreed the behavior was bad, but we all learned from it and are ready to move on, I would be pretty upset by that. And my understanding is that this is how the community's leaders have handled other episodes of abuse (based on 0 private information, only public / second hand information).

But I am coming into this with a lot of assumptions as an outsider. If these assumptions don't resonate with any people who are closer to the situation then I apologize. Regardless sorry for stirring shit up with not much concrete to say. 

Replies from: Viliam, Ruby
comment by Viliam · 2021-10-16T21:05:28.013Z · LW(p) · GW(p)

It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us.

Given my high priors on "the past behavior is the best predictor of future behavior", I would assume that the greatest difference will be better OPSEC and PR [LW · GW]. Also, more resources to silence critics.

comment by Ruby · 2021-10-15T22:49:35.889Z · LW(p) · GW(p)

It seems preeeetty likely that some leaders in the community knew more or less what was up. I want people to care about whether that is true or not.

I would be quite surprised if the people I would call leaders knew of things that were as severe as Zoe's account and "did nothing". I care a lot whether that's true.

It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us. 

My intention was to say that we don't have reason to believe there is harm actively occurring right now that we need to intervene on immediately. A day or two to figure things out is fine.

Simply put, if I were a victim, I would want to speak up for the sake of accountability, not shared examination and learning.

Based on what Zoe said plus general models of these situations, I believe how victims feel is likely complicated. I'm hesitant to make assumptions here. (Btw, see here for where some people are trying to set up an anonymous database of experiences at Leverage).

And my understanding is that this is how the community's leaders have handled other episodes of abuse (based on 0 private information, only public / second hand information).

I might suggest creating another post (so as to not interfere too much with this one) detailing what you believe to be the case so that we can discuss and figure out any systematic issues.

Replies from: farp, ChristianKl
comment by farp · 2021-10-17T05:54:28.123Z · LW(p) · GW(p)

I might suggest creating another post (so as to not interfere too much with this one) detailing what you believe to be the case so that we can discuss and figure out any systematic issues.

Look uhhh I believe at the very least the most basic claims about how Anna handled Robert Lecnik.

I would be quite surprised if the people I would call leaders knew of things that were as severe as Zoe's account and "did nothing". I care a lot whether that's true.

👍 (non sarcastic)

Replies from: philh
comment by philh · 2021-10-17T23:33:17.895Z · LW(p) · GW(p)

ὄD

(This renders on my phone as an o with a not-umlaut-but-similar over it followed by a D, and I don't know whether that's what it was intended to look like and I just don't know what it means, or if it's intended to look different than that.)

Replies from: farp
comment by farp · 2021-10-17T23:45:50.573Z · LW(p) · GW(p)

it's a thumbs-up emoji on macOS. 👍

comment by ChristianKl · 2021-10-18T21:59:11.194Z · LW(p) · GW(p)

Based on what Zoe said plus general models of these situations, I believe how victims feel is likely complicated. I'm hesitant to make assumptions here. (Btw, see here for where some people are trying to set up an anonymous database of experiences at Leverage).

Having a database run by an anonymous person for that purpose seems to be very questionable. Zoe's edited her post to reference Aella as a point person for people who want to share their stories, so that's likely the best place.

Replies from: Ruby
comment by Ruby · 2021-10-18T22:16:19.607Z · LW(p) · GW(p)

That is the database run by Aella. By anonymous I meant it's anonymous for the posters.

comment by farp · 2021-10-15T06:04:58.835Z · LW(p) · GW(p)

That's my context. However I agree that my contributions haven't been very high EV in that I'm very far on the outside of a delicate situation and throwing my weight around. So I won't keep trying to intervene / subtextually post.

comment by Dustin · 2021-10-15T21:44:58.635Z · LW(p) · GW(p)

we should tolerate some impropriety for the greater good

 

On one level I think this is correct, but...I also think it's possibly a little naïve.  

In the potential world which consists of only "us", the people who think this world saving needs to be done, and who think like "we" do, your statement becomes more true. 

In the world we live in wherein the vast majority of people think the world saving we're talking about is unimportant, or bad, or evil, your statement requires closer and closer to perfect secrecy and insularity to remain true.