Posts

The feeling of breaking an Overton window 2021-02-17T05:31:40.629Z
“PR” is corrosive; “reputation” is not. 2021-02-14T03:32:24.985Z
Where do (did?) stable, cooperative institutions come from? 2020-11-03T22:14:09.322Z
Reality-Revealing and Reality-Masking Puzzles 2020-01-16T16:15:34.650Z
We run the Center for Applied Rationality, AMA 2019-12-19T16:34:15.705Z
AnnaSalamon's Shortform 2019-07-25T05:24:13.011Z
"Flinching away from truth” is often about *protecting* the epistemology 2016-12-20T18:39:18.737Z
Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” 2016-12-12T19:39:50.084Z
CFAR's new mission statement (on our website) 2016-12-10T08:37:27.093Z
CFAR’s new focus, and AI Safety 2016-12-03T18:09:13.688Z
On the importance of Less Wrong, or another single conversational locus 2016-11-27T17:13:08.956Z
Several free CFAR summer programs on rationality and AI safety 2016-04-14T02:35:03.742Z
Consider having sparse insides 2016-04-01T00:07:07.777Z
The correct response to uncertainty is *not* half-speed 2016-01-15T22:55:03.407Z
Why CFAR's Mission? 2016-01-02T23:23:30.935Z
Why startup founders have mood swings (and why they may have uses) 2015-12-09T18:59:51.323Z
Two Growth Curves 2015-10-02T00:59:45.489Z
CFAR-run MIRI Summer Fellows program: July 7-26 2015-04-28T19:04:27.403Z
Attempted Telekinesis 2015-02-07T18:53:12.436Z
How to learn soft skills 2015-02-07T05:22:53.790Z
CFAR fundraiser far from filled; 4 days remaining 2015-01-27T07:26:36.878Z
CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype 2014-12-26T15:33:08.388Z
Upcoming CFAR events: Lower-cost bay area intro workshop; EU workshops; and others 2014-10-02T00:08:44.071Z
Why CFAR? 2013-12-28T23:25:10.296Z
Meetup : CFAR visits Salt Lake City 2013-06-15T04:43:54.594Z
Want to have a CFAR instructor visit your LW group? 2013-04-20T07:04:08.521Z
CFAR is hiring a logistics manager 2013-04-05T22:32:52.108Z
Applied Rationality Workshops: Jan 25-28 and March 1-4 2013-01-03T01:00:34.531Z
Nov 16-18: Rationality for Entrepreneurs 2012-11-08T18:15:15.281Z
Checklist of Rationality Habits 2012-11-07T21:19:19.244Z
Possible meetup: Singapore 2012-08-21T18:52:07.108Z
Center for Modern Rationality currently hiring: Executive assistants, Teachers, Research assistants, Consultants. 2012-04-13T20:28:06.071Z
Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 2012-03-29T20:48:48.227Z
How do you notice when you're rationalizing? 2012-03-02T07:28:21.698Z
Urges vs. Goals: The analogy to anticipation and belief 2012-01-24T23:57:04.122Z
Poll results: LW probably doesn't cause akrasia 2011-11-16T18:03:39.359Z
Meetup : Talk on Singularity scenarios and optimal philanthropy, followed by informal meet-up 2011-10-10T04:26:09.284Z
[Question] Do you know a good game or demo for demonstrating sunk costs? 2011-09-08T20:07:55.420Z
[LINK] How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects 2011-08-29T05:27:31.636Z
Upcoming meet-ups 2011-06-21T22:28:40.610Z
Upcoming meet-ups: 2011-06-11T22:16:09.641Z
Upcoming meet-ups: Buenos Aires, Minneapolis, Ottawa, Edinburgh, Cambridge, London, DC 2011-05-13T20:49:59.007Z
Mini-camp on Rationality, Awesomeness, and Existential Risk (May 28 through June 4, 2011) 2011-04-24T08:10:13.048Z
Learned Blankness 2011-04-18T18:55:32.552Z
Talk and Meetup today 4/4 in San Diego 2011-04-04T11:40:05.167Z
Use curiosity 2011-02-25T22:23:54.462Z
Make your training useful 2011-02-12T02:14:03.597Z
Starting a LW meet-up is easy. 2011-02-01T04:05:43.179Z
Branches of rationality 2011-01-12T03:24:35.656Z
If reductionism is the hammer, what nails are out there? 2010-12-11T13:58:18.087Z

Comments

Comment by AnnaSalamon on In Defense of Attempting Hard Things, and my story of the Leverage ecosystem · 2022-01-03T05:09:00.291Z · LW · GW

I, also, really appreciate Cathleen for writing this piece, and found it worth reading and full of relevant details. I'll try to add more substantive comments in a week or so, but wanted meanwhile to add my vote to those recommending that folks wanting to understand Leverage read this piece.

Comment by AnnaSalamon on AnnaSalamon's Shortform · 2022-01-01T04:14:47.728Z · LW · GW

This is one of my bottlenecks on posting, so I'm hoping maybe someone will share thoughts on it that I might find useful:

I keep being torn between trying to write posts about things I have more-or-less understood already (which I therefore more-or-less know how to write up), and posts about things I presently care a lot about coming to a better understanding of (but where my thoughts are not so organized yet, and so trying to write about it involves much much use of the backspace, and ~80% of the time leads to me realizing the concepts are wrong, and going back to the drawing board).

I'm curious how others navigate this, or for general advice.

Comment by AnnaSalamon on What would you like from Microcovid.org? How valuable would it be to you? · 2021-12-29T18:21:22.249Z · LW · GW

I continue to get a lot of value from microcovid.org, just as is. Partly using it myself and partly using it with friends/family who want help evaluating particular actions. Very grateful for this site.

The main additional feature that would be great for me would be help modeling how much of an update to make from Covid tests (e.g., how much does it help if everyone takes a rapid test before a gathering).
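(To illustrate the kind of modeling I mean — purely a sketch with made-up numbers, not anything microcovid.org actually computes — here is roughly how I'd treat a negative rapid test as a Bayesian update on whether a guest is currently infectious:)

```python
# Illustrative sketch only; all numbers are assumptions, not microcovid.org outputs.
prior_infectious = 0.02   # assumed prior chance a given guest is currently infectious
sensitivity = 0.80        # assumed chance the rapid test catches an infectious person
specificity = 0.99        # assumed chance the test reads negative for a non-infectious person

# Bayes' rule: P(infectious | negative test)
p_neg_given_infectious = 1 - sensitivity
p_neg_given_not = specificity
posterior = (p_neg_given_infectious * prior_infectious) / (
    p_neg_given_infectious * prior_infectious
    + p_neg_given_not * (1 - prior_infectious)
)
print(f"Risk per guest: {prior_infectious:.1%} before the test, {posterior:.2%} after a negative result")
```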

Comment by AnnaSalamon on The Rationalists of the 1950s (and before) also called themselves “Rationalists” · 2021-11-29T20:17:36.588Z · LW · GW

Thanks! I appreciate knowing this. Do you happen to know if there's a connection between these 1950's rationalists, and the "critical rationalists" (who are a contemporary movement that involves David Deutsch, the "taking children seriously" people, and some larger set of folks who try to practice a certain set of motions and are based out of the UK, I think)?

Comment by AnnaSalamon on Frame Control · 2021-11-29T19:44:55.540Z · LW · GW

But to understand better: if I'd posted a version of this with fully anonymous examples, nothing specifically traceable to Leverage, would that have felt good to you, or would something in it still feel weird?

I'd guess the OP would’ve felt maybe 35% less uneasy-making to me, sans Geoff/Aubrey/“current” examples.

The main thing that bothers me about the post is related to, but not identical to, the post’s use of current examples:

I think the phenomena you’re investigating are interesting and important, but that the framework you present for thinking about them is early-stage. I don’t think these concepts yet “cleave nature at its joints.” E.g., it seems plausible to me that your current notion of “frame control” is a mixture of [some thing that’s actually bad for people] and mere disagreeableness (and that, for all I know, disagreeableness decreases rather than increases harms), as Benquo and Said variously argue. Or that this notion of “frame control” blends in some behaviors we’re used to tolerating as normal, such as leadership, as Matt Goldenberg argues. Or any number of other things.

I like that you’re writing about something early-stage! Particularly given that it seems interesting and important. But I wish you would do it in a way that telegraphs the early-stage-ness and lends momentum toward having readers join you as fellow scientists/philosophers/naturalists who are squinting at the phenomena together. There are a lot of kinds of sentences that can invite investigation. Some are explicit — stating explicitly something like “this is an early-stage conceptualization of a set of thingies we’re probably still pretty confused by, and so I’d like to invite you guys in to be fellow scientists/philosophers/naturalists with me about this stuff, including helping spot where this model is a bit askew.” Some are more ‘inviting it by doing parts of it yourself to make it easy for others to join’ — saying things like “my guess is that all of the examples I’m clustering under ‘frame control’ share a common structure; some of the reasons for my guess are [reasons]; I’m curious what you guys think about whether there’s a common structure and a single cluster here”. (A lot of this amounts to showing your scratchwork.)

If the post seemed mostly to invite being a fellow scientist/philosopher/puzzler with you about these thingies, while mostly not-inviting “immediate application to current events with the assumption that ‘frame control’ is a simple thing that we-as-a-group now understand” (it could still invite puzzling at current events, but would in my hoped-for world invite doing this while puzzling at where the causal joints are, how valid the ‘frame control’ concept is or isn’t and what is or isn’t central to it, a la rationalist taboo), I’d feel great about it.

Comment by AnnaSalamon on Frame Control · 2021-11-29T05:42:54.868Z · LW · GW

I expect these topics are hard to write about, and that there’s value in attempting it anyway. I want to note that before I get into my complaints. So, um, thanks for sharing your data and thoughts about this hard-to-write-about (AFAICT) and significant (also AFAICT) topic!

Having acknowledged this, I’d like to share some things about my own perspective about how to have conversations like these “well”, and about why the above post makes me extremely uneasy.

First: there’s a kind of rigor that IMO the post lacks, and IMO the post is additionally in a domain for which such rigor is a lot more helpful/necessary than such rigor usually is.

Specifically: I can’t tell what the core claims of the OP are. I can’t easily ask myself “what would the world look like if [core claim X] was true? If it were false? what do I see?” “How about [core claim Y]?” “Are [X] and [Y] the best way to account for the evidence the OP presents, or are there unnecessary details tagging along with the conclusions that aren’t actually implied by the evidence?”, and so on.

I.e., the post’s theses are not factored to make evidence-tracking easy.

I care more about (separable claims, each separately trackable by evidence, laid out to make vetting easy) here than I usually would, because the OP is about politics (specifically, it is about what behaviors should lead to us “burning [those who do them] with fire” and ostracizing those folks from our polity). Politics is damn tricky stuff; political discussion in groups about who to exclude and what precedents to set up for why is damn tricky stuff.

I think Raemon’s comment is pretty similar to the point I’m trying to make here.

(Key to my reaction here is that this is a large public discussion. I’m worried that in such discussions, “X was claimed, and upvoted, and no one objected” may cause many readers to assume “X is now a vetted claim that can be assumed-and-cited when making future arguments.” I’m not sure if this is right; if it’s false, I care less.)

(Alternately put: I like this post fine for conversation-level discussion; it’s got some interesting examples and anecdotes and claims and hypotheses, seems worth reading and helpful-on-some-points. I don’t as much like it as a contribution to LW’s “vetted precedents that we get to cite when sorting through political cases”, because I think it doesn’t hit the fairly high and hard-to-hit standard required for such precedents to be on-net-not-too-confusing/“weaponizable”/something.)

I expect it’s slower to try to proceed via separable claims that we can separately track the evidence for/against, but on ground this tricky, slower seems worth it to me.

I’ve often failed at the standard I’m requesting here, but I’ll try to hit it in the future, and will be a good sport when people point out I’m dramatically failing at it.

Secondly, and relatedly: I am uneasy about the fact that many of the post’s examples are from a current conflict that is still being worked out (the rationalist community’s attempt to figure out how to relate to Geoff Anders). IMO, we are still in the process of evaluating both: a) Whether Geoff Anders is someone the rationalist community (or various folks in it) would do better to ostracize, in various senses; and b) Whether there really is a thing called “frame control”, what exactly it is, whether it’s bad, whether it should be “burned with fire,” etc.

I would much rather we try to prosecute conversation (a) and conversation (b) separately, rather than taking unvetted claims about what a new bad thing is and how to spot it, and relatively unvetted claims about Geoff, and using them to reinforce each other.

(If one is a prerequisite for the other, we could try to establish that one first, and then bring in the other.)

The reason I’d much rather they be done separately, is that I don’t trust my own, or most others’, ability to track evidence when they’re done at once. The sort of confusion I get around this is similar to the confusion the OP describes frame-controllers as inducing with “buried claims”. If (a) and (b) are both cited as evidence for one another, it’s a bit tricky to pull out the claims, and I notice myself getting sort of dizzy as I read.

Hammering a bit more here, we get to my third source of unease: there are plenty of ways I can excerpt-and-paraphrase-uncharitably from the OP, that seem like kinds of things that ought not to be very compelling, and that I’d kind of expect would cause harm if a community found them compelling anyhow.

Uncharitable paraphrase/caricature: “Hey you guys. There’s a thing that is secretly very bad, but looks pretty normal. (So, discount your “this is probably fine”, “the argument for ostracism doesn’t seem very compelling here” reactions. (cf. “Finger-trap beliefs.”)) I know it’s really bad because my dad was really bad for me and my mom during my childhood, and this not-very-specified thingy was the central thing; I can’t give you enough of a description to allow independent evaluation of who’s doing it, but I can probably detect it myself and tell you which people are/aren’t doing (the central and vaguely specified bad thing). We should burn it with fire when we see it; my saying this may trigger your “wait, we should be empathetic” reactions, but ignore those because, let me tell you so that you know, I’m normally very empathetic, and I think this one vaguely specified thing should be burned with fire. So you guys should override a bunch of your usual heuristics and trust (me or whoever you think is good at spotting this vaguely specified thing) to decide which things we should collectively burn with fire.”

It’s possible there are protective factors that should make me not-worry about this post, even if I’m right that a reasonable person would worry about some other posts that fit my above caricature. But I don’t clearly see them, and would like help with that if they are here!

I like a bunch of the ending, about holding things lightly and so on. I feel like that is basically enough to make the post net-just-fine, and also helpful, for an individual reading this, who isn’t part of a community with the rest of the readers and the author — for such an individual, the post basically seems to me to be saying “sometimes you’ll find yourself feeling really crazy around somebody without knowing how to pin down why. In such a case, feel free to trust your own judgment and get out of there, if that’s what your actual unjustifiable best guess at what to do is.” This seems like fine advice! But in a community context, if we’re trying to arrive at collective beliefs about other people (which I’m not sure we’re doing, and I’m even less sure we should be doing; if we aren’t, maybe this is fine), such that we’re often deferring to other people’s guesses about what was and wasn’t “frame control” and whether that “frame control” maps onto a set of things that are really actually “burn it with fire” harmful and not similar in some other sense… I’m uneasy!

Comment by AnnaSalamon on Cornell Meetup · 2021-11-23T21:55:33.276Z · LW · GW

I've known Lionel since high school, and can vouch for him if it's somehow helpful. Additional thoughts: He's good at math; he's new enough to AI alignment that having anyone local-to-him (e.g. at Cornell / in Ithaca) who wants to talk about this would probably help, so don't be shy or think you need much background; he cares about this stuff; he enjoys thinking and trying to get at truth, and I tend to find him fun to talk to.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-11-11T07:21:53.691Z · LW · GW

A CFAR board member asked me to clarify what I meant about “corrupt”, also, in addition to this question.

So, um. Some legitimately true facts the board member asked me to share, to reduce confusion on these points:

  • There hasn’t been any embezzlement. No one has taken CFAR’s money and used it to buy themselves personal goods.
  • I think if you took non-profits that were CFAR’s size + duration (or larger and longer-lasting), in the US, and ranked them by “how corrupt is this non-profit according to observers who people think of as reasonable, and who got to watch everything by video and see all the details”, CFAR would on my best guess be ranked in the “less corrupt” half rather than in the “more corrupt” half.

This board member pointed out that if I call somebody “tall” people might legitimately think I mean they are taller than most people, and if I agree with an OP that says CFAR was “corrupt” they might think I’m agreeing that CFAR was “more corrupt” than most similarly sized and durationed non-profits, or something.

The thing I actually think here is not that. It’s more that I think CFAR’s actions were far from the kind of straightforward, sincere attempt to increase rationality, compared to what people might have hoped for from us, or compared to what a relatively untraumatized 12-year-old up-and-coming-LWer might expect to see from adults who said they were trying to save the world from AI via learning how to think. (IMO, this came about mostly via a bunch of people doing reasoning that they told themselves was intended to help with existential risk or with rationality or at least to help CFAR or do their jobs, but that was not as much that as the thing a kid might’ve hoped for. I think I, in my roles at CFAR, was often defensive and power-seeking and reflexively flinching away from things that would cause change; I think many deferred to me in cases where their own sincere, Sequences-esque reasoning would not have thought this advisable; I think we fled from facts where we should not have, etc.).

I think this is pretty common, and that many of us got it mostly from mimicking others at other institutions (“this is how most companies do management/PR/whatever; let’s dissociate a bit until we can ‘think’ that it’s fine”). But AFAICT it is not compatible (despite being common) with the kinds of impact we were and are hoping to have (which are not common), nor with the thing that young or sincere readers of the Sequences, who were orienting more from “what would make sense” and less from “how do most organizations act”, would have expected. And I think it had the result of wasting a bunch of good people’s time and money, and making it look as though the work we were attempting is intrinsically low-reward, low-yield, without actually checking to see what would happen if we tried to locate rationality/sanity skills in a simple way.

I looked at the Wikipedia article on corruption to see if it had helpful ontology I could borrow. I would say that the kind of corruption I am talking about is “systemic” corruption rather than individual, and involved “abuse of discretion”.

A lot of what I am calling “corruption” — i.e., a lot of the systematic divergence between the actions CFAR was taking, and the actions that a sincere, unjaded, able-to-actually-talk-to-each-other version of us would’ve chosen for CFAR to take, as a best guess for how to further our missions — came via me personally, since I was in a leadership role manipulating the staff of CFAR by giving them narratives about how the world would be more saved if they did such-and-such (different narratives for different folks), and looking to see how they responded to these narratives in order to craft different ones. I didn’t say things I believed false, but I did choose which things to say in a way that was more manipulative than I let on, and I hoarded information to have more control of people and what they could or couldn’t do in the way of pulling on CFAR’s plans in ways I couldn’t predict, and so on. Others, on my view, chose to go along with this, partly because they hoped I was doing something good (as did I), partly because it was way easier, partly because we all got to feel as though we were important via our work, partly because none of us were fully conscious of most of this.

This is “abuse of discretion” in that it was using places in which my and our judgment had institutional power because people trusted me and us, and making those judgments via a process that was predictably going to have worse rather than better outcomes, basically in my case via what I’ve lately been calling narrative addiction.

I love the people who work at CFAR, both now and in the past, and predict that most would make your house or organization or whatnot better if you live with or hire them or similar. They’re bringing a bunch of sincere goodwill, willingness to try what is uncomfortable (not fully, but more than most, and enough that I admire it and am impressed a lot), attempt better epistemic practices than I see most places where they know how to, etc. I’m afraid to say paragraphs like the ones preceding this one lest I cause people who are quite good (as people in our social class go), and who sacrificed at my request in many cases, to look bad.

But in addition to the common human pastime of ranking all of us relative to each other, figuring out who to scapegoat and who to pass other relative positive or negative judgments on, there is a different endeavor I care very much about: one of trying to see the common patterns that’re keeping us stuck. Including patterns that may be pretty common in our time and place, but that (I think? citation needed, I’ll grant) may have been pretty uncommon in the places where progress historically actually occurred.

And that is what I was so relieved to see Jessica’s OP opening a beginning of a space for us to talk about. I do not think Jessica was saying CFAR was unusually bad; she estimates it was on her best guess a less traumatizing place than Google. She just also tries to see through-lines between patterns across places, in ways I found very relieving and hopeful. Patterns I strongly resisted seeing for most of the last six years. It’s the amount of doublethink I found in myself on the topic, more than almost any of the rest of it, that most makes me think “yes there is a non-trivial insight here, that Jessica has and is trying to convey and that I hope eventually does get communicated somehow, despite all the difficulties of talking about it so far.”

Comment by AnnaSalamon on Self-Integrity and the Drowning Child · 2021-10-27T20:59:46.452Z · LW · GW

Equally importantly IMO, it argues for transfer from a context where the effect of your actions is directly perceptually obvious to one where it is unclear and filters through political structures (e.g., aid organizations and what they choose to do and to communicate; any governments they might be interacting with; any other players on the ground in the distant country) that will be hard to model accurately.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-25T04:28:05.704Z · LW · GW

In the last two years, CFAR hasn't done much outward-facing work at all, due to COVID, and so has neither been a MIRI funnel nor definitively not a MIRI funnel.

Yes, but I would predict that we won't be the same sort of MIRI funnel going forward. This is because MIRI used to have specific research programs that it needed to hire for, and it was sponsoring AIRCS (covering direct expenses plus loaning us some researchers to help run the thing) in order to recruit for that, and those research programs have been discontinued and so AIRCS won't be so much of a thing anymore.

This has been the main reason there's been no AIRCS post-vaccines, not just COVID.

I, and I would guess some others at CFAR, am interested in running AIRCS-like programs going forward, especially if there are groups that want to help us pay the direct expenses for those programs and/or researchers that want to collaborate with us on such programs. (Message me if you're reading this and in one of those categories.) But it'll be less MIRI-specific this time, since there isn't that recruiting angle.

Also, more broadly, CFAR has adopted different structures for organizing ourselves internally, and we are bigger now into "if you work for CFAR, or are a graduate of our instructor training program, and you have a 'telos' that you're on fire to do, you can probably do it with CFAR's venue/dollars/collaborations of some sorts" (we're calling this "platform CFAR"; Elizabeth Garrett invented it and set it up maybe about a year ago, can't remember), and also into doing hourly rather than salaried work in general (so we don't feel an obligation to fill time with some imagined 'supposed to do CFAR-like activity' vagueness, so that we can be mentally free) and are also into taking more care not to have me or anyone speak for others at CFAR or organize people into a common imagined narrative one must pretend to believe, but rather into letting people do what we each believe in, and try to engage each other where sensible. Which makes it a bit harder to know what CFAR will be doing going forward, and also leaves me thinking it'll have a bit more variety in it. Probably.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-25T04:14:56.707Z · LW · GW

Thanks! I would love follow-up on LW to the twitch stream, if anyone wants to. There were a lot of really interesting things being said in the text chat that we didn’t manage to engage with, for example. Unfortunately the recording was lost, which is a shame because IMO it was a great conversation.

TekhneMakre writes:

This suggests, to me, a (totally conjectural!) story where [Geoff] got into an escalating narrative cold war with the rationality community: first he perceives (possibly correctly) that the community rejects him…

This seems right to me.

Anna says there were in the early 2010s rumors that Leverage was trying to fundraise from "other people's donors". And that Leverage/Geoff was trying to recruit, whether ideologically or employfully, employees of other EA/rationality orgs.

Yes. My present view is that Geoff’s reaching out to donors here was legit, and my and others’ complaints were not; donors should be able to hear all the pitches, and it’s messed up to think of “person reached out to donor X to describe a thingy X might want to donate to” as a territorial infringement.

This seems to me like an example of me and others escalating the “narrative cold war” that you mention.

[Geoff] seemed to talk in long, apparently low content sentences with lots of hemming and hawing and attention to appearance…

I noticed some of this, though less than I might’ve predicted from the background context in which Geoff was, as you note, talking to 50 people, believing himself to be recorded, and in an overall social context in which a community he has long been in a “narrative cold war” with (under your hypothesis, and mine) was in the midst of trying to decide whether to something-like scapegoat him.

I appreciate both that you mentioned your perception (brought it into text rather than subtext, where we can reason about it, and can try to be conscious of all the things together), and that you’re trying to figure out how to incentivize and not disincentivize Geoff’s choice to do the video (which IMO shared a bunch of good info).

I’d like to zoom in on an example that IMO demonstrates that the causes of the “hemming and hawing” are sometimes (probably experience-backed) mistrust of the rationalist community as a [context that is willing to hear and fairly evaluate his actual evidence], rather than, say, desire for the truth to be hidden:

At one point toward the end of the twitch, Geoff was responding to a question about how we got here from a pretty cooperative state in ~2013, and said something kinda like “… I’m trying to figure out how to say this without sounding like I’m being unfair to your side of things,” or something, and I was like “maybe just don’t, and I or others can disagree if we think you’re wrong,” and then he sort of went “okay, if you’re asking for it” and stopped hemming and hawing and told a simple and direct story about how in the early days of 2011-2014, Leverage did a bunch of things to try to cause specific collaborations that would benefit particular other groups (THINK, the original EA leaders gathering in the Leverage house in 2013, the 2014 retreat + summit, a book launch party for ‘Our Final Invention’ co-run with SingInst, some general queries about what kind of collaborations folks might want, early attempts to merge with SingInst and with 80k), and how he would’ve been interested in and receptive to other bids for common projects if I or others had brought him some. And I was like “yes, that matches my memory and perception; I remember you and Leverage seeming unusually interested in getting specific collaborations or common projects that might support your goals + other groups’ goals at once, going, and more than other groups, and trying to support cooperation in this way” and he seemed surprised that I would acknowledge this.

So, I think part of the trouble is that Geoff didn’t have positive expectations of us as a context in which to truth-seek together.

One partial contributor to this expectation of Geoff’s, I would guess, is the pattern via which (in my perception) the rationalist community sometimes decides people’s epistemics/etc. are “different and bad” and then distances from them, punishes those who don’t act as though we need to distance from them, etc., often in a manner that can seem kinda drastic and all-or-nothing, rather than docking points proportional to what it indicates about a person’s likely future ability to share useful thoughts in a milder-mannered fashion. For example, during a panel discussion at the (Leverage-run) 2014 EA Summit, in front of 200 people, I asked Geoff aloud whether he in fact thought that sticking a pole through someone’s head (a la Phineas Gage) would have no effect on their cognition except via their sense-perception. Geoff answered “yes”, as I expected since he’d previously mentioned this view. And… there was a whole bunch of reaction. E.g., Habryka, in the twitch chat, mentioned having been interning with Leverage at the time of that panel conversation, and said “[that bit of panel conversation] caused me nightmares… because I was interning at Leverage at the time, and it made me feel very alienated from my environment. And felt like some kind of common ground was pulled out from under me.”

I for many years often refrained from sharing some of the positive views/data/etc. I had about Leverage, for fear of being [judged or something] for it. (TBC, I had both positive and negative views, and some error bars. But Leverage looked to me like well-meaning people who were trying a hard-core something that might turn out cool, and that was developing interesting techniques and models via psychological research, and I mostly refrained from saying this because I was cowardly about it in response to social pressure. … in addition to my usual practice of sometimes refraining from sharing some of my hesitations about the place, as about most places, in a flinchy attempt to avoid conflict.)

I didn't hear anything that strongly confirms or denies adversarial hypotheses like "Geoff was fairly actively doing something pretty distortiony in Leverage that caused harm, and is sort of hiding this by downplaying / redirecting attention / etc.".

My guess is that he was and is at least partially doing some of this, in addition to making an earnest (and better than I’d expected on generic-across-people priors) effort to share true things. Re: the past dynamics, I and IMO others were also doing actively distortionary stuff, and I think Geoff’s choices, and mine and others’, need to be understood together, as similar responses to a common landscape.

As I mentioned in the twitch that alas didn’t get recorded, in ~2008-2014, ish, somehow a lot of different EA and rationality and AI risk groups felt like allies and members of a common substantive community, at least in my perception (including my perception of the social context that I imagined lots of other people were in). And later on, most seemed to me to kinda give up on most of the others, opting still for a social surface of cooperation/harmony, but without any deep hope in anyone else of the sort that might support building common infrastructure, or really working out any substantive disagreements (with tools of truth-seeking rather than only truce-seeking/surface-harmony-preservation, etc.). (With some of the together-ness getting larger over time in the early years, and then with things drifting apart again.) I’m really interested in whether that transition matches others’ perceptions, and, if so, what y’all think the causes were. IMO it was partly about what I’ve been calling “narrative addiction” and “narrative pyramid schemes,” which needs elaboration rather than a set of phrases (I tried this a bit in the lost twitch video) but I need to go now so may try it later.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-24T23:09:44.178Z · LW · GW

Alas, no. I'm pretty bummed about it, because I thought the conversation was rather good, but Geoff pushed the "save recording" button after it was started and that didn't work.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T00:23:58.220Z · LW · GW

Thank you. I disagree with "... relishes 'breaking' others", and probably some other points, but a bunch of this seems really right and like content I haven't seen written up elsewhere. Do share more if you have it. I'm also curious where you got this stuff from.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T10:09:16.246Z · LW · GW

CFAR staff retreats often involve circling. Our last one, a couple weeks ago, had this, though as an optional evening thing that some but not most took part in.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T10:07:26.369Z · LW · GW

Basically no. Can't say a plain "no," but can say "basically no." I'm not willing to give details on this one. I'm somehow fretting on this one, asking if "basically no" is true from all vantage points (it isn't, but it's true from most), looking for a phrase similar to that but slightly weaker, considering e.g. "mostly no", but something stronger is true. I think this'll be the last thing I say in this thread about this topic.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T09:13:14.952Z · LW · GW

I think some of it has got to be that it's somehow easier to talk about CFAR/MIRI, rather than a sheer number of people thing. I think Leverage is somehow unusually hard to talk about, such that maybe we should figure out how to be extraordinarily kind/compassionate/gentle to anyone attempting it, or something.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T09:09:12.194Z · LW · GW

Yes; I want to acknowledge that there was a large cost here. (I wasn't sure, from just the comment threads; but I just talked to a couple people who said they'd been thinking of writing up some observations about Leverage but had been distracted by this.)

I am personally really grateful for a bunch of the stuff in this post and its comment thread. But I hope the Leverage discussion really does get returned to, and I'll try to lend some momentum that way. Hope some others do too, insofar as some can find ways to actually help people put things together or talk.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-22T08:32:57.793Z · LW · GW

Yep. I hope this isn’t bad to do, but I am doing it.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T09:24:10.395Z · LW · GW

As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong.

Actually, that was true for the last few years (with an ambiguous in-between time during covid), but it is not true now. Partly because MIRI abandoned the research direction we’d most been trying to help them recruit for. CFAR will be choosing its own paths going forward more.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T09:23:53.072Z · LW · GW

Since you're one of the leaders of CFAR, that makes you one of the leading people behind all those things the OP is critical of.

Yes.

So, how all is this compatible with you agreeing with the OP?

Basically because I came to see I’d been doing it wrong.

Happy to try to navigate follow-up questions if anyone has any.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T09:02:27.754Z · LW · GW

I think that you believe, as I do, that there were some high-level structural similarities between the dynamics at MIRI/CFAR and at Leverage, and also what happened at Leverage was an order of magnitude worse than what happened at MIRI/CFAR.

Leverage_2018-2019 sounds considerably worse than Leverage 2013-2016.

My current guess is that if you took a random secular American to be your judge, or a random LWer, and you let them watch the life of a randomly chosen member of the Leverage psychology team from 2018-2019 (which I’m told is the worst part) and also of a randomly chosen staff member at either MIRI or CFAR, they would be at least 10x more horrified by the experience of the one in the Leverage psychology team.

I somehow don’t know how to say in my own person “was an order of magnitude worse”, but I can say the above. The reason I don’t know how to say “was an order of magnitude worse” is because it honestly looks to me (as to Jessica in the OP) like many places are pretty bad for many people, in the sense of degrading their souls via deceptions, manipulations, and other ethical violations. I’m not sure if this view of mine will sound over-the-top/dismissable or we-all-already-know-that/dismissible, or something else, but I have in mind such things as:

  • It seems to me that many many kids enter school with a desire to learn and an ability to trust their own mind, and leave school with a weird kind of “make sure you don’t get it wrong” that inhibits trying and doing. Some of this is normal aging, but my best guess is that an important chunk is more like cultural damage.

  • Many teenagers can do philosophy, stretch, try to think about the world. Most of the same folks at 30 or 40 can’t, outside of the ~one specific discipline in which they’re a professional. They don’t let themselves.

  • Lots of upper middle class adults hardly know how to have conversations, of the “talk from the person inside who is actually home, asking what they want to know instead of staying safe, hitting new unpredictable thoughts/conversations” sense. This is a change from childhood. Again, this is probably partly aging, but I suspect cultural damage, and I’ve been told a couple times (including by folks who have no contact with Vassar or anyone else in this community) that this is less true for working class folks than for upper middle class folks, which if true is evidence for it being partly cultural damage though I should check this better.

  • Some staff IMO initially expect that folks at CFAR or Google or the FDA or wherever will be trying to do something real, and then come to later relate to it more like belief-in-belief, and to lots of other things too, with language coming to seem more like a mechanism for coordinating our belief-in-beliefs, and less like light with which one can talk and reason. And with things in general coming to seem kind of remote and as though you can’t really hope for anything real.

Anyhow. This essay wants to be larger than I’m willing to make this comment-reply before sleeping, so I’ll just keep doing it poorly/briefly, and hope to have more conversation later not necessarily under Jessica’s OP. But my best guess is that both CFAR of most of the last ten years, and the average workplace, are:

a) On the one hand, quite a bit less overtly hellish than the Leverage psychology teams of 2018-2019, but nevertheless maybe full of secret bits of despair and of giving up on bits of our birthrights, in ways that are mostly not consciously noticed;

b) More than 1/10th as damaging to most employees’ basic human capacities, compared to Leverage_2018-2019.

Why do I think b? Partly because of my observations on what happens to people in the broader world (including control groups of folks who do their own thing among good people and end up fine, but I might be rigging my data and playing “no true Scotsman” games to get rid of the rest, and misconstruing natural aging or something). And partly because I chatted with several people in the past week who spent time at Leverage, and they all seemed like they had intact souls, to me, although my soul-ometer is not necessarily that accurate etc.

But, anyhow, I agree that most people would see what you’re saying, I’m just seeing something else and I care about it and I’m sorry if I said it in a confusing/misleading way but it is actually pretty hard to talk about.

Epistemic status of all this: scratchwork, alas.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T07:42:52.969Z · LW · GW

Yeah, sorry. I agree that my comment “the OP speaks for me” is leading a lot of people to false views that I should correct. It’s somehow tricky because there’s a different thing I worry will be obscured by my doing this, but I’ll do it anyhow as is correct and try to come back for that different thing later.

To the best of my knowledge, the leadership of neither MIRI nor CFAR has ever slept with a subordinate, much less many of them.

Agreed.

While I think staff at CFAR and MIRI probably engaged in motivated reasoning sometimes wrt PR, neither org engaged in anything close to the level of obsessive, anti-epistemic reputational control alleged in Zoe's post. CFAR and MIRI staff were certainly not required to sign NDAs agreeing they wouldn't talk badly about the org—in fact, in my experience CFAR staff much more commonly share criticism of the org than praise.  CFAR staff were regularly encouraged to share their ideas at workshops and on LessWrong, to get public feedback. And when we did mess up, we tried extremely hard to publicly and accurately describe our wrongdoing—e.g., Anna and I personally spent hundreds of hours investigating/thinking about the Brent affair, and tried so hard to avoid accidentally doing anti-epistemic reputational control that in my opinion, our writeup about it actually makes CFAR seem much more culpable than I think it was.

I agree that there’s a large difference in both philosophy of how/whether to manage reputation, and amount of control exhibited/attempted about how staff would talk about the organizations, with Leverage doing a lot of that and CFAR doing less of it than most organizations.

As I understand it, there were ~3 staff historically whose job description involved debugging in some way which you, Anna, now feel uncomfortable with/think was fucky. But to the best of your knowledge, these situations caused much less harm than e.g. Zoe seems to have experienced, and the large majority of staff did not experience this—in general staff rarely explicitly debugged each other, and when it did happen it was clearly opt-in, and fairly symmetrical (e.g., in my personal conversations with you Anna, I'd guess the ratio of you something-like-debugging me to the reverse is maybe 3/2?).

I think this understates both how many people it happened with, and how fucky it sometimes was. (Also, it was job but not “job description”, although I think Zoe’s was “job description”). I think this one was actually worse in some of the early years, vs your model of it. My guess is indeed that it involved fewer hours than Zoe’s did, and was overall less deliberately part of a dynamic quite as fucky as Zoe’s, but as I mentioned to you on the phone, an early peripheral staff member left CFAR for a mental institution in a way that seemed plausibly to do with how debugging and trials worked, and definitely to do with workplace stress of some sort, as well as with a preexisting condition they entered with and didn’t tell us about. (We would’ve handled this better later, I think.) There are some other situations that were also I think pretty fucked up, in the sense of “I think the average person would experience some horror/indignation if they took in what was happening.”

I can also think of stories of real scarring outside the three people I was counting.

I… do think it was considerably less weird looking, and less overtly fucked-up looking, than the descriptions I have (since writing my “this post speaks for me” comment) gotten of Leverage in the 2018-2019 era.

Also, most people at CFAR, especially in recent years, I think suffered none or nearly none of this. (I believe the same was true for parts of Leverage, though not sure.)

So, if we are playing the “compare how bad Leverage and CFAR are along each axis” game (which is not the main thing I took the OP to be doing, at all, nor the main thing I was trying to agree with, at all), I do think Leverage is worse than CFAR on this axis but I think the “per capita” damage of this sort that hit CFAR staff in the early years (“per capita” rather than cumulative, because Leverage had many more people) was maybe about a tenth of my best guess at what was up in the near-Zoe parts of Leverage in 2018-2019, which is a lot but, yes, different.

CFAR put really a lot of time and effort into trying to figure out how to teach rationality techniques, and how to talk with people about x-risk, without accidentally doing something fucky to people's psyches. Our training curriculum for workshop mentors includes extensive advice on ways to avoid accidentally causing psychological harm. Harm did happen sometimes, which was why our training emphasized it so heavily. But we really fucking tried, and my sense is that we actually did very well on the whole at establishing institutional and personal knowledge about how to be gentle with people in these situations; personally, it's the skillset I'd most worry about the community losing if CFAR shut down and more events started being run by other orgs.

We indeed put a lot of effort into this, and got some actual skill and good institutional habits out.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-20T06:29:16.309Z · LW · GW

Thanks! To check: did one or more of the ex-Leveragers say Geoff said he was willing to lie? Do you have any detail you can add there? The lying one surprises me more than the others, and is something I'd want to know.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-20T05:15:17.382Z · LW · GW

Allowing somebody to continue to be an organizer for something after they confess to rape

To fill in some details (I asked Robert, he's fine with it):

Robert had not confessed to rape, at least not the way I would use the word. He had told me of an incident where (as he told it to me) [edit: the following text is rot13'd, because it contains explicit descriptions of sexual acts] ur naq Wnl unq obgu chg ba pbaqbzf, Wnl unq chg ure zbhgu ba Eboreg’f cravf, naq yngre Eboreg unq chg uvf zbhgu ba Wnl’f cravf jvgubhg nfxvat, naq pbagvahrq sbe nobhg unys n zvahgr orsber abgvpvat fbzrguvat jnf jebat. Wnl sryg genhzngvmrq ol guvf. Eboreg vzzrqvngryl erterggrq vg, naq ernyvmrq ur fubhyq unir nfxrq svefg, naq fubhyq unir abgvprq rneyvre fvtaf bs qvfpbzsbeg.

Robert asked for my help getting better at consent, and I recommended he do a bunch of sessions on consent with a life coach named Matt Porcelli, which he did (he tells me they did not much help); I also had a bunch of conversations with him about consent across several months, but suspect these did at most a small part of what was needed. I did allow him to continue using CFAR’s community space to run (non-CFAR-affiliated) LW events after he told me of this incident. In hindsight I would do a bunch of things differently around these events, particularly asking Jay more questions about how it went, and asking Robert more questions too probably, particularly since in hindsight there were a number of other signs that Robert didn’t have the right skills and character here (e.g., he found it difficult to believe he could refuse hugs; and he’d told me about a previous more minor incident involving Robert giving someone else “permission” to touch Jay’s hair.) My guess in hindsight is that the incident had more warning signs about it than I noticed at the time. But I don’t think “he confessed to rape” is a good description.

(Separately, Somni and Jay later published complaints about Robert that included more than what’s above, after which CFAR asked Robert not to be in CFAR’s community space. Robert and I remained and remain friends.)

(Robert has since worked with an AltJ group that he says actually helped a lot, if it matters, and has shown me writeups and things that leave me thinking he’s taken things pretty seriously and has been slowly acquiring the skills/character he initially lacked. I am inclined to think he has made serious progress, via serious work. But I am definitely not qualified to judge this on behalf of a community; if CFAR ever readmits Robert to community events it will be on someone else’s judgment who seems better at this sort of judgment, not sure who.)

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-20T02:49:12.580Z · LW · GW

I wish there were more facts about Leverage out in actual common knowledge.

One thing I’d find really helpful, and that I suspect might be helpful broadly for untangling what happened and making parts of it obvious / common knowledge, is if I/someone/a group could assemble a Leverage timeline that included:

  • Who worked there in different years. When they came and left.
  • Who was dating whom at different years, in cases where both parties worked at Leverage and at least one was within leadership.
  • Funding cycles: when funding from different sources was applied for and/or received; what the within-Leverage narrative was for what was needed to get the funding.
  • Maybe anything else broad and simple/factual/obvious about that time period.

If anyone wants to give me any of this info, either anonymously or with your name attached, I’d be very glad to help assemble this into a timeline. I’m also at least as enthusiastic about anyone else doing this, and would be glad to pay a small amount for someone’s time if that would help. Maybe also it could be cobbled together in common here, if anyone is willing to contribute some of these basic facts.

Is anyone up for collaborating toward this in some form? I’m hoping it might be easier than some kinds of sorting-through, and like it might make some of the harder stuff easier once done.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-20T02:18:51.619Z · LW · GW

Sorry, only just now saw that I was mentioned by name here. I agree that Zoe's experiences were horrifying and sad, and that it's worth quite a bit to try to spare others that kind of thing. Not mangling peoples' souls matters, rather a lot, both intrinsically (because people matter) and instrumentally (because we need integrity if we want to do anything real and sustained).

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T23:25:11.628Z · LW · GW

I agree with all of the above. And yet a third thing, which Jessica also discusses in the OP, is the community near MIRI and/or CFAR, whose ideology has been somewhat shaped by the two organizations.

There are some good things to be gained from lumping things together (larger datasets on which to attempt inference) and some things that are confusing.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T15:40:19.002Z · LW · GW

Not a direct response to you, but if anyone who hasn't talked to Vassar is wanting an example of Vassar-conversation that may be easier to understand or get some sense from than most examples would (though it'll have a fair bit in it that'll probably still seem false/confusing), you might try Spencer Greenberg's podcast with Vassar.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T15:26:35.530Z · LW · GW

This sounds like an extreme and surprising statement.

Well, maybe I phrased it poorly; I don't think what I'm doing is extreme; "much" is doing a bunch of work in my "I am not much trying to..." sentence.

I mean, there's plenty I don't want to share, like a normal person. I have confidential info of other people's that I'm committed to not sharing, and plenty of my own stuff that I am private about for whatever reason. But in terms of rough structural properties of my mind, or most of my beliefs, I'm not much trying for privacy. Like when I imagine being in a context where a bunch of circling is happening or something (circling allows silence/ignoring questions/etc.; still, people sometimes complain that facial expressions leak through and they don't know how to avoid it), I'm not personally like "I need my privacy though." And I've updated some toward sharing more compared to what I used to do.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T13:12:14.206Z · LW · GW

I enjoyed it (and upvoted) for humor plus IMO having a point. Humor is great after a thread this long.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T12:51:35.101Z · LW · GW

Okay, so, that old textbook does not look like a picture of goal-factoring, at least not on that page. But I typed "goal-factoring" into my google drive and dug up these old notes that used the word while designing classes for the 2012 minicamps. A rabbit hole, but one I enjoyed, so maybe others will too.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T11:53:54.681Z · LW · GW

Related to my reply to PhoenixFriend (in the parent comment), but hopping meta from it:

I have a question for whoever out there thinks they know how the etiquette of this kind of conversation should go. I had a first draft of my reply to PhoenixFriend, where I … basically tried to err on the side of being welcoming, looking for and affirming the elements of truth I could hear in what PhoenixFriend had written, and sort of emphasizing those elements more than my also-real disagreements. I ran it by a CFAR colleague at my colleague’s request, who said something like “look, I think your reply is pretty misleading; you should be louder and clearer about the ways your best guess about what happened differed from what’s described in PhoenixFriend’s comment. Especially since I and others at CFAR have our names on the organization too, so if you phrase things in ways that’ll cause strangers who’re skim-reading to guess that things at CFAR were worse than they were, you’ll inaccurately and unjustly mess with other peoples’ reputations too.” (Paraphrased.)

So then I went back and made my comments more disagreeable and full of details about where my and PhoenixFriend’s models differ. (Though probably still less than the amount that would've fully addressed my colleague's complaints.)

This… seems better in that it addresses my colleague’s pretty reasonable desire, but worse in that it is not welcoming to someone who is trying to share info and is probably finding that hard. I am curious if anyone has good thoughts on how this sort of etiquette should go, if we want to have an illuminating, get-it-all-out-there, non-misleading conversation.

Part of why I’m worried is that it seems to me pretty easy for people who basically think the existing organizations are good, and also that mainstream workplaces are non-damaging and so on, to upvote/downvote each new datum based on those priors plus a (sane and sensible) desire to avoid hurting others’ feelings and reputations without due cause, etc., in ways that despite their reasonability may make it hard for real and needed conversations that are contrary to our current patterns of seeing to get started.

For example, I think PhoenixFriend indeed saw some real things at CFAR that many of those downvoting their comment did not see and mistakenly wouldn’t expect to see, but that also many of the details of PhoenixFriend’s comment are off, partly maybe because they were mis-generalizing from their experiences and partly because it’s hard to name things exactly (especially to people who have a bit of an incentive to mishear).

(Also, to try briefly and poorly to spell out why I’m rooting for a “get it all out on the table” conversation, and not just a more limited “hear and acknowledge the mostly blatant/known harms, correct those where possible, and leave the rest of our reputation intact” conversation: basically, I think there’s a bunch of built-up “technical debt”, in the form of confusion and mistrust and trying-not-to-talk-about-particular-things-because-others-will-form-“unreasonable”-conclusions-if-we-do and who-knows-why-we-do-that-but-we-do-so-there’s-probably-a-reason, that I’m hoping gets cleared out by the long and IMO relatively high-quality and contentful conversation that’s been happening so far. I want more of that if we can get it. I want culture and groups to be able to build around here without building on top of technical debt. I also want information about how organizations do/don’t work well, and, in terms of means of acquiring this information, I much prefer bad-looking conversations on LW to wasting another five years doing it wrong.)

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T11:47:04.167Z · LW · GW

Thank you for adding your detailed take/observations.

My own take on some of the details of CFAR that’re discussed in your comment:

Debugging sessions with Anna and with other members of the leadership was nigh unavoidable and asymmetric, meaning that while the leadership could avoid getting debugged it was almost impossible to do so as a rank-and-file member. Sometimes Anna described her process as "implanting an engine of desperation" within the people she was debugging deeply. This obviously had lots of ill psychological effects on the people involved, but some of them did seem to find a deeper kind of motivation.

I think there were serious problems here, though our estimates of the frequencies might differ. To describe the overall situation in detail:

  • I often got debugging help from other members of CFAR, but, as noted in the quote, it was voluntary. I picked when and about what and did not feel pressure to do so.
  • I can think of at least three people at CFAR who had a lot of debugging sort of forced on them (visibly expected as part of their job set-up or of check-in meetings or similar; they didn’t make clear complaints but that is still “sort of forced”), in ways that were large and that seem to me clearly not okay in hindsight. I think lots of other people mostly did not experience this. There are a fair number of people about whom I am not sure or would make an in-between guess. To be clear, I think this was bad (predictably harmful, in ways I didn’t quite get at the time but that e.g. standard ethical guidelines in therapy have long known about), and I regret it and intend to avoid “people doing extensive debugging of those they have direct power over” contexts going forward.
  • I believe this sort of problem was more present in the early years, and less true as CFAR became older, better structured, somewhat “more professional”, and less centered around me. In particular, I think Pete’s becoming ED helped quite a bit. I also think the current regime (“holacracy”) has basically none of this, and is structured so as to predictably have basically none of this -- predictably, since there’s not much in the way of power imbalances now.
  • It’s plausible I’m wrong about how much of this happened, and how bad it was, in different eras. In particular, it is easy for those in power (e.g., me) to underestimate aspects of how bad it is not to have power; and I did not do much to try to work around the natural blindspot. If anyone wants to undertake a survey of CFAR’s past and present staff on this point (ideally someone folks know and can accurately trust to maintain their anonymity while aggregating their data, say, and then posting the results to LW), I’d be glad to get email addresses for CFAR’s past and present staff for the purpose.
  • I’m sure I did not describe my process as “implanting an engine of desperation”; I don’t remember that and it doesn’t seem like a way I would choose to describe what I was doing. “Implanting” especially doesn’t. As Eli notes (this hadn’t occurred to me, but might be what you’re thinking of?), I did talk some about trying to get in touch with one’s “quiet desperation”, and referenced Pink Floyd’s song “Time” and “the mass of men lead lives of quiet desperation” and developed concepts around that; but this was about accessing a thing that was already there, not "implanting" a thing. I also led many people in “internal double cruxes around existential risk”, which often caused fairly big reactions as people viscerally noticed “we might all die.”

Relatedly, the organization uses a technique called goal factoring during debugging which was in large part inspired by Geoff Anders' Connection Theory and was actually taught by Geoff at CFAR workshops at some point. This means that CFAR debugging in many ways resembles Leverage's debugging and the similarity in naming isn't just a coincidence of terms.

I disagree with this point overall. Goal-Factoring was first called “use fungibility”, a technique I taught within a class called “microeconomics 1” at the CFAR 2012 minicamps prior to Geoff doing any teaching. It was also discussed at times in some form at the old SingInst visiting fellows program, IIRC.
Geoff developed it, and taught it at many CFAR workshops in early years (2013-2014, I think). The choice that it was Goal-Factoring that Geoff taught (was asked to teach? wanted to teach? I don’t actually remember; probably both?) was, I think, partly to do with its resemblance to the beginning/repeated basic move in Connection Theory.

No one at CFAR was required to use the double-crux conversational technique for reaching agreement, but if a rank-and-file member refused to they were treated as if they were being intellectually dishonest, while if a leader refused to they were just exercising their right to avoid double-cruxing. While I believe the technique is epistemically beneficial, the uneven demands on when it is used biases outcomes of conversations.

My guess is that there were asymmetries like this, and that they were important, and that they were not worse than most organizations (though that’s really not the right benchmark). Insofar as you have experience at other organizations (e.g. mainstream tech companies or whatnot), or have friends with such experience who you can ask questions of, I am curious how you think they compare.

On my own list of “things I would do really differently if I was back in 2012 starting CFAR again”, the top-ranked item is probably:

  • Share information widely among staff, rather than (mostly unconsciously/not-that-endorsedly) using lack-of-information-sharing to try to control people and outcomes.
  • Do consider myself to have some duty to explain decisions and reply to questions. Not “before acting”, because the show must go on and attempts to reach consensus would be endless. And not “with others as an authority that can prevent me from acting if they don’t agree.” But yes with a sincere attempt to communicate my actual beliefs and causes of actions, and to hear others’ replies, insofar as time permits.

I don’t think I did worse than typical organizations in the wider world, on the above points.

I’m honestly uncertain how much this is/isn’t related to the quoted complaint.

There were required sessions of a social/relational practice called circling (which kind of has a cult of its own). It should be noted that circling as a practice is meant to be egalitarian and symmetric, but circling within the context of CFAR had a weird power dynamic because subordinates would circle with the organizational leaders. The whole point of circling is to create a state of emotional vulnerability and openness in the person who is being circled. This often required rank-and-file members to be emotionally vulnerable to the leadership who perhaps didn't actually have their best interests at heart.

Duncan’s reply here is probably more accurate to the actual situation at CFAR than mine would be. (I wrote much of the previous paragraphs before seeing his, but endorsing Duncan’s on this here seems best.) If Pete wants to weigh in I would also take his perspective quite seriously here. I don’t quite remember some of the details.

As Duncan noted, “creating a state of emotional vulnerability and openness” is really not supposed to be the point of circling, but it is a thing that happens pretty often and that a person might not know how to avoid.

The point of circling IMO is to break all the fourth walls that conversations often skirt around, let the subtext or manner in which the conversation is being done be made explicit text, and let it all thereby be looked at together.

A different thing that I in hindsight think was an error (that I already had on my explicit list of “things to do differently going forward”, and had mentioned in this light to a few people) was using circling in the way we did at AIRCS workshops, where some folks were there to try to get jobs. My current view, as mentioned a bit above, is that something pretty powerfully bad sometimes happens when a person accesses bits of their insides (in the way that e.g. therapy or some self-help techniques lead people to) while also believing they need to please an external party who is looking at them and has power over them.

(My guess is that well-facilitated circling is fine at AIRCS-like programs that are less directly recruiting-oriented. Also that circling at AIRCS had huge upsides. This is a can of worms I don’t plan to go into right now, in the middle of this comment reply, but flagging it to make my above paragraph not overgeneralized-from.)

The overall effect of all this debugging and circling was that it was hard to maintain the privacy and integrity of your mind if you were a rank-and-file employee at CFAR.

I believe this was your experience, and am sorry. My non-confident guess is that some others experienced this and most didn’t, and that the impact on folks’ mental privacy was considerably more invasive than it would’ve been at a standard workplace, and that the impact on folks’ integrity was probably less bad than my guess at many mainstream workplaces’ impact but still a lot worse than the CFAR we ought to aim for.

Personally I am not much trying to maintain the privacy of my own mind at this point, but I am certainly trying to maintain its integrity, and I think being debugged by people with power over me would not be good for that.

The longer you stayed with the organization, the more it felt like your family and friends on the outside could not understand the problems facing the world, because they lacked access to the reasoning tools and intellectual leaders you had access to. This led to a deep sense of alienation from the rest of society. Team members ended up spending most of their time around other members and looking down on outsiders as "normies".

This wasn’t my experience at all, personally. I did have some feeling of distance when I first started caring about AI risk in ~2008, but it didn’t get worse across CFAR. I also stayed in a lot of contact with folks outside the CFAR / EA / rationalist / AI risk spheres through almost all of it. I don’t think I looked down on outsiders.

There was a rarity narrative around being part of the only organization trying to "actually figure things out", ignoring other organizations in the ecosystem working on AI safety and rationality and other communities with epistemic merit. CFAR/MIRI perpetuated the sense that there was nowhere worthwhile to go if you left the organization.

I thought CFAR and MIRI were part of a rare and important thing, but I did not think CFAR (nor CFAR + MIRI) was the only thing to matter. I do think there’s some truth in the “rarity narrative” claim, at CFAR, mostly via me and to a much smaller extent some others at CFAR having some of this view of MIRI.

There was a rarity narrative around the sharpness of Anna's critical thinking skills, which made it so that if Anna knew everything you knew about a concern and disagreed with you, there was a lot of social pressure to defer to her judgment.

I agree that this happened and that it was a problem. I didn’t consciously intend to set this up, but my guess is that I did a bunch of things to cause it anyhow. In particular, there’s a certain way I used to sort of take the ground out from under people when we talked, that I think contributed to this. (I used to often do something like: stay cagey about my own opinions; listen carefully to how my interlocutor was modeling the world; show bits of evidence that refuted some of their assumptions; listen to their new model; repeat; … without showing my work. And then they would defer to me, instead of having stubborn opinions I didn’t know how to shift, which on some level was what I wanted.)

People at current-CFAR respect my views still, but it actually feels way healthier to me now. Partly because I’m letting my own views and their causes be more visible, which I think makes it easier to respond to. And because I somehow have less of a feeling of needing to control what other people think or do via changing their views.

(I haven't checked the above much against others' perceptions, so would be curious for anyone from current or past CFAR with a take.)

There was rampant use of narrative warfare (called "narrativemancy" within the organization) by leadership to cast aspersions and blame on employees and each other. There was frequent non-ironic use of magical and narrative schemas which involved comparing situations to fairy-tales or myths and then drawing conclusions about those situations with high confidence. The narrativemancer would operate by casting various members of the group into roles and then using the narrative arc of the story to make predictions about how the relationship dynamics of the people involved would play out. There were usually obvious controlling motives behind the narrative framings being employed, but the framings were hard to escape for most employees.

I believe this was your experience, mostly because I’m pretty sure I know who you are (sorry; I didn’t mean to know and won’t make it public) and I can think of at least one over-the-top (but sincere) conversation you could reasonably describe at least sort of this way (except for the “with high confidence”, I guess, and the "frequent"; and some other bits), plus some repeated conflicts. I don’t think this was a common experience, or that it happened much at all (or at all at all?) in contexts not involving you, but it’s possible I’m being an idiot here somehow in which case someone should speak up. Which I guess is to say that the above bullet point seems to me, from my experiences/observations, to be mostly or almost-entirely false, but that I think you’re describing your experiences and guesses about the place accurately and that I appreciate you speaking up.

[all the other bullet points] I agree with parts and disagree with parts, but they seemed mostly less interesting than the above.

Anyhow, thanks for writing, and I’m sorry you had bad experiences at CFAR, especially about the fairly substantial parts of the above bad parts that were my fault.

I expect my reply will accidentally make some true points you’re making harder to see (as well as hopefully adding light to some other parts), and I hope you’ll push back in those places.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T14:08:17.395Z · LW · GW

I, too, asked people questions after that incident and failed to locate any evidence of drugs.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-17T18:44:16.427Z · LW · GW

Which thing are you claiming here? I am a bit confused by the double negative (you're saying there's "widely known evidence that it isn't true that representatives don't even notice when abuse happens", I think; might you rephrase?).

I've made stupid and harmful errors at various times, and e.g. should've been much quicker on the uptake about Brent, and asked more questions when Robert brought me info about his having been "bad at consent" as he put it. I don't wish to be and don't think I should be one of the main people trying to safeguard victims' rights; I don't think I have the needed eyes/skill for it. (Separately, I am not putting in the time and effort required to safeguard a community of many hundreds, nor is anyone that I know of, nor do I know if we know how or if there's much agreement on what kinds of 'safeguarding' are even good ideas, so there are whole piles of technical debt and gaps in common knowledge and so on here.)

Nonetheless, I don't and didn't view abuse as acceptable, nor did I intend to tolerate serious harms. Parts of Jay's account of the meeting with me are inaccurate (differ from what I'm really pretty sure I remember, and also from what Robert and his husband said when I asked them for their separate recollections). (From your perspective, I could be lying, in coordination with Robert and his husband who also remember what I remember. But I'll say my piece anyhow. And I don't have much of a reputation for lying.) If you want details on how the me/Robert/Jay interaction went as far as I can remember, they're discussed in a closed FB group with ~130 members that you might be able to join if you ask the mods; I can also paste them in here I guess, although it's rather personal/detailed stuff about Robert to have on the full-on public googleable internet so maybe I'll ask his thoughts/preferences first, or I'm interested in others' thoughts on how the etiquette of this sort of thing ought to go. Or could PM them or something, but then you skip the "group getting to discuss it" part. We at CFAR brought Julia Wise into the discussion last time (not after the original me/Robert/Jay conversation, but after Jay's later allegations plus Somni's made it apparent that there was something more serious here), because we figured she was trustworthy and had a decent track record at spotting this kind of thing.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-17T01:12:28.508Z · LW · GW

To be clear, a lot of what I find so relaxing about Jessica’s post is that my experience reading it is of seeing someone who is successfully noticing a bunch of details in a way that, relative to what I’m trying to track, leaves room for lots of different things to get sorted out separately.

I just got an email that led me to sort of triggeredly worry that folks will take my publicly agreeing with the OP to mean that I e.g. think MIRI is bad in general. I don’t think that; I really like MIRI and have huge respect and appreciation for a lot of the people there; I also like many things about the CFAR experiment and love basically all of the people who worked there; I think there’s a lot to value across this whole space.

I like the detailed specific points that are made in the OP (with some specific disagreements; though also with corroborating detail I can add in various places); I think this whole “how do we make sense of what happens when people get together into groups? and what happened exactly in the different groups?” question is an unusually good time to lean on detail-tracking and reading comprehension.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-16T23:00:08.882Z · LW · GW

I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes". This part of the plan was the same.

Re: “this part of the plan was the same”: IMO, some at CFAR were interested in helping some subset of people become Elon Musk, but this is different from the idea that everyone is supposed to become Musk and that that is the plan. IME there was usually mostly (though not invariably, which I expect led to problems; and for all I know “usually” may also have been the case in various parts and years of Leverage) acceptance for folks who did not wish to try to change themselves much.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-16T22:53:04.205Z · LW · GW

Here is a thread for detail disagreements, including nitpicks and including larger things, that aren’t necessarily meant to connect up with any particular claim about what overall narratives are accurate. (Or maybe the whole comment section is that, because this is LessWrong? Not sure.)

I’m starting this because local validity semantics are important, and because it’s easier to get details right if I (and probably others) can consider those details without having to pre-compute whether those details will support correct or incorrect larger claims.

For me personally, part of the issue is that though I disagree with a couple of the OP's details, I also have some other details that support the larger narrative which are not included in the OP, probably because I have many experiences in the MIRI/CFAR/adjacent communities space that Jessicata doesn’t know and couldn’t include. And I keep expecting that if I post details without these kinds of conceptualizing statements, people will use this to make false inferences about my guesses about higher-order-bits of what happened.

Comment by AnnaSalamon on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-16T22:23:14.121Z · LW · GW

FWIW, the above matches my own experiences/observations/hearsay at and near MIRI and CFAR, and seems to me personally like a sensible and correct way to put it together into a parsable narrative. The OP speaks for me. (Clarifying at a CFAR colleague's request that here and elsewhere, I'm speaking just for myself and not for CFAR or anyone else.)

(I of course still want other conflicting details and narratives that folks may have; my personal 'oh wow this puts a lot of pieces together in a parsable form that yields basically correct predictions' level is high here, but insofar as I'm encouraging anything because I'm in a position where my words are loud invitations, I want to encourage folks to share all the details/stories/reactions pointing in all the directions.) I also have a few factual nitpicks that I may get around to commenting, but they don’t subtract from my overall agreement.

I appreciate the extent to which you (Jessicata) manage to make the whole thing parsable and sensible to me and some of my imagined readers. I tried a couple times to write up some bits of experience/thoughts, but had trouble managing to say many different things A without seeming to also negate other true things A’, A’’, etc., maybe partly because I’m triggered about a lot of this / haven’t figured out how to mesh different parts of what I’m seeing with some overall common sense, and also because I kept anticipating the same in many readers.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-14T20:08:08.951Z · LW · GW

I imagine a lot of people want to say a lot of things about Leverage and the dynamics around it, except it’s difficult or costly/risky or hard-to-imagine-being-heard-about or similar.

If anyone is up for saying a bit about how that is for you personally (about what has you reluctant to try to share stuff to do with Leverage, or with EA/Leverage dynamics or whatever, that in some other sense you wish you could share — whether you had much contact with Leverage or not), I think that would be great and would help open up space.

I’d say err on the side of including the obvious.

Comment by AnnaSalamon on Common knowledge about Leverage Research 1.0 · 2021-10-14T19:59:47.197Z · LW · GW

That's right; I am daydreaming of something very difficult being brought together somehow, in person or in writing (probably slightly less easily-visible-across-the-whole-internet writing, if in writing). I’d be interested in helping but don’t have the know-how on my own to pull it off. I agree with you there’re lots of ways to try this and make things worse; I expect it's key to have very limited ambitions and to be clear about how very much one is not attempting/promising.

Comment by AnnaSalamon on Common knowledge about Leverage Research 1.0 · 2021-10-14T15:26:11.390Z · LW · GW

Yes.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-14T09:00:21.678Z · LW · GW

I'm also not a fan of requests that presume that the listener ...

From my POV, requests, and statements of what I hope for, aren't advice. I think they don't presume that the listener will want to do them or will be advantaged by them, or anything much else about the listener except that it's okay to communicate my request/hope to them. My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people, and I guess I'm also assuming that my words won't be assumed to be a trustworthy voice of authorities that know where the person's own interests are, or something. That I can be just some person, hoping and talking and expecting to be evaluated by equals.

Is it that you think these assumptions of mine are importantly false, such that I should try to follow some other communication norm, where I more try to only advocate for things that will turn out to be in the other party's interests, or to carefully disclaim if I'm not sure what'll be in their interests? That sounds tricky; I'm not people's parents and they shouldn't trust me to do that, and I'm afraid that if I try to talk that way I'll make it even more confusing for anyone who starts out confused like that.

I think I'm missing part of where you're coming from in terms of what good norms are around requests, or else I disagree about those norms.

you have way more private info than me, so perhaps...

I don't have that much relevant-info-that-hasn't-been-shared, and am mostly not trying to rely on it in whatever arguments I'm making here. Trying to converse more transparently, rather.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-14T03:34:34.832Z · LW · GW

I would like it if we showed the world how accountability is done

So would I. But to do accountability (as distinguished from scapegoating, less-epistemic blame), we need to know what happened, and we need to accurately trust each other (or at least most of each other) to be able to figure out what happened, and to care what actually happened.

The “figure out what happened” and “get in a position where we can have a non-fucked conversation” steps come first, IMO.

I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.

Though, accountability is admittedly a weak point of mine, so I might be missing/omitting something. Maybe spell it out if so?

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-14T03:16:37.131Z · LW · GW

Thanks for the clarifying question, and the push-back. To elaborate my own take: I (like you) predict that some (maybe many) will take shared facts in a politicized way, will use them as an excuse for false or uncareful judgments, etc. I am not guaranteeing, nor predicting, that this won’t occur.

I am intending to myself do inference and conversation in a way that tries to avoid these “politicized speech” patterns, even if it turns out politically costly or socially awkward for me to do so. I am intending to make some attempt (not an infinite amount of effort, but some real effort, at some real cost if needed) to try to make it easier for others to do this too, and/or to ask it of others who I think may be amenable to being asked this, and/or to help coalesce a conversation in what I take to be a better pattern if I can figure out how to do so. I also predict, independently of my own efforts, that a nontrivial number of others will be trying this.

If “reputation management” is a person’s main goal, then the small- to medium-sized efforts I can hope to contribute toward a better conversation, plus the efforts I’m confident in predicting independently of mine, would be insufficient to mean that a person’s goals would be well-served in the short run by following my request to avoid “refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.”

However, I’m pretty sure most people in this ecosystem, maybe everyone, would deep down like to figure out how to actually see what kinds of fucked up individual and group and inter-group dynamics we’ve variously gotten into, and why and how, so that we can have a realer shot at things going forward. And I'm pretty sure we want this (in the deeper, long-run sense) much more than we want short-run reputation management. Separately, I suspect most people’s reputation-management will be kinda fucked in the long run if we don’t figure out how to make actual progress on the world (vs creating local illusions of the same), although this last sentence is more controversial and less obvious. So, yeah, I’m asking people to try to engage in real conversation with me and others even though it’ll probably mess up parts of their/our reputation in the short run, and even though probably many won't manage to join in this in the short run. And I suspect this effort will be good for many people’s deeper goals despite the political dynamics you mention.

Here’s to trying.

Comment by AnnaSalamon on Common knowledge about Leverage Research 1.0 · 2021-10-13T17:30:08.968Z · LW · GW

???? I'm so confused about what happened here. The aliens part (as stated) isn't a red flag for me, but the Kant thing seem transparently crazy to me. I have to imagine there's something being lost in translation here, and missing context for why people didn't immediately see that this person was having a mental breakdown?

FWIW, my own experience is that people often miss fairly blatant psychotic episodes; so I'm not sure how Leverage-specific the explanation needs to be for this one. For example, once I came to believe that an acquaintance was having a psychotic episode and suggested he see a psychiatrist; the psychiatrist agreed. A friend who'd observed most of the same data I had asked me how I'd known. I said it was several things, but that the bit where our acquaintance said God was talking to him through his cereal box was one of the tip-offs from my POV. My friend's response was "oh, I thought that was a metaphor." I know several different stories like this one, including a later instance where I was among those who missed what in hindsight was fairly blatant evidence that someone was psychotic, none of which involved weird group-level beliefs or practices.

Comment by AnnaSalamon on How to think about and deal with OpenAI · 2021-10-13T04:32:33.426Z · LW · GW

I think we should not be hesitant to talk about this in public. I used to be of the opposite opinion, believing-as-if there was a benevolent conspiracy that figured out which conversations could/couldn’t nudge AI politics in useful ways, whose upsides were more important than the upsides of LWers/etc. knowing what’s up. I now both believe less in such a conspiracy, and believe more that we need public fora in which to reason because we do not have functional private fora with memory (in the way that a LW comment thread has memory) that span across organizations.

It’s possible I’m still missing something, but if so it would be nice to have it spelled out publicly what exactly I am missing.

I agree with Lincoln Quirk’s comment that things could turn into a kind of culture war, and that this would be harmful. It seems to me it’s worth responding to this by trying unusually hard (on this or other easily politicizable topics) to avoid treating arguments like soldiers. But it doesn’t seem worthwhile to me to refrain from honest attempts to think in public.

Comment by AnnaSalamon on How to think about and deal with OpenAI · 2021-10-13T03:40:03.297Z · LW · GW

I disagree with Lincoln's comment, but I'm confused that when I read it just now it was at -2; it seems like a substantive comment/opinion that deserves to be heard and part of the conversation.

If comments expressing some folks' actual point of view are downvoted below the visibility threshold, it'll be hard to have good substantive conversation.

Comment by AnnaSalamon on Zoe Curzi's Experience with Leverage Research · 2021-10-13T03:25:15.689Z · LW · GW

More thoughts:

I really care about the conversation that’s likely to ensue here, like probably a lot of people do.

I want to speak a bit to what I hope happens, and to what I hope doesn’t happen, in that conversation. Because I think it’s gonna be a tricky one.

What I hope happens:

  • Curiosity
  • Caring
  • Compassion
  • Interest in understanding both the specifics of what happened at Leverage, and any general principles it might hint at about human dynamics, or human dynamics in particular kinds of groups.

What I hope doesn’t happen:

  • Distancing from uncomfortable data.
  • Using blame and politics to distance from uncomfortable data.
  • Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.

This is LessWrong; let’s show the world how curiosity/compassion/inquiry is done!

Comment by AnnaSalamon on Common knowledge about Leverage Research 1.0 · 2021-10-13T03:24:29.172Z · LW · GW

CFAR recently hosted a “Speaking for the Dead” event, where a bunch of current and former staff got together to try to name as much as we could of what had happened at CFAR, especially anything that there seemed to have been (conscious or unconscious) optimization to keep invisible.

CFAR is not dead, but we took the name anyhow from Orson Scott Card’s novel by the same name, which has quotes like:

“...and when their loved ones died, a believer would arise beside the grave to be the Speaker for the Dead, and say what the dead one would have said, but with full candor, hiding no faults and pretending no virtues.”

“A strange thing happened then. The Speaker agreed with her that she had made a mistake that night, and she knew when he said the words that it was true, that his judgment was correct. And yet she felt strangely healed, as if simply saying her mistake were enough to purge some of the pain of it. For the first time, then, she caught a glimpse of what the power of speaking might be. It wasn’t a matter of confession, penance, and absolution, like the priests offered. It was something else entirely. Telling the story of who she was, and then realizing that she was no longer the same person. That she had made a mistake, and the mistake had changed her, and now she would not make the mistake again because she had become someone else, someone less afraid, someone more compassionate.”

“... there were many who decided that their life was worthwhile enough, despite their errors, that when they died a Speaker should tell the truth for them.”

CFAR’s “speaking for the dead” event seemed really good to me. Healing, opening up space for creativity. I hope the former members of Leverage are able to do something similar. I really like and appreciate Zoe sharing all these details, and I hope folks can meet her details with other details, all the details, whatever they turn out to have been.

I don't know what context permits that kind of conversation, but I hope all of us on the outside help create whatever kind of context it is that allows truth to be shared and heard.