Posts

Hints about where values come from 2023-10-18T00:07:58.051Z
BBE W1: HMCM and Notetaking Systems 2020-06-09T19:20:31.975Z
BBE W1: Personal Notetaking Desiderata Walkthrough 2020-06-09T19:19:05.860Z
Build a Better Exobrain: Week 0 Overview 2020-06-09T19:09:14.177Z
Coping and Cultures 2020-05-06T22:49:13.428Z
Does SARS-CoV-2 utilize antibody-dependent enhancement? 2020-03-14T22:46:38.293Z
Bioinfohazards 2019-09-17T02:41:30.175Z
Spiracular's Shortform Feed 2019-06-13T20:36:26.603Z

Comments

Comment by Spiracular on Some thoughts on the cults LW had · 2023-02-27T21:05:42.169Z · LW · GW

I like something about this formulation? No idea if you have time, but I'd be interested if you expanded on it.

I'm not convinced "high-energy" is the right phrasing, since the attributes (as I see them) seem to be:

  • Diverges from current worldview
  • High-confidence
    • Expressed or uptaken, in a way that allows little space for uncertainty/wavering. May take a dark attitude on ensembling it with other worldviews.
    • May have a lot of internal consistency.
      • "Unreasonable internal consistency" is (paradoxically) sometimes a marker for reality, and sometimes a tell that something is truly mad and self-reinforcing.
  • Pushes a large change in behavior, and pushes it hard
  • The change is costly, at least under your original paradigm
  • The change may be sticky (& here are some possible mechanisms)
    • Activates morality or tribal-affiliation concerns
      • "If you hear X, and don't believe X and convert X into praxis immediately... then you are our enemy and are infinitely corrupt" or similar attitudes and beliefs
    • Hard to get data that updates you out of the expensive behavior
      • ex: Ziz using revenge to try to change the incentive landscape in counterfactual/multiverse-branching universes, which you cannot directly observe? Can't observe = no clear way to learn if this isn't working, and update out. (I believe this is how she justifies resisting arrest, too.)
    • The change in behavior comes with an exhortation for you to do lots of things that spread the idea to other people.
      • This is sometimes an indicator for highly-contagious memes, that were selected more for virulence than usefulness to the bearer. (Not always, though.)
    • Leaves you with too little slack to re-evaluate what you've been doing, or corrupts your re-evaluation metrics.
      • ex: It feels like you'd need to argue with someone who is hard to argue with, or else you've dismissed it prematurely. That would be really bad. You model that argument as likely to go poorly, and you really don't want to...
        • This sentiment shows up really commonly among people deeply affected by "reality warper" people and their beliefs? It shows up in normal circumstances, too. It seems much, much more intense in "reality warper" cases, though.

I would add that some people seem to have a tendency to take what is usually a low-energy meme in most hands, and turn it into a high-energy form? I think this is an attribute that characterizes some varieties of charisma, and is common among "reality warpers."

(Awkwardly, I think "mapping high-minded ideas to practical behaviors" is also an incredibly useful attribute of highly-practical highly-effective people? Good leaders are often talented at this subskill, not just bad ones. Discernment in what ideas you take seriously, can make a really big difference in the outcomes, here.)

Some varieties of couching or argumentation will push extreme change in behavior and action, harder than others, for the same idea. Some varieties of receptivity and listening, seem more likely to uptake ideas as high-energy memes.

I feel like Pascal's Mugging is related, but not the only case. Ex: Under Utilitarianism, you can also justify a costly behavior by arguing from very high certainty of a moderate benefit. However, this is usually not as sticky, and it is more likely to rapidly right itself if future data disputes the benefit.

Comment by Spiracular on [Link] A community alert about Ziz · 2023-02-26T20:00:04.419Z · LW · GW

At the risk of this looking too much like me fighting a strawman...

Cults may have a tendency to interact and pick up adaptations from each other, but it seems wrong to operate on the assumption that they're all derivatives of one ancestral "proto-cult" or whatever. Cult leaders are not literal vampires, where you only become a cult leader by getting bitten by a previous cult leader.

It's a cultural attractor, and a cult is a social technology simple enough that it can be spontaneously re-derived. But cults can sometimes pick up or swap beliefs & virulence factors with each other, when they interact. And I do think Ziz picked up a few beliefs from the Vassarite cluster.

I can dig up cases in Ziz's writing where Ziz has interacted with Vassar before, or may have indirectly learned things from him through Alice.

That doesn't make Vassar directly responsible for Ziz's actions, and I don't think he is.

I do want to spell this out, because I'm reading a subtle implication here, that I want to push back against.

Comment by Spiracular on Spiracular's Shortform Feed · 2022-07-14T15:33:27.520Z · LW · GW

Give a man a fish, feed him for a day. Teach a man to fish, feed him for a lifetime.

Cultivate someone with an earnest and heartfelt interest in fishing, until he learns how to grow his own skills and teach himself... and you can probably feed at least 12 people.

Which just might finally allow the other 11 to specialize into something more appealing to them than subsistence fishing.

Comment by Spiracular on Fighting in various places for a really long time · 2022-06-16T01:29:34.956Z · LW · GW

Okay, a lot of this commentary hit "sideways." Let me see if I can unpack some of this.

0

A lot of what's missing is meditation.

TL;DR: It's a meditation metaphor movie, with some heavily Eastern themes and symbology.

I'm about 99% sure that at its thematic core, it's an "enlightenment/meditation metaphor" movie. I thought it does a really good job at being that, but that part is understandably not going to hit with everyone.

Did you notice that the damn circle has at least 3 different meanings or references, which all tie in neatly with each other? One of the major ones, that I think some people are likely to miss, is Ensō.

Ensō has a pile of deep associations and meanings in Zen, many of which they also touch on in other places in the movie. I thought they unpacked that symbol pretty masterfully, and that was pretty central to my enjoyment and understanding of the movie. However, it is something I expect a lot of western audiences to miss completely.

(I have not found a good extensive commentary to link, that unpacks this to my satisfaction. But this guy on twitter seems to get it.)

...on the art level, it also struck me as pretty chaotic. It's a flashy fighting movie, a family comedy, some cringe humor, a bit of an art movie... put it down for "a little of everything," really?

If I'd missed the theme, or God Forbid, if I had mostly tried to assess its merits in terms of how often she's making sensible or strategic goal-directed moves? The movie probably would have landed more as loud silly nonsense.

Some people like loud silly nonsense! I don't think I would have found just the loud silly nonsense all that compelling, though.

This one came with a really strong core theme, that I do think you missed or misunderstood.

1

Everything Else.

"The violence is pointless": The violence being pointless, is actually part of the point. While it's used to generate some initial attention and interest (...in people who find that interesting for some reason), the violence is also deliberately pointless, and the protagonist is supposed to slowly realize this and grow out of it.

(...however, the movie did handle grief with less maturity than a 5-year-old, basically by just ignoring it. I have no idea why! Maybe they really didn't want to slow the movie down? Bleh, even that reason doesn't feel entirely compelling to me, and it did undercut the movie for me a bit.)

"Weirdly NON-attention-getting": I'm pretty sure that the late stages of the movie are actively trying to be held in broad/diffuse attention, not single-point laser-attention. I think so, anyway?

This is kinda part of its whole deal as a "meditation metaphor" movie. Also ties in with its commentary on "looking around, even when what's immediately in front of you seems extremely urgent," as echoed in stuff like looking up from the circled receipt.

If that diffuse state-of-mind is uncomfortable or somnolent for you? You are not alone in that! It's a pretty common sentiment, actually. There are a whole lot of people who complain about finding parts of meditation uncomfortable or sleep-inducing, especially when it gets to the "broad/diffuse attention" step.

(...although this doesn't necessarily rule out that you found the movie boring for unintended reasons, though! To which, shrug it's cool if you didn't like it.)

Comment by Spiracular on Monks of Magnitude · 2022-02-18T16:03:15.427Z · LW · GW

Just distilling some relevant intuitions:

(This is all me thinking about the problem, and I make no claim that others will align with me on these.)

Grant-making

  • Typically, clusters of applications are assessed in 1 rung lower time-frames? Some topics get bumped up to same-tier of 1-tier-higher assessment, if assessors feel it advised.
    • ex: 10E4 is scheduled such that one can submit a grant application, do a 10E3 round, and come back to an answer.
      • cont ex: If the grant-making is running late, there are a bunch of 10E2 jobs opened up to help speed finishing it along? (The number of these is tracked, and marked on the 10E3 assessor's record, but this is generally considered better than running late.)
  • Low-order grant-making is mostly fast-turnaround for small amounts of money, and involves a lot of funding-based-on-past-accomplishments. About 1/2 of it gets funded informally by random friends, drawing from built-up social credit.
    • Involves "rant-branch" and "brief-branch" application rounds (about 10x as many are funded through brief-branch). Rant-branch has higher word limits, and slower turn-around.
      • Rant is for "this is a potentially-valuable idea I haven't distilled yet, is it worth putting in additional time to investigate/summarize/distill it?" It's reasonably common for it to get funded to 1 tier higher than the original grant-seeker lodged.
  • Your first round of 10E1s are funded by the state. There are also periodic "jubilees" where everyone gets a free 10E1 (solving starter problems)
  • High-order grant-making is slower in turnaround, and includes a lot of questions about order-of-magnitude and scope-of-problem/scope-of-potential-solution.
    • Writing summary sequences or field-wide Review articles are one potential way higher-order people demonstrate legible competence.
    • Some 1E(x) people spend their lives dedicated to distilling ~1-10 1E(x+1) persons for the sake of 1E(x-1) (sometimes called ghostwriters)

Some even-less-ordered thoughts on this:

  • There's probably some kind of rotating board full of scattered pre-structured 1E1 and 1E2 job listings that people have pre-funded, which fast-turnaround people can pull from if they don't have a unique 1E1 or 1E2 idea themselves.
    • (1E0 might be too small for this? Hm.)
    • Similarly, 1E2s can pool together the resources to fund a 1E3 on an issue they find relevant. This has somehow been streamlined, method TBD.
  • Somebody needs to be able to restructure and break a subset of 1E(x) problems into 1E(x-1) and 1E(x-2) jobs.
    • People who do this successfully, should probably gain a fair amount of prestige for it (especially if they break it into a smaller time-block in total, or have enabled substantial parallelization of something time-sensitive)
  • 1E(4)s and higher often develop obnoxiously dense tangles of infralanguages and jargon, as a manner of course. This is treated as normal. 1E(4)s who remain legible to 1E(2)s (especially those who are able to translate other 1E(4)'s work to something similarly legible) are called "bridges," and are appropriately prized.
    • This overlaps some with ghostwriters, but is also its own distinct sub-category of researchers. (It's important that bridges aren't all subordinated into 1E(3) work.)
    • There are occasionally bridge & ghostwriter conferences, which tend to be followed by a tidal wave of applications to write or update various dictionaries and encyclopedias.
    • Similar to the current world: Some fields/jargon-sets/infralanguages that have solved the onboarding problem, are widespread enough to have their own conferences, specialized grant ecosystems, assigned ghostwriters, etc.

Comment by Spiracular on Quis cancellat ipsos cancellores? · 2021-12-21T10:13:54.561Z · LW · GW

Geoff tweeted about it, I forwarded that to you.

But after thinking a bit more, including hearing a little more background context from Aella? I think the tension here predates that, and that this is predominantly a reaction to the MAPLE post. Please treat this mostly as a side-note.

There's no recording anymore, but I actually appreciated him on the stream. My overall take on the author is "has a lot of compassion, but I don't always trust his discernment."

Comment by Spiracular on Quis cancellat ipsos cancellores? · 2021-12-20T00:38:11.615Z · LW · GW

Yeah, some of this gets the facts wrong, or a bit off. I don't think this was fact-checked very competently, and in this sort of context, that does matter.

(ex: I can confirm that mittenscautious was not Aella, although Aella was indeed a housemate to Persephone.)


...I hate dishing based on something this speculative, but I do think it's a potentially relevant piece of context...

Aella and Geoff (Executive Director of Leverage) have a lot of enmity towards each other. This is just straightforwardly true.

If I am identifying the author of this Aella-attacking post, correctly? The author of this post was a special guest on one of Geoff Anders' Twitch streams.


I'm normally a mistake theorist, but I find it really tempting to interpret this as the end result of talking to a really skewed sample of people.

(And I might be assigning better-than-even odds that Geoff was involved in that process, somehow.)

UPDATE: I'm updating a few steps in the direction of "I may have gotten some of the causality here backwards." His tension with Aella predates that stream. He disliked Aella's MAPLE post, and that might be part of why he and Geoff got in touch.

Comment by Spiracular on In Defense of Attempting Hard Things, and my story of the Leverage ecosystem · 2021-12-19T20:18:26.767Z · LW · GW

It feels worth pointing out that Universities seem to try to set up this sort of absurdly protective bubble, by design. Uni extracts sometimes-exorbitant rent, while doing so; Leverage was at least usually paying people salaries.

Meanwhile, a lot of US bureaucracy appears almost... tailor-made to make life more complicated, with a special exception to that reserved for "full-time employees of large corporations"? (I think that for historic reasons, some of their bureaucratic complications are consistently outsourced to their company to handle.)

Against this societal backdrop, I find it hard to fault Cathleen or Leverage for trying what they did. While also not being too surprised, that it led to some dependence issues.

(Maybe having a hard "2 years" limit, and accepting a little less "heroic responsibility," would have downgraded a lot of issues to just "University dorm level.")

Comment by Spiracular on In Defense of Attempting Hard Things, and my story of the Leverage ecosystem · 2021-12-19T08:02:37.026Z · LW · GW

Seconded.

I really appreciate Cathleen being willing to talk about it, even given the reasonable expectation that some people are going to... be jerks about it, misinterpret things, take things out of context, and engage in ways she won't like. Or even just fail to engage in ways that would be good for her?

I don't always see eye-to-eye with Cathleen, but she poured a lot into this project. She is not exaggerating when she conveys that she was responsible for a really giant pile of ops and maintenance tasks at Leverage.

(I'm not sure how Leverage handled her sick days, but I would be surprised if it wasn't a whole thing. That feels like one way to point to just how large an amount she ended up being personally responsible for. One of the most grounded and productive people there.)

I'm sad to hear that this project hurt her, in the ways it did? (ex: overwork, lack of support, high interpersonal conflict environment)

I'm somewhat glad that she hasn't completely given up on the idea of well-intentioned ambitious projects, and I'm really happy that it sounds like she has a solid bubble of people now.

This is a lot of information, and there was a cost to writing it up, I'm sure. I can't really weigh in on whether it was worth what she gave up to do so, but I'm grateful that she shared it.

Comment by Spiracular on In Defense of Attempting Hard Things, and my story of the Leverage ecosystem · 2021-12-19T07:09:06.167Z · LW · GW

What to do when society is wrong about something?

I thought this aside was an extremely good snapshot of a class of problem that I have seen come up in other contexts as well.

Briefly: People have a tendency to develop social bubbles that are, in a key way or two, way more comfortable or healthy for them. If you get strongly immersed in one, then "the rest of society" starts looking unpleasant or unhealthy, and what do you do with that when that happens?

I don't find it easy to answer! But I'd be curious to hear from other people about variants of this scenario that they've run into, how they've handled it, and how that has gone.

(It sounds like Leverage had a bit of this dynamic, combined with a feeling that the norms were up for grabs. I had not previously pegged that down about Leverage, but having that context feels helpful for making sense of it.)

Comment by Spiracular on Memetic Hazards in Videogames · 2021-12-01T13:53:35.639Z · LW · GW

Some of the other F-grade feed-ins, for completeness's sake...

  • A lot of people went to a bad high school. Some have learned helplessness, and don't know how to study. Saw the occasional blatant cheating habit, too.
    • Community colleges know this, and offer some courses that are basically "How to study"
    • So much of many middle-class cultures is just hammering "academics matter" and "advice on how to study or network" into your brain. Most middle-class students still manage to miss the memo on 1-2 key study skills or resources, though. Maybe everyone should go to "how to study" class...
      • Personally? As a teen, I didn't know how to ask for help, and I couldn't stand sounding like an idiot. Might have saved myself some time, if I'd learned how to do that earlier.
  • Nobody uses office-hours enough.
    • At worst, it's free tutoring. At best, it's socially motivating and now the teacher feels personally invested in your story and success.
    • "High-achievers who turned an early D into an A" are frequently office-hour junkies.
    • Someone with a big family crisis, is probably still screwed even if they go to office hours. Past some threshold, people should just take a W.
  • A few people just genuinely can't do math, in a "it doesn't fit in their brain" kind of way
    • My mom thinks this exists, but only accounts for <1%

Comment by Spiracular on Memetic Hazards in Videogames · 2021-12-01T13:45:43.377Z · LW · GW

TL;DR: As people get older, it's common for people to acquire responsibilities that make it hard to focus on school (ex: kids, elderly parents). Fairly high confidence that this is a big factor in community college grades.


As someone whose parent teaches basic math at community college, and who attended community college for 2 years myself (before transferring)...

I have absolutely seen some people pick up these skills late. The work ethic & directedness of community college high-achievers is often notably better than that of people in their late teens.

They also usually have healthier attitudes around failure (relative to the high-achieving teens), which sometimes makes them better at recovering from an early bad grade. Relatedly, the UCs say their CC transfers have much lower drop-out rates.

One major "weakness" I can think of, is that adults are probably going in fully-cognizant that school feels like an "artificial environment." Some kids manage to not notice this until grad school.


From my mom's work, I know that the grading distribution in high-school-remedial math classes is basically bimodal: "A"s and "F"s, split almost 50-50.

The #1 reason my mom cites for this split, is probably a responsibilities and life-phase difference?

A lot of working class adults are incredibly busy. Many are under more stress and strain than they can handle, at least some of the time. (The really unlucky ones, are under more strain than they can really handle basically all of the time, but those are less likely to try to go to community college.)

If someone is holding down a part-time job, doesn't have a lot in savings, is married, is taking care of a kid, and is caring for their elderly mother? That basically means a high load of ambient stress and triage, and also having 5 different avenues for random high-priority urgent crises (ex: health problems involving any of these) to bump school out of the prioritization matrix.

(Notably, "early achievers" on the child-having front usually also end up a bit crippled academically. I think that's another point in favor of "life phase" or "ambient responsibility load" theory being a big deal here, in a way that competes with or even cannibalizes academic focus/achievement.)

My take-away is that if you have a bunch in savings, and don't have a kid, then my bet is that learning a lot of curricula late is likely to not be a problem. Might actually be kinda fun?

But if you're instead juggling a dozen other life responsibilities, then God help you. If your class has tight deadlines, you may have to conduct a whole lot of triage to make it work.

Comment by Spiracular on Frame Control · 2021-11-28T23:11:08.850Z · LW · GW

There's actually 1 additional dynamic, that I can't quite put my finger on, but here's my attempt.

It's shaped something like...

If you are a pretty powerful person, and you take a desperate powerless person, and you hand them something that could indiscriminately destroy you? That is very likely to be a horrible mistake that you will one day regret. It's a bit like handing some rando a version of The One Ring, which is specific to controlling you.

Unless you had really good judgement and the person you handed it to is either Tom Bombdil or a hobbit who manages to spastically fling it into a volcano even despite himself? It is likely to corrupt them, and they are probably going to end up doing terrible things with it.

Never jump someone from 0 to 11 units of power over you, until you've seen what they're like with a 3 or a 5.

Comment by Spiracular on Frame Control · 2021-11-28T22:19:52.831Z · LW · GW

I think I have seen the "sanity-check"/"sanity-guillotine" thing done well. I have also seen it done poorly, in a way that mostly resembles the "finger-trap" targeting any close friends who notice problems.

For actual accountability/protection? "Asking to have it reported publicly/to an outside third party" seems to usually work better than "Report it to me privately."

(A very competent mass-crowd-controller might have a different dynamic, though; I haven't met one yet.)


For strong frame-controllers? "Encouraging their students to point out a vague category of issue in private," has a nasty tendency to speed up evaporative cooling, and burns out the fire of some of the people who might otherwise have reported misbehavior to a more-objective third-person.

It can set up the frame-controller as the counter/arbiter of "how many real complaints have been leveled their way about X" (...which they will probably learn to lie about...), frames them as "being careful about X," and gives the frame-controller one last pre-reporting opportunity to re-frame-control things in the sender.

I think the "private reporting" variant is useful to protect a leader from unpleasant surprises, gives them a quick chance to update out of a bad pattern early on, and is slightly good for that reason. But I think as an "accountability method," this is simply not a viable protection against an even halfway-competent re-framer.


I think the gold-standard for actual accountability, is closer to the "outside HR firm" model. Having someone outside your circle, who people report serious issues to, and who is not primarily accountable to you.

Not everyone has access to the gold-standard, though.

When I single a person out for my future accountability? I pick people who I view as (high-integrity low-jealousy) peers-or-higher, AND/OR people on a totally different status-ladder. I want things set up such that even a maximally-antagonistic me, probably has no way to easily undermine them.

If I have a specific concern, I give them a very clear sense in advance of: "Here is a concrete threshold condition. If I ever trigger this, please destroy me unless I remove myself from a position of power over others. I am asking you in specific (negates bystander effect). I will thank you later."

(Probably also hand them something that would make it easier to selectively shut me down, such as "A signed letter from myself." Concrete thresholds are useful, because it is hard to frame-obscure your way out of hard facts.)

I think this variant requires knowing, and trusting, someone pretty non-petty and non-jealous who has a higher bar of integrity than you do. I do kinda think most people's judgement around identifying those is terrible, unfortunately?

But I think the drawbacks of this are at least... different. And I generally take that shape of thing, as a strong signal of real vulnerability and accountability.

Comment by Spiracular on Spiracular's Shortform Feed · 2021-11-08T18:40:48.216Z · LW · GW

Working out how this applies to other fields is left as an exercise to the reader, because I'm lazy and the space of places I use this metaphor is large (and paradoxically, so overbuilt that it's probably quite warped).

Also: minimally-warped lenses aren't always the most useful lens! Getting work done requires channeling attention, and doing it disproportionately!

And most heavily-built things are pretty warped; it's usually a safe default assumption. Doesn't make heavily-built things pointless, that is not what I'm getting at.

...but stuff that hews close to base-reality has the important distinction of surviving most cataclysms basically-intact, and robustness is a virtue that works in their favor.

Comment by Spiracular on Spiracular's Shortform Feed · 2021-11-08T18:37:26.949Z · LW · GW

I do think some things are actually quite real and grounded? Everything is shoved through a lens as you perceive it, but not all lenses are incredibly warping.

If you're willing to work pretty close to the lower-levels of perception, and be quite careful while building things up, well and deeply-grounded shit EXISTS.


To give an evocative, and quite literally illustrative, example?

I think learning how to see the world well enough to do realistic painting is an exceptionally unwarping and grounding skill.

Any other method of seeing while drawing doubles up on your attentional biases and lets you see the warped result*. When you view it, you re-apply your lens to your lens' results, and see the square of any warping you were doing.

It's no coincidence that most people who try will take one look at their first attempt at realistic drawing, cringe, and go "that's obviously wrong..."

When you can finally produce an illustration that isn't "obviously wrong," it stands as a piece of concrete evidence that you've learned some ability to engage at-will with your visual-perception, in a way that is relatively non-warping.

Or, to math-phrase it badly...

* Taking as a totally-unreasonable given, that your "skill at drawing" is good enough to not get in the way.

Comment by Spiracular on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-08T18:27:13.878Z · LW · GW

Now to actually comment...

(Ugh, I think I ended up borderline-incoherent myself. I might revisit and clean it up later.)

I think it's worth keeping in mind that "common social reality" is itself sometimes one of these unstable/ungrounded top-heavy many-epicycles self-reinforcing collapses-when-reality-hits structures.

I am beyond-sick of the fights about whether something is "erroneous personal reality vs social reality" or "personal reality vs erroneous social reality," so I'm going to leave simulating that out as an exercise for the reader.

loud sigh

Jumping meta, and skipping to the end.

Almost every elaborate worldview is built on at least some fragile low-level components, and might also have a few robustly-grounded builds in there, if you're lucky.

"Some generalizable truth can be extracted" is more likely to occur, if there were incentives and pressure to generate robust builds.*

* (...God, I got a sudden wave of sympathy for anyone who views Capitalists and Rationalists as some form of creepy scavengers. There is a hint of truth in that lens. I hope we're more like vultures than dogs; vultures have a way better "nutrition to parasite" ratio.)


By pure evolutionary logic: whichever thing adhered closer to common properties of base-reality, and/or was better-trained to generalize or self-update, will usually hold up better when some of its circumstances change. This tends to be part of what boils up when worldview conflicts and cataclysms play out.

I do see "better survival of a worldview across a range of circumstances" as somewhat predictive of attributes that I consider good-to-have in a worldview.

I also think surviving worldviews aren't always the ones that make people the happiest, or allow people to thrive? Sometimes that sucks.

(If anyone wants to get into "everything is all equally-ungrounded social reality?" No. That doesn't actually follow, even from the true statement that "everything you perceive goes through a lens." I threw some quick commentary on that side-branch here, but I mostly think it's off-topic.)

Comment by Spiracular on Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation · 2021-11-08T17:47:21.079Z · LW · GW

On the one hand, I think this is borderline-unintelligible as currently phrased? On the other hand, I think you have a decent point underneath it all.

Let me know if I'm following, while I try to rephrase it.


When insulated from real-world or outer-world incentives, a project can build up a lot of internal-logic and inferential distance by building upon itself repeatedly.

The incentives of insulated projects can be almost artificially-simple? So one can basically Goodhart, or massage data and assessment-metrics, to an incredible degree. This is sometimes done unconsciously.

When such a project finally comes into contact with reality, this can topple things at the very bottom of the structure, which everything else was built upon.

So for some heavily-insulated, heavily-built, and not-very-well-grounded projects, finally coming into exposure with reality can trigger a lot of warping/worldview-collapse/fallout in the immediate term.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-11-08T05:57:50.945Z · LW · GW

My impression is that Leverage's bodywork is something closer to what other people call "energy work," which probably puts it... closer to Reiki than massage?

But I never had it done to me, and I don't super understand it myself! Pretty low confidence in even this answer.

Comment by Spiracular on Speaking of Stag Hunts · 2021-11-07T03:21:50.028Z · LW · GW

Hm... I notice I'm maybe feeling some particular pressure to personally address this one?

Because I called out the deliberate concentration of force in the other direction that happened on an earlier copy of the BayAreaHuman thread.


I am not really recanting that? I still think something "off" happened there.

But I could stand up and give a more balanced deposition.

To be clear? I do think BAH's tone was a tad aggressive. And I think there were other people in the thread, who were more aggressive than that. I think Leverage Basic Facts EA had an even more aggressive comment thread.

I also think each of the concrete factual claims BAH made, did appear to check out with at least one corner of Leverage, according to my own account-collecting (although not always at the same time).

(I also think a few of the LBFEA's wildest claims, were probably true. Exclusion of the Leverage website from Wayback Machine is definitely true*. The Slack channels characterizing each Pareto attendee as a potential recruit, seems... probably true?)

There were a lot of corners of Leverage, though. Several of them were walled off from the corners BAH talked about, or were not very near to it.

For what it's worth, I think the positive accounts in the BAH comment thread were also basically honest? I up-voted several of them.

Side-note: As much as I don't entirely trust Larissa? I do think part of her is at least trying to hold the fact that both good and bad things happened here. I trust her thoughts, more than Geoff's.

* Delisted from Wayback: The explanation I've heard, is that Geoff was sick of people dragging old things up to make fun of the initial planning document, and critiquing the old Connection Theory posts.


I am also dead-certain that nobody was going into the full story, and some of that was systematic. "BAH + commentary" put together, still doesn't sum to enough of the whole truth, to really make sense of things.

Anna & Geoff's initial twitch-stream included commentary about how Leverage used to be pretty friendly with EA, and ran the first EAG. Several EA founders felt pretty close after that, and then there was some pretty intense drifting apart (partially over philosophical differences?). There was also some sort of kerfuffle where a lot of people ended up with the frame that "Leverage was poaching donors," which may have been unfair to Leverage. As time went on, Geoff and other Leveragers were largely blocked from collaborations, and felt pretty shunned. That all was an important missing piece of the puzzle.

((Meta: Noticing I should add this to Timeline and Threads somewhere? Doing that now-ish.))

(I also personally just really liked Anna's thoughts on "narrative addiction" being something to watch out for? Maybe that's just me.)

The dissolution & information agreement was another important part. Thank you, Matt Falshaw, for putting some of that in a form that could be viewed by people outside of the ecosystem.

I also haven't met anybody except Zoe (and now me, I guess?) who seems to have felt able to even breathe a word about the "objects & demons" memetics thing. I think that was another important missing piece.

Some people do report feeling incapable of speaking positively about Leverage in EA circles? I personally didn't experience a lot of this, but I saw enough surprise when I said good things about Reserve, that it doesn't surprise me particularly. Leverage's social network and some of its techniques were clearly quite meaningful to some people, so I can imagine how rough 'needing to write that out of your personal narrative' could have been.

Comment by Spiracular on Speaking of Stag Hunts · 2021-11-07T02:21:15.467Z · LW · GW

"three people... would like to say positive things about their experience at Leverage Research, but feel they cannot":

Oof. I appreciate you mentioning that.

(And a special note of thanks, for being willing to put down a concrete number? It helps me try to weigh it appropriately, while not compromising anonymity.)


Navigating the fact that people seem to be scared of coming forward on every side of this, is hard. I would love advice on how to shape this thread better.

If you think of something I can do to make talking about all of {the good, the bad, the neutral, the ugly, and the complicated}, easier? I can't guarantee I'll agree to it, but I really do want to hear it.

Please feel free to reach out to me on LW, anytime in the next 2 months. Not just Duncan, anyone. Offer does expire at start of January, though.

I am especially interested in concrete suggestions that improve the Pareto Frontier of reporting, here. But I'm also pretty geared up to try to steelman any private rants that get sent my way.

(In this context? I have already been called all of "possessed, angry, jealous, and bad with secrets." I was willing to steelman the lot, because there is a hint of truth in each, although I really don't think any of them are the clearest lens available. If you can be kinder than that, then you're already doing better than the worst baseline that I have had to steelman here.)


P.S. I recognize it is easy to cast me as being on "the other side?" It's an oversimplification, and I'd love to have a more balanced sense of what the hell happened. But I also don't want my other comments to come as a late surprise, to anyone who is already a bit spooked.

So, my personal story is in here, along with some of my current sense-making.

Also, to people who really only want to talk about their story privately, with whoever it is that you trust? That's valid, and I hope you're doing okay.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-11-01T20:31:47.633Z · LW · GW

While I realize I've kinda de-facto "taken a side" by this point (and probably limited who will talk to me as a result)? I was mispronouncing Geoff's name, before this hit; this is pretty indicative of how little I knew him personally. I started out mostly caring about having the consequences-for-him be reached based off of some kind of reasonable assessment, and not caring too much about having it turn out one way or another. I still feel more invested in there being a good process, and in what will generate the best outcomes for the people who worked under him (or will ever work under him), than anything else.

Compared to Brent's end-result of "homeless with health-problems in Hawaii" **? The things I've asked for have felt mild. But I also knew that if I wasn't handling mentioning them, somebody else probably would. In my eyes, we probably needed someone outside of the Leverage ecosystem who knew a lot of the story (despite the substantial information-hiding efforts) to be handling this part of the response.

Pushing for people to publish the information-hiding agreement, and proposing that Geoff maybe shouldn't have a position with a substantial amount of power over others (at least while we sort this out), felt to me like fairly weaksauce requests. I am still a bit surprised that Geoff may have taken this as a convincing audition for a "prosecutor" role? I am angry and clued-in enough to sincerely fill the role, if somebody has to and if nobody else will touch it. But it still surprised me, because it is not what I see as my primary responsibility here.

**Despite all his flaws and vices? I was close to Brent. I do care about Brent, and I wouldn't have wished that for him.

Comment by Spiracular on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-11-01T20:26:07.903Z · LW · GW

I'm finally out about my story here! But I think I want to explain a bit of why I wasn't being very clear, for a while.

I've been "hinting darkly" in public rather than "telling my full story" due to a couple of concerns:

  1. I don't want to "throw ex-friend under the bus," to use their own words! Even friend's Leverager partner (who they weren't allowed to visit, if they were "infected with objects") seemed more "swept-up in the stupidity" than "malicious." I don't know how to tell my truth, without them feeling drowned out. I do still care about that. Eurgh.

  2. Via models that come out of my experience with Brent: I think this level of silence makes the most sense if some ex-Leveragers did get a substantial amount of good out of the experience (sometimes with none of the bad, sometimes alongside it), and/or if there were a lot of regrettable actions taken by people who were swept up in this at the time, people who would ordinarily be harmless under normal circumstances. I recognize that bodywork was very helpful to my friend, in working through some of their (unrelated) trauma. I am more than a little reluctant to put people through the sort of mob-driven invalidation I felt, in the face of the early intensely-negative community response to the Brent expose?

Surprisingly irrelevant for me: I am personally not very afraid of Geoff! Back when I was still a nobody, I brute-forced my way out of an agonizing amount of social-anxiety through sheer persistence. My social supports range both wide and deep. I have pretty strong honesty policies. I am not currently employed, so even attacking my workplace is a no-go. I'm planning to marry someone cool this January. Truth be told? I pity any fool who tries to character-assassinate me.

...but I know that others are scared of Geoff. I have heard the phrase "Geoff will do anything to win" bandied about so often, that I view it as something of a stereotyped phrase among Leveragers. I am honestly not sure how concerned I actually should be about it! But it feels like evidence of a narrative that I find pretty concerning, although I don't know how this narrative emerged.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-11-01T20:04:43.858Z · LW · GW

Threads Roundup

  • Several things under the LW Leverage Tag
  • Leverage Basic Facts EA Post & comment thread
    • I discovered this one a little late? Still flipping through it.
  • BayAreaHuman LW Post
    • By now, I have been able to confirm every single concrete point made in that post seems true or reasonable, to myself or at least one of my contacts (not always two). The tone is slightly-aggressive, but seems generally truth-seeking, to me.
    • I think it leans more towards characterizing dysfunctional late-L1, than early-L1? But not strictly.
    • Someone, probably Geoff (it's apparently the kind of thing he does, confirmed by 2+ people), sent out emails to friends of Leverage framing it as an unwarranted attack and encouraging them to flood the comment thread with their positive experiences.
      • I do not like that he did this! I know someone else, who intends to write up something more thorough about this. But if they don't, I am likely to comment on it myself, after saving evidence and articulating my thoughts.
      • EDIT: I do think a lot of the positive accounts are honest! I am not accusing any commenter of lying. My concern here is selective reporting, and something of a concentration of force dynamic that I believe may have been invoked deliberately in a way that I do not trust as truth-seeking.
    • Matt Falshaw's recent email mentioned non-Zoe people writing "disingenuous and deliberately misleading" posts in the past? If that was meant to implicate BAH, then I think it was being a bit "disingenuous and deliberately misleading."
  • Zoe's Medium Post
    • I buy it! I was willing to chime in in its favor, from early on
    • Late-Leveragers seem to have conceded that it is a valid personal recounting
    • In case this changes location: LW comment thread on it
  • Geoff Anders Twitch Streams
    • Stream 1
      • Included some relevant backstory on the rift with EA, which probably also belongs in a timeline.
      • Audio was recovered, and there's a transcript here of the second half for the less audio-inclined.
      • Geoff's initial twitch-stream (with Anna Salamon) included commentary about how Leverage used to be pretty friendly with EA, and ran the first EAG. Several EA founders felt pretty close after that, and then there was some pretty intense drifting apart (partially over philosophical differences?). There was also some sort of kerfuffle where a lot of people ended up with the frame that "Leverage was poaching donors," which may have been unfair to Leverage. As time went on, Geoff and other Leveragers were largely blocked from collaborations, and felt pretty shunned.
        • Some decent higher-detail text summaries here and here.
        • TekhneMakre started a thread with some good additional thoughts, here
  • Some Press Releases from Leverage
    • A letter from the Executive Director on Negative Past Experiences: Sympathy, Transparency, and Support
      • Commits to:
        • "reimburse any employee of any organization in the Leverage research collaboration for expenditures they made on therapy" (w/ details)
        • "we will share information about intention research in the form of essays, talks, podcasts, etc., so as to give the public greater context on this area of our past research"
        • Sets up 4 intermediaries (to ease coming forward with accounts, in cases of distrust)
          • EDIT: Names Anna Salamon, Eli Tyre, Matthew Graves, and Matt Falshaw as several somewhat-intermediary people who can be contacted.
        • "Leverage Research will thus seek to resolve the current conflict as definitively as possible, publicly dissociate from the Rationalist community, and take actions to prevent future conflict"
      • Overall, I found this one pretty heartening
    • Ecosystem Dissolution Agreement
      • The socially-enforced NDA-like from the end of Leverage 1.0
      • EDIT: For a sense of Leverage's information-suppression policy in prior years, here is the Basic Information Management Checklist from 2017
    • Leverage 1.0 Ecosystem information sharing and initial inquiry
      • Email from Matt Falshaw on Oct 19
    • ETA: Essay On Intention Research
      • This essay seemed really well done, overall.
        • Outlined the history of the research clearly. Seemed pretty good at sticking to fairly grounded descriptions, especially given the slipperiness of the subject matter. Tried to provide multiple hypotheses of what could be happening, and remained open to explanations nobody has come up with yet. This has been a tricky topic for people to describe, and I suspect he handled it well.
      • Mostly gives a history of Intention Research, a line of inquiry that started out poking at energywork and bodywork (directing attention with light touch), got increasingly into espousing detailed reads of each other's nonverbals, and which eventually fed into some really awful interpersonal dynamics that got so bad that Leverage 1.0 was dissolved to defuse it.
      • Warnings are at the end. My sole complaint with the writing is that I wish they were outlined earlier.
    • ETA: Public Report on Inquiry Findings: Factors and Mistakes that Contributed to a Range of Negative Experiences on Our 2011-2019 Research Collaboration
      • I thought this was quite good. Reading this raised my esteem for Matt Falshaw.
      • I do think this accurately characterized a lot of the structural problems, and leaves me more optimistic that Leverage 2.0 will avoid those. If you are interested in the details of that, I recommend reading it.
        • I don't think all of the problems were structural? But a lot of them were, and the ones that weren't were often exacerbated by structural things. Putting the focus on fixing things at that layer looks like a reasonable choice.
      • 3-5 people with extremely negative experiences and perspectives, out of something like 45 people, does sound plausible to me.
      • Something I felt wasn't handled perfectly: The refusal of people with largely-negative experiences to talk with investigators reads to me as an indicator of past loss or breach of trust. And while their absence is gestured at, I did feel like the significance of this tended to get downplayed more than I would have liked.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-11-01T20:03:23.680Z · LW · GW

Timeline Additions

I rather liked the idea of making a timeline!

Geoff has a short doc on the timing of changes in org structure, but it currently doesn't include much else.

Depending on how discussion here goes, I might transfer/transform this into its own post in the future. Will link them, if so.


Preamble

Nobody has talked much in public about the most dysfunctional things, yet? I am going to switch strategies out of dark-hinting and anonymity at this point, and put my cards down on the table.

This will be a sketch of the parts of this story that I know about. I do not have exact dates, and these are just broad-strokes of some of the key incidents here.

And not all of these are my story to tell? So sometimes, I will really only feel comfortable providing the broad-strokes.

(If someone has a better 2-3 sentence summary, or the full story, for some of these? Do chime in.)

These are each things I feel pretty solid about believing in. I think these incidents belong somewhere on any good consensus-timeline, but are not the full set of relevant events.

(I only have about 3-6 relevant contacts at the moment, but I've gotten at least 2 points of confirmation on each of these. It was not in this exact wording, though.)


Timeline Pieces

  • Early L1-present: Leverage has always had weird levels of PR-protective secret-keeping stuff, as far back as I can remember (~2015)
    • I believe this was probably true since before they had anything worth hiding? Not confident in that, though.
  • Early L1: Leverage runs first EA Global (2013)
    • It brought a lot of important early EA people together, and a lot of them left it on friendlier terms
    • They also ran the 2014 one, but the character was pretty different; less "people crowded in a single house," more "conference + summit"
  • Early-L1: Panel discussion between Geoff & Anna Salamon (2014)
    • Brought up a few of Geoff's non-materialist views
    • Seems to have been some sort of turning-point that deepened a rift between Geoff and Leverage vs the Rationalist and EA philosophies and communities
  • ???: Sense of a developing rift between Leverage and EA
    • Started out relatively friendly, in first EAG era
    • Drifted apart due to a mix of philosophical differences and some growing anti-Leverage reputational dynamics
    • Geoff and other Leveragers were largely blocked from collaborations with EA orgs, and felt pretty shunned
  • Mid-L1: Attempt by 1-2 Leverage-aligned people to use CEA as a Leverage recruitment vehicle. This escalated for a while, and eventually became such a problem that they were rebuffed and fired.
    • Geoff was aware of this
    • Oli was pretty burned out, scared, and depressed during this period. The Leverage drama contributed to that, although it was not the sole reason.
    • I was dating Oli (habryka). Oli didn't give me very much detail for a long time, but I could pick up on the fact that he was scared by this component of it, and he did tell me some of it eventually.
    • I have not seen Oli get this scared very often? So this does feel almost-personal for me, and I get pretty incensed about this.
  • Late L1: While worried about defunding, different psychology factions did things to each other that... bordered on psychological warfare?
    • Some nebulous mix of "actually fucking with each other," "claiming/convincing-self that the other faction fucked with them," and "trying to enforce unreasonable/unhealthy norms on each other."
    • Different factions and splinter-groups often had different degrees and specifics of dysfunction.
    • (I'd like a more kosher wording for this! But I'm not sure how else to express just how bad it got.)
  • Late L1/post-L1: At least one faction freaked the hell out about social contagion, and engaged in a lot of dysfunctional pressuring of others as a result
    • I happen to think that social contagion is a useful model? But that they failed to factor in enough "it all adds up to normality," and got pretty arrogant about their models and personal competence, in a way that did not end well.
  • End of L1: Leverage 1.0 was dissolved
    • My personal take is that dissolving and restructuring was overall a good move
    • As much as I sympathize with the intent, I am not always a big fan of the legacy and specifics of the information-hiding agreement (I'll pick that fight later, though.)
  • post-L1: A few non-Leverage people who were close to someone in one of the psychology factions, experienced some nasty second-degree fallout drama when their ex-Leverager friend started claiming they were {infected with objects, had attacked the Leverager with their aura, etc.}.
    • This is the short version of my story? I experienced one of these second-degree* echoes. See "My Story" below.
    • I reported the 10-second-version of my story to Anna and MattF in an anonymized text snippet, shortly after Zoe posted.
    • I have discovered others who were affected by some second-degree fallout drama, but the exact stories differ.

My Story

I was sworn into an intense secrecy agreement ("do not tell anyone, even if they will get hurt if they don't know"), and told by a friend that I "contained an object" ("object" is basically their confusing term for a psychologically-unhealthy memetic thing). The Ziz/CFAR incident hit the same damn day. I responded by requesting an outside-view sanity-check from Oli**. I told my friend that I'd told someone, as soon as I got back, and shit hit the fan.

I sometimes call my incident the "quarantine-before-quarantine?" I responded to things like "getting told that I'm not allowed to email partner-of-friend because that gave partner an object" and "friend vents that they can't visit friend's-partner, because they caught an object from me, and now they have to wait until someone has an opening for about an hour of bodywork" and "friend says they felt me attack them, while I was just eating breakfast" by generating and following monotonically-increasing explicit rules-of-separation, until we were living in the same house but were not allowed to talk or even be in the same room together. We both moved out a month later. The whole thing is a long story, but basically, friend and I had a gigantic fallout as a result of all of this.

I was mum for a long time about everything except "I broke a major secrecy agreement" and "friend & I are not even able to calmly co-exist in the same house anymore," because the friend had made it clear that talking honestly about any of this would render to them as "throwing them under the bus." I do genuinely still care about their well-being.

If you know who I am talking about, do not reveal it publicly and please be nice to them. What they went through was even worse than what I experienced.

Same goes for friend's-partner, who always struck me as swept-up and misguided, not malicious. They were the route by which this insane frame reached me, but I genuinely believe that they meant well. There were even times when the partner was more charitable towards me, than ex-friend was.

I distantly wish both of them well. I also do not wish to speak privately with either of them about this, at present.

On Centers of Dysfunction

In terms of clusters-of-dysfunction:

  • I think early-L1 was generally less-dysfunctional, although the culture had many of what I would think of as "risk-factors."
  • The worst stuff seems to have reached a head right near the end of L1?
  • Reserve was one of the more-functional corners, and was basically fine.
    • I worked at Reserve for a while; you could pick up on some of the "taste of Leverage" from that distance, but it never appears to have escalated to anything seriously dysfunctional.
    • For example: my worst complaint is that C was weirdly-intense about not granting me access to the #general Slack channel, even though that could get in the way of doing my job sometimes
  • In general, what I've seen seems consistent with some ex-Leveragers getting a substantial amount of good out of the experience. Sometimes with none of the bad, sometimes alongside it.
    • ex: I recognize that bodywork was very helpful to my ex-friend, in working through some of their (unrelated) trauma. Many people have reported good experiences with belief-reporting, and say they found it useful.
  • I also think there were a lot of regrettable actions taken by people who were swept up in this at the time, people who would ordinarily be harmless under normal circumstances.
    • It can be hard to judge this, especially from where I am? But as bad as the things that happened were, I think this is broadly true of most of the people involved.
  • I do not want to put people through the sort of mob-driven invalidation, that I once felt.
    • I was once friends with Brent. I still care about his well-being. There are times where I was under a lot of pressure to write that out of my personal narrative, but it was ultimately healthier for me that I chose to keep it.
    • I hope that those with stories about Leverage that are different from mine, feel the right to lay claim to the positives of their experiences, as well as the negatives.

Footnotes

* Technically, my friend was dating an ex-Leverager. So I actually got a third-degree burn.

** I told Oli something to the effect of "Mental illness as social contagion theory; claimed to be spread highly-effectively through circling. Not sure if Ziz incident may be an instance? If there's another psychotic break within 1 month, boost likelihood of this being true. If there's not, please update downward on this model.*** Pieces of Leverage's model of social contagion did not match my own theory of social contagion, and I'm not entirely confident who is in the right, here? Also, this one may have come out of Leverage, but they say it was an accident."

(...that is probably roughly everything I said? It was succinct, in part because I was taking the possibility that I had caught something, seriously. In my theory of social contagion, bandwidth really matters. Some Leveragers behaved in a way that implied thinking bandwidth mattered less, and this was one of the first things (of several) that struck me as insane about their lens on it.)

*** Ziz turned out to be already-crazy as a baseline. There were no psychosis episodes that month from anyone else. I asked around, and I do not believe Ziz had any strong connection to Leverage at all, but especially not in that time-period.

Comment by Spiracular on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T14:50:02.972Z · LW · GW

* I could tell that this had some concerning toxic elements, and I needed an outside sanity-check. I think under the circumstances, this was the correct call for me. I do not regret picking the particular person I chose as a sanity-check. I am also very sympathetic to other people not feeling able to pull this, given the enormous cost to doing it at the time.

This is not a strong systematic assessment of how I usually treat privacy agreements. My harm-assessment process is usually structured a bit like this, with some additional pressure from an "agreement-to-secrecy," and also factors in the meta-secrecy-agreements around "being able to be held to secrecy agreements" and "being honest about how well you can be held to secrecy agreements."

No, I don't feel like having a long discussion about privacy policies right now. But if you care? My thoughts on information-sharing policy were valuable enough to get me into the 2019 Review.

If you start on this here, I will ignore you.

Comment by Spiracular on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T14:23:57.643Z · LW · GW

I agree that Leverage has been unusually hard to talk about bluntly or honestly, and I think this has been true for most of its existence.

I also think the people at the periphery of Leverage, are starting to absorb the fact that they systematically had things hidden from them. That may be giving them new pause, before engaging with Leverage as a topic.

(I think that seems potentially fair, and considerate. To me, it doesn't feel like the same concern applies in engaging about CFAR. I also agree that there were probably fewer total people exposed to Leverage, at all.)


...actually, let me give you a personal taste of what we're dealing with?

The last time I chose to talk straightforwardly and honestly about Leverage, with somebody outside of it? I had to hard-override an explicit but non-legal privacy agreement*, to get a sanity check. When I was honest about having done so shortly thereafter, I completely and permanently lost one of my friendships as a result.

Lost-friend says they were traumatized as a result of me doing this. That having "made the mistake of trusting me" hurt their relationships with other Leveragers. That at the time, they wished they'd lied to me, which stung.

I talked with the person I used as a sanity-check recently, and I get the sense that I still only managed to squeeze out ~3-5 sentences of detail at the time.

(I get the sense that I still did manage to convey a pretty balanced account of what was going through my head at the time. Somehow.)


It is probably safer to talk now, than it was then. At least, that's my current view. Two years' distance, community support, a community that is willing to be more sympathetic to people who get swept up in movements, and a taste of what other people were going through (and that you weren't the only person going through this), do tend to help matters.

(Edit: They've also shared the Ecosystem Dissolution Information Arrangement, which I find a heartening move. They mention that it was intended to be more socially-enforced than legally-binding. I don't like all of their framing around it, but I'll pick that fight later.)

It wouldn't surprise me at all, if most of this gets sorted out privately for now. Depending a bit on how this ends (largely on whether I think this kind of harm is likely to recur or not), I might not even have an objection to that.

But when it comes to Leverage? These are some of the kinds of thoughts and feelings, that I worry we may later see played a role in keeping this quiet.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-18T17:24:08.268Z · LW · GW

Meta: I think it makes some good points. I do not think it was THAT bad, and I think the discussion was good. I would keep it up, but it's your call. Possibly adding an "Edit: (further complicated thoughts)" at the top? (Respect for thinking about it, though.)

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-18T17:05:08.195Z · LW · GW

Since it's mostly just pointers to stuff I've already said/implied... I'll throw out a quick comment.

I would like it if somebody started something like a carefully-moderated private Facebook group, mostly of core people who were there, to come to grips with their experiences? I think this could be good.

I am slightly concerned that people who are still in the grips of "Leverage PR campaigning" tendencies, will start trying to take it over or otherwise poison the well? (Edit: Or conversely, that people who still feel really hurt or confused about it might lash out more than I'd wish. I personally, am more worried about the former.) I still think it might be good, overall.

Be sure to be clear EARLY about who you are inviting, and who you are excluding! It changes what people are willing to talk about.

...I am not personally the right person to do this, though.

(It is too easy to "other" me, if that makes sense.)


I feel like one of the only things the public LW thread could do here?

Is ensuring public awareness of some of the unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms, and showing a public ramp-down of opportunities to do so in the future.

Along with doing what we can, to signal that we generally stand against people over-simplistically demonizing the people and organizations involved in this.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-18T15:16:39.558Z · LW · GW

My current sense? Is that both Unreal and I are basically doing a mix of "take an advocate role" and "using this as an opportunity to get some of what the community got wrong last time -- with our own trauma -- right." But for different roles, and for different traumas.

It seemed worth being explicit and calling this out. (I don't necessarily think this is bad? I also think both of us seem to have done a LOT of "processing our own shit" already, which helps.)

But doing this is... exhausting for me, all the same. I also, personally, feel like I've taken up too much space for a bit. It's starting to wear on me in ways I don't endorse.

I'm going to take a step back from this for a week, and get myself to focus on living the rest of my life. After a week, I will circle back. In fact, I COMMIT to circling back.


And honestly? I have told several people about the exact nature of my Leverage trauma. I will tell at least several more people about it, before all of this is over.

It's not going to vanish. I've already ensured that it can't. I can't quite commit to "going full public," because that might be the wrong move? But I will not rest on this until I have done something broadly equivalent.

I am a little bit scared of some sort of attempts to undermine me emerging as a consequence, because there's a trend in even the casual reports that leans in this direction? But if it happens, I will go public about THAT fact.

I am a lot less scared of the repercussions than almost anyone else would be. So, fuck it.

(But also? My experience doesn't necessarily rule out "most of the bad that happened here was a total lack of guard-rails + culty death-spirals." It would take some truly awful negligence to have that few guard-rails, and I would not want that person running a company again? But still, just fyi. Yeah, I know, I know, it undercuts the drama of my last statement.)


But if anyone wonders why I vanished? I'm taking a break. That is what I'm doing.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-18T14:39:23.430Z · LW · GW

I see what you're doing? And I really appreciate that you are doing it.

...but simultaneously? You are definitely making me feel less safe to talk about my personal shit.

(My position on this is, and has always been: "I got a scar from Leverage 1.0. I am at least somewhat triggered; on both that level, and by echoes from a past experience. I am scared that me talking about my stuff, rather than doing my best to make and hold space, will scare more centrally-affected people off. And I know that some of those people, had an even WORSE experience than I did. In what was, frankly, a surreal and really awful experience for me.")

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-18T14:37:26.305Z · LW · GW

I basically agree with this.

But also, I think pretty close to ZERO people who were deeply affected (aside from Zoe, who hasn't engaged beyond the post) have come forward in this thread. And I... guess we should talk about that.

I know from firsthand, that there were some pretty bad experiences in the incident that tore Leverage 1.0 apart, which nobody appears to feel able to talk about.

I am currently not at all optimistic that we're managing to balance this correctly? I also want this to go right. I'm not quite sure how to do it.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-18T01:02:40.507Z · LW · GW

I appreciate this too. I think this form of push-back, is a potentially highly-productive one.

I may need to think for a bit about how to respond? But it seemed worth expressing my appreciation for it, first.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-17T21:11:36.658Z · LW · GW

Meta-note: I tried the longer-form gentler one? But somebody ELSE complained about that structure.

(A piece of me recognizes that I can't make everybody happy here, but it's a little annoying.)

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-17T21:03:19.172Z · LW · GW

For whatever it's worth, I think "No" is a pretty acceptable answer to some of these.


"No, for reasons X, Y, Z" is a pretty ordinary answer to the NDA concern. I'd still like to see that response.

"Leverage 2.0 was deliberately structured to avoid a lot of the drawbacks of Leverage 1.0" is something I actually think is TRUE. The fact that Leverage 1.0 was sun-setted deliberately, is something that I thought actually reflected well on both Geoff and the people there.

I think from that, an argument could be made that stepping down is not necessary. I can't say I would necessarily agree with it, but I think the argument could be made.


Most of my stance, is that currently most people are too SCARED to talk. And this is actually really worrying to me.

I don't think "introducing a mediator," who would be spending about half of their time with Geoff --the epicenter of a lot of that fear-- would actually completely solve all of that problem. It would surprise me a lot if it worked here.


My #1 most desired commitment, right now? Is actually #3, and I maybe should have put it first.

A commitment to, in the future, not go after people and especially not to threaten them, for talking about their experiences.

That by itself, would be quite meaningful to me.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-17T19:58:16.878Z · LW · GW

When I said "last sub-point?"

I was referring to "make any strong stealthy attempts to socially discredit people," not "threaten" (by which I mean, "threaten").

I was deliberately treating "no threats" as minimum, and "no strong social pressure" as extra-credit.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-17T16:40:05.389Z · LW · GW

That last sub-point is a little vague, so let me clarify my personal cut-off on this. Others may disagree.

I wouldn't object to seeing the occasional brief overt statement coming directly from Geoff that his recollection doesn't match someone else's interpretation.

I would object to any further encouragement of things that resemble the "strong, repeated pressure by someone close to Geoff to have the post marked as flawed" that Ruby described.

Consistently denouncing the latter going forward, would be very helpful.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-17T15:51:57.597Z · LW · GW

Edit: I got a request to cut the chaff and boil this down to discrete actionables. Let me do that.

  1. Will you release everyone from any NDAs?

  2. Will you step down from any management roles (e.g. Leverage and Paradigm)?

  3. Will you state for the record, that you commit to not threaten* anyone who comes forward with reports that you do not like, in the course of this process?

I get the sense that you have made people afraid to stand against you, historically. Engaging in any further threats, seems likely to impede all of our ability to make sense of, and come to terms with, whatever happened. It could also be quite incriminating on its own.

* For full points, commit to also not make any strong stealthy attempts to socially discredit people.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-17T14:51:26.591Z · LW · GW

I recognize it took some courage to talk about this in the first place, and I don't want to discount that. I am glad that you said something.

...but I also don't want to lose track of this thread.

Edit: I got a request to boil this down, so I separated it to that thread.

And reading the room? I think there is, broadly speaking, a lot of fear of you. And I think part of why that is true, is because you cultivated that.

You have noticed that you made some errors which blinded you to the consequences of some of your actions, and I think that's a good start? I hope you might be able to agree with me that this attitude of fear, is probably blinding you to the reporting of any further harms.

I recognize processing takes time, and there hasn't been a lot of time yet. But also, I think somebody needed to say this to your face, and it might as well be me.

How do you want to help wind down this aura of fear, which I think is still blinding not just most of us, but also YOU, to a lot of the full reality of what happened?

(And it might well be, that you will help with this by saying almost nothing and going after no-one. But if so? I think it would help, if you briefly committed to that outright.)

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-17T14:51:11.248Z · LW · GW

I appreciate hearing from you about some of what you probably got wrong.

I'm pretty sure that a lot of this started out relatively benignly, and spiraled?

I agree with your impression that arrogance was at least one of several pressures that made it hard to see that things were going in a bad direction. A lot of invisible guard-rails were dropped or traded away over time, and the absence of a certain amount of reality-checking made it very hard to fix after things had veered off the rails.

I hope your account contributes to making people less likely to make similar errors in the future.

(I would also be very unhappy, if I ever saw you having a substantial amount of power over people again though, fwiw.)

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-17T13:38:13.144Z · LW · GW

I want to throw out that while I am usually SUPER on team "explicit communication norms", the rule-nuances of the hardest cases might sometimes work best if they are a little chaotic & idiosyncratic.

I personally think there might be something mildly-beneficial and protective, about having "adversarial case detected" escape-clauses that vary considerably from person-to-person.

(Otherwise, a smart lawful adversary can reliably manipulate the shit out of things.)

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-16T17:51:21.616Z · LW · GW

I also have a deep appreciation, for Zoe calling out that different corners of Leverage had very different experiences with it. Because they did! Not all time-slices or sub-groups within it experienced the same problems.

This is probably part of why it was so easy, to systematically play people's personal experiences against each other: Since Geoff knew the context through which Leverage was experienced, he or others could systematically bias whose reports were heard.

(Although I think it will be harder in the future to engage in this kind of bullshit, now that a lot of people are aware of the pattern.)


To those who had one of the better firsthand experiences of Leverage:

I am still interested in hearing your bit! But if you are only engaging with this due to an inducement that probably includes a sampling-bias, I appreciate you including that detail.

(And I am glad to see people in this broader thread, being generally open about that detail.)

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-16T17:50:26.010Z · LW · GW

Since it sounds like just-upvotes might not be as strong a signal of endorsement as positive engagement...

I want to say that I really appreciate and respect that you were willing to come forward, with facts that were broadly-known in your social graph, but had been systematically excluded from most people's models.

And you were willing to do this, in a pretty adversarial environment! You had to deal with a small invisible intellectual cold-war that ensued, almost alone, without backing down. This counts for even more.


I do have a little bit of sensitive insider information, and on the basis of that: Both your posts and Zoe's have looked very good-faith to me.

In a lot of places, they accord with or expand on what I know. There are a few parts I was not close enough to confirm, but they have broadly looked right to me.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-16T16:09:23.759Z · LW · GW

I was very up-front about the role I am attempting to embody in this: Relating to, and trying to serve, people with complicated opinions who are finding it hard to talk about this.

I feel we needed someone to take this role. I wish someone had done it for me, when my stuff happened.


You seem to not understand that I am making this statement, from that place and in that capacity.

Try seeing it through the lens of that, rather than thinking that I'm making confident statements about your epistemic creepiness.

Hopefully this helps to resolve your confusion.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-16T01:57:40.728Z · LW · GW

I get VERY creepy vibes from this proposal, and want to push back hard on it.

Although, hm... I think "lying" and "enemy action" are different?

Enemy action occasionally warrants breaking contracts back, after they didn't respect yours.

Whereas if there is ZERO lying-through-negligence in accounts of PERSONAL EXPERIENCES, we can be certain we set the bar-of-entry far too high.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-16T01:42:06.851Z · LW · GW

On mediators and advocates: I think order-of-operations MATTERS.

You can start seeking truth, and pivot to advocate, as UOC says.

What people often can't do easily is start with advocate, and pivot to truth.

And with something like this? What you advocated early can do a lot to color both what and who you listen to, and who you hear from.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-15T17:48:09.719Z · LW · GW

I hesitated a bit before saying this? I thought it might add a little bit of clarity, so I figured I'd bring it up.

(Sorry it got long; I'm still not sure what to cut.)

There are definitely some needs-conflicts. Between (often distant) people who, in the face of this, feel the need to cling to the strong reassurance that "this could not possibly happen to them"/"they will definitely be protected from this," and would feel reassured at seeing Strong Condemning Action as soon as possible...

...and "the people who had this happen." Who might be best-served, if they absorbed that there is always some risk of this sort of shit happening to people. For them, it would probably be best if they felt their truth was genuinely heard, and took away some actionable lessons about what to avoid, without updating their personal identity to "victim" TOO much. And in the future, embraced connections that made them more robust against attaching to this sort of thing in the future.

("Victim" is just not a healthy personal identity in the long-term, for most people.)


Sometimes, these needs are so different, that it warrants having different forums of discussion. But there is some overlap in these needs (working out what happened, improving reporting, protecting people from cultish reprisals), and I'm not sure that separation is always necessary.

My read of the direction Anna seems to be trying to steer this is "do everything she can to clearly hear out people's stories carefully First." Only later, after people have really really listened, use that to formulate carefully considered harm-reducing actions.

Understanding the issue, in all its complexity, before working on coming up with solutions? I feel pretty on-board with that.


...I admit, I initially chafed a bit? I have some memories of times Anna has leaned a bit more into the former group's needs. Some of her attempts to aim differently this time, have felt a little awkward.

I did also get an "ordering other people to ignore politics and be vulnerable" vibe off this, which put my armor up to around my ears. (Something with more of a feel of... "showing own vulnerability to elicit other's vulnerability," would have generally felt more natural to me? I think her later responses cycled to this, a little).

...but I'm starting to think that even the awkwardness, is its own sort of evidence? Of someone who is used to wielding frame control, trying to put it aside to listen. And I feel a lot of affection, in seeing it show that she's working on this.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-15T16:25:24.065Z · LW · GW

I was once in a similar position, due to my proximity to a past (different) thing. I kinda ended up excruciatingly sensitive, to how some things might read or feel to someone who was close, got a lot of good out of it (with or without the bad), and mostly felt like there was no way their account wouldn't be twisted into something unrecognizable. And who may be struggling, with processing an abrupt shift in their own personal narrative -- although I sincerely hope the 2 years of processing helped to make this less of a thing? But if you are going through it anyway, I am sorry.

And... I want this to go right. It didn't go right then; not entirely. I think I got yelled at by someone I respect, the first time I opened up about it. I'm not quite sure how to make this less scary for them? But I want it to be.

The people I know who got swept up in this includes some exceptionally nice people. There is at least one of them, who I would ordinarily call exceptionally sane. Please don't feel like you're obligated to identify as a bad person, or as a victim, because you were swept up in this. Just because some people might say it about you, doesn't make it who you are.

Comment by Spiracular on Zoe Curzi's Experience with Leverage Research · 2021-10-15T16:24:21.913Z · LW · GW

I will talk about my own bit with Leverage later, but I don't feel like it's the right time to share it yet.

(But fwiw: I do have some scars, here. I have a little bit of skin in this one. But most of what I'm going to talk about, comes from analogizing this with a different incident.)

A lot of the position I naturally slide into around this, which I have... kind of just embraced, is of trying to relate hard to the people who:

  • WERE THERE
  • May have received a lot of good along with the bad
  • May have developed a very complicated and narratively-unsatisfying opinion because of that, which feels hard to defend
  • Are very sensitized to condemning mob-speak. Because they've been told, again and again, that anything good they got out of the above, will be swept out with the bathwater if the bad comes to light.
    • This sort of thing only stays covered up for this long, if there was a lot of pressure and plausible-sounding arguments pointing in the direction of "say nothing." The particular forms of that, will vary.
    • Core Leverage seems pretty willing to resort to manipulation and threats? And despite me generally trying so hard to avoid this vibe: I want to condemn that outright.
    • Also, in any other circumstance: Most people are very happy to condemn people who break strong secrecy agreements that they've made. If you feel like you've made one, I recognize that this is not easy to defy.
      • (My own part in this story is small. The only reason I'm semi-comfortable with sharing it, is because I got all of my own "vaguely owning the fact that I broke a very substantial secrecy agreement, publicly, to all my friends" out of the way EARLY. It would be bogging me down like crazy, otherwise. I respect Zoe, and others, for defying comparable pulls, or even worse ones.)
        • If you're stuck on this bit, I would like to say: This is an exceptional circumstance. You should maybe talk to somebody, eventually. Maybe only once your own processing has settled down. Publicly might not be the right call for you, and I won't push for it. Please take care for yourself, and try to be careful to pick someone who is not especially prone to demonizing things.
  • People can feel their truth drowned out by mobs of uninvested people, condemning it from afar.
    • The people who know what happened here, are in the minority. They have the most knowledge of what actually happened, and the most skin in this. They are also the people with the most to fear, and the most to lose.

People often don't appreciate, how much the sheer numbers game can weigh on you. It can come to feel like the chorus is looming over you, in this sort of circumstance; poised, always ready to condemn you and yours from afar. Each individual member is only "speaking-their-truth" once, but in aggregate, they can feel like an army.

It's hard to keep appropriate sight of the fact that the weight of the people who were there, and their story, is probably worth 1000x as much as even the most coherent but distant and un-invested condemning statement. They will not get as many shares. It might not even qualify as a story! But their contributions are worth a lot more, at least in my mind. Because they were THERE.

And I... want to stick up for them where relevant? Because this one wasn't my incident, but I know how hard it might be for them to do it for themselves. I can't swear I will do a good job of it? But the desire is there.


I do think a more-private forum, that is enriched for people who were closer to the event, might be a more comfortable place for some people to recount. It's part of why I tried to talk up that possibility, in another thread.

...it is unfortunately not my place to make this, though. For various reasons, which feel quite solid, to me.

(And after Ryan's account? I honestly have some concerns about it getting infiltrated by one of the more manipulative people around Leverage. I don't want to discount that fear! I still think it might be a good idea?)

I do think we could stand to have a clearer route for things to be shared anonymously, because I suspect at least some people would be more comfortable that way.

(Since "attempts at deanonymization" appears to be a known issue, it may be worth having a flag for "only share as numeric aggregations of >1, using my recounting as a data-point.")

EDITEDIT: This press release names Anna Salamon, Eli Tyre, Matthew Graves, and Matt Falshaw as several somewhat-intermediary people who can be contacted. I feel fewer misgivings around contacting them, than I did around the proposal of contacting Geoff and Larissa to handle this internally.

Comment by Spiracular on Common knowledge about Leverage Research 1.0 · 2021-10-14T18:39:16.561Z · LW · GW

I had to read this a few times before I pieced it together, so I wanted to make sure to clarify this publicly.

You are NOT saying this public forum is the place for that. Correct?

You are proposing that it might be nice, if someone else pulled this together?

Perhaps as something like a carefully-moderated facebook group, or an event.

(I think this would require a good moderator, or it will generate more drama than it solves. It would have to be someone who does NOT have "Leverage PR firm vibes," and needs a lot of early clarity about who will not be invited. Also? Work out early what your privacy policy is! And be clear about how much it intends to be reports-oriented or action-oriented, and do not change that status later. People sometimes make these mistakes, and it's awful.)

Because on the off-chance that you didn't mean that...

I did have some contact with the Leverage strangeness here. But despite that, I have remarkably few social ties that would keep me from "saying what I think about it." I still feel seriously reluctant to get into it, on a public forum like this. I imagine that some others would have an even harder time.