In Defense of Attempting Hard Things, and my story of the Leverage ecosystem

post by Cathleen · 2021-12-17T23:08:20.150Z · LW · GW · 43 comments

This is a link post for https://cathleensdiscoveries.com/LivingLifeWell/in-defense-of-attempting-hard-things


Comments sorted by top scores.

comment by Viliam · 2021-12-20T00:15:37.714Z · LW(p) · GW(p)

We wouldn’t actually want Elon clones; he’s combative and erratic and quite difficult to coordinate with (which incidentally seems to be relatively common for abnormally effective people who emerge seemingly randomly in the wild), but imagine 2,000 highly effective benevolent people who can get along with others.

I wonder how much of this is a consequence of being so different from your environment that you must ignore most of their advice in order to unleash your true powers. Perhaps 2000 highly effective benevolent people growing up together would be naturally more cooperative, because they would have someone worth cooperating with.

I realize that some of you may feel like this is off-topic. It’s maybe airing grievances, which you were hoping for, but it’s not sensational in any way and it’s not directly addressing any of the claims that have been made about Leverage.

Actually, these are the data I was looking for. I wanted to have a near-mode picture of what day-to-day life in Leverage looked like. You provided your perspective, thank you! So did Zoe; now I have two pieces of the puzzle.

(I am also curious about the specific theories, but perhaps some other day someone else will leak the data.)

It's the far mode, where people keep saying that Geoff is either the most genius philosopher ever or an evil cult leader, that is quite boring. Like, maybe one or the other conclusion is right, or maybe it's something in between, or maybe it's something else entirely... but give me the data, and I will make the conclusion myself. It may be a conclusion different from what you wanted me to make... but that's exactly the point.

One thought I’ve had is that perhaps the Rationalist community in particular is uncomfortable allowing outside theories in without first insourcing them, but they don’t have a structure for processing and vetting content other than through their leadership, and their leadership is busy and hasn’t opted-in to serving that function. So instead there’s a cursory “eh, looks wrong” and then there’s not really anywhere to go from there for either actually engaging with hard-to-prove content or clearing the status hit. Or maybe the issue is more status-based to begin with--some kind of allergy to confidence?

I believe this is spot on. The community attracts the kind of people who love nitpicking. So when a theory is presented, someone always goes "this is not exactly right", and it can quickly escalate to "this is complete bunk". And it's not really helpful [LW · GW] that people are too familiar with the replication crisis, which allows them to dismiss even things they don't have evidence against by saying "seems like one of those things that won't replicate". In theory, the replication crisis should make people update towards "I am not really sure whether this is right or wrong", but in practice it often makes them update towards "I am sure this is wrong".

And then the situation becomes completely different when a high-status member of the rationalist community vouches for a theory they found in... a scientific paper, a pop-science book, a self-help book, a pop-Buddhism book, or someone's blog. Now, attacking the idea would be like attacking its supporter; only a higher-status person can do that safely. Like, Eliezer himself could say: "Excuse me, but how strong is the scientific evidence in favor of this theory?" and then the supporter might backpedal to: "eh, I was just using it as a fake framework that I personally found useful in certain situations, but of course I am not claiming that..." :D

Yes, the rationalist community has a problem with evaluating theories.

This does not automatically imply [? · GW] that Geoff's theory is correct or useful. It also does not imply that Geoff's theory was dismissed without a debate... because it wasn't.

Let's start with the fact that Connection Theory was introduced [LW · GW] to LessWrong readers by Luke Muehlhauser, the former CEO of the Singularity Institute. (I am not sure whether he was the CEO at the exact moment the article was posted.) That sounds to me like pretty strong insider support. Luke continues with:

I suspect many LWers are excited (like me) to see another organization working on (among other things) x-risk reduction and rationality training, especially one packed with LW members. But I also suspect many LWers (like me) have many concerns about your research methodology and about connection theory. I think this would be a good place for you to not just introduce yourself (and Leverage Research) but also to address some likely concerns your potential supporters may have (like I did for SI here and here).

(In the second link, Luke answers questions similar to what people had about Leverage: What does the Singularity Institute actually do? Why is their research not peer-reviewed? Did they develop anything practical? Luke admits that SI has a problem and he promises to fix it. He makes some specific promises, some of them even with specific deadlines, and says that he will work towards greater transparency of the organization. Later, Singularity Institute / Machine Intelligence Research Institute actually published some stuff.)

Then came the critique [LW · GW], which started with describing Connection Theory and Geoff's evidence in favor of it. Then the author explained why the evidence does not mean what Geoff thinks it means. So, this was not a knee-jerk reaction.

Then Eliezer weighed in, and asked the unpleasant question: What predictions does CT make that mainstream cognitive science does not? Geoff answered, approximately, that there were no general predictions, but if you make a detailed map of someone's beliefs, then you can make good specific predictions about that specific person. Then PJ Eby (who has read tons of books on psychology and self-help) said that if you spend a lot of time making a model of a specific human, this of course allows you to make good predictions about them, but that's unrelated to whether the theory itself is true or not. It just means that having more data is usually better. Then he mentioned an existing theory called "Current Reality Tree", which is similar but much simpler. (No reply.)

Geoff says that the theory worked for him to get rid of a specific addiction, and that no psychologist would have predicted that. PJ Eby tells him he is merely unfamiliar with mainstream psychological research, where the idea of "secondary gain" is already well known. (I confirm that even I have heard about it.) In a different comment he also mentions "Perceptual Control Theory" and "Method of Levels", which again seem like simplified versions of CT. (The word "simplified" is not derogatory in this context, but points towards Occam's razor.) Etc.

...from my perspective this all means that Geoff's Connection Theory was nicely introduced to the LessWrong audience, it just failed to impress the audience. Part of the reason was that some LW readers are actually quite familiar with the current state of psychology / cognitive science, so they could point out that the useful parts of CT are already known to mainstream science, and the extra parts are not useful. And then... silence. On the side of Leverage, no one tried to convince the LW audience. On the side of LW, the topic seemed settled.

So to me it seems that Geoff decided (perhaps correctly) that if the LW audience isn't immediately impressed with his grand theory, it would be a waste of his time to provide more information or try to address some concerns. But this is not the same as LW refusing to engage with his theory. He actually got some useful feedback (for example, that there already exist more elegant versions of his theory); he just seemed to ignore it entirely.

Another way to put it is that instead of trying to convince his audience, Geoff selects his audience from people who are easy to impress. Such as kids fresh out of school. When facing people who actually know something about mainstream psychology, and can tell him how specifically he is wrong, and that the good parts of his theory are actually not new... it is easier to withdraw, and continue impressing the kids in his kingdom.

Instead of assuming that because you can’t see our progress, it doesn’t exist/doesn’t have value, it seems like people should be more curious — recognize that they might be missing something important. “Why are so many smart and strategic people investing financial and human capital into this project?”

Ok, this one is easy. Donors are in the business of buying "lottery tickets". They try a few things, not because they are convinced that it will definitely work, but because the probability seems sufficiently non-zero; so if they support 100 things, and 10 of them turn out to be okay, and 1 of them turns out to be really impressive... that is a success in their books.

Geoff is smart and charismatic, he was hanging out with people from the rationalist community, and his project seemed interesting. It was a ticket worth buying.
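To make the "lottery ticket" logic concrete, here is a minimal expected-value sketch in Python. All of the numbers are hypothetical, chosen only to illustrate how a hits-based portfolio can pay off even when most grants return nothing:

```python
# Hypothetical hits-based portfolio: 100 grants of $1 each.
# The payoff numbers below are made up purely for illustration.
outcomes = [0.0] * 89 + [2.0] * 10 + [200.0]  # 89 duds, 10 "okay" (2x), 1 big hit (200x)

total_cost = 100.0
total_return = sum(outcomes)  # 10 * 2.0 + 200.0 = 220.0

print(total_return / total_cost)  # 2.2 -- profitable despite an 89% failure rate
```

On numbers like these, funding Leverage only had to look like a plausible long shot, not a sure thing.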

This fits in with my comments about the ways that we were, I think, poorly treated by some of the surrounding communities and orgs. Not wanting to acknowledge our contributions, not wanting to give us a platform, not wanting us to hire people, and sometimes being pretty attacky.

Uhm, if you want to make the Less Wrong audience acknowledge your contributions, writing an article called "Report on Leverage Research contributions" sounds like the straightforward solution. Did you expect it would get banned? Were you waiting for Luke Muehlhauser to post another article on your behalf?

If the story had been that we (Geoff? Everyone who ended up in a leadership position?) had been trying to maliciously brainwash and manipulate and control people for 6 years and finally succeeded after year 7 and then shut it down…well, that seems like a pretty odd explanation.

Another possible story could be that after 6 years of tinkering with various "psychological techniques", Geoff came across something so harmful that it made his group fall apart.

I am referring to the "intention research" here. I assume it was originally meant to be a weapon against outsiders, but it backfired, as different groups within Leverage started using it against each other.

The poison was so strong that it was easier for Geoff to just burn down the whole thing, erase the website, change the organization name, and start again with new members somewhere else, where hopefully his previous reputation would not follow him.

Also, I suppose that the more time passed, the more outside observers became impatient about the lack of visible output. Excuses like "we are still working on Connection Theory" make more sense in year 1 than they make in year 6.

For example, speaking for myself, I was originally quite positive about Leverage Research. If you had asked me a year or two ago, I would probably be like "oh, they are some awesome rationalists in the Bay Area, I don't actually know any details about them, but that's probably my fault because I am on the opposite side of the planet". Then at some moment I realized that the only reason I believe that Leverage is doing something useful is because Leverage told me so, without providing any tangible evidence. So I stopped believing that.

Replies from: DaystarEld, lsusr, ChristianKl, CronoDAS
comment by DaystarEld · 2021-12-27T21:16:59.045Z · LW(p) · GW(p)

Thanks for writing this up, I had similar thoughts.

Overall I'm glad Cathleen wrote this post, as it gave me a lot more insight into what life in Leverage was like and why, and more empathy for what people there have been going through. I really hope she and everyone else manages to carry on, be successful, not be stigmatized, and keep working on things that are important to them.

But also, I have not updated in the direction of "Leverage was actually working on important or meaningful things that are valuable to other people outside its ecosystem." I'm still waiting for that, and interested in seeing things that might indicate it. It would be fantastic to learn of new psychology research or insights that could help me and my friends and clients.

This also seems central to why the question of whether Leverage is being treated unfairly now feels very different from the question of whether they were treated unfairly before.

comment by lsusr · 2021-12-20T09:45:29.164Z · LW(p) · GW(p)

Many of Viliam's comments are at-or-above the quality standard of top-level posts. This is one of them.

comment by ChristianKl · 2021-12-20T09:48:33.143Z · LW(p) · GW(p)

I am referring to the "intention research" here. I assume it was originally meant to be a weapon against outsiders, but it backfired, as different groups within Leverage started using it against each other.

The point of intention research is not to be a weapon. I expect that most of the problem was people using it to do coaching interventions on others without consent, interventions intended for the other person's own good.

Another aspect is that people went through processes that made them more sensitive, and thus more vulnerable, in an attempt to increase their perceptive abilities and get better at the field.

A third problem is that it gets people to believe that they can accurately mind-read other people, which then leads to all sorts of problems when it turns out that their mind reading isn't fully accurate.

The poison was so strong that it was easier for Geoff to just burn down the whole thing, erase the website, change the organization name, and start again with new members somewhere else, where hopefully his previous reputation would not follow him.

There's no name change of the organization. It's still Leverage Research.

Then at some moment I realized that the only reason I believe that Leverage is doing something useful is because Leverage told me so, without providing any tangible evidence.

Belief Reporting is useful, and I think you were regularly at the LessWrong Community Weekend, so you could have attended the one workshop (from someone who wasn't themselves at Leverage but who was at one of the Leverage workshops).

Replies from: Viliam
comment by Viliam · 2021-12-20T15:50:46.731Z · LW(p) · GW(p)

Belief Reporting is useful, and I think you were regularly at the LessWrong Community Weekend, so you could have attended the one workshop (from someone who wasn't themselves at Leverage but who was at one of the Leverage workshops).

I think I was at one LW Community Weekend in Berlin, at least six years ago, and I no longer remember most of what happened there, so I can neither confirm nor deny this.

comment by CronoDAS · 2021-12-28T04:02:06.951Z · LW(p) · GW(p)

Going off-topic here: is it just me, or is Elon Musk's greatest superpower the ability to get ridiculous amounts of funding for very unusual ventures? (SpaceX, the Boring Company, Neuralink, etc.) There must be many more people besides Elon Musk who could be an effective CEO for a speculative technology company, but I suspect that if anyone other than Elon Musk had tried to pitch The Boring Company to venture capitalists, they'd be laughed out of the room.

(Similarly, right now Warren Buffett makes money not by outwitting the market the way he did years and years ago, but because people are willing to give him better terms than other investors, since having the Warren Buffett Seal Of Approval is valuable.)

comment by Freyja · 2021-12-19T01:11:08.463Z · LW(p) · GW(p)

Hi Cathleen. As someone inadvertently but meaningfully once tangled up in this story who you probably don’t know, I have a deep admiration, gratitude and respect for this post and your decision to write it and post it publicly. I read all of it, and might yet read it again. It helped to make sense of the story and the parts relevant to me in a way that is, in real time, updating and improving my understanding of how different people with different personalities can participate in the same situation and come out of it with different struggles and different earned wisdom. Yours is a perspective I’ve been missing, so thank you for having the courage and grace to share it.

Replies from: Spiracular
comment by Spiracular · 2021-12-19T08:02:37.026Z · LW(p) · GW(p)

Seconded.

I really appreciate Cathleen being willing to talk about it, even given the reasonable expectation that some people are going to... be jerks about it, misinterpret things, take things out of context, and engage in ways she won't like. Or even just fail to engage in ways that would be good for her?

I don't always see eye-to-eye with Cathleen, but she poured a lot into this project. She is not exaggerating when she conveys that she was responsible for a really giant pile of ops and maintenance tasks at Leverage.

(I'm not sure how Leverage handled her sick days, but I would be surprised if it wasn't a whole thing. That feels like one way to point to just how large an amount she ended up being personally responsible for. One of the most grounded and productive people there.)

I'm sad to hear that this project hurt her, in the ways it did? (ex: overwork, lack of support, high interpersonal conflict environment)

I'm somewhat glad that she hasn't completely given up on the idea of well-intentioned ambitious projects, and I'm really happy that it sounds like she has a solid bubble of people now.

This is a lot of information, and there was a cost to writing it up, I'm sure. I can't really weigh in on whether it was worth what she gave up to do so, but I'm grateful that she shared it.

comment by jefftk (jkaufman) · 2021-12-21T18:34:21.557Z · LW(p) · GW(p)

Thank you for writing this. Leverage was an enormous project, and the more we all can learn from it the better.

I think the biggest place where my perspective is different is that you are coming from a place where you have a really strong belief in the value of the things that Leverage was doing. Something like, it was a group of people who in an unorthodox way made a lot of deep and important discoveries about how people think and relate to each other. Outsiders, however, haven't seen evidence of these discoveries, and so we pretty much have to take your word for it. In particular, I don't accept the argument that we should trust that Leverage's funders were carefully evaluating its output: many funders have a "hits based" approach, and even if they were looking closely, from what I know about them they didn't have the background to know what was and wasn't groundbreaking psychology research.

I understand that it's hard to translate results from a high context environment like Leverage into something that can be shared externally, that the people involved were and remain very busy, and that there are questions about the advisability of sharing specific information. But I also think that in the absence of such sharing us outsiders should remain skeptical.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-12-22T01:57:46.045Z · LW(p) · GW(p)

This comment inspired me to finally write this short piece [LW · GW] that I've been hoping to write for a few years now.

comment by Viliam · 2021-12-19T00:51:29.044Z · LW(p) · GW(p)

Cathleen, I appreciate your heroic effort. If I ever started a world-saving organization (which is unlikely), you would probably be the only person from Leverage that I would want to hire.

Now please forgive me for armchair psychoanalysis, but it feels to me that when people criticize Leverage, you probably hear it as a criticism of your work in Leverage. Which would indeed be completely unfair! My impression from reading the article is that without you, Leverage would probably have fallen apart many years sooner. But the entire project was so psychologically unhealthy that it was doomed to fall apart at some point. Your efforts could only buy them a few extra years, at the cost of you burning out. And instead of using this time to fix itself, Leverage became more toxic, and finally even turned against you.

Something happened to switch that, and I’ve never figured out what or why. The people I’d been supporting seemed to stop appreciating my efforts and were quick to come to harsh judgments about my motivations and limitations that led them to exclude me or circumvent me and blame me for perceived shortcomings instead of looking for solutions together. Negative implications of things my trainers had come to believe about my mental setup were sometimes hinted at but never made explicit.

It was deeply destabilizing to have some of my closest collaborators and friends seem to update so negatively towards me with no apparent way that I could interface with those negative views or the resulting change in how I was treated.

In the intervening years only one other person from the Leverage ecosystem has reached out to check in on me (though others have been friendly when they email to ask for help with things or in their responses to me reaching out).

This is not okay. You set the bar so desperately low that even "being friendly when asking for help" counts, but come on. This, from people who tried to save the world together, who lived in the same house with you for years?

Like, when I saw the words "something happened to switch that, and I’ve never figured out what or why", my first thought was: "So why don't you just pick up the phone and ask them? The financial incentive is no longer there; they do not have to undergo regular thought-debugging with their bosses anymore; now they could give you an honest answer with no repercussions."

But either the thought never crossed your mind (unlikely, but possible), or you ex-Leverage people simply do not have the type of relationship where you could call them and ask a personal question. Despite spending years in the same house and doing amateur mind surgery on each other. What the fuck?

I remember at some point an advisor telling us that she’d heard that for every negative piece of feedback, you need to give five pieces of positive feedback because of the outsized effect of negative updates. I don’t know that anything like that would’ve been possible in a project where people were so dedicated to rooting out flaws in themselves and others and in the project itself.

I'd like to recommend the book Don't Shoot the Dog which explains how to teach using positive feedback alone.

But this problem goes deeper. It seems to me that there was a general lack of niceness in Leverage. This, combined with the "intention research", turned out to be a deadly combination; people became paranoid and started to see the worst in each other.

(Sorry, I have much more to add, but I am out of time for now.)

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-18T18:57:58.952Z · LW(p) · GW(p)

Thanks for sharing! I found this helpful, though I only skimmed it.

The argument about how mainstream psychology looks a lot like woo too if you squint at it (e.g. IFS says you have multiple people living in your head) interests me. My tentative response is: "Yep, totally, and I think we shouldn't trust mainstream psychology much either. And if there was an academic psychology department with grad students and professors all living in one big house inventing novel theories and experimenting on each other, serving as each other's therapists and whatnot... I'd be pretty nervous about that place too."

Replies from: TekhneMakre
comment by TekhneMakre · 2021-12-19T07:47:11.199Z · LW(p) · GW(p)

[Not really responding to your comment, just saying something sparked while reading it]

I wish that there were more things sort of like Leverage, honestly, given my current info. I'd remove the leader worship and the cosmic-battle stuff and the information suppression and the sleeping with subordinates, but I'd keep the living together, the dogfooding, the intense mental exploration, the jargon, the (non-anti-epistemic aspects of) exceptionalism. I'd add concrete stuff like making physical products, or computer code that does obviously interesting or useful stuff according to other people, or math proofs that academic mathematicians think are correct and interesting, etc. My sense is that these groups don't include that stuff because it's too, like, menial or non-abstract or something. Like if you're trying to save the world, "of course" it's mostly a waste of time to make a physical product that people want to buy; you should be spending all your time chewing on highly abstract high-leverage questions about minds that have effects that radiate out into everything and determine the course of all future wise leadership etc. etc., like OAK/MAPLE or Circlers or Leverage. Which isn't totally silly, but also yuck. 90% object, 10% meta, maybe 20% if you're careful.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-12-19T12:02:55.962Z · LW(p) · GW(p)

I also agree that intuitively it seems healthy to mostly be focused on object-level stuff. Maybe it's something to do with getting more direct feedback from reality.

Replies from: Viliam
comment by Viliam · 2021-12-19T19:19:13.614Z · LW(p) · GW(p)

What was the "feedback from reality" that Leverage folks got on their psychological and self-improvement skills? I mean, those were the areas they considered themselves to be experts at; where they believed their techniques were so superior that publishing them would be a potential x-risk.

Reading Cathleen's article... despite her admiration for Geoff, trying to spin things in a positive light, and even shifting the blame onto the rationalist community for not being more supportive... the actual data she provides really don't seem to support Leverage's self-image.

At least my impression is of a group of kids, most of them incapable of tying their own shoes, despite years of continuous self-improvement using secret techniques developed by their philosopher-king. And their psychological research culminated in "intention research", which apparently made everyone in the group hostile to everyone else, resulting in the entire group falling apart. Like... if this is the best you can do, then maybe it is time to admit that you are utterly incompetent at everything beyond fundraising.

Replies from: ChristianKl
comment by ChristianKl · 2021-12-20T09:05:10.836Z · LW(p) · GW(p)

At least my impression is of a group of kids, most of them incapable of tying their own shoes,

There's no reason to make an insult like that. 

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-12-21T18:17:58.425Z · LW(p) · GW(p)

How about, "incapable of cleaning up after dropping a glass"?

(Not trying to be mean; I was very surprised to read that this was the kind of thing they relied on Cathleen for)

Replies from: RationallyDense
comment by RationallyDense · 2022-01-02T00:53:52.963Z · LW(p) · GW(p)

I think a lot of it might have to do with the non-linear way cleanup needs to scale with the number of people. If you have 10x the people living in a space, they are probably creating around 10x as much mess. And because there are 10x the people, there are also 10x as many people experiencing the messes. So with the broken glass example, you probably have 10x broken glasses per time period and 10x people wandering about at risk of finding a small glass shard you missed. So cleaning up a broken glass in a 40-person household needs to be way more thorough than in a 4-person household.

(It's also why using glassware near your personal pool is fine while using glassware near a public pool tends to be prohibited.)
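A toy model makes the quadratic scaling explicit (all probabilities here are invented for illustration; this is a sketch of the comment's reasoning, not anything from the thread):

```python
# Toy model of why cleanup thoroughness must scale non-linearly with
# household size. Probabilities are made up purely for illustration.

def expected_shard_incidents(n_people, p_break=0.1, p_step=0.02):
    """Expected 'someone finds a missed shard' events per month."""
    breakages = n_people * p_break    # broken glasses scale linearly with n
    exposures = breakages * n_people  # each missed shard is a risk to every housemate
    return exposures * p_step         # so overall risk scales roughly with n^2

for n in (4, 40):
    print(n, expected_shard_incidents(n))
# 4  -> 0.032
# 40 -> 3.2  (100x the risk for 10x the people)
```

The breakage rate and the number of people exposed each grow linearly, so the incident rate grows with their product: 10x the housemates means roughly 100x the risk from the same per-person sloppiness.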

comment by AnnaSalamon · 2022-01-03T05:09:00.291Z · LW(p) · GW(p)

I, also, really appreciate Cathleen for writing this piece, and found it worth reading and full of relevant details. I'll try to add more substantive comments in a week or so, but wanted meanwhile to add my vote to those recommending that folks wanting to understand Leverage read this piece.

comment by Jarred Filmer (4thWayWastrel) · 2021-12-20T01:27:52.270Z · LW(p) · GW(p)

To echo others, thank you for putting your time and effort into this, I found it coherent and valuable. As an international rat/EA whose only context for Leverage was Zoe's post, this fleshed out my understanding of what you were trying to do in a helpful way and gave me a lot to chew on regarding my own thoughts on ideological communities.

Regarding: "Why do people seem to hate us?"

After reading Zoe's post I had a very negative view of Leverage and Geoff, after some introspection here is my best guess as to why.

Growing up religious, I'm very aware that my individuality can be radically subordinated by:

  1. Potent ideas
  2. Potent community

And when those two are mixed in the form of ideological communities, it's powerful and it's intoxicating. And like most intoxicating and powerful things, there is a great potential for harm. Thus whenever I find myself moving in a group of people high on ideas and connection, there's a voice in the back of my mind constantly asking:

Is this a cult yet?
Is this a cult yet?
Is this a cult yet?

This is a voice I know for a fact is speaking in the backs of the minds of several EAs and rationalists I know personally. And I'd be shocked if anyone who's had a brush with the failure modes of ideological community and finds themselves in a reading group for The Precipice isn't thinking that to themselves.

So when I read of an insular group in the memeplex I call home getting high on ideas and each other, and then starting to talk about "magic" and "demons", my strong reaction was fear. It's happening, kill it with fire before it spreads and ruins everything.

I'm currently agnostic on whether leveraging the power of potent community and ideas is something that can be channeled safely, but I don't blame you guys for trying; and I recognise that my initial reactions to the topic of Leverage and Geoff Anders were mixed up with a non-trivial amount of counterproductive fear.

comment by ChristianKl · 2021-12-19T22:00:50.549Z · LW(p) · GW(p)

One thought I’ve had is that perhaps the Rationalist community in particular is uncomfortable allowing outside theories in without first insourcing them, but they don’t have a structure for processing and vetting content other than through their leadership, and their leadership is busy and hasn’t opted-in to serving that function.

That's not really true. Writing LessWrong posts is a way that anyone can put up a theory to the rationalist community. People then read the post and vote based on what they believe about the post. The problem is that Leverage's idea of secrecy included not sharing information through such a channel.

comment by Chris_Leong · 2021-12-19T15:35:24.447Z · LW(p) · GW(p)

This was a long post, but an excellent one! I think that it provides a lot of balance to what we've heard so far.

comment by Viliam · 2021-12-18T17:19:27.401Z · LW(p) · GW(p)

After reading the first ~20% of the text, my impression is that you were the only adult person in the house. And I am not talking about biological age here.

This lack of people “like me” led to me spending the better part of a decade trying my very best to hand oars to people in an effort to row in the same direction, where it’s possible that they didn’t even agree that we needed a boat.

Sounds like they didn't even realize how much they needed an adult -- someone to close the fridge, take out the trash, do the dishes, and clean their diapers (sorry, towels and sheets). Couldn't even say: "thank you, Mom!"

Before using the absurdity heuristic, let's note that Paul Erdős was probably even less mature, and he was quite productive and successful. So, this is not an a priori impossible way to save the world. (But if someone wants to go this way, the fraction of nannies in the organization needs to be way higher.) I admit I am quite Petersonian in believing that if someone is unable to clean up their room, they do not seem ready to change the world.

More specifically, I believe in the power of division of labor, and the usefulness of a competent concierge. I can quite imagine myself volunteering for a role similar to what you had (if I believed in the mission, if taking the role would not result in low status, and if that seemed like an effective way to split work according to everyone's skills). But there is a huge difference between "I know that you could do it too, but it is more efficient if I do it for you" and "I am doing it for you, because you seem unable to do it yourself, maybe even unable to learn how to do it yourself", especially if we are talking about things like doing the laundry, in the context of a group selected for their interest in self-improvement and improving the world. If you can't solve this, how can you aspire to solve... well, anything? (What magic do you need beyond a 5-minute lecture, a checklist, and a reminder one month later?)

EDIT:

I was getting evidence that this group of people who I was relying on to level-up and discover important truths and work hard to build up to a better world were nowhere near as conscientious as I needed them to be. And this was upsetting not just because of the way that it impacted my Sunday morning routine, but because of the larger implications for my plans.

In my opinion, it also had implications for the very plan to build a better world. Okay, maybe if Leverage had more people like you (and if you were allowed to make some decisions that others would have to follow), perhaps the situation could have been saved. Anyway, I find it ironic that a group of people that was officially so much about self-improvement and psychological research had this type of problem.

EDIT: I guess the part about incentives for the trainers explains it. Teaching someone to be less of a dependent child would be a lot of work for zero credit, because credits are only for doing grand things.

comment by Taran · 2022-12-14T22:44:23.870Z · LW(p) · GW(p)

I liked this post, but I don't think it belongs in the review. It's very long, it needs Zoe's also-very-long post for context, and almost everything you'll learn is about Leverage specifically, with few generalizable insights. There are some exceptions ("What to do when society is wrong about something?" would work as a standalone post, for example), but they're mostly just interesting questions without any work toward a solution. I think the relatively weak engagement it got, given its length and quality, reflects that: Less Wrong wasn't up for another long discussion about Leverage, and there wasn't anything else to talk about.

Those things aren't flaws relative to Cathleen's goals, I don't think, but they make this post a poor fit for the review: it didn't make a lot of intellectual progress, and the narrow subfield it did contribute to isn't relevant to most people.

Replies from: Raemon
comment by Raemon · 2022-12-14T22:47:46.314Z · LW(p) · GW(p)

My take was "It'd be quite weird for this post to show up in the Best of LessWrong books, but I think part of the point of the review is to reflect on things that had some kind of impact on your worldmodels, even if the posts aren't directly optimized for that." 

Replies from: Taran
comment by Taran · 2022-12-15T13:40:39.591Z · LW(p) · GW(p)

Sure, I just don't expect that it did impact people's models very much*. If I'm wrong, I hope this review or the other one will pull those people out of the woodwork to explain what they learned.

*Except about Leverage, maybe, but even there...did LW-as-a-community ever come to any kind of consensus on the Leverage questions?  If Geoff comes to me and asks for money to support a research project he's in charge of, is there a standard LW answer about whether or not I should give it to him?  My sense is that the discussion fizzled out unresolved, at least on LW.

Replies from: Raemon
comment by Raemon · 2022-12-15T20:48:07.911Z · LW(p) · GW(p)

A non-obvious thing here: last year, Lightcone put a fair amount of effort into doing lots of interviews, orienting on the Leverage situation, and attempting to publish a blogpost that offered a pretty clear and comprehensive set of information. We were specifically thinking of this from the standpoint of "it doesn't seem like there are very good community justice institutions or practices around, beyond random discussions", and thinking maybe we could contribute something useful.

And then, well, a lot of stuff came up and we didn't get the piece over the finish line of publishing.

So I'm coming at this partly from the perspective of "how valuable would it have been to get that across the finish line?". And I see both this piece and the Zoe piece as representing the collective situation.

I also do just agree with some of the claims in this piece (implicit and explicit) that many of the cult-looking behaviors of Leverage are red herrings and are reasonable things I want to defend.

Replies from: Taran
comment by Taran · 2022-12-16T11:14:43.965Z · LW(p) · GW(p)

That post sounds useful, I would have liked to read it.

comment by Viliam · 2021-12-18T19:47:47.319Z · LW(p) · GW(p)

Bubbles are great, but if you lose the ability to live outside the bubble, you have a problem. For example, the bubble owner may start extracting rent from you. But even in a perfectly benevolent bubble, all the resources outside the bubble become inaccessible to you. Life in a bubble is great, if you have the capacity to leave.

I think there were growing pains for people who weren’t adept at what we’d ended up focusing on (but who wanted to continue being part of our group and our mission) but who also weren’t able or willing to contribute in other ways (e.g. fundraising or recruiting or day-to-day operations).

Allow me to rephrase it using my own words: You hired kids right out of university, provided them with accommodation and food, did their dishes and laundry, and paid them for doing "psychological research" which consisted of making up stuff and providing emotional support to each other. When you finally decided to stop paying some of them, they were really sad. (No shit, Sherlock!) But despite the sadness, they didn't volunteer to e.g. help you with your job. Sad to be low-status, but unwilling to gain that status back by doing the dishes.

How much am I exaggerating here?

To me it seems that you hired kids with no experience of real life, absurdly spoiled them, and then... they didn't provide enough value to your project (you considered them insufficiently psychologically insightful; they refused to do the mundane jobs), but also lacked the skills needed to survive outside of this protective bubble and they knew it. They would probably need a "normal life coach" just to be able to keep a job and survive living on their own. The three months of financial runway were generous, but they should have spent the time getting outside-of-bubble skills.

The problem was not the bubble. The problem was populating it with people incapable of surviving outside the bubble. I suspect that this created the stress, and also the potential for abuse. You can be friendly to each other, but if the boss can press a button that will end your life as you know it, that is a huge power differential. That is way more power than an average employer (even an average American employer) has over their employees.

A charitable reader will assume that this all happened by accident.

Replies from: 4thWayWastrel, Spiracular, ChristianKl
comment by Jarred Filmer (4thWayWastrel) · 2021-12-20T01:45:57.888Z · LW(p) · GW(p)

I agree with the content of your comment but the framing gives me a sense of bad faith, and makes me uncomfortable.

If I put a lot of time into a post-mortem detailing how an 8-year project I put a lot into went wrong, and then the top comment was someone summing up what I'd done in an uncharitable paragraph saying things like "making stuff up" and "no shit sherlock", I'd feel like I'd tried to do a good thing for the discourse at large and had been defected against.

comment by Spiracular · 2021-12-19T20:18:26.767Z · LW(p) · GW(p)

It feels worth pointing out that universities seem to try to set up this sort of absurdly protective bubble, by design. Uni extracts sometimes-exorbitant rent while doing so; Leverage was at least usually paying people salaries.

Meanwhile, a lot of US bureaucracy appears almost... tailor-made to make life more complicated, with a special exception to that reserved for "full-time employees of large corporations"? (I think that for historic reasons, some of their bureaucratic complications are consistently outsourced to their company to handle.)

Against this societal backdrop, I find it hard to fault Cathleen or Leverage for trying what they did. While also not being too surprised that it led to some dependence issues.

(Maybe having a hard "2 years" limit, and accepting a little less "heroic responsibility," would have downgraded a lot of issues to just "University dorm level.")

Replies from: Chris_Leong
comment by Chris_Leong · 2021-12-20T12:38:08.395Z · LW(p) · GW(p)

The EA hotel has a two-year limit (I think exceptions are possible though). After reading this article, I'm feeling like Greg made a good choice here.

comment by ChristianKl · 2021-12-18T20:50:54.758Z · LW(p) · GW(p)

Sad to be low-status, but unwilling to gain that status back by doing the dishes.

One interesting aspect here is that most actual cults have no problem giving people dish duty if they think those people should do it. The failure to get them to do the dishes is an example of how little pressure was put on them, even when they didn't add value.

Replies from: Viliam
comment by Viliam · 2021-12-18T22:17:29.488Z · LW(p) · GW(p)

Yeah, as far as I know (I am not yet in the middle of the article -- and in the meantime I already did the dishes twice, hung up the laundry, and read a bedtime story to my kids, hehe), living in Leverage sounds like lots of fun... if you enjoy that type of thing, and if you are able to survive outside the bubble.

Generally, group houses are a controversial topic on Less Wrong. Whenever you mention them, someone says "awesome" and someone else screams in horror. I am on the "awesome" side... but that is because I am already capable of living a normal life, so if I joined such a project and something went wrong, I could simply... leave. I am also able to keep my boundaries, so I would be like "the entire weekend is my personal free time" (and I am using it to meet people outside this group), and if the group is not okay with that, then I quit (it helps that I have the "fuck-you money"). Though I suspect that with this attitude, I wouldn't get hired in the first place... which perhaps points towards the problem (that the group members were selected, perhaps unintentionally, for their potential to become dependent on Geoff / the group).

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-12-21T18:22:08.631Z · LW(p) · GW(p)

Are group houses a controversial topic here? My impression is that overall LW is much more positively inclined toward them than the general public. With a view like, "group houses are something that works really well for the right people, though they aren't for everyone." I've seen reactions where people say that they personally wouldn't want to live in one, but I don't think I've seen people saying that they are harmful in general?

(Disclosure: I live in a group house of sorts)

Replies from: Viliam
comment by Viliam · 2021-12-21T23:58:51.585Z · LW(p) · GW(p)

I think I remember some very negative comments on the "Dragon Army" post, but I can't find them now.

There was a time in the past where people were a bit hysterical about whether LW is a cult or not, and talking about living together was one of the triggers. Recently, everyone seems to have calmed down.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-12-22T01:13:57.585Z · LW(p) · GW(p)

Dragon Army was, in their own words, a "high-commitment, high-standards, high-investment group house model with centralized leadership". Reading through the negative comments, they're about ways in which people expect this complex project to go poorly, and not about group houses in general: https://web.archive.org/web/20180117095209/http://lesswrong.com/lw/p23/dragon_army_theory_charter_30min_read/

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-12-22T01:58:24.513Z · LW(p) · GW(p)

https://medium.com/@ThingMaker/dragon-army-retrospective-597faf182e50

comment by Spiracular · 2021-12-19T07:09:06.167Z · LW(p) · GW(p)

What to do when society is wrong about something?

I thought this aside was an extremely good snapshot of a class of problem that I have seen come up in other contexts as well.

Briefly: People have a tendency to develop social bubbles that are, in a key way or two, way more comfortable or healthy for them. If you get strongly immersed in one, then "the rest of society" starts looking unpleasant or unhealthy, and what do you do with that when it happens?

I don't find it easy to answer! But I'd be curious to hear from other people about variants of this scenario that they've run into, how they've handled it, and how that has gone.

(It sounds like Leverage had a bit of this dynamic, combined with a feeling that the norms were up for grabs. I had not previously pegged that down about Leverage, but having that context feels helpful for making sense of it.)

comment by Nevin · 2022-12-13T17:55:33.870Z · LW(p) · GW(p)

This post is the most comprehensive answer to the question "what was really going on at Leverage Research" that anyone has ever given, and that question has been of interest to many in the LW community. I'm happy to see it's been nominated for the year-end review; thank you to whoever did that!

comment by ChristianKl · 2021-12-20T09:08:01.351Z · LW(p) · GW(p)

My advice to leaders would be to encourage people to take care of as many of their personal goals outside of the project as possible: take a yoga class, visit your family for the holidays, write poetry and post it on your instagram, make time to stay connected with your friends, figure out what sparks joy and make sure you’re getting enough of that. Because the more pieces of people’s plans that have to go through the project (or really any centralized thing), the higher the stakes and the more conflict will arise from people’s paths being in a frequent state of threat (even from aspects that you wouldn’t have guessed would conflict with someone’s implicit/underlying plans).

[...]

So when we see people working on ambitious projects, enduring personal suffering and conflict, working long hours, missing family gatherings or delaying starting a family of their own -- I think it’s important to counter the natural inclination one might have to dismiss their drive as foolhardy or to assume that they’ve been duped.

Those seem to be two different sentiments in the same article. 

I imagine that the second one was written first, in the spirit of defending the work. I imagine that then, after thinking more and writing the article, you came around to the first sentiment.

Ambitious goals are nice, but having time for personal goals besides work is important.

comment by CronoDAS · 2022-01-10T09:31:28.300Z · LW(p) · GW(p)

My priors on a group like Leverage Research actually discovering something true and important by doing "basic research" in psychology are very low; I don't expect "put a bunch of smart people in a room and have them talk to each other" to produce something more connected to reality than Freudian psychoanalysis or Dianetics. After all, "id-ego-superego" really isn't a terrible model of a person, and studies have trouble telling if one kind of therapy is better than another. So you'll get something that kind of makes sense and may even seem to work but isn't necessarily related to truth in any meaningful way.

Is there anything I should know that might cause me to update my beliefs?

comment by Slider · 2022-01-09T00:11:57.317Z · LW(p) · GW(p)

I have kept quite far from the discussion, but it seems to be slowly ongoing and starting to become relevant simply as a thing happening on LW. I expect engaging with such long takes to be unwieldy. I did want to offer some pointers.

Crackpot vs. wonk: just as people are not the villains of their own stories, people themselves should believe in their own epistemic process. So every crackpot will classify themselves as (at worst) a wonk. Saying "it is okay, I am a wonk" will not really help.

I think the reference class for what was going on is thin, in that not many similar things are going on. But I think that comparing it to established institutions is not really proper. The Stanford prison experiment was run, but now our stance is that similar setups should not be done. There can be an excuse of "we didn't know it was dangerous", like how Marie Curie died from her research topic. But once we start to know, liability starts to attach. I would also not excuse the humanitarian harms done by German WWII scientists just because they were part of an ambitious order. The Milgram experiment also shows that people don't need any or much modification to make dangerous decisions in far-off settings. The road to hell is paved with good intentions, and if high aims start to waive damages, that de facto gives a license to destroy. Sure, think big and shoot a rocket off your lawn, but if the neighbour complains about their crushed dog, you really ought to have bought or arranged the safe zones. Skimping on that because you want to succeed quicker or with fewer resources is not thinking big but thinking greedy.

I can understand a military operation or skydiving having a casualty rate, but if your math research or psychology program has a casualty rate, it raises questions about how that can come to be, as it doesn't seem like that kind of activity. Furthermore, even if I have opted into a risky business, keeping the risk-deal current can be relevant. If I had a laboratory working on "what is this radioactivity thing?" and then didn't tell people right away when casualties arose, but say waited 6 months, keeping that risk-information asymmetry would seem exploitative.