My tentative best guess on how EAs and Rationalists sometimes turn crazy

post by habryka (habryka4) · 2023-06-21T04:11:28.518Z · LW · GW · 109 comments

Contents

  The central thesis: "People want to fit in"
  Applying this model to EA and Rationality
  Social miasma is much dumber than the average member of a group
  How do people avoid turning crazy? 

Epistemic status: This is a pretty detailed hypothesis that I think overall doesn’t add up to more than 50% of my probability mass on explaining datapoints like FTX, Leverage Research, the LaSota crew etc., but is still my leading guess for what is going on. I might also be really confused about the whole topic.

Since the FTX explosion, I’ve been thinking a lot about what caused FTX and, relatedly, what caused other similarly crazy- or immoral-seeming groups of people in connection with the EA/Rationality/X-risk communities. 

I think there is a common thread among a lot of the people behaving in crazy or reckless ways, that this thread can be explained, and that understanding what is going on might be of enormous importance in modeling the future impact of the extended LW/EA social network.

The central thesis: "People want to fit in"

I think the vast majority of the variance in whether people turn crazy (and ironically also whether people end up aggressively “normal”) is dependent on their desire to fit into their social environment. The forces of conformity are enormous and strong, and most people are willing to quite drastically change how they relate to themselves, and what they are willing to do, based on relatively weak social forces, especially in the context of a bunch of social hyperstimulus (lovebombing is one central example of social hyperstimulus, but also twitter-mobs and social-justice cancelling behaviors seem similar to me in that they evoke extraordinarily strong reactions in people). 

My current model of this kind of motivation in people is quite path-dependent and myopic. Even when someone could leave a social context that seems kind of crazy or abusive to them and, often with only a few weeks of effort, find a different social context that is better, they rarely do this (they won't necessarily find a great social context, since social relationships do take quite a while to form, but at least when I've observed abusive dynamics, it wouldn't take them very long to find one that is better than the bad situation they are currently in). Instead, people are attached to the social context they end up in, much more strongly than I think rational choice theory would generally predict, and very rarely even consider the option of leaving and joining another one.

This means that I currently think that the vast majority of people (around 90% of the population or so) are totally capable of being pressured into adopting extreme beliefs, being moved to extreme violence, or participating in highly immoral behavior, if you just put them into a social context where the incentives push in the right direction (see also Milgram and the effectiveness of military drafts). 

In this model, the primary reason most people are not crazy is that social institutions and groups that drive people to extreme action tend to be short-lived. The argument here is an argument from selection, not planning. Cults that drive people to extreme action die out quite quickly, since they make enemies or engage in various types of self-destructive behavior. Moderate religions that include some crazy stuff, but mostly cause people to care for themselves and not go crazy, survive through the ages and become the primary social context for a large fraction of the population. 

There is still a question of how you end up with groups of people who do take pretty crazy beliefs extremely seriously. I think there are a lot of different attractors that cause groups to end up with more of the crazy kind of social pressure. Sometimes people who are more straightforwardly crazy, who have really quite atypical brains, end up in positions of power and set a bunch of bad incentives. Sometimes it’s lead poisoning. Sometimes it’s sexual competition. But my current best guess for what explains the majority of the variance here is virtue-signaling races combined with evaporative cooling [LW · GW]. 

Eliezer has already talked a bunch about this in his essays on cults [LW · GW], but here is my current short story for how groups of people end up having some really strong social forces towards crazy behavior. 

  1. There is a relatively normal social group.
  2. There is a demanding standard that the group is oriented around, which is external to any specific group member. This can be something like “devotion to god” or it can be something like the EA narrative of trying to help as many people as possible. 
  3. When individuals signal that they are living their life according to the demanding standard, they get status and respect. The inclusion criterion in the group is whether someone is sufficiently living up to the demanding standard, according to vague social consensus.
  4. At the beginning this looks pretty benign and like a bunch of people coming together to be good improv theater actors or something, or to have a local rationality meetup. 
  5. But if group members are insecure enough, or if there is some limited pool of resources to divide up that each member really wants for themselves, then each member experiences a strong pressure to signal their devotion harder and harder, often burning substantial personal resources.
  6. People who don’t want to live up to the demanding standard leave, which causes evaporative cooling and this raises the standards for the people who remain. Frequently this also causes the group to lose critical mass. 
  7. The preceding steps cause a runaway signaling race in which people increasingly devote their resources to living up to the group's extreme standard, and profess more and more extreme beliefs in order to signal that they are living up to that extreme standard.

I think the central driver in this story is the same central driver that causes most people to be boring, which is the desire to fit in. Same force, but if you set up the conditions a bit differently, and add a few additional things to the mix, you get pretty crazy results. 

Applying this model to EA and Rationality

I think the primary way the EA/Rationality community creates crazy stuff is by the mechanism above. I think a lot of this is just that we aren’t very conventional and so we tend to develop novel standards and social structures, and those aren’t selected for not-exploding, and so things we do explode more frequently. But I do also think we have a bunch of conditions that make the above dynamics more likely to happen, and also make the consequences of the above dynamics worse. 

But before I go into the details of the consequences, I want to talk a bit more about the evidence I have for this being a good model. 

  1. Eliezer wrote about something quite close to this 10+ years ago and derived it from a bunch of observations of other cults, before our community had really shown much of any of these dynamics, so it wins some “non-hindsight bias” points.
  2. I think this fits the LaSota crew situation in a lot of detail. A bunch of insecure people who really want a place to belong find the LaSota crew, which offers them a place to belong, but comes with (pretty crazy) high standards. People go crazy trying to demonstrate devotion to the crazy standard.
  3. I also think this fits the FTX situation quite well. My current best model of what happened at an individual psychological level was many people being attracted to FTX/Alameda because of the potential resources, then many rounds of evaporative cooling as anyone who was not extremely hardcore according to the group standard was kicked out, with there being a constant sense of insecurity for everyone involved that came from the frequent purges of people who seemed to not be on board with the group standard.
  4. This also fits my independent evidence from researching cults and other more extreme social groups, and what the dynamics there tend to be. One concrete prediction of this model is that the people who feel most insecure tend to be driven to the most extreme actions, which is borne out in a bunch of cult situations. 

Now, I think a bunch of EA and Rationality stuff tends to make the dynamics here worse: 

  1. We tend to attract people who are unwelcome in other parts of the world. This includes a lot of autistic people, trans people, atheists from religious communities, etc.
  2. The standards that we have in our groups, especially within EA, have signaling spirals that pass through a bunch of possibilities that sure seem really scary, like terrorism or fraud (unlike e.g. a group of monks, who might have signaling spirals that cause them to meditate all day, which can be individually destructive but does not have a ton of externalities). Indeed, many of our standards directly encourage *doing big things* and *thinking worldscale*.
  3. We are generally quite isolationist, which means that there are fewer norms that we share with more long-lived groups which might act as antibodies for the most destructive kind of ideas (importantly, I think these memes are not optimized for not causing collateral damage in other ways; indeed, many stability-memes make many forms of innovation or growth or thinking a bunch harder, and I am very glad we don’t have them).
  4. We attract a lot of people who are deeply ambitious (and also our standards encourage ambition), which means even periods of relative plenty can induce strong insecurities because people’s goals are unbounded, they are never satisfied, and marginal resources are always useful.

Now one might think that because we have a lot of smart people, we might be able to avoid the worst outcomes here, by just not enforcing extreme standards that seem pretty crazy. And indeed I think this does help! However, I also think it’s not enough because: 

Social miasma is much dumber than the average member of a group

I think a key question to pay attention to in these kinds of runaway signaling dynamics is: “how does a person know what the group standard is?”

And the short answer to that is “well, the group standard is what everyone else believes the group standard is”. And this is the exact context in which social miasma dynamics come into play. To any individual in a group, it can easily be the case that they think the group standard seems dumb, but in a situation of risk aversion, the important part is that you do things that look to everyone like the kind of thing that others would think is part of the standard. In practice this boils down to a very limited kind of reasoning where you do things that look vaguely associated with whatever you think the standard is, often without that standard being grounded in much of any robust internal logic. And things that are inconsistent with the actual standard upon substantial reflection do not actually get punished, as long as they look like the kind of behavior that would be generated by someone trying to follow the standard.

(Duncan gives a bunch more gears and details on this in his “Common Knowledge and Social Miasma” post: https://medium.com/@ThingMaker/common-knowledge-and-miasma-20d0076f9c8e)

How do people avoid turning crazy? 

Though I think the dynamics above are real and common, there are definitely things that both individuals and groups can do to make this kind of craziness less likely, and less bad when it happens. 

First of all, there are some obvious things this theory predicts: 

  1. Don’t put yourself into positions of insecurity. This is particularly hard if you do indeed have world-scale ambitions. Have warning flags against desperation, especially when that desperation is related to things that your in-group wants to signal. Also, be willing to meditate on not achieving your world-scale goals, because if you are too desperate to achieve them you will probably go insane (for this kind of reason, and also some others).
  2. Avoid groups with strong evaporative cooling dynamics. As part of that, avoid very steep status gradients within (or on the boundary of) a group. Smooth social gradients are better than strict in-and-out dynamics.
  3. Probably be grounded in more than one social group. Even being part of two different high-intensity groups seems like it should reduce the dynamics here a lot. 
  4. To some degree, avoid attracting people who have few other options, since it makes the already high switching and exit costs even higher.
  5. Confidentiality and obscurity feel like they worsen the relevant dynamics a lot, since they prevent other people from sanity-checking your takes (though this is also much more broadly applicable). For example, being involved in crimes makes it much harder to get outside feedback on your decisions, since telling people what decisions you are facing now exposes you to the risk of them outing you. Or working on dangerous technologies that you can't tell anyone about makes it harder to get feedback on whether you are making the right tradeoffs (since doing so would usually involve leaking some of the details behind the dangerous technology). 
  6. Combat general social miasma dynamics (e.g. by running surveys or otherwise collapsing a bunch of the weird social uncertainty that makes things insane). Public conversations seem like they should help a bunch, though my sense is that if the conversation ends up being less about the object-level and more about persecuting people (or trying to police what people think) this can make things worse. 

There are a lot of other dynamics that I think are relevant here, a lot more things one can do to fight against them, and a ton of other factors that I haven’t talked about (willingness to do crazy mental experiments, contrarianism causing active distaste for certain forms of common sense, some people using a bunch of drugs, the high price of Bay Area housing, a messed-up gender ratio and some associated dynamics, and many more things). This is definitely not a comprehensive treatment, but it feels like currently one of the most important pieces for understanding what is going on when people in the extended EA/Rationality/X-Risk social network turn crazy in scary ways.

109 comments

Comments sorted by top scores.

comment by romeostevensit · 2023-06-21T07:58:26.671Z · LW(p) · GW(p)

The spiritual world is rife with bad communities and I've picked up a trick for navigating them. Many of the things named in this post could broadly be construed under the heading of "weird power dynamics." Isolation creates weird power dynamics, poor optionality creates weird power dynamics, and drugs, and skewed gender ratios, and etc etc.

When I spot a weird power dynamic I name it out loud to the group. A lot of bad groups will helpfully kick me out themselves. I naturally somewhat shy away from such actions of course, but an action that reliably loses me status points with exactly the people I don't want to be around is great.

It's the emperor's clothes principle: That which can be destroyed by being described by a beginner should be. And the parable illustrates something important about how it works: the naming needs to be sincere, rather than snark, criticism, etc.

Replies from: habryka4
comment by habryka (habryka4) · 2023-06-21T16:44:35.930Z · LW(p) · GW(p)

I feel like this seems helpful for something else, but I don't think it super accurately predicts which environments will give rise to more extremist behavior. 

Like, I am confident that the above strategy would not work very well if you point out the "weird power dynamics" of any of the world's largest religious communities, or any of the big corporations, or much of academia. Those places have tons of "weird power dynamics", but they don't give rise to extremist behavior. I expect all of those places to react very defensively and maybe kick you out if you point out all the weird power dynamics, but also, those power dynamics, while being "weird", will still have been selected heavily to produce a stable configuration, and generally not cause people to go and do radical things.

Replies from: aleksi-liimatainen, romeostevensit, ChristianKl, M. Y. Zuo
comment by Aleksi Liimatainen (aleksi-liimatainen) · 2023-06-27T07:24:10.017Z · LW(p) · GW(p)

Seems to me that those weird power dynamics have deleterious effects even if countervailing forces prevent the group from outright imploding. It's a tradeoff to engage with such institutions on their own terms and these days a nontrivial number of people seem to choose not to.

comment by romeostevensit · 2023-06-22T17:30:46.259Z · LW(p) · GW(p)

I agree this does not carve the same shape as your post; I thought it was worth mentioning in this context, and am curious what other techniques people might have stumbled upon.

comment by ChristianKl · 2023-06-23T15:08:05.819Z · LW(p) · GW(p)

One key difference is the living arrangement. In all three of the groups you mentioned in the OP, living together and working together went hand in hand. 

Replies from: habryka4
comment by habryka (habryka4) · 2023-06-23T17:31:15.126Z · LW(p) · GW(p)

It's pretty normal for religious communities and universities to live and work in the same place, so this correlation doesn't feel super strong to me.

comment by M. Y. Zuo · 2023-06-22T16:03:17.847Z · LW(p) · GW(p)

It seems there might be some confusion around what counts as 'weird power dynamics' between you and the parent.

I would say that regardless of how weird the dynamics may appear from the outside, if the organization persists generation after generation, and even grows in influence, then it cannot be that weird in actuality.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2023-06-23T17:49:41.492Z · LW(p) · GW(p)

Can we taboo weird here? What are you trying to say about power dynamics that last a long time?

Mod edit by Raemon: I've locked a downstream thread, but copied Matt's last comment back up to this comment, which seemed to be trying to restate his question and get the conversation back on track:

Anyways, I can taboo the word "taboo" in order to get back to the object level question here:

What do you actually think is true about groups that last a long time and their practices that must be true, without using the word "weird"?

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-06-24T19:08:06.039Z · LW(p) · GW(p)

Can we taboo weird here?

I cannot taboo another LW user's word choices? 

To clarify if you are confused, I'm not 'habryka', nor am I a mod, nor has that user made any arrangements with me.

Replies from: habryka4
comment by habryka (habryka4) · 2023-06-24T19:38:15.577Z · LW(p) · GW(p)

(Asking to "taboo X" is a common request on LessWrong and the in-person rationality community, requesting to replace the specific word with an equivalent but usually more mechanistic definition for the rest of the conversation. See also: Rationalist Taboo [? · GW])

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-06-24T23:52:06.334Z · LW(p) · GW(p)

Which would make sense if this were my conversation, if I had first mentioned the word and then you responded. But it doesn't make sense to ask me when it's the other way around. I think 'Matt Goldenberg' must have gotten confused into thinking I was someone else.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2023-06-24T23:56:37.547Z · LW(p) · GW(p)

No, I was specifically confused about your use of it, and your understanding of the OP.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-06-25T00:01:35.357Z · LW(p) · GW(p)

No, I was specifically confused about your use of it, and your understanding of the OP.

To 'Taboo' a word implies intentionally avoiding its use in the subsequent replies. It doesn't make sense to ask me to prevent 'habryka' from using a certain word in the future, because I don't possess the authority to force 'habryka' to do anything or not do anything.

Are you confused about what 'taboo' means?

Replies from: gjm
comment by gjm · 2023-06-25T00:12:10.659Z · LW(p) · GW(p)

In this context I don't think it does mean "prevent it being used in subsequent replies", it means "please rephrase that thing you just said but without using that specific word".

You said (I paraphrase): if an organization prospers in the longish term, then its power dynamics can't really be very weird even if they look like it. Matt doesn't see how that follows and suspects that either he isn't understanding what you mean by "weird" or else you're using it in a confused way somehow. He thinks that if either of those is true, it'll be helpful if you try to be more explicit about exactly what property of an organization you're saying is inconsistent with its prospering for generations.

None of that requires you to stop other people using the word "weird" -- it's enough if you stop using it -- though if you make the effort Matt's suggesting and it seems helpful then habryka and/or romeostevensit might choose to follow suit, since you've suggested that they might be miscommunicating because of different unstated meanings of "weird".

(I am to some extent guessing what Matt thinks and wants, but at the very least the foregoing is a possible thing he might be saying, that makes sense of his request that you taboo "weird" without any implication that you're supposed to stop other people using it.)

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-06-25T00:39:27.979Z · LW(p) · GW(p)

In this context I don't think it does mean "prevent it being used in subsequent replies", it means "please rephrase that thing you just said but without using that specific word".

I said 'implies' because it's quite possible for different LW users to understand the meaning of any specific word in varying ways. And trying to force everyone to adhere to a specific dictionary is not an established norm. So the differing meanings of any word cannot be a reliable reflection.

Hence, pointing to the implications for practical activities on LW, such as writing comments, is a far more useful, and established, norm.

In that sense I cannot find any example of anyone in the past 16 years using "Can we taboo X here?", or any slight variation, to imply that rephrasing in different words applies only to the comments of Y user(s), whereas Z users are free to continue its use in the same conversation.

If you can find at least 3 counterexamples out of the hundreds or thousands (?) of instances that exist then I'd be glad to change my views.

Replies from: gjm, mr-hire
comment by gjm · 2023-06-25T13:52:18.553Z · LW(p) · GW(p)

There's some irony in the fact that right now we are having a discussion of the meaning of the term "taboo" when it's already become clear what Matt meant and that it doesn't involve the implications you are saying that the word "taboo" has.

As for your latest isolated demand for rigour: Matt has already pointed to the first three instances he found, all of which he considers counterexamples. I looked specifically for "can we taboo" and found a total of four examples ever, not including this thread right here.

I think the usual meaning of "Can we taboo X?" depends on context. If there's already a discussion going on in which multiple people are saying X, it means "you're all getting yourselves tied in knots by inconsistent meanings of X; you should all stop". If it's replying to a single person who's said X, it means "you are using X confusingly and I would like you to stop". Sometimes they would also like everyone else not to use X in future, but so far as I can see there is never a suggestion that the person being addressed ought to be trying to stop others using the term X.

I can't imagine how "Can we taboo X" could possibly mean "I wish you specifically to be held responsible for ensuring that no one else says X in future". That isn't how words work. Nothing Matt has said in this thread, so far as I can see, even slightly suggests that Matt thinks you ought to be stopping other people using the word "weird", or that explaining what you wrote without using that word would impose any obligation on you to do that, or anything of the kind. I am baffled by all your comments that seem to take for granted that we can all see that he's trying to lay any such obligation on you.

Having said all which: although I don't understand the objections you're making to what Matt said, there's a pretty reasonable objection to be made to it, and maybe it's actually what you're saying and I'm misunderstanding you, so I'll state it in case that's so and Matt is misunderstanding too:

MYZ's original comment was itself pointing out a possible misunderstanding between habryka and romeostevensit, centred on that very term "weird power dynamics". So, while it might be helpful for MYZ to restate his claim about successful organizations necessarily not being very weird in terms that avoid the word "weird", what he necessarily can't do is to restate his challenge to habryka and/or romeostevensit without using that word -- because his challenge is exactly about how those two people are using the word.

So, getting back to the original discussion:

romeostevensit: "Weird power dynamics" is rather unspecific about what sort of power dynamics you're talking about. Can you identify a common thread that identifies how they're unhealthy rather than focusing on how they're unusual?

habryka: It is not at all clear to me (as I think it isn't to MYZ, hence his challenge, maybe) that the power dynamics in organizations like Microsoft or the Roman Catholic Church are all that similar to the ones found in cultish religious groups or disastrously pathological rationalist ones, that romeostevensit is talking about.

MYZ: Is it really weirdness in the power dynamics that destabilizes organizations and stops them persisting and growing in influence? I agree that some kinds of pathology are incompatible with that -- and so does habryka, looking at what he said about weirdness in those organizations having been "selected heavily to produce a stable configuration" -- but if habryka's view is that some things are weird but not destabilizing and yours is that if something isn't destabilizing then we shouldn't call it weird, why should we go with habryka's view rather than yours?

I think this whole thing would in fact go better if everyone could describe the types of power dynamics they have in mind with terms more specific than "weird", but (for the avoidance of doubt) don't think that anyone here has or should have the authority to force anyone else to do so.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-06-26T03:54:34.991Z · LW(p) · GW(p)

I'm assuming you read the first paragraph of the previous comment so I'm not sure what to make of this:

There's some irony in the fact that right now we are having a discussion of the meaning of the term "taboo" when it's already become clear what Matt meant and that it doesn't involve the implications you are saying that the word "taboo" has.

There is no one having a "discussion of the meaning of the term 'taboo'" with you. It's unclear how you got this notion after the previous comment, which pointed in the opposite direction.

It might not have been worded perfectly, so  if you are confused as to the rationale, I'll write it out explicitly:

Discussing anyone's personal opinions regarding differing meanings for a given word so far down the comment chain is unproductive, for straightforward practical reasons.

It would even be difficult to have such a discussion with the parent, and would still probably need to refer to well established dictionary entries, let alone with new interlocutors joining in  so much later.

Replies from: gjm
comment by gjm · 2023-06-26T11:11:15.550Z · LW(p) · GW(p)

I honestly don't understand the argument in your first few paragraphs there, at all. But whether I'm being dim or you're being unclear or whatever, it doesn't really matter, because it seems we all agree that it would be more productive to get back to the actual discussion.

So how about we do that?

Both of my comments here so far contained (1) some discussion of the term "taboo" and (2) some discussion of the actual underlying thing that Matt was asking you to clarify. In both cases you have responded to 1 and ignored 2. Let's do 2. I suggest starting with the question at the end of Matt's latest comment.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-06-26T17:29:10.480Z · LW(p) · GW(p)

I honestly don't understand the argument in your first few paragraphs there, at all. But whether I'm being dim or you're being unclear or whatever, it doesn't really matter, because it seems we all agree that it would be more productive to get back to the actual discussion.

So how about we do that?

As far as I can tell, you joined in with the comment [LW(p) · GW(p)] on June 24, 8:12 pm EDT. I've only interacted with you twice, regarding the claim:

 In this context I don't think it does mean "prevent it being used in subsequent replies", it means "please rephrase that thing you just said but without using that specific word".

...that was written by 'gjm', not 'M. Y. Zuo'. My subsequent reply spelled out why I did not want to engage in such a discussion over meanings.

So do you now understand why I could not have been engaging in "a discussion of the meaning of the term 'taboo'" with you?


Anyways, the entirety of the "actual discussion" I've had with you is the two prior replies. So there is nothing to "get back to" in regards to (2). 

If your intention is to speak on behalf of 'Matt Goldenberg' or pick up where he left off, then you should ask him, since he still seems willing to engage with me on the same topic.

Additionally, I've partially gone through my comment history while writing this and I'm fairly confident I've never even posed a question towards you before the first reply [LW(p) · GW(p)], let alone the "latest isolated demand for rigour". Can you link to where it happened?

EDIT: Since gjm hasn't supplied any evidence of me ever making such prior demands on him, or  'Matt Goldenberg', or 'habryka', etc..., I would have to conclude it's a totally fabricated claim.

Replies from: Raemon, gjm
comment by Raemon · 2023-06-26T22:48:21.207Z · LW(p) · GW(p)

Hey M.Y Zuo, I'm commenting with my mod hat on.

I've noticed a few places over the past year where you seemed to be missing the point of a conversation, in a way that's distracting/offtopic. Each individual time didn't quite feel like a big enough deal to warrant stepping in as a moderator, but it's adding up to a point where I think something needs to change. 

For the immediate future I'm just letting auto-rate-limits handle the situation, but I may escalate to a longer term rate limit if it continues to be a problem.

Some concrete asks:

  • On the object level of this conversation, "can you taboo word X [LW · GW]" is a pretty standard LessWrong request you should be able to respond to (or, if you don't feel like it, just say "I don't feel like getting into it". Having an elaborate meta conversation about not doing it feels like the least useful use of everyone's time).
  • Try to shift back to the object level conversation sooner. In this case you're still debating whether Taboo is a reasonable thing to do when Matt's already restated his original question. [LW(p) · GW(p)] i.e. what do you (M. Y. Zuo) mean by "I would say that regardless of how weird the dynamics may appear from the outside, if the organization persists generation after generation, and even grows in influence, then it cannot be that weird in actuality.", without using the word "weird."

    I'd ask you either actually respond to that, or drop the topic. I'm locking the rest of the thread. (People who want to continue discussing this at the meta level can do so over on the Open Thread [LW · GW])

A slightly less concrete ask is "please invest a bit more in understanding where people are coming from, and trying to generally learn the norms on the forum."

comment by gjm · 2023-06-26T18:32:08.024Z · LW(p) · GW(p)

It seems like your assumptions about conversational norms here are very different from mine. E.g., you seem to be thinking of this as a two-person conversation -- just me and you -- where nothing outside it can be relevant. That's not how I think forum discussions work.

It doesn't seem as if any further response from me to you will be helpful at this time.

comment by Matt Goldenberg (mr-hire) · 2023-06-25T03:16:10.786Z · LW(p) · GW(p)

I think this is quite off topic; I was just interested in what you meant.

The first 3 instances I found in search all seem to be suggesting a specific person taboo something to clarify their meaning

https://www.lesswrong.com/posts/7LnwkPdRT67ybhFzo/subjective-realities#sv9jXE5S76sovEwt3 [LW(p) · GW(p)]

https://www.lesswrong.com/posts/QvYKSFmsBX3QhgQvF/morality-isn-t-logical#ENYAvvLJq3qkxo8Ak [LW(p) · GW(p)]

https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline#Y54j8fxZEjbpMWBvJ [LW(p) · GW(p)]

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-06-25T13:11:13.848Z · LW(p) · GW(p)

The first 3 instances I found in search all seem to be suggesting a specific person taboo something to clarify their meaning

https://www.lesswrong.com/posts/7LnwkPdRT67ybhFzo/subjective-realities#sv9jXE5S76sovEwt3 [LW(p) · GW(p)]

https://www.lesswrong.com/posts/QvYKSFmsBX3QhgQvF/morality-isn-t-logical#ENYAvvLJq3qkxo8Ak [LW(p) · GW(p)]

https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline#Y54j8fxZEjbpMWBvJ [LW(p) · GW(p)]

 

Did you paste the correct links? 

For example, the first link is of 'Vladimir_Nesov' writing a single, stand-alone comment that is not about what is being discussed here. Nor was he asking the OP about tabooing a word. 

"Vladimir_Nesov [LW · GW]12y [LW(p) · GW(p)]24

Taboo "exists". Does the physical world contain things you don't see? Also, lack of absolute certainty doesn't imply confidence in absence, one shouldn't demand unavailable kind of proof and take its absence as evidence."

If you are still confused about what is being discussed, or confused as to how to use the search tool, then I would suggest taking some time to reflect. As I'm unsure how to spell things out even more explicitly and directly.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2023-06-25T15:28:12.405Z · LW(p) · GW(p)

He was asking the other commenter to taboo the word "exists", and trying to get at the mechanistic interpretation in the second sentence - does it mean that the physical world contains things you don't see?

I was asking you (the commenter) to taboo the word "weird" and asking a similar clarifying question - what do you actually think is true about groups that last a long time and their practices, without using the word weird.

It feels fairly isomorphic to me.


Anyways, I can taboo the word "taboo" in order to get back to the object level question here:

What do you actually think is true about groups that last a long time and their practices that must be true, without using the word "weird"?

comment by johnswentworth · 2023-06-21T17:52:41.895Z · LW(p) · GW(p)

My default model before reading this post was: some people are very predisposed to craziness spirals. They're behaviorally well-described as "looking for something to go crazy about", not necessarily in a reflectively-endorsed sense, but in the sense that whenever they stumble across something about which one could go crazy (like e.g. lots of woo-stuff), they'll tend to go into a spiral around it.

"AI is likely to kill us all" is definitely a thing in response to which one can fall into a spiral-of-craziness, so we naturally end up "attracting" a bunch of people who are behaviorally well-described as "looking for something to go crazy about". (In terms of pattern matching, the most extreme examples tend to be the sorts of people who also get into quantum suicide, various flavors of woo, poorly executed anthropic arguments, poorly executed acausal trade arguments, etc.)

Other people will respond to basically-the-same stimuli by just... choosing to not go crazy (to borrow a phrase from Nate). They'll see the same "AI is likely to kill us all" argument and respond by doing something useful, or just ignoring it, or doing something useless but symbolic and not thinking too hard about it. But they won't panic and then selectively engage with things which amplify their own panic (i.e. craziness-spiral behavior).

On that model, insofar as EAs and rationalists sometimes turn crazy, it's mostly a selection effect. "AI is likely to kill us all" is a kind of metaphorical flypaper for people predisposed to craziness spirals.

After reading the post... the main place where the OP's model and my previous default model conflict is in the extent to which craziness is determined by intrinsic characteristics vs environment. Not yet sure how to resolve that.

Replies from: Benito, habryka4
comment by Ben Pace (Benito) · 2023-06-21T18:42:17.624Z · LW(p) · GW(p)

My default model had been "a large cluster of the people who are able to use their reasoning to actually get involved in the plot of humanity, have overridden many Schelling fences and absurdity heuristics and similar, and so are using their reasoning to make momentous choices, and just weren't strong enough not to get some of it terribly wrong". Similar to the model from reason as memetic immune disorder [LW · GW].

comment by habryka (habryka4) · 2023-06-21T18:02:49.992Z · LW(p) · GW(p)

I don't think Sam believed that AI was likely to kill that many people, or if it did, that it would be that bad (since the AI might also have conscious experiences that are just as valuable as the human ones). I also think Leverage didn't really have much of an AI component. I think the LaSota crew maybe has a bit more of that, but I also feel like none of their beliefs are very load-bearing on AI, so I feel like this model doesn't predict reality super well.

Replies from: evhub
comment by evhub · 2023-06-21T23:30:12.776Z · LW(p) · GW(p)

I don't think Sam believed that AI was likely to kill that many people, or if it did, that it would be that bad

I think he at least pretended to believe this, no? I heard him say approximately this when I attended a talk/Q&A with him once.

Replies from: habryka4
comment by habryka (habryka4) · 2023-06-22T00:23:41.263Z · LW(p) · GW(p)

Huh, I remember talking to him about this, and my sense was that he thought the counterfactual of unaligned AI compared to the counterfactual of whatever humanity would do instead, was relatively small (compared to someone with a utilitarian mindset deciding on the future), though also of course that there were some broader game-theoretic considerations that make it valuable to coordinate with humanity more broadly. 

Separately, his probability on AI Risk seemed relatively low, though I don't remember any specific probability. Looking at the future fund worldview prize [EA · GW], I do see 15% as the position that at least the Future Fund endorsed, conditional on AI happening by 2070 (which I think Sam thought was plausible but not that likely), which is a good amount, so I think I must be misremembering at least something here.

comment by Ben Pace (Benito) · 2023-06-21T06:56:00.386Z · LW(p) · GW(p)

I also think this fits the FTX situation quite well. My current best model of what happened at an individual psychological level was many people being attracted to FTX/Alameda because of the potential resources, then many rounds of evaporative cooling as anyone who was not extremely hardcore according to the group standard was kicked out, with there being a constant sense of insecurity for everyone involved that came from the frequent purges of people who seemed to not be on board with the group standard.

While a lot of this post fits with my model of the world (the threat of exile is something I can viscerally feel change what my beliefs are), the FTX part as-written is sufficiently non-concrete to me that I can't tell if it fits or doesn't fit with reality.

Things I currently believe about FTX/Alameda (including from off-the-internet information):

  • There was a fair amount of lying to investors from the start.
  • From the start it was very chaotic, with terrible security practices and most employees not knowing the net balance of the company within like a factor of 4x, or whether net worth was increasing or decreasing from week to week.
  • Massive amounts of legal risk constantly being taken intentionally without much sense of the costs.
  • Most of the time making money by being crypto-long, or engaging in other deceptive practices, not by being clever.
  • There were lots of bits of unrelated unethical behavior from SBF.

Do you have examples of the evaporative cooling / purges and an explanation for how they fit into your model?

I know at Alameda many of the people I think of as "sensible about risk" or at least "people who are pretty unlikely to do crazy sh*t" left either early or in a massive staff-quit at one point, but I don't have much insight into why they left, and people leave for all sorts of conflicts that aren't about 'fitting in', but due to disagreements about strategy or because their boss is unfair/incompetent or for reasons better explained by lots of other models.

Replies from: Douglas_Knight, Jonas Vollmer
comment by Douglas_Knight · 2023-06-21T17:57:41.520Z · LW(p) · GW(p)

Yeah, FTX seems like a totally ordinary financial crime. You don't need utilitarianism or risk neutrality to steal customer money or take massive risks.

LaSota and Leverage said that they had high standards and were doing difficult things, whereas SBF said that he was doing the obvious things a little faster, a little more devoted to EV.

comment by Jonas V (Jonas Vollmer) · 2023-06-22T17:36:18.791Z · LW(p) · GW(p)

I think SBF rarely ever fired anyone, so "kicked out" seems wrong, but I heard that people who weren't behaving in the way SBF liked (e.g., recklessly risk-taking) got sidelined and often left on their own because their jobs became unpleasant or they had ethical qualms, which would be consistent with evaporative cooling.

Replies from: habryka4
comment by habryka (habryka4) · 2023-06-22T22:12:11.853Z · LW(p) · GW(p)

Huh, this doesn't match with stories that I heard. Maybe there wasn't much formal firing, but my sense is many people definitely felt like they were fired, or pushed out of the group. 

Separately from the firing, the consistent thing that I have heard is that at FTX there was a small inner circle consisting of between 5 and 15 people. It was usually pretty clear who was in there, though there were always 2-3 people who were kind of ambiguously entering it or being pushed out, and being out of the inner circle would mean you lost most of the power over the associated organization and ecosystem.

comment by iceman · 2023-06-21T19:35:31.599Z · LW(p) · GW(p)

I suggest a more straightforward model: taking ideas seriously isn't healthy. Most of the attempts to paint SBF as not really an EA seem like weird reputational saving throws, given that he was around very early on and had rather deep conviction in things like the St. Petersburg Paradox...which seems like a large part of what destroyed FTX. And Ziz seemed to be one of the few people to take the decision-theoretic "you should always act as if you're being simulated to see what sort of decision agent you are" idea seriously...and followed that to their downfall. I read the Sequences, was convinced by the arguments within, donated a six-figure sum to MIRI...and have basically nothing to show for it, at pretty serious opportunity cost. (And that's before considering Ziz's pretty interesting claims about how MIRI spent donor money.)

In all of these cases, the problem was individual confidence in ideas, not social effects.

My model is instead that the sort of people who are there to fit in aren't the people who go crazy; there are plenty of people in the pews who are there for the church but not the religion. The MOPs and Sociopaths seem to be much, much saner than the Geeks. If that's right, rationality has something much more fundamentally wrong with it.

As a final note, looking back at how AI actually developed, it's pretty striking that there aren't really maximizing AIs out there. Does a LLM take ideas seriously? Do they have anything that we'd recognize as a 'utility function'? It doesn't look like it, but we were promised that the AIs were a danger because they would learn about the world and would then take their ideas about what would happen if they did X vs Y to minmax some objective function. But errors compound.

Replies from: lc
comment by lc · 2023-06-21T23:18:39.111Z · LW(p) · GW(p)

What made Charles Manson's cult crazy in the eyes of the rest of society was not that they (allegedly) believed that a race war was inevitable, and that white people needed to prepare for it & be the ones that struck first. Many people throughout history who we tend to think of as "sane" have evangelized similar doctrines or agitated in favor of them. What made them "crazy" was how nonsensical their actions were even granted their premises, i.e. the decision to kill a bunch of prominent white people as a "false flag".

Likewise, you can see how LaSota's "surface" doctrine sort of makes sense, I guess. It would be terrible if we made an AI that only cared about humans and not animals or aliens, and that led to astronomical suffering. The Nuremberg trials were a good idea, probably for reasons that have their roots in acausally blackmailing people not to commit genocide. If the only things I knew about the Zizcult were that they believed we should punish evildoers, and that factory farms were evil, I wouldn't call them crazy. But then they go and (allegedly) waste Jamie Zajko's parents in a manner that doesn't further their stated goals at all and makes no tactical sense to anyone thinking coherently about their situation. Ditto for FTX, which, when one business failed, decided to commit multi-billion dollar fraud via their other, actually successful business, instead of just shutting down Alameda and hoping that the lenders wouldn't be able to repo too much of the exchange.

If instead of supposing that these behaviors were motivated by "belief", we suppose they're primarily socially motivated behaviors - in LaSota's case, for deepening her ties with her followers and her status over them as a leader, for ultimately the same reasons all gang leaders become gang leaders; in FTX's case, for maintaining the FTX team's public image as wildly successful altruists - that seems like it actually tracks. The crazy behaviors were, in theory and in practice, absurdly counterproductive, ideologically speaking. But status anxiety is a hell of a drug.

Replies from: iceman
comment by iceman · 2023-06-22T01:58:13.925Z · LW(p) · GW(p)

But then they go and (allegedly) waste Jamie Zajko's parents in a manner that doesn't further their stated goals at all and makes no tactical sense to anyone thinking coherently about their situation.

And yet that seems entirely in line with the "Collapse the Timeline" line of thinking that Ziz advocated.

Ditto for FTX, which, when one business failed, decided to commit multi-billion dollar fraud via their other, actually successful business, instead of just shutting down Alameda and hoping that the lenders wouldn't be able to repo too much of the exchange.

And yet, that seems like the correct action if you sufficiently bullet-bite expected value and the St. Petersburg Paradox, which SBF did repeatedly in interviews.

Replies from: lc
comment by lc · 2023-06-22T02:37:09.778Z · LW(p) · GW(p)

And yet, that seems like the correct action if you sufficiently bullet-bite expected value and the St. Petersburg Paradox, which SBF did repeatedly in interviews.

I am not making an argument that the crime was +EV but SBF was dealt a bad hand. Turning your entire business into the second-largest Ponzi scheme ever in order to save the smaller half was pretty clearly stupid in EV terms, and ran an overwhelming chance of failure. There is no EV calculus where the SBF decision is a good one, except maybe one in which he ignores externalities to EA and is simply trying to support his status, and even then I hardly understand it.

And yet that seems entirely in line with the "Collapse the Timeline" line of thinking that Ziz advocated.

Right, it is possible that something like this was what they told themselves, but it's bananas. Imagine you're Ziz. You believe the entire lightcone is at risk of becoming a torture zone for animals at the behest of Sam Altman and Demis Hassabis. This threat is foundational to your worldview and is the premier casus belli for action. Instead of doing anything about that, you completely ignore this problem to go on the side quest of enacting retributive justice against Jamie's parents. What kind of acausal reasoning could possibly motivate you to assume this level of risk of being completely wiped out as a group, for an objective so small?!

Scratch that. Imagine you believe you're destined to reduce any normal plebeian historical tragedies via Ziz acausal technobabble, through an expected ~2 opportunities to retroactively kill anybody in the world, and this is the strategy you must take. You've just succeeded with the IMO really silly instrumental objective of amassing a group of people willing to help you with this. Then you say - pass on the Pancasila Youth, pass on the Sinaloa Cartel, pass on anybody in the federal government, I need to murder Jamie's parents.

Replies from: iceman, ChristianKl
comment by iceman · 2023-06-22T03:05:11.308Z · LW(p) · GW(p)

My understanding of your point is that Manson was crazy because his plans didn't follow from his premise and had nothing to do with his core ideas. I agree, but I do not think that's relevant.

I am pushing back because, if you are St. Petersburg Paradox-pilled like SBF and make public statements that actually you should keep taking double-or-nothing bets, perhaps you are more likely to make tragic betting decisions, and that's because you're taking certain ideas seriously. If you have galaxy-brained the idea of the St. Petersburg Paradox, it seems like Alameda-style fraud is +EV.

I am pushing back because, if you believe that you are constantly being simulated to see what sort of decision agent you are, you are going to react extremely to every slight and that's because you're taking certain ideas seriously. If you have galaxy brained the idea that you're being simulated to see how you react, killing Jamie's parents isn't even really killing Jamie's parents, it's showing what sort of decision agent you are to your simulators.

In both cases, they did X because they believe Y which implies X seems like a more parsimonious explanation for their behaviour.

(To be clear: I endorse neither of these ideas, even if I was previously positive on MIRI style decision theory research.)

Replies from: dxu
comment by dxu · 2023-06-22T21:26:58.930Z · LW(p) · GW(p)

I am pushing back because, if you are St. Petersburg Paradox-pilled like SBF and make public statements that actually you should keep taking double-or-nothing bets, perhaps you are more likely to make tragic betting decisions, and that's because you're taking certain ideas seriously. If you have galaxy-brained the idea of the St. Petersburg Paradox, it seems like Alameda-style fraud is +EV.

This is conceding a big part of your argument. You’re basically saying, yes, SBF’s decision was -EV according to any normal analysis, but according to a particular incorrect (“galaxy-brained”) analysis, it was +EV.

(Aside: what was actually the galaxy-brained analysis that’s supposed to have led to SBF’s conclusion, according to you? I don’t think I’ve seen it described, and I suspect this lack of a description is not a coincidence; see below.)

There are many reasons someone might make an error of judgement—but when the error in question stems (allegedly) from an incorrect application of a particular theory or idea, it makes no sense to attribute responsibility for the error to the theory. And as the mistake in question grows more and more outlandish (and more and more disconnected from any result the theory could plausibly have produced), the degree of responsibility that can plausibly be attributed to the theory correspondingly shrinks (while the degree of responsibility of specific brain-worms grows).

In other words,

they did X because they believe Y which implies X

is a misdescription of what happened in these cases, because in these cases the “Y” in question actually does not imply X, cannot reasonably be construed to imply X, and if somehow the individuals in question managed to bamboozle themselves badly enough to think Y implied X, that signifies unrelated (and causally prior) weirdness going on in their brains which is not explained by belief in Y.

In short: SBF is no more an indictment of expected utility theory (or of “taking ideas seriously”) than Deepak Chopra is of quantum mechanics; ditto Ziz and her corrupted brand of “timeless decision theory”. The only reason one would use these examples to argue against “taking ideas seriously” is if one already believed that “taking ideas seriously” was bad for some reason or other, and was looking for ways to affirm that belief.

Replies from: Zack_M_Davis, sharmake-farah
comment by Zack_M_Davis · 2023-06-22T23:25:30.459Z · LW(p) · GW(p)

If people inevitably sometimes make mistakes when interpreting theories, and theory-driven mistakes are more likely to be catastrophic than the mistakes people make when acting according to "atheoretical" learning from experience and imitation, then unusually theory-driven people are more likely to make catastrophic mistakes. In the absence of a way to prevent people from sometimes making mistakes when interpreting theories, this seems like a pretty strong argument in favor of atheoretical learning from experience and imitation!

This is particularly pertinent if, in a lot of cases where more sober theorists tend to say, "Well, the true theory wouldn't have recommended that," the reason the sober theorists believe that is because they expect true theories to not wildly contradict the wisdom of atheoretical learning from experience and imitation, rather than because they've personally pinpointed the error in the interpretation.

("But I don't need to know the answer. I just recite to myself, over and over, until I can choose sleep: It all adds up to normality. [? · GW]")

And that's even if there is an error. A reckless financier who accepts an 89% chance of losing it all for an 11% chance of dectupling their empire would be rational if they truly had linear utility for money. (Even while sober people with sublinear utility functions shake their heads at the allegedly foolish spectacle of the bankruptcy in 89% of possible worlds.)
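
To make the arithmetic behind this hypothetical explicit, here is a minimal LaTeX sketch. The 89%/11%/dectupling numbers are the ones from the paragraph above; treating "sublinear utility" as log utility, with a small floor ε standing in for "losing it all", is an illustrative assumption of mine rather than anything specified in the comment.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Expected utility of the 11%-dectuple / 89%-bust gamble, starting from wealth W.
With current wealth $W$, the gamble pays $10W$ with probability $0.11$ and
$\varepsilon \approx 0$ with probability $0.89$:
\begin{align*}
  E[u_{\text{linear}}] &= 0.11 \cdot 10W + 0.89 \cdot \varepsilon \approx 1.1W > W
    && \text{(take the bet)} \\
  E[u_{\log}] &= 0.11 \log(10W) + 0.89 \log\varepsilon \to -\infty
    \text{ as } \varepsilon \to 0
    && \text{(refuse the bet)}
\end{align*}
\end{document}
```

The same gamble flips sign depending on the utility function, which is why whether this counts as an "error" turns largely on whether linear utility in money was ever a reasonable assumption.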

Replies from: clone of saturn, philh, sharmake-farah
comment by clone of saturn · 2023-06-25T19:39:57.857Z · LW(p) · GW(p)

I think the causality runs the other way though; people who are crazy and grandiose are likely to come up with spurious theories to justify actions they wanted to take anyway. Experience and imitation shows us that non-crazy people successfully use theories to do non-crazy things all the time, so much so that you probably take it for granted.

comment by philh · 2023-06-24T20:59:00.596Z · LW(p) · GW(p)

And that’s even if there is an error. A reckless financier who accepts an 89% chance of losing it all for an 11% chance of dectupling their empire would be rational if they truly had linear utility for money.

But of course no human financier has a utility function, let alone one that can be expressed only in terms of money, let alone one that's linear in money. So in this hypothetical, yes, there is an error.

(SBF said his utility was linear in money. I think he probably wasn't confused enough to think that was literally true, but I do think he was confused about the math [LW · GW].)

comment by Noosphere89 (sharmake-farah) · 2023-06-22T23:37:38.312Z · LW(p) · GW(p)

And that's even if there is an error. A reckless financier who accepts an 89% chance of losing it all for an 11% chance of dectupling their empire would be rational if they truly had linear utility for money. (Even while sober people with sublinear utility functions shake their heads at the allegedly foolish spectacle of the bankruptcy in 89% of possible worlds.)

This is related to a very important point: without more assumptions, there is no way to distinguish, via outcomes alone, between two cases: irrationality in pursuit of your values, and rationality in pursuit of very different or strange values.

(Also, I dislike the implication that it all adds up to normality, unless something else is meant or the claim is trivial, since you can't define normality without a context.)

comment by Noosphere89 (sharmake-farah) · 2023-06-22T22:12:18.156Z · LW(p) · GW(p)

There are many reasons someone might make an error of judgement—but when the error in question stems (allegedly) from an incorrect application of a particular theory or idea, it makes no sense to attribute responsibility for the error to the theory.

Eh, I'm a little concerned in general, because this, without restrictions, could be used to redirect blame away from a theory even in cases where the implementation of the theory is evidence against the theory.

The best example is historical non-capitalist societies, especially communist ones: when responding to criticism, communists roughly said that those societies weren't truly communist, and thus communism could still work if it were truly implemented.

It's the clearest example I know of, but I'm sure there are others.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2023-06-22T23:46:47.839Z · LW(p) · GW(p)

If you have galaxy brained the idea of the St. Petersburg Paradox, it seems like Alameda style fraud is +EV.

I don't think so. At the very least, it seems debatable. Biting the bullet on the St Petersburg paradox doesn't mean taking negative-EV bets. House-of-cards stuff ~never turns out well in the long run, and the fallout from an implosion also grows as you double down. Everything that's coming to light about FTX indicates it was a total house of cards. It seems really unlikely to me that most of these bets were positive-EV even on fanatically risk-neutral, act-utilitarian grounds.

Maybe I'm biased because it's convenient to believe what I believe (that the instrumentally rational action is almost never "do something shady according to common-sense morality"). Let's say it's defensible to see things otherwise. Even then, I find it weird that because Sam had these views on St Petersburg stuff, people speak as though this explains everything about FTX epistemics: "That was excellent instrumental rationality we were seeing on display by FTX leadership, granted that they don't care about common-sense morality and bite the bullet on St Petersburg." At the very least, we should name and consider the other hypothesis, on which the St Petersburg views were more incidental (though admittedly still "characteristic"). On that other hypothesis, there's a specific type of psychology that makes people think they're invincible, which leads them to take negative-EV bets on any defensible interpretation of decision-making under uncertainty.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-06-22T23:48:52.701Z · LW(p) · GW(p)

Who were you responding to? I didn't make the argument that you're responding to.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2023-06-22T23:54:11.269Z · LW(p) · GW(p)

Oh, I was replying to Iceman – mostly this part that I quoted:  

If you have galaxy brained the idea of the St. Petersburg Paradox, it seems like Alameda style fraud is +EV.

(I think I've seen similar takes by other posters in the past.)

I should have mentioned that I'm not replying to you. 

I think I took such a long break from LW that I forgot that you can make subthreads rather than just continue piling on at the end of a thread.

 

comment by ChristianKl · 2023-06-22T10:51:56.714Z · LW(p) · GW(p)

Instead of doing anything about that, you completely ignore this problem to go on the side quest of enacting retributive justice against Jamie's parents. 

It sounds to me like they thought that Jamie would inherit a significant amount of money if they did that. They might have done it not only for reasons of retributive justice but also to fund their whole operation.

comment by Kaj_Sotala · 2023-06-21T20:51:43.610Z · LW(p) · GW(p)

But if group members are insecure enough, or if there is some limited pool of resources to divide up that each member really wants for themselves, then each member experiences a strong pressure to signal their devotion harder and harder, often burning substantial personal resources.

To add to this: if the group leaders seem anxious or distressed, then one of the ways in which people may signal devotion is by also being anxious and distressed. This will then make everything worse - if you're anxious, you're likely to think poorly and fixate on what you think is wrong without necessarily being able to do any real problem-solving around it. It also causes motivated reasoning about how bad everything is, so that one could maintain that feeling of distress.

In various communities there's often a (sometimes implicit, sometimes explicit) notion of "if you're not freaked out by what's happening, you're not taking things seriously enough". E.g. to take an example from EA/rationalist circles, this lukeprog post, while not quite explicitly saying that, reads to me as coming close (I believe that Luke only meant to say that it's good for people to take action, but the way it's phrased, it implies that you need to feel upset to take any action):

Over the years, my colleagues and I have spoken to many machine learning researchers who, perhaps after some discussion and argument, claim to think there’s a moderate chance — a 5%, or 15%, or even a 40% chance — that AI systems will destroy human civilization in the next few decades. However, I often detect what Bryan Caplan has called a “missing mood”; a mood they would predictably exhibit if they really thought such a dire future was plausible, but which they don’t seem to exhibit. In many cases, the researcher who claims to think that medium-term existential catastrophe from AI is plausible doesn’t seem too upset or worried or sad about it, and doesn’t seem to be taking any specific actions as a result.

Not so with Elon Musk. Consider his reaction (here and here) when podcaster Joe Rogan asks about his AI doomsaying. Musk stares at the table, and takes a deep breath. He looks sad. Dejected. Fatalistic. [...]

Moreover, I believe Musk when he says that his ultimate purpose for founding Neuralink is to avert an AI catastrophe: “If you can’t beat it, join it.” Personally, I’m not optimistic that brain-computer interfaces can avert AI catastrophe — for roughly the reasons outlined in the BCIs section of Superintelligence ch. 2 — but Musk came to a different assessment, and I’m glad he’s trying.

Whatever my disagreements with Musk (I have plenty), it looks to me like Musk doesn’t just profess concern about AI existential risk. I think he feels it in his bones, when he wakes up in the morning, and he’s spending a significant fraction of his time and capital to try to do something about it. And for that I am grateful.

As a separate consideration, if you consider someone an authority, then you're going to (explicitly and implicitly) trust their assessments of the world to be at least somewhat accurate. So even if you didn't experience social pressure to be distressed yourself, just the fact that they were distressed due to something you considered them an authority on (e.g. Eliezer on AI risk) suggests you might pick up some of their distress.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-06-25T07:10:45.144Z · LW(p) · GW(p)

In various communities there's often a (sometimes implicit, sometimes explicit) notion of "if you're not freaked out by what's happening, you're not taking things seriously enough".

Do you have an example of this from other communities? I am not quickly thinking of other examples (I think corrupt leaders often try to give vibes of being calm and in control and powerful, not being anxious and worried). 

And furthermore I basically buy the claim that if you're not freaked out by our civilization then you don't understand it.

From my current vantage point I agree that people will imitate the vibe of the leadership, but I feel like you're saying "and the particular vibe of anxiousness is common for common psychological reasons", and I don't know why you think that or what psychological reasons you have in mind.

Replies from: ricraz, Kaj_Sotala
comment by Richard_Ngo (ricraz) · 2023-06-25T16:03:11.918Z · LW(p) · GW(p)

And furthermore I basically buy the claim that if you're not freaked out by our civilization then you don't understand it.

There's probably a version of this sentence that I'd be sympathetic to (e.g. maybe "almost everyone's emotional security relies on implicit assumptions about how competent civilization is, which are false"). But in general I am pretty opposed to claims which imply that there is one correct emotional reaction to understanding a given situation. I think it's an important component of rationality to notice when judgments smuggle in implicit standards [LW · GW] (as per my recent post), which this is an example of.

Having said that, it's also an important component of rationality to not reason your way out of ever being freaked out. If the audience reading this weren't LWers, then I probably wouldn't have bothered pushing back, since I think something like my rephrasing above is true for many people, which implies that a better understanding would make them freak out more. But I think that LWers in particular are more often making the opposite mistake, of assuming that there's one correct emotional reaction.

Replies from: Benito, sharmake-farah
comment by Ben Pace (Benito) · 2023-06-25T16:51:28.884Z · LW(p) · GW(p)

Your suggested sentence is basically what I had in mind.

comment by Noosphere89 (sharmake-farah) · 2023-06-26T01:55:13.000Z · LW(p) · GW(p)

Having said that, it's also an important component of rationality to not reason your way out of ever being freaked out.

Sorry, I'm getting confused and I don't understand this sentence. Are you literally saying that you can't reason your way out of being afraid? Because that would be a terrible guideline, for many reasons.

comment by Kaj_Sotala · 2023-06-25T07:18:50.451Z · LW(p) · GW(p)

I can't think of specific quotes offhand, but I feel like I've caught that kind of a vibe from some social justice and climate change people/conversations. E.g. I recall getting backlash for suggesting that climate change might not be an extinction risk.

comment by MSRayne · 2023-06-21T11:44:54.252Z · LW(p) · GW(p)

I feel like consequentialists are more likely to go crazy due to not being grounded in deontological or virtue-ethical norms of proper behavior. It's easy to think that if you're on track to saving the world, you should be able to do whatever is necessary, however heinous, to achieve that goal. I didn't learn to stop seeing people as objects until I leaned away from consequentialism and toward the anarchist principle of unity of means and ends (which is probably related to the categorical imperative). E.g. I want to live in a world where people are respected as individuals, so I have to respect them as individuals - whereas maximizing individual-respect might lead me to do all sorts of weird things to people now in return for some vague notion of helping lots more future people.

Replies from: Viliam
comment by Viliam · 2023-06-21T15:57:19.563Z · LW(p) · GW(p)

In consequentialism, if you make a conclusion consisting of a dozen steps, and one of those steps is wrong, the entire conclusion is wrong. It does not matter whether the remaining steps are right.

In theory, this could be fixed by assigning probabilities to individual steps, and then calculating the probability of the entire plan. But of course people usually don't do that. Otherwise they would notice that a plan with a dozen steps, even if they are 95% sure about each step individually, is not very reliable.
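(A minimal sketch of that calculation, assuming the steps are independent and taking the 95%-per-step figure above as an example:)

```python
# Illustrative sketch of the conjunctive-plan arithmetic above, assuming the
# steps are independent and using 95% per step as an example figure.
p_step = 0.95
n_steps = 12

p_plan = p_step ** n_steps
print(f"P(all {n_steps} steps hold) = {p_plan:.2f}")  # ~0.54: barely better than a coin flip
```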

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-06-21T16:15:42.933Z · LW(p) · GW(p)

Only if it's a conjunctive argument. If it's disjunctive, then only 1 step has to be right for the argument to go through.

As for the general conversation, I generally agree that consequentialism, especially in its more extreme varieties, leads to very weird conclusions, but I'd argue that a lot of other ethical theories taken to an extreme would also result in very bizarre conclusions.

comment by Nina Panickssery (NinaR) · 2023-06-25T22:46:14.343Z · LW(p) · GW(p)

I think drugs and non-standard lifestyle choices are a contributing factor. Messing with one's biology / ignoring the default lifestyle in your country to do something very non-standard is riskier and less likely to turn out well than many imagine.

comment by Viliam · 2023-06-22T12:04:36.195Z · LW(p) · GW(p)

Everyone, also read the comments on the EA Forum version; the top one [EA(p) · GW(p)] makes a great point:

while EA/rationalism is not a cult, it contains enough ingredients of a cult that it’s relatively easy for someone to go off and make their own.

To avoid derailing the debate towards the definition of cult etc., let me paraphrase it as:

EA/rationalism is not an evil project, but it is relatively easy for someone to start an evil project by recruiting within the EA/rationalist ecosystem. (As opposed to starting an evil project somewhere else.)

This is how.

EA/rationalism in general [...] lacks enforced conformity and control by a leader. [...]

However, what seems to have happened is that multiple people have taken these base ingredients and just added in the conformity and charismatic leader parts. You put these ingredients in a small company or a group house, put an unethical or mentally unwell leader in charge, and you have everything you need for an abusive [...] environment. [...] This seems to have happened multiple times already. 

I didn't spend much time thinking about it, but I suspect that the lack of "conformity and control" in EA/rationalism may actually be a weakness, from this perspective. Whenever a bad actor starts doing something "in the name of EA/rationality", there is no established mechanism for making it public knowledge that "no, you are not; this is your private project". Especially when the bad actor does it at a LW meetup, or makes it publicly known that they donate money... then it would seem quite silly to argue that they are not "an EA/rationality project".

(Compare it to other communities, which have more control and conformity, such as religion, where members clearly understand the difference between "this is just a personal opinion of someone who happens to be a Catholic" and "this is the official teaching of the Catholic church".)

Replies from: lahwran, dr_s
comment by the gears to ascension (lahwran) · 2023-06-27T15:20:07.810Z · LW(p) · GW(p)

I'd suggest that what's needed is a strong immune system against conformity and control which does not itself require conformity or control: a generalized resistance to agentic domination by another being, of any kind, and, as part of it, a habitual culture of joining up with others who are skilled at resisting pressure in response to an attempted pressure, without that joining-up creating the problems it attempts to prevent. I'm a fan of this essay on the topic, as well as other writing from that group.

comment by dr_s · 2023-06-23T05:14:00.504Z · LW(p) · GW(p)

I don't think the lack of a leadership is in itself the only issue here. I think the general framework is vulnerable to cultishness for a few more reasons. The first is that in general it encourages dallying with unconventional ideas, and eschewing the notion that if something sounds crazy to the average person then it must be wrong. In fact, there's plenty of "everyone else are the actually crazy ones" thinking, which might be necessary insofar as you want to try becoming more rational, but also means you now have fewer checks on your overall beliefs and behaviour. The other, related reason is the focus on utilitarianism as a philosophy, which creates even stronger conditions for specific beliefs like "doing crazy thing X is actually good as long as it's justified by projected good outcomes".

You mention religion, but Catholicism is pretty much the only one with such clear leadership. It helps, but it doesn't make Catholicism the only stable religion. IMO, with a strong leadership, rationalism as a whole would just be more likely to become a cult.

Replies from: Viliam
comment by Viliam · 2023-06-23T08:34:27.086Z · LW(p) · GW(p)

I agree it is not the only issue. I think it is a combination of ideas being genuinely dangerous, and no one having the authority to declare: "you are using those ideas wrong".

Plus the general "contrarian" attitude where the edgier opinions automatically give you higher status. So if someone hypothetically volunteered for the role of calling out wrong implementations of the dangerous ideas, they would probably be perceived as not smart/brave enough to appreciate them.

*

Thinking more about the analogies with religion... when people repeatedly propose something that is (in their perspective) an incorrect application of their ideas, they give it a name, declare it a heresy, and that makes it easier to deflect the problem in future by simply labeling it.

So perhaps the rationalist/EA alternative could be to maintain an online list of "ideas that are frequently attributed to, or associated with, rationalists / effective altruists, but we explicitly disapprove of them; here is a short summary why". Probably with a shorter title, maybe "frequent bad ideas". The next step would be to repeatedly tell new members about this list.

This has the potential to backfire, by making those bad ideas more visible. So we would be trading a certainty of exposure against a probability of explosion. On the conservative side, we could simply describe the things that have already happened (the ideas that were already followed, and it didn't end well).

Replies from: dr_s
comment by dr_s · 2023-06-23T12:22:12.038Z · LW(p) · GW(p)

Thinking more about the analogies with religion... when people repeatedly propose something that is (in their perspective) an incorrect application of their ideas, they give it a name, declare it a heresy, and that makes it easier to deflect the problem in future by simply labeling it.

Again, you seem to be focused on Catholicism. Catholicism is not the norm. Orthodox Christianity is maybe also kinda like that, but Protestant denominations are not, Islam is not, most of Buddhism is not, Hinduism is not, and don't even get me started on Shinto and other forms of animism. Most religions don't have a fixed canon; at most they have a community which might decide to strongly shun (or even straight up violently punish) anyone whose beliefs are so aberrant they might as well not belong to the same religion any more. But the boundary itself isn't well defined. If one wants to draw inspiration from religions (not that they are always the best example; Islam has given birth to plenty of violent and radical offshoots, for example, and exactly for the reason you bring up no one is quite in a position to call them out as wrong with unique authority), then you have to look at other things, at how they achieve collective cohesion even without central leadership, because central leadership is pretty much just the specific solution the Catholics came up with.

comment by tailcalled · 2023-06-21T08:05:53.495Z · LW(p) · GW(p)

People sometimes say that cult members tend to have conflicts that lead to them joining the cult. Recently I've been wondering if this is an underrated aspect of cultishness.

Let's take the LaSota crew as an example. As I understand, they were militant vegans.

And if I understand correctly, vegans are concerned about the dynamic where, in order to obtain animal flesh to eat, people usually hire people to raise lots of animals in tiny indoor spaces, sometimes cutting off body parts such as beaks if they are too irritable. Never letting them out to live freely. Taking their children away shortly after they give birth. Breeding them to grow rapidly but also often having genetic disorders that cause a lot of pain and disability. And basically just having them live like that until they get killed and butchered.

And from what I understand, society often tries to obscure this. People get uncomfortable and try to change the subject when you talk about it. People might make laws to make it hard to film and share what's going on. People come up with convoluted denials of animals having feelings. And so on.

(I am not super certain about the above two paragraphs because I haven't investigated it myself, just picked it up by osmosis and haven't seen any serious objections to this narrative.)

I don't know much of the LaSota crew's backstory (I think there were other conflicts than this too? would have to re-read the pages), but basically I wonder if they were vegans, noticed how society (including some rationalists?) conspires to cover up these things, and how Ziz was one of the main people taking it seriously.

You can also sort of take a Bayesian view of it. Like one way to apply Bayes is that you have a set of "experts" who you have varying levels of trust in, and then as they say things, you use Bayesian updates to reallocate trust from those who make bad predictions to those who make good predictions. If you have a conflict where one side is consistently dishonest, then this procedure can lead to rapidly transferring a ton of trust away from them, to the other side of the conflict.
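(A toy sketch of what I mean; the two "sides", the prior trust, and the likelihood numbers below are made up purely for illustration.)

```python
# Toy sketch of the trust-reallocation dynamic described above. The two
# "experts" (sides of the conflict), the prior trust, and the likelihoods
# are all invented for illustration only.
trust = {"side_A": 0.5, "side_B": 0.5}  # prior trust in each side

# How well each side's claims predicted a few observed outcomes
# (side_A is consistently dishonest / badly calibrated; side_B is not).
likelihoods = [
    {"side_A": 0.1, "side_B": 0.8},
    {"side_A": 0.2, "side_B": 0.7},
    {"side_A": 0.1, "side_B": 0.9},
]

for obs in likelihoods:
    # Bayes: posterior trust is proportional to prior trust times likelihood.
    unnormalized = {side: trust[side] * obs[side] for side in trust}
    total = sum(unnormalized.values())
    trust = {side: value / total for side, value in unnormalized.items()}

print(trust)  # nearly all trust has shifted to side_B after a few updates
```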

Replies from: Viliam
comment by Viliam · 2023-06-21T14:19:59.286Z · LW(p) · GW(p)

If you have a conflict where one side is consistently dishonest, then this procedure can lead to rapidly transferring a ton of trust away from them, to the other side of the conflict.

This mostly makes sense, but the wrong part of it is the "homogeneity of the outgroup" assumption. Basically, the cult leader does the trick of dividing the world into two groups: the cult, and everyone else.

The cult tells the truth about that one thing you strongly care about? Check.

All people who lie about the thing you care about are in the "everyone else" group? Check.

The missing part is that... many people in the "everyone else" group also tell the truth about that one thing you strongly care about. That's because the "everyone else" group literally contains billions of people, with all kinds of opinions and behaviors.

But it is easy to miss this, especially when the cult leader tends to use the liars as the prototypes of the outgroup (essentially "weakmanning" the rest of humanity).

As a specific example, if you strongly care about veganism, you should notice that although the majority of non-Zizians are non-vegans, the majority of vegans are non-Zizians. So you shouldn't conclude that there is no salvation outside of Zizians.

Replies from: tailcalled
comment by tailcalled · 2023-06-21T14:41:02.457Z · LW(p) · GW(p)

To an extent, yes this is a good solution.

But also, it doesn't always work. You might have multiple conflicts going on, exponentially reducing who fits the bill. Most people don't know who you are, so you might be limited to your local circles. Sometimes the conflict itself is an obscure thing that few can interact with.

On its own, yes there are lots of vegans they could have had contact with. But the Zizians were also rat/EA types, which restricts their community reach heavily. Though there are lots of peaceful EA vegans, so this can't explain it all.

But like - could there be any other conflicts they had? I expect there to be, though I am not sure about the details. Maybe I am wrong.

Replies from: Viliam
comment by Viliam · 2023-06-22T09:23:34.663Z · LW(p) · GW(p)

That sounds correct. Rat, vegan, trans... maybe one or two more things, and the selection is sufficiently narrow.

comment by Dagon · 2023-06-22T15:40:51.448Z · LW(p) · GW(p)

[note: I'm not particularly EA, beyond the motte of caring about others and wanting my activities to be effective. ]

I think this is basically correct.  EA tends to attract outliers who are susceptible to claims of aggrandizement - telling themselves and being told they're the heroes in the story.  It reinforces this with contrarian-ness, especially on dimensions with clever, math-sounding legible arguments behind them.  And then it reinforces that "effective" is really about the biggest numbers you can plausibly multiply your wild guesses out to.

Until recently, it was all circulating in a pile of free money driven by the related insanity of crypto and tech investment, which seemed to have completely forgotten that zero interest rates were unlikely to continue forever, and that actually producing stuff would eventually be important.

[ epistemic status for next section: provocative devil's advocate argument ]

The interesting question is "sure, it's crazy, but is it wrong?"  I suspect it is wrong - the multiplicative factors into the future are extremely tenuous.  But in the event that this level of commitment and intensity DOES cause alignment to be solved in time, it's arguable that all the insanity is worth it.  If your advice makes the efforts less individually harmful, but also a little bit less effective, it could be a net harm to the universe.

comment by frontier64 · 2023-06-26T01:23:29.525Z · LW(p) · GW(p)

I'm focusing on the aspects specific to rationalism and effective altruism that could lead to people who nominally are part of the rationality community being crazy at a higher rate than one would expect. From your post, I got the following list:

  • Isolationist
  • fewer norms
  • highly ambitious
  • gender-ratios
  • X-risk being scary
  • encourages doing big things

I may be missing some but these are all the aspects that stood out to me. From my perspective, the #1 most important cause of the craziness that sometimes occurs in nominally rationalist communities is that rationalists reject tradition. This kind of falls under the fewer norms category but I don't think 'fewer norms' really captures it.

A lot of people will naturally do crazy things without the strict social rules and guidelines that humans have operated with for hundreds of years. The same rules that have been slowly eroding since 1900. And nominally rationalist communities are kind of at the forefront of eroding those social rules. Rationalists accept as normal ideas like polyamory, group homes (in adulthood as a long term situation), drug use, atheism, mysticism, brain-hacking, transgenderism, sadomasochism, and a whole slew of other historically, socially dis-favored ways to live.

Society previously protected people who are susceptible to manipulation. Society disfavors not only the manipulation and abuse but all unusual behaviors listed above that tended to go along with manipulation and abuse. Not because they necessarily had to go together, but because people who tended to do some really weird things also tended to be manipulative and abusive. Many rationalist communities take the position that in social situations, "as long as there's consent it's not bad." And this just doesn't account for actual human behavior.

Replies from: DonyChristie
comment by Pee Doom (DonyChristie) · 2023-06-29T19:33:49.849Z · LW(p) · GW(p)

group homes (in adulthood as a long term situation)

People living together in group homes (as extended families) used to be the norm? The weird thing is how isolated and individualist we've become. I would argue that group houses where individual adults join up together are preserving some aspect of traditional social arrangement where people live closely, but maybe you would argue that this is not the same as an extended family or the lifelong kinship networks of a village.

comment by Benquo · 2023-06-26T00:59:00.942Z · LW(p) · GW(p)

It’s not clear to me what “crazy” means in this post & how it relates to something like raising the sanity waterline [LW · GW]. A clearer idea of what you mean by crazy would, I think, dissolve the question [LW · GW].

Replies from: Benito
comment by Ben Pace (Benito) · 2023-06-29T20:00:54.740Z · LW(p) · GW(p)

I'm not sure what exactly my answer is, but it's a good question, so here's a babble of pointers what I think 'crazy' means, in case that helps someone else figure out a useful definition:

  • Take actions that most people can confidently know at the time that I will later on not endorse (e.g. physically assault my good friend for fun, set my house on fire, pick up a heroin habit, murder a stranger) or that I wouldn't endorse if you just gave me a bit more basic social security like money, friends, family, etc. (such as murdering someone on the street for some food/money, spending days preparing lies to tell someone in order to trick them into giving me resources, hunting and following a person until they're alone and then trying to get them to give me stuff, stalking someone because I think they've fallen in love with me, etc.).
  • Believe things that most people can confidently know that I don't have the evidence for and will later on not believe (e.g. demons are talking to me, I am literally Napoleon, that I have psychic powers and can read anyone's mind at any time).
  • When someone does or believes things where I (Ben) cannot empathize with or understand why they'd do it, unless there isn't really much relationship between their words/actions and reality (e.g. constantly telling stories that are obviously lies or that aren't internally coherent)
Replies from: Benquo, thoth-hermes
comment by Benquo · 2023-07-04T21:43:36.323Z · LW(p) · GW(p)

The first one seems like it would describe most people, e.g. many, many people repeatedly drink enough alcohol to predictably acutely regret it later.

The second would seem to exclude incurable cases, and I don’t see how to repair that defect without including ordinary religious people.

The third would also seem to include ordinary religious people.

I think these problems are also problems with the OP’s frame. If taken literally, the OP is asking about a currently ubiquitous or at least very common aspect of the human condition, while assuming that it is rare, intersubjectively verified by most, and pathological.

My steelman of the OP’s concern would be something like “why do people sometimes suddenly, maladaptively, and incoherently deviate from the norm?”, and I think a good answer would take into account ways in which the norm is already maladaptive and incoherent, such that people might legitimately be sufficiently desperate to accept that sort of deviance as better for them in expectation than whatever else was happening, instead of starting from the assumption that the deviance itself is a mistake.

If it’s hard to see how apparently maladaptive deviance might not be a mistake, consider a North Korean Communist asking about attempted defectors - who observably often fail, end up much worse off, and express regret afterwards - “why do our people sometimes turn crazy?”. From our perspective out here it’s easy to see what the people asking this question are missing.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-07-06T06:04:45.938Z · LW(p) · GW(p)

This still leaves me confused about why these people made such terrible mistakes. Many people can look at their society and realize how it is cognitively distorting and tricking them into evil behavior. It seems aggressively dumb to then decide that personally murdering people you think are evil is straightforwardly fine and a good strategy, or that you have psychic powers and should lock people in rooms.[1] I think there are more modest proposals, like seasteading or building internet communities or legalizing prediction markets, that have a strong shot of fixing a chunk of the insanity of your civilization without leaving you entirely out in the wilderness, having to rederive everything for yourself and leading you to shoot yourself in the foot quite so quickly.

I expect all North Korean defectors will get labeled evil and psychotic by the state. Like a sheeple, I don't think all such people will be labeled this way by everyone in my personal society, though I straightforwardly acknowledge that a substantial fraction will. I think there were other options here that were less... wantonly dysfunctional.

  1. ^

    Or stealing billions of dollars from people. But to be honest, that one seems the least 'crazy' to me; it doesn't seem that hard for me to explain how someone could trick themselves into thinking that they should personally have all of the resources. I'll say I'm not sure at all that these three things do form a natural category, though I still think it's interesting to ask "Supposing they do, what is the key commonality?"

Replies from: Benquo
comment by Benquo · 2023-07-06T15:06:41.572Z · LW(p) · GW(p)

I think part of what happens in these events is that they reveal how much disorganized or paranoid thought went into someone's normal persona.  You need to have a lot of trust in the people around you to end up with a plan like seasteading or prediction markets - and I notice that those ideas have been around for a long time without visibly generating a much saner & lower-conflict society, so it does not seem like that level of trust is justified.

A lot of people seem to navigate life as though constantly under acute threat and surveillance (without a clear causal theory of how the threat and surveillance are paid for), expecting to be acutely punished the moment they fail to pass as normal - so things they report believing are experienced as part of the act, not the base reality informing their true sense of threat and opportunity.  So it's no wonder that if such people get suddenly jailbroken without adequate guidance or space for reflection, they might behave like a cornered animal and suddenly turn on their captors seemingly at random.

For a compelling depiction of how this might feel from the inside, I strongly recommend John Carpenter's movie They Live (1988), which tells the story of a vagrant construction worker who finds an enchanted pair of sunglasses that translate advertisements into inaccurate summaries of the commands embedded in them, and make some people look like creepy aliens.  So without any apparent explanation, provocation, or warning, he starts shooting "aliens" on the street and in places of business like grocery stores and banks, and eventually blows up a TV transmission station to stop the evil aliens from broadcasting their mind-control waves.  The movie is from his perspective and unambiguously casts him as the hero.  More recently, the climax of The Matrix (1999), a movie about a hacker waking up to systems of malevolent authoritarian control under which he lives, strikingly resembles the Columbine massacre (1999), which actually happened.  See also Fight Club (1999).  Office Space (1999) provides a more optimistic take: A wizard casts a magic spell on the protagonist to relax his body, which causes him to become unresponsive to the social threats he was previously controlled by.  This causes his employer to perceive him as too powerful for his assigned level in the pecking order, and he is promoted to rectify the situation.  He learns his friends are going to be laid off, is indignant at the unfairness of this, and gets his friends together to try to steal a lot of money from their employer.  This doesn't go very well, and he eventually decides to trade down to a lower social class instead and join a friend's construction crew, while his friends remain controlled by social threat.

I've noticed that on phone calls with people serving as members of a big bureaucratic organization like a bank or hospital, I can't get them to do anything by appealing to policies they're officially required to follow, but talking like I expect them to be afraid of displeasing me sometimes makes things happen.  On the positive side, they also seem more compliant if they hear my baby babbling in the background, possibly because it switches them into a state of "here is another human who might have real constraints and want good things, and therefore I sympathize with them" - which implies that their normal on-call state is something quite different.

I'm not sure whether you were intentionally alluding to cops and psychiatrists here, but lots of people effectively experience them as having something like this attitude:

It seems aggressively dumb to then decide that personally murdering people you think are evil is straightforwardly fine and a good strategy, or that you have psychic powers and should lock people in rooms.

How should someone behave if they're within one or two standard deviations of average smarts, and think that the authorities think and act like that? I think that's a legit question and one I've done a lot of thinking about, since as someone who's better-oriented in some ways, I want to be able to advise such people well.  You might want to go through the thought experiment of trying to persuade the protagonist of one of the movies I mentioned above to try seasteading, prediction markets, or an online community, instead of the course of action they take in the movie.  If it goes well, you have written a fan fic of significant social value.  If it goes poorly, you understand why people don't do that.

I agree that stealing billions while endorsing high-trust behavior might superficially seem like a more reasonable thing to do if you don't have a good moral theory for why you shouldn't, and you think effective charities can do an exceptional amount of good with a lot more money.  But if you think you live in a society where you can get away with that, then you should expect that wherever you aren't doing more due diligence than the people you stole from, you're the victim of a scam.  So I don't think it really adds up, any more than the other sorts of behaviors you described.

Two years ago, I took a high dose of psychedelic mushrooms and was able to notice the sort of immanent-threat model I described above in myself.  It felt as though there was an implied threat to cast me out alone in the cold if I didn't channel all my interactions with others through an "adult" persona.  Since I was in a relatively safe quiet environment with friends in the next room, I was able to notice that this didn't seem mechanistically plausible, and call the bluff of the internalized threat: I walked into the next room, asked my friends for cuddles, and talked through some of my confusion about the extent to which my social interface with others justified the expense of maintaining an episodic memory.  But this took a significant amount of courage and temporarily compromised my balance - my ability to stand up or even feel good sitting on a couch elevated above the ground.  Likely most people don't have the kinds of friends, courage, patience, rational skepticism, theoretical grounding in computer science, evolution, and decision theory, or living situation for that sort of refactoring to go well.

Replies from: Benito, Benito
comment by Ben Pace (Benito) · 2023-07-06T16:35:56.735Z · LW(p) · GW(p)

How should someone behave if they're within one or two standard deviations of average smarts, and think that the authorities think and act like that?

Hmm... firstly, I hope they do not think and act like that. The world looks to me like most people aren't acting like that most of the time (most people I know have not been killed, though most have been locked in rooms to some extent). If it were true, I'm not sure I believe that it's of primary importance — just as the person in the proverbial Chinese Room does not understand Chinese, even if many in positions of authority are wantonly cruel and dominating, I still personally experience a lot of freedoms. I'd need to think about what the actual effect is of their intentions, the size, and how changing it or punishing certain consequent behaviors compares to the other list of problems-to-solve.

You might want to go through the thought experiment of trying to persuade the protagonist of one of the movies I mentioned above to try seasteading, prediction markets, or an online community, instead of the course of action they take in the movie.

This suggestion is quite funny, just from reading your description of They Live and seeing the movie poster. On first blush it sounds quite childishly naive on my part to attempt it. But perhaps I will watch the film, think it through some more and figure out more precisely whether I think such a strategy makes any sense or why it would fail.

Initially, asking such a person to play a longer game feels like asking them to "keep up the facade" while working on a solution that only has like a 30% chance of working. From your descriptions I anticipate that the people in They Live and Office Space would find this too hard after a while and snap (or else lose their grasp on reality). On the other hand, I think people sometimes pull off subterfuges successfully. While we're talking about films I have not seen, from what I've heard Schindler's List sounds like one where a character noticed his society was enacting distinctly evil policies and strategically worked to combat it without snapping / doing immoral and (to me) crazy things. (Perhaps I will watch that and find out that he does!) I wonder what the key difference there is.

(I will regrettably move on to some other activities for now, construction deadlines are this Monday.)

Replies from: Benquo, Benquo
comment by Benquo · 2023-07-06T21:48:15.380Z · LW(p) · GW(p)

Hmm... firstly, I hope they do not think and act like that.

Maybe this was unclear, but I meant to distinguish two questions so that you could try to answer one somewhat independently of the other:

1. What determines various authorities' actions?

2. How should a certain sort of person, with less or different information than you, model the authorities' actions?

Specifically I was asking you to consider a specific hypothesis as the answer to question 2 - that for a lot of people who aren't skilled social scientists, the behavior of various authorities can look capricious or malicious even if other people have privileged information that allows them to predict those authorities' behavior better and navigate interactions with them relatively freely and safely.

To add a bit of precision here, someone who avoids getting hurt by anxiously trying to pass the test (a common strategy in the Rationalist and EA scene) is implicitly projecting quite a bit more power onto the perceived authorities than they actually have, in ways that may correspond to dangerously wrong guesses about what kinds of change in their behavior will provoke what kinds of confrontation.  For example, if you're wrong about how much violence will be applied and by whom if you stop conforming, you might mistakenly physically attack someone who was never going to hurt you, under the impression that it is a justified act of preemption.

On this model, the way in which the behavior of people who've decided to stop conforming seems bizarre and erratic to you implies that you have a lot of implicit knowledge of how the world works that they do not.  Another piece of fiction worth looking at in this context is Burroughs's Naked Lunch.  I've only seen the movie version, but I would guess the book covers the same basic content - the disordered and paranoid perspective of someone who has a vague sense that they're "under cover" vs society, but no clear mechanistic model of the relevant systems of surveillance or deception.

Replies from: Benito, Benito
comment by Ben Pace (Benito) · 2023-07-11T07:58:19.671Z · LW(p) · GW(p)

To add a bit of precision here, someone who avoids getting hurt by anxiously trying to pass the test (a common strategy in the Rationalist and EA scene) is implicitly projecting quite a bit more power onto the perceived authorities than they actually have, in ways that may correspond to dangerously wrong guesses about what kinds of change in their behavior will provoke what kinds of confrontation.

Not yet answering the central question you asked, but this example is interesting to me, as this both sounds like a severe mistake I have made and also I don't quite understand how it happens. When anxiously trying to pass the test, what false assumption is the person making about the authority's power?

I can try to figure it out for myself... I have tried to pass tests (literally, at university) and held it as the standard of a person. I have done this in other situations, holding someone's approval as the standard to meet and presuming that there is some fair game I ought to succeed at to attain their approval. This is not a useless strategy, even while it might blind me to the ways in which (a) the test is dumb, (b) I can succeed via other mechanisms (e.g. side channels, or playing other games entirely).

In these situations I have attributed to them far too much real power, and later on have felt like I have majorly wasted my time and effort caring about them and their games when they were really so powerless. But I still do not quite see the exact mistake in my cognition, where I went from a true belief to a false one about their powers.

...I think the mistake has to do with identifying their approval as the scoring function of a fair game, when it actually only approximated a fair game in certain circumstances, and outside of that may not be related whatsoever. ("may not be"! — it is of course not related to that whatsoever in a great many situations.) The problem is knowing when someone's approval is trying to approximate the scoring function of a fair (and worthwhile) game, and when it is not. But I'm still not sure why people end up getting this so wrong.

Replies from: Benquo
comment by Benquo · 2023-07-30T15:16:39.254Z · LW(p) · GW(p)

There’s a common fear response, as though disapproval = death or exile, not a mild diminution in opportunities for advancement. Fear is the body’s stereotyped configuration optimized to prevent or mitigate imminent bodily damage. Most such social threats do not correspond to a danger that is either imminent or severe, but are instead more like moves in a dance that trigger the same interpretive response.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-08-05T21:09:22.700Z · LW(p) · GW(p)

Re-reading my comment, the thing that jumps to mind is that "I currently know of no alternative path to success". When I am given the option between "Go all in on this path being a fair path to success" and "I know of no path to success and will just have to give up working my way along any particular path, and am instead basically on the path to being a failure", I find it quite painful to accept the latter, and find it easier on the margin to self-deceive about how much reason I have to think the first path works.

I think a few times in my life (e.g. trying to get into the most prestigious UK university, trying to be a successful student once I got in) I could think of no other path in life I could take than the one I was betting on. This made me quite desperate to believe that the current one was working out okay.

I think "fear" is an accurate description of my reaction to thinking about the alternative (of failure). Freezing up, not being able to act.

Replies from: Benquo
comment by Benquo · 2023-08-25T16:16:02.235Z · LW(p) · GW(p)

Reality is sufficiently high-dimensional and heterogeneous that if it doesn’t seem like there’s a meaningful “explore/investigate” option with unbounded potential upside, you’re applying a VERY lossy dimensional reduction to your perception.

comment by Ben Pace (Benito) · 2023-07-07T05:31:05.219Z · LW(p) · GW(p)

(I appreciate the reply, I will not get back to this thread until Monday at the earliest. Any ping to reply mid next week is very welcome.)

comment by Benquo · 2023-07-06T22:48:25.697Z · LW(p) · GW(p)

One more thing: the protagonists of The Matrix and Terry Gilliam’s Brazil (1985) are relatively similar to EAs and Rationalists so you might want to start there, especially if you’ve seen either movie.

comment by Ben Pace (Benito) · 2023-07-11T07:25:42.038Z · LW(p) · GW(p)

You need to have a lot of trust in the people around you to end up with a plan like seasteading or prediction markets - and I notice that those ideas have been around for a long time without visibly generating a much saner & lower-conflict society, so it does not seem like that level of trust is justified.

I would say that it requires an advanced understanding of economics, incentives, and how society works, rather than trust in people. Understanding how a mechanism works reduces the requirement for trust. (They are complements in my mind.)

I think one of the reasons it would be hard to get a recently jailbroken not-that-intellectual person on-board with such a plan is that it would involve giving them novel understanding of how the world works that they do not have, which somehow people are rarely able to intentionally do, and it can easily fall back to an ask of "trust" that you know something the other person doesn't, rather than a successful communication of understanding. And then after some number of weeks or months or years the world will introduce enough unpredictable noise that the trust will run out and the person will go back to using the world as they understand it, where they were never going to invent a concept like prediction markets.

...but hey, perhaps I'm not giving them enough credit, and actually they would ask themselves questions like "where does all of the cool technology and inventions around me come from" and start building up a model of science and of successful groups, and start figuring out which sorts of reasoning actually work and what sorts of structures in society get good things done on purpose, and then start to notice which parts of society can give you more of those powers, and then start to notice things like markets and personal freedoms and building mechanistic world models and more as ways to build up those forces in society.

On the one hand this path can take decades and most humans do not go down it. On the other hand the evidence required to build up a functional worldview is increasingly visible as technological progress has sped up over the centuries and so much of the world is viewable at home on a computer screen. Still, teaching anyone anything on purpose is hard in full generality, for some reason, and just as someone is having a crisis-of-faith is a hard time to have to bet on doing it successfully.

(Aside: This gives a more specific motive to explaining how the world works to a wider audience. "I don't just think it's generically nice for everyone to understand the world they live in, but I specifically am hoping that the next person to finally see the ways their society enacts evil doesn't snap and themself do something stupid and evil, but is instead able to wield the true forces of the world to improve it.")

comment by Thoth Hermes (thoth-hermes) · 2023-06-30T15:21:24.128Z · LW(p) · GW(p)

I do like your definition of "crazy" that uses "an idea [I / the crazy person] would not endorse later." I think it dissolves a lot of the eeriness around the word that makes it kind of overly heavy-hitting when used, but also, I think that if you dissolve it in this way, it pretty much incentivizes dropping the word entirely (which I think is a good thing, but maybe not everyone would).

If we define it to mean ideas (not the person) that the person holding them would eventually drop or update to something else, that's more like what the definition of "wrong" is, and which would apply to literally everyone at different points in their lives and to varying degrees at any time. But then maybe this is too wide, and doesn't capture the meaning of the word implied in the OP's question, namely, "why do more people than usual go crazy within EA / Rationality?" Perhaps what is meant by the word in this context is when some people seem to hold wrong ideas that are persistent or cannot be updated later at all. For the record, I am skeptical that this form of "crazy" is really all that prevalent when defined this way. 

If we define it as "wrong ideas" (things which won't be endorsed later) then it does offer a rather simple answer to the OP's question: EA / Rationality is rather ambitious about testing out new beliefs at the forefront of society, so they will by definition hold beliefs that aren't held by the majority of people, and which by design, are ambitious and varied enough to be expected to be proven wrong many times over time. 

If being ambitious about having new or unusual ideas carries with it accepted risks of being wrong more often than usual, then perhaps a certain level of craziness has to be tolerated as well. 

comment by Unreal · 2023-06-21T15:53:05.524Z · LW(p) · GW(p)

I have different hypotheses / framings. I will offer them. If you wish to discuss any of them in more detail, please reach out to me via email or PM. Happy to converse!

// 

Mythical/Archetypal take: 

There are large-scale, old, and powerful egregores fighting over the minds of individuals and collectives. They are not always very friendly to human interests or values. In some cases, they are downright evil. (I'd claim the Marxist egregore is a pretty destructive one.)

The damage done by these egregores is multigenerational. It didn't start with just THIS generation. Shit started before any of us was born. 

It's kind of like the Iliad. Petty, powerful gods fighting over some nonsense; humans get caught in the hurricane-sized effects; chaos ensues. Sometimes humans become willing to sacrifice their souls to these egregores in exchange for the promise of power, wealth, security, sex, etc. 

It's not just unusual people like Ziz who might do this. Pretty normal-seeming, happy-seeming people have sacrificed their souls to certain egregores (e.g. progressivism, humanism, etc) and become mouthpieces for the egregores' values and agendas. When you try to have conversations with these people, it's like they're not speaking from their true beliefs or actual internal experience. They've become living propaganda. 

The rationalist / EA community attracts plenty of 'egregore activity' because of their concentration of intelligent, resourceful, good-hearted people. They are valuable to control.  They're also near the center of the actual narrative, the fight for humanity's soul and evolutionary direction. They are particularly vulnerable to egregore activity because of high rates of trauma and disembodiment and strong ideological bent, making them relatively easy to manipulate. 

OK, but how does one properly defend against these huge forces, tossing humans around like rag dolls? 

The main trick is embodiment. Being totally in the body, basically all the time. An integrated person, integration between heart, mind, body, and soul. Resolving and overcoming any addictions to anything, including seemingly innocuous ones. Resolving and healing trauma as much as possible (which is an ongoing journey). Finding a deeper, more fundamental happiness and peace that can't be disturbed by any external circumstance (thus, no longer being subject to temptations for power, wealth, security, relationship, or sense pleasure). 

//

Main other story:

Some people made some really bad choices. 

Integrity isn't something that happens to you. You have to choose it and choose it again. If you fail to choose it, and then fail to recognize and reconcile the error... and keep failing to choose it... that path leads to more slippage and things can spiral out of control. 

Ziz made certain choices, and that had consequences. Ziz didn't reform. Ziz didn't apologize. Ziz kept digging that hole. That negative-spiral path erodes one's ability to see what's happening and can lead to deep insanity. 

A person's moral system does not thrive under a guilty conscience. It gets unbearable. And you just have to keep hurting more people to justify it to yourself, and to temporarily escape the pain. This is what happens when one isn't willing to feel remorse, grieve, and acknowledge the damage they've caused. It becomes hell on earth for you, and you create hell on earth for others. 

Whatever hypothesis you come up with, don't absolve the individuals of their moral responsibility to avoid evil actions. It gets this bad when people fuck up that badly. Not due merely to external circumstances or internal drives like 'wanting to belong'. Regardless of any of that, they made choices, and they didn't have to make those choices. 

//

I could probably find more hypotheses, but I will stop there! :0 Thanks for reading. 

comment by Anomalous (ward-anomalous) · 2023-06-21T15:23:57.204Z · LW(p) · GW(p)

fwiw I think stealing money from mostly-rich-people in order to donate it isn't obviously crazy. Decouple this claim from anything FTX did in particular, since I know next to nothing about the details of what happened there. From my perspective, it could be they were definite villains or super-ethical risk-takers (low prior).

Thought I'd say this because I definitely feel reluctance to say it. I don't like this feeling, and it seems like good anti-bandwagon policy to say a thing when one feels even slight social pressure to shut up.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-06-21T15:35:58.990Z · LW(p) · GW(p)

I personally know more than one person who had the majority of their life savings stolen from them, having put that money into FTX in part because of the trust Sam had in the EA ecosystem. I think there's a pretty strong Schelling line (supported and enforced by the law) against theft, such that even if a theft is worth it on naive utilitarian terms, I am strongly in favor of punishing and imprisoning anyone who commits one, so that people can work together safe in the knowledge that the resources they've worked hard to earn won't be straightforwardly taken from them.

(In this comment I'm more trying to say "massive theft should be harshly punished regardless of intention" than say "I know the psychology behind why SBF, Caroline Ellison, and others, stole everyone's money".)

comment by sapphire (deluks917) · 2023-06-21T21:01:53.325Z · LW(p) · GW(p)

My honest opinion is that Ziz got several friends of mine killed, so I don't exactly have a high opinion of her. But I have never heard of Ziz referring to themselves as LaSota. It's honestly toxic not to use people's preferred names. It's especially toxic if they are trans, but the issue isn't restricted to trans people. So I'd strongly prefer people refer to Ziz as Ziz. 

Replies from: habryka4, ViktoriaMalyasova, ChristianKl, drethelin
comment by habryka (habryka4) · 2023-06-21T22:17:49.737Z · LW(p) · GW(p)

I think this position has some merit, though I disagree. I think Ziz is a name that is hard to Google and get context on, and it also feels like it was chosen with intimidation in mind. "LaSota" is me trying to actively be neutral: not choosing a name that they have actively disendorsed, while also making it a more unique identifier, not misgendering them (as their full legal name would), and not contributing to more bad dynamics by having a "cool name for the community villain [LW(p) · GW(p)]", which I really don't think has good consequences.

Replies from: abandon
comment by dirk (abandon) · 2024-07-22T11:14:24.054Z · LW(p) · GW(p)

I think it's not clear that "LaSota" refers to Ziz unless you already happen to have looked up the news stories and used process of elimination to figure out which legal name goes with which online handle, which makes it ineffective for communicative purposes.

comment by ViktoriaMalyasova · 2023-06-22T05:51:01.197Z · LW(p) · GW(p)

I think when it comes to people who get people killed, it's justified to reveal all the names they go by in the interest of public safety, even if they don't like it. 

comment by ChristianKl · 2023-06-21T22:29:49.699Z · LW(p) · GW(p)

What exactly do you mean by the word 'toxic'? 

comment by drethelin · 2023-06-21T21:48:44.203Z · LW(p) · GW(p)

Also, for practical purposes it's much clearer who is being referred to in the local context, especially since there's tons of writing from/about Ziz.

Plus it's just a much cooler name for a community villain. 

comment by MalcolmOcean (malcolmocean) · 2023-08-20T16:38:50.549Z · LW(p) · GW(p)

Probably be grounded in more than one social group. Even being part of two different high-intensity groups seems like it should reduce the dynamics here a lot.

Worked well for me!

Eric Chisholm likes to phrase this principle as "the secret to cults is to be in at least two of them".

Replies from: TAG
comment by TAG · 2023-08-20T22:33:51.595Z · LW(p) · GW(p)

It would be great if the Yudkowskians could spend a week at the Randians, the Randians at the Deutschians, and so on.

comment by Chipmonk · 2023-06-21T19:53:51.789Z · LW(p) · GW(p)
  1. Don’t put yourself into positions of insecurity. […]

This seems like it points in the wrong direction to me. I'd instead say something like "look for your own insecurities and then look closely at the ones you find". But the current thing you've said sounds like "avoid wherever your insecurities might manifest (because they're fixed)".

[How to resolve insecurities? Coherence Therapy [LW · GW].]

I think there's a commonly-held belief that a feeling of belonging is something that we can get from other people, but I think this is a misconception. Stable confidence doesn't come from knowing that other people like you.

Note: It's not that we can get a feeling of belonging entirely from ourselves, either— it's more nuanced than that. Alfred Adler, father of Individual Psychology, said quite a bit about this, and I'm currently drafting a sequence that touches on this.

Replies from: DaystarEld
comment by DaystarEld · 2023-06-21T20:04:06.697Z · LW(p) · GW(p)

Agreed in principle, though it's worth noting that more-resourced people tend to have fewer insecurities in general. People who have a stable family, no economic insecurity, positive peer support, etc., end up less susceptible to cults, as well as to bad social dynamics in general.

This isn't to say that people can't create stable confidence for themselves without those things, only that "dependent confidence" is also a thing people can have instead, which can act protectively or expose them to risk.

comment by DaystarEld · 2023-06-21T20:00:49.789Z · LW(p) · GW(p)

Good breakdown of one of the aspects in all this. The insecurity/desperation topic is a really hard one to navigate well, but I agree it's really important.

Hard because when someone feels like an outsider, a group of other like-minded outsiders will naturally want to help them and welcome them, and it can be an uncomplicated good to do so. Important because if someone has only one source of support, resources, social connection, etc., they are far more likely to turn desperate or do desperate things to maintain their place in the community.

Does this mean we should not accept anyone into the community just because they really want a safe place to avoid broader civilization? I don't think so, but it's definitely a flag more people should be aware of, including those who are desperate to belong. Exploitation can happen on a broad and public scale, like organizations looking for volunteers or employees, but it can also be small and private, at the level of group houses and friends made in the community.

Young people who join the community are of course especially at risk here, and it's a constant struggle at the rationality camps to welcome and provide opportunities for those who do want to join the broader community (rather than just enjoy the camp for its own sake) without fostering dependency.

comment by Noosphere89 (sharmake-farah) · 2023-06-27T15:06:34.425Z · LW(p) · GW(p)

Hot take: to the extent that EAs and rationalists turn crazy, part of the problem is that their focus includes existential risk combined with very low discount rates for the future.

To explain more: I think utilitarianism is maybe part of the problem, but it's broader than that. The bigger problem is the combination of fundamentally believing that we will all die of something and believing that your group can control the chance of extinction. That's a fast road to craziness, given that most of these existential risks probably wouldn't materialize anyway, and it gives you license to justify a lot of actions you'd normally never consider. This is especially dangerous once you add very low discount rates to the mix, so that preventing extinction becomes the foremost priority.

This tweet below makes a harsher version of my point, and while I'd probably soften it, I also suspect that something like this is true. I like it both for stating something that may be true and for making rather weak assumptions about what EAs/LWers value and what their ethical systems look like.

https://twitter.com/AGI_HeavenHell/status/1673359793078038528

comment by Aaron Bergman (aaron-bergman) · 2023-06-24T01:24:07.577Z · LW(p) · GW(p)

Seems like the forces that turn people crazy are the same ones that lead people to do anything good and interesting at all. At least for EA, a core function of orgs/elites/high-status community members is to make the kind of signaling you describe highly correlated with actually doing good. Of course it seems impossible to make them correlate perfectly, and that's why settings with super high social optimization pressure (like FTX) are gonna be bad regardless.

But (again for EA specifically) I suspect the forces you describe would actually be good to increase on the margin for people not living in Berkeley and/or in a group house, which is probably a majority of self-identified EAs but a strong minority of the people-hours OP interacts with irl.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-06-21T05:04:46.147Z · LW(p) · GW(p)

To any individual in a group, it can easily be the case that the group standard seems dumb to them, but in a situation of risk aversion, the important part is that you do things that look to everyone like the kind of thing others would think is part of the standard. In practice this boils down to a very limited kind of reasoning where you do things that look vaguely associated with whatever you think the standard is, often without that standard being grounded in much of any robust internal logic. And doing things that are inconsistent with the actual standard upon substantial reflection does not actually get punished, as long as it looks like the kind of behavior that would be generated by someone trying to follow the standard.

Here is my own spin on this idea:

We often defer at least somewhat to common sense. Unfortunately, we face uncertainty both about the question at hand, and about what our friends and neighbors consider to be common sense. What's worse is that, even though each of us has an individual perspective, we often conflate our personal observations with our deferent, all-things-considered, common-sense best guess.

The result is that what settles in as "common sense" is often very different from the average of our individual observations - it's a woefully inaccurate narrative that caught on by chance. The reason it doesn't go away is that even this inaccurate narrative does reflect just enough real wisdom that we're still better off deferring to it than just going our own way. We comply with common sense - and our own decisions turn out better than those of our neighbors who don't heed it.

Yet that only entrenches the inaccurate narrative further. We'd all be better off getting together, being open and honest about our individual perspectives, and creating a new, more accurate form of common sense. Unfortunately, this is often very difficult to do. Inaccurate narratives about common sense perpetuate themselves, and the group suffers.

comment by Review Bot · 2024-07-22T17:39:26.299Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

comment by Thoth Hermes (thoth-hermes) · 2023-06-21T21:21:20.417Z · LW(p) · GW(p)

Most social groups will naturally implement an "in-group / out-group" identifier of some kind, along with associated mechanisms to apply this identifier to their members. There are a few dynamics at play here:

  1. Before this identification mechanism has been implemented, there isn't really much of a distinction between in-group and out-group. Therefore, there will be people who self-identify as being associated with the group, but who are not part of the sub-group which begins to make the identifications. Some of these members may accordingly get labeled part of the out-group by the sub-group which identifies as the in-group. This creates discord.
  2. The identification method works as a cut-off, which is ultimately arbitrary. Even if the metric used to implement this cut-off is relatively valid (such as an overall measure of aptitude), the cut-off itself is technically not. 
  3. There is a natural incentive to implement this cut-off in order to boost one's social rank relative to those under the cut-off. This means there is probably a pre-existing aptitude measure of some kind (or a visible social hierarchy, which might be more correlated to this measure). Thus, the cut-off may even be flipped in sign relative to whatever it is portrayed as signaling. 

We'd expect groups which implement these cut-off strategies to be more "cult-like" than ones that do not. Groups that implement these cut-offs usually have to invent beliefs and ideologies which support the practice of doing so. Usually, these ideologies are quite outward-projected and tend to consist of negative reactions to the activities of other groups. 

They probably also, in line with point 2, use proxy metrics for implementing the cut-off, which work as binary features (e.g., person X has a quality we don't like, even though they are extremely good at task Y). Therefore, they promote the ideology that people with specific, ostensibly unlikable attributes need to be excluded even if they have widely acknowledged skill and a visible track record of being productive for the group.

All of the above can increase the chance of internal conflict.  

comment by Chipmonk · 2024-05-25T17:52:13.173Z · LW(p) · GW(p)

I think I might have a promising and better intervention for preventing individual EAs and Rationalists from “turning crazy”. What would you want to do with it?

comment by Joseph Van Name (joseph-van-name) · 2023-06-22T17:53:57.413Z · LW(p) · GW(p)

As a cryptocurrency creator, I can assure you that there is something seriously wrong with nearly everyone who owns and is a fan of cryptocurrency technologies. You can't trust those kinds of people. And most crypto-chlurmcks have not even read the Bitcoin whitepaper since they are incapable of understanding it despite how easy the Bitcoin whitepaper is to read. And nearly all crypto-chlurmcks cannot seem to understand that Bitcoin has a mining algorithm that was never even designed to advance science while simultaneously establishing consensus. And unlike GPT, the crypto-chlurmcks typically do not know how to form intelligible complete sentences. They hate mathematics. And they are all-around really dysfunctional people. The cryptocurrency sector has its fair share of scandals because the people who are attracted to cryptocurrencies are not the best people.

comment by IlyaShpitser · 2023-06-21T18:34:13.193Z · LW(p) · GW(p)

It's simple. "You" (the rationalist community) are selected for being bad at making wisdom saving throws, so to speak.

You know, let's look at Yudkowsky, with all of his very public, very obvious character dysfunction, and go "yes, this is the father figure/Pope I need to change my life."

The only surprise here is that the type of stuff you are agonizing about didn't happen earlier, and isn't happening more often.

Replies from: Viliam, TAG
comment by Viliam · 2023-06-22T09:29:05.917Z · LW(p) · GW(p)

People better at making wisdom saving throws may be less visible.

FTX wasn't the only rationalist business; Zizians aren't the only (ex-?) rationalist subgroup; etc.

comment by TAG · 2023-06-23T10:04:47.579Z · LW(p) · GW(p)

It doesn't have to be mysterious wisdom. You can learn about cults. The main thing to learn is that cult dynamics aren't all top down. It takes followers to make a cult, and sometimes it only takes followers.