My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

post by jessicata (jessica.liu.taylor) · 2021-10-16T21:28:12.427Z · LW · GW · 949 comments

Contents

  Background: choosing a career
  Trauma symptoms and other mental health problems
  Why do so few speak publicly, and after so long?
  Strange psycho-social-metaphysical hypotheses in a group setting
  World-saving plans and rarity narratives
  Debugging
  Other issues
  Conclusion

I appreciate Zoe Curzi's revelations of her experience with Leverage.  I know how hard it is to speak up when no or few others do, and when people are trying to keep things under wraps.

I haven't posted much publicly about my experiences working as a researcher at MIRI (2015-2017) or around CFAR events, to a large degree because I've been afraid.  Now that Zoe has posted about her experience, I find it easier to do so, especially after the post was generally well-received by LessWrong.

I felt moved to write this, not just because of Zoe's post, but also because of Aella's commentary:

I've found established rationalist communities to have excellent norms that prevent stuff like what happened at Leverage. The times where it gets weird is typically when you mix in a strong leader + splintered, isolated subgroup + new norms. (this is not the first time)

This seemed to me to be definitely false, upon reading it.  Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).

I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases.  I also caution against blame in general, in situations like these, where many people (including me!) contributed to the problem, and have kept quiet for various reasons.  With good reason, it is standard for truth and reconciliation events to focus on restorative rather than retributive justice, and include the possibility of forgiveness for past crimes.

As a roadmap for the rest of the post, I'll start by describing some background, describe some trauma symptoms and mental health issues I and others have experienced, and describe the actual situations that these mental events were influenced by and "about" to a significant extent.

Background: choosing a career

After I finished my CS/AI Master's degree at Stanford, I faced a choice of what to do next.  I had a job offer at Google for machine learning research and a job offer at MIRI for AI alignment research.  I had also previously considered pursuing a PhD at Stanford or Berkeley; I'd already done undergrad research at CoCoLab, so this could have easily been a natural transition.

I'd decided against a PhD on the basis that research in industry was a better opportunity to work on important problems that impact the world; since then I've gotten more information from insiders that academia is a "trash fire" (not my quote!), so I don't regret this decision.

I was faced with a decision between Google and MIRI.  I knew that at MIRI I'd be taking a pay cut.  On the other hand, I'd be working on AI alignment, an important problem for the future of the world, probably significantly more important than whatever I'd be working on at Google.  And I'd get an opportunity to work with smart, ambitious people, who were structuring their communication protocols and life decisions around the content of the LessWrong Sequences.

These Sequences contained many ideas that I had developed or discovered independently, such as functionalist theory of mind, the idea that Solomonoff Induction was a formalization of inductive epistemology, and the idea that one-boxing in Newcomb's problem is more rational than two-boxing.  The scene attracted thoughtful people who cared about getting the right answer on abstract problems like this, making for very interesting conversations.

Research at MIRI was an extension of such interesting conversations to rigorous mathematical formalism, making it very fun (at least for a time).  Some of the best research I've done was at MIRI (reflective oracles, logical induction, others).  I met many of my current friends through LessWrong, MIRI, and the broader LessWrong Berkeley community.

When I began at MIRI (in 2015), there were ambient concerns that it was a "cult"; this was a set of people with a non-mainstream ideology that claimed that the future of the world depended on a small set of people that included many of them.  These concerns didn't seem especially important to me at the time.  So what if the ideology is non-mainstream as long as it's reasonable?  And if the most reasonable set of ideas implies high impact from a rare form of research, so be it; that's been the case at times in history.

(Most of the rest of this post will be negative-valenced, like Zoe's post; I wanted to put some things I liked about MIRI and the Berkeley community up-front.  I will be noting parts of Zoe's post and comparing them to my own experience, which I hope helps to illuminate common patterns; it really helps to have an existing different account to prompt my memory of what happened.)

Trauma symptoms and other mental health problems

Back to Zoe's post.  I want to disagree with a frame that says that the main thing that's bad was that Leverage (or MIRI/CFAR) was a "cult".  This makes it seem like what happened at Leverage is much worse than what could happen at a normal company.  But, having read Moral Mazes and talked to people with normal corporate experience (especially in management), I find that "normal" corporations are often quite harmful to the psychological health of their employees, e.g. causing them to have complex PTSD symptoms, to see the world in zero-sum terms more often, and to have more preferences for things to be incoherent.  Normal startups are commonly called "cults", with good reason.  Overall, there are both benefits and harms of high-demand ideological communities ("cults") compared to more normal occupations and social groups, and the specifics matter more than the general class of something being "normal" or a "cult", although the general class affects the structure of the specifics.

Zoe begins by listing a number of trauma symptoms she experienced.  I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.

The psychotic break was in October 2017, and involved psychedelic use (as part of trying to "fix" multiple deep mental problems at once, which was, empirically, overly ambitious); although people around me to some degree tried to help me, this "treatment" mostly made the problem worse, so I was placed in 1-2 weeks of intensive psychiatric hospitalization, followed by 2 weeks in a halfway house.  This was followed by severe depression lasting months, and less severe depression from then on, which I still haven't fully recovered from.  I had PTSD symptoms after the event and am still recovering.

During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation.  I was catatonic for multiple days, afraid that by moving I would cause harm to those around me.  This is in line with scrupulosity-related post-cult symptoms.

Talking about this is to some degree difficult because it's normal to think of this as "really bad".  Although it was exceptionally emotionally painful and confusing, the experience taught me a lot, very rapidly; I gained and partially stabilized a new perspective on society and my relation to it, and to my own mind.  I have much more ability to relate to normal people now, who are for the most part also traumatized.

(Yes, I realize how strange it is that I was more able to relate to normal people by occupying an extremely weird mental state where I thought I was destroying the world and was ashamed and suicidal regarding this; such is the state of normal Americans, apparently, in a time when suicidal music is extremely popular among youth.)

Like Zoe, I have experienced enormous post-traumatic growth.  To quote the song "I Am Woman": "Yes, I'm wise, but it's wisdom born of pain.  I guess I've paid the price, but look how much I've gained."

While most people around MIRI and CFAR didn't have psychotic breaks, there were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis.  There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events including a relatively exclusive AI-focused one.

I heard that the paranoid person in question was concerned about a demon inside him, implanted by another person, trying to escape.  (I knew the other person in question, and their own account was consistent with attempting to implant mental subprocesses in others, although I don't believe they intended anything like this particular effect).  My own actions while psychotic later that year were, though physically nonviolent, highly morally confused; I felt that I was acting very badly and "steering in the wrong direction", e.g. in controlling the minds of people around me or subtly threatening them, and was seeing signs that I was harming people around me, although none of this was legible enough to seem objectively likely after the fact.  I was also extremely paranoid about the social environment, being unable to sleep normally due to fear.

There are even cases of suicide in the Berkeley rationality community associated with scrupulosity and mental self-improvement (specifically, Maia Pasek/SquirrelInHell [LW · GW], and Jay Winterford/Fluttershy [LW · GW], both of whom were long-time LessWrong posters; Jay wrote an essay about suicidality, evil, domination, and Roko's basilisk months before the suicide itself).  Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz.  (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)

The cases discussed are not always of MIRI/CFAR employees, so they're hard to attribute to the organizations themselves, even if they were clearly in the same or a nearby social circle.  Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization.  Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR.  (This diffusion of responsibility, of course, doesn't help when there are actual crises, mental health or otherwise.)

Obviously, for every case of poor mental health that "blows up" and is noted, there are many cases that aren't.  Many people around MIRI/CFAR and Leverage, like Zoe, have trauma symptoms (including "cult after-effect symptoms") that aren't known about publicly until the person speaks up.

Why do so few speak publicly, and after so long?

Zoe discusses why she hadn't gone public until now.  She first cites fear of response:

Leverage was very good at convincing me that I was wrong, my feelings didn't matter, and that the world was something other than what I thought it was. After leaving, it took me years to reclaim that self-trust.

Clearly, not all cases of people trying to convince each other that they're wrong are abusive; there's an extra dimension to institutional gaslighting: people telling you something you have no reason to expect they actually believe, people being defensive and blocking information, giving implausible counter-arguments, and trying to make you doubt your account and agree with their bottom line.

Jennifer Freyd writes about "betrayal blindness", a common problem where people hide from themselves evidence that their institutions have betrayed them.  I experienced this around MIRI/CFAR.

Some background on AI timelines: At the Asilomar Beneficial AI conference, in early 2017 (after AlphaGo was demonstrated in late 2016), I remember another attendee commenting on a "short timelines bug" going around.  Apparently a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years.

This trend in belief included MIRI/CFAR leadership; one person commented that he noticed his timelines trending only towards getting shorter, and decided to update all at once.  I've written [LW · GW] about AI timelines in relation to political motivations before (long after I actually left MIRI).

Perhaps more important to my subsequent decisions, the AI timelines shortening triggered an acceleration of social dynamics.  MIRI became very secretive about research.  Many researchers were working on secret projects, and I learned almost nothing about these.  I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact.  Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.

I had disagreements with the party line, such as on when human-level AGI was likely to be developed and about security policies around AI, and there was quite a lot of effort to convince me of their position, that AGI was likely coming soon and that I was endangering the world by talking openly about AI in the abstract (not even about specific new AI algorithms). Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness [EDIT: Eliezer himself and Sequences-type thinking, of course, would aggressively disagree [LW(p) · GW(p)] with the epistemic methodology advocated by this person].  I experienced a high degree of scrupulosity about writing anything even somewhat critical of the community and institutions (e.g. this post).  I saw evidence of bad faith around me, but it was hard to reject the frame for many months; I continued to worry about whether I was destroying everything by going down certain mental paths and not giving the party line the benefit of the doubt, despite its increasing absurdity.

Like Zoe, I was definitely worried about fear of response.  I had paranoid fantasies about a MIRI executive assassinating me.  The decision theory research I had done came to life, as I thought about the game theory of submitting to a threat of a gun, in relation to how different decision theories respond to extortion.

This imagination, though extreme (and definitely reflective of a cognitive error), was to some degree reinforced by the social environment.  I mentioned the possibility of whistle-blowing on MIRI to someone I knew, who responded that I should consider talking with Chelsea Manning, a whistleblower who is under high threat.  There was quite a lot of paranoia at the time, both among the "establishment" (who feared being excluded or blamed) and "dissidents" (who feared retaliation by institutional actors).  (I would, if asked to take bets, have bet strongly against actual assassination, but I did fear other responses.)

More recently (in 2019), there were multiple masked protesters at a CFAR event (handing out pamphlets critical of MIRI and CFAR) who had a SWAT team called on them (by camp administrators, not CFAR people, although a CFAR executive had called the police previously about this group), who were arrested, and are now facing the possibility of long jail time.  While this group of people (Ziz and some friends/associates) chose an unnecessarily risky way to protest, hearing about this made me worry about violently authoritarian responses to whistleblowing, especially when I was under the impression that it was a CFAR-adjacent person who had called the cops to say the protesters had a gun (which they didn't have), which is the way I heard the story the first time.

Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past events secretively.  This matches my experience.

Like Zoe, I care about the people I interacted with during the time of the events (who are, for the most part, colleagues who I learned from), and I don't intend to cause harm to them through writing about these events.

Zoe discusses an unofficial NDA people signed as they left, agreeing not to talk badly of the organization.  While I wasn't pressured to sign an NDA, there were significant security policies discussed at the time (including the one about researchers not asking each other about research).  I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous (the blog post in question would have been similar to the AI forecasting work I did later, here and here; judge for yourself how dangerous this is).  This made it hard to talk about the silencing dynamic; if you don't have the freedom to speak about the institution and limits of freedom of speech, then you don't have freedom of speech.

(Is it a surprise that, after over a year in an environment where I was encouraged to think seriously about the possibility that simple actions such as writing blog posts about AI forecasting could destroy the world, I would develop the belief that I could destroy everything through subtle mental movements that manipulate people?)

Years before, MIRI had a non-disclosure agreement that members were pressured to sign, as part of a legal dispute with Louie Helm.

I was certainly socially discouraged from revealing things that would harm the "brand" of MIRI and CFAR, by executive people.  There was some discussion at the time of the possibility of corruption in EA/rationality institutions (e.g. Ben Hoffman's posts criticizing effective altruism, GiveWell, and the Open Philanthropy Project); a lot of this didn't end up on the Internet due to PR concerns.

Someone who I was collaborating with at the time (Michael Vassar) was commenting on social epistemology and the strengths and weaknesses of various people's epistemology and strategy, including people who were leaders at MIRI/CFAR.  Subsequently, Anna Salamon said that Michael was causing someone else at MIRI to "downvote Eliezer in his head" and that this was bad because it meant that the "community" would not agree about who the leaders were, and would therefore have akrasia issues due to the lack of agreement on a single leader in their head telling them what to do.  (Anna says, years later, that she was concerned about bias in selectively causing downvotes rather than upvotes; however, at the time, based on what was said, I had the impression that the primary concern was about coordination around common leadership rather than bias specifically.)

This seemed culty to me and some friends; it's especially evocative in relation to Julian Jaynes' writing about Bronze Age cults, which details a psychological model in which idols/gods give people voices in their head telling them what to do.

(As I describe these events in retrospect they seem rather ridiculous, but at the time I was seriously confused about whether I was especially crazy or in-the-wrong, and the leadership was behaving sensibly.  If I were the type of person to trust my own judgment in the face of organizational mind control, I probably wouldn't have been hired in the first place; everything I knew about how to be hired would point towards having little mental resistance to organizational narratives.)

Strange psycho-social-metaphysical hypotheses in a group setting

Zoe gives a list of points showing how "out of control" the situation at Leverage got.  This is consistent with what I've heard from other ex-Leverage people.

The weirdest part of the events recounted is the concern about possibly-demonic mental subprocesses being implanted by other people. As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors ("know-how") and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver's conscious goals. According to IFS-like psychological models, it's common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it's hard to rule out based on physics or psychology.

As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR. These strange experiences are, as far as I can tell, part of a more general social phenomenon around that time period; I recall a tweet commenting that the election of Donald Trump convinced everyone that magic was real.

Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community.  While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible.  (I noted at the time that there might be a sense in which different people have "auras" in a way that is not less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.)

As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.

The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with.  Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.

Alternatively, like me, they can explore these metaphysics while:

Being able to discuss somewhat wacky experiential hypotheses, like the possibility of people spreading mental subprocesses to each other, in a group setting, and have the concern actually taken seriously as something that could seem true from some perspective (and which is hard to definitively rule out), seems much more conducive to people's mental well-being than refusing to have that discussion, leaving people to struggle with (what they think is) mental subprocess implantation on their own.  Leverage definitely had large problems with these discussions, and perhaps tried to reach more intersubjective agreement about them than was plausible (leading to over-reification, as Zoe points out), but these problems seem less severe than those resulting from refusing to have the discussions at all, such as psychiatric hospitalization and jail time.

"Psychosis" doesn't have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Liang's work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.

World-saving plans and rarity narratives

Zoe cites the fact that Leverage has a "world-saving plan" (which included taking over the world) and considered Geoff Anders and Leverage to be extremely special, e.g. Geoff being possibly the best philosopher ever:

Within a few months of joining, a supervisor I trusted who had recruited me confided in me privately, “I think there’s good reason to believe Geoff is the best philosopher who’s ever lived, better than Kant. I think his existence on earth right now is an historical event.”

Like Leverage, MIRI had a "world-saving plan".  This is no secret; it's discussed in an Arbital article written by Eliezer Yudkowsky.  Nate Soares frequently talked about how it was necessary to have a "plan" to make the entire future ok, to avert AI risk; this plan would need to "backchain" from a state of no AI risk and may, for example, say that we must create a human emulation using nanotechnology that is designed by a "genie" AI, which does a narrow task rather than taking responsibility for the entire future; this would allow the entire world to be taken over by a small group including the emulated human. [EDIT: See Nate's clarification [LW(p) · GW(p)], the small group doesn't have to be MIRI specifically, and the upload plan is an example of a plan rather than a fixed super-plan.]

I remember taking on more and more mental "responsibility" over time, noting the ways in which people other than me weren't sufficient to solve the AI alignment problem, and I had special skills, so it was uniquely my job to solve the problem.  This ultimately broke down, and I found Ben Hoffman's post on responsibility to resonate (which discusses the issue of control-seeking).

The decision theory of backchaining and taking over the world is somewhat beyond the scope of this post.  There are circumstances where backchaining is appropriate, and "taking over the world" might be necessary, e.g. if there are existing actors already trying to take over the world and none of them would implement a satisfactory regime.  However, there are obvious problems with multiple actors each attempting to control everything, which are discussed in Ben Hoffman's post.

This connects with what Zoe calls "rarity narratives".  There were definitely rarity narratives around MIRI/CFAR.  Our task was to create an integrated, formal theory of values, decisions, epistemology, self-improvement, etc ("Friendliness theory"), which would help us develop Friendly AI faster than the rest of the world combined was developing AGI (which was, according to leaders, probably in less than 20 years).  It was said that a large part of our advantage in doing this research so fast was that we were "actually trying" and others weren't.  It was stated by multiple people that we wouldn't really have had a chance to save the world without Eliezer Yudkowsky (obviously implying that Eliezer was an extremely historically significant philosopher).

Though I don't remember people saying explicitly that Eliezer Yudkowsky was a better philosopher than Kant, I would guess many would have said so.  No one there, as far as I know, considered Kant worth learning from enough to actually read the Critique of Pure Reason in the course of their research; I only did so years later, and I'm relatively philosophically inclined.  I would guess that MIRI people would consider a different set of philosophers relevant, e.g. would include Turing and Einstein as relevant "philosophers", and I don't have reason to believe they would consider Eliezer more relevant than these, though I'm not certain either way.  (I think Eliezer is a world-historically-significant philosopher, though not as significant as Kant or Turing or Einstein.)

I don't think it's helpful to oppose "rarity narratives" in general.  People need to try to do hard things sometimes, and actually accomplishing those things would make the people in question special, and that isn't a good argument against trying the thing at all.  Intellectual groups with high information integrity, e.g. early quantum mechanics people, can have a large effect on history.  I currently think the intellectual work I do is pretty rare and important, so I have a "rarity narrative" about myself, even though I don't usually promote it.  Of course, a project claiming specialness while displaying low information integrity is, effectively, asking for more control and resources than it can beneficially use.

Rarity narratives can make a group of people more insular, concentrating relevance around itself rather than learning from other sources (in the past or the present), centering local social dynamics on a small number of special people, and increasing pressure on people to try to do (or pretend to try to do) things beyond their actual abilities; Zoe and I both experienced these effects.

(As a hint to evaluating rarity narratives yourself: compare Great Thinker's public output to what you've learned from other public sources; follow citations and see where Great Thinker might be getting their ideas from; read canonical great philosophy and literature; get a quantitative sense of how much insight is coming from which places throughout spacetime.)

The object-level specifics of each case of world-saving plan matter, of course; I think most readers of this post will be more familiar with MIRI's world-saving plan, especially since Zoe's post provides few object-level details about the content of Leverage's plan.

Debugging

Rarity ties into debugging; if what makes us different is that we're Actually Trying and the other AI research organizations aren't, then we're making a special psychological claim about ourselves, that we can detect the difference between actually and not-actually trying, and cause our minds to actually try more of the time.

Zoe asks whether debugging was "required"; she notes:

The explicit strategy for world-saving depended upon a team of highly moldable young people self-transforming into Elon Musks.

I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes".  This part of the plan was the same [EDIT: Anna clarifies [LW(p) · GW(p)] that, while some people becoming like Elon Musk was some people's plan, there was usually acceptance of people not changing themselves; this might to some degree apply to Leverage as well].

Self-improvement was a major focus around MIRI and CFAR, and at other EA orgs.  It often used standard CFAR techniques, which were taught at workshops.  It was considered important to psychologically self-improve to the point of being able to solve extremely hard, future-lightcone-determining problems.

I don't think these are bad techniques, for the most part.  I think I learned a lot by observing and experimenting on my own mental processes.  (Zoe isn't saying Leverage's techniques are bad either, just that you could get most of them from elsewhere.)

Zoe notes a hierarchical structure where people debugged people they had power over:

Trainers were often doing vulnerable, deep psychological work with people with whom they also lived, made funding decisions about, or relied on for friendship. Sometimes people debugged each other symmetrically, but mostly there was a hierarchical, asymmetric structure of vulnerability; underlings debugged those lower than them on the totem pole, never their superiors, and superiors did debugging with other superiors.

This was also the case around MIRI and CFAR.  A lot of debugging was done by Anna Salamon, head of CFAR at the time; Ben Hoffman noted that "every conversation with Anna turns into an Anna-debugging-you conversation", which resonated with me and others.

There was certainly a power dynamic of "who can debug who"; to be a more advanced psychologist is to be offering therapy to others, being able to point out when they're being "defensive", when one wouldn't accept the same from them.  This power dynamic is also present in normal therapy, although the profession has norms such as only getting therapy from strangers, which change the situation.

How beneficial or harmful this was depends on the details.  I heard that "political" discussions at CFAR (e.g. determining how to resolve conflicts between people at the organization, which could result in people leaving the organization) were mixed with "debugging" conversations, in a way that would make it hard for people to focus primarily on the debugged person's mental progress without imposing pre-determined conclusions.  Unfortunately, when there are few people with high psychological aptitude around, it's hard to avoid "debugging" conversations having political power dynamics, although it's likely that the problem could have been mitigated.

[EDIT: See PhoenixFriend's pseudonymous comment [LW(p) · GW(p)], and replies to it, for more on power dynamics including debugging-related ones at CFAR specifically.]

It was really common for people in the social space, including me, to have a theory about how other people are broken, and how to fix them, by getting them to understand a deep principle you do and they don't.  I still think most people are broken and don't understand deep principles that I or some others do, so I don't think this was wrong, although I would now approach these conversations differently.

A lot of the language from Zoe's post, e.g. "help them become a master", resonates.  There was an atmosphere of psycho-spiritual development, often involving Kegan stages.  There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy [EDIT: see Duncan's comment [LW(p) · GW(p)] estimating that the actual amount of interaction between CFAR and MAPLE was pretty low even though there was some overlap in people].

Although I wasn't directly financially encouraged to debug people, I infer that CFAR employees were, since instructing people was part of their job description.

Other issues

MIRI did have less time pressure imposed by the organization itself than Leverage did, despite the deadline implied by the AGI timeline; I had no issues with absurdly over-booked calendars.  I vaguely recall that CFAR employees were overworked especially around workshop times, though I'm pretty uncertain of the details.

Many people's social lives, including mine, were spent mostly "in the community"; much of this time was spent on "debugging" and other psychological work.  Some of my most important friendships at the time, including one with a housemate, were formed largely around a shared interest in psychological self-improvement.  There was, therefore, relatively little work-life separation (which has upsides as well as downsides).

Zoe recounts an experience with having unclear, shifting standards applied, with the fear of ostracism.  Though the details of my experience are quite different, I was definitely afraid of being considered "crazy" and marginalized for having philosophy ideas that were too weird, even though weird philosophy would be necessary to solve the AI alignment problem.  I noticed more people saying I and others were crazy as we were exploring sociological hypotheses that implied large problems with the social landscape we were in (e.g. people thought Ben Hoffman was crazy because of his criticisms of effective altruism). I recall talking to a former CFAR employee who was scapegoated and ousted after failing to appeal to the winning internal coalition; he was obviously quite paranoid and distrustful, and another friend and I agreed that he showed PTSD symptoms [EDIT: I infer scapegoating based on the public reason given being suspicious/insufficient; someone at CFAR points out that this person was paranoid and distrustful while first working at CFAR as well].

Like Zoe, I experienced myself and others being distanced from old family and friends, who didn't understand how high-impact the work we were doing was.  Since leaving the scene, I am more able to talk with normal people (including random strangers), although it's still hard to talk about why I expect the work I do to be high-impact.

An ex-Leverage person I know comments that "one of the things I give Geoff the most credit for is actually ending the group when he realized he had gotten in over his head. That still left people hurt and shocked, but did actually stop a lot of the compounding harm."  (While Geoff is still working on a project called "Leverage", the initial "Leverage 1.0" ended with most of the people leaving.) This is to some degree happening with MIRI and CFAR, with a change in the narrative about the organizations and their plans, although the details are currently less legible than with Leverage.

Conclusion

Perhaps one lesson to take from Zoe's account of Leverage is that spending relatively more time discussing sociology (including anthropology and history), and less time discussing psychology, is more likely to realize benefits while avoiding problems.  Sociology is less inherently subjective and meta than psychology, having intersubjectively measurable properties such as events in human lifetimes and social network graph structures.  My own thinking has certainly gone in this direction since my time at MIRI, to great benefit.  I hope this account I have written helps others to understand the sociology of the rationality community around 2017, and that this understanding helps people to understand other parts of the society they live in.

There are, obviously from what I have written, many correspondences, showing a common pattern for high-ambition ideological groups in the San Francisco Bay Area.  I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who became convinced by their experience to do fake research instead), although I don't know the specifics as well.  EAs generally think that the vast majority of charities are doing low-value and/or fake work.  I also know that San Francisco startup culture produces cult-like structures (and associated mental health symptoms) with regularity.  It seems more productive to, rather than singling out specific parties, think about the social and ecological forces that create and select for the social structures we actually see, which include relatively more and less cult-like structures.  (Of course, to the extent that harm is ongoing due to actions taken by people and organizations, it's important to be able to talk about that.)

It's possible that after reading this, you think this wasn't that bad.  Though I can only speak for myself here, I'm not sad that I went to work at MIRI instead of Google or academia after college.  I don't have reason to believe that either of these environments would have been better for my overall intellectual well-being or my career, despite the mental and social problems that resulted from the path I chose.  Scott Aaronson, for example, blogs about "blank faced" non-self-explaining authoritarian bureaucrats being a constant problem in academia.  Venkatesh Rao writes about the corporate world, and the picture presented is one of a simulation constantly maintained through improv.

I did grow from the experience in the end.  But I did so in large part by being very painfully aware of the ways in which it was bad.

I hope that those that think this is "not that bad" (perhaps due to knowing object-level specifics around MIRI/CFAR justifying these decisions) consider how they would find out whether the situation with Leverage was "not that bad", in comparison, given the similarity of the phenomena observed in both cases; such an investigation may involve learning object-level specifics about what happened at Leverage.  I hope that people don't scapegoat; in an environment where certain actions are knowingly being taken by multiple parties, singling out certain parties has negative effects on people's willingness to speak without actually producing any justice.

Aside from whether things were "bad" or "not that bad" overall, understanding the specifics of what happened, including harms to specific people, is important for actually accomplishing the ambitious goals these projects are aiming at; there is no reason to expect extreme accomplishments to result without very high levels of epistemic honesty.

949 comments

Comments sorted by top scores.

comment by Scott Alexander (Yvain) · 2021-10-17T22:08:56.321Z · LW(p) · GW(p)

I want to add some context I think is important to this.

Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.

Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don't think he thinks they're worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it's especially galling that they're just as bad).  Since then, he's tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combination of drugs and paranoia caused a lot of borderline psychosis, which the Vassarites mostly interpreted as success ("these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird"). Occasionally it would also cause full-blown psychosis, which they would discourage people from seeking treatment for, because they thought psychiatrists were especially evil and corrupt and traumatizing and unable to understand that psychosis is just breaking mental shackles.

(I am a psychiatrist and obviously biased here)

Jessica talks about a cluster of psychoses from 2017 - 2019 which she blames on MIRI/CFAR. She admits that not all the people involved worked for MIRI or CFAR, but kind of equivocates around this and says they were "in the social circle" in some way. The actual connection is that most (maybe all?) of these people were involved with the Vassarites or the Zizians (the latter being IMO a Vassarite splinter group, though I think both groups would deny this characterization). The main connection to MIRI/CFAR is that the Vassarites recruited from the MIRI/CFAR social network.

I don't have hard evidence of all these points, but I think Jessica's text kind of obliquely confirms some of them. She writes:

"Psychosis" doesn't have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Liang's work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.

RD Laing was a 1960s pseudoscientist who claimed that schizophrenia is how "the light [begins] to break through the cracks in our all-too-closed minds". He opposed schizophrenics taking medication, and advocated treatments like "rebirthing therapy" where people role-play fetuses going through the birth canal - for which he was stripped of his medical license. The Vassarites like him, because he is on their side in the whole "actually psychosis is just people being enlightened as to the true nature of society" thing. I think Laing was wrong, psychosis is actually bad, and that the "actually psychosis is good sometimes" mindset is extremely related to the Vassarites causing all of these cases of psychosis.

Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community.  While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible.  (I noted at the time that there might be a sense in which different people have "auras" in a way that is not less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.) As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.

Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she was borderline psychotic, and comparing this to Leverage, who she thinks did a better job by promoting an environment where people accepted these ideas. I think MIRI was correct to be concerned and (reading between the lines) telling her to seek normal medical treatment, instead of telling her that demons were real and she was right to worry about them, and I think her disagreement with this is coming from a belief that psychosis is potentially a form of useful creative learning. While I don't want to assert that I am 100% sure this can never be true, I think it's true rarely enough, and with enough downside risk, that treating it as a psychiatric emergency is warranted.

On the two cases of suicide, Jessica writes:

Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz.  (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)

Ziz tried to create an anti-CFAR/MIRI splinter group whose members had mental breakdowns. Jessica also tried to create an anti-CFAR/MIRI splinter group and had a mental breakdown. This isn't a coincidence - Vassar tried his jailbreaking thing on both of them, and it tended to reliably produce people who started crusades against MIRI/CFAR, and who had mental breakdowns. Here's an excerpt from Ziz's blog on her experience (edited heavily for length, and slightly to protect the innocent):

When I first met Vassar, it was a random encounter in an experimental group call organized by some small-brand rationalist. He talked for about an hour, and automatically became the center of conversation, I typed notes as fast as I could, thinking, “if this stuff is true it changes everything; it’s the [crux] of my life.” (It true, but I did not realize it immediately.) Randomly, another person found the link, came in and said, “hi”. [Vassar] said “hi”, she said “hi” again, apparently for humor. [Vassar] said something terse I forget “well if this is what …”, apparently giving up on the venue, and disconnected without further comment. One by one, the other ~10 people including besides her, including me disconnected disappointedly, wordlessly or just about right after. A wizard was gracing us with his wisdom and she fucked it up. And in my probably-representative case that was just about the only way I could communicate how frustrated I was at her for that.

[Vassar explained how] across society, the forces of gaslighting were attacking people’s basic ability to think and to a justice as a Schelling point until only the built-in Schelling points of gender and race remained, Vassar listed fronts in the war on gaslighting, disputes in the community, and included [local community member ZD] [...] ZD said Vassar broke them out of a mental hospital. I didn’t ask them how. But I considered that both badass and heroic. From what I hear, ZD was, probably as with most, imprisoned for no good reason, in some despicable act of, “get that unsightly person not playing along with the [heavily DRM’d] game we’ve called sanity out of my free world”.

I heard [local community member AM] was Vassar’s former “apprentice”. And I had started picking up jailbroken wisdom from them secondhand without knowing where it was from. But Vassar did it better. After Rationalist Fleet, I concluded I was probably worth Vassar’s time to talk to a bit, and I emailed him, carefully briefly stating my qualifications, in terms of ability to take ideas seriously and learn from him, so that he could get maximally dense VOI on whether to talk to me. A long conversation ensued. And I got a lot from it. [...]

Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve. And didn’t detransition. This all created an awful tension in me. The rationality community was kind of compromised as a rallying point for truthseeking. This was desperately bad for the world. [Vassar] was at the center of, largely the creator of a “no actually for real” rallying point for the jailbroken reality-not-social-reality version of this.

Ziz is describing the same cluster of psychoses Jessica is (including Jessica's own), but I think doing so more accurately, by describing how it was a Vassar-related phenomenon. I would add Ziz herself to the list of trans women who got negative mental effects from Vassar, although I think (not sure) Ziz would not endorse my description of her as having these.

What was the community's response to this? I have heard rumors that Vassar was fired from MIRI a long time ago for doing some very early version of this, although I don't know if it's true. He was banned from REACH (and implicitly rationalist social events) for somewhat unrelated reasons. I banned him from SSC meetups for a combination of reasons including these. For reasons I don't fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything's kind of been frozen in place since then.

I want to clarify that I don't dislike Vassar, he's actually been extremely nice to me, I continue to be in cordial and productive communication with him, and his overall influence on my life personally has been positive. He's also been surprisingly gracious about the fact that I go around accusing him of causing a bunch of cases of psychosis. I don't think he does the psychosis thing on purpose, I think he is honest in his belief that the world is corrupt and traumatizing (which at the margin, shades into values of "the world is corrupt and traumatizing" which everyone agrees are true) and I believe he is honest in his belief that he needs to figure out ways to help people do better. There are many smart people who work with him and support him who have not gone psychotic at all. I don't think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.  My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.

EDIT/UPDATE: I got a chance to talk to Vassar, who disagrees with my assessment above. We're still trying to figure out the details, but so far, we agree that there was a cluster of related psychoses around 2017, all of which were in the same broad part of the rationalist social graph. Features of that part were - it contained a lot of trans women, a lot of math-y people, and some people who had been influenced by Vassar, although Vassar himself may not have been a central member. We are still trying to trace the exact chain of who had problems first and how those problems spread. I still suspect that Vassar unwittingly came up with some ideas that other people then spread through the graph. Vassar still denies this and is going to try to explain a more complete story to me when I have more time.

comment by devi · 2021-10-18T16:48:56.294Z · LW(p) · GW(p)

Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC

Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I'll try to explain some context for the record.

In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than concern about malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which in retrospect I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, reinforcing each other's views. I believe (less certain) I also broached the topic with Michael and/or Anna at some point, which probably went like a brief mutual acknowledgement of this hidden fact before continuing on to topics that were more important.

I don't think anyone mentioned above was being dishonest about what they thought or was acting from a desire to hurt trans people. Yet, the above exchanges did in retrospect cause me emotional pain and stress, and contributed to internalizing sexism and transphobia. I definitely wouldn't describe this as a main causal factor in my psychosis (that was very casual drug use that even Michael chided me for). I can't think of a good policy that would have been helpful to me in the above interactions. Maybe emphasizing bucket-errors [LW · GW] in this context more, or spreading caution about generalizing from abstract models to yourself, but I think I would have been too rash to listen.

I wouldn't say I completely moved past this until years following the events. I think the following things were helpful for that (in no particular order): the intersex brains model and associated brain imaging studies, everyday acceptance while living a normal life, not allowing myself concerns larger than renovations or retirement savings, getting to experience some parts of female socialization and mother-daughter bonding, full support from friends and family in cases where my gender has come into question, and the acknowledgement of a medical system that still has some gate-keeping aspects (note: I don't think this positive effect of a gate-keeping system at all justifies the negative of denying anyone morphological freedom).

Thinking back to these events, engaging with the LessWrong community, and even publicly engaging under my real name all bring back fear and feelings of trauma. I'm not saying this to increase a sense of having been wronged, but as an apology for this not being as long or as well-written as it should be, and for the lateness/absence of any replies/followups.

comment by jessicata (jessica.liu.taylor) · 2021-10-18T13:53:19.744Z · LW(p) · GW(p)

I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he's "causing psychotic breaks" and "jailbreaking people" through conversation, "that listening too much to Vassar [causes psychosis], predictably") isn't obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread [LW(p) · GW(p)] are congratulating themselves on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2021-10-18T19:30:08.780Z · LW(p) · GW(p)

Yes, I agree with you that all of this is very awkward.

I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.

But we have to admit at least small violations of it even to get the concept of "cult". Not just the sort of weak cults we're discussing here, but even the really strong cults like Heaven's Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven's Gate is bad for them, and leave. When we use the word "cult", we're implicitly agreeing that this doesn't always work, and we're bringing in creepier and less comprehensible ideas like "charisma" and "brainwashing" and "cognitive dissonance".

(and the same thing with the concept of "emotionally abusive relationship")

I don't want to call the Vassarites a cult because I'm sure someone will confront me with a Cult Checklist that they don't meet, but I think that it's not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it's weird that you can get that far without robes and chanting or anything, but I never claimed to really understand exactly how cults work, plus I'm sure the drugs helped.

I think believing cults are possible is different in degree if not in kind from Leverage "doing seances...to call on demonic energies and use their power to affect the practitioners' social standing". I'm claiming, though I can't prove it, that what I'm saying is more towards the "believing cults are possible" side.

I'm actually very worried about this! I hate admitting cults are possible! If you admit cults are possible, you have to acknowledge that the basic liberal model has gaps, and then you get things like: if an evangelical deconverts to atheism, the other evangelicals can say "Oh, he's in a cult, we need to kidnap and deprogram him since his best self wouldn't agree with the deconversion." I want to be extremely careful about when we do things like that, which is why I'm not actually "calling for isolating Michael Vassar from his friends". I think in the Outside View we should almost never do this!

But you were the one to mention this cluster of psychotic breaks, and I am trying to provide what I think is a more accurate perspective on them. Maybe in the future we learn that this was because of some weird neuroinflammatory virus that got passed around at a Vassarite meeting and we laugh that we were ever dumb enough to think a person/group could transmit psychotic breaks. But until then, I think the data point that all of this was associated with Vassar and the Vassarites is one we shouldn't just ignore.

Replies from: ChristianKl, jessica.liu.taylor, Benquo
comment by ChristianKl · 2021-10-19T07:15:27.917Z · LW(p) · GW(p)

It seems to me like in the case of Leverage, them working 75 hours per week reduced the time they could have used to use Reason to conclude that they are in a system that's bad for them.

That's very different from someone having a few conversations with Vassar, adopting a new belief, and spending a lot of time reasoning about it alone, with the belief being stable without being embedded in a strong environment that makes independent thought hard because it keeps people busy.

A cult by its nature is a social institution and not just a meme that someone can pass around via having a few conversations.

Replies from: Viliam
comment by Viliam · 2021-10-19T08:36:57.592Z · LW(p) · GW(p)

Perhaps the proper word here might be "manipulation" or "bad influence".

Replies from: Holly_Elmore, ChristianKl
comment by Holly_Elmore · 2021-10-19T23:20:13.235Z · LW(p) · GW(p)

I think "mind virus" is fair. Vassar spoke a lot about how the world as it is can't be trusted. I remember that many of the people in his circle spoke, seemingly apropos of nothing, about how bad involuntary commitment is, so that by the time someone was psychotic their relationship with psychiatry and anyone who would want to turn to psychiatry to help them was poisoned. Within the envelope of those beliefs you can keep a lot of other beliefs safe from scrutiny. 

comment by ChristianKl · 2021-10-22T10:25:29.631Z · LW(p) · GW(p)

The thing with "bad influence" is that it's a pretty value-laden thing. In a religious town the biology teacher who tells the children about evolution and explains how it makes sense that our history goes back a lot further then a few thousands years is reasonably described as bad influence by the parents. 

The religion teacher gets the children to doubt the religious authorities. Those children then can also be a bad influence on others by also getting them to doubt authorities. In a similar war Vassar gets people to question other authorities and social conventions and how those ideas can then be passed on. 

Vassar speaks about things like Moral Mazes [LW · GW]. Memes like that make people distrust institutions. There are the kind of bad influence that can get people to quit their job.

Talking about the biology teacher like they are intend to start an evolution cult feels a bit misleading.

comment by jessicata (jessica.liu.taylor) · 2021-10-21T00:50:06.577Z · LW(p) · GW(p)

It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.

Let's consider a disjunction: 1: There isn't a big effect here, 2: There is a big effect here.

In case 1:

  • It might make sense to discourage people from talking too much about "charisma", "auras", "mental objects", etc, since they're pretty fake, really not the primary factors to think about when modeling society.
  • The main problem with the relevant discussions at Leverage is that they're making grandiose claims of mind powers and justifying e.g. isolating people on the basis of these, not actual mental influence.
  • The case made against Michael, that he can "cause psychotic breaks" by talking with people sometimes (or, in the case of Eric B, by talking sometimes with someone who is talking sometimes with the person in question), has no merit. People are making up grandiose claims about Michael to justify scapegoating him, it's basically a witch hunt. We should have a much more moderated, holistic picture where there are multiple people in a social environment affecting a person, and the people closer to them generally have more influence, such that causing psychotic breaks 2 hops out is implausible, and causing psychotic breaks with only occasional conversation (and very little conversation close to the actual psychotic episode) is also quite unlikely.
  • There isn't a significant falsification of liberal individualism.

In case 2:

  • Since there's a big effect, it makes sense to spend a lot of energy speculating on "charisma", "auras", "mental objects", and similar hypotheses. "Charisma" has fewer details than "auras" which has fewer details than "mental objects"; all of them are hypotheses someone could come up with in the course of doing pre-paradigmatic study of the phenomenon, knowing that while these initial hypotheses will make mis-predictions sometimes, they're (in expectation) moving in the direction of clarifying the phenomenon. We shouldn't just say "charisma" and leave it at that, it's so important that we need more details/gears.
  • Leverage's claims about weird mind powers are to some degree plausible, there's a big phenomenon here even if their models are wrong/silly in some places. The weird social dynamics are a result of an actual attempt to learn about and manage this extremely important phenomenon.
  • The claim that Michael can cause psychotic breaks by talking with people is plausible. The claim that he can cause psychotic breaks 2 hops out might be plausible depending on the details (this is pretty similar to a "mental objects" claim).
  • There is a significant falsification of liberal individualism. Upon learning about the details of how mental influence works, you could easily conclude that some specific form of collectivism is much more compatible with human nature than liberal individualism.

(You could make a spectrum or expand the number of dimensions here; I'm starting with a binary to make the poles obvious.)

It seems like you haven't expressed a strong belief whether we're in case 1 or case 2. Some things you've said are more compatible with case 1 (e.g. Leverage worrying about mental objects being silly, talking about demons being a psychiatric emergency, it being appropriate for MIRI to stop me from talking about demons and auras, liberalism being basically correct even if there are exceptions). Some are more compatible with case 2 (e.g. Michael causing psychotic breaks, "cults" being real and actually somewhat bad for liberalism to admit the existence of, "charisma" being a big important thing).

I'm left with the impression that your position is to some degree inconsistent (which is pretty normal, propagating beliefs fully is hard) and that you're assigning low value to investigating the details of this very important variable.

(I myself still have a lot of uncertainty here; I've had the impression of subtle mental influence happening from time to time but it's hard to disambiguate what's actually happening, and how strong the effect is. I think a lot of what's going on is people partly-unconsciously trying to synchronize with each other in terms of world-model and behavioral plans, and there existing mental operations one can do that cause others' synchronization behavior to have weird/unexpected effects.)

Replies from: Yvain, Natália Mendonça
comment by Scott Alexander (Yvain) · 2021-10-21T01:04:00.887Z · LW(p) · GW(p)

I agree I'm being somewhat inconsistent; I'd rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I'm trying to figure out what went on in these cases in more detail and will probably want to ask you a lot of questions by email if you're open to that.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-21T01:12:29.512Z · LW(p) · GW(p)

Yes, I'd be open to answering email questions.

comment by Natália (Natália Mendonça) · 2021-10-21T02:05:21.694Z · LW(p) · GW(p)

This misses the fact that people’s ability to negatively influence others might vary very widely, making it so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.

Replies from: TekhneMakre, jessica.liu.taylor
comment by TekhneMakre · 2021-10-21T02:32:58.374Z · LW(p) · GW(p)

If it's reasonable to worry about the .01%, it's reasonable to ask how the ability varies. There's some reason, some mechanism. This is worth discussing even if it's hard to give more than partial, metaphorical hypotheses. And if there are these .01% of very strong influencers, that is still an exception to strong liberal individualism.

comment by jessicata (jessica.liu.taylor) · 2021-10-21T02:26:45.156Z · LW(p) · GW(p)

That would still admit some people at Leverage having significant mental influence, especially if they got into weird mental tech that almost no one gets into. A lot of the weirdness is downstream of them encountering "body workers" who are extremely good at e.g. causing mental effects by touching people's back a little; these people could easily be extremal, and Leverage people learned from them. I've had sessions with some post-Leverage people where it seemed like really weird mental effects were happening in some implicit channel (like, I feel a thing poking at the left side of my consciousness and the person says, "oh, I just did an implicit channel thing, maybe you felt that"). I've never experienced effects like that with others, including Michael, Anna, or normal therapists (not on drugs, and not obviously on drugs either, though the comparison is harder there). This could be "placebo" in a way that makes it ultimately not that important, but still, if we're admitting that 0.01% of people have these mental effects then it seems somewhat likely that this includes some Leverage people.

Also, if the 0.01% is disproportionately influential (which, duh), then getting more detailed models than "charisma" is still quite important.

comment by Benquo · 2021-10-19T20:27:13.119Z · LW(p) · GW(p)

One important implication of "cults are possible" is that many normal-seeming people are already too crazy to function as free citizens of a republic.

In other words, from a liberal perspective, someone who can't make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren't competent to make their own life decisions. They're already not free, but in the grip of whatever attractor they found first.

Personally I bite the bullet and admit that I'm not living in a society adequate to support liberal democracy, but instead something more like what Plato's Republic would call tyranny. This is very confusing because I was brought up to believe that I lived in a liberal democracy. I'd very much like to, someday.

Replies from: Holly_Elmore, Jayson_Virissimo
comment by Holly_Elmore · 2021-10-19T23:22:30.821Z · LW(p) · GW(p)

I think there are less extreme positions here. Like "competent adults can make their own decisions, but they can't if they become too addicted to certain substances." I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.  

Replies from: Benquo
comment by Benquo · 2021-10-19T23:40:00.670Z · LW(p) · GW(p)

competent adults can make their own decisions, but they can’t if they become too addicted to certain substances

I think the principled liberal perspective on this is Bryan Caplan's: drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.

I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.

I don't think that many people are "fundamentally incapable of being free." But it seems like some people here are expressing grievances that imply that either they themselves or some others are, right now, not ready for freedom of association.

The claim that someone is dangerous enough that they should be kept away from "vulnerable people" is a declaration of intent to deny "vulnerable people" freedom of association for their own good. (No one here thinks that a group of people who don't like Michael Vassar shouldn't be allowed to get together without him.)

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-19T23:57:09.308Z · LW(p) · GW(p)

drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.

I really don't think this is an accurate description of what is going on in people's minds when they are experiencing drug dependencies. I spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went to great lengths trying to tie himself to various masts to stop, and generally expressed a strong preference for somehow being able to self-modify the addiction away, but ultimately failed to do so.

Of course, things might be different for different people, but at least in the one case where I have a very large amount of specific data, this seems like it's a pretty bad model of people's preferences. Based on the private notebooks of his that I found after his death, this also seemed to be his position in purely introspective contexts without obvious social desirability biases. My sense is that he would have strongly preferred someone to somehow take control away from him, in this specific domain of his life.

Replies from: Benquo, NancyLebovitz
comment by Benquo · 2021-10-20T00:06:50.689Z · LW(p) · GW(p)

This seems like some evidence that the principled liberal position is false - specifically, that it is not self-ratifying. If you ask some people what their preferences are, they will express a preference for some of their preferences to be thwarted, for their own good.

Contractarianism can handle this sort of case, but liberal democracy with inalienable rights cannot, and while liberalism is a political philosophy, contractarianism is just a policy proposal, with no theory of citizenship or education.

comment by NancyLebovitz · 2021-11-19T10:16:48.172Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Olivier_Ameisen

A sidetrack, but a French cardiologist found that Baclofen (a muscle relaxant) cured his alcoholism by curing the craving. He was surprised to find that it also cured compulsive spending, when he hadn't even realized he had a problem.

He had a hard time raising money for an official experiment; it came out inconclusive, and he died before the research got any further.

comment by Jayson_Virissimo · 2021-10-19T20:50:48.060Z · LW(p) · GW(p)

This is more-or-less Aristotle's defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).

Replies from: Benquo
comment by Benquo · 2021-10-20T04:45:40.434Z · LW(p) · GW(p)

Aristotle seems (though he's vague on this) to be thinking in terms of fundamental attributes, while I'm thinking in terms of present capacity, which can be reduced by external interventions such as schooling.

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

*As far as I know I didn't know any such people before 2020; it's very easy for members of the educated class to mistake our bubble for statistical normality.

Replies from: Hazard, NancyLebovitz
comment by Hazard · 2021-10-20T13:28:41.688Z · LW(p) · GW(p)

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

This is very interesting to me! I'd like to hear more about how the two groups' behavior looks different, and also your thoughts on what's the difference that makes the difference: what are the pieces of "being brought up to go to college" that lead to one class of reactions?

Replies from: CharlieTheBananaKing, ChristianKl, Benquo
comment by CharlieTheBananaKing · 2021-10-21T19:28:22.992Z · LW(p) · GW(p)

I have talked to Vassar; while he has a lot of "explicit control over conversations", which could be called charisma, I'd hypothesize that the fallout is actually from his ideas (the charisma/intelligence making him able to credibly argue those).

My hypothesis is the following:  I've met a lot of rationalists + adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of a lot of these people's identity ("I'm an EA person, thus I'm a good person doing important work"). Two anecdotes to illustrate this:
- I recently argued against a committed EA person. Eventually, I started feeling almost-bad about arguing (even though we're both self-declared rationalists!) because I realised that my line of reasoning questioned his entire life. His identity was built deeply on EA; his job was selected to maximize money to give to charity. 
- I had a conversation with a few unemployed rationalist computer scientists. I suggested we might start a company together. One reply I got: "Only if it works on the alignment problem, everything else is irrelevant to me". 

Vassar very persuasively argues against EA and the work done at MIRI/CFAR (he doesn't disagree that alignment is a problem, AFAIK). For people largely defined by these ideas, one can see how that could be threatening to their identity. I've read "I'm an evil person" from multiple people relating their "Vassar-psychosis" experience. To me it's very easy to see how one could get there if the defining part of the identity is "I'm a good person because I work on EA/Alignment" plus "EA/Alignment is a scam" arguments. 
It also makes Vassar look like a genius (God), because "why wouldn't the rest of the rationalists see the arguments?", while it's really just a group-bias phenomenon, where the social truth of the rationalist group is that obviously EA is good and AI alignment terribly important.

This would probably predict that the people experiencing "Vassar-psychosis" would have a stronger-than-average constructed identity based on EA/CFAR/MIRI?

Replies from: michael-chen
comment by mic (michael-chen) · 2021-10-22T18:14:36.525Z · LW(p) · GW(p)

What are your or Vassar's arguments against EA or AI alignment? This is only tangential to your point, but I'd like to know about it if EA and AI alignment are not important.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-23T16:03:26.588Z · LW(p) · GW(p)

The general argument is that EAs are not really doing what they say they do. One example from Vassar would be that when it comes to COVID-19 there seems to have been relatively little effective work by EAs. In contrast, Vassar considered giving prisoners access to personal protective equipment the most important intervention and organized effectively to make that happen.

EAs created at EA Global an environment where someone who wrote a good paper warning about the risks of gain-of-function research doesn't address that directly but only talks about it indirectly, focusing on more meta-issues. Instead of having conflicts with people doing gain-of-function research, the EA community mostly ignored its problems and funded work that's in less conflict with the establishment. There's nearly no interest in the EA community in learning from those errors, and people would rather avoid conflicts.

If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information. 

AI alignment is important, but just because one "works on AI risk" doesn't mean that the work actually decreases AI risk. Tying your personal identity to being someone who works to decrease AI risk makes it hard to clearly reason about whether one's actions actually do. OpenAI is an organization where people who see themselves as "working on AI alignment" work, and you can look at the recent discussion of whether that work reduces or increases actual risk; it is an open debate.

In a world where human alignment doesn't work well enough to prevent dangerous gain-of-function experiments from happening, thinking about AI alignment instead of the problem of human alignment, where it's easier to get feedback, might be the wrong strategic focus.

Replies from: NancyLebovitz, jkaufman
comment by NancyLebovitz · 2021-11-19T10:18:50.164Z · LW(p) · GW(p)

Did Vassar argue that existing EA organizations weren't doing the work they said they were doing, or that EA as such was a bad idea? Or maybe that it was too hard to get organizations to do it?

Replies from: jessica.liu.taylor, ChristianKl, jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-11-19T15:26:07.128Z · LW(p) · GW(p)

He argued

(a) EA orgs aren't doing what they say they're doing (e.g. cost effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it's hard to get organizations to do what they say they do

(b) Utilitarianism isn't a form of ethics, it's still necessary to have principles, as in deontology or two-level consequentialism

(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn't well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved

(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact

comment by ChristianKl · 2021-11-19T17:31:32.918Z · LW(p) · GW(p)

If you want, for example, the criticism of GiveWell: Ben Hoffman was employed at GiveWell and had experiences which suggest that the process by which their reports are made has epistemic problems. If you want the details, talk to him.

The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporate people, and the dynamics that the Immoral Mazes sequence describes played themselves out.

Vassar's own actions are about doing altruistic work more directly, by looking for the most powerless people who need help and working to help them. In the COVID case he identified prisoners and then worked on making PPE available for them.

You might say his thesis is that "effective" in EA is about adding a management layer for directing interventions, and that management layer has the problems that the Immoral Mazes sequence describes. According to Vassar, someone who wants to be altruistic shouldn't delegate to other people his judgements of what's effective and thus warrants support.

comment by jefftk (jkaufman) · 2021-10-24T00:57:42.374Z · LW(p) · GW(p)

If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information.

Link? I'm not finding it

Replies from: ChristianKl
comment by ChristianKl · 2021-10-24T07:28:01.101Z · LW(p) · GW(p)

https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=zqcynfzfKma6QKMK9

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-25T20:41:26.948Z · LW(p) · GW(p)

I think what you're pointing to is:

I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting)

I'm getting a bit pedantic, but I wouldn't gloss this as "CEA used legal threats to cover up Leverage related information". Partly because the original bit is vague, but also because "cover up" implies that the goal is to hide information.

For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.

Replies from: ChristianKl, habryka4, ChristianKl
comment by ChristianKl · 2021-10-25T23:02:06.342Z · LW(p) · GW(p)

In that timeframe, CEA and Leverage were running the Pareto Fellowship together. If you read the common knowledge post, you find people reporting that they were misled by CEA because the announcement didn't mention that the Pareto Fellowship was largely run by Leverage.

On its mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved and says only: "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers."

That does look to me like hiding information about the cooperation between Leverage and CEA. 

I do think that publicly presuming that people who hide information have something to hide is useful. If there's nothing to hide, I'd love to know what happened back then, or who thinks what happened should stay hidden. At the minimum, I do think that CEA withholding the information that the people who went to their programs spent their time in what now appears to be a cult is something that CEA should be open about on their mistakes page. 

Replies from: habryka4, jkaufman
comment by habryka (habryka4) · 2021-10-26T19:04:30.226Z · LW(p) · GW(p)

Yep, I think CEA has in the past straightforwardly misrepresented things (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage's history with Effective Altruism. I think this was bad, and continues to be bad.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-27T18:06:52.816Z · LW(p) · GW(p)

My initial thought on reading this was 'this seems obviously bad', and I assumed this was done to shield CEA from reputational risk.

Thinking about it more, I could imagine an epistemic state I'd be much more sympathetic to: 'We suspect Leverage is a dangerous cult, but we don't have enough shareable evidence to make that case convincingly to others, or we aren't sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don't feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can't say anything we expect others to find convincing. So we'll have to just steer clear of the topic for now.'

Still seems better to just not address the subject if you don't want to give a fully accurate account of it. You don't have to give talks on the history of EA!

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-27T18:35:20.153Z · LW(p) · GW(p)

I think the epistemic state of CEA was some mixture of something pretty close to what you list here, and something closer to "Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth".

Replies from: ChristianKl
comment by ChristianKl · 2021-10-27T19:39:22.838Z · LW(p) · GW(p)

"Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth"

That has the corollary: "We don't expect EAs to care enough about the truth/being transparent that this is a huge reputational risk for us."

comment by jefftk (jkaufman) · 2021-10-26T01:38:30.546Z · LW(p) · GW(p)

It does look weird to me that CEA doesn't include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:

Hi CEA,

On https://www.centreforeffectivealtruism.org/our-mistakes I see "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers. We realized during and after the program that senior management did not provide enough oversight of the program. For example, reports by some applicants indicate that the interview process was unprofessional and made them deeply uncomfortable."

Is there a reason that the mistakes page does not mention the involvement of Leverage in the Pareto Fellowship? [1]

Jeff

[1] https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=znudKxFhvQxgDMv7k [LW(p) · GW(p)]

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-27T19:18:29.147Z · LW(p) · GW(p)

They wrote back, linking me to https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QcdhTjqGcSc99sNN [LW(p) · GW(p)]

("we're working on a couple of updates to the mistakes page, including about this")

comment by habryka (habryka4) · 2021-10-26T18:59:25.254Z · LW(p) · GW(p)

Yep, I think the situation is closer to what Jeff describes here, though I honestly don't actually know, since people tend to get cagey when the topic comes up.

comment by ChristianKl · 2021-11-18T17:25:58.951Z · LW(p) · GW(p)

I talked with Geoff, and according to him there's no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.

Replies from: habryka4
comment by habryka (habryka4) · 2021-11-18T22:33:31.182Z · LW(p) · GW(p)

Huh, that's surprising, if by that he means "no contracts between anyone currently at Leverage and anyone at CEA". I currently still think it's the case, though I also don't see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees? 

Replies from: ChristianKl
comment by ChristianKl · 2021-11-19T08:34:37.241Z · LW(p) · GW(p)

What he said is compatible with ex-CEA people still being bound by the NDAs they signed while they were at CEA. I don't think anything happened that releases ex-CEA people from those NDAs.

The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if it has an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn't unilaterally lift the settlement contract.

Public pressure on CEA seems to be necessary to get the information out in the open.

comment by ChristianKl · 2021-10-21T09:46:32.847Z · LW(p) · GW(p)

Talking with Vassar feels very intellectually alive, maybe like a high density of insight porn. I imagine that the people Ben talks about wouldn't get much enjoyment out of insight porn either, so that emotional impact isn't there.

There's probably also an element where plenty of people who can normally follow an intellectual conversation can't keep up in a conversation with Vassar, and are left after a conversation with a bunch of different ideas that lack order in their minds. I imagine that sometimes there's an idea overload that prevents people from critically thinking through some of the ideas.

If you have a person who hasn't gone to college, they are used to encountering people who make intellectual arguments that go over their head and have a way to deal with that. 

From meeting Vassar, I don't feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff). 

Replies from: Benquo
comment by Benquo · 2021-10-22T03:58:29.714Z · LW(p) · GW(p)

This seems mostly right; they're more likely to think "I don't understand a lot of these ideas, I'll have to think about this for a while" or "I don't understand a lot of these ideas, he must be pretty smart and that's kinda cool" than to feel invalidated by this and try to submit to him in lieu of understanding.

comment by Benquo · 2021-10-22T02:31:16.135Z · LW(p) · GW(p)

The people I know who weren't brought up to go to college have more experience navigating concrete threats and dangers, which can't be avoided through conformity, since the system isn't set up to take care of people like them. They have to know what's going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.

In general this means that they're much more comfortable with the kind of confrontation Vassar engages in, than high-class people are.

Replies from: Hazard
comment by Hazard · 2021-10-23T01:42:59.344Z · LW(p) · GW(p)

This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.

comment by NancyLebovitz · 2021-11-19T10:01:54.880Z · LW(p) · GW(p)

This is interesting to me because I was brought up to go to college, but I didn't take it seriously (plausibly from depression or somesuch), and I definitely think of him as a guy with an interesting perspective. Okay, a smart guy with an interesting perspective, but not a god.

It had never occurred to me before that maybe people who were brought up to assume they were going to college might generally have a different take on the world than I do.

comment by Zack_M_Davis · 2021-10-18T02:48:52.693Z · LW(p) · GW(p)

I talked and corresponded with Michael a lot during 2017–2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don't think you're being fair.

"jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself)

I'm confident this is only a Ziz-ism: I don't recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from him.

again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs [...] describing how it was a Vassar-related phenomenon

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)

But, well ... if you genuinely thought that institutions and a community that you had devoted a lot of your life to building up were now failing to achieve their purposes, wouldn't you want to talk to people about it? If you genuinely thought that certain chemicals would make your friends' lives better, wouldn't you recommend them?

Michael is a charismatic guy who has strong views and argues forcefully for them. That's not the same thing as having mysterious mind powers to "make people paranoid" or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I'm sure he'd be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.

borderline psychosis, which the Vassarites mostly interpreted as success ("these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird")

I can't speak for Michael or his friends, and I don't want to derail the thread by going into the details of my own situation. (That's a future community-drama post, for when I finally get over enough of my internalized silencing-barriers to finish writing it.) But speaking only for myself, I think there's a nearby idea that actually makes sense: if a particular social scene is sufficiently crazy (e.g., it's a cult), having a mental breakdown is an understandable reaction. It's not that mental breakdowns are in any way good—in a saner world, that wouldn't happen. But if you were so unfortunate to be in a situation where the only psychologically realistic outcomes were either to fall into conformity with the other cult-members, or have a stress-and-sleep-deprivation-induced psychotic episode as you undergo a "deep emotional break with the wisdom of [your] pack" [LW · GW], the mental breakdown might actually be less bad in the long run, even if it's locally extremely bad.

My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.

I recommend hearing out the pitch and thinking it through for yourself. (But, yes, without drugs; I think drugs are very risky and strongly disagree with Michael on this point.)

ZD said Vassar broke them out of a mental hospital. I didn't ask them how.

(Incidentally, this was misreporting on my part, due to me being crazy at the time and attributing abilities to Michael that he did not, in fact, have. Michael did visit me in the psych ward, which was incredibly helpful—it seems likely that I would have been much worse off if he hadn't come—but I was discharged normally; he didn't bust me out.)

Replies from: Yvain, Yvain, ChristianKl
comment by Scott Alexander (Yvain) · 2021-10-18T10:43:13.426Z · LW(p) · GW(p)

I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop, and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were feeling slightly better, and obviously they just responded with their "it's correct to be freaking out about learning your entire society is corrupt and gaslighting" shtick. 

comment by Scott Alexander (Yvain) · 2021-10-18T11:24:00.823Z · LW(p) · GW(p)

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)

[...]

Michael is a charismatic guy who has strong views and argues forcefully for them. That's not the same thing as having mysterious mind powers to "make people paranoid" or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I'm sure he'd be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.

I more or less Outside View agree with you on this, which is why I don't go around making call-out threads or demanding people ban Michael from the community or anything like that (I'm only talking about it now because I feel like it's fair for the community to try to defend itself after Jessica attributed all of this to the wider community instead of Vassar specifically). "This guy makes people psychotic by talking to them" is a silly accusation to go around making, and I hate that I have to do it!

But also, I do kind of notice the skulls and they are really consistent, and I would feel bad if my desire not to say this ridiculous thing resulted in more people getting hurt.

I think the minimum viable narrative here is, as you say, something like "Michael is very good at spotting people right on the verge of psychosis, and then he suggests they take drugs." Maybe a slightly more complicated narrative involves bringing them to a state of total epistemic doubt where they can't trust any institutions or any of the people they formerly thought were their friends, although now this is getting back into the "he's just having normal truth-seeking conversation" objection. He also seems really good at pushing trans people's buttons in terms of their underlying anxiety around gender dysphoria (see the Ziz post), so maybe that contributes somehow. I don't know how it happens, I'm sufficiently embarrassed to be upset about something which looks like "having a nice interesting conversation" from the outside, and I don't want to violate liberal norms that you're allowed to have conversations - but I think those norms also make it okay to point out the very high rate at which those conversations end in mental breakdowns.

Maybe one analogy would be people with serial emotionally abusive relationships - should we be okay with people dating Brent? Like yes, he had a string of horrible relationships that left the other person feeling violated and abused and self-hating and trapped. On the other hand, most of this, from the outside, looked like talking. He explained why it would be hurtful for the other person to leave the relationship or not do what he wanted, and he was convincing and forceful enough about it that it worked (I understand he also sometimes used violence, but I think the narrative still makes sense without it). Even so, the community tried to make sure people knew if they started a relationship with him they would get hurt, and eventually got really insistent about that. I do feel like this was a sort of boundary crossing of important liberal norms, but I think you've got to at least leave that possibility open for when things get really weird.

Replies from: mathenjoyer, Viliam
comment by mathenjoyer · 2021-10-22T02:17:08.384Z · LW(p) · GW(p)

Thing 0:

Scott.

Before I actually make my point I want to wax poetic about reading SlateStarCodex.

In some post whose name I can't remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, "Huh, I need to only be convinced by true things."

This is extremely relatable to my lived experience. I am a stereotypical "high-functioning autist." I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.

To the degree that "rationality styles" are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.

Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.

Thing 1:

Imagine two world models:

  1. Some people want to act as perfect nth-order cooperating utilitarians, but can't because of human limitations. They are extremely scrupulous, so they feel anguish and collapse emotionally. To prevent this, they rationalize and confabulate explanations for why their behavior actually is perfect. Then a moderately schizotypal man arrives and says: "Stop rationalizing." Then the humans revert to the all-consuming anguish.
  2. A collection of imperfect human moral actors who believe in utilitarianism act in an imperfect utilitarian way. An extremely charismatic man arrives who uses their scrupulosity to convince them they are not behaving morally, and then leverages their ensuing anguish to hijack their agency.

Which of these world models is correct? Both, obviously, because we're all smart people here and understand the Machiavellian Intelligence Hypothesis.

Thing 2:

Imagine a being called Omegarapist. It has important ideas about decision theory and organizations. However, it has an uncontrollable urge to rape people. It is not a superintelligence; it is merely an extremely charismatic human. (This is a refutation of the Brent Dill analogy. I do not know much about Brent Dill.)

You are a humble student of Decision Theory. What is the best way to deal with Omegarapist?

  1. Ignore him. This is good for AI-box reasons, but bad because you don't learn anything new about decision theory. Also, humans with strange mindstates are more likely to provide new insights, conditioned on them having insights to give (this condition excludes extreme psychosis).
  2. Let Omegarapist out. This is a terrible strategy. He rapes everybody, AND his desire to rape people causes him to warp his explanations of decision theory.

Therefore we should use Strategy 1, right? No. This is motivated stopping. Here are some other strategies.

1a. Precommit to only talk with him if he castrates himself first.

1b. Precommit to call in the Scorched-Earth Dollar Auction Squad (California law enforcement) if he has sex with anybody involved in this precommitment then let him talk with anybody he wants.

I made those in 1 minute of actually trying.

Returning to the object level, let us consider Michael Vassar. 

Strategy 1 corresponds to exiling him. Strategy 2 corresponds to a complete reputational blank-slate and free participation. In three minutes of actually trying, here are some alternate strategies.

1a. Vassar can participate but will be shunned if he talks about "drama" in the rationality community or its social structure. 

1b. Vassar can participate but is not allowed to talk with one person at once, having to always be in a group of 3.

2a. Vassar can participate but has to give a detailed citation, or an extremely prominent low-level epistemic status mark, to every claim he makes about neurology or psychiatry. 

I am not suggesting any of these strategies, or even endorsing the idea that they are possible. I am asking: WHY THE FUCK IS EVERYONE MOTIVATED STOPPING ON NOT LISTENING TO WHATEVER HE SAYS!!!

I am a contractualist and a classical liberal. However, I recognize the empirical fact that there are large cohorts of people who relate to language exclusively for the purpose of predation and resource expropriation. What is a virtuous man to do?

The answer relies on the nature of language. Fundamentally, the idea of a free marketplace of ideas doesn't rely on language or its use; it relies on the asymmetry of a weapon. The asymmetry of a weapon is a mathematical fact about information processing. It exists in the territory. If you see an information source that is dangerous, build a better weapon.

You are using a powerful asymmetric weapon of Classical Liberalism called language. Vassar is the fucking Necronomicon. Instead of sealing it away, why don't we make another weapon? This idea that some threats are temporarily too dangerous for our asymmetric weapons, and have to be fought with methods other than reciprocity, is the exact same epistemology-hole found in diversity-worship.

"Diversity of thought is good."

"I have a diverse opinion on the merits of vaccination."

"Diversity of thought is good, except on matters where diversity of thought leads to coercion or violence."

"When does diversity of thought lead to coercion or violence?"

"When I, or the WHO, say so. Shut up, prole."

This is actually quite a few skulls, but everything has quite a few skulls. People die very often. 

Thing 3:

Now let me address a counterargument:

Argument 1: "Vassar's belief system posits a near-infinitely powerful omnipresent adversary that is capable of ill-defined mind control. This is extremely conflict-theoretic, and predatory."

Here's the thing: rationality in general is similar. I will use that same anti-Vassar counterargument as a steelman for sneerclub.

Argument 2: "The beliefs of the rationality community posit complete distrust in nearly every source of information and global institution, giving them an excuse to act antisocially. It describes human behavior as almost entirely Machiavellian, allowing them to be conflict-theoretic, selfish, rationalizing, and utterly incapable of coordinating. They 'logically deduce' the relevant possibility of eternal suffering or happiness for the human species (FAI and s-risk), and use that to control people's current behavior and coerce them into giving up their agency."

There is a strategy that accepts both of these arguments. It is called epistemic learned helplessness. It is actually a very good strategy if you are a normie. Metis and the reactionary concept of "traditional living/wisdom" are related principles. I have met people with 100 IQ who I would consider highly wise, due to skill at this strategy (and not accidentally being born religious, which is its main weak point.)

There is a strategy that rejects both of these arguments. It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype. Very few people follow this strategy so it is hard to give examples, but I will leave a quote from an old Scott Aaronson paper that I find very inspiring. "In pondering these riddles, I don’t have any sort of special intuition, for which the actual arguments and counterarguments that I can articulate serve as window-dressing. The arguments exhaust my intuition."

THERE IS NO EFFECTIVE LONG-TERM STRATEGY THAT REJECTS THE SECOND ARGUMENT BUT ACCEPTS THE FIRST! THIS IS WHERE ALL THE FUCKING SKULLS ARE! Why? Because it requires a complex notion of what arguments to accept, and the more complex the notion, the easier it will be to rationalize around, apply inconsistently, or Goodhart. See "A formalist manifesto" by Moldbug for another description of this. (This reminds me of how UDT/FDT/TDT agents behave better than causal agents at everything, but get counterfactually mugged, which seems absurd to us. If you try to come up with some notion of "legitimate information" or "self-locating information" to prevent an agent from getting mugged, it will similarly lose functionality in the non-anthropic cases. [See the Sleeping Beauty problem for a better explanation.])
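
(To make the counterfactual mugging trade-off concrete, here is the standard toy version, with the usual illustrative numbers rather than anything specific to this discussion: Omega flips a fair coin; on tails it asks you for $100, and on heads it pays you $10,000 only if it predicts you would have paid on tails.)

\[
\mathbb{E}[\text{pay}] = \tfrac{1}{2}(+\$10{,}000) + \tfrac{1}{2}(-\$100) = +\$4{,}950,
\qquad
\mathbb{E}[\text{refuse}] = \$0.
\]

A UDT/FDT-style agent evaluates the policy before seeing the coin and pays; a causal agent who has already seen tails gains nothing by paying and refuses, which is the "absurd"-seeming gap mentioned above.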

The only real social epistemologies are of the form:

"Free speech, but (explicitly defined but also common-sensibly just definition of ideas that lead to violence)."

Mine in particular is, "Free speech but no (intentionally and directly inciting panic or violence using falsehoods)."

To put it a certain way, once you get on the Taking Ideas Seriously train, you cannot get off. 

Thing 4:

Back when SSC existed, I got bored one day and looked at the blogroll. I discovered Hivewired. It was bad. Through Hivewired I discovered Ziz. I discovered the blackmail allegations while sick with a fever and withdrawing off an antidepressant. I had a mental breakdown, feeling utterly betrayed by the rationality community despite never even having talked to anyone in it. Then I rationalized it away. To be fair, this was reasonable considering the state in which I processed the information. However, the thought processes used to dismiss the worry were absolutely rationalizations. I can tell because I can smell them.

Fast forward a year. I am at a yeshiva to discover whether I want to be religious. I become an anti-theist and start reading rationality stuff again. I check out Ziz's blog out of perverse curiosity. I go over the allegations again. I find a link to a more cogent, falsifiable, and specific case. I freak the fuck out. Then I get to work figuring out which parts are actually true.

MIRI paid out to blackmail. There's an unironic Catholic working at CFAR and everyone thinks this is normal. He doesn't actually believe in god, but he believes in belief, which is maybe worse. CFAR is a collection of rationality workshops, not a coordinated attempt to raise the sanity waterline (Anna told me this in a private communication, and this is public information as far as I know), but has not changed its marketing to match. Rationalists are incapable of coordinating, which is basically their entire job. All of these problems were foreseen by the Sequences, but no one has read the Sequences because most rationalists are an army of sci-fi midwits who read HPMOR then changed the beliefs they were wearing. (Example: Andrew Rowe. I'm sorry but it's true, anyways please write Arcane Ascension book 4.)

I make contact with the actual rationality community for the first time. I trawl through blogs, screeds, and incoherent symbolist rants about morality written as a book review of The Northern Caves. Someone becomes convinced that I am an internet gangstalker who created an elaborate false identity of an 18-year-old gap year kid to make contact with them. Eventually I contact Benjamin Hoffman, who leads me to Vassar, who leads me to the Vassarites.

He points out to me a bunch of things that were very demoralizing, and absolutely true. Most people act evil out of habituation and deviancy training, including my loved ones. Global totalitarianism is a relevant s-risk as societies become more and more hysterical due to a loss of existing wisdom traditions, and too low of a sanity waterline to replace them with actual thinking. (Mass surveillance also helps.)

I work on a project with him trying to create a micro-state within the government of New York City. During and after this project I am increasingly irritable and misanthropic. The project does not work. I effortlessly update on this, distance myself from him, then process the feeling of betrayal by the rationality community and inability to achieve immortality and a utopian society for a few months. I stop being a Vassarite. I continue to read blogs to stay updated on thinking well, and eventually I unlearn the new associated pain. I talk with the Vassarites as friends and associates now, but not as a club member.

What does this story imply? Michael Vassar induced mental damage in me, partly through the way he speaks and acts. However, as a primary effect of this, he taught me true things. With basic rationality skills, I avoid contracting the Vassar, then heal the damage to my social life and behavior caused by this whole shitstorm (most of said damage was caused by non-Vassar factors).

Now I am significantly happier, more agentic, and more rational.

Thing 5:

When I said what I did in Thing 1, I meant it. Vassar gets rid of identity-related rationalizations. Vassar drives people crazy. Vassar is very good at getting other people to see the moon in "finger pointing at the moon" problems and moving people out of local optima into better local optima. This requires the work of going downwards in the fitness landscape first. Vassar's ideas are important and many are correct. It just happens to be that he might drive you insane. The same could be said of rationality. Reality is unfair; good epistemics isn't supposed to be easy. Have you seen mathematical logic? (It's my favorite field.)

An example of an important idea that may come from Vassar, but is likely much older:

Control over a social hierarchy goes to a single person; this is a winner-take-all (plurality-style) preference aggregation system. In such systems, the best strategy is to vote only for one of the two blocs that "matter." Similarly, if you need to join a war and know you will be killed if your side loses, you should join the winning side. Thus humans are attracted to powerful groups of humans. This is a (grossly oversimplified) evolutionary origin of one type of conformity effect.

Power is the ability to make other human beings do what you want. There are fundamentally two strategies to get it: help other people so that they want you to have power, or hurt other people to credibly signal that you already have power. (Note the correspondence of these two to prestige and dominance hierarchies, respectively.) Credibly signaling that you have a lot of power is almost enough to get more power.

However, if the people harming themselves to signal your power admit publicly that they are harming themselves, they can coordinate with neutral parties to move the Schelling point and establish a new regime. Thus there are two obvious strategies for achieving ultimate power: help people get what they want (extremely difficult), or make people publicly harm themselves while shouting how great they feel (much easier). The famous bad equilibrium of 8 hours of shocking oneself per day is an obvious example.

Benjamin Ross Hoffman's blog is very good, but awkwardly organized. He conveys explicit, literal models of these phenomena that are very useful and do not carry the risk of filling your head with whispers from the beyond. However, they have less impact because of it.

Thing 6:

I'm almost done with this mad effortpost. I want to note one more thing. Mistake theory works better than conflict theory. THIS IS NOT NORMATIVE.

Facts about the map-territory distinction and facts about the behaviors of mapmaking algorithms are facts about the territory. We can imagine a very strange world where conflict theory is a more effective way to think. One of the key assumptions of conflict theorists is that complexity, or attempts to undermine moral certainty, are usually mind control. Another key assumption is that entrenched power groups, or individual malign agents, will use these things to hack you.

These conditions are neither necessary nor sufficient for conflict theory to be better than mistake theory. I have an ancient and powerful technique called "actually listening to arguments." When I'm debating with someone who I know argues in bad faith, I decrypt everything they say into logical arguments. Then I use those logical arguments to modify my world model. One might say adversaries can use biased selection and rationalization to make you less logical despite this strategy. I say, on an incurable hardware and wetware level, you are already doing this. (For example, any Bayesian agent with finite storage space is subject to the Halo Effect, as you described in a post once.) Having someone do it in a different direction can helpfully knock you out of your models and back into reality, even if their models are bad. This is why it is still worth decrypting the actual information content of people you suspect to be in bad faith.

Uh, thanks for reading, I hope this was coherent, have a nice day.

Replies from: Unreal, cousin_it, FeepingCreature, Hazard, xtz05qw
comment by Unreal · 2021-10-22T10:21:28.557Z · LW(p) · GW(p)

I enjoyed reading this. Thanks for writing it. 

One note though: I think this post (along with most of the comments) isn't treating Vassar as a fully real person with real choices. It (also) treats him like some kind of 'force in the world' or 'immovable object'. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I'm glad you yourself were able to "With basic rationality skills, avoid contracting the Vassar, then [heal] the damage to [your] social life." 

But I am worried about people treating him like a force of nature that you make contact with and then just have to deal with whatever the effects of that are. 

I think it's pretty immoral to de-stabilize people to the point of maybe-insanity, and I think he should try to avoid it, to whatever extent that's in his capacity, which I think is a lot. 

"Vassar's ideas are important and many are correct. It just happens to be that he might drive you insane."

I might think this was a worthwhile tradeoff if I actually believed the 'maybe insane' part was unavoidable, and I do not believe it is. I know that with more mental training, people can absorb more difficult truths without risk of damage. Maybe Vassar doesn't want to offer this mental training himself; that isn't much of an excuse, in my book, to target people who are 'close to the edge' (where 'edge' might be near a better local optimum) but who lack solid social support, rationality skills, mental training, or spiritual groundedness and then push them. 

His service is well-intentioned, but he's not doing it wisely and compassionately, as far as I can tell. 

Replies from: SaidAchmiz, mathenjoyer, Unreal
comment by Said Achmiz (SaidAchmiz) · 2021-10-22T10:52:36.746Z · LW(p) · GW(p)

I think that treating Michael Vassar as an unchangeable force of nature is the right way to go—for the purposes of discussions precisely like this one. Why? Because even if Michael himself can (and chooses to) alter his behavior in some way (regardless of whether this is good or bad or indifferent), nevertheless there will be other Michael Vassars out there—and the question remains, of how one is to deal with arbitrary Michael Vassars one encounters in life.

In other words, what we’ve got here is a vulnerability (in the security sense of the word). One day you find that you’re being exploited by a clever hacker (we decline to specify whether he is a black hat or white hat or what). The one comes to you and recommends a patch. But you say—why should we treat this specific attack as some sort of unchangeable force of nature? Rather we should contact this hacker and persuade him to cease and desist. But the vulnerability is still there…

Replies from: ChristianKl
comment by ChristianKl · 2021-10-22T13:09:49.932Z · LW(p) · GW(p)

I think you can either have a discussion that focuses on an individual (and if you do, it makes sense to model them with agency) or you can have more general threat models.

If, however, you mix the two, you are likely to get confused in both directions. You will project ideas from your threat model onto the person, and you will take random aspects of the individual into your threat model that aren't typical of the threat.

comment by mathenjoyer · 2021-10-23T02:48:10.540Z · LW(p) · GW(p)

I am not sure how much 'not destabilize people' is an option that is available to Vassar.

My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.

Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of "you are expected to behave better for status reasons look at my smug language"-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, "Vassar, just only say things that you think will have a positive effect on the person." 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.

In the pathological case of Vassar, I think the naive strategy of "just say the thing you think is true" is still correct.

Mental training absolutely helps. I would say that, considering that the people who talk with Vassar are literally from a movement called rationality, it is a normatively reasonable move to expect them to be mentally resilient. Factually, this is not the case. The "maybe insane" part is definitely not unavoidable, but right now I think the problem is with the people talking to Vassar, and not he himself.

I'm glad you enjoyed the post.

Replies from: Unreal, ChristianKl, Benquo
comment by Unreal · 2021-10-23T03:41:37.521Z · LW(p) · GW(p)

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication.

My suggestion for Vassar is not to 'try not to destabilize people' exactly. 

It's to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he's interacted with about what it's like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you're speaking to as though they're a full, real human—not a pair of ears to be talked into or a mind to insert things into). When he talks theory, I often get the sense he is talking "at" rather than talking "to" or "with". The listener practically disappears or is reduced to a question-generating machine that gets him to keep saying things. 

I expect this process could take a long time / run into issues along the way, and so I don't think it should be rushed. Not expecting a quick change. But claiming there's no available option seems wildly wrong to me. People aren't fixed points and generally shouldn't be treated as such. 

Replies from: mathenjoyer, ChristianKl
comment by mathenjoyer · 2021-10-23T06:00:36.850Z · LW(p) · GW(p)

This is actually very fair. I think he does kind of insert information into people.

I never really felt like a question-generating machine, more like a pupil at the foot of a teacher who is trying to integrate the teacher's information.

I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.

Thanks!

comment by ChristianKl · 2021-10-23T12:06:37.299Z · LW(p) · GW(p)

It's to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he's interacted with about what it's like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you're speaking to as though they're a full, real human—not a pair of ears to be talked into or a mind to insert things into). 

I think I interacted with Vassar four times in person, so I might get some things wrong here, but I think that he's pretty dissociated from his body, which closes a normal channel of perceiving impacts on the person he's speaking with. This looks to me like some bodily process generating stress/pain and being a cause for dissociation. It might need a body worker to fix whatever goes on there to create the conditions for perceiving the other person better.

Beyond that, Circling might be an environment in which one can learn to interact with others as humans who have their own feelings, but that would require opening up to the Circling frame.

comment by ChristianKl · 2021-10-23T11:56:08.575Z · LW(p) · GW(p)

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, "Vassar, just only say things that you think will have a positive effect on the person." 1. He already does that. 2. That is advocating that Vassar manipulate people. 

You are making a false dichotomy here. You are assuming that everything that has a negative effect on a person is manipulation.

As Vassar himself sees the situation, people believe a lot of lies in order to fit in socially. From that perspective, getting people to stop believing those lies will make it harder for them to fit socially into society.

If you got a Nazi guard at Auschwitz into a state where they could no longer dissociate from the moral issue of their job, that's very predictably going to have a negative effect on that prison guard.

Vassar's position would be that it would be immoral to avoid talking about the truth of the nature of their job with the guard out of a motivation to make life easier for the guard.

comment by Benquo · 2021-10-23T03:08:17.466Z · LW(p) · GW(p)

I think this line of discussion would be well served by marking a natural boundary in the cluster "crazy." Instead of saying "Vassar can drive people crazy" I'd rather taboo "crazy" and say:

Many people are using their verbal idea-tracking ability to implement a coalitional strategy instead of efficiently compressing external reality. Some such people will experience their strategy as invalidated by conversations with Vassar, since he'll point out ways their stories don't add up. A common response to invalidation is to submit to the invalidator by adopting the invalidator's story. Since Vassar's words aren't selected to be a valid coalitional strategy instruction set, attempting to submit to him will often result in attempting obviously maladaptive coalitional strategies.

People using their verbal idea-tracking ability to implement a coalitional strategy cannot give informed consent to conversations with Vassar, because in a deep sense they cannot be informed of things through verbal descriptions, and the risk is one that cannot be described without the recursive capacity of descriptive language.

Personally I care much more, maybe lexically more, about the upside of minds learning about their situation, than the downside of mimics going into maladaptive death spirals, though it would definitely be better all round if we can manage to cause fewer cases of the latter without compromising the former, much like it's desirable to avoid torturing animals, and it would be desirable for city lights not to interfere with sea turtles' reproductive cycle by resembling the moon too much.

Replies from: pjemb, mathenjoyer
comment by pjen (pjemb) · 2021-10-29T21:11:40.532Z · LW(p) · GW(p)

My problem with this comment is it takes people who:

  • can't verbally reason without talking things through (and are currently stuck in a passive role in a conversation)

and who:

  • respond to a failure of their verbal reasoning
    • under circumstances of importance (in this case moral importance)
    • and conditions of stress, induced by
      • trying to concentrate while in a passive role
      • failing to concentrate under conditions of high moral importance

by simply doing as they are told - and it assumes they are incapable of reasoning under any circumstances.

It also then denies people who are incapable of independent reasoning the right to be protected from harm.

comment by mathenjoyer · 2021-10-23T03:13:49.372Z · LW(p) · GW(p)

EDIT: Ben is correct to say we should taboo "crazy."

This is a very uncharitable interpretation (entirely wrong). The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren't as positive utility as they thought. (entirely wrong)

I also don't think people interpret Vassar's words as a strategy and implement incoherence. Personally, I interpreted Vassar's words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away. I think the other people just no longer have a coalitional strategy installed and don't know how to function without one. This is what happened to me and why I repeatedly lashed out at others when I perceived them as betraying me, since I no longer automatically perceived them as on my side. I rebuilt my rapport with those people and now have more honest relationships with them. (still endorsed)

Beyond this, I think your model is accurate.

Replies from: SaidAchmiz, Benquo
comment by Said Achmiz (SaidAchmiz) · 2021-10-23T06:58:29.147Z · LW(p) · GW(p)

The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought.

“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.

And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.

If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of business is to do everything in one’s power to fix that (really quite severe and glaring) bug in one’s psyche—and only then to attempt any substantive projects in the service of world-saving, people-helping, or otherwise doing anything really consequential.

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-24T03:30:37.481Z · LW(p) · GW(p)

Thank you for echoing common sense!

comment by Benquo · 2021-10-24T00:35:07.326Z · LW(p) · GW(p)

What is psychological collapse?

For those who can afford it, taking it easy for a while is a rational response to noticing deep confusion [LW · GW], continuing to take actions based on a discredited model would be less appealing, and people often become depressed when they keep confusedly trying to do things that they don't want to do.

Are you trying to point to something else?

Personally, I interpreted Vassar’s words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away.

What specific claims turned out to be false? What counterevidence did you encounter?

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-24T03:30:04.991Z · LW(p) · GW(p)

Specific claim: the only nontrivial obstacle in front of us is not being evil

This is false. Object-level stuff is actually very hard.

Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)

This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person's language. Ideas and institutions have the agency; they wear people like skin.

Specific claim: this is how to take over New York.

Didn't work.

Replies from: Benquo, Benquo
comment by Benquo · 2021-11-21T00:53:22.197Z · LW(p) · GW(p)

Specific claim: this is how to take over New York.

Didn’t work.

I think this needs to be broken up into 2 claims:

1. If we execute strategy X, we'll take over New York.
2. We can use straightforward persuasion (e.g. appeals to reason, profit motive) to get an adequate set of people to implement strategy X.

2 has been falsified decisively. The plan to recruit candidates via appealing to people's explicit incentives failed, there wasn't a good alternative, and as a result there wasn't a chance to test other parts of the plan (1).

That's important info and worth learning from in a principled way. Definitely I won't try that sort of thing again in the same way, and it seems like I should increase my credence both that plans requiring people to respond to economic incentives by taking initiative to play against type will fail, and that I personally might be able to profit a lot by taking initiative to play against type, or investing in people who seem like they're already doing this, as long as I don't have to count on other unknown people acting similarly in the future.

But I find the tendency to respond to novel multi-step plans that would require someone to take initiative by sitting back and waiting for the plan to fail, and then saying, "see? novel multi-step plans don't work!", extremely annoying. I've been on both sides of that kind of transaction, but if we want anything to work out well we have to distinguish cases of "we / someone else decided not to try" as a different kind of failure from "we tried and it didn't work out."

Replies from: mathenjoyer
comment by mathenjoyer · 2021-12-18T10:39:41.306Z · LW(p) · GW(p)

This is actually completely fair. So is the other comment.

comment by Benquo · 2021-11-21T00:36:24.527Z · LW(p) · GW(p)

Specific claim: the only nontrivial obstacle in front of us is not being evil

This is false. Object-level stuff is actually very hard.

This seems to be conflating the question of "is it possible to construct a difficult problem?" with the question of "what's the rate-limiting problem?". If you have a specific model for how to make things much better for many people by solving a hard technical problem before making substantial progress on human alignment, I'd very much like to hear the details. If I'm persuaded I'll be interested in figuring out how to help.

So far this seems like evidence to the contrary, though, as it doesn't look like you thought you could get help making things better for many people by explaining the opportunity.

comment by Unreal · 2021-10-22T10:24:01.911Z · LW(p) · GW(p)

To the extent I'm worried about Vassar's character, I am as equally worried about the people around him. It's the people around him who should also take responsibility for his well-being and his moral behavior. That's what friends are for. I'm not putting this all on him. To be clear. 

comment by cousin_it · 2021-10-22T08:59:01.724Z · LW(p) · GW(p)

I think it's a fine way to think about mathematical logic, but if you try to think this way about reality, you'll end up with views that make internal sense and are self-reinforcing but don't follow the grain of facts at all. When you hear such views from someone else, it's a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.

The people who actually know their stuff usually come off very different. Their statements are carefully delineated: "this thing about power was true in 10th century Byzantium, but not clear how much of it applies today".

Also, just to comment on this:

It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype.

I think it's somewhat changeable. Even for people like us, there are ways to make our processing more "fuzzy". Deliberately dimming [LW · GW] some things, rounding [LW(p) · GW(p)] others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; and interpersonally, the world is full of small spontaneous exchanges happening on the "warm fuzzy" level, it's not nearly so cold a place as it seems, and plugging into that market is so worth it.

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-23T03:08:48.215Z · LW(p) · GW(p)

On the third paragraph:

I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.)

Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See "Safety in numbers" by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe environment. Am getting to that.)

I have no problem enjoying warm fuzzies. I had problems with them after first talking with Vassar, but I re-equilibrated. Warm fuzzies are good, helpful, and worth purchasing. I am not a perfect utilitarian. However, it is important that when you buy fuzzies instead of utils, as Scott would put it, you know what you are buying. Many will sell fuzzies and market them as utils.

I sometimes round things, it is not inherently bad.

Dimming things is not good. I like being alive. From a functionalist perspective, the degree to which I am aroused (with respect to the senses and the mind) is the degree to which I am a real, sapient being. Dimming is sometimes terminally valuable as relaxation, and instrumentally valuable as sleep, but if you believe in Life, Freedom, Prosperity And Other Nice Transhumanist Things, then it follows as a natural consequence that dimming is bad in most contexts.

On the second paragraph:

This is because people compartmentalize. After studying a thing for a long time, people will grasp deep nonverbal truths about that thing. Sometimes they are wrong; without legible elucidation, false ideas gained this way are difficult to destroy. Sometimes they are right! Mathematical folklore is an example: it is literally metis among mathematicians.

Highly knowledgeable and epistemically skilled people delineate. Sometimes the natural delineation is "this is true everywhere and false nowhere." See "The Proper Use of Humility," and for an example of how delineations often should be large, "Universal Fire."

On the first paragraph:

Reality is hostile through neutrality. Any optimizing agent naturally optimizes against most other optimization targets when resources are finite. Lifeforms are (badly) optimized for inclusive genetic fitness. Thermodynamics looks like the sort of Universal Law that an evil god would construct. According to a quick Google search approximately 3,700 people die in car accidents per day and people think this is completely normal. 

Many things are actually effective. For example, most places in the United States have drinkable-ish running water. This is objectively impressive. Any model must not be entirely made out of "the world is evil" otherwise it runs against facts. But the natural mental motion you make, as a default, should be, "How is this system produced by an aggressively neutral, entirely mechanistic reality?"

See the entire Sequence on evolution, as well as Beyond the Reach of God.

comment by FeepingCreature · 2021-10-22T10:15:07.827Z · LW(p) · GW(p)

I mostly see where you're coming from, but I think the reasonable answer to "point 1 or 2 is a false dichotomy" is this classic, uh, tumblr quote (from memory):

"People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail."

This goes especially if the thing that comes after "just" is "just precommit."

My expectation regarding interaction with Vassar is that the people who espouse 1 or 2 expect that the people interacting are incapable of precommitting to the required strength. I don't know if they're correct, but I'd expect them to be, because I think people are just really bad at precommitting in general. If precommitting were easy, I think we'd all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-23T03:28:49.861Z · LW(p) · GW(p)

This is a very good criticism! I think you are right about people not being able to "just."

My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or who criticize popular institutions. Perhaps it is the case that people genuinely tried to make a strategy but automatically rejected my toy strategies as false. I do not think this is the case, based on "vibe" and on the arguments that people are making, such as "argument from cult."

I think you are actually completely correct about those strategies being bad. Instead, I failed to point out that I expect a certain level of mental robustness-to-nonsanity from people literally called "rationalists." This comes off as sarcastic but I mean it completely literally.

Precommitting isn't easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as "five minutes of actually trying" and alkjash's "Hammertime." Humans have a small component of behavior that is agentic, and a huge component of behavior that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social learning.) Many problems are solved immediately and almost effortlessly by just giving the reins to the small part.

Relatedly, to address one of your examples, I expect at least one of the following things to be true about any given competent rationalist.

  1. They have a physiological problem.
  2. They don't believe becoming fit to be worth their time, and have a good reason to go against the naive first-order model of "exercise increases energy and happiness set point."
  3. They are fit.

Hypocritically, I fail all three of these criteria. I take full blame for this failure and plan on ameliorating it. (You don't have to take Heroic Responsibility for the world, but you have to take it about yourself.)

A trope-y way of thinking about it is: "We're supposed to be the good guys!" Good guys don't have to be heroes, but they have to be at least somewhat competent, and they have to, as a strong default, treat potential enemies like their equals.

comment by Hazard · 2021-10-22T03:18:20.523Z · LW(p) · GW(p)

I found many things you shared useful. I also expect that because of your style/tone you'll get downvoted :(

comment by xtz05qw · 2021-10-22T07:49:27.755Z · LW(p) · GW(p)

It's not just Vassar. It's how the whole community has excused and rationalized away abuse. I think the correct answer to the omega rapist problem isn't to ignore him but to destroy his agency entirely. He's still going to alter his decision theory towards rape even if castrated.

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-23T03:47:32.369Z · LW(p) · GW(p)

I think you are entirely wrong.

However, I gave you a double-upvote because you did nothing normatively wrong. The fact that you are being mass-downvoted just because you linked to that article and because you seem to be associated with Ziz (because of the gibberish name and specific conception of decision theory) is extremely disturbing.

Can we have LessWrong not be Reddit? Let's not be Reddit. Too late, we're already Reddit. Fuck.

You are right that, unless people can honor precommitments perfectly and castration is irreversible even with transhuman technology, Omegarapist will still alter his decision theory. Despite this, there are probably better solutions than killing or disabling him. I say this not out of moral ickiness, but out of practicality.

-

Imagine both you and Omegarapist are actual superintelligences. Then you can just make a utility-function merge to avoid the inefficiency of conflict, and move on with your day.
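(As a minimal formal sketch of what I mean by a utility-function merge; the linear form and the weights are just one simple way to do it, not the only way: both agents commit to maximizing a single combined objective,

$$U_{\text{merged}} = w_A U_A + w_B U_B, \qquad w_A + w_B = 1,\; w_A, w_B \ge 0,$$

with the weights fixed by relative bargaining power, so that any resources either side would have burned on conflict register as pure waste to the merged agent.)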

Humans have a similar form of this. Humans, even when sufficiently distinct in moral or factual position as to want to kill each other, often don't. This is partly because of an implicit assumption that their side, the correct side, will win in the end, and that this is less true if they break the symmetry and use weapons. Scott uses the example of a pro-life and pro-choice person having dinner together, and calls it "divine intervention."

There is an equivalent of this with Omegarapist. Make some sort of pact and honor it: he won't rape people, but you won't report his previous rapes to the Scorched Earth Dollar Auction squad. Work together on decision theory until the project is complete. Then agree either to utility-merge with him in the consequent utility function, or just shoot him. I call this "swordfighting at the edge of a cliff while shouting about our ideologies." I would be willing to work with Moldbug on Strong AI, but if we had to input the utility function, the person who would win would be determined by a cinematic swordfight. In a similar case with my friend Sudo Nim, we could just merge utilities.

If you use the "shoot him" strategy, Omegarapist is still dead. You just got useful work out of him first. If he rapes people, just call in the Dollar Auction squad. The problem here isn't cooperating with Omegarapist, it's thinking to oneself "he's too useful to actually follow precommitments about punishing" if he defects against you. This is fucking dumb. There's a great webnovel called Reverend Insanity which depicts what organizations look like when everyone uses pure CDT like this. It isn't pretty, and it's also a very accurate depiction of the real world landscape.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-10-23T05:06:09.697Z · LW(p) · GW(p)

Oh come on. The post was downvoted because it was inflammatory and low quality. It made a sweeping assertion while providing no evidence except a link to an article that I have no reason to believe is worth reading. There is a mountain of evidence that being negative is not a sufficient cause for being downvoted on LW, e.g. the OP.

Replies from: TekhneMakre, mathenjoyer
comment by TekhneMakre · 2021-10-23T05:32:58.792Z · LW(p) · GW(p)

(FYI, the OP has 154 votes and 59 karma, so it is both heavily upvoted and heavily downvoted.)

comment by mathenjoyer · 2021-10-23T06:35:53.423Z · LW(p) · GW(p)

You absolutely have a reason to believe the article is worth reading.

If you live coordinated with an institution, spending 5 minutes of actually trying (every few months) to see if that institution is corrupt is a worthy use of time.

Replies from: SaidAchmiz, sil-ver
comment by Said Achmiz (SaidAchmiz) · 2021-10-23T07:03:51.025Z · LW(p) · GW(p)

I read the linked article, and my conclusion is that it’s not even in the neighborhood of “worth reading”.

comment by Rafael Harth (sil-ver) · 2021-10-23T13:51:35.784Z · LW(p) · GW(p)

I don't think I live coordinated with CFAR or MIRI, but it is true that, if they are corrupt, this is something I would like to know.

However, that's not sufficient reason to think the article is worth reading. There are many articles making claims that, if true, I would very much like to know (e.g. someone arguing that the Christian Hell exists).

I think the policy I follow (although I hadn't made it explicit until now) is to ignore claims like this by default but listen up as soon as I have some reason to believe that the source is credible.

Which incidentally was the case for the OP. I have spent a lot more than 5 minutes reading it & replies, and I have, in fact, updated my view of CFAR and MIRI. It wasn't a massive update in the end, but it also wasn't negligible. I also haven't downvoted the OP, and I believe I also haven't downvoted any comments from jessicata. I've upvoted some.

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-24T03:26:40.599Z · LW(p) · GW(p)

This is fair, actually.

comment by Viliam · 2021-10-18T17:24:56.287Z · LW(p) · GW(p)

Michael is very good at spotting people right on the verge of psychosis

...and then pushing them.

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate. [EDIT: Or not. Zack makes a fair point.] He is not even hiding it, if you listen carefully.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2021-10-21T05:59:14.322Z · LW(p) · GW(p)

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate.

Because high-psychoticism people are the ones who are most likely to understand what he has to say.

This isn't nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn't like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky's writing (like spending $28,000 distributing copies of Harry Potter and the Methods to math contest winners) [EA · GW]: why, they're preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!

I mean, technically, yes. But in Yudkowsky and friends' worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they're going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?

Replies from: steven0461, Unreal
comment by steven0461 · 2021-10-21T20:29:09.862Z · LW(p) · GW(p)

There's a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don't have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.

If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you'd object to that targeting strategy even though they'd be able to make an argument structurally the same as your comment.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-21T20:47:54.055Z · LW(p) · GW(p)

Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it's even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky etc, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. people who think machine learning / deep learning are important.

In general this seems really expected and unobjectionable? "If I'm trying to convince people of X, I'm going to find people who already believe a lot of the pre-requisites for understanding X and who might already assign X a non-negligible prior". This is how pretty much all systems of ideas spread, I have trouble thinking of a counterexample.

I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?

Replies from: steven0461
comment by steven0461 · 2021-10-21T23:14:45.490Z · LW(p) · GW(p)

If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn't care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.

Replies from: petemichaud-1, jessica.liu.taylor
comment by PeteMichaud (petemichaud-1) · 2021-10-22T12:35:54.890Z · LW(p) · GW(p)

The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high-IQ systematizers is if I drain any normative valence from "psychotic" and imagine there is a spectrum from autistic to psychotic. In this spectrum the extreme autistic is exclusively focused on exactly one thing at a time, and is incapable of cognition that has to take into account context, especially context they aren't already primed to have in mind, and the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or another could be very helpful in different contexts.

See also: indexicality.

On the other hand, back in my reflective beliefs, I think psychosis is a much scarier failure mode than "autism," on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, a supporting infrastructure of some kind for processing the psychotic state without losing the plot (social or cultural would work, but whatever).

comment by jessicata (jessica.liu.taylor) · 2021-10-21T23:17:11.726Z · LW(p) · GW(p)

I wouldn't find it objectionable. I'm not really sure what morally relevant distinction is being pointed at here; apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.

Replies from: steven0461, dxu
comment by steven0461 · 2021-10-22T00:48:50.023Z · LW(p) · GW(p)

Well, I don't think it's obviously objectionable, and I'd have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like "we'd all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we're talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren't generally either truth-tracking or good for them" seems plausible to me. But I think it's obviously not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.

comment by dxu · 2021-10-21T23:29:49.941Z · LW(p) · GW(p)

I don't have well-formed thoughts on this topic, but one factor that seems relevant to me has a core that might be verbalized as "susceptibility to invalid methods of persuasion", which seems notably higher in the case of people with high "apocalypticism" than people with the other attributes described in the grandparent. (A similar argument applies in the case of people with high "psychoticism".)

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-21T23:36:40.808Z · LW(p) · GW(p)

That might be relevant in some cases but seems unobjectionable both in the psychoticism case and the apocalypse case. I would predict that LW people cluster together in personality measurements like OCEAN and Eysenck, it's by default easier to write for people of a similar personality to yourself. Also, people notice high rates of Asperger's-like characteristics around here, which are correlated with Jewish ethnicity and transgenderism (also both frequent around here).

comment by Unreal · 2021-10-21T15:34:21.395Z · LW(p) · GW(p)

It might not be nefarious. 

But it might also not be very wise. 

I question Vassar's wisdom, if what you say is indeed true about his motives. 

I question whether he's got the appropriate feedback loops in place to ensure he is not exacerbating harms. I question whether he's appropriately seeking that feedback rather than turning away from the kinds he finds overwhelming, distasteful, unpleasant, or doesn't know how to integrate. 

I question how much work he's done on his own shadow and whether it's not inadvertently acting out in ways that are harmful. I question whether he has good friends he trusts who would let him know, bluntly, when he is out of line with integrity and ethics or if he has 'shadow stuff' that he's not seeing. 

I don't think this needs to be hashed out in public, but I hope people are working closer to him on these things who have the wisdom and integrity to do the right thing. 

comment by ChristianKl · 2021-10-18T08:09:36.777Z · LW(p) · GW(p)

But, well ... if you genuinely thought that institutions and a community that you had devoted a lot of your life to building up, were now failing to achieve their purposes, wouldn't you want to talk to people about it? If you genuinely thought that certain chemicals would make your friends lives' better, wouldn't you recommend them?

Rumor has it that https://www.sfgate.com/news/bayarea/article/Man-Gets-5-Years-For-Attacking-Woman-Outside-13796663.php is due to drugs that Vassar recommended. In the OP that case does get blamed on CFAR's environment without any mention of that part.

When talking about whether or not CFAR is responsible for that story, factors like that seem to me to matter quite a bit. I'd love it if anyone who's nearer could confirm/deny the rumor and fill in the missing pieces.

Replies from: andrew-rettek-1, jimrandomh
comment by Andrew Rettek (andrew-rettek-1) · 2021-10-18T12:39:47.755Z · LW(p) · GW(p)

As I mentioned elsewhere, I was heavily involved in that incident for a couple months after it happened and I looked for causes that could help with the defense. AFAICT no drugs were taken in the days leading up to the mental health episode or arrest (or people who took drugs with him lied about it).

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-18T14:08:17.395Z · LW(p) · GW(p)

I, too, asked people questions after that incident and failed to locate any evidence of drugs.

comment by jimrandomh · 2021-10-18T23:55:07.356Z · LW(p) · GW(p)

As I heard this story, Eric was actively seeking mental health care on the day of the incident, and should have been committed before it happened, but several people (both inside and outside the community) screwed up. I don't think anyone is to blame for his having had a mental break in the first place.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-19T06:50:51.797Z · LW(p) · GW(p)

I've now gotten some better-sourced information from a friend who's actually in good contact with Eric. Given that, I'm also quite certain that there were no drugs involved and that this isn't a case of any one person being mainly responsible for it happening, but of multiple people making bad decisions. I'm currently hoping that Eric will tell his side himself so that there's less indirection about the information sourcing, so I'm not saying more about the details at this point in time.

Replies from: EricB, Yvain
comment by humantoo (EricB) · 2021-10-19T17:00:14.728Z · LW(p) · GW(p)

Edit: The following account is a component of a broader and more complex narrative. While it played a significant role, it must be noted that there were numerous additional challenges concurrently affecting my life. Absent these complicating factors, the issues delineated in this post alone may not have precipitated such severe consequences. Additionally, I have made minor revisions to the third-to-last bullet point for clarity.

It is pertinent to provide some context to parts of my story that are relevant to the ongoing discussions.

  • My psychotic episode was triggered by a confluence of factors, including acute physical and mental stress, as well as exposure to a range of potent memes. I have composed a detailed document on this subject, which I have shared privately with select individuals. I am willing to share this document with others who were directly involved or have a legitimate interest. However, a comprehensive discussion of these details is beyond the ambit of this post, which primarily focuses on the aspects related to my experiences at Vassar.
  • During my psychotic break, I believed that someone associated with Vassar had administered LSD to me. Although I no longer hold this belief, I cannot entirely dismiss it. Nonetheless, given my deteriorated physical and mental health at the time, the vividness of my experiences could be attributed to a placebo effect or the onset of psychosis.
  • My delusions prominently featured Vassar. At the time of my arrest, I had a notebook with multiple entries stating "Vassar is God" and "Vassar is the Devil." This fixation partly stemmed from a conversation with Vassar, where he suggested that my "pattern must be erased from the world" in response to my defense of EA. However, it was primarily fueled by the indirect influence of someone from his group with whom I had more substantial contact.
  • This individual was deeply involved in a psychological engagement with me in the months leading to my psychotic episode. In my weakened state, I was encouraged to develop and interact with a mental model of her. She once described our interaction as "roleplaying an unfriendly AI," which I perceived as markedly hostile. Despite the negative turn, I continued the engagement, hoping to influence her positively.
  • After joining Vassar's group, I urged her to critically assess his intense psychological methods. She relayed a conversation with Vassar about "fixing" another individual, Anna (Salamon), to "see material reality" and "purge her green." This exchange profoundly disturbed me, leading to a series of delusions and ultimately exacerbating my psychological instability, culminating in a psychotic state. This descent into madness continued for approximately 36 hours, ending with an attempted suicide and an assault on a mental health worker.
  • Additionally, it is worth mentioning that I visited Leverage on the same day. Despite exhibiting clear signs of delusion, I was advised to exercise caution with psychological endeavors. Ideally, further intervention, such as suggesting professional help or returning me to my friends, might have been beneficial. I was later informed that I was advised to return home, though my recollection of this is unclear due to my mental state at the time.
  • In the hotel that night, my mental state deteriorated significantly after I performed a mental action which I interpreted as granting my mental model of Vassar substantial influence over my thoughts, in an attempt to regain stability.

While there are many more intricate details to this story, I believe the above summary encapsulates the most critical elements relevant to our discussion.

I do not attribute direct blame to Vassar, as it is unlikely he either intended or could have reasonably anticipated these specific outcomes. However, his approach, characterized by high-impact psychological interventions, can inadvertently affect the mental health of those around him. I hope that he has recognized this potential for harm and exercises greater caution in the future.

Replies from: Ruby, jessica.liu.taylor, Avi Weiss, elityre, Benquo
comment by Ruby · 2021-10-19T17:14:13.902Z · LW(p) · GW(p)

Thank you for sharing such personal details for the sake of the conversation.

comment by jessicata (jessica.liu.taylor) · 2021-10-19T17:27:27.669Z · LW(p) · GW(p)

Thanks for sharing the details of your experience. Fyi I had a trip earlier in 2017 where I had the thought "Michael Vassar is God" and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.

If I'm trying to put my finger on a real effect here, it's related to how Michael Vassar was one of the initial people who set up the social scene (e.g. running singularity summits and being executive director of SIAI), being on the more "social/business development/management" end relative to someone like Eliezer; so if you live in the scene, which can be seen as a simulacrum, the people most involved in setting up the scene/simulacrum have the most aptitude at affecting memes related to it, like a world-simulator programmer has more aptitude at affecting the simulation than people within the simulation (though to a much lesser degree of course).

As a related example, Von Neumann was involved in setting up post-WWII US Modernism, and is also attributed extreme mental powers by modernism (e.g. extreme creativity in inventing a wide variety of fields); in creating the social system, he also had more memetic influence within that system, and could more effectively change its boundaries, e.g. in creating new fields of study.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-27T16:21:51.139Z · LW(p) · GW(p)

Fyi I had a trip earlier in 2017 where I had the thought "Michael Vassar is God" and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.

2017 would be the year Eric's episode happened as well. Did this result in multiple conversations about "Michael Vassar is God" that Eric might then have picked up on when he hung around the group?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-27T20:24:20.086Z · LW(p) · GW(p)

I don't know, some of the people were in common between these discussions so maybe, but my guess would be that it wasn't causal, only correlational. Multiple people at the time were considering Michael Vassar to be especially insightful and worth learning from.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-27T21:17:02.276Z · LW(p) · GW(p)

I haven't used the word god myself, nor have I heard it used by other people to refer to someone who's insightful and worth learning from. Traditionally, people learn from prophets and not from gods.

comment by Avi (Avi Weiss) · 2021-10-19T17:16:11.573Z · LW(p) · GW(p)

Can someone please clarify what is meant in this context by 'Vassar's group', or the term 'Vassarites' used by others?

My intuition previously was that Michael Vassar had no formal 'group' or institution of any kind, and it was just more like 'a cluster of friends who hung out together a lot', but this comment makes it seem like something more official.

Replies from: David Hornbein, Benquo
comment by David Hornbein · 2021-10-19T21:11:29.364Z · LW(p) · GW(p)

While "Vassar's group" is informal, it's more than just a cluster of friends; it's a social scene with lots of shared concepts, terminology, and outlook (although of course not every member holds every view and members sometimes disagree about the concepts, etc etc). In this way, the structure is similar to social scenes like "the AI safety community" or "wokeness" or "the startup scene" that coordinate in part on the basis of shared ideology even in the absence of institutional coordination, albeit much smaller. There is no formal institution governing the scene, and as far as I've ever heard Vassar himself has no particular authority within it beyond individual persuasion and his reputation.

Median Group is the closest thing to a "Vassarite" institution, in that its listed members are 2/3 people who I've heard/read describing the strong influence Vassar has had on their thinking and 1/3 people I don't know, but AFAIK Median Group is just a project put together by a bunch of friends with similar outlook and doesn't claim to speak for the whole scene or anything.

Replies from: Benquo
comment by Benquo · 2021-10-20T06:33:31.729Z · LW(p) · GW(p)

As a member of that cluster I endorse this description.

comment by Benquo · 2021-10-19T20:12:44.672Z · LW(p) · GW(p)

Michael and I are sometimes-housemates and I've never seen or heard of any formal "Vassarite" group or institution, though he's an important connector in the local social graph, such that I met several good friends through him.

comment by Eli Tyre (elityre) · 2021-10-20T06:47:58.164Z · LW(p) · GW(p)

Thank you very much for sharing. I wasn't aware of any of these details.

comment by Benquo · 2021-10-19T17:09:08.807Z · LW(p) · GW(p)

It sounds like you're saying that based on extremely sparse data you made up a Michael Vassar in your head to drive you crazy. More generally, it seems like a bunch of people on this thread, most notably Scott Alexander, are attributing spooky magical powers to him. That is crazy cult behavior and I wish they would stop it.

ETA: In case it wasn't clear, "that" = multiple people elsewhere in the comments attributing spooky mind control powers to Vassar. I was trying to summarize Eric's account concisely, because insofar as it assigns agency at all I think it does a good job assigning it where it makes sense to, with the person making the decisions.

Replies from: dxu, EricB
comment by dxu · 2021-10-19T22:06:25.387Z · LW(p) · GW(p)

Reading through the comments here, I perceive a pattern of short-but-strongly-worded comments from you, many of which seem to me to contain highly inflammatory insinuations while giving little impression of any investment of interpretive labor. It's not [entirely] clear to me what your goals are, but barring said goals being very strange and inexplicable indeed, it seems to me extremely unlikely that they are best fulfilled by the discourse style you have consistently been employing.

To be clear: I am annoyed by this. I perceive your comments as substantially lower-quality than the mean, and moreover I am annoyed that they seem to be receiving engagement far in excess of what I believe they deserve, resulting in a loss of attentional resources that could be used engaging more productively (either with other commenters, or with a hypothetical-version-of-you who does not do this). My comment here is written for the purpose of registering my impressions, and making it common-knowledge among those who share said impressions (who, for the record, I predict are not few) that said impressions are, in fact, shared.

(If I am mistaken in the above prediction, I am sure the voters will let me know in short order.)

I say all of the above while being reasonably confident that you do, in fact, have good intentions. However, good intentions do not ipso facto result in good comments, and to the extent that they have resulted in bad comments, I think one should point this fact out as bluntly as possible, which is why I worded the first two paragraphs of this comment the way I did. Nonetheless, I felt it important to clarify that I do not stand against [what I believe to be] your causes here, only the way you have been going about pursuing those causes.


(For the record: I am unaffiliated with MIRI, CFAR, Leverage, MAPLE, the "Vassarites", or the broader rationalist community as it exists in physical space. As such, I have no direct stake in this conversation; but I very much do have an interest in making sure discussion around any topics this sensitive are carried out in a mature, nuanced way.)

Replies from: Benquo
comment by Benquo · 2021-10-20T07:11:36.473Z · LW(p) · GW(p)

If you want to clarify whether I mean to insinuate something in a particular comment, you could ask, like I asked Eliezer [LW(p) · GW(p)]. I'm not going to make my comments longer without a specific idea of what's unclear, that seems pointless.

comment by humantoo (EricB) · 2021-10-19T17:29:39.310Z · LW(p) · GW(p)

It is accurate to state that I constructed a model of him based on limited information, which subsequently contributed to my dramatic psychological collapse. Nevertheless, the reason for developing this particular model can be attributed to his interactions with me and others. This was not due to any extraordinary or mystical abilities, but rather his profound commitment to challenging individuals' perceptions of conventional reality and mastering the most effective methods to do so.

This approach is not inherently negative. However, it must be acknowledged that for certain individuals, such an intense disruption of their perceived reality can precipitate a descent into a detrimental psychological state.

Replies from: Benquo, Benquo
comment by Benquo · 2021-10-20T07:15:15.610Z · LW(p) · GW(p)

Thanks for verifying. In hindsight my comment reads as though it was condemning you in a way I didn't mean to; sorry about that.

The thing I meant to characterize as "crazy cult behavior" was people in the comments here attributing things like what you did in your mind to Michael Vassar's spooky mind powers. You seem to be trying to be helpful and informative here. Sorry if my comment read like a personal attack.

comment by Benquo · 2021-10-20T16:39:08.724Z · LW(p) · GW(p)

This can be unpacked into an alternative to the charisma theory.

Many people are looking for a reference person to tell them what to do. (This is generally consistent with the Jaynesian family of hypotheses.) High-agency people are unusually easy to refer to, because they reveal the kind of information that allows others to locate them. There's sufficient excess demand that even if someone doesn't issue any actual orders, if they seem to have agency, people will generalize from sparse data to try to construct a version of that person that tells them what to do.

A more culturally central example than Vassar is Dr Fauci, who seems to have mostly reasonable opinions about COVID, but is worshipped by a lot of fanatics with crazy beliefs about COVID.

The charisma hypothesis describes this as a fundamental attribute of the person being worshipped, rather than a behavior of their worshippers.

comment by Scott Alexander (Yvain) · 2021-10-19T09:29:14.871Z · LW(p) · GW(p)

If this information isn't too private, can you send it to me? scott@slatestarcodex.com

Replies from: EricB
comment by humantoo (EricB) · 2021-10-19T17:41:48.777Z · LW(p) · GW(p)

I have sent you the document in question. As the contents are somewhat personal, I would prefer that it not be disseminated publicly. However, I am amenable to it being shared with individuals who have a valid interest in gaining a deeper understanding of the matter.

comment by jessicata (jessica.liu.taylor) · 2021-10-17T22:27:05.562Z · LW(p) · GW(p)

I feel pretty defensive reading and responding to this comment, given a previous conversation with Scott Alexander where he said his professional opinion would be that people who have had a psychotic break should be on antipsychotics for the rest of their life (to minimize risks of future psychotic breaks). This has known severe side effects like cognitive impairment and brain shrinkage and lacks evidence of causing long-term improvement. When I was on antipsychotics, my mental functioning was much lower (noted by my friends) and I gained weight rapidly. (I don't think short-term use of antipsychotics was bad, in my case)

It is in this context that I'm reading that someone talking about the possibility of mental subprocess implantation ("demons") should be "treated as a psychological emergency", when the Eric Bryulant case had already happened, and talking about the psychological processes was necessary for making sense of the situation. I feared involuntary institutionalization at the time, quite a lot, for reasons like this.

If someone expresses opinions like this, and I have reason to believe they would act on them, then I can't believe myself to have freedom of speech. That might be better than them not sharing the opinions at all, but the social structural constraints this puts me under are obvious to anyone trying to see them.

Given what happened, I don't think talking to a normal therapist would have been all that bad in 2017, in retrospect; it might have reduced the overall amount of psychiatric treatment needed during that year. I'm still really opposed to the coercive "you need professional help" framing in response to sharing weird thoughts that might be true, instead of actually considering them, like a Bayesian.

Replies from: Yvain, sil-ver
comment by Scott Alexander (Yvain) · 2021-10-17T23:18:14.646Z · LW(p) · GW(p)

I don't remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. EG this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerously psychotic again, then lifetime antipsychotics are probably their best bet. In a case like that, I would want the patient's buy-in, ie if they were medicated after a psychotic episode I would advise them of the reasons why continued antipsychotic use was recommended in their case; if they said they didn't want it, we would explore why, given the very high risk level; and if they still said they didn't want it, then I would follow their direction.

I didn't get a chance to talk to you during your episode, so I don't know exactly what was going on. I do think that psychosis should be thought of differently than just "weird thoughts that might be true", as more of a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom. I think in mild psychosis it's possible to snap someone back to reality where they agree their weird thoughts aren't true, but in severe psychosis it isn't (I remember when I was a student I tried so hard to convince someone that they weren't royalty, hours of passionate debate, and it just did nothing). I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom. Analogy to eg someone having chest pain from a heart attack, and you give them painkillers for the pain but don't treat the heart attack.

(although there's a separate point that it would be wrong and objectifying to falsely claim someone who's just thinking differently is psychotic or pre-psychotic; given that you did end up psychotic, it doesn't sound like the people involved were making that mistake)

My impression is that some medium percent of psychotic episodes end in permanent reduced functioning, and some other medium percent end in suicide or jail or some other really negative consequence, and this is scary enough that treating it is always an emergency, and just treating the symptom but leaving the underlying condition is really risky.

I agree many psychiatrists are terrible and that wanting to avoid them is a really sympathetic desire, but when it's something really serious like psychosis I think of this as like wanting to avoid surgeons (another medical profession with more than its share of jerks!) when you need an emergency surgery.

Replies from: jessica.liu.taylor, hg00, TekhneMakre
comment by jessicata (jessica.liu.taylor) · 2021-10-17T23:25:14.830Z · LW(p) · GW(p)

I don’t remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

Ok, the opinions you've described here seem much more reasonable than what I remember, thanks for clarifying.

I do think that psychosis should be thought of differently than just “weird thoughts that might be true”, since it’s a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom.

I agree, yes. I think what I was afraid of at the time was being called crazy and possibly institutionalized for thinking somewhat weird thoughts that people would refuse to engage with, and showing some signs of anxiety/distress that were in some ways a reaction to my actual situation. By the time I was losing sleep etc, things were quite different at a physiological level and it made sense to treat the situation as a psychiatric emergency.

If you can show someone that they're making errors that correspond to symptoms of mild psychosis, then telling them that and suggesting corresponding therapies to help with the underlying problem seems pretty reasonable.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2021-10-17T23:44:23.947Z · LW(p) · GW(p)

Thanks, if you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.

comment by hg00 · 2021-10-18T09:53:08.251Z · LW(p) · GW(p)

I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom.

If psychosis is caused by an underlying physiological/biochemical process, wouldn't that suggest that e.g. exposure to Leverage Research wouldn't be a cause of it?

If being part of Leverage is causing less reality-based thoughts and nudging someone into mild psychosis, I would expect that being part of some other group could cause more reality-based thoughts and nudge someone away from mild psychosis. Why would causation be possible in one direction but not the other?

I guess another hypothesis here is that some cases are caused by social/environmental factors and others are caused by biochemical factors. If that's true, I'd expect changing someone's environment to be more helpful for the former sort of case.

comment by TekhneMakre · 2021-10-17T23:38:25.826Z · LW(p) · GW(p)

[probably old-hat [ETA: or false], but I'm still curious what you think] My (background unexamined) model of psychosis → schizophrenia is that something, call it the "triggers", sets a person on a trajectory of less coherence / grounding; if the trajectory isn't corrected, they just go further and further. The "triggers" might be multifarious; there might be "organic" psychosis and "psychic" psychosis, where the former is like what happens from lead poisoning, and the latter is, maybe, what happens when you begin to become aware of some horrible facts. If your brain can rearrange itself quickly enough to cope with the newly known reality, your trajectory points back to the ground. If it can't, you might have a chain reaction where (1) horrible facts you were previously carefully ignoring are revealed because you no longer have the superstructure that was ignore-coping with them; (2) your ungroundedness opens the way to unepistemic beliefs, some of which might be additionally horrifying if true; (3) you're generally stressed out because things are going wronger and wronger, which reinforces everything.

If this is true, then your statement:

I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that's kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom

is only true for some values of "guide them back to reality-based thoughts". If you're trying to help them go back to ignore-coping, you might partly succeed, but not in a stable way, because you only pushed the ball partway back up the hill, to mix metaphors--the ball is still on a slope and will roll back down when you stop pushing; the horrible fact is still revealed and will keep being horrifying. But there are other things you could do, like helping them find a non-ignore-cope for the fact, or showing them enough that they become convinced that the belief isn't true.

comment by Rafael Harth (sil-ver) · 2021-10-17T22:55:13.397Z · LW(p) · GW(p)

There is this basic idea (I think from an old blogpost that Eliezer wrote) that if someone says there are goblins in the closet, dismissing them outright is confusing rationality with trust in commonly held claims, whereas the truly rational thing is to just open the closet and look.

I think this is correct in principle but not applicable in many real-world cases. The real reason why even rational people routinely dismiss many weird explanations for things isn't that they have sufficient evidence against them, it's that the weird explanation is inconsistent with a large set of high confidence beliefs that they currently hold. If someone tells me that they can talk to their deceased parents, I'm probably not going to invest the time to test whether they can obtain novel information this way; I'm just going to assume they're delusional because I'm confident spirits don't exist.

That said, if that someone helped write the logical induction paper, I personally would probably hear them out regardless of how weird the thing sounds. Nonetheless, I think it remains true that dismissing beliefs without considering the evidence is often necessary in practice.

Replies from: TekhneMakre, CronoDAS
comment by TekhneMakre · 2021-10-17T23:11:37.541Z · LW(p) · GW(p)
If someone tells me that they can talk to their deceased parents, I'm probably not going to invest the time to test whether they can obtain novel information this way; I'm just going to assume they're delusional because I'm confident spirits don't exist.

This is failing to track ambiguity in what's being referred to. If there's something confusing happening--something that seems important or interesting, but that you don't yet have the words to articulate well--then you try to say what you can (e.g. by talking about "demons"). In your scenario, you don't know exactly what you're dismissing. You can confidently dismiss, in the absence of extraordinary evidence, that (1) their parents' brains have been rotting in the ground, and (2) they are talking with their parents, in the same way you talk to a present friend; you can't confidently dismiss, for example, that they are, from their conscious perspective, gaining information by conversing with an entity that's naturally thought of as their parents (which we might later describe as: they have separate structure in them, not integrated with their "self", that encoded thought patterns from their parents, blah blah blah etc.). You can say "oh well yes of course if it's *just a metaphor* maybe I don't want to dismiss them", but the point is that from a partially pre-theoretic confusion, it's not clear what's a metaphor, and it requires further work to disambiguate.

comment by CronoDAS · 2021-10-18T06:38:17.543Z · LW(p) · GW(p)

As the joke goes, there's nothing crazy about talking to dead people. When dead people respond, then you start worrying.

comment by nshepperd · 2021-10-18T02:22:59.509Z · LW(p) · GW(p)

I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.

Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.

Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve.

This is really, really serious. If this happened to someone closer to me I'd be out for blood, and probably legal prosecution.

Let's not minimize how fucked up this is.

Replies from: jessica.liu.taylor, devi
comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:28:23.066Z · LW(p) · GW(p)

Olivia, Devi and I all talked to people other than Michael Vassar, such as Anna Salamon. We gravitated towards the Berkeley community, which was started around Eliezer's writing. None of us are calling for blame, ostracism, or cancelling of Michael. Michael helped all of us in ways no one else did. None of us have a motive to pursue a legal case against him. Ziz's sentence you quoted doesn't implicate Michael in any crimes.

The sentence is also misleading given Devi didn't detransition afaik.

Replies from: Viliam, nshepperd
comment by Viliam · 2021-10-18T09:48:50.679Z · LW(p) · GW(p)

Jessicata, I will be blunt here. This article you wrote was [EDIT: expletive deleted] misleading. Perhaps you didn't do it on purpose; perhaps this is what you actually believe. But from my perspective, you are an unreliable narrator.

Your story, original version:

  • I worked for MIRI/CFAR
  • I had a psychotic breakdown, and I believed I was super evil
  • the same thing also happened to a few other people
  • conclusion: MIRI/CFAR is responsible for all this

Your story, updated version:

  • I worked for MIRI/CFAR
  • then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil
  • I actually used the drugs
  • I had a psychotic breakdown, and I believed I was super evil
  • the same thing also happened to a few other people
  • conclusion: I still blame MIRI/CFAR, and I am trying to downplay Vassar's role in this

If you can't see how these two stories differ, then... I don't have sufficiently polite words to describe it, so let's just say that to me these two stories seem very different.

Lest you accuse me of gaslighting, let me remind you that I am not doubting any of the factual statements you made. (I actually tried to collect them here [LW(p) · GW(p)], to separate them from the long stream of dark insinuations.) What I am saying is that you omitted a few "details", which perhaps seem irrelevant to you, but in my opinion fundamentally change the meaning of the story.

At this moment, we just have to agree to disagree, I guess.

In my opinion, the greatest mistake MIRI/CFAR made in this story was being associated with Michael Vassar in the first place (and that's putting it mildly; at some moment it seemed like Eliezer was in love with him, he just couldn't stop praising his high intelligence... well, I guess he learned that "alignment is more important than intelligence" applies not just to artificial intelligences but also to humans), providing him social approval and easy access to people who then suffered as a consequence. They are no longer making this mistake. Ironically, now it's you, after having positioned yourself as a victim, who is blinded by his intelligence and doesn't see the harm he causes. But the proper way to stop other people from getting hurt is to make it known that listening too much to Vassar does this, predictably. So that he can no longer use the rationalist community as "social proof" to get people's trust.

EDIT: To explain my unkind words "after having positioned yourself as a victim", the thing I am angry about is that you publicly describe your suffering as a way to show people that MIRI/CFAR is evil. But when it turns out that Michael Vassar is more directly responsible for it, suddenly the angle changes and he actually "helped you".

So could you please make up your mind? Is having a psychotic breakdown and spending a few weeks catatonic in hospital a good thing or a bad thing? Is it trauma, or is it jailbreaking? Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for.

Replies from: Eliezer_Yudkowsky, TekhneMakre, Unreal, countingtoten, jessica.liu.taylor
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-18T15:33:05.471Z · LW(p) · GW(p)

I could be very wrong, but the story I currently have about this myself is that Vassar himself was a different and saner person before he used too much psychedelics. :( :( :(

Replies from: orthonormal, Viliam, jimrandomh
comment by orthonormal · 2021-10-19T01:32:01.556Z · LW(p) · GW(p)

Non-agenda'd question: about when did you notice changes in him?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-19T05:37:08.812Z · LW(p) · GW(p)

My autobiographical episodic memory is nowhere near good enough to answer this question, alas.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-19T07:21:05.730Z · LW(p) · GW(p)

Do you have a timeline of when you think that shift happened? That might make it easier for other people who knew Vassar at the time to say whether their observation matched yours.

comment by Viliam · 2021-10-18T17:35:40.503Z · LW(p) · GW(p)

That... must have hurt a lot.

(I hope your story is right.)

comment by jimrandomh · 2021-10-19T09:24:14.588Z · LW(p) · GW(p)

I saw him make some questionable drug use decisions at Burning Man in 2011 and 2012, including taking larger-than-normal doses, and I don't think I saw all of it.

Replies from: Tenoke
comment by Tenoke · 2021-10-21T12:57:08.536Z · LW(p) · GW(p)

A lot of people take a lot of drugs at big events like Burning Man with little issue. In my observation, it's typically the overly frequent and/or targeted psychedelic use that causes such big changes, at least in those that start off fairly stable.

comment by TekhneMakre · 2021-10-18T11:00:18.968Z · LW(p) · GW(p)
you publicly describe your suffering as a way to show people that MIRI/CFAR is evil.

Could you expand on this? E.g., what are a couple of sentences in the post that seem most clearly to be trying to show this?

Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for.

I appreciate the thrust of your comment, including this sentence, but the sentence also seems uncharitable, like it's collapsing down stuff that shouldn't be collapsed. For example, it could be that the MIRI/CFAR/etc. social field set up (maybe by accident, or even due to no fault of any of the "central" people) the conditions where "psychosis" is the best of the bad available options; in which case it makes sense to attribute causal fault to the social field, not to a person who e.g. makes that clear to you and therefore more proximally causes your breakdown. (Of course there's disagreement about whether that's the state of the world, but it's not necessarily incoherent.)

I do get the sense that jessicata is relating in a funny way to Michael Vassar, e.g. by warping the narrative around him while selectively posing as "just trying to state facts" in relation to other narrative fields; but this is hard to tell, since it's also what it might look like if Michael Vassar was systematically scapegoated, and jessicata is reporting more direct/accurate (hence less bad-seeming) observations.

comment by Unreal · 2021-10-18T13:16:02.437Z · LW(p) · GW(p)

Where did jessicata corroborate this sentence "then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil" ? 

comment by countingtoten · 2021-10-18T11:56:43.862Z · LW(p) · GW(p)

I should note that, as an outsider, the main point I recall Eliezer making in that vein is that he used Michael Vassar as a model for the character who was called Professor Quirrell. As an outsider, I didn't see that as an unqualified endorsement - though I think your general message should be signal-boosted.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T14:21:10.652Z · LW(p) · GW(p)

The claim that Michael Vassar is substantially like Quirrell seems strange to me. Where did you get the claim that Eliezer modelled Quirrell after Vassar?

To make the claim a bit more grounded in public data, take Vassar's TEDx talk. I think it gives a good impression of how Vassar thinks. There are some official statistics that back the life expectancy figure he cites for Jordan, so I think there's a good chance that Vassar actually believes what he says here.

If you look deeper, however, Jordan's life expectancy is not as high as Vassar asserts. Given that the video is in the public record, that's an error anyone who tries to fact-check what Vassar is saying can find. I don't think it's in Vassar's interest to give a public talk like that with claims that are easily found to be wrong by fact-checking. Quirrell wouldn't have made an error like this; he is a lot more controlled.

Eliezer made Vassar president of the precursor of MIRI. That's a strong signal of trust and endorsement.

Replies from: countingtoten, Davis_Kingsley
comment by Davis_Kingsley · 2021-10-18T14:43:13.518Z · LW(p) · GW(p)

Eliezer has openly said Quirrell's cynicism is modeled after a mix of Michael Vassar and Robin Hanson.

comment by jessicata (jessica.liu.taylor) · 2021-10-18T14:10:17.022Z · LW(p) · GW(p)

But from my perspective, you are an unreliable narrator.

I appreciate you're telling me this given that you believe it. I definitely am in some ways, and try to improve over time.

then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil

I said in the text that (a) there were conversations about corruption in EA institutions, including about the content of Ben Hoffman's posts, (b) I was collaborating with Michael Vassar at the time, (c) Michael Vassar was commenting about social epistemology. I admit that connecting points (a) and (c) would have made the connection clearer, but it wouldn't have changed the text much.

In cases where someone was previously part of a "cult" and later says it was a "cult" and abusive in some important ways, there has to be a stage where they're thinking about how bad the social context was, and practically always, that involves conversations with other people who are encouraging them to look at the ways their social context is bad. So my having conversations where people try to convince me CFAR/MIRI are evil is expected given what else I have written.

Besides this, "in order to get a psychotic breakdown" is incredibly false about his intentions, as Zack Davis points out [LW · GW].

I actually used the drugs

This was not in literally the initial version of the post but was included within a few hours, I think, when someone pointed out to me that it was relevant.

But the proper way to stop other people from getting hurt is to make it known that listening too much to Vassar does this, predictably.

As I pointed out [LW(p) · GW(p)], this doesn't obviously attribute less "spooky mind powers" to Michael Vassar compared with what Leverage was attributing to people, where Leverage attributing this (and isolating people from each other on the basis of it) was considered crazy and abusive. Maybe he really was this influential, but logical consistency is important here.

But when it turns out that Michael Vassar is more directly responsible for it, suddenly the angle changes and he actually “helped you”.

In this comment [LW(p) · GW(p)] I'm saying he has an unclear and probably low amount of responsibility, so this is a misread.

So could you please make up your mind?

I was pretty clear in the text that there were trauma symptoms resulting from these events and they also had advantages such as gaining a new perspective, and that overall I don't regret working at MIRI. I was also clear that there are relatively better and worse social contexts in which to experience psychosis symptoms, and hospitalization indicates a relatively worse social context.

comment by nshepperd · 2021-10-18T02:42:24.988Z · LW(p) · GW(p)

None of us are calling for blame, ostracism, or cancelling of Michael.

What I'm saying is that the Berkeley community should be.

Ziz’s sentence you quoted doesn’t implicate Michael in any crimes.

Supplying illicit drugs is a crime (but perhaps the drugs were BYO?). IDK if doing so and negligently causing permanent psychological injury is a worse crime, but it should be.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:46:33.279Z · LW(p) · GW(p)

I'm not going to comment on drug usage in detail for legal reasons, except to note that there are psychedelics legal in some places, such as marijuana in CA.

It doesn't make sense to attribute unique causal responsibility for psychotic breaks to anyone, except maybe to the person it's happening to. There are lots of people all of us were talking to in that time period who influenced us, and multiple people were advocating psychedelic use. Not all cases happened to people who were talking significantly with Michael around the time. As I mentioned in the OP, as I was becoming more psychotic, people tried things they thought might help, which generally didn't, and they could have done better things instead. Even causal responsibility doesn't imply blame, e.g. Eliezer had some causal responsibility due to writing things that attracted people to the Berkeley scene where there were higher-variance psychological outcomes. Michael was often talking with people who were already "not ok" in important ways, which probably affects the statistics.

comment by devi · 2021-10-18T16:59:17.952Z · LW(p) · GW(p)

Please see my comment on the grandparent [LW(p) · GW(p)].

I agree with Jessica's general characterization that this is better understood as multi-causal rather than the direct cause of actions by one person.

comment by jimrandomh · 2021-10-18T23:40:53.352Z · LW(p) · GW(p)

Relevant bit of social data: Olivia is the most irresponsible-with-drugs person I've ever met, by a sizeable margin; and I know of one specific instance (not a person named in your comment or any other comments on this post) where Olivia gave someone an ill-advised drug combination and they had a bad time (though not a psychotic break).

Replies from: Viliam
comment by Viliam · 2021-10-19T08:50:12.036Z · LW(p) · GW(p)

gave someone an ill-advised drug combination and they had a bad time

I don't remember specific names, but something similar happened at one of the first rationality minicamps. Technically, this was not about drugs but some supplements (i.e. completely legal things), but there was someone mixing various kinds of powders and saying "yeah, trust me, I have a lot of experience with this, I did a lot of research, it is perfectly safe to take a dose this high, really", and then an ambulance had to be called.

So, I assume you meant that Olivia goes even far beyond this, right?

Replies from: jimrandomh
comment by jimrandomh · 2021-10-19T09:49:33.127Z · LW(p) · GW(p)

My memory of the RBC incident you're referring to was that it wasn't supplements that did it, it was a caffeine overdose from energy drinks leading into a panic attack. But there were certainly a lot of supplements around and they could've played a role I didn't know about.

When I say that I believe Olivia is irresponsible with drugs, I'm not excluding the unscheduled supplements, but the story I referred to involved the scheduled kind.

comment by Scott Alexander (Yvain) · 2021-10-18T20:55:15.797Z · LW(p) · GW(p)

I've posted an edit/update above after talking to Vassar.

comment by gwern · 2021-10-18T02:03:44.503Z · LW(p) · GW(p)

A question for the 'Vassarites', if they will: were you doing anything like the "unihemispheric sleep" exercise (self-inducing hallucinations/dissociative personalities by sleep deprivation) the Zizians are described as doing?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:10:44.265Z · LW(p) · GW(p)

No. All sleep deprivation was unintentional (anxiety-induced in my case).

comment by ChristianKl · 2021-10-18T07:58:18.783Z · LW(p) · GW(p)

I banned him from SSC meetups for a combination of reasons including these

If you make bans like these, it would be worth communicating them to the people organizing SSC meetups. Especially when bans are made for the safety of meetup participants, not communicating them seems very strange to me.

After he left the Bay Area, Vassar lived for a while in Berlin. For decisions about whether or not to make an effort to integrate someone like him (and invite him to LW and SSC meetups), this kind of information is valuable, and Bay Area people not sharing it while claiming to have done something that would work in practice like a ban feels misleading.

For reasons I don't fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything's kind of been frozen in place since then.

I think Vassar left the Bay Area more than a year before COVID happened. As far as I remember, his stated reasoning was something along the lines of everyone in the Bay Area getting mindkilled by leftish ideology.

Replies from: Yvain, Viliam
comment by Scott Alexander (Yvain) · 2021-10-18T10:34:57.020Z · LW(p) · GW(p)

It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn't publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.

Replies from: ChristianKl, ChristianKl
comment by ChristianKl · 2021-10-18T10:59:49.110Z · LW(p) · GW(p)

If there are bans that are supposed to be enforced, mentioning that in the mails that go out to organizers for an ACX Everywhere event would make sense. I'm not 100% sure that I got all the mails, because Ruben forwarded mails for me (I normally organize LW meetups in Berlin and support Ruben with the SSC/ACX meetups), but in those there was no mention of the word "ban".

I don't think it needs to be public, but having such information in a mail like the one from Aug 23 would likely be necessary for a good portion of the meetup organizers to know that there is an expectation that certain people aren't welcome.

comment by ChristianKl · 2021-10-18T22:28:56.626Z · LW(p) · GW(p)

https://www.lesswrong.com/posts/iWWjq5BioRkjxxNKq/michael-vassar-at-the-slatestarcodex-online-meetup [LW · GW] seems to have happened after that point in time. Vassar not only attended a Slate Star Codex online meetup but was central to it, presenting his thoughts.

Replies from: JoshuaFox
comment by JoshuaFox · 2021-10-21T12:56:13.807Z · LW(p) · GW(p)

I organized that, so let me say that:

  • That online meetup, or the invitation to Vassar, was not officially affiliated with or endorsed by SSC. Any responsibility for inviting him is mine.
  • I have  conversed with him a few times, as follows:
  • I met him in Israel around 2010. He was quite interesting, though he did try to get me to withdraw my retirement savings to invest with him. He was somewhat persuasive. During our time in conversation, he made some offensive statements, but I am perhaps less touchy about such things than the younger generation.
  • In 2012, he explained  Acausal Trade to me, and that was the seed of  this post [? · GW]. That discussion was quite sensible and I thank him for that.
  • A few years later, I invited him to speak at LessWrong Israel.  At that time I thought him a mad genius -- truly both.  His talk was verging on incoherence, with flashes of apparent insight.
  • Before the online meetup, 2021, he insisted on a preliminary talk; he made statements that produced twinges of persuasiveness. (Introspecting that is kind of interesting, actually.) I stayed with it for 2 or more hours before begging off, because it was fascinating in a way. I was able to analyze his techniques as Dark Arts. Apparently I am mature enough to shrug off such techniques.
  • His talk at my online meetup was even less coherent than any before, with multiple offensive elements. Indeed, I believe it was a mistake to have him on.

If I have offended anyone, I apologize, though  I believe that letting someone speak is generally not something to be afraid of. But I wouldn't invite him again.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-21T14:07:20.991Z · LW(p) · GW(p)

It seems to me that despite organizing multiple SSC events you had no knowledge that Vassar was banned from SSC events. Nor did anyone reading the event announcement know, such that they would have told you before the event happened that Vassar was banned.

To me that suggests there's a problem of not sharing information about who's banned with those organizing meetups in an effective way, so that a ban has the consequences one would expect it to have.

comment by Viliam · 2021-10-18T10:31:13.463Z · LW(p) · GW(p)

It might be useful to have a global blacklist somewhere. There could be legal consequences, though, if someone decides to sue you for libel. (Perhaps the list should only contain the names, not the reasons?)

EDIT: Nevermind. There are more things I would like to say about this, but this is not the right place. Later I may write a separate article explaining the threat model I had in mind.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T15:14:20.825Z · LW(p) · GW(p)

Legal threats matter a great deal for what can be done in a situation like this.

When it comes to a "global blacklist" there's the question of governance: who decides who's on it and who isn't? When it comes to SSC or ACX meetups, the governance question is clear: anybody who's organizing a meetup under those labels should follow Scott's guidance. 

That however only works if that information is communicated to meetup organizers. 

comment by Desrtopa · 2021-10-18T01:55:31.851Z · LW(p) · GW(p)

So, it's been a long time since I actually commented on Less Wrong, but since the conversation is here...

Hearing about this is weird for me, because I feel like, compared to the opinions I heard about him from other people in the community, I kind of... always had uncomfortable feelings about Mike Vassar? And I say this without having had direct personal contact with him except, IIRC, maybe one meetup I attended where he was there and we didn't talk directly, although we did occasionally participate in some of the same conversations online.

 

By all accounts, it sounds like he's always been quite charismatic in person, and this isn't the first time I've heard someone describe him as a "wizard." But empirically, there are some people who're very charismatic who propagate some really bad ideas and whose impacts on the lives of people around them, or on society at large, can be quite negative. As of last I was paying attention to him, I wouldn't have expected Mike Vassar to have that negative an effect on the lives of the people around him, but I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought to be taken. He evoked in a lot of people that feeling of "if these ideas are true, this is really huge," but... there's no shortage of ideas you can say that about, and I was always confused by the degree of credence people gave to the notion that his ideas were worth taking seriously. He always gave me a cult-leaderish impression, in a way that, say, Eliezer never did, in that he encouraged other people to take seriously ideas which I couldn't understand why they didn't treat with more skepticism.

I haven't thought about him in quite some time now, but I still distinctly remember that feeling of "why do these smart people around me take this person so seriously? I just don't see how his explanations of his ideas justify that."

Replies from: vanessa-kosoy, Viliam, CronoDAS
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-18T12:51:26.644Z · LW(p) · GW(p)

I met Vassar once. He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd. Something about his delivery was so captivating that it took me a while to "shake off the fairy dust" and realize just how silly some of his claims were, even when it should have been obvious from the start. Moreover, his worldview seemed heavily based on a paranoid / conspiracy-theory type of thinking. So, yes, I'm not too surprised by Scott's revelations about him.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2021-10-20T03:21:35.720Z · LW(p) · GW(p)

He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd.

Yeah, it definitely didn't work on me. I believe I wrote this thread [LW(p) · GW(p)] shortly after my one-and-only interaction with him, in which he said a lot of things that made me very skeptical but that I couldn't easily refute, or have much time to think about, before he would move on to some other topic. (Interestingly, he actually replied in that thread even though I didn't mention him by name.)

It saddens me to learn that his style of conversation/persuasion "works" on many people who otherwise seem very smart and capable (and even self-selected for caring about being rational). It seems like pretty bad news as far as what kind of epistemic situation humanity is in (e.g., how easily we will be manipulated by even slightly-smarter-than-human AIs / human-AI systems).

Replies from: Wei_Dai, Kenny
comment by Wei Dai (Wei_Dai) · 2021-10-20T07:13:17.052Z · LW(p) · GW(p)

(Interestingly, he actually replied in that thread even though I didn’t mention him by name.)

Oh, this is because the OP that I was replying to [LW · GW] did mention him by name:

One of the things that makes Michael Vassar an interesting person to be around is that he has an opinion about everything. If you locked him up in an empty room with grey walls, it would probably take the man about thirty seconds before he'd start analyzing the historical influence of the Enlightenment on the tradition of locking people up in empty rooms with grey walls.

comment by Kenny · 2021-10-20T03:31:36.640Z · LW(p) · GW(p)
comment by Viliam · 2021-10-18T10:26:40.363Z · LW(p) · GW(p)

I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought to be taken.

Heh, the same feeling here. I didn't have much opportunity to interact with him in person. I remember repeatedly hearing praise about how incredibly smart he is (from people whom I admired), then trying to find something smart written by him, and feeling unimpressed and confused, like maybe I wasn't reading the right texts or I failed to discover the hidden meaning that people smarter than me have noticed.

Hypothesis 1: I am simply not smart enough to recognize his greatness. I can recognize people one level above me, and they can recognize people one level above them, but when I try to understand someone two levels above me, it's all gibberish to me.

Hypothesis 2: He is more persuasive in person than in writing. (But once he has impressed you in person, you will now see greatness in his writing, too. Maybe because of the halo effect. Maybe because now you understand the hidden layers of what he actually meant.) Maybe he is more persuasive in person because he can optimize his message for the receiver, which might be a good thing or a bad thing.

Hypothesis 3: He gives high-variance advice. Some of it amazingly good, some of it horribly wrong. When people take him seriously, some of them benefit greatly, others suffer. Those who benefitted will tell the story. (Those who suffered will leave the community.)

My probability distribution was gradually shifting from 1 to 3.

Replies from: AnnaSalamon, Avi Weiss
comment by AnnaSalamon · 2021-10-19T15:40:19.002Z · LW(p) · GW(p)

Not a direct response to you, but if anyone who hasn't talked to Vassar is wanting an example of Vassar-conversation that may be easier to understand or get some sense from than most examples would (though it'll have a fair bit in it that'll probably still seem false/confusing), you might try Spencer Greenberg's podcast with Vassar.

Replies from: elityre, Avi Weiss
comment by Eli Tyre (elityre) · 2021-10-20T00:14:25.382Z · LW(p) · GW(p)

As a datapoint: I listened to that podcast 4 times, and took notes 3 of those 4 times, to try and clearly parse what he's saying. I certainly did not fully succeed. 

My notes.

It seems like he said some straightforwardly contradictory things? For instance, that strong conflict theorists trust their own senses and feelings more, but also trust them less?

I would really like to understand what he's getting at by the way, so if it is clearer for you than it is for me, I'd actively appreciate clarification.

Replies from: Unreal
comment by Unreal · 2021-10-20T03:04:07.061Z · LW(p) · GW(p)

i tried reading / skimming some of that summary

it made me want to scream 

what a horrible way to view the world / people / institutions / justice 

i should maybe try listening to the podcast to see if i have a similar reaction to that 

Replies from: JenniferRM
comment by JenniferRM · 2021-10-28T18:53:46.927Z · LW(p) · GW(p)

Seeing as how you posted this 9 days ago, I hope you did not bite off more than you could chew, and I hope you do not want to scream anymore.

In Harry Potter the standard practice seems to be to "eat chocolate" and perhaps "play with puppies" after exposure to ideas that are both (1) possibly true, and (2) very saddening to think about.

Then there is Gendlin's Litany [LW · GW] (and please note that I am linking to a critique, not to unadulterated "yay for the litany" ideas) which I believe is part of Lesswrong's canon somewhat on purpose. In the critique there are second and third thoughts along these lines, which I admire for their clarity, and also for their hopefulness.

Ideally [a better version of the Litany] would communicate: “Lying to yourself will eventually screw you up worse than getting hurt by a truth,” instead of “learning new truths has no negative consequences.”

This distinction is particularly important when the truth at hand is “the world is a fundamentally unfair place that will kill you without a second thought if you mess up, and possibly even if you don’t.”

EDIT TO CLARIFY: The person who goes about their life ignoring the universe’s Absolute Neutrality is very fundamentally NOT already enduring this truth. They’re enduring part of it (arguably most of it), but not all. Thinking about that truth is depressing for many people. That is not a meaningless cost. Telling people they should get over that depression and make good changes to fix the world is important. But saying that they are already enduring everything there was to endure, seems to me a patently false statement, and makes your argument weaker, not stronger.

The reason to include the Litany (flaws and all?) in a canon would be specifically to try to build a system of social interactions that can at least sometimes talk about understanding the world as it really is. 

Then, atop this shared understanding of a potentially sad world, the social group with this litany as common knowledge might actually engage in purposive (and "ethical"?) planning processes that will work because the plans are built on an accurate perception of the barriers and risks of any given plan. In theory, actions based on such plans would mostly tend to "reliably and safely accomplish the goals" (maybe not always, but at least such practices might give one an edge) and this would work even despite the real barriers and real risks that stand between "the status quo" and "a world where the goal has been accomplished"... thus, the litany itself:

What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.

And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.

My personal experience, as a person with feelings, is that I can work on "the hot stuff" mostly only in small motions, mostly/usually as a hobby, because otherwise the totalizing implications of some ideas threaten to cause an internal information cascade [LW · GW] that is probably abstractly undesirable, and if the cascade happens it might require the injection of additional cognitive and/or emotional labor of a very unusual sort in order to escape from the metaphorical "gravity well" of perspectives like this, which have internal logic that "makes as if to demand" that the perspective not be dropped, except maybe "at one's personal peril". 

Running away from the first hint of a non-trivial infohazard, especially an infohazard being handled without thoughtful safety measures, is a completely valid response in my book.

Another great option is "talk about it with your wisest and most caring grandparent (or parent)".

Another option is to look up the oldest versions of the idea, and examine their sociological outcomes (good and bad, in a distribution), and consider if you want to be exposed to that outcome distribution. 

Also, you don't have to jump in. You can take baby steps (one per week or one per month or one per year) and re-apply your safety checklist after each step?

Personally, I try not to put "ideas that seem particularly hot" on the Internet, or in conversations, by default, without verifying things about the audience, but I could understand someone who was willing to do so.

However also, I don't consider a given forum to be "the really real forum, where the grownups actually talk"... unless infohazards like this cause people to have some reaction OTHER than traumatic suffering displays (and upvotes of the traumatic suffering display from exposure to sad ideas).

This leads me to be curious about any second thoughts or second feelings you've had, but only if you feel ok sharing them in this forum. Could you perhaps reply with:
  • <silence> (a completely valid response, in my book)
  • "Mu." (that is, being still in the space, but not wanting to pose or commit)
  • "The ideas still make me want to scream, but I can afford emitting these ~2 bits of information." or
  • "I calmed down a bit, and I can think about this without screaming now, and I wrote down several ideas and deleted a bunch of them and here's what's left after applying some filters for safety: <a few sentences with brief short authentic abstractly-impersonal partial thoughts>".

comment by Avi (Avi Weiss) · 2021-10-19T15:47:23.368Z · LW(p) · GW(p)

There are also these 2 podcasts, which cover quite a variety of topics, for anyone who's interested:
You've Got Mel - With Michael Vassar
Jim Rutt Show - Michael Vassar on Passive-Aggressive Revolution

comment by Avi (Avi Weiss) · 2021-10-18T10:31:10.315Z · LW(p) · GW(p)

I haven't seen/heard anything particularly impressive from him either, but perhaps his 'best work' just isn't written down anywhere?

comment by CronoDAS · 2021-10-18T06:44:34.948Z · LW(p) · GW(p)

My impression as an outsider (I met him once and heard and read some things people were saying about him) was that he seemed smart but also seemed like kind of a kook...

comment by jessicata (jessica.liu.taylor) · 2021-12-18T21:00:14.973Z · LW(p) · GW(p)

I have replied to this comment in a top-level post [LW · GW].

comment by lc · 2023-06-09T07:40:32.104Z · LW(p) · GW(p)

When I first met Vassar, it was a random encounter in an experimental group call organized by some small-brand rationalist. He talked for about an hour and automatically became the center of conversation. I typed notes as fast as I could, thinking, “if this stuff is true it changes everything; it’s the [crux] of my life.” (It was true, but I did not realize it immediately.) Randomly, another person found the link, came in and said, “hi”. [Vassar] said “hi”; she said “hi” again, apparently for humor. [Vassar] said something terse I forget, “well if this is what…”, apparently giving up on the venue, and disconnected without further comment. One by one, the other ~10 people besides her, including me, disconnected disappointedly, wordlessly or just about, right after. A wizard was gracing us with his wisdom and she fucked it up. And in my probably-representative case, that was just about the only way I could communicate how frustrated I was at her for that.

Ziz's perspective here gives you a pretty detailed example of how this social trick works (i.e. spontaneously pretend something someone else did was objectionable and use it as an excuse to throw a fit/leave, to make the other person walk on eggshells or chase you).

comment by Dr_Manhattan · 2021-10-19T00:31:15.765Z · LW(p) · GW(p)

Since comments get occluded you should refer to an edit/update somewhere at the top if you want it to be seen by those who already read your original comment.

comment by Yoav Ravid · 2021-10-27T18:32:23.735Z · LW(p) · GW(p)

Is this the highest rated comment on the site?

comment by mingyuan · 2021-10-19T20:41:49.413Z · LW(p) · GW(p)

Okay, meta: This post has over 500 comments now and it's really hard to keep a handle on all of the threads. So I spent the last 2 hours trying to outline the main topics that keep coming up. Most top-level comments are linked to but some didn't really fit into any category, so a couple are missing; also apologies that the structure is imperfect.

Topic headers are bolded and are organized very roughly in order of how important they seem (both to me personally and in terms of the amount of air time they've gotten). 

Replies from: Ruby
comment by Ruby · 2021-10-19T20:56:14.153Z · LW(p) · GW(p)

This is hugely helpful, a great community service! Thanks so much, mingyuan.

comment by Aella · 2021-10-17T21:35:28.698Z · LW(p) · GW(p)

I find that something in me really revolts at this post, so epistemic status… not-fully-thought-through-emotions-are-in-charge?

Full disclosure: I am good friends with Zoe; I lived with her for the four months leading up to her post, and was present to witness a lot of her processing and pain. I’m also currently dating someone named in this post, but my reaction to this was formed before talking with him.

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away. If the points in the post felt more compelling, then I’d probably be more down for an argument of “we should bin these together and look at this as a whole”, but as it stands the stuff listed in here feels like it’s describing something significantly less damaging, and of a different kind of damage. I’m also annoyed that this post relies so heavily on Zoe’s, and the comparison feels like it cheapens what Zoe went through. I keep having a recurring thought that the author must have utterly failed to understand the intensity of the very direct impact from Leverage’s operations on Zoe. Most of my emotions here come from a perception that this post is actively hurting a thing I value.

Second, I suspect this post makes a crucial mistake in mistaking symptoms for the cause. Or, rather, I think there’s a core inside of what made Leverage damaging, and it’s really really hard to name it. Zoe’s post seemed like a good effort to triangulate it, but this above post feels like it focuses on the wrong things, or a different brand of analogous things, without understanding the core of what Zoe was trying to get at. Missing the core of the post is an easy mistake to make, given how it's really hard to name directly, but in this case I'm particularly sensitive to the analogy seeming superficial, given how much this post seems to be relying on Zoe's post for validation.

One example for this is comparing Zoe’s mention of someone at Leverage having a psychotic break to the author having a psychotic break. But Zoe’s point was that Leverage treated the psychotic break as an achievement, not that the psychotic break happened. 

Third, and I think this has been touched on by other comments, is that this post feels… sort of dishonest to me? I feel like something is trying to get slipped into my brain without me noticing. Lots of parts of the post sort of implicitly present things as important, or ask you to draw conclusions without explicitly pointing out those conclusions. I might be… overfitting or trying to see a thing because I’m emotionally charged, but I’m gonna attempt to articulate the thing anyway:

For example, the author summarizes Zoe as saying that Leverage considered Geoff Anders to be extremely special, e.g. Geoff being possibly a better philosopher than Kant.

In Zoe’s post, her actual quote is of a Leverage person saying “I think there’s good reason to believe Geoff is the best philosopher who’s ever lived, better than Kant. I think his existence on earth right now is an historical event.” 

This is small but an actually important difference, and has the effect of slightly downplaying Leverage.

The author here then goes on to say that she doesn’t remember anyone saying Eliezer was a better philosopher than Kant, but that she guesses people would say this, and then points out probably nobody at MIRI read Kant. 

The effect of this is it asks the reader to associate perception of Eliezer’s status with Geoff’s (both elevated) by drawing the comparison of Kant to Eliezer (that hadn’t actually been drawn before), and then implies rationalists being misinformed (not reading Kant).

This is arguably a really uncharitable read, and I’m not very convinced it’s ‘true’, but I think the ‘effect’ is true; as in, this is the impression I got when reading quickly the first time. And the impression isn’t supported in the rest of the words, of course - the author says they don’t have reason to believe MIRI people would view Eliezer as more relevant than philosophers they respected, and that nobody there really respected Kant. But the general sense I get from the overall post is this type of pattern, repeated over and over - a sensation of being asked to believe something terrible, and then when I squint the words themselves are quite reasonable. This makes it feel slippery to me, or I feel like I’ve been struck from behind and when I turn around there’s someone smiling as they’re reaching out to shake my hand.

And to be clear, I don’t think all the comparisons are wrong, or that there’s nothing of value here. It can be super hard to sensemake with confusing narrative stuff, and there’s inevitably going to be some clumsiness in attempting to do it. I think it’s worthwhile and important to be paying close attention to the ways organizations might be having adverse effects on their members, particularly in our type of communities, and I support pointing out even small things and don’t want people to feel like they’re making too big a deal out of something that isn’t. But the way this deal is made bothers me, and I feel defensive and have stories in me about this doing more harm than good.

Replies from: mingyuan, Eliezer_Yudkowsky, Ruby, jessica.liu.taylor, Benito, Benquo, 4thWayWastrel, romeostevensit, hg00, farp
comment by mingyuan · 2021-10-20T02:52:30.833Z · LW(p) · GW(p)

I want to note that this post (top-level) now has more than 3x the number of comments that Zoe's does (or nearly 50% more comments than the Zoe+BayAreaHuman posts combined, if you think that's a more fair comparison), and that no one has commented on Zoe's post in 24 hours. [ETA: This changed while I was writing this comment. The point about lowered activity still stands.]

This seems really bad to me — I think that there was a lot more that needed to be figured out wrt Leverage, and this post has successfully sucked all the attention away from a conversation that I perceive to be much more important. 

I keep deleting sentences because I don't think it's productive to discuss how upset this makes me, but I am 100% with Aella here. I was wary of this post to begin with and I feel something akin to anger at what it did to the Leverage conversation.

I had some contact with Leverage 1.0 — had some friends there, interviewed for an ops job there, and was charted a few times by a few different people. I have also worked for both CFAR and MIRI, though never as a core staff member at either organization; and more importantly, I was close friends with maybe 50% of the people who worked at CFAR from mid-2017 to mid-2020. Someone very close to me previously worked for both CFAR and Leverage. With all that backing me up: I am really very confident that the psychological harm inflicted by Leverage was both more widespread and qualitatively different than anything that happened at CFAR or MIRI (at least since mid-2017; I don't know what things might have been like back in, like, 2012). 

The comments section of this post is full of CFAR and MIRI employees attempting to do collaborative truth-seeking. The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!

CFAR and MIRI have their flaws, and several people clearly have legitimate grievances with them. I personally did not have a super great experience working for either organization (though that has nothing to do with anything Jessica mentioned in this post; just run-of-the-mill workplace stuff). Those flaws are worth looking at, not only for the edification of the people who had bad experiences with MIRI and CFAR, but also because we care about being good people building effective organizations to make the world a better place. They do not, however, belong in a conversation about the harm done by Leverage. 

(Just writing a sentence saying that Leverage was harmful makes me feel uncomfortable, feels a little dangerous, but fuck it, what are they going to do, murder me?)

Again, I keep deleting sentences, because all I want to talk about is the depth of my agreement with Aella, and my uncharitable feelings towards this post. So I guess I'll just end here.

Replies from: ChristianKl, Avi Weiss, AnnaSalamon, Viliam, Kenny
comment by ChristianKl · 2021-10-21T10:07:50.774Z · LW(p) · GW(p)

It seems like it's relatively easy for people to share information in the CFAR+MIRI conversation. On the other hand, for those people who actually have the most central information to share in the Leverage conversation, it's not as easy to share it.

In many cases I would expect that private, in-person conversations are needed to progress the Leverage debate, and that just takes time. Those people at Leverage who want to write up their own experience likely benefit from time to do that.

Practically, helping Anna get an overview of the timeline of members and funders, and getting people to share stories with Aella, seems to be the way forward that's largely not about leaving LW comments.

comment by Avi (Avi Weiss) · 2021-10-20T07:52:02.824Z · LW(p) · GW(p)

I agree with the intent of your comment, mingyuan, but perhaps the asymmetry in activity on this post is simply due to the fact that there are an order of magnitude (or several orders of magnitude?) more people with some/any experience and interaction with CFAR/MIRI (especially CFAR) compared to Leverage?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-22T09:13:14.952Z · LW(p) · GW(p)

I think some of it has got to be that it's somehow easier to talk about CFAR/MIRI, rather than a sheer number of people thing. I think Leverage is somehow unusually hard to talk about, such that maybe we should figure out how to be extraordinarily kind/compassionate/gentle to anyone attempting it, or something.

Replies from: Spiracular, Avi Weiss
comment by Spiracular · 2021-10-22T14:23:57.643Z · LW(p) · GW(p)

I agree that Leverage has been unusually hard to talk about bluntly or honestly, and I think this has been true for most of its existence.

I also think the people at the periphery of Leverage are starting to absorb the fact that they systematically had things hidden from them. That may be giving them new pause, before engaging with Leverage as a topic.

(I think that seems potentially fair, and considerate. To me, it doesn't feel like the same concern applies in engaging about CFAR. I also agree that there were probably fewer total people exposed to Leverage, at all.)


...actually, let me give you a personal taste of what we're dealing with?

The last time I chose to talk straightforwardly and honestly about Leverage, with somebody outside of it? I had to hard-override an explicit but non-legal privacy agreement*, to get a sanity check. When I was honest about having done so shortly thereafter, I completely and permanently lost one of my friendships as a result.

Lost-friend says they were traumatized as a result of me doing this. That having "made the mistake of trusting me" hurt their relationships with other Leveragers. That at the time, they wished they'd lied to me, which stung.

I talked with the person I used as a sanity-check recently, and I get the sense that I still only managed to squeeze out ~3-5 sentences of detail at the time.

(I get the sense that I still did manage to convey a pretty balanced account of what was going through my head at the time. Somehow.)


It is probably safer to talk now, than it was then. At least, that's my current view. 2 years' distance, community support, a community that is willing to be more sympathetic to people who get swept up in movements, and a taste of what other people were going through (and that you weren't the only person going through this), do tend to help matters.

(Edit: They've also shared the Ecosystem Dissolution Information Arrangement, which I find a heartening move. They mention that it was intended to be more socially-enforced than legally-binding. I don't like all of their framing around it, but I'll pick that fight later.)

It wouldn't surprise me at all, if most of this gets sorted out privately for now. Depending a bit on how this ends (largely on whether I think this kind of harm is likely to recur or not), I might not even have an objection to that.

But when it comes to Leverage? These are some of the kinds of thoughts and feelings, that I worry we may later see played a role in keeping this quiet.

Replies from: Spiracular, Unreal, Spiracular
comment by Spiracular · 2021-11-01T20:26:07.903Z · LW(p) · GW(p)

I'm finally out about my story here [LW(p) · GW(p)]! But I think I want to explain a bit of why I wasn't being very clear, for a while.

I've been "hinting darkly" in public rather than "telling my full story" due to a couple of concerns:

  1. I don't want to "throw ex-friend under the bus," to use their own words! Even friend's Leverager partner (who they weren't allowed to visit, if they were "infected with objects") seemed more "swept-up in the stupidity" than "malicious." I don't know how to tell my truth, without them feeling drowned out. I do still care about that. Eurgh.

  2. Via models that come out of my experience with Brent: I think this level of silence makes the most sense if some ex-Leveragers did get a substantial amount of good out of the experience (sometimes with none of the bad, sometimes alongside it), and/or if there's a lot of regrettable actions taken by people who were swept up in this at the time, people who would ordinarily be harmless under normal circumstances. I recognize that bodywork was very helpful to my friend, in working through some of their (unrelated) trauma. I am more than a little reluctant to put people through the sort of mob-driven invalidation I felt, in the face of the early intensely-negative community response to the Brent expose?

Surprisingly irrelevant for me: I am personally not very afraid of Geoff! Back when I was still a nobody, I brute-forced my way out of an agonizing amount of social-anxiety through sheer persistence. My social supports range both wide and deep. I have pretty strong honesty policies. I am not currently employed, so even attacking my workplace is a no-go. I'm planning to marry someone cool this January. Truth be told? I pity any fool who tries to character-assassinate me.

...but I know that others are scared of Geoff. I have heard the phrase "Geoff will do anything to win" bandied about so often, that I view it as something of a stereotyped phrase among Leveragers. I am honestly not sure how concerned I actually should be about it! But it feels like evidence of a narrative that I find pretty concerning, although I don't know how this narrative emerged.

comment by Unreal · 2021-10-22T15:56:39.509Z · LW(p) · GW(p)

The last time I chose to talk straightforwardly and honestly about Leverage, with somebody outside of it? I had to hard-override a privacy concern*, to get a sanity check. When I was honest about having done so shortly thereafter, I completely lost one of my friendships as a result.

Lost-friend says they were traumatized as a result of me doing this. That having "made the mistake of trusting me" hurt their relationships with other Leveragers. That at the time, they wished they'd lied to me which stung.

Any thoughts on why this was coming about in the culture? 

If anyone feels that way (like the lost friend) and wants to talk to me about it, I'd be interested in learning more about it. 

comment by Spiracular · 2021-10-22T14:50:02.972Z · LW(p) · GW(p)

* I could tell that this had some concerning toxic elements, and I needed an outside sanity-check. I think under the circumstances, this was the correct call for me. I do not regret picking the particular person I chose as a sanity-check. I am also very sympathetic to other people not feeling able to pull this, given the enormous cost to doing it at the time.

This is not a strong systematic assessment of how I usually treat privacy agreements. My harm-assessment process is usually structured a bit like this [LW(p) · GW(p)], with some additional pressure from an "agreement-to-secrecy," and also factors in the meta-secrecy-agreements around "being able to be held to secrecy agreements" and "being honest about how well you can be held to secrecy agreements."

No, I don't feel like having a long discussion about privacy policies right now. But if you care? My thoughts on information-sharing policy were valuable enough to get me into the 2019 Review.

If you start on this here, I will ignore you.

comment by Avi (Avi Weiss) · 2021-10-22T09:31:03.770Z · LW(p) · GW(p)

The fact that the people involved apparently find it uniquely difficult to talk about is a pretty good indication that Leverage != CFAR/MIRI in terms of cultishness/harms etc.

comment by AnnaSalamon · 2021-10-22T09:09:12.194Z · LW(p) · GW(p)

Yes; I want to acknowledge that there was a large cost here. (I wasn't sure, from just the comment threads; but I just talked to a couple people who said they'd been thinking of writing up some observations about Leverage but had been distracted by this.)

I am personally really grateful for a bunch of the stuff in this post and its comment thread. But I hope the Leverage discussion really does get returned to, and I'll try to lend some momentum that way. Hope some others do too, insofar as some can find ways to actually help people put things together or talk.

comment by Viliam · 2021-10-20T10:34:56.576Z · LW(p) · GW(p)

Seems to me that, given the current situation, it would probably be good to wait maybe two more days until this debate naturally reaches the end. And then restart the debate about Leverage.

Otherwise, we risk having two debates running in parallel, interfering with each other.

The comments section of this post is full of CFAR and MIRI employees attempting to do collaborative truth-seeking. The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!

Then it is good that this debate happened. (Despite my shock when I saw it first.) It's just the timing with regards to the debate about Leverage that is unfortunate.

Replies from: Puxi Deek
comment by Puxi Deek · 2021-10-20T11:15:52.965Z · LW(p) · GW(p)

When everyone knows everyone else, it's more like Facebook than, say, Reddit. I don't know why so many real-life organizations are basing their discussions on these open forums online. Maybe they want to attract more people to think about certain problems. Maybe they want to spread their genes. Either way, normal academic research doesn't involve knocking on people's doors and asking them if they are interested in doing such and such research. To a less extreme degree, they don't even ask their family and friends to join their research circle. When you befriend your coworkers in the corporate world, things can get real messy real quick, depending on to what extent they are involved/interfering with your life outside of work. Maybe that's why they are distinguishing themselves from your typical workplace.

Replies from: Viliam
comment by Viliam · 2021-10-20T15:31:11.031Z · LW(p) · GW(p)

MIRI and CFAR are non-profits; they need to approach fundraising and talent-seeking differently than universities or for-profit corporations.

In addition, neither of them is a pure research institution. MIRI's mission includes making people who work on AI, or make important decisions about AI, aware of the risks involved. CFAR's mission includes teaching rationality techniques. Both of them require communication with the public.

This doesn't explain all the differences, but at least some of them.

comment by Kenny · 2021-10-20T03:09:16.810Z · LW(p) · GW(p)

The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!

So much of this on this site, it's incredible. Makes me wonder if people are consciously doing it. If they are, then why would they even join this cult in the first place? Personally, I've observed that the people who easily join cults are rather impressionable. Even my wife got duped by a couple of middle-aged men. It's a different type of intelligence and skill set than the stuff they employ at colleges and research institutions.

Replies from: Viliam
comment by Viliam · 2021-10-20T11:00:20.256Z · LW(p) · GW(p)

Uhh. Sadly, this attitude is quite common, so I will try to explain. Some people are in general more gullible or easier to impress, yes. But that is just a part of the equation. The remaining parts are:

  • everyone is more vulnerable to manipulation that is compatible with their already existing opinions and desires;
  • people are differently vulnerable at different moments of their lives, so it's a question of luck whether you encounter the manipulation at your strongest or weakest moment;
  • the environment can increase or decrease your resistance: how much free time you have, how many people make a coordinated effort to convince you, whether you have enough opportunity to meet other people or stay alone and reflect on what is happening, whether something keeps you worried and exhausted, etc.

So, some people might easily believe in Mother Gaia, but never in Artificial Intelligence, for other people it is the other way round. You can manipulate some people by appealing to their selfish desires, other people by appealing to their feelings of compassion.

Many people are just lucky that they never met a manipulative group targeting specifically their weaknesses, exactly at a vulnerable moment of their lives. It is easy to laugh at people whose weaknesses are different from yours, when they fail in a situation that exploits their weaknesses.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-17T23:30:15.458Z · LW(p) · GW(p)

By way of narrowing down this sense, which I think I share, if it's the same sense: leaving out the information from Scott's comment [LW(p) · GW(p)] about a MIRI-opposed person who is advocating psychedelic use and causing psychotic breaks in people, and particularly this person talks about MIRI's attempts to have any internal info compartments as a terrible dark symptom of greater social control that you need to jailbreak away from using psychedelics, and then those people have psychotic breaks - leaving out this info seems to be not something you'd do in a neutrally intended post written from a place of grave concern about community dynamics.  It's taking the Leverage affair and trying to use it to make a point, and only including the info that would make that point, and leaving out info that would distract from that point.  And I'm not going to posture like that's terribly bad inhuman behavior, but we can see it and it's okay to admit to ourselves that we see it.

And it's also okay for somebody to think that the original Leverage affair needed to be discussed on its own terms, and not be carefully reframed in exactly the right way to make a point about a higher-profile group the author wanted to discuss instead; or to think that Leverage did a clearly bad thing, and we need to have norms against that clearly bad thing and finish up on making those norms before it's proper for anyone to reframe the issue as really being about a less clear bad thing somewhere higher-profile; and then this post is going against that and it's okay for them to be unhappy about that part.

Replies from: Benquo
comment by Benquo · 2021-10-18T01:28:28.984Z · LW(p) · GW(p)

not something you'd do in a neutrally intended post written from a place of grave concern about community dynamics

 

I'm not going to posture like that's terribly bad inhuman behavior, but we can see it and it's okay to admit to ourselves that we see it

These have the tone of allusions to some sort of accusation, but as far as I can tell you're not actually accusing Jessica of any transgression here, just saying that her post was not "neutrally intended," which - what would that mean? A post where Gricean implicature was not relevant?

Can you clarify whether you meant to suggest Jessica was doing some specific harmful thing here or whether this tone is unendorsed?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-18T02:52:20.638Z · LW(p) · GW(p)

Okay, sure.  If what Scott says is true, and it matches my recollections of things I heard earlier - though I can attest to very little of it from my own direct observation - then it seems like this post was written with knowledge of things that would make the overall story arc it showed look very different, and those things were deliberately omitted.  This is more manipulation than I myself would personally consider okay to use in a situation like this one, though I am ever mindful of Automatic Norms and the privilege of being more verbally facile than others in which facts I can include but still make my own points.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:53:59.000Z · LW(p) · GW(p)

See Zack's reply here [LW(p) · GW(p)] and mine here [LW(p) · GW(p)]. Overall I didn't think the amount of responsibility was high enough for this to be worth mentioning.

comment by Ruby · 2021-10-17T22:17:50.976Z · LW(p) · GW(p)

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away...

I want to second this reaction (basically your entire second paragraph). I have been feeling the same but hadn't worked up the courage to say it.

Replies from: Freyja, Viliam, Eliezer_Yudkowsky
comment by Freyja · 2021-10-18T16:58:00.273Z · LW(p) · GW(p)

I am also mad at what I see to be piggybacking on Zoe’s post, downplaying of the harms described in her post, and a subtle redirection of collective attention away from potentially new, timid accounts of things that happened to a specific group of people within Leverage and seem to have a lot of difficulty talking about it.

I hope that the sustained collective attention required to witness, make sense of and address the accounts of harm coming out of the psychology division of Leverage doesn’t get lost as a result of this post being published when it was.

comment by Viliam · 2021-10-18T10:40:08.271Z · LW(p) · GW(p)

For a moment I actually wondered whether this was a genius-level move by Leverage, but then I decided that I am just being paranoid. But it did derail the previous debate successfully.

On the positive side, I learned some new things. Never heard about Ziz before, for example.

EDIT:

Okay, this is probably silly, but... there is no connection between the Vassarites and Leverage, right? I just realized that my level of ignorance does not justify me dismissing a hypothesis so quickly. And of course, everyone knows everyone, but there are different levels of "knowing people", and... you know what I mean, hopefully. I will defer to judgment of people from Bay Area about this topic.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-19T07:24:03.032Z · LW(p) · GW(p)

Outside of "these people probably talked to each other like once every few months" I think there is no major connection between Leverage and the Vassarites that I am aware of.

Replies from: Viliam
comment by Viliam · 2021-10-19T08:56:55.120Z · LW(p) · GW(p)

Thanks.

I mostly assumed this; I suppose in the opposite case someone probably would have already mentioned that. But I prefer to have it confirmed explicitly.

comment by jessicata (jessica.liu.taylor) · 2021-10-17T21:51:21.078Z · LW(p) · GW(p)

The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away.

I'm assuming that sensemaking is easier, rather than harder, with more relevant information and stories shared. I guess if it's pulling the spotlight away, it's partially because it's showing relevant facts about things other than Leverage, and partially because people will be more afraid of scapegoating Leverage if the similarities to MIRI/CFAR are obvious. I don't like scapegoating, so I don't really care if it's pulling the spotlight away for the second reason.

If the points in the post felt more compelling, then I’d probably be more down for an argument of “we should bin these together and look at this as a whole”, but as it stands the stuff listed in here feels like it’s describing something significantly less damaging, and of a different kind of damage.

I don't really understand what Zoe went through, just reading her post (although I have talked with other ex-Leverage people about the events). You don't understand what I went through, either. It was really, really psychologically disturbing. I sound paranoid writing what I wrote, but this paranoia affected so many people. What I thought was a discourse community broke down into low-trust behavior and gaslighting, and I feared violence. Someone outside the central Berkeley community just messaged me saying it's really understandable that I'd fear retribution given how important the relevant people thought the project was; it was a real risk.

Or, rather, I think there’s a core inside of what made Leverage damaging, and it’s really really hard to name it.

I'm really interested in the core being described better in the Leverage case. It would be unlikely that large parts of such a core wouldn't apply to other cases, even if not to MIRI/CFAR specifically. I know I haven't done the best job I could have of nailing down what was fucky about the MIRI/CFAR environment in 2017, but I've tried harder (in the online space) than anyone but Ziz, AFAICT.

This is small but an actually important difference, and has the effect of slightly downplaying Leverage.

I agree, will edit the post accordingly. I do think the fact that people were saying we wouldn't have a chance to save the world without Eliezer shows that they consider him extremely historically special.

But the general sense I get from the overall post is this type of pattern, repeated over and over—a sensation of being asked to believe something terrible, and then when I squint the words themselves are quite reasonable.

Sorry, it's possible that I'm writing not nearly as clearly as I could, and the stress of what happened might contribute some to that. But it's hard for me to identify how I'm communicating unclearly from your or Logan's description, which are both pretty vague.

But the way this deal is made bothers me, and I feel defensive and have stories in me about this doing more harm than good.

I appreciate that you're communicating about your defensiveness and not just being defensive without signalling that.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T08:14:53.616Z · LW(p) · GW(p)

I don't really understand what Zoe went through, just reading her post (although I have talked with other ex-Leverage people about the events). You don't understand what I went through, either. It was really, really psychologically disturbing. I sound paranoid writing what I wrote, but this paranoia affected so many people. 

It would probably have been better if you had focused on your experience and dropped all of the talk about Zoe from this post. That would make it easier for the reader to just take the information value from your experience.

I think that your post is still valuable information, but the added narrative layer makes it harder to interact with than it would have been if it had focused more on your experience.

comment by Ben Pace (Benito) · 2021-10-17T23:19:21.647Z · LW(p) · GW(p)

One example for this is comparing Zoe’s mention of someone at Leverage having a psychotic break to the author having a psychotic break. But Zoe’s point was that Leverage treated the psychotic break as an achievement, not that the psychotic break happened. 

From the quotes in Scott's comment [LW · GW], it seems to me that Michael Vassar also treated Jessica's and Ziz's psychoses as an achievement.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2021-10-18T03:28:11.893Z · LW(p) · GW(p)

it seems to me also the case that Michael Vassar also treated Jessica's [...] psycho[sis] as an achievement

Objection: hearsay. How would Scott know this? (I wrote a separate reply about the ways in which I think Scott's comment is being unfair.) [LW(p) · GW(p)] As some closer-to-the-source counterevidence against the "treating as an achievement" charge, I quote a 9 October 2017 2:13 p.m. Signal message in which Michael wrote to me:

Up for coming by? I'd like to understand just how similar your situation was to Jessica's, including the details of her breakdown. We really don't want this happening so frequently.

(Also, just, whatever you think of Michael's many faults, very few people are cartoon villains [LW · GW] that want their friends to have mental breakdowns.)

Replies from: Benito
comment by Ben Pace (Benito) · 2021-10-18T03:37:33.544Z · LW(p) · GW(p)

Thanks for the counter-evidence.

comment by Benquo · 2021-10-17T22:44:18.388Z · LW(p) · GW(p)

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away.

If we're trying to solve problems rather than attack the bad people, then the boundaries of the discussion should be determined by the scope of the problem, not by which people we're saying are bad. If you're trying to attack the bad people without standards or a theory of what's going on, that's just mob violence.

Replies from: Aella, Benito
comment by Aella · 2021-10-17T23:17:41.996Z · LW(p) · GW(p)

I... think I am trying to attack the bad people? I'm definitely conflict-oriented around Leverage; I believe that on some important level treating that organization or certain people in it as good-intentioned-but-misguided is a mistake, and a dangerous one. I don't think this is true for MIRI/CFAR; as is summed up pretty well in the last section of Orthonormal's post here. [LW(p) · GW(p)] I'm down for the boundaries of the discussion being determined by the scope of the problem, but I perceive the original post here to be outside the scope of the problem. 

I'm also not sure how to engage with your last sentence. I do have theories for what is going on (but regardless I'm not sure if you give a mob a theory that makes it not a mob).

Replies from: Benquo, Benquo, Benquo
comment by Benquo · 2021-10-18T18:05:05.386Z · LW(p) · GW(p)

This is explicitly opposed to Zoe's stated intentions.

Other people, including me and Jessica, also want to reveal and discuss bad behavior, but don't consent to violence in the name of our grievances.

Agnes Callard's article is relevant here: I Don’t Want You to ‘Believe’ Me. I Want You to Listen.

We want to reveal problems so that people can try to understand and solve those problems. Transforming an attempted discussion of abuse into a scapegoating movement silences victims, preventing others from trying to interpret and independently evaluate the content of what they are saying, simplifying it to a bid to make someone the enemy.

Historically, the idea that instead of trying to figure out which behaviors are bad and police them, we need to try to quickly attack the bad people, is how we get Holocausts and Stalinist purges. In this case I don't see any upside.

Replies from: Aella, Unreal
comment by Aella · 2021-10-18T18:35:49.302Z · LW(p) · GW(p)

I perceive you as doing a conversational thing here that I don't like, where you like... imply things about my position without explicitly stating them? Or talk from a heavy frame that isn't explicit? 

  1. Which stated intentions? Where she asks people 'not to bother those who were there'? What thing do you think I want to do that Zoe doesn't want me to do? 
  2. Are you claiming I am advocating violence? Or simply implying it?
  3. Are you trying to argue that I shouldn't be conflict oriented because Zoe doesn't want me to be? The last part feels a little weird for someone to tell me, as I'm good friends with Zoe and have talked with her extensively about this.
  4. I support revealing problems so people can understand and solve them. I also don't like whatever is happening in this original article due to reasons you haven't engaged with.
  5. You're saying transforming an attempt to discuss abuse into scapegoating silences victims, keeps other ppl from evaluating the content, and simplifies it to a bid to make someone the enemy. But in the comment you were responding to, I was talking about Leverage, not the author of this post. I view Leverage and co. as bad actors, but you sort of... reframe it to make it sound like I'm using a conflict mindset towards Jessica?
  6. You're also not engaging with the points I made, and you're responding to arguments I don't condone.

I don't really view you as engaging in good faith at this point, so I'm precommitting not to respond to you after this.

comment by Unreal · 2021-10-19T01:28:09.448Z · LW(p) · GW(p)

Flagging that... I somehow want to simultaneously upvote and downvote Benquo's comment here. 

Upvote because I think he's standing for good things. (I'm pretty anti-scapegoating, especially of the 'quickly' kind that I think he's concerned about.) 

Downvote because it seems weirdly in the wrong context, like he's trying to punch at some kind of invisible enemy. His response seems incongruous with Aella's actual deal.  

I have some probability on miscommunication / misunderstanding. 

But also ... why ? are you ? why are your statements so 'contracting' ? Like they seem 'narrowizing' of the discussion in a way that seems like it philosophically tenses with your stated desire for 'revealing problems'. And they also seem weirdly 'escalate-y' like somehow I'm more tense in my body as I read your comments, like there's about to be a fight? Not that I sense any anger in you, but I sense a 'standing your ground' move that seems like it could lead to someone trying to punch you because you aren't budging. 

This is all metaphorical language for what I feel like your communication style is doing here. 

Replies from: Benquo
comment by Benquo · 2021-10-19T21:19:10.556Z · LW(p) · GW(p)

Thanks for separating evaluation of content from evaluation of form. That makes it easy for me to respond to your criticism of my form without worrying so much that it's a move to suppress imperfectly expressed criticism.

The true causal answer is that when I perceive someone as appealing to a moralistic framework, I have a tendency to criticize their perspective from inside a moralistic frame, even though I don't independently endorse moralizing. While this probably isn't the best thing I could do if I were perfectly poised, I don't think this is totally pointless either. Attempts to scapegoat someone via moralizing rely on the impression that symmetric moral reasoning is being done, so they can be disrupted by insistent opposition from inside that frame.

You might think of it as standing in territory I think someone else has unjustly claimed, and drawing attention to that fact. One might get punched sometimes in such circumstances, but that's not so terrible; definitely not as bad as being controlled by fear, and it helps establish where recourse/justice is available and where it isn't, which is important information to have! Occasionally bright young people with a moral compass get in touch with me because they can see that I'm conspicuously behaving in a not-ethically-backwards way in proximity to something interesting but sketchy that they were considering getting involved with. Having clear examples to point to is helpful, and confrontation produces clear examples.

A contributing factor is that I (and I think Jessica too) felt time pressure here because it seems to me like there is an attempt to build social momentum against a specific target, which transforms complaints from complementary contributions to a shared map, into competing calls for action. I was seriously worried that if I didn't interrupt that process, some important discourse opportunities would be permanently destroyed. I endorse that concern.

Replies from: Unreal
comment by Unreal · 2021-10-19T22:00:39.940Z · LW(p) · GW(p)

The true causal answer is that when I perceive someone as appealing to a moralistic framework, I have a tendency to criticize their perspective from inside a moralistic frame, even though I don't independently endorse moralizing.

o

hmmm, well i gotta chew on that more but

Aella seems like a counter-productive person to stand your ground against. I sense her as mainly being an 'advocate' for Zoe. She claims wanting to attack the bad people, but compared with other commenters, I sense less 'mob violence' energy from her and ... maybe more fear that an important issue will be dropped / ignored. (I am not particularly afraid of this; the evidence against Leverage is striking and damning enough that it doesn't seem like it will readily be dropped, even if the internet stops talking about it. In fact I hope to see the internet talking about it a bit less, as more real convos happen in private.) 

I'm a bit worried about the way Scott's original take may have pulled us towards a shared map too quickly. There's also a general anti-jessicata vibe I'm getting from 'the room' but it's non-specific and has a lot to do with karma vote patterns. Naming these here for the sake of group awareness and to note I am with you in spirit, not an attempt to add more politics or fighting. 

I was seriously worried that if I didn't interrupt that process, some important discourse opportunities would be permanently destroyed. I endorse that concern.

Hmmmm I feel like advocating for a slightly different mental stance. Instead of taking it upon yourself to interrupt a process in order to gain a particular outcome, what if you did a thing in a way that inspires people to follow because you're being a good role model? If you're standing for what's right, it can inspire people into also doing the right thing. And if no one follows you, you accept that as the outcome; rather than trying to 'make sure' something happens? 

Attachment to an outcome (like urgently trying to avoid 'opportunities being permanently destroyed') seems like it subtly disempowers people and perpetuates more of the pattern that I think we both want less of in the world? Checking to see where a disagreement might be found... 

Replies from: Benquo
comment by Benquo · 2021-10-20T00:02:20.253Z · LW(p) · GW(p)

I think it seems hard to find a disagreement because we don't disagree about much here.

Aella seems like a counter-productive person to stand your ground against. I sense her as mainly being an ‘advocate’ for Zoe. She claims wanting to attack the bad people, but compared with other commenters, I sense less ‘mob violence’ energy from her

Aella was being basically cooperative in revealing some details about her motives, as was Logan. But that behavior is only effectively cooperative if people can use that information to build shared maps. I tried to do that in my replies, albeit imperfectly & in a way that picked a bit more of a fight than I ideally would have.

I feel like advocating for a slightly different mental stance. Instead of taking it upon yourself to interrupt a process in order to gain a particular outcome, what if you did a thing in a way that inspires people to follow because you’re being a good role model?

At leisure, I do this. I'm working on a blog post trying to explain some of the structural factors that cause orgs like Leverage to go wrong in the way Zoe described. I've written extensively about both scapegoating and mind control outside the context of particular local conflicts, and when people seem like they're in a helpable state of confusion I try to help them. I spent half an hour today using a massage gun on my belly muscles, which improved my reading comprehension of your comment and let me respond to it more intelligently.

But I'm in an adversarial situation. There are optimizing processes trying to destroy what I'm trying to build, trying to threaten people into abandoning their perspectives and capitulating to violence.

It seems like you're recommending that I build new capacities instead of defending old ones. If I'm deciding between those, I shouldn't always get either answer. Instead, for any process damaging me, I should compare these two quantities:

(A) The cost of replacement - how much would it cost me to repair the damage or build an equivalent amount of capacity elsewhere?

(B) The cost of preventing the damage.

I should work on prevention when B<A, and on building when A<B.

Since I expect my adversaries to make use of resources they seize to destroy more of what I care about, I need to count that towards the total expected damage caused (and therefore the cost of replacement).

If I'd been able to costlessly pause the world for several hours to relax and think about the problem, I would almost certainly have been able to write a better reply to Aella, one that would score better on the metric you're proposing, while perhaps still accomplishing my "defense" goals.

I'm taking Tai Chi lessons in large part because I think ability to respond to fights without getting triggered is a core bottleneck for me, so I'm putting many hours of my time into being able to perform better on that metric. But I'm not better yet, and I've got to respond to the situations I'm in now with the abilities I've got now.

Replies from: Unreal
comment by Unreal · 2021-10-20T02:45:23.172Z · LW(p) · GW(p)

Well I feel somewhat more relaxed now, seeing that you're engaging in a pretty open and upfront manner. I like Tai Chi :) 

The main disagreement I see is that you are thinking strategically and in a results-oriented fashion about actions you should take; you're thinking about things in terms of resource management and cost-benefit analysis. I do not advocate for that. Although I get that my position is maybe weird? 

I claim that kind of thinking turns a lot of situations into finite games. Which I believe then contributes to life-ending / world-ending patterns. 

... 

But maybe a more salient thing: I don't think this situation is quite as adversarial as you're maybe making it out to be? Or like, you seem to be adding a lot to an adversarial atmosphere, which might be doing a fair amount of driving towards more adversarial dynamics in the group in general. 

I think you and I are not far apart in terms of values, and so ... I kind of want to help you? But also ... if you're attached to certain outcomes being guaranteed, that's gonna make it hard... 

Replies from: Benquo
comment by Benquo · 2021-10-20T04:15:53.683Z · LW(p) · GW(p)

I don't understand where guarantees came into this. I don't understand how I could answer a question of the form "why did you do X rather than Y" without making some kind of comparison of the likely outcomes of X and Y.

I do know that in many cases people falsely claim to be comparing costs and benefits honestly, or falsely claim that some resource is scarce, as part of a strategy of coercion. I have no reason to do this to myself but I see many people doing it and maybe that's part of what turned you off from the idea.

On the other hand, there's a common political strategy where a dominant coalition establishes a narrative that something should be provided universally without rationing, or that something should be absolutely prevented without acknowledging taboo tradeoffs. Since this policy can't be implemented as stated, it empowers people in the position to decide which exceptions to make, and benefits the kinds of people who can get exceptions made, at the expense of less centrally connected people.

It seems to me like thinking about tradeoffs is the low-conflict alternative to insisting on guaranteed outcomes.

Generalizing from your objection to thinking about things in terms of resource management and cost-benefit analysis and your reaction to Eli's summary of Michael and Spencer's podcast [LW(p) · GW(p)], it seems like you're experiencing a strong aversion (though not an infinitely strong one, since you said you might try listening to the podcast) to assimilating information about conflict or resource constraints, which will make it hard for you to understand behaviors determined by conflicts or resource constraints, which is a LOT of behavior.*

If you can point out specific mistakes I'm making, or at least try to narrow down your sense that I'm falsely assuming adversariality, we can try to discuss it.


  • But not all. Sexual selection seems like a third thing, though it might only be common because it helps evolution find solutions to the other two - it would be surprising to see a lot of sexual selection across many species on a mature planet if it didn't pay rent somehow.
Replies from: Unreal
comment by Unreal · 2021-10-20T17:06:20.577Z · LW(p) · GW(p)

Uhhh sorry, the thing about 'guarantees' was probably a mis-speak. 

For reference, I used to be a competitive gamer. This meant I used to use resource management and cost-benefit analysis a lot in my thinking. I also ported those framings into broader life, including how to win social games. I am comfortable thinking in terms of resource constraints, and lived many years of my life in that mode. (I was very skilled at games like MTG, board games, and Werewolf/Mafia.) 

I have since updated to realize how that way of thinking was flawed and dissociated from reality.

I don't understand how I could answer a question of the form "why did you do X rather than Y" without making some kind of comparison of the likely outcomes of X and Y.

I wrote a whole response to this part, but ... maybe I'm missing you. 

Thinking strategically seems fine to the extent that one is aligned with love / ethics / integrity and not acting out of fear, hate, or selfishness. The way you put your predicament caused me to feel like you were endorsing a fear-aligned POV. 

"Since I expect my adversaries to make use of resources they seize to destroy more of what I care about," "But I'm in an adversarial situation. There are optimizing processes trying to destroy what I'm trying to build, trying to threaten people into abandoning their perspectives and capitulating to violence." 

The thing I should have said... was not about the strategy subplot, sorry, ... rather, I have an objection to the seeming endorsement of acting from a fear-aligned place. Maybe I was acting out of fear myself... and failed to name the true objection. 

... 

Those above quotes are the strongest evidence I have that you're assuming adversarial-ness in the situation, and I do not currently know why you believe those quoted statements. Like the phrase about 'adversaries' sounds like you're talking about theoretical ghosts to me. But maybe you have real people in mind. 

I'm curious if you want to elaborate. 

Replies from: Benquo
comment by Benquo · 2021-10-20T17:56:26.032Z · LW(p) · GW(p)

the phrase about ‘adversaries’ sounds like you’re talking about theoretical ghosts to me. But maybe you have real people in mind.

I'm talking about optimizing processes coordinating with copies of themselves, distributed over many people. My blog post Civil Law and Political Drama is a technically precise description of this, though Towards optimal play as Villager in a mixed game adds some color that might be helpful. I don't think my interests are opposed to the autonomous agency of almost anyone. I do think that some common trigger/trauma behavior patterns are coordinating against autonomous human agency.

The gaming detail helps me understand where you're coming from here. I don't think the right way to manage my resource constraints looks very much like playing a game of MtG. I am in a much higher-dimensional environment where most of my time should be spent playing/exploring, or resolving tension patterns that impede me from playing/exploring. My endorsed behavior pattern looks a little more like the process of becoming a good MtG player, or discovering that MtG is the sort of thing I want to get good at. (Though empirically that's not a game it made sense to me to invest in becoming good at - I chose Tai Chi instead for reasons!)

rather, I have an objection to the seeming endorsement of acting from a fear-aligned place.

I endorse using the capacities I already have, even when those capacities are imperfect.

When responding to social conflict, it would almost always be more efficient and effective for me to try to clarify things out of a sense of open opportunity, than from a fear-based motive. This can be true even when a proper decision-theoretic model of the situation would describe it as an adversarial one with time pressure; I might still protect my interests better by thinking in a free and relaxed way about the problem, than tensing up like a monkey facing a physical threat.

But a relaxed attitude is not always immediately available to me, and I don't think I want to endorse always taking the time to detrigger before responding to something in the social domain.

Part of loving and accepting human beings as they are, without giving up on intention to make things better, is appreciating and working with the benefits people produce out of mixed motives. There's probably some irrational fear-based motivation in Elon Musk's and Jeff Bezos's work ethic, and maybe they'd have found more efficient and effective ways to help the world if their mental health were better, but I'm really, really glad I get to use Amazon, and that Tesla and SpaceX and Starlink exist, and it's not clear to me that I'd want to advise younger versions of them to spend a lot of time working on themselves first. That seems like making purity the enemy of the good.

Replies from: Unreal
comment by Unreal · 2021-10-20T18:25:49.533Z · LW(p) · GW(p)

optimizing processes coordinating with copies of themselves, distributed over many people

Question about balance: how do you not end up reifying these in your mind, creating a paranoid sense of 'there be ghosts lurking in shadows' ? 

This question seems central to me because the poison I detect in Vassar-esque-speak is 

a) Memetically more contagious stories seem to include lurking ghosts / demons / shadows because adding a sense of danger or creating paranoia is sticky and salient. Vassar seems to like inserting a sense of 'hidden danger' or 'large demonic forces' into his theories and way of speaking about things. I'm worried this is done for memetic intrigue, viability, and stickiness, not necessarily because it's more true. It makes people want to listen to him for long periods of time, but I don't sense it being an openly curious kind of listening but a more addicted / hungry type of listening. (I can detect this in myself.) 

I guess I'm claiming Vassar has an imbalance between the wisdom/truth of his words and the power/memetic viability of his words. With too much on the side of power. 

b) Reifying these "optimizing processes coordinating" together, maybe "against autonomous human agency" or whatever... seems toxic and harmful for a human mind that takes these very seriously. Unless it comes with ample antidote in the form of (in my world anyway) a deep spiritual compassion / faith and a wisdom-oriented understanding of everyone's true nature, among other things in this vein. But I don't detect Vassar is offering this antidote, so it just feels like poison to me. One might call this poison a deep cynicism, lack of faith / trust, a flavor of nihilism, or "giving into the dark side." 

I do believe Vassar might, in an important sense, have a lot of faith in humanity... but nonetheless, his way of expressing gives off a big stench of everything being somehow tainted and bad. And the faith is not immediately detectable from listening to him, nor do I sense his love. 

I kind of suspect that there's some kind of (adversarial) optimization process operating through his expression, and he seems to have submitted to this willingly? And I am curious about what's up with that / whether I'm wrong about this. 

Replies from: Benquo
comment by Benquo · 2021-10-20T18:48:00.847Z · LW(p) · GW(p)

Question about balance: how do you not end up reifying these in your mind, creating a paranoid sense of ‘there be ghosts lurking in shadows’ ?

Mostly just by trying to think about this stuff carefully, and check whether my responses to it add up & seem constructive. I seem to have been brought up somehow with a deep implicit faith that any internal problem I have, I can solve by thinking about - i.e. that I don't have any internal infohazards. So, once I consciously notice the opportunity, it feels safe to be curious about my own fear, aggression, etc. It seems like many other people don't have this faith, which would make it harder for them to solve this class of problem; they seem to think that knowing about conflicts they're engaged in would get them hurt by making them blameworthy; that looking the thing in the face would mark them for destruction.

My impression is that insofar as I'm paranoid, this is part of the adversarial process I described, which seems to believe in something like ontologically fundamental threats that can't be reduced to specific mechanisms by which I might be harmed, and have to be submitted to absolutely. This model doesn't stand up to a serious examination, so examining it honestly tends to dissolve it.

I've found psychedelics helpful here. Psilocybin seems to increase the conscious salience of fear responses, which allows me to analyze them. In one of my most productive shrooms trips, I noticed that I was spending most of my time pretending to be a reasonable person, under the impression that an abstract dominator wouldn't allow me to connect with other people unless I passed as a simulacrum of a rational agent. I noticed that it didn't feel available to just go to the other room and ask my friends for cuddles because I wanted to, and I considered maybe just huddling under the blankets scared in my bedroom until the trip ended and I became a simulacrum again. Then I decided I had no real incentive to do this, and plenty of incentive to go try to interact with my friends without pretending to be a person, so I did that and it worked.

THC seems to make paranoid thoughts more conscious, which allows me to consciously work through their implications and decide whether I believe them.

I agree that stories with a dramatic villain seem more memetically fit and less helpful, and I avoid them when I notice the option to.

Replies from: Unreal
comment by Unreal · 2021-10-21T15:56:36.830Z · LW(p) · GW(p)

Thanks for your level-headed responses. At this point, I have nothing further to talk about on the object-level conversation (but open to anything else you want to discuss). 

For information value, I do want to flag that... 

I'm noticing an odd effect from talking with you. It feels like being under a weighted blanket or a 'numbing' effect. It's neither pleasant nor unpleasant.

My sketchpad sense of it is: Leaning on the support of Reason. Something wants me to be soothed, to be reassured, that there is Reasonableness and Order, and it can handle things. That most things can be Solved with ... correct thinking or conceptualization or model-building or something. 

So, it's a projection and all, but I don't trust this "thing" whatever it is, much. It also seems to have many advantages. And it may make it pretty hard for me to have a fully alive and embodied conversation with you. 

Curious if any of this resonates with you or with anyone else's sense of you, or if I'm off the mark. But um also this can be ignored or taken offline as well, since it's not adding to the overall conversation and is just an interpersonal thing. 

Replies from: Benquo
comment by Benquo · 2021-10-21T18:04:38.262Z · LW(p) · GW(p)

I did feel inhibited from having as much fun as I'd have liked to in this exchange because it seemed like while you were on the whole trying to make a good thing happen, you were somewhat scared in a triggered and triggerable way. This might have caused the distortion you're describing. Helpful and encouraging to hear that you picked up on that and it bothered you enough to mention.

Replies from: Unreal
comment by Unreal · 2021-10-21T18:23:33.964Z · LW(p) · GW(p)

Your response here is really perplexing to me and didn't go in the direction I expected at all. I am guessing there's some weird communication breakdown happening. ¯\_(ツ)_/¯ I guess all I have left is: I care about you, I like you, and I wish well for you. <3 

Replies from: Benquo
comment by Benquo · 2021-10-23T03:23:17.857Z · LW(p) · GW(p)

It seems like you're having difficulty imagining that I'm responding to my situation as I understand it, and I don't know what else you might think I'm doing.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2021-10-26T20:07:00.966Z · LW(p) · GW(p)

I read the comment you're responding to as suggesting something like "your impression of Unreal's internal state was so different from her own experience of her internal state that she's very confused".

Replies from: Benquo
comment by Benquo · 2021-10-18T01:29:33.538Z · LW(p) · GW(p)
comment by Benquo · 2021-10-18T17:56:42.353Z · LW(p) · GW(p)
comment by Ben Pace (Benito) · 2021-10-21T07:41:03.223Z · LW(p) · GW(p)

What do you think the problem is that Jessica is trying to solve? (I'm also interested in what problem you think Zoe is trying to solve.)

comment by Jarred Filmer (4thWayWastrel) · 2021-10-17T22:11:50.241Z · LW(p) · GW(p)

I empathise with the feeling of slipperiness in the OP; I feel comfortable attributing that to the subject matter rather than malice.

If I had an experience that matched Zoe's to the degree jessicata's did (superficially or otherwise), I'd feel compelled to post it. I found it helpful for the question of whether "insular rationalist group gets weird and experiences rash of psychotic breaks" is a community problem, or just a problem with a stray dude.

Replies from: Aella
comment by Aella · 2021-10-17T23:32:10.564Z · LW(p) · GW(p)

Scott's comment [LW(p) · GW(p)] does seem to verify the "insular rationalist group gets weird and experiences rash of psychotic breaks" trend, but it seems to be a different group than the one named in the original post.

comment by romeostevensit · 2021-10-18T06:34:02.414Z · LW(p) · GW(p)

One of the things that can feel like gaslighting in a community that attracts highly scrupulous people is when posting about your interpretation of your experience is treated as a contractual obligation to defend the claims and discuss any possible misinterpretations or consequences of what is a challenging thing to write in the first place.

Replies from: Aella, Duncan_Sabien
comment by Aella · 2021-10-18T18:17:36.224Z · LW(p) · GW(p)

I feel like here, and in so many other comments in this discussion, there are important and subtle distinctions being missed. I don't have any intention to conditionlessly accept and support all accusations made (I have seen false accusations cause incredible harm and suicidality in people close to me). I do expect people who make serious claims about organizations to be careful about how they do it. I think Zoe's Leverage post easily met my standard, but that this post here triggered a lot of warning flags for me, and I find it important to pay attention to those.

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-18T06:45:19.020Z · LW(p) · GW(p)

Speaking of highly scrupulous...

I think that the phrases "treated as a contractual obligation" and "any possible misinterpretations or consequences" are both hyperbole, if they are (as they seem) intended as fair summaries or descriptions of what Aella wrote above.

I think there's a skipped step here, where you're trying to say that what Aella wrote above might imply those things, or might result in those things, or might be tantamount to those things, but I think it's quite important to not miss that step.

Before objecting to Aella's [A] by saying "[B] is bad!" I think one should justify or at least explicitly assert [A → B].

Replies from: romeostevensit
comment by romeostevensit · 2021-10-18T15:49:40.383Z · LW(p) · GW(p)

Yes, and to clarify, I am not attempting to imply that there is something wrong with Aella's comment. It's more that this is a pattern I have observed and talked about with others. I don't think people playing a part in a pattern that has some negative side effects should necessarily have a responsibility frame around that, especially given that one literally can't track all the possible side effects of one's actions. I see epistemic statuses as partially attempting to give people more affordance for thinking about the possible side effects of the multi-context nature of online communication, and that was used to good effect here; I likely would have had a more negative reaction to Aella's post if it hadn't included the epistemic status.

comment by hg00 · 2021-10-18T09:16:14.389Z · LW(p) · GW(p)

The community still seems in the middle of sensemaking around Leverage

Understanding how other parts of the community were similar/dissimilar to Leverage seems valuable from a sensemaking point of view.

Lots of parts of the post sort of implicitly present things as important, or ask you to draw conclusions without explicitly pointing out those conclusions.

I think you may be asking your reader to draw the conclusion that this is a dishonest way to write, without explicitly pointing out that conclusion :-) Personally, I see nothing wrong with presenting only observations.

comment by farp · 2021-10-18T00:14:55.554Z · LW(p) · GW(p)

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away.

Yeesh. I don't think we should police victims' timing. That seems really evil to me. We should be super skeptical of any attempts to tell people to shut up about their allegations, and "your timing is very insensitive to the real victims" really does not pass the smell test for me.

Replies from: Viliam, Aella, farp
comment by Viliam · 2021-10-18T13:01:15.575Z · LW(p) · GW(p)

Some context, please. Imagine the following scenario:

  • Victim A: "I was hurt by X."
  • Victim B: "I was hurt by Y."

There is absolutely nothing wrong with this, whether it happens the same day, the next day, or a week later. Maybe victim B was encouraged by (reactions to) victim A's message, or maybe it was just a coincidence. Nothing wrong with that either.

Another scenario:

  • Victim A: "I was hurt by X."
  • Victim B: "I was also hurt by X (in a different way, on another day etc.)."

This is a good thing to happen; more evidence, encouragement for further victims to come out.

But this post is different in a few important ways. First, Jessicata piggybacks on Zoe's story a lot, insinuating analogies, but providing very little actual data. (If you rewrote the article to avoid referring to Zoe, it would be 10 times shorter.) Second, Jessicata repeatedly makes comparisons between Zoe's experience at Leverage and her experience at MIRI/CFAR, and usually concludes that Leverage was less bad (for reasons that seem weird to me, such as that their abuse was legible, or that they provided space for people to talk about demons and exorcise them). Here are some quotes:

I want to disagree with a frame that says that the main thing that's bad was that Leverage (or MIRI/CFAR) was a "cult".  This makes it seem like what happened at Leverage is much worse than what could happen at a normal company.  But, having read Moral Mazes and talked to people with normal corporate experience (especially in management), I find that "normal" corporations are often quite harmful to the psychological health of their employees, e.g. causing them to have complex PTSD symptoms, to see the world in zero-sum terms more often, and to have more preferences for things to be incoherent.

Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization.  Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR.

Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community.  While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible.

Leverage definitely had large problems with these discussions, and perhaps tried to reach more intersubjective agreement about them than was plausible (leading to over-reification, as Zoe points out), but they seem less severe than the problems resulting from refusing to have them, such as psychiatric hospitalization and jail time.

Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.

An ex-Leverage person I know comments that "one of the things I give Geoff the most credit for is actually ending the group when he realized he had gotten in over his head. That still left people hurt and shocked, but did actually stop a lot of the compounding harm."  (While Geoff is still working on a project called "Leverage", the initial "Leverage 1.0" ended with most of the people leaving.) This is to some degree happening with MIRI and CFAR, with a change in the narrative about the organizations and their plans, although the details are currently less legible than with Leverage.

I hope that those that think this is "not that bad" (perhaps due to knowing object-level specifics around MIRI/CFAR justifying these decisions) consider how they would find out whether the situation with Leverage was "not that bad", in comparison, given the similarity of the phenomena observed in both cases; such an investigation may involve learning object-level specifics about what happened at Leverage.  I hope that people don't scapegoat; in an environment where certain actions are knowingly being taken by multiple parties, singling out certain parties has negative effects on people's willingness to speak without actually producing any justice.

...uhm, does this sound a bit like a defense of Leverage, or at least like saying "Zoe, your experience in Leverage was not as bad as my experience in MIRI/CFAR"? That is in poor taste, especially when the debate about Zoe's experience hasn't finished yet.

Third, this comparison and downplaying is made even worse by the fact that many of the supposed analogies are not actually that analogous:

  • Zoe had mental trauma after her experience in Leverage. Jessicata had mental trauma after her experience in MIRI/CFAR, and after she started experimenting with drugs, inspired by critics of MIRI/CFAR.
  • Zoe had to sign an NDA, covering a lot of what was happening in Leverage, and now she worries about possible legal consequences of her talking about her abuse. Jessicata didn't have to sign anything... but hey, she was once discouraged from writing a blog post on AI timelines... which is just as bad, except much worse because MIRI/CFAR is less transparent about being evil. (Sorry, I am too sarcastic here; I find it difficult to say these things with a straight face.)
  • Zoe was convinced by Leverage that everything that happened to her was her own fault. Jessicata joined a group of MIRI/CFAR haters who believed that everything was evil but especially MIRI/CFAR, and then she ended up believing that she was evil... yeah, again, fair analogy! Leverage at least tells you openly that you are a loser, but the insidious MIRI/CFAR uses some super complicated plot, manipulating their haters to convince you about the same thing.
  • etc. (I am out of time, and also being sarcastic is against the norms of LW, so I better end here.)

In summary, it is the combination of: piggybacking on another victim's story, making analogies that are not really analogies, and then downplaying the first victim's experience... plus the timing right in the middle of debating the first victim's experience... that makes it so bad.

comment by Aella · 2021-10-18T00:41:46.950Z · LW(p) · GW(p)

I don't think "don't police victims' timing" is an absolute rule; not policing the timing is a pretty good idea in most cases. I think this is an exception. 

And if I wasn't clear, I'll explicitly state my position here: I think it's good to pay close attention to negative effects communities have on its members, and I am very pro people talking about this, and if people feel hurt by an organization it seems really good to have this publicly discussed. 

But I believe the above post did not simply do that. It also did other things: it framed things in ways I perceive as misleading, left out key information relevant to the discussion (as per Eliezer's comment here [LW(p) · GW(p)]), and relied very heavily on Zoe's account of Leverage to bring validity to its own claims, when I perceive Leverage as having been both significantly worse and worse in a different category of way. If the above post hadn't done these things, I don't think I would have any issue with the timing.

Replies from: farp
comment by farp · 2021-10-18T05:51:36.096Z · LW(p) · GW(p)

I hope that other people, when considering whether to come forward with allegations, do not worry about timing or pulling the spotlight away from other victims. Even if they think their allegations might be stupid or low quality (which is in fact a very common fear among victims).

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-18T06:13:12.768Z · LW(p) · GW(p)

Strong downvote for choosing to entirely ignore the points/claims/arguments that Aella laid out, in favor of reiterating your frame with no new detail, as if that were a rebuttal.

Seems like a cheap rhetorical trick designed to say "I'm on the side of the good, and if you disagree with me, well ..."

(Or, more precisely, I predict that if we polled one hundred humans on their takeaway from reading the thread, more than sixty of them would tick "yes" next to "to the best of your ability to judge, was this person being snide/passive-aggressive/trying to imply that Aella doesn't largely agree?"  Which seems pretty lacking in reasonable good faith, coming on the heels of her explicitly stating that not policing timing is a pretty good idea in most cases.)

comment by farp · 2021-10-18T00:16:17.215Z · LW(p) · GW(p)

I really doubt that Zoe takes great comfort in seeing other people getting strung up after making allegations.

Replies from: Aella
comment by Aella · 2021-10-18T00:46:23.330Z · LW(p) · GW(p)

I'm not sure what you're trying to do here - call on Zoe as an authority to disapprove of me? Would it update you at all if the answer was what you doubted?

Replies from: farp
comment by farp · 2021-10-18T05:49:17.418Z · LW(p) · GW(p)

I am making an obvious point that how we treat people who make allegations in one case will affect people's comfort in another case. 

I am not sure what I would conclude if in fact Zoe was glad that Jessica was receiving a negative response, but it would be surprising and interesting, and counter-evidence towards ^

Replies from: Aella
comment by Aella · 2021-10-18T18:22:25.973Z · LW(p) · GW(p)

As I mentioned in my post, I am good friends with Zoe and I sent her my comment here right after I posted it. She approved.

comment by Ben Pace (Benito) · 2021-10-17T00:27:59.345Z · LW(p) · GW(p)

Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness.

Just zooming in on this, which stood out to me personally as a particular thing I'm really tired of. 

If you're not disagreeing with people about important things then you're not thinking. There are many options for how to negotiate a significant disagreement with a colleague, including spending lots of time arguing about it, finding a compromise action, or stopping collaborating with the person (if it's a severe disagreement, which often it can be). But telling someone that by disagreeing they're claiming to be 'better' than another person in some way always feels to me like an attempt to 'control' the speech and behavior of the person you're talking to, and I'm against it.

It happens a lot. I recently overheard someone (who I'd not met before) telling Eliezer Yudkowsky that he's not allowed to have extreme beliefs about AGI outcomes. I don't recall the specific claim, just that EY's probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees with some other thoughtful people on that question, he shouldn't have such confidence.

(At the time, I noticed I didn't have to be around or listen to that person and just wandered away. Poor Eliezer stayed and tried to give a thoughtful explanation for why the argument seemed bad.)

Perhaps more important to my subsequent decisions, the AI timelines shortening triggered an acceleration of social dynamics.

I noticed this too. I thought a bunch of people were affected by it in a sort of herd behavior way (not focused so much on MIRI/CFAR, I'm talking more broadly in the rationality/EA communities). I do think key parts of the arguments about how to think about timelines and takeoff are accurate (e.g. 1 [LW · GW], 2 [LW · GW]), but I feel like many people weren't making decisions because of reasons; instead they noticed their 'leaders' were acting scared and then they also acted scared, like a herd. 

In both the Leverage situation and the AI timelines situation, I felt like nobody involved was really appreciating how much fuckery the information siloing was going to cause (and did cause) to the way the individuals in the ecosystem made decisions.

This was one of the main motivations behind my choice of example in the opening section of my 3.5 yr old post A Sketch of Good Communication [LW · GW] btw (a small thing but still meant to openly disagree with the seeming consensus that timelines determined everything). And then later I wrote about the social dynamics a bunch more [LW(p) · GW(p)] 2yrs ago when trying to expand on someone else's question on the topic.

Replies from: Eliezer_Yudkowsky, peter_hurford, alexander-1, elityre, Viliam
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-17T18:29:27.442Z · LW(p) · GW(p)

I affirm the correctness of Ben Pace's anecdote about what he recently heard someone tell me.

"How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" - is somebody trolling?  Have they never read anything I've written in my entire life?  Do they have no sense, even, of irony?  Yeah, sure, it's harder to be better at some things than me, sure, somebody might be skeptical about that, but then you ask for evidence or say "Good luck proving that to us all eventually!"  You don't be like, "Do you think you're special?"  What kind of bystander-killing argumentative superweapon is that?  What else would it prove?

I really don't know how I could make this any clearer.  I wrote a small book whose second half was about not doing exactly this.  I am left with a sense that I really went to some lengths to prevent this, I did what society demands of a person plus over 10,000% (most people never write any extended arguments against bad epistemology at all, and society doesn't hold that against them), I was not subtle.  At some point I have to acknowledge that other human beings are their own people and I cannot control everything they do - and I hope that others will also acknowledge that I cannot avert all the wrong thoughts that other people think, even if I try, because I sure did try.  A lot.  Over many years.  Aimed at that specific exact way of thinking.  People have their own wills, they are not my puppets, they are still not my puppets even if they have read some blog posts of mine or heard summaries from somebody else who once did; I have put in at least one hundred times the amount of effort that would be required, if any effort were required at all, to wash my hands of this way of thinking.

Replies from: jessica.liu.taylor, Benquo, lsusr, throwaway46237896
comment by jessicata (jessica.liu.taylor) · 2021-10-17T18:49:34.366Z · LW(p) · GW(p)

The irony was certainly not lost on me; I've edited the post to make this clearer to other readers.

comment by Benquo · 2021-10-17T19:06:04.377Z · LW(p) · GW(p)

I'm glad you agree that the behavior Jessica describes is explicitly opposed to the content of the Sequences, and that you clearly care a lot about this. I don't think anyone can reasonably claim you didn't try hard to get people to behave better, or could reasonably blame you for the fact that many people persistently try to do the opposite of what you say, in the name of Rationality.

I do think it would be a very good idea for you to investigate why & how the institutions you helped build and are still actively participating in are optimizing against your explicitly stated intentions. Anna's endorsement of this post seems like reasonably strong confirmation that organizations nominally committed to your agenda are actually opposed to it, unless you're actually checking. And MIRI/CFAR donors seem to for the most part think that you're aware of and endorse those orgs' activities.

When Jessica and another recent MIRI employee asked a few years ago for some of your time to explain why they'd left, your response was:

My guess is that I could talk over Signal voice for 30 minutes or in person for 15 minutes on the 15th, with an upcoming other commitment providing a definite cutoff point and also meaning that it wouldn't otherwise be an uninterrupted day.  That's not enough time to persuade each other things, but I suspect that neither would be a week, and hopefully it's enough that you can convey to me any information you want me to know and don't want to write down.  Again, for framing, this is a sort of thing I basically don't do anymore due to stamina limitations--Nate talks to people, I talk to Nate.

You responded a little bit by email, but didn't seem very interested in what was going on (you didn't ask the kinds of followup questions that would establish common knowledge about agreement or disagreement), so your interlocutors didn't perceive a real opportunity to inform you of these dynamics at that time.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-17T19:22:37.028Z · LW(p) · GW(p)

Anna's endorsement of this post seems like reasonably strong confirmation that organizations nominally committed to your agenda are actually opposed to it,

Presumably Eliezer's agenda is much broader than "make sure nobody tries to socially enforce deferral to high-status figures in an ungrounded way" though I do think this is part of his goals.

The above seems to me like it tries to equivocate between "this is confirmation that at least some people don't act in full agreement with your agenda, despite being nominally committed to it" and "this is confirmation that people are actively working against your agenda". These two really don't strike me as the same, and I really don't like how this comment seems like it tries to equivocate between the two.

Of course, the claim that some chunk of the community/organizations Eliezer created are working actively against some agenda that Eliezer tried to set for them is plausible. But calling the above a "strong confirmation" of this fact strikes me as a very substantial stretch.

Replies from: Benquo
comment by Benquo · 2021-10-18T18:11:06.073Z · LW(p) · GW(p)

It's explicitly opposition to core Sequences content, which Eliezer felt was important enough to write a whole additional philosophical dialogue about after the main Sequences were done. Eliezer's response when informed about it was:

is somebody trolling? Have they never read anything I’ve written in my entire life? Do they have no sense, even, of irony?

That doesn't seem like Eliezer agrees with you that someone got this wrong by accident, that seems like Eliezer agrees with me that someone identifying as a Rationalist has to be trying to get core things wrong to end up saying something like that.

Replies from: Sniffnoy
comment by Sniffnoy · 2021-10-19T08:50:50.169Z · LW(p) · GW(p)

I don't think this follows. I do not see how degree of wrongness implies intent. Eliezer's comment rhetorically suggests intent ("trolling") as a way of highlighting how wrong the person is; he is free to correct me if I am wrong, but I am pretty sure that is not an actual suggestion of intent, only a rhetorical one.

I would say, moreover, that this is the sort of mistake [LW · GW] that occurs, over and over, by default, with no intent necessary. I might even say that it is avoiding, not committing, this sort of mistake that requires intent. Because this sort of mistake is just sort of what people fall into by default, and avoiding it requires active effort.

Is it contrary to everything Eliezer's ever written? Sure! But reading the entirety of the Sequences, calling yourself a "rationalist", does not in any way obviate the need to do the actual work of better group epistemology, of noticing such mistakes (and the path to them) and correcting/avoiding them.

I think we can only infer intent like you're talking about if the person in question is, actually, y'know, thinking about what they're doing. But I think people are really, like, acting on autopilot a pretty big fraction of the time; not autopiloting takes effort, and while doing that work may be what a "rationalist" is supposed to do, it's still not the default. All I think we can infer from this is a failure to do the work to shift out of autopilot and think. Bad group epistemology via laziness rather than via intent strikes me as the more likely explanation.

Replies from: Benquo
comment by Benquo · 2021-10-20T15:50:21.721Z · LW(p) · GW(p)

This seems exactly backwards: if someone makes uncorrelated errors, they are probably unintentional mistakes. If someone makes correlated errors, they are better explained as part of a strategy.

Once is happenstance. Twice is coincidence. Three times is enemy action.

I can imagine, after reading the sequences, continuing to have the epistemic modesty bias in my own thoughts, but I don't see how I could have been so confused as to refer to it in conversation as a valid principle of epistemology.

Replies from: TekhneMakre, elityre, Sniffnoy
comment by TekhneMakre · 2021-10-20T16:35:33.137Z · LW(p) · GW(p)

Behavior is better explained as strategy than as error, if the behaviors add up to push the world in some direction (along a dimension that's "distant" from the behavior, like how "make a box with food appear at my door" is "distant" from "wiggle my fingers on my keyboard"). If a pattern of correlated error is the sort of pattern that doesn't easily push the world in a direction, then that pattern might be evidence against intent. For example, the conjunction fallacy will produce a pattern of wrong probability estimates with a distinct character, but it seems unlikely to push the world in some specific direction (beyond whatever happens when you have incoherent probabilities). (Maybe this argument is fuzzy on the edges, like if someone keeps trying to show you information and you keep ignoring it, you're sort of "pushing the world in a direction" when compared to what's "supposed to happen", i.e. that you update; which suggests intent, although it's "reactive" rather than "proactive", whatever that means. I at least claim that your argument is too general, proves too much, and would be more clear if it were narrower.)

Replies from: Benquo
comment by Benquo · 2021-10-20T18:24:29.106Z · LW(p) · GW(p)

The effective direction the epistemic modesty / argument-from-authority bias pushes things is away from shared narrative as something that dynamically adjusts to new information, and towards shared narrative as a way to identify and coordinate who's receiving instructions from whom.

People frequently make "mistakes" as a form of submission, and it shouldn't be surprising that other types of systematic error function as a means of domination, i.e. of submission enforcement.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-20T23:22:16.984Z · LW(p) · GW(p)

(I indeed find this a more clear+compelling argument and appreciate you trying to make this known.)

comment by Eli Tyre (elityre) · 2023-01-19T15:48:00.583Z · LW(p) · GW(p)

If someone makes correlated errors, they are better explained as part of a strategy.

That doesn't seem right to me.

It seems like very often correlated errors are the result of a mistaken, upstream crux. They're making one mistake, which is flowing into a bunch of specific instances.

This at least has to be another hypothesis, along with "this is a conscious or unconscious strategy to get what they want."

comment by Sniffnoy · 2021-10-20T18:44:10.624Z · LW(p) · GW(p)

This seems exactly backwards: if someone makes uncorrelated errors, they are probably unintentional mistakes. If someone makes correlated errors, they are better explained as part of a strategy.

I mean, there is a word for correlated errors, and that word is "bias"; so you seem to be essentially claiming that people are unbiased? I'm guessing that's probably not what you're trying to claim, but that is what I am concluding? Regardless, I'm saying people are biased towards this mistake.

Or really, what I'm saying is that it's the same sort of phenomenon that Eliezer discusses here [LW · GW]. So it could indeed be construed as a strategy as you say; but it would not be a strategy on the part of the conscious agent, but rather a strategy on the part of the "corrupted hardware" itself. Or something like that -- sorry, that's not a great way of putting it, but I don't really have a better one, and I hope that conveys what I'm getting at.

Like, I think you're assuming too much awareness/agency of people. A person who makes correlated errors, and is aware of what they are doing, is executing a deliberate strategy. But lots of people who make correlated errors are just biased, or the errors are part of a built-in strategy they're executing not deliberately but by default, without thinking about it, one that requires effort not to execute.

We should expect someone calling themself a rationalist to be better, obviously, but, IDK, sometimes things go bad?

I can imagine, after reading the sequences, continuing to have this bias in my own thoughts, but I don't see how I could have been so confused as to refer to it in conversation as a valid principle of epistemology.

I mean people don't necessarily fully internalize everything they read, and in some people the "hold on what am I doing?" can be weak? <shrug>

I mean I certainly don't want to rule out deliberate malice like you're talking about, but neither do I think this one snippet is enough to strongly conclude it.

Replies from: Benquo
comment by Benquo · 2021-10-20T18:53:38.934Z · LW(p) · GW(p)

In most cases it seems intentional but not deliberate. People will resist pressure to change the pattern, or find new ways to execute it if the specific way they were engaged in this bias is effectively discouraged, but don't consciously represent to themselves their intent to do it or engage in explicit means-ends reasoning about it.

Replies from: Sniffnoy
comment by Sniffnoy · 2021-10-26T06:52:13.012Z · LW(p) · GW(p)

Yeah, that sounds about right to me. I'm not saying that you should assume such people are harmless or anything! Just that, like, you might want to try giving them a kick first -- "hey, constant vigilance, remember?" :P -- and see how they respond before giving up and treating them as hostile.

comment by lsusr · 2021-10-17T18:35:33.668Z · LW(p) · GW(p)

"How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" reads to me as something Eliezer Yudkowsky himself would never write.

comment by throwaway46237896 · 2021-10-18T18:53:13.307Z · LW(p) · GW(p)

You also wrote a whole screed about how anyone who attacks you or Scott Alexander is automatically an evil person with no ethics, and walked it back only after backlash and only halfway. You don't get to pretend you're exactly embracing criticism there, Yud - in fact, it was that post that severed my ties to this community for good.

Replies from: ESRogs, hg00
comment by ESRogs · 2021-10-19T14:47:05.003Z · LW(p) · GW(p)

FWIW I believe "Yud" is a dispreferred term (because it's predominantly used by sneering critics), and your comment wouldn't have gotten so many downvotes without it.

Replies from: TurnTrout, None, throwaway46237896
comment by TurnTrout · 2021-10-20T11:22:57.070Z · LW(p) · GW(p)

I strong-downvoted because they didn't bother to even link to the so-called screed. (Forgive me for not blindly trusting throwaway46237896.)

Replies from: hg00, TAG
comment by hg00 · 2021-10-21T01:42:17.546Z · LW(p) · GW(p)

Something I try to keep in mind about critics is that people who deeply disagree with you are also not usually very invested in what you're doing, so from their perspective there isn't much of an incentive to put effort into their criticism. But in theory, the people who disagree with you the most are also the ones you can learn the most from.

You want to be the sort of person where if you're raised Christian, and an atheist casually criticizes Christianity, you don't reject the criticism immediately because "they didn't even take the time to read the Bible!"

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-24T02:20:37.226Z · LW(p) · GW(p)

I think I have a lot less (true, useful, action-relevant) stuff to learn from a random fundamentalist Christian than from Carl Shulman, even though I disagree vastly more with the fundamentalist than I do with Carl.

comment by TAG · 2021-10-20T12:36:07.793Z · LW(p) · GW(p)

The original "sneer club" comment?

comment by [deleted] · 2021-10-20T04:11:49.873Z · LW(p) · GW(p)

Really? I do it because it's easier to type. Maybe I'm missing some historical context here.

Replies from: ESRogs
comment by ESRogs · 2021-10-20T11:12:42.495Z · LW(p) · GW(p)

Maybe I'm missing some historical context here.

For some reason a bunch of people started referring to him as "Big Yud" on Twitter. Here's some context regarding EY's feelings about it.

comment by throwaway46237896 · 2021-10-21T01:19:57.497Z · LW(p) · GW(p)

I'm a former member turned very hostile to the community represented here these days. So that's appropriate, I guess.

Replies from: hg00, ESRogs
comment by hg00 · 2021-10-21T01:37:21.571Z · LW(p) · GW(p)

Any thoughts on how we can help you be at peace?

comment by ESRogs · 2021-10-21T16:18:48.448Z · LW(p) · GW(p)

So that's appropriate, I guess.

I disagree that it's appropriate to use terms for people that they consider slurs because they're part of a community that you don't like.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2021-10-21T16:32:53.554Z · LW(p) · GW(p)

It's entirely appropriate! Expressing hostility is what slurs are for!

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-21T16:58:34.828Z · LW(p) · GW(p)

Prescriptive appropriateness vs. descriptive appropriateness.

ESRogs is pointing out a valuable item in a civilizing peace treaty; an available weapon that, if left unused, allows a greater set of possible cooperations to come into existence.  "Not appropriate" as a normative/hopeful statement, signaling his position as a signatory to that disarmament clause and one who hopes LW has already or will also sign on, as a subculture.

Zack is pointing out that, from the inside of a slur, it has precisely the purpose that ESRogs is labeling inappropriate.  For a slur to express hostility and disregard is like a hammer being used to pound nails.  "Entirely appropriate" as a descriptive/technical statement.

I think it would have been better if Zack had made that distinction, which I think he's aware of, but I'm happy to pop in to help; I suspect meeting that bar would've prevented him from saying anything at all in this case, which would probably have been worse overall.

comment by hg00 · 2021-10-20T23:11:57.867Z · LW(p) · GW(p)

What screed are you referring to?

Replies from: throwaway46237896
comment by throwaway46237896 · 2021-10-21T01:25:54.387Z · LW(p) · GW(p)

The rant (now somewhat redacted) can be found here, in response to the leaked emails of Scott more-or-less outright endorsing people like Steve Sailer re:"HBD". There was a major backlash against Scott at the time, resulting in the departure of many longtime members of the community (including me), and Eliezer's post was in response to that. It opened with:

it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics 

...which is, to put it mildly, absurd.

Replies from: philh, hg00
comment by philh · 2021-10-22T22:42:04.198Z · LW(p) · GW(p)

I think it would be good to acknowledge here Eliezer's edits. Like, do you think that

  • The thing you quote here is a reasonable way to communicate the thing Eliezer was trying to communicate, and that thing is absurd?
  • The thing you quoted is absurd, but not what Eliezer was trying to communicate, but the thing he was actually trying to communicate was also absurd?
  • The thing Eliezer was trying to communicate is defensible, but it's ridiculous that he initially tried to communicate it using those words?
  • What Eliezer initially said was a reasonable way to communicate what he meant, and his attempts to "clarify" are actually lying about what he meant?
  • Something else?

Idk, I don't want to be like "I'm fine with criticism but it's not really valid unless you deliver it standing on one foot in a bed of hot coals as is our local custom". And I think it's good that you brought this post into the conversation; it's definitely relevant to questions like how much Eliezer tolerates criticism. No obligation on you to reply further, certainly. (And I don't commit to replying myself, if you do, so I guess take that into account when deciding if you will or not.)

But also... like, those edits really did happen, and I think they do change a lot.

I'm not sure how I feel about the post myself; there are definitely things like "I actually don't know if I can pick out the thing you're trying to point out" and "how confident are you that you're being an impartial judge of the thing when it's directed at you". I definitely don't think it's a terrible post.

But I don't know what about the post you're reacting to, so like, I don't know if we're reacting different amounts to similar things, or you're reacting to things I think are innocuous, or you're reacting to things I'm not seeing (either because they're not there or because I have a blind spot), or what.

(The very first version was actually "...openly hates on Eliezer is probably...", which I think is, uh, more in need of revision than the one you quoted.)

Replies from: Duncan_Sabien, throwaway46237896
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-22T23:00:29.115Z · LW(p) · GW(p)

Strong approval for the way this comment goes about making its point, and trying to bridge the inferential gap.

comment by throwaway46237896 · 2021-10-28T13:14:39.408Z · LW(p) · GW(p)

I think it would be good to acknowledge here Eliezer's edits.

 

I don't. He made them only after ingroup criticism, and that only happened because it was so incredibly egregious. Remember, this was the LAST straw for me - not the first.

The thing about ingroupy status-quo bias is that you'll justify one small thing after another, but when you get a big one-two punch - enough to shatter that bias and make you look at things from outside - your beliefs about the group can shift very rapidly. I had already been kind of leery about a number of things I'd seen, but the one-two-three punch of the Scott emails, Eliezer's response, and the complete refusal of anyone I knew in the community to engage with these things as a problem, was that moment for me.

Even if I did give him credit for the edit - which I don't, really - it was only the breaking point, not the sole reason I left.

Replies from: RobbBB, philh
comment by Rob Bensinger (RobbBB) · 2021-10-28T20:42:43.657Z · LW(p) · GW(p)

I believe Eliezer about his intended message, though I think it's right to dock him some points for phrasing it poorly -- being insufficiently self-critical is an attractor [? · GW] that idea-based groups have to be careful of, so if there's a risk of some people misinterpreting you as saying 'don't criticize the ingroup', you should at least take the time to define what you mean by "hating on", or give examples of the kinds of Topher-behaviors you have in mind.

There's a different attractor I'm worried about, which is something like "requiring community leaders to walk on eggshells all the time with how they phrase stuff, asking them to strictly optimize for impact/rhetoric/influence over blurting out what they actually believe, etc." I think it's worth putting effort into steering away from that outcome. But I think it's possible to be extra-careful about 'don't criticize the ingroup' signals without that spilling over into a generally high level of self-censorship.

Replies from: throwaway46237896
comment by throwaway46237896 · 2021-11-03T13:50:31.783Z · LW(p) · GW(p)

You can avoid both by not having leaders who believe in terrible things (like "black people are genetically too stupid to govern themselves") that they have to hide behind a veil of (im)plausible deniability.

comment by philh · 2021-10-29T22:55:18.057Z · LW(p) · GW(p)

Hm, so. Even just saying you don't give him credit for the edits is at least a partial acknowledgement in my book, if you actually mean "no credit" and not "a little credit but not enough". It helps narrow down where we disagree, because I do give him credit for them - I think it would be better if he'd started with the current version, but I think it would be much worse if he'd stopped with the first version.

But also, I guess I still don't really know what makes this a straw for you, last or otherwise. Like I don't know if it would still be a straw if Eliezer had started with the current version. And I don't really have a sense of what you think Eliezer thinks. (Or if you think "what Eliezer thinks" is even a thing it's sensible to try to talk about.) It seems you think this was really bad[1], worse than Rob's criticism (which I think I agree with) would suggest. But I don't know why you think that.

Which, again. No obligation to share, and I think what you've already shared is an asset to the conversation. But that's where I'm at.

[1]: I get this impression from your earlier comments. Describing it as "the last straw" kind of makes it sound like not a big deal individually, but I don't think that's what you intended?

Replies from: throwaway46237896
comment by throwaway46237896 · 2021-11-03T13:49:17.411Z · LW(p) · GW(p)

It would still be a straw if it had started with the current version, because it is defending Scott for holding positions and supporting people I find indefensible. The moment someone like Steve Sailer is part of your "general theory of who to listen to", you're intellectually dead to me.

The last straw for me is that the community didn't respond to that with "wow, Scott's a real POS, time to distance ourselves from him and diagnose why we ever thought he was someone we wanted around". Instead, it responded with "yep that sounds about right". Which means the community is as indefensible as Scott is. And Eliezer, specifically, doing it meant that it wasn't even a case of "well maybe the rank and file have some problems but at least the leadership..."

comment by hg00 · 2021-10-21T01:35:23.249Z · LW(p) · GW(p)

Thanks. After thinking for a bit... it doesn't seem to me that Topher frobnitzes Scott, so indeed Eliezer's reaction seems inappropriately strong. Publishing emails that someone requested (and was not promised) privacy for is not an act of sadism.

Replies from: philh
comment by philh · 2021-10-22T10:09:36.886Z · LW(p) · GW(p)

I believe the idea was not that this was an act of frobnitzing, but that

  • Topher is someone who openly frobnitzes.
  • Now he's done this, which is bad.
  • It is unsurprising that someone who openly frobnitzes does other bad things too.

comment by Peter Wildeford (peter_hurford) · 2021-10-17T16:34:39.347Z · LW(p) · GW(p)

I don't recall the specific claim, just that EY's probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees with some other thoughtful people on that question, he shouldn't have such confidence.

 

I think people conflate the very reasonable "I am not going to adopt your 95-99% range because other thoughtful people disagree and I have no particular reason to trust you massively more than I trust other people" with the different "the fact that other thoughtful people disagree means there's no way you could arrive at 95-99% confidence", which is false. I think thoughtful people disagreeing with you is decent evidence that you are wrong, but it can still be outweighed.

comment by Alexander (alexander-1) · 2021-10-17T04:25:56.114Z · LW(p) · GW(p)

I sought a lesson we could learn from this situation, and your comment captured such a lesson well.

This is reminiscent of the message of the Dune trilogy. Frank Herbert warns about society's tendencies to "give over every decision-making capacity" to a charismatic leader. Herbert said in 1979:

The bottom line of the Dune trilogy is: beware of heroes. Much better rely on your own judgment, and your own mistakes.

comment by Eli Tyre (elityre) · 2021-10-18T04:09:52.947Z · LW(p) · GW(p)

If you're not disagreeing with people about important things then you're not thinking.

This is a great sentence. I kind of want it on a t-shirt.

comment by Viliam · 2021-10-17T14:27:49.989Z · LW(p) · GW(p)

If you're not disagreeing with people about important things then you're not thinking.

Indeed. And if people object to someone disagreeing with them, that would imply they are 100% certain about being right.

I recently overheard someone (who I'd not met before) telling Eliezer Yudkowsky that he's not allowed to have extreme beliefs about AGI outcomes.

On one hand, this suggests that the pressure to groupthink is strong. On the other hand, this is evidence of Eliezer not being treated as an infallible leader... which I suppose is good news in this avalanche of bad news.

(There is a method to reduce group pressure, by making everyone write their opinion first, and only then tell each other the opinions. Problem is, this stops working if you estimate the same thing repeatedly, because people already know what the group opinion was in the past.)

comment by Rob Bensinger (RobbBB) · 2021-10-17T22:23:10.164Z · LW(p) · GW(p)

Kate Donovan messaged me to say:

I think four people experiencing psychosis in a period of five years, in a community this large with high rates of autism and drug use, is shockingly low relative to base rates.

[...]

A fast pass suggests that my 1-3% for lifetime prevalence was right, but mostly appearing at 15-35.

And since we have conservatively 500 people in the cluster (a lot more people than that attended CFAR workshops or are in MIRI or CFAR's orbit), 4 is low. Given that I suspect the cluster is larger and I am pretty sure my numbers don't include drug induced psychosis, just primary psychosis.

The base rate seems important to take into account here, though per Jessica, "Obviously, for every case of poor mental health that 'blows up' and is noted, there are many cases that aren't." (But I'd guess that's true for the base-rate stats too?)
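
For concreteness, here is a minimal back-of-the-envelope sketch in Python of the comparison being gestured at above, under assumptions that roughly match the quoted numbers (a cluster of about 500 people, all within the 15-35 onset window, a 1-3% lifetime prevalence spread over that ~20-year window, and a ~5-year observation period). All of these figures are rough assumptions rather than established statistics, and drug-induced cases are excluded.

```python
# Rough expected number of first psychotic episodes in the cluster,
# under the stated (assumed) figures; purely illustrative.

cluster_size = 500          # assumed size of the MIRI/CFAR-adjacent cluster
onset_window_years = 20     # onset assumed mostly between ages 15 and 35
observation_years = 5       # roughly 2015-2019

for lifetime_prevalence in (0.01, 0.03):
    # Assume everyone is in the at-risk age range and onset is uniform over it.
    expected_cases = cluster_size * lifetime_prevalence * (observation_years / onset_window_years)
    print(f"lifetime prevalence {lifetime_prevalence:.0%}: "
          f"~{expected_cases:.2f} expected cases over {observation_years} years")

# Under these assumptions the expected count comes out to roughly 1.25-3.75
# cases, so whether 4-5 observed cases looks "low" or "high" depends heavily
# on the assumed cluster size and on how many cases go unreported.
```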

Replies from: jessica.liu.taylor, LGS, Gunnar_Zarncke
comment by jessicata (jessica.liu.taylor) · 2021-10-17T22:43:22.173Z · LW(p) · GW(p)

This is a good point regarding the broader community. I do think that, given that at least 2 cases were former MIRI employees, there might be a higher rate in that subgroup.

EDIT: It's also relevant that a lot of these cases happened in the same few years. 4 of the 5 cases of psychiatric hospitalization or jail time I know about happened in 2017; the other happened sometime between 2017 and 2019. I think these people were in the 15-35 age range, which spans 20 years.

comment by LGS · 2021-10-19T04:11:08.826Z · LW(p) · GW(p)

I'm a complete outsider looking in here, so here's an outsider's perspective (from someone in CS academia, currently in my early 30s).

I've never heard or seen anyone, in real life, ever have psychosis. I know of 0 cases. Yeah, I know that people don't share such things, but I've heard of no rumors either.

By contrast, depression/anxiety seems common (especially among grad students) and I know of a couple of suicides. There was even a murder! But never psychosis; without the internet I wouldn't even know it's a real thing.

I don't know what the official base rate is, but saying "4 cases is low" while referring to the group of people I'm familiar with (smart STEM types) is, from my point of view, absurd.

The rate you quote is high. There may be good explanations for this: maybe rationalists are more open about their psychosis when they get it. Maybe they are more gossipy so each case of psychosis becomes widely known. Maybe the community is easier to enter for people with pre-existing psychotic tendencies. Maybe it's all the drugs some rationalists use.

But pretending the reported rate of psychosis is low seems counterproductive to me.

Replies from: JenniferRM, mingyuan, romeostevensit
comment by JenniferRM · 2021-10-29T05:18:14.728Z · LW(p) · GW(p)

I lived in a student housing cooperative for 3 years during undergrad. These were non-rationalists. I lived with 14 people, then 35, then 35 again (somewhat overlapping groups).

In these 3 years I saw 3 people go through a period of psychosis.

Once it was because of whippets, basically, and it updated me very, very strongly away from nitrous oxide being safe (it potentiates response to itself, so there's a positive feedback loop, and positive feedback loops in biology are intrinsically scary). Another time it was because the young man was almost too autistic to function in social environments and then feared that he'd insulted a woman and would be cast out of polite society for "that action and also for overreacting to the repercussions of the action". For the last person it was a mixture of marijuana and having his Christianity fall apart after being away from the social environment of his upbringing.

A striking thing about psychosis is that up close it really seems more like a biological problem rather than a philosophic one, whereas I had always theorized naively that there would be something philosophically interesting about it, with opportunities to learn or teach in a way that connected to the altered "so-called mental state".

I saw two of the three cases somewhat closely, and it wasn't "this person believes something false, in a way that maybe they could be talked out of" (which was my previous model of "being crazy").  It was more like "this human body has a brain that is executing microinstructions that might be part of a human-like performance of some coherent motion of the soul, if it progressed statefully, but instead it is a highly stuttering, almost stateless loop of nearly pure distress, repeating words over and over, and forgetting things within 2 seconds of hearing them, and calming itself, but then forgetting why it calmed itself, and then forgetting that it forgot, and so on, with very very obvious dysfunction".

I rarely talk about any of it out of respect for their privacy, but this is so long ago that anyone who can figure out who I'm talking about at this point (from what I've said) probably also knows the events in question.

It seemed almost indecent to have observed it, and it feels wrong to discuss, out of respect for their personhood. Which maybe doesn't make sense, but that is simply part of the tone of these memories. Two of the three left college and never came back; the third took a week off, in perhaps a hotel or something, with parental support. People who were there spoke of it in hushed tones. It was spiritually scary.

My understanding is that base rates for schizophrenia are roughly 1% or 2% cross-culturally, and that those affected are often on the introverted side of things. Also I think that many people rarely talk about the experiences (that they saw others go through, or that they went through), so you could know people who saw or experienced such things... and they might be very unlikely to ever volunteer their observations.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-29T07:50:15.358Z · LW(p) · GW(p)

Thanks for this account.

it wasn't "this person believes something false, in a way that maybe they could be talked out of" (which was my previous model of "being crazy").  It was more like "this human body has a brain that is executing microinstructions

Feels like there's more to the story here. Two of the cases you gave do sound like they had some mental thing (Christianity, social fear) that precipitated the psychosis, even if the psychosis itself was non-mental.

comment by mingyuan · 2021-10-19T21:09:01.502Z · LW(p) · GW(p)

I agree with other commenters that you are just less likely to see psychosis even if it's there, both because it's not ongoing in the way that depression and anxiety are, and because people are less likely to discuss it. I was only one step away from Jessica in the social graph in October of 2017 and never had any inkling that she'd had a psychotic episode until just now. I also wasn't aware that Zack Davis had ever had a psychotic episode, despite having met him several times and having read his blog a bit. I also lived with Olivia during the time that she was apparently inspiring psychosis in others. 

In fact, the only psychotic episodes I've known about are ones that had news stories written about them, which suggests to me that you are probably underestimating the extent to which people keep quiet about the psychotic episodes of themselves and those close to them. It seems in quite poor taste to gossip about, akin to gossiping about friends' suicide attempts (which I also assume happen much more often than I hear about — I think one generally only hears about the ones that succeed or that are publicized to spread awareness).

Just for thoroughness, here are the psychotic episodes I've known about, in chronological order:

  1. Eric Bruylant's, which has been discussed in other comments. I was aware that he was in jail because my housemates were trying to support him by showing up to his trials and stuff, and we still got mail for him (the case had happened pretty recently when I moved in). I think I found out the details — including learning that psychosis was involved — from the news story though.
  2. I was on a sports team in college, and the year after I graduated, one of my teammates had a psychotic break. I only heard about this because he was wandering the streets yelling and ended up trying to attack some campus police officers with a metal pipe and got shot (thankfully non-fatally).
  3. It's unclear to me if what happened with Ziz&co at Westminster Woods was a psychotic episode, but in any case I knew about it at the time, and only had the details clarified in the news story.
Replies from: LGS
comment by LGS · 2021-10-20T11:43:56.611Z · LW(p) · GW(p)

I feel like people keep telling me that the rate of psychosis around me should be higher than what I hear about, which is irrelevant to my point: my point is that the frequency with which I hear about psychosis in the rationalist community is like an order of magnitude higher than the frequency with which I hear about it elsewhere.

It doesn't matter whether people hide psychosis among my social group; the observation to explain is why people don't hide psychosis in the rationalist community to the same extent.

For example, you mention 2 separate examples of Bay Area rationalists making the news for psychosis. I know of no people in my academic community who have made the news for psychosis. Assuming equal background rates, what is left to explain is why rationalists are more likely to make the news when they get psychosis.

Another example: there have now been 1-2 people who have admitted to psychosis in blog posts intended as public callouts. I know of no people in my academic community who have written public callout blog posts in which they say they've had psychosis. Is there an explanation for why rationalists who've had psychosis are more likely to write public callout blog posts?

Anyway, this discussion feels kind of moot now that I've read Scott Alexander's update to his comment. He says that several people (who knew each other) all had psychosis around the same time in 2017. No reasonable person can think this is merely baseline; some kind of social contagion is surely involved (probably just people sharing drugs or drug recommendations).

Replies from: tomcatfish, Puxi Deek
comment by Alex Vermillion (tomcatfish) · 2021-10-23T04:01:04.051Z · LW(p) · GW(p)

I think part of it is that this isn't related to your social network, but to your news habits and how your news sources cover your social network.

You probably don't read newspapers that are as certain to write about your neighbor having any kind of "psychosis", but you read forums that tell you about Rationalists doing the same.

comment by Puxi Deek · 2021-10-20T11:51:12.615Z · LW(p) · GW(p)

Them leaving out the exact details of what went on with their groups makes the whole discussion sketchy. Maybe they just want to keep the conversation to themselves. If that's the case, why are they posting on LW?

comment by romeostevensit · 2021-10-19T04:42:30.649Z · LW(p) · GW(p)

Sampling error. Psychosis is not an ongoing thing, so there are many fewer chances to observe it than months- or years-long depression or anxiety. Psychosis often manifests when people are already isolated due to worsening mental health, whereas depression and anxiety can be exacerbated by exactly the situations in which you would observe them, i.e. socializing. Nor would people volunteer their experience, due to much greater stigma.

Replies from: LGS, jessica.liu.taylor
comment by LGS · 2021-10-19T06:17:45.729Z · LW(p) · GW(p)

I am not comparing "number of psychosis among my friends" to "number of depression episodes among my friends". I am comparing "number of psychosis among my friends" to "number of psychosis among rationalists". Any sampling errors should apply equally to the rationalists (or if not, that demands an explanation).

The observation is that there's a lot more reported psychosis among rationalists than reported psychosis among (say) CS grad students. I don't have an explanation (and maybe there's an innocuous one), but I don't think people should be denying this fact.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-19T07:31:53.820Z · LW(p) · GW(p)

A hypothesis is that rationalists are a larger gossip community, so that e.g. you might hear about psychosis from 4 years ago in people you're nth-degree socially connected with, where maybe most other communities aren't like that?

Replies from: LGS
comment by LGS · 2021-10-19T09:41:51.770Z · LW(p) · GW(p)

Certainly possible! I mentioned this hypothesis upthread.

I wonder if there are ways to test it. For instance, do non-Bay-Arean rationalists also have a high rate of reported psychosis? I think not (not sure though), though perhaps most of the gossip centers on the Bay Area.

Are Bay Area rationalists also high in reported levels of other gossip-mediated things? I'm trying to name some, but most sexual ones are bad examples because of the polyamory confounder. How about: are Bay rationalists high in reported rates of plastic surgery? How about abortion? These seem like somewhat embarrassing things that you'd normally not find out about, but that people like to gossip about.

Or maybe people don't care to gossip about these things on the internet, because they are less interesting than psychosis.

Replies from: Freyja, TekhneMakre
comment by Freyja · 2021-10-19T16:45:32.240Z · LW(p) · GW(p)

I’m someone with a family history of psychosis and I spend quite a lot of time researching it—treatments, crisis response, cultural responses to it. There are roughly the same number of incidents of psychosis in my immediate to extended family as are described in this post in the extended rationalist community. Major predictive factors include stress, family history and use of marijuana (and, to a lesser extent, other psychedelics). I don’t have studies to back this up but I have an instinct based on my own experience that openness-to-experience and risk-of-psychosis are correlated in family risk factors. So given the drugs, stress and genetic openness, I’d expect generic Bay Area smart people to have a fairly high risk of psychosis compared to, say, people in more conservative areas already.

comment by TekhneMakre · 2021-10-19T10:20:55.182Z · LW(p) · GW(p)
I mentioned this hypothesis upthread.

(Sort of; you did say "more gossipy -> more widely known", but I wanted to specifically add the word "larger", the point being that a small + extra gossipy community would have a higher than usual report rate, and so would a large + extra gossipy (+ memory-ful) community; but the larger one would have more raw numbers, so you'd get a wrong estimate of the proportional rate if you estimated the size of the relevant reference class using intuitions based on small gossip communities. And maybe even a less gossipy but larger network would still have this effect; like, I *never* hear gossip about people in communities I'm not a part of, even if I talk to some people from those communities, so there's more structure than just the rate of gossip. It's more a question of how large is the "gossip-percolation connected component".)

comment by jessicata (jessica.liu.taylor) · 2021-10-19T05:22:39.157Z · LW(p) · GW(p)

See PhoenixFriend's comment [LW(p) · GW(p)], there were multiple cases I didn't know about, so a lot of people's thoughts about this post are recapitulating sampling bias from my own knowledge (which is from my own social network, e.g. oversampling trans people and people talking with Michael). This confirms that people are avoiding volunteering the information that they had a psychotic break.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-19T16:59:04.033Z · LW(p) · GW(p)

PhoenixFriend alleges multiple cases you didn't know about, but so far no one else has affirmed that those cases existed or were closely connected with CFAR/MIRI.

I think it's entirely possible that those cases did exist and will be affirmed, but at the moment my state is "betting on skeptical."

comment by nostalgebraist · 2021-10-17T21:01:48.911Z · LW(p) · GW(p)

First, thank you for writing this.

Second, I want to jot down a thought I've had for a while now, and which came to mind when I read both this and Zoe's Leverage post.

To me, it looks like there is a recurring phenomenon in the rationalist/EA world where people...

  • ...become convinced that the future is in their hands: that the fate of the entire long-term future ("the future light-cone") depends on the success of their work, and the work of a small circle of like-minded collaborators
  • ...become convinced that (for some reason) only they, and their small circle, can do this work (or can do it correctly, or morally, etc.) -- that in spite of the work's vast importance, in spite of the existence of billions of humans and surely at least thousands with comparable or superior talent for this type of work, it is correct/necessary for the work to be done by this tiny group
  • ...become less concerned with the epistemic side of rationality -- "how do I know I'm right? how do I become more right than I already am?" -- and more concerned with gaining control and influence, so that the long-term future may be shaped by their own (already-obviously-correct) views
  • ...spend more effort on self-experimentation and on self-improvement techniques, with the aim of turning themselves into a person capable of making world-historic breakthroughs -- if they do not feel like such a person yet, they must become one, since the breakthroughs must be made within their small group
  • ...become increasingly concerned with a sort of "monastic" notion of purity or virtue: some set of traits which few-to-no people possess naturally, which are necessary for the great work, and which can only be attained through an inward-looking process of self-cultivation that removes inner obstacles, impurities, or aversive reflexes ("debugging," making oneself "actually try")
  • ...suffer increasingly from (understandable!) scrupulosity and paranoia, which compete with the object-level work for mental space and emotional energy
  • ...involve themselves in extreme secrecy, factional splits with closely related thinkers, analyses of how others fail to achieve monastic virtue, and other forms of zero- or negative-sum conflict which do not seem typical of healthy research communities
  • ...become probably less productive at the object-level work, and at least not obviously more productive, and certainly not productive in the clearly unique way that would be necessary to (begin to) justify the emphasis on secrecy, purity, and specialness

I see all of the above in Ziz's blog, for example, which is probably the clearest and most concentrated example I know of the phenomenon.  (This is not to say that Ziz is wrong about everything, or even to say Ziz is right or wrong about anything -- only to observe that her writing is full of factionalism, full of concern for "monastic virtue," much less prone to raise the question "how do I know I'm right?" than typical rationalist blogging, etc.)  I got the same feeling reading about Zoe's experience inside Leverage.  And I see many of the same things reported in this post.

I write from a great remove, as someone who's socially involved with parts of the rationalist community, but who has never even been to the Bay Area -- indeed, as someone skeptical that AI safety research is even very important!  This distance has the obvious advantages and disadvantages.

One of the advantages, I think, is that I don't even have inklings of fear or scrupulosity about AI safety.  I just see it as a technical/philosophical research problem.  An extremely difficult one, yes, but one that is not clearly special or unique, except possibly in its sheer level of difficulty.

So, I expect it is similar to other problems of that type.  Like most such problems, it would probably benefit from a much larger pool of researchers: a lot of research is just perfectly-parallelizable brute-force search, trying many different things most of which will not work.

It would be both surprising news, and immensely bad news, to learn that only a tiny group of people could (or should) work on such a problem -- that would mean applying vastly less parallel "compute" to the problem, relative to what is theoretically available, and that when the problem is forbiddingly difficult to begin with.

Of course, if this were really true, then one ought to believe that it is true.  But it surprises me how quick many rationalists are to accept this type of claim, on what looks from the outside like very little evidence.  And it also surprises me how quickly the same people accept unproven self-improvement techniques, even ideas that look like wishful thinking ("I can achieve uniquely great things if I just actually try, something no one else is doing..."), as substitutes for what they lose by accepting insularity.  Ways to make up for the loss in parallel compute by trying to "overclock" the few processors left available.

From where I stand, this just looks like a hole people go into, which harms them while -- sadly, ironically -- not even yielding the gains in object-level productivity it purports to provide.  The challenge is primarily technical, not personal or psychological, and it is unmoved by anything but direct attacks on its steep slopes.

(Relevant: in grad school, I remember feeling envious of some of my colleagues, who seemed able to do research easily, casually, without any of my own inner turmoil.  I put far more effort into self-cultivation, but they were far more productive.  I was, perhaps, "trying hard to actually try"; they were probably not even trying, just doing.  I was, perhaps, "working to overcome my akrasia"; they simply did not have my akrasia to begin with.

I believe that a vast amount of good technical research is done by such people, perhaps even the vast majority of good technical research.  Some AI safety researchers are like this, and many people like this could do great AI safety research, I think; but they are utterly lacking in "monastic virtue" and they are the last people you will find attached to one of these secretive, cultivation-focused monastic groups.)

Replies from: Davis_Kingsley, cousin_it, hg00, TekhneMakre, Gunnar_Zarncke, pktechgirl, TAG
comment by Davis_Kingsley · 2021-10-18T07:34:59.684Z · LW(p) · GW(p)

I worked for CFAR full-time from 2014 until mid to late 2016, and have worked for CFAR part-time or as a frequent contractor ever since. My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present weren't really the main thing. I do think CFAR has not made as much research progress as I would like, but I think the reasoning for that is much more mundane and less esoteric than the pattern you describe here.

The fact of the matter is that for almost all the time I've been involved with CFAR, there just plain hasn't been a research team. Much of CFAR's focus has been on running workshops and other programs rather than on dedicated work towards extending the art; while there have occasionally been people allocated to research, in practice even these would often end up getting involved in workshop preparation and the like.

To put things another way, I would say it's much less "the full-time researchers are off unproductively experimenting on their own brains in secret" and more "there are no full-time researchers". To the best of my knowledge CFAR has not ever had what I would consider a systematic research and development program -- instead, the organization has largely been focused on delivering existing content and programs, and insofar as the curriculum advances it does so via iteration and testing at workshops rather than a more structured or systematic development process.

I have historically found this state of affairs pretty frustrating (and am working to change it), but I think that it's a pretty different dynamic than the one you describe above.


(I suppose it's possible that the systematic and productive full-time CFAR research team was so secretive that I didn't even know it existed, but this seems unlikely...)

comment by cousin_it · 2021-10-17T22:50:22.373Z · LW(p) · GW(p)

Maybe offtopic, but the "trying too hard to try" part rings very true to me. Been on both sides of it.

The tricky thing about work, I'm realizing more and more, is that you should just work. That's the whole secret. If instead you start thinking how difficult the work is, or how important to the world, or how you need some self-improvement before you can do the work effectively, these thoughts will slow you down and surprisingly often they'll be also completely wrong. It always turns out later that your best work wasn't the one that took the most effort, or felt the most important at the time; you were just having a nose-down busy period, doing a bunch of things, and only the passage of time made clear which of them mattered.

Replies from: pktechgirl
comment by hg00 · 2021-10-18T09:30:25.896Z · LW(p) · GW(p)

Does anyone have thoughts about avoiding failure modes of this sort?

Especially in the "least convenient possible world" where some of the bullet points are actually true -- like, if we're disseminating principles for wannabe AI Manhattan Projects, and we're optimizing the principles for the possibility that one of the wannabe AI Manhattan Projects is the real deal, what principles should we disseminate?


Most of my ideas are around "staying grounded" -- spend significant time hanging out with "normies" who don't buy into your worldview, maintain your sense of humor, fully unplug from work at least one day per week, have hobbies outside of work (perhaps optimizing explicitly for escapism in the form of computer games, TV shows, etc.) Possibly live somewhere other than the Bay Area, someplace with fewer alternative lifestyles and a stronger sense of community. (I think Oxford has been compared favorably to Berkeley with regard to presence of homeless people, at least.)

But I'm just guessing, and I encourage others to share their thoughts. Especially people who've observed/experienced mental health crises firsthand -- how could they have been prevented?

EDIT: I'm also curious how to think about scrupulosity. It seems to me that team members for an AI Manhattan Project should ideally have more scrupulosity/paranoia than average, for obvious reasons. ("A bit above the population average" might be somewhere around "they can count on one hand the number of times they blacked out while drinking" -- I suspect communities like ours already select for high-ish levels of scrupulosity.) However, my initial guess is that instead of directing that scrupulosity towards implementation of some sort of monastic ideal, they should instead direct that scrupulosity towards trying to make sure their plan doesn't fail in some way they didn't anticipate, trying to make sure their code doesn't have any bugs, monitoring their power-seeking tendencies, seeking out informed critics to learn from, making sure they themselves aren't a single point of failure, making sure that important secrets stay secret, etc. (what else should be on this list?) But, how much paranoia/scrupulosity is too much?

Replies from: romeostevensit, abiggerhammer, ChristianKl, Avi Weiss
comment by romeostevensit · 2021-10-18T16:00:20.900Z · LW(p) · GW(p)

IMO, a large number of mental health professionals simply aren't a good fit for high-intelligence people having philosophical crises. People know this and intuitively avoid the hassle and expense of sorting through a large number of bad matches. Finding solid people to refer to, who are not otherwise associated with the community in any way, would be helpful.

Replies from: RobbBB, ozziegooen, Zian
comment by Rob Bensinger (RobbBB) · 2021-10-18T18:37:30.866Z · LW(p) · GW(p)

I know someone who may be able to help with finding good mental health professionals for those situations; anyone who's reading this is welcome to PM me for contact info.

comment by ozziegooen · 2021-10-18T20:18:14.380Z · LW(p) · GW(p)

There's an "EA Mental Health Navigator" now to help people connect to the right care.
https://eamentalhealth.wixsite.com/navigator

I don't know how good it is yet. I just emailed them last week, and we set up an appointment for this upcoming Wednesday. I might report back later, as things progress.

comment by Zian · 2021-10-19T23:43:20.065Z · LW(p) · GW(p)

Unfortunately, by participating in this community (LW/etc.), we've disqualified ourselves from asking Scott to be our doctor (should I call him "Dr. Alexander" when talking about him-as-a-medical-professional while using his alias when he's not in a clinical environment?).

I concur with your comment about having trouble finding a good doctor for people like us. p(find a good doctor) is already low, given the small n (also known as the doctor shortage). If you multiply in p(doctor works well with people like us), the result may rapidly approach epsilon.

It seems that the best advice is to make n bigger by seeking care in a place with a large per capita population of the doctors you need. For example, by combining https://nccd.cdc.gov/CKD/detail.aspx?Qnum=Q600 with the US Census ACS 2013 population estimates (https://data.census.gov/cedsci/table?t=Counts,%20Estimates,%20and%20Projections%3APopulation%20Total&g=0100000US%240400000&y=2013&tid=ACSDT1Y2013.B01003&hidePreview=true&tp=true), we see that the following states had >=0.9 primary care doctors per 1,000 people:

  • District of Columbia (1.4)
  • Vermont (1.1)
  • Massachusetts (1.0)
  • Maryland (0.9)
  • Minnesota (0.9)
  • Rhode Island (0.9)
  • New York (0.9)
  • Connecticut (0.9)
comment by abiggerhammer · 2021-10-26T05:57:19.157Z · LW(p) · GW(p)

Does anyone have thoughts about avoiding failure modes of this sort?

Meredith from Status451 here. I've been through a few psychotic episodes of my own, often with paranoid features, for reasons wholly unrelated to anything being discussed at the object-level here; they're unpleasant enough, both while they're going on and while cleaning up the mess afterward, that I have strong incentives to figure out how to avoid these kinds of failure modes! The patterns I've noticed are, of course, only from my own experience, but maybe relating them will be helpful.

  • Instrumental scrupulousness is a fantastic tool. By "instrumental scrupulousness" I simply mean pointing my scrupulousness at trying to make sure I'm not doing something I can't undo. More or less what you describe in your edit, honestly. As for how much is too much, you absolutely don't want to paralyse yourself into inaction through constantly second-guessing yourself. Real artists ship, after all!
  • Living someplace with good mental health care has been super crucial for me. In my case that's Belgium. I've only had to commit myself once, but it saved my life and was, bizarrely, one of the most autonomy-respecting experiences I've ever had. The US healthcare system is caught in a horrifically large principal-agent problem, and I don't know if it can extricate itself. Yeeting myself to another continent was literally the path of least resistance for me to find adequate, trustworthy care.
  • Secrecy is overrated and most things are nothingburgers. I've learned to identify certain thought patterns -- catastrophisation, for example -- as maladaptive, and while it'll probably always be a work in progress, the worst thing that actually does happen is usually far less awful than I imagined.

The "quit trying so hard and just do it" approach that you and nostalgebraist are gesturing at pays rent, IMO. Christian's and Avi's advice about cultivating stable and rewarding friendships and family relationships also comports with my experience.

comment by ChristianKl · 2021-10-18T14:49:43.230Z · LW(p) · GW(p)

I do think that encouraging people to stay in contact with their family and work to have good relationships is very useful. Family can provide a form of grounding that having small talk with normies while going dancing or pursuing other hobbies doesn't provide.

When deciding whether a personal development group is culty, I think it's a good test to ask whether the work of the group leads to the average person in the group having better or worse relationships with their parents.

comment by Avi (Avi Weiss) · 2021-10-18T10:10:23.456Z · LW(p) · GW(p)

I agree, and think it's important to 'stay grounded' in the 'normal world' if you're involved in any sort of intense organization or endeavor.

You've made some great suggestions.

I would also suggest that having a spouse (preferably one who isn't too involved, or involved at all), and maybe even some kids, is another commonality among people who find it easier to avoid going too far down these rabbit holes. Also, having a family is positive in countless other ways, and what I consider part of the 'good life' for most people.

comment by TekhneMakre · 2021-10-17T22:13:34.839Z · LW(p) · GW(p)
It would be both surprising news, and immensely bad news, to learn that only a tiny group of people could (or should) work on such a problem -- that would mean applying vastly less parallel "compute" to the problem, relative to what is theoretically available, and that when the problem is forbiddingly difficult to begin with.  

I have substantial probability on an even worse state: there's *multiple* people or groups of people, *each* of which is *separately* necessary for AGI to go well. Like, metaphorically, your liver, heart, and brain would each be justified in having a "rarity narrative". In other words, yes, the parallel compute is necessary--there's lots of data and ideas and thinking that has to happen--but there's a continuum of how fungible the compute is relative to the problems that need to be solved, and there's plenty of stuff at the "not very fungible but very important" end. Blood is fungible (though you definitely need it), but you can't just lose a heart valve, or your hippocampus, and be fine.

Replies from: nostalgebraist
comment by nostalgebraist · 2021-10-17T22:43:46.198Z · LW(p) · GW(p)

I didn't mention it in the comment, but having a larger pool of researchers is not only useful for doing "ordinary" work in parallel -- it also increases the rate at which your research community discovers and accumulates outlier-level, irreplaceable genius figures of the Euler/Gauss kind.

If there are some such figures already in the community, great, but there are presumably others yet to be discovered.  That their impact is currently potential, not actual, does not make its sacrifice any less damaging.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-17T22:53:28.615Z · LW(p) · GW(p)

Yep. (And I'm happy this overall discussion is happening, partly because, assuming rarity narratives are part of what leads to all this destructive psychic stuff as you described, then if a research community wants to work with people about whom rarity narratives would actually be somewhat *true*, the research community has as an important subgoal to figure out how to have true rarity narratives in a non-harmful way.)

comment by Gunnar_Zarncke · 2021-10-17T23:01:54.809Z · LW(p) · GW(p)

Most of these bullet points seem to apply to some degree to every new and risky endeavor ever started. How risky things are is often unclear at the start. Such groups are built from committed people. Small groups develop their own dynamics. Fast growth leads to social growing pains. Lack of success leads to a lot of additional difficulties. Also: evaporative cooling. And if (partial) success happens, even more growth leads to a needed management layer, etc. And later: hindsight bias.

comment by Elizabeth (pktechgirl) · 2021-10-18T05:28:35.588Z · LW(p) · GW(p)

Without commenting on the object level, I am really happy to see someone lay this out in terms of patterns that apply to a greater or lesser extent, with correlations but not in lockstep.

comment by TAG · 2021-10-17T21:14:47.903Z · LW(p) · GW(p)

Best. Comment. Ever.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-18T02:09:52.155Z · LW(p) · GW(p)

Mod note: I don't think LessWrong is the right place for this kind of comment. Please don't leave more of these. I mean, you will get downvoted, but we might also ban you from this and similar threads if you do more of that.

Replies from: Duncan_Sabien, Benquo
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-18T05:05:33.912Z · LW(p) · GW(p)

It seems worthwhile to give a little more of the "why" here, lest people just walk away with the confusing feeling that there are invisible electric fences that they need to creep and cringe away from.

I'll try to lay out the why, and if I'm wrong or off, hopefully one of the mods or regular users will elaborate.

Some reasons why this type of comment doesn't fit the LW garden:

  • Low information density.  We want readers to be rewarded for each comment that strays across their visual field.
  • Cruxless/opaque/nonspecific.  While it's quite valid to leave a comment in support of another comment, we want it to be clear to readers why the other comment was deserving of more-support-than-mere-upvoting.
  • Self-signaling.  We want LW to both be, and feel, substantially different from the generic internet-as-a-whole, meaning that some things which are innocuous but strongly reminiscent of run-of-the-mill internetting provoke a strong "no, not that" reaction.
  • Driving things toward "sides."  There's the good stuff and the bad stuff, the good people and the bad people.  Fundamental bucketing, less attention to detail and gradients and complexity.

Having just laid out this case, I now feel bad about a similar comment that I made today, and am going to go either edit or delete it, in the pursuit of fairness and consistency.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-18T05:30:50.629Z · LW(p) · GW(p)

Ah, sorry, yeah, I agree my mod notice wasn't specific enough. Most of my mod notice was actually about a mixture of this comment and this other comment [LW(p) · GW(p)], which felt like it was written by the same generator, but which feels more obviously bad to me (and probably to others too).

Like, the other comment that TAG left on this post felt like it was really trying to just be some kind of social flag that is common on the rest of the internet. Like, it felt like some kind of semi-ironic "Boo, outgroup" comment, and this comment felt like it was a parallel "Yay, ingroup!" comment, both of which felt like two sides of the same bad coin.

I think occasional "woo, this is great!" comments seem kind of good to me, if they are generated by a genuine sense of excitement and compassion, though I also wouldn't want them to become as ever-present on here as on the rest of the internet. But I feel like I would want those comments to not come from the same generator that then generates a snarky "oh, just like this idiot..." comment. And if I had to choose between either having both or neither, I would choose neither.

comment by Benquo · 2021-10-18T18:17:14.339Z · LW(p) · GW(p)

Are you going to tell Eliezer the same thing? https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe#EJPSjPv7nNzsam947 [LW(p) · GW(p)]

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-19T07:29:28.808Z · LW(p) · GW(p)

No, Eliezer's comment seems like a straightforward "I am making a non-anonymous upvote" which is indeed a functionality I also sometimes want, since sometimes the identity of the upvoter definitely matters. The comment above seems like it's doing something different, especially in combination with the other comment I linked to.

comment by orthonormal · 2021-10-17T21:26:31.884Z · LW(p) · GW(p)

Thank you for writing this, Jessica. First, you've had some miserable experiences in the last several years, and regardless of everything else, those times sound terrifying and awful. You have my deep sympathy.

Regardless of my seeing a large distinction between the Leverage situation and MIRI/CFAR, I agree with Jessica that this is a good time to revisit the safety of various orgs in the rationality/EA space.

I almost perfectly overlapped with Jessica at MIRI from March 2015 to June 2017. (Yes, this uniquely identifies me. Don't use my actual name here anyway, please.) So I think I can speak to a great deal of this.

I'll run down a summary of the specifics first (or at least, the specifics I know enough about to speak meaningfully), and then at the end discuss what I see overall.

Claim: People in and adjacent to MIRI/CFAR manifest major mental health problems, significantly more often than the background rate.

I think this is true; I believe I know two of the first cases to which Jessica refers; and I'm probably not plugged-in enough socially to know the others. And then there's the Ziz catastrophe.

Claim: Eliezer and Nate updated sharply toward shorter timelines, other MIRI researchers became similarly convinced, and they repeatedly tried to persuade Jessica and others.

This is true, but non-nefarious in my genuine opinion, because it's a genuine belief and because given that belief, you'll have better odds of success if the whole team at least takes the hypothesis quite seriously.

(As for me, I've stably been at a point where near-term AGI wouldn't surprise me much, but the lack of it also wouldn't surprise me much. That's all it takes, really, to be worried about near-term AGI.)

Claim: MIRI started getting secretive about their research.

This is true, to some extent. Nate and Eliezer discussed with the team that some things might have to be kept secret, and applied some basic levels of it to things we thought at the time might be AGI-relevant instead of only FAI-relevant. I think that here, the concern was less about AGI timelines and more about the multipolar race caused by DeepMind vs OpenAI. Basically any new advance gets deployed immediately in our current world.

However, I don't recall ever being told I'm not allowed to know what someone else is working on, at least in broad strokes. Maybe my memory is faulty here, but it diverges from Jessica's. 

(I was sometimes coy about whether I knew anything secret or not, in true glomarization fashion; I hope this didn't contribute to that feeling.)

There are surely things that Eliezer and Nate only wanted to discuss with each other, or with a specific researcher or two.

Claim: MIRI had rarity narratives around itself and around Eliezer in particular.

This is true. It would be weird if, given MIRI's reason for being, it didn't at least have the institutional rarity narrative—if one believed somebody else were just as capable of causing AI to be Friendly, clearly one should join their project instead of starting one's own.

About Eliezer, there was a large but not infinite rarity narrative. We sometimes joked about the "bus factor": if researcher X were hit by a bus, how much would the chance of success drop? Setting aside that this is a ridiculous and somewhat mean thing to joke about, the usual consensus was that Eliezer's bus quotient was the highest one but that a couple of MIRI's researchers put together exceeded it. (Nate's was also quite high.)

(My expectation is that the same would not have been said about Geoff within Leverage.)

Claim: Working at MIRI/CFAR made it harder to connect with people outside the community.

There's an extent to which this is true of any community that includes an idealistic job (i.e. a paid political activist probably has likeminded friends and finds it a bit more difficult to connect outside that circle). Is it true beyond that?

Not for me, at least. I maintained my ties with the other community I'd been plugged into (social dancing) and kept in good touch with my family (it helps that I have a really good family). As with the above example, the social path of least resistance would have been to just be friends with the same network of people in one's work orbit, but there wasn't anything beyond that level of gravity in effect for me.

Claim: CFAR got way too far into Shiny-Woo-Adjacent-Flavor-Of-The-Week.

This is an unfair framing... because I agree with Jessica's claim 100%. Besides Kegan Levels and the MAPLE dalliance, there was the Circling phase and probably much else I wasn't around for.

As for causes, I've been of the opinion that Anna Salamon has a lot of strengths around communicating ideas, but that her hiring has had as many hits as misses. There's massive churn, people come in with their Big Ideas and nobody to stop them, and also people come in who aren't in a good emotional place for their responsibilities. I think CFAR would be better off if Anna delegated hiring to someone else. [EDIT: Vaniver corrects me to say that Pete Michaud has been mostly in charge of hiring for the past several years, in which case I'm criticizing him rather than Anna for any bad hiring decisions during that time.]

Overall Thoughts

Essentially, I think there's one big difference between issues with MIRI/CFAR and issues at Leverage:

The actions of CFAR/MIRI harmed people unintentionally, as evidenced by the result that people burned out and left quickly and with high frequency. The churn, especially in CFAR, hurt the mission, so it was definitely not the successful result of any strategic process.

Geoff Anders and others at Leverage harmed people intentionally, in ways that were intended to maintain control over those people. And to a large extent, that seems to have succeeded until Leverage fell apart.

Specifically, [accidentally triggering psychotic mental states by conveying a strange but honestly held worldview without adding adequate safeties] is different from [intentionally triggering psychotic mental states in order to pull people closer and prevent them from leaving], which is Zoe's accusation. Even if it's possible for a mental breakdown to be benign under the right circumstances, and even if an unplanned one is more likely to result in very very wrong circumstances, I'm far more terrified of a group that strategically plans for its members to have psychosis with the intent of molding those members further toward the group's mission.

Unintentional harm is still harm, of course! It might have even been greater harm in total! But it makes a big difference when it comes to assessing how realistic a project of reform might be.

There are surely some deep reforms along these lines that CFAR/MIRI must consider. For one thing: scrupulosity, in the context of AI safety, seems to be a common thread in several of these breakdowns. I've taken this seriously enough in the past to post extensively on it here [? · GW]. I'd like CFAR/MIRI leadership to carefully update on how scrupulosity hurts both their people and their mission, and think about changes beyond surface-level things like adding a curriculum on scrupulosity. The actual incentives ought to change.

Finally, a good amount of Jessica's post (similarly to Zoe's post) concerns her inner experiences, on which she is the undisputed expert. I'm not ignoring those parts above. I just can't say anything about them, merely that as a third person observer it's much easier to discuss the external realities than the internal ones. (Likewise with Zoe and Leverage.)

Replies from: Gunnar_Zarncke, orthonormal, Vaniver, vanessa-kosoy
comment by Gunnar_Zarncke · 2021-10-17T23:15:43.582Z · LW(p) · GW(p)

Claim: People in and adjacent to MIRI/CFAR manifest major mental health problems, significantly more often than the background rate.

I think this is true

My main complaint about this and the Leverage post is the lack of base-rate data. How many people develop mental health problems in a) normal companies, b) startups, c) small non-profits, d) cults/sects? So far, all I have seen are two cases. And in the startups I have worked at, I would also have been able to find mental health cases that could be tied to the company narrative. Humans being human, narratives get woven. And the internet being the internet, some will get blown out of proportion. That doesn't diminish the personal experience at all. I am updating only slightly on CFAR or MIRI. And basically not at all on "things look better from the outside than from the inside."

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-18T02:02:56.048Z · LW(p) · GW(p)

In particular, I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed depression or anxiety (link). Given the kind of undirected, often low-paid work that many have been doing for the last decade, I think that's the right reference class to draw from, and my current guess is we are roughly at that same level, or slightly below it (which is a crazy high number, and I think should give us a lot of pause).

Replies from: Linch
comment by Linch · 2021-10-18T11:21:19.346Z · LW(p) · GW(p)

I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed [emphasis mine] depression or anxiety (link)

I'm confused about how you got to this conclusion, and think it is most likely false. Neither your link, the linked study, or the linked meta-analysis in the linked study of your link says this. Instead the abstract of the linked^3 meta-analysis says:

Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate of the proportion of students with depression was 0.24 (95% confidence interval [CI], 0.18-0.31; I2 = 98.75%). In a meta-analysis of the nine studies reporting the prevalence of clinically significant symptoms of anxiety across 15,626 students, the estimated proportion of students with anxiety was 0.17 (95% CI, 0.12-0.23; I2 = 98.05%).

Further, the discussion section of the linked^3 study emphasizes:

While validated screening instruments tend to over-identify cases of depression (relative to structured clinical interviews) by approximately a factor of two67,68, our findings nonetheless point to a major public health problem among Ph.D. students.

So I think there are at least two things going on here:

  1. Most people with clinically significant symptoms do not go get diagnosed, so "clinically significant symptoms of" depression/anxiety is a noticeably lower bar than "actually clinically diagnosed"
  2. As implied in the quoted discussion above, if everybody were to seek diagnosis, only ~half of the rate of symptomatic people would be clinically diagnosed as having depression/anxiety.
    1. For those keeping score, this is ~12% for depression and 8.5% for anxiety, with some error bars.

Separately, I also think:

my current guess is we are roughly at that same level, or slightly below it

is wrong. My guess is that xrisk reducers have worse mental health on average compared to grad students. (I also believe this, with lower confidence, about people working in other EA cause areas like animal welfare, global poverty, or non-xrisk longtermism, as well as serious rationalists who aren't professionally involved in EA cause areas).

Replies from: Gunnar_Zarncke, habryka4
comment by Gunnar_Zarncke · 2021-10-18T12:09:33.004Z · LW(p) · GW(p)

Note that the pooled prevalence is 24% (CI 18-31). But it differs a lot across studies, symptoms, and location. In the individual studies, the range is really from zero to 50% (or rather to 38% if you exclude a study with only 6 participants). I think a suitable reference class would be the University of California study, which had 3,190 participants and a prevalence of 38%.

Replies from: Linch, habryka4
comment by Linch · 2021-10-18T21:00:38.119Z · LW(p) · GW(p)

Sorry, am I misunderstanding something? I think taking "clinically significant symptoms", specific to the UC system, as a given is wrong because it did not directly address either of my two criticisms:

1. Clinically significant symptoms =/= clinically diagnosed, even in worlds where there is a 1:1 relationship between clinically significant symptoms and "would have been clinically diagnosed", as many people do not get diagnosed

2. Clinically significant symptoms do not have a 1:1 relationship with "would have been clinically diagnosed".

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2021-10-18T22:27:16.219Z · LW(p) · GW(p)

Well, I agree that the actual prevalence you have in mind would be roughly half of 38%, i.e. ~20%. That is still much higher than the 12% you arrived at. And either value is so high that it is little surprise that some people had severe episodes within a 5-year frame.

comment by habryka (habryka4) · 2021-10-18T17:20:22.285Z · LW(p) · GW(p)

The UC Berkeley study was the one that I had cached in my mind as generating this number. I will reread it later today to make sure that it's right, but it sure seems like the most relevant reference class, given the same physical location.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2021-10-18T19:50:54.417Z · LW(p) · GW(p)

I had a look at the situation in Germany and it doesn't look much better. 17% of students are diagnosed with at least one psychiatric disorder. This is based on the health records of all students insured by one of the largest public health insurers in Germany (about ten percent of the population):

https://www.barmer.de/blob/144368/08f7b513fdb6f06703c6e9765ee9375f/data/dl-barmer-arztreport-2018.pdf 

comment by habryka (habryka4) · 2021-10-18T19:37:29.646Z · LW(p) · GW(p)

I feel like the paragraph you cited just seems like the straightforward explanation of where my belief comes from? 

Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate of the proportion of students with depression was 0.24 (95% confidence interval [CI], 0.18–0.31; I2 = 98.75%). In a meta-analysis of the nine studies reporting the prevalence of clinically significant symptoms of anxiety across 15,626 students, the estimated proportion of students with anxiety was 0.17 (95% CI, 0.12–0.23; I2 = 98.05%)

24% of people have depression, 17% have anxiety, resulting in something like 30%-40% having one or the other. 

I did not remember the section about the screening instruments over-identifying cases of depression/anxiety by approximately a factor of two, which definitely cuts down my number, and I should have adjusted it in my above comment. I do think that factor of ~2 does maybe make me think that we are doing a bit worse than grad students, though I am not super sure.
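For concreteness, a minimal sketch of that combination step, taking the 24% and 17% pooled estimates quoted above as given; the abstract doesn't report the overlap (comorbidity) between the two conditions, so the sketch just brackets the possibilities:

```python
# Minimal sketch of combining the two pooled estimates quoted above.
# The meta-analysis abstract doesn't give the overlap (comorbidity) between
# the two conditions, so this brackets the possible combined proportions.

p_dep, p_anx = 0.24, 0.17

full_overlap = max(p_dep, p_anx)                # everyone anxious is also depressed: 0.24
independent = p_dep + p_anx - p_dep * p_anx     # no correlation between the two: ~0.37
no_overlap = p_dep + p_anx                      # disjoint groups: 0.41

print(round(full_overlap, 2), round(independent, 2), round(no_overlap, 2))
# 0.24 0.37 0.41
```

The "30%-40%" figure corresponds to the independence-to-disjoint end of that range; the more the two conditions co-occur, the closer the combined figure gets to 24%.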

Replies from: Linch
comment by Linch · 2021-10-18T20:45:26.456Z · LW(p) · GW(p)

Sorry, maybe this is too nitpicky, but clinically significant symptoms =/= clinically diagnosed, even in worlds where the clinically significant symptoms are severe enough to be diagnosed as such.

If you instead said "in population studies, 30-40% of graduate students have anxiety or depression severe enough to be clinically diagnosed as such were they to seek diagnosis", then I think this would be a normal misreading from not jumping through enough links.

Put another way, if someone in mid-2020 told me that they had symptomatic covid and was formally diagnosed with covid, I would expect that they had worse symptoms than someone who said they had covid symptoms and later tested for covid antibodies. This is because jumping through the hoops to get a clinical diagnosis is nontrivial Bayesian evidence of severity and not just certainty, under most circumstances, and especially when testing is limited and/or gatekept (which is true for many parts of the world for covid in 2020, and is usually true in the US for mental health).

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-18T20:51:16.945Z · LW(p) · GW(p)

Ah, sorry, yes. Me being unclear on that was also bad. The phrasing you give is the one I intended to convey, though I sure didn't do it.

Replies from: Linch
comment by Linch · 2021-10-18T21:03:20.349Z · LW(p) · GW(p)

Thanks, appreciate the update!

comment by orthonormal · 2021-10-17T21:35:32.681Z · LW(p) · GW(p)

Additionally, as a canary statement: I was also never asked to sign an NDA.

comment by Vaniver · 2021-10-17T23:23:52.319Z · LW(p) · GW(p)

I think CFAR would be better off if Anna delegated hiring to someone else.

I think Pete did (most of?) the hiring as soon as he became ED, so I think this has been the state of CFAR for a while (while I think Anna has also been able to hire people she wanted to hire).

Replies from: petemichaud-1
comment by PeteMichaud (petemichaud-1) · 2021-10-18T07:14:42.324Z · LW(p) · GW(p)

It's always been a somewhat group-involved process, but yes, I was primarily responsible for hiring for roughly 2016 through the end of 2017, then it would have been Tim. But again, it's a small org and hiring always involved the whole group to some degree.

Replies from: elityre
comment by Eli Tyre (elityre) · 2021-10-18T08:11:54.178Z · LW(p) · GW(p)

Without denying that it is a small org and staff usually have some input over hiring, that input is usually informal.

My understanding is that in the period when Anna was ED, there was an explicit all-staff discussion when they were considering a hire (after the person had done a trial?). In the Pete era, I'm sure Pete asked for staff members' opinions, and if (for instance) I sent him an email with my thoughts on a potential hire, he would take that info into account, but there was no institutional group meeting.

comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-17T22:27:16.006Z · LW(p) · GW(p)

if one believed somebody else were just as capable of causing AI to be Friendly, clearly one should join their project instead of starting one's own.

Nitpicking: there are reasons to have multiple projects; for example, it's convenient to be in the same geographic location, but not everyone can relocate to any given place.

Replies from: orthonormal
comment by orthonormal · 2021-10-17T22:43:11.063Z · LW(p) · GW(p)

Sure - and MIRI/FHI are a decent complement to each other, the latter providing a respectable academic face to weird ideas. 

Generally though, it's far more productive to have ten top researchers in the same org rather than having five orgs each with two top researchers and a couple of others to round them out. Geography is a secondary concern to that.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-17T22:54:27.014Z · LW(p) · GW(p)

A "secondary concern" in the sense that, we should work remotely? Or in the sense that everyone should relocate? Because the latter is unrealistic: people have families, friends, communities, not anyone can uproot themself.

Replies from: orthonormal
comment by orthonormal · 2021-10-17T23:52:59.665Z · LW(p) · GW(p)

A secondary concern in that it's better to have one org that has some people in different locations, but everyone communicating heavily, than to have two separate organizations.

Replies from: Davidmanheim, vanessa-kosoy
comment by Davidmanheim · 2021-10-18T06:53:08.479Z · LW(p) · GW(p)

I think this is much more complex than you're assuming. As a sketch of why, costs of communication scale poorly, and the benefits of being small and coordinating centrally often beats the costs imposed by needing to run everything as one organization. (This is why people advise startups to outsource non-central work.)

comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-18T09:29:04.502Z · LW(p) · GW(p)

This might be the right approach, but notice that no existing AI risk org does that. They all require physical presence.

Replies from: novalinium
comment by novalinium · 2021-10-18T17:31:50.165Z · LW(p) · GW(p)

Anthropic does not require consistent physical presence.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-18T18:27:33.865Z · LW(p) · GW(p)

AFAICT, Anthropic is not an existential AI safety org per se; they're just doing a very particular type of research which might help with existential safety. But also, why do you think they don't require physical presence?

Replies from: novalinium, Vaniver
comment by novalinium · 2021-10-18T23:13:53.262Z · LW(p) · GW(p)

If you're asking why I believe that they don't require presence, I've been interviewing with them and that's my understanding from talking with them. The first line of copy on their website is

Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.

Sounds pretty much like a safety org to me.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-19T11:24:47.756Z · LW(p) · GW(p)

If you're asking why I believe that they don't require presence, I've been interviewing with them and that's my understanding from talking with them.

Are you talking about "you can work from home and come to the office occasionally", or "you can live on a different continent"?

Sounds pretty much like a safety org to me.

I found no mention of existential risk on their web page. They seem to be a commercial company, aiming at short-to-mid-term applications. I doubt they have any intention to do e.g. purely theoretical research, especially if it has no applications to modern systems. So, what they do can still be meritorious and relevant to reducing existential risk. But, the context of this discussion is: can we replace all AI safety orgs by just one org. And, Anthropic is too specialized to serve such a role.

comment by Vaniver · 2021-10-18T19:20:14.951Z · LW(p) · GW(p)

I believe Anthropic doesn't expect its employees to be in the office every day, but I think this is more pandemic-related than it is a deliberate organizational design choice; my guess is that most Anthropic employees will be in the office a year from now.

comment by Eli Tyre (elityre) · 2021-10-18T07:41:47.662Z · LW(p) · GW(p)

[Edit: I want to note that this represents only a fraction of my overall feelings and views on this whole thing.]

I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases.

I feel some annoyance at this sentence. I appreciate the stated goal of just trying to understand what happened in the different situations, without blaming or trying to evaluate which is worse.

But then the post repeatedly (in every section!) makes reference to Zoe's post, comparing her experience at Leverage to your (and others') experience at MIRI/CFAR, taking specific elements from her account and drawing parallels to your own. This is the main structure of the post!

Some more or less randomly chosen examples (ctrl-f "Leverage" or "Zoe" for lots more):

Zoe begins by listing a number of trauma symptoms she experienced.  I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.

...

Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past events secretively.  This matches my experience.

...

Zoe discusses an unofficial NDA people signed as they left, agreeing not to talk badly of the organization.  While I wasn't pressured to sign an NDA, there were significant security policies discussed at the time (including the one about researchers not asking each other about research). 

...

Like Zoe, I experienced myself and others being distanced from old family and friends, who didn't understand how high-impact the work we were doing was.

If the goal is just to clarify what happened and not at all to blame or compare, then why not...just state what happened at MIRI/CFAR without comparing to the Leverage case, at all?

You (Jessica) say, "I will be noting parts of Zoe's post and comparing them to my own experience, which I hope helps to illuminate common patterns; it really helps to have an existing different account to prompt my memory of what happened." But in that case, why not use her post as a starting point for organizing your own thoughts, but then write something about MIRI/CFAR that stands on its own terms?

. . . 

To answer my own question...

My guess is that you adopted this essay structure because you want to argue that the things that happened at Leverage were not a one-off random thing; they were structurally (not just superficially) similar to dynamics at MIRI / CFAR. That is, there is a common cause of similar symptoms between those two cases.

If so, my impression is that this essay is going too fast, by introducing a bunch of new interpretation-laden data, and fitting that data into a grand theory of similarity between Leverage and MIRI all at once. Just clarifying the facts about what happened is a different (hard) goal than describing the general dynamics underlying those events. I think we'll make more progress if we do the first, well, before moving on to the second.

In effect, because the data is presented as part of some larger theory, I have to do extra cognitive work to evaluate the data on its own terms, instead of slipping into the frame of evaluating whether the larger theory is true or false, or whether my affect towards MIRI should be the same as my affect toward Leverage, or something. It made it harder instead of easier for me to step out of the frame of blame and "who was most bad?".


 

Replies from: elityre
comment by Eli Tyre (elityre) · 2021-10-18T08:02:57.565Z · LW(p) · GW(p)

This feels especially salient because a number of the specific criticisms, in my opinion, don't hold up to scrutiny, but this is obscured by the comparison to Leverage.

Like for any cultural characteristic X, there will be healthy and unhealthy versions. For instance, there are clearly good healthy versions of "having a culture of self improvement and debugging", and also versions that are harmful. 

For each point, Zoe contends that (at least some parts of) Leverage had a destructive version, and you point out that there was a similar thing at MIRI/CFAR. And for many (but not all) of those points, 1) I agree that there was a similar dynamic at MIRI/CFAR, and also 2) I think that the MIRI/CFAR version was much less harmful than what Zoe describes.

For instance,

Zoe is making the claim that (at least some parts of) Leverage had an unhealthy and destructive culture of debugging. You, Jessica, make the claim that CFAR had a similar culture of debugging, and that this is similarly bad. My current informed impression is that CFAR's self improvement culture both had some toxic elements and is/was also an order of magnitude better than what Zoe describes.

Assuming for a moment that my assessment about CFAR is true (of course, it might not be), your comparing debugging at CFAR to debugging at Leverage is confusing to the group cognition, because they have been implicitly lumped together [? · GW].

Now, more people's estimation of CFAR's debugging culture will rise or fall with their estimation of Leverage's debugging culture. And recognizing this, consciously or unconsciously, people are now incentivized to bias their estimation of one or of the other (because they want to defend CFAR, or defend Leverage, or attack CFAR, or attack Leverage).

I'm under this weird pressure, because stating "Anna debugging with me while I worked at CFAR might seem bad, but it was actually mostly innocuous" is kind of awkward: it seems to imply that what happened at Leverage was also not so bad.

And on the flip side, I'll feel more cagey about talking about the toxic elements of CFAR's debugging culture, because in context, that seems to be implying that it was as bad as Zoe's account of Leverage. 

"Debugging culture" is just one example. For many of these points, I think further investigation might show that the thing that happened at one org was meaningfully different from the thing that happened at the other org, in which case, bucketing them together from the getgo seems counterproductive to me. 

Drawing the parallels between MIRI/CFAR and Leverage, point by point, makes it awkward to consider each org's pathologies on its own terms. It makes it seem like if one was bad, then the other was probably bad too, even though it is at least possible that one org had mostly healthy versions of some cultural elements and the other had mostly unhealthy versions of similar elements, or (even more likely) they each had a different mix of pathologies.

I contend that if the goal is to get clear on the facts, we want to do the opposite thing: we want to, as much as possible, consider the details of the cases independently, attempting to do original seeing [LW · GW], so that we can get a good grasp of what happened in each situation. 

And only after we've clarified what happened might we want to go back and see if there are common dynamics in play.

Replies from: elityre, Vladimir_Nesov
comment by Eli Tyre (elityre) · 2021-10-22T08:20:37.464Z · LW(p) · GW(p)

Ok. After thinking further and talking about it with others, I've changed my mind about the opinion that I expressed in this comment, for two reasons.

1) I think there is some pressure to scapegoat Leverage, by which I mean specifically, "write off Leverage as reprehensible, treat it as 'an org that we all know is bad', and move on, while feeling good about ourselves for not being bad the way that they were".

Pointing out some ways that MIRI or CFAR are similar to Leverage disrupts that process. Anyone who both wants to scapegoat Leverage and also likes MIRI has to contend with some amount of cognitive dissonance. (A person might productively resolve this cognitive dissonance by recognizing what I contend are real disanalogies between the two cases, but they do at least have to come to terms with it.)

If you mostly want to scapegoat, this is annoying, but I think we should be making it harder, not easier, to scapegoat in this way.

2) My current personal opinion is that the worst things that happened at MIRI or CFAR are not in the same league as what was described as happening in (at least some parts of) Leverage in Zoe's post, both in terms of the deliberateness of the bad dynamics and the magnitude of the harm they caused.

I think that talking about MIRI or CFAR is mostly a distraction from understanding what happened at Leverage, and what things anyone here should do next. However, there are some similarities between Leverage on the one hand and CFAR or MIRI on the other, and Jessica had some data about the latter which might be relevant to people's view about Leverage.

Basically, there's an epistemic process happening in these comments, and on general principles it is better for people to share info that they think is relevant, so that the epistemic process has the option of disregarding it or not.

 

 

I do think that Jessica writing this post will predictably have reputational externalities that I don't like and I think are unjustified. 

Broadly, I think that onlookers not paying much attention would have concluded from Zoe's post that Leverage is a cult that should be excluded from polite society, and, hearing of both Zoe's and Jessica's posts, are likely to conclude that Leverage and MIRI are similarly bad cults.

I think that both of these views are incorrect simplifications. But I think that the second story is less accurate than the first, and so I think it is a cost if Jessica's post promotes the second view. I have some annoyance about that.

However, I think that we mostly shouldn't be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.

 

I still wish that this post had been written differently in a number of ways (such as emphasizing more strongly that in Jessica's opinion management in corporate America is worse than MIRI or Leverage), but I acknowledge that writing such a post is hard.

Replies from: Hazard, elityre
comment by Hazard · 2021-10-22T21:06:33.468Z · LW(p) · GW(p)

I'm not sure what writing this comment felt like for you, but from my view it seems like you've noticed a lot of the dynamics about scapegoating and info-suppression fields that Ben and Jessica have blogged about in the past (and occasionally pointed out in the course of these comments, though less clearly). I'm going to highlight a few things.

I do think that Jessica writing this post will predictably have reputational externalities that I don't like and I think are unjustified. 

Broadly, I think that onlookers not paying much attention would have concluded from Zoe's post that Leverage is a cult that should be excluded from polite society, and, hearing of both Zoe's and Jessica's posts, are likely to conclude that Leverage and MIRI are similarly bad cults.

I totally agree with this. I also think that the degree to which an "onlooker not paying much attention" concludes this is the degree to which they are habituated to engaging with discussion of wrongdoing as scapegoating games. This seems to be very common (though incredibly damaging) behavior. Scapegoating works on the associative/impressionistic logic of "looks", and Jessica's post certainly makes CFAR/MIRI "look" bad. This post can be used as "material" or "fuel" for scapegoating, regardless of Jessica's intent in writing it. Though it can't be used honestly to scapegoat (if there even is such a thing). Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about "HEY, DON'T USE THIS TO SCAPEGOAT", and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.

(aside, from both my priors on Jess and my reading of the post it was clear to me that Jess wasn't trying to scapegoat CFAR/MIRI. It also simply isn't in Jess's interests for them to be scapegoated)

Another thought: CFAR/MIRI already "look" crazy to most people who might check them out. UFAI, cryonics, and acausal trade are all things that "look" crazy. And yet we're all able to talk about them on LW without worrying about "how it looks", because many, many conversations, sequences, blog posts, comments, etc. have created a community with different common knowledge about what will result in people ganging up on you.

Something that we as a community don't talk a lot about is power structures, coercion, emotional abuse, manipulation, etc. We don't collectively build and share models on their mechanics and structure. As such, I think it's expected that when "things get real" people abandon commitment to the truth in favor of "oh shit, there's an actual conflict, I or others could be scapegoated, I am not safe, I need to protect my people from being scapegoated at all cost".

However, I think that we mostly shouldn't be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.

I totally agree, and I think if you explore this sense you already sorta see how commitment to making sure things "look okay" quickly  becomes a commitment to suppress information about what happened.

(aside, these are some of Ben's post that have been most useful to me for understanding some of this stuff)

Blame Games

Can Crimes Be Discussed Literally?

Judgement, Punishment, and Information-Suppression Fields

Replies from: jessica.liu.taylor, elityre
comment by jessicata (jessica.liu.taylor) · 2021-10-22T21:25:02.070Z · LW(p) · GW(p)

I appreciate this comment, especially that you noticed the giant upfront paragraph that's relevant to the discussion :)

One note on reputational risk: I think I took reasonable efforts to reduce it, by emailing a draft to people including Anna Salamon beforehand. Anna Salamon added Matt Graves (Vaniver) to the thread, and they both said they'd be happy with me posting after editing (Matt Graves had a couple specific criticisms of the post). I only posted this on LW, not on my blog or Medium. I didn't promote it on Twitter except to retweet someone who was already tweeting about it. I don't think such reputation risk reduction on my part was morally obligatory (it would be really problematic to require people complaining about X organization to get approval from someone working at that organization), just possibly helpful anyway.

Spending more than this amount of effort managing reputation risks would seriously risk important information not getting published at all, and too little of that info being published would doom the overall ambitious world-saving project by denying it relevant knowledge about itself. I'm not saying I acted optimally, just, I don't see the people complaining about this making a better tradeoff in their own actions or advising specific policies that would improve the tradeoff.

comment by Eli Tyre (elityre) · 2021-10-23T09:48:00.458Z · LW(p) · GW(p)

Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about "HEY, DON'T USE THIS TO SCAPEGOAT"

I think that's literally true, but also the way you wrote this sentence implies that that is unusual or uncommon.

I think that's backwards. If a person was intentionally and deliberately motivated to scapegoat some other person or group, it is an effective rhetorical move to say "I'm not trying to punish them, I just want to talk freely about some harms."

By pretending that you're not attacking the target, you protect yourself somewhat from counterattack. Now you can cause reputational damage, and if people try to punish you for doing that, you can retreat to the Motte of "but I was just trying to talk about what's going on. I specifically said not to punish anyone!"

and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.

This also seems too strong to me. I expect that many movement EAs will read Zoe's post and think "well, that's enough information for me to never have anything to do with Geoff or Leverage." This isn't because they're not interested in justice, it's because they don't have the time or the interest to investigate every allegation, so they're using some rough heuristics and policies such as "if something looks sufficiently like a dangerous cult, don't even bother giving it the benefit of the doubt."

Replies from: Hazard
comment by Hazard · 2021-10-23T16:12:03.549Z · LW(p) · GW(p)

When I was drafting my comment, the original version of the text you first quoted was, "Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about 'HEY DON'T USE THIS TO SCAPEGOAT' (which people are totally capable of ignoring)", guess I should have left that in there. I don't think it's uncommon to ignore such disclaimers, I do think it actively opposes behaviors and discourse norms I wish to see in the world.

I agree that putting a "I'm not trying to blame anyone" disclaimer can be a pragmatic rhetorical move for someone attempting to scapegoat. There's an alternate timeline version of Jessica that wrote this post as a well crafted, well defended rhetorical attack, where the literal statements in the post all clearly say "don't fucking scapegoat anyone, you fools" but all the associative and impressionistic "dark implications" (Vaniver's language) say "scapegoat CFAR/MIRI!" I want to draw your attention to the fact that for a potential dark implication to do anything, you need people who can pick up that signal. For it to be an effective rhetorical move, you need a critical mass of people who are well practiced in ignoring literal speech, who understand on some level that the details don't matter, and are listening in for "who should we blame?"

To be clear, I think there is such a critical mass! I think this is very unfortunate! (though not awkward, as Scott put it) There was a solid 2+ days where Scott and Vaniver's insistence on this being a game of "Scapegoat Vassar vs scapegoat CFAR/MIRI" totally sucked me in, and instead of reading the contents of anyone's comments I was just like "shit, whose side do I join? How bad would it be if people knew I hung out with Vassar once? I mean I really loved my time at CFAR, but I'm also friends with Ben and Jess. Fuck, but I also think Eli is a cool guy! Shit!" That mode of thinking I engaged in is a mode that can't really get me what I want, which is larger and larger groups of people that understand scapegoating dynamics and related phenomena.

This also seems too strong to me. I expect that many movement EAs will read Zoe's post and think "well, that's enough information for me to never have anything to do with Geoff or Leverage." This isn't because they're not interested in justice, it's because they don't have the time or the interest to investigate every allegation, so they're using some rough heuristics and policies such as "if something looks sufficiently like a dangerous cult, don't even bother giving it the benefit of the doubt."

Okay, I think my statement was vague enough to be mistaken for a statement I think is too strong. Though I expect you might consider my clarification too strong as well :)

I was thinking about the "in any way that matters" part. I can see how that implies a sort of disregard for justice that spans across time. Or more specifically, I can see how you would think it implies that certain conversations you've had with EA friends were impossible, or that they were lying/confabulating the whole convo, and you don't think that's true. I don't think that's the case either. I'm thinking about it as more piece-wise behavior. One will sincerely care about justice, but in that moment where they read Jess's post, ignore the giant disclaimer about scapegoating, and try to scapegoat MIRI/CFAR/Leverage, in that particular moment the cognitive processes generating their actions aren't aligned with justice, and are working against it. Almost like an "anti-justice traumatic flashback", but most of the time it's much more low-key and less intense than what you will read about in the literature on flashbacks. Malcolm Ocean does a great job of describing this sort of "falling into a dream" in his post Dream Mashups (his post is not about scapegoating, it's about ending up running a cognitive algo that hurts you without noticing).

To be clear, I'm not saying such behavior is contemptible, blameworthy, bad, or to-be-scapegoated. I am saying it's very damaging, and I want more people to understand how it works. I want to understand how it works more. I would love to not get sucked into as many anti-justice dreams where I actively work against creating the sort of world I want to live in.

So when I said "not aligned with justice in any important relevant way", that was more a statement about "how often and when will people fall into these dreams?" Sorta like the concept of "fair weather friend", my current hunch is that people fall into scapegoating behavior exactly when it would be most helpful for them to not. While reading a post about "here's some problems I see in this institution that is at the core of our community" is exactly when it is most important for one's general atemporal commitment to justice to be present in one's actual thoughts and actions. 

comment by Eli Tyre (elityre) · 2023-01-19T15:54:31.777Z · LW(p) · GW(p)

I retracted this comment, because reading all of my comments here, a few years later, I feel much more compelled by my original take than by this addition.

I think the addition points out real dynamics, but that those dynamics don't take precedence over the dynamics that I expressed in the first place. Those seem higher priority to me.

comment by Vladimir_Nesov · 2021-10-18T11:42:20.115Z · LW(p) · GW(p)

This works as a general warning against awareness of hypotheses that are close to but distinct from the prevailing belief. The goal should be to make this feasible, not to become proficient in noticing the warning signs and keeping away from this.

I think the feeling that this kind of argument is fair is a kind of motivated cognition that's motivated by credence. That is, if a cognitive move (argument, narrative, hypothesis) puts forward something false, there is a temptation to decry it for reasons that would prove too much, that would apply to good cognitive moves just as well if considered in their context, which credence-motivated cognition won't be doing.

comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-17T10:08:05.629Z · LW(p) · GW(p)

Full disclosure: I am a MIRI Research Associate. This means that I receive funding from MIRI, but I am not a MIRI employee and I am not privy to its internal operation or secrets.

First of all, I am really sorry you had these horrible experiences.

A few thoughts:

Thought 1: I am not convinced the analogy between Leverage and MIRI/CFAR holds up to scrutiny. I think that Geoff Anders is most likely a bad actor, whereas MIRI/CFAR leadership is probably acting in good faith. There seems to be significantly more evidence of bad faith in Zoe's account than in Jessica's account, and the conclusion is reinforced by adding evidence from other accounts. In addition, MIRI definitely produced some valuable public research whereas I doubt the same can be said of Leverage, although I haven't been following Leverage so I am not confident about the latter (ofc it's in principle possible for a deeply unhealthy organization to produce some good outputs, and good outputs certainly don't excuse abuse of personnel, but I do think good outputs provide some evidence against such abuse).

It is important not to commit the fallacy of gray [LW · GW]: it would risk both judging MIRI/CFAR too harshly and judging Leverage insufficiently harshly. The comparison Jessica makes to "normal corporations" reinforces this impression: I have much experience in the industry, and although it's possible I've been lucky in some ways, I still very much doubt the typical company is nearly as bad as Leverage.

Thought 2: From my experience, AI alignment is a domain of research that intrinsically comes with mental health hazards. First, the possibility of impending doom and the heavy sense of responsibility are sources of stress. Second, research inquiries often enough lead to "weird" metaphysical questions that risk overturning the (justified or unjustified) assumptions we implicitly hold to maintain a sense of safety in life. I think it might be the closest thing in real life to the Lovecraftian notion of "things that are best not to know because they will drive you mad". Third, the sort of people drawn to the area and/or having the necessary talents seem to often also come with mental health issues (I am including myself in this group).

This might be regarded as an argument to blame MIRI less for the mental health fallout described by Jessica, but this is also an argument to pay more attention to the problem. It would be best if we could provide the people working in the area with the tools and environment to deal with these risks.

Thought 3: The part that concerned me the most in Jessica's account (in part due to its novelty to me) is MIRI's internal secrecy policy. While it might be justifiable to have some secrets to which only some employees are privy, it seems very extreme to require going through an executive because even the mere fact that a secret project exists is too dangerous. MIRI's secrecy policy seemed questionable to me even before, but this new spin makes it even more dubious.

Overall, I wish MIRI was more transparent, so that for example its supporters would know about this internal policy. I realize there are tradeoffs involved, but I am not convinced MIRI chose the right balance. To me it feels like overconfidence about MIRI's ability to steer the right way without the help of external critique.

Moreover, I'm a little worried that MIRI's lack of transparency might pose a risk for the entire AI safety project. Tbh, one of my first thoughts when I saw the headline of the OP was "oh no, what if some scandal around MIRI blows up and the shockwave buries the entire community". And I guess some people might think this is a reason for more secrecy. IMO it's a reason for less secrecy (not necessarily less secrecy about technical AI stuff, but less secrecy about management and high-level plans). If we don't have any skeletons in the closet, we don't need to worry about the day they will come out. And eventually everything comes out, more or less. When most of everything is in the open, the community can find the right balance around it, and the reputation system is much more robust.

Thought 4: "Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness." I think (hope?) this is not at all a prevalent stance in the community (or at least in its leading echelons), but just for the record I want to note my strong position that the "someone" in this story is very misguided. Like I said, I don't think community is currently comparable to Leverage, but this is the sort of thing that can push us in that direction.

Replies from: Dojan, ChristianKl, jessica.liu.taylor
comment by Dojan · 2021-10-17T14:37:40.041Z · LW(p) · GW(p)

Plus a million points for "IMO it's a reason for less secrecy"!

If you put a lid on something you might contain it in the short term, but only at the cost of increasing the pressure: And pressure wants out, and the higher the pressure the more explosive it will be when it inevitably does come out. 

I have heard too many accounts like this, in person and anecdotally, on the web and off, for me to currently be interested in working with or even getting too closely involved with any of the organizations in question. The only way to change this for me is to believably cultivate a healthy, transparent and supportive environment.

This made me go back and read "Every Cause wants to be a Cult" (Eliezer, 2007) [LW · GW], which includes quotes like this one:
"Here I just want to point out that the worthiness of the Cause does not mean you can spend any less effort in resisting the cult attractor. And that if you can point to current battle lines, it does not mean you confess your Noble Cause unworthy. You might think that if the question were, “Cultish, yes or no?” that you were obliged to answer, “No,” or else betray your beloved Cause."

comment by ChristianKl · 2021-10-17T17:15:04.753Z · LW(p) · GW(p)

Thought 2: From my experience, AI alignment is a domain of research that intrinsically comes with mental health hazards. First, the possibility of impending doom and the heavy sense of responsibility are sources of stress. Second, research inquiries often enough lead to "weird" metaphysical questions that risk overturning the (justified or unjustified) assumptions we implicitly hold to maintain a sense of safety in life. I think it might be the closest thing in real life to the Lovecraftian notion of "things that are best not to know because they will drive you mad". Third, the sort of people drawn to the area and/or having the necessary talents seem to often also come with mental health issues (I am including myself in this group).

That sounds like MIRI should have a counsellor on its staff.

Replies from: crabman
comment by philip_b (crabman) · 2021-10-17T17:59:28.035Z · LW(p) · GW(p)

That would make them more vulnerable to claims that they use organizational mind control on their employees, and at the same time make it more likely that they would actually use it.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T08:39:28.210Z · LW(p) · GW(p)

You would likely hire someone who's traditionally trained, credentialed, and has work experience, likely in a tradition like gestalt therapy that focuses on being nonmanipulative, instead of doing a bunch of your own psych-experiments.

Replies from: benjamin-j-campbell
comment by benjamin.j.campbell (benjamin-j-campbell) · 2021-10-18T14:31:36.133Z · LW(p) · GW(p)

There's an easier solution that doesn't run the risk of being or appearing manipulative. You can contract external and independent counsellors and make them available to your staff anonymously. I don't know if there's anything comparable in the US, but in Australia they're referred to as Employee Assistance Programs (EAPs). Nothing you discuss with the counsellor can be disclosed to your workplace, although in rare circumstances there may be mandatory reporting to the police (e.g. if abuse or ongoing risk to a minor is involved).

This also goes a long way toward creating a place where employees can talk about things they're worried will seem crazy in work contexts.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T15:08:51.954Z · LW(p) · GW(p)

Solutions like that might work, but it's worth noting that just having an average therapist likely won't be enough.

If you actually care about a level of security that protects secrets against intelligence agencies, operational security of the office of the therapist is a concern. 

Governments that have security clearances don't want their employees to talk about classified information with therapists who don't have the security clearances.

Talking nonjudgmentally with someone who has reasonable fears that humanity won't survive the next ten years because of fast AI timelines is not easy.

comment by jessicata (jessica.liu.taylor) · 2021-10-17T21:19:26.474Z · LW(p) · GW(p)

As far as I can tell, normal corporate management is much worse than Leverage. The kind of people from that world will, sometimes when prompted in private conversations, say things like:

  • Standard practice is to treat negotiations with other parties as zero-sum games.
  • "If you look around the table and can't tell who the sucker is, it's you" is a description of a common, relevant social dynamic in corporate meetings.
  • They have PTSD symptoms from working in corporate management, and are very threat-sensitive in general.
  • They learned from experience to treat social reality in general as fake, everything as an act.
  • They learned to accept that "there's no such thing as not being lost", like they've lost the ability to self-locate in a global map (I've experienced losing this to a significant extent).
  • Successful organizations get to be where they are by committing crimes, so copying standard practices from them is copying practices for committing crimes.

This is, to a large extent, them admitting to being bad actors, them and others having been made so by their social context. (This puts the possibility of "Geoff Anders being a bad actor" into perspective)

MIRI is, despite the problems noted in the post, as far as I can tell the most high-integrity organization doing AI safety research. FHI contributes some, but overall lower-quality research; Paul Christiano does some relevant research; OpenAI's original mission was actively harmful, and it hasn't done much relevant safety research as far as I can tell. MIRI's public output in the past few years since I left has been low, which seems like a bad sign for its future performance, but what it's done so far has been quite a large portion of the relevant research. I'm not particularly worried about scandals sinking the overall non-MIRI AI safety world's reputation, given the degree to which it is of mixed value.

Replies from: nostalgebraist, vanessa-kosoy
comment by nostalgebraist · 2021-10-18T00:11:33.554Z · LW(p) · GW(p)

As far as I can tell, normal corporate management is much worse than Leverage

Your original post drew a comparison between MIRI and Leverage, the latter of which has just been singled out for intense criticism.

If I take the quoted sentence literally, you're saying that "MIRI was like Leverage" is a gentler critique than "MIRI is like your regular job"?

If the intended message was "my job was bad, although less bad than the jobs of many people reading this, and instead only about as bad as Leverage Research," why release this criticism on the heels of a post condemning Leverage as an abusive cult?  If you believe the normally-employed among LessWrong readers are being abused by sub-Leverage hellcults, all the time, that seems like quite the buried lede!

Sorry for the intense tone, it's just ... this sentence, if taken seriously, reframes the entire post for me in a big, weird, bad way.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T00:49:08.984Z · LW(p) · GW(p)

I thought I was pretty clear, at the end of the post, that I wasn't sad that I worked at MIRI instead of Google or academia. I'm glad I left when I did, though.

The conversations I'm mentioning with corporate management types were surprising to me, as were the contents of Moral Mazes, and Venkatesh Rao's writing. So "like a regular job" doesn't really communicate the magnitude of the harms to someone who doesn't know how bad normal corporate management is. It's hard for me to have strong opinions given that I haven't worked in corporate management, though. Maybe a lot of places are pretty okay.

I've talked a lot with someone who got pretty high in Google's management hierarchy, who seems really traumatized (and says she is) and who has a lot of physiological problems, which seem overall worse than mine. I wouldn't trade places with her, mental health-wise.

MIRI wouldn't make sense as a project if most regular jobs were fine, people who were really ok wouldn't have reason to build unfriendly AI. I discussed with some friends the benefits of working at Leverage vs. MIRI vs. the US Marines, and we agreed that Leverage and MIRI were probably overall less problematic, but the fact that the US Marines signal that they're going to dominate/abuse people is an important advantage relative to the alternatives, since it sets expectations more realistically.

Replies from: elityre, Vaniver, T3t
comment by Eli Tyre (elityre) · 2021-10-18T02:33:32.604Z · LW(p) · GW(p)

MIRI wouldn't make sense as a project if most regular jobs were fine, people who were really ok wouldn't have reason to build unfriendly AI.

I just want to note that this is a contentious claim. 

There is a competing story, and one much more commonly held among people who work for or support MIRI, that the world is heading towards an unaligned intelligence explosion due to the combination of a coordination problem and very normal motivated reasoning about the danger posed by lucrative and prestigious projects.

One could make the claim that "healthy" people (whatever that means) wouldn't exhibit those behaviors, i.e. that they would be able to coordinate and avoid rationalizing. But that's a non-standard view.

I would prefer that you specifically flag it as a non-standard view, and then either make the argument for that view over the more common one, or highlight that you're not going into detail on the argument and that you don't expect others to accept the claim.

As it is, it feels a little like this is being slipped in as if it is a commonly accepted premise.  

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:41:43.993Z · LW(p) · GW(p)

I agree this is a non-standard view.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2021-10-18T10:33:29.964Z · LW(p) · GW(p)

Yes, I would! Any pointers? 
(to avoid miscommunication I'm reading this to say that people are more likely to build UFAI because of a traumatizing environment vs. the normal reasons Eli mentioned)

comment by Vaniver · 2021-10-18T01:12:13.682Z · LW(p) · GW(p)

Note that there's an important distinction between "corporate management" and "corporate employment"--the thing where you say "yeesh, I'm glad I'm not a manager at Google" is substantially different from the thing where you say "yeesh, I'm glad I'm not a programmer at Google", and the audience here has many more programmers than managers.

[And also Vanessa's experience [LW(p) · GW(p)] matches my impressions, tho I've spent less time in industry.]

[EDIT: I also thought it was clear that you meant this more as a "this is what MIRI was like" than "MIRI was unusually bad", but I also think this means you're open to nostalgebraist's objection, that you're ordering things pretty differently from how people might naively order them.]

Replies from: iceman
comment by iceman · 2021-10-18T13:45:49.552Z · LW(p) · GW(p)

My experience was that if you were T-5 (Senior), you had some overlap with PM and management games, and at T-6 (Staff), you were often in them. I could not handle the politics to get to T-7. Programmers below T-5 are expected to earn promotions or to leave.

Google's a big company, so it might have been different elsewhere internally. My time at Google certainly traumatized me, but probably not to the point of anything in this or the Leverage thread.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-18T21:32:01.287Z · LW(p) · GW(p)

Programmers below T-5 are expected to earn promotions or to leave.

This changed something like five years ago [edit: August 2017], to where people at level four (one level above new grad) no longer needed to get promoted to stay long term.

comment by RobertM (T3t) · 2021-10-18T01:31:32.290Z · LW(p) · GW(p)

I think maybe a bit of the confusion here is nostalgebraist reading "corporate management" to mean something like "a regular job in industry", whereas you're pointing at "middle- or upper-management in sufficiently large or maze-like organizations"? Because those seem very different to me and I could imagine the second being much worse for people's mental health than the first.

Separately, I'm confused about the claim that "people who were really ok wouldn't have reason to build unfriendly AI"; it sounds like you don't agree with the idea that UFAI is the default outcome of building AGI without a specific effort to make it friendly? (This is probably a distraction from this thread's subject but I'd be interested to read your thoughts on that if you've written them up somewhere.)

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T01:40:31.796Z · LW(p) · GW(p)

I think maybe a bit of the confusion here is nostalgebraist reading “corporate management” to mean something like “a regular job in industry”, whereas you’re pointing at “middle- or upper-management in sufficiently large or maze-like organizations”?

Yes, that seems likely. I did some internships at Google as a software engineer and they didn't seem better than working at MIRI on average, although they had less intense psychological effects, as things didn't break out in fractal betrayal during the time I was there.

Separately I’m confused about the claim that “people who were really ok wouldn’t have reason to build unfriendly AI”

People might think they "have to be productive" which points at increasing automation detached from human value, which points towards UFAI. Alternatively, they might think there isn't a need to maximize productivity, and they can do things that would benefit their own values, which wouldn't include UFAI. (I acknowledge there could be coordination problems where selfish behavior leads to cutting corners, but I don't think that's the main driver of existential risk failure modes)

comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-17T22:42:38.554Z · LW(p) · GW(p)

I worked for 16 years in the industry, including management positions, including (briefly) having my own startup. I talked to many, many people who worked in many companies, including people who had their own startups and some with successful exits.

The industry is certainly not a rose garden. I encountered people who were selfish, unscrupulous, megalomaniac or just foolish. I've seen lies, manipulation, intrigue and plain incompetence. But, I also encountered people who were honest, idealistic, hardworking and talented. I've seen teams trying their best to build something actually useful for some corner of the world. And, it's pretty hard to avoid reality checks when you need to deliver a real product for real customers (although some companies do manage to just get more and more investments without delivering anything until the eventual crash).

I honestly think most of them are not nearly as bad as Leverage.

comment by PhoenixFriend · 2021-10-19T05:01:40.284Z · LW(p) · GW(p)

[Deleted]

Replies from: Duncan_Sabien, elityre, AnnaSalamon, Davis_Kingsley, adam_scholl, jimrandomh, Aella, Yvain, Vladimir_Nesov, Unreal, jessica.liu.taylor, Benquo
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-19T06:15:43.113Z · LW(p) · GW(p)

Trying to do a cooperative, substantive reply.  Seems like openness and straightforwardness are the best way here.

I found the above to be a mix of surprising and believable.  I was at CFAR full-time from Oct 2015 to Oct 2018, and in charge of the mainline workshops specifically for about the last two of those three years.

At least four people

This surprises me.  I don't know what the bar for "worked in some capacity with the CFAR/MIRI team" is.  For instance, while at CFAR, I had very little attention on the comings-and-goings at MIRI, a much larger organization, and also CFAR had a habit of using five or ten volunteers at a time for workshops, month in and month out.  So this could be intended to convey something like "out of the 500 people closest to both orgs."  If it's meant to imply "four people who would have worked for more than 20 hours directly with Duncan during his three years at CFAR," then I am completely at a loss; I can't think of any such person who I am aware had a psychotic break.

Psychedelic use was common among the leadership

This also surprises me.  I do not recall ever either directly encountering or hearing open discussions of psychedelic use while at CFAR.  It was mentioned nonzero times in the abstract, as were any of dozens of other things (CFAR's colloquia wandered far and wide).  But while I can think of a time when a CFAR staff member spoke quietly and reservedly while not at work about an experience with psychedelics, I was not in touch with any such common institutional casualness, or "this is cool and people should do it" vibe, between 10/15 and 10/18.  I am not sure if this means it happened at a different time, or happened out of my sight, or what; I'm just reporting that I myself did not pick up on the described vibe at all.  In fact, I can think of several times that psychedelic use was mentioned by participants or volunteers at workshops, and was immediately discouraged by staff members along the lines of "look, that's the sort of thing people might have personal experiences with, but it's very much not at all in line with what we're trying to do or convey here."

Debugging sessions

This ... did not surprise me.  It is more extreme than I would have described and more extreme than I experienced or believe I participated in/perpetuated, but it is not to the point where I feel a "pshhhh, come on."  I will state for the record that I recall very very few debugging sessions between me and any less-senior staff member in my three years (<5), and absolutely none where I was the one pushing for debugging to happen (as opposed to someone like Eli Tyre (who I believe would not mind being named) asking for help working through something or other).

Relatedly, the organization uses a technique called goal factoring

This one misses the mark entirely, as far as I can see.  Goal factoring, at least in the 2015-2018 window, bears no resemblance whatsoever EDIT: little resemblance [LW(p) · GW(p)] to things like Connection Theory or Charting.  It's a pretty straightforward process of "think about what you want, think about its individual properties, and do a brainstorming session on how you might get each individual property on its own before returning to the birds'-eye view and making a new plan."  There's nothing psych-oriented about it except in the very general sense of "what kinds of good things were you hoping to get, when you applied to med school?"

No one at CFAR was required to use the double-crux conversational technique

This one feels within the realm of the believable.  The poster describes a more blatant adversarial atmosphere than I experienced, but I did sometimes have the feeling, myself, that people would double crux when that was useful to their goals and not when it wasn't, and I can well imagine someone else having a worse experience than I did.  I had some frustrating arguments in which it took more than an hour to establish the relevance of e.g. someone having agreed in writing to show up to a thing and then not doing so.  However, in my own personal experience, this didn't seem any worse than what most non-Hufflepuff humans do most of the time; it was more "depressingly failing to be better than normal" than "notably bad."  If someone had asked me to make a list of the top ten things I did not like at CFAR, or thought were toxic, this would not have made the list from my own personal point of view.

There were required sessions of a social/relational practice called circling

This is close to my experience.  Notably, there was a moment in CFAR's history when it felt like the staff had developed a deep and justified rapport, and was able to safely have conversations on extremely tricky and intimate topics.  Then a number of new hires were just—dropped in, sans orientation, and there was an explicit expectation that I/we go on being just as vulnerable and trusting as we had been the day before.  I boycotted those circles for several months before tolerance-for-boycott ran out and I was told I had to start coming again because it was a part of the job.  I disagree with "The whole point of circling is to create a state of emotional vulnerability and openness in the person who is being circled," but I don't disagree that this is often the effect, and I don't disagree with "This often required rank-and-file members to be emotionally vulnerable to the leadership who perhaps didn't actually have their best interests at heart."

The overall effect of all this debugging and circling was that it was hard to maintain the privacy and integrity of your mind if you were a rank-and-file employee at CFAR.

This also has the ring of truth, though I'm actually somewhat confused by the rank-and-file comment.  Without trying to pin down or out this person, there were various periods at CFAR in which the organization was more (or less) flat and egalitarian, so there were many times (including much of my own time there) when it wouldn't make sense to say that "rank-and-file employees" was a category that existed.  However, if I think about the times when egalitarianism was at its lowest, and people had the widest diversity of power and responsibility, those times did roughly correspond with high degrees of circling and one-on-one potentially head-melty conversations.

Pressure to debug at work

This bullet did not resonate with me at all, but I want to be clear that that's not me saying "no way."  Just that I did not experience this, and do not recall hearing this complaint, and do not recall participating in the kind of close debugging that I would expect to create this feeling.  I had my own complaints about work/life boundaries, but for me personally they didn't lie in "I can't get away from the circles and the debugging."  (I reiterate that there wasn't much debugging at all in my own experience, and all of that solicited by people wanting specific, limited help with specific, limited problems (as opposed to people virtuously producing desire-to-be-debugged in response to perceived incentives to do so, as claimed in some of the Leverage stuff).)

The longer you stayed with the organization, the more it felt like your family and friends on the outside could not understand the problems facing the world, because they lacked access to the reasoning tools and intellectual leaders you had access to. This led to a deep sense of alienation from the rest of society. Team members ended up spending most of their time around other members and looking down on outsiders as "normies".

This zero percent matches my experience, enough that I consider this the strongest piece of evidence that this person and I did not overlap, or had significant disoverlap.  The other alternative being that I just swam in a different subcultural stream.  But my relationships with friends and family utterly disconnected from the Bay Area and the EA movement and the rationalist community only broadened and strengthened during my time at CFAR.

There was a rarity narrative around being part of the only organization trying to "actually figure things out", ignoring other organizations in the ecosystem working on AI safety and rationality and other communities with epistemic merit. CFAR/MIRI perpetuated the sense that there was nowhere worthwhile to go if you left the organization.

Comments like this make me go "ick" at the conflation between CFAR and MIRI, which are extremely different institutions with extremely different internal cultures (I have worked at each).  But if I limit this comment to just my experience at CFAR—yes, this existed, and bothered me, and I can recall several instances of frustratedly trying to push back on exactly this sort of mentality.  e.g. I had a disagreement with a staff member who claimed that the Bay Area rationalist community had some surprising-to-me percentage of the world's agentic power (it might have been 1%, it might have been 10%; either way, it struck me as way too high).  That being said, that staff member and I had a cordial and relatively productive disagreement.  It's possible that I was placed highly enough in the hierarchy that I wasn't subject to the kind of pressure that this person's account seems to imply.

There was a rarity narrative around the sharpness of Anna's critical thinking skills, which made it so that if Anna knew everything you knew about a concern and disagreed with you, there was a lot of social pressure to defer to her judgment.

I did not have this experience.  I did, however, have the experience of something like "if Anna thinks your new idea for a class (or whatever) is interesting, it will somehow flourish and there will be lots of discussion, and if Anna thinks it's boring or trivial, then you'll be perfectly able to carry on tinkering with it by yourself, or if you can convince anyone else that it's interesting."  I felt personally grumpy about the different amount of water I felt different ideas got; some I thought were unpromising got much more excitement than some I thought were really important.

However, in my own personal experience/my personal story, this is neither a) Anna's fault, nor b) anything other than business as usual?  Like, I did not experience, at all, any attempt from Anna to cultivate some kind of mystique, or to try to swing other people around behind her.  Quite the contrary—I multiple times saw Anna try pretty damn hard to get people to unanchor from her own impressions or reactions, and I certainly don't blame her for being honest about what she found promising, even where I disagreed.  My sense was that the stuff I was grumpy about was just the result of individuals freely deferring to Anna's judgment, or just the way that vibes and enthusiasm spread in monkey social groups.  I never felt like, for instance, Anna (or anyone on Anna's behalf) was trying to suffocate one of my ideas.  It just felt like my ideas had a steeper hill in front of them, due to no individual's conscious choices.  Moloch, not malice.

This made it so that Anna's update towards short timelines caused a herd of employees and volunteers to defer to her judgment almost overnight...however, Anna also put substantial pressure on members of the team to act as if shorter timelines were the case.

Did not experience.  Do not rule out, but did not experience.  Can neither confirm nor deny.

The later iterations of the team idolized the founders...no new techniques have been developed in quite a few years

Yes.  This bothered me no end, and I both sparked and joined several attempts to get new curriculum development initiatives off the ground.  None of these were particularly successful, and I consider it really really bad that no substantially new CFAR content was published in my last year (or, to the best of my knowledge, in the three years since).  However, to be clear, I also did not experience any institutional resistance to the idea of new development.  It just simply wasn't prioritized on a mission level and therefore didn't cohere.

There was rampant use of narrative warfare (called "narrativemancy" within the organization) by leadership to cast aspersions and blame on employees and each other. There was frequent non-ironic use of magical and narrative schemas which involved comparing situations to fairy-tales or myths and then drawing conclusions about those situations with high confidence. The narrativemancer would operate by casting various members of the group into roles and then using the narrative arc of the story to make predictions about how the relationship dynamics of the people involved would play out. There were usually obvious controlling motives behind the narrative framings being employed, but the framings were hard to escape for most employees.

This reads as outright false to me, like the kind of story you'd read about in a clickbait tabloid that overheard enough words to fabricate something but didn't actually speak to anyone on the ground. 

The closest I can think of to what might have sparked the above description is Val's theorizing on narrativemancy and the social web?  But this mainly played out in scattered colloquium talks that left me, at least, mostly nonplussed.  To the extent that there was occasional push toward non-ironic use of magical schemas, I explicitly and vigorously pushed back (I had deep misgivings about even tiny, nascent signs of woo within the org).  But I saw nothing that resembles "people acting as narrativemancers" or telling stories based on clichés or genre tropes.  I definitely never told such stories myself, and I never heard one told about me or around me.

That being said, the same caveats apply: this could have been at a different time, or in a different subculture within the org, or something I just missed.  I am not saying "this anecdote is impossible."  I'm just saying ????

I will say this, though: to the extent that the above description is accurate, that's deeply fucked.  Like, I want to agree wholeheartedly with the poster's distaste for the described situation, separate from my ability to evaluate whether it took place.  That's exactly the sort of thing you go to a "center for applied rationality" to escape, in my book.

Generally there was a lack of clarity around which set of rules were at play at CFAR events and gatherings: those of a private gathering or those of the workplace. It seemed that the decision of which rules were at play were made ad hoc depending on the person's aesthetic / presentation, their social standing, and the offense being considered. In the absence of clear standards people ultimately fell back on blame-games and coalitional negotiation to resolve issues instead of using more reasonable approaches.

I do not recognize the vibe of this anecdote, either (can't think of "offenses" committed or people sitting in judgment; sometimes people didn't show up on time for meetings?  Or there would be personal disagreements between e.g. romantic exes?).  However, I will note that CFAR absolutely blurred the line between formal workshop settings, after-workshop parties, and various tiers of alumni events that became more or less intimate depending on who was invited.  While I didn't witness any "I can't tell what rules apply; am I at work or not?" confusion, it does seem to me that CFAR in particular would be 10x more likely to create that confusion in someone than your standard startup.  So: credible?

At such social gatherings you felt uncertain at times if you were enjoying yourself at a party, advocating for yourself in an interview, or defending yourself on trial for a crime. This confusing mixture of possible social expectations disoriented attendees and caught them off-guard giving team members deeper insight into their psyches. No party was just a party.

Again, confusing and not at all in synch with my personal experience.  But again: plausible/credible, especially if you add in the fact that I had a relatively secure role and am relatively socially oblivious.  I do not find it hard to imagine being a more junior staff member and feeling the anxiety and insecurity described.


I don't know.  I can't tell how helpful any of my commentary here is.  I will state that while CFAR and I have both tried to be relatively polite and hands-off with each other since parting ways, no one ever tried to get me to sign an NDA, or implied that I couldn't or shouldn't speak freely about my experiences or opinions.  I've been operating under the more standard-in-our-society just-don't-badmouth-your-former-workplace-and-they-won't-badmouth-you peace treaty, which seems good for all sorts of reasons and didn't seem unusually strong for CFAR in particular.

Which is to say: I believe myself to be free to speak freely, and I believe myself to be being candid here.  I am certainly holding many thoughts and opinions in reserve, but I'm doing so by personal choice and golden-rule policy, and not because of a sense that Bad Things Would (immediately, directly) Happen If I Didn't.

Shrug emoji?

Replies from: TekhneMakre, TekhneMakre
comment by TekhneMakre · 2021-10-19T07:58:27.023Z · LW(p) · GW(p)
Like, I want to agree wholeheartedly with the poster's distaste for the described situation, separate from my ability to evaluate whether it took place.

As a general dynamic, no idea if it was happening here but just to have as a hypothesis, sometimes people selectively follow rules of behavior around people that they expect will seriously disapprove of the behavior. This can be well-intentioned, e.g. simply coming from not wanting to harm people by doing things around them that they don't like, but could have the unfortunate effect of producing selected reporting: you don't complain about something if you're fine with it or if you don't see it, so the only reports we get are from people who changed their mind (or have some reason to complain about something they don't actually think is bad). (Also flagging that this is a sort of paranoid hypothesis; IDK how the world is on this dimension, but the Litany of Gendlin seems appropriate. Also it's by nature harder to test, and therefore prone to the problems that untestable hypotheses have.)

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-19T15:52:26.724Z · LW(p) · GW(p)

This literally happened with Brent; my current model is that I was (EDIT: quite possibly unconsciously/reflexively/non-deliberately) cultivated as a shield by Brent, in that he much-more-consistently-than-one-would-expect-by-random-chance happened to never grossly misbehave in my sight, and other people, assuming I knew lots of things I didn't, never just told me about gross misbehaviors that they had witnessed firsthand.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-19T17:46:40.134Z · LW(p) · GW(p)

Damn.

comment by TekhneMakre · 2021-10-19T07:46:21.949Z · LW(p) · GW(p)
there was a lot of social pressure to defer to her judgment. 
Moloch, not malice.

The two stories here fit consistently in a world where Duncan feels less social pressure than others including Phoenix, so that Duncan observes people seeming to act freely but Molochianly, and they experience network-effect social pressure (which looks Molochian, but is maybe best thought of as a separate sort of thing).

comment by Eli Tyre (elityre) · 2021-10-19T08:36:49.781Z · LW(p) · GW(p)

I worked for CFAR from 2016 to 2020, and am still somewhat involved.

This description does not reflect my personal experience at all. 

And speaking from my view of the organization more generally (not just my direct personal experience): Several bullet points seem flatly false to me. Many of the bullet points have some grain of truth to them, in the sense that they refer to or touch on real things that happened at the org, but then depart wildly from my understanding of events, or (according to me) mischaracterize / distort things severely.

I could go through and respond in more detail, point by point, if that is really necessary, but I would prefer not to do that, since it seems like a lot of exhausting work.

As a sort of free sample / downpayment: 

  • At least four people who did not listen to Michael's pitch about societal corruption and worked in some capacity with the CFAR/MIRI team had psychotic episodes.

I don't know who this is referring to. To my knowledge 0 people who are or have been staff at CFAR had a psychotic episode either during or after working at CFAR.

  • Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutional encouragement, to the rank-and-file. This makes it highly distressing that Michael is being singled out for his drug advocacy by people defending CFAR.

First of all, I think the use of "rank-and-file" throughout this comment is misleading to the point of being dishonest. CFAR has always been a small organization of no more than 10 or 11 people, often flexibly doing multiple roles. The explicit organizational structure involved people having different "hierarchical" relationships depending on context.

In general, different people lead different projects, and the rest of the staff would take "subordinate" roles in those projects. That is, if Elizabeth is leading a workshop, she would delegate specific responsibilities to me as one of her workshop staff. But in a different context, where I'm leading a project, I might delegate to her, and I might have the final say. (At one point this was an official, structural policy, with a hierarchy of reporting mapped out on a spreadsheet, but for most of the time I've been there it has been much more organic than that.)

But these hierarchical arrangements are transient and do not at all dominate the experience of working for CFAR. Mostly we are and have been a group of pretty independent contributors, with different views about x-risk and rationality and what-CFAR-is-about, who collaborate on specific workshops and (in a somewhat more diffuse way) in maintaining the organization. There is not anything like the hierarchy you typically see in larger organizations, which makes the frequent use of the term "rank and file" seem out of place and disingenuous, to me.

Certainly, Anna was always in a leadership role, in the sense that the staff respected her greatly, and were often willing to defer to her, and at most times there was an Executive Director (ED) in addition to Anna. 

That said, I don't think Anna or either of the EDs ever confided to me, even in private, that they had taken psychedelics. I certainly didn't feel pressured to do psychedelics, and I don't see how that practice could have spread by imitation, given that it was never discussed, much less modeled. And there was not anything like "institutional encouragement".

The only conversations I remember having about psychedelic drugs are the conversations in which we were told that it was one of the topics that we were not to discuss with workshop participants, and a conversation in which Anna strongly stated that psychedelics were destabilizing and implied that they were...generally bad, or at least that being reckless with them was really bad.

Personally, I have never taken any psychoactive drugs aside from nicotine (and some experimentation with caffeine and modafinil, once). This stance was generally respected by CFAR staff. Occasionally, some people (not Anna or either ED) expressed curiosity about or gently ribbed me about my hard-line stance of not drinking alcohol, but in a way that was friendly and respectful of my boundaries. My impression is that Anna more-or-less approves of my stance on drugs, without endorsing it as the only or obvious stance.

  • Debugging sessions with Anna and with other members of the leadership was nigh unavoidable and asymmetric, meaning that while the leadership could avoid getting debugged it was almost impossible to do so as a rank-and-file member. Sometimes Anna described her process as "implanting an engine of desperation" within the people she was debugging deeply. This obviously had lots of ill psychological effects on the people involved, but some of them did seem to find a deeper kind of motivation.

This is false, or at minimum is overly general, in that it does not resemble my experience at all. 

My experience: 

I could and can easily avoid debugging sessions with Anna. Every interaction that I've had with her has been consensual, and she has, to my memory, always respected my boundaries, when I had had enough, or was too tired, or the topic was too sensitive, or whatever. In general, if I say that I don't want to talk about something, people at CFAR respect that. They might offer care or help in case I decided I wanted it, but then they would leave me alone. (Most of the debugging, etc., conversations that I had at CFAR, I explicitly sought out.)

This also didn't happen that frequently. While I've had lots of conversations with Anna, I estimate I've had deep "soulful" conversations, or conversations in which she was explicitly teaching me a mental technique...around once every 4 months, on average? 

Also, though it has happened somewhat more rarely, I have participated in debugging style conversations with Anna where I was in the "debugger" role.

(By the way, in CFAR's context, the "debugger" role is explicitly a role of assistance / midwifery, of helping a person get traction and understanding on some problem, rather than an active role of doing something to or intervening on the person being debugged.

Though I admit that this can still be a role with a lot of power and influence, especially in cases where there is an existing power or status differential. I do think that early in my experience with CFAR, I was too willing to defer to Anna about stuff in general, and might make big changes in my personal direction at her suggestion, despite not really having an inside view of why I should prefer that direction. She and I would both agree, today, that this is bad, though I don't consider myself to have been majorly harmed by it. I also think it is not that unusual. Young people are often quite influenced by role models that they are impressed by, often without clear-to-them reasons.)

I have never heard the phrase "engine of desperation" before today, though it is true that there was a period in which Anna was interested in a kind of "quiet desperation" that she thought was an effective place to think and act from.

I am aware of some cases of Anna debugging with CFAR staff that seem somewhat more fraught than my own situation, but from what I know of those, they are badly characterized by the above bullet point.

 

I could go on, and I will if that's helpful. I think my reaction to these first few bullet points is a broadly representative sample.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-19T15:56:30.422Z · LW(p) · GW(p)

I endorse Eli's commentary.

comment by AnnaSalamon · 2021-10-19T11:47:04.167Z · LW(p) · GW(p)

Thank you for adding your detailed take/observations.

My own take on some of the details of CFAR that’re discussed in your comment:

Debugging sessions with Anna and with other members of the leadership was nigh unavoidable and asymmetric, meaning that while the leadership could avoid getting debugged it was almost impossible to do so as a rank-and-file member. Sometimes Anna described her process as "implanting an engine of desperation" within the people she was debugging deeply. This obviously had lots of ill psychological effects on the people involved, but some of them did seem to find a deeper kind of motivation.

I think there were serious problems here, though our estimates of the frequencies might differ. To describe the overall situation in detail:

  • I often got debugging help from other members of CFAR, but, as noted in the quote, it was voluntary. I picked when and about what and did not feel pressure to do so.
  • I can think of at least three people at CFAR who had a lot of debugging sort of forced on them (visibly expected as part of their job set-up or of check-in meetings or similar; they didn’t make clear complaints but that is still “sort of forced”), in ways that were large and that seem to me clearly not okay in hindsight. I think lots of other people mostly did not experience this. There are a fair number of people about whom I am not sure or would make an in-between guess. To be clear, I think this was bad (predictably harmful, in ways I didn’t quite get at the time but that e.g. standard ethical guidelines in therapy have long known about), and I regret it and intend to avoid “people doing extensive debugging of those they have direct power over” contexts going forward.
  • I believe this sort of problem was more present in the early years, and less true as CFAR became older, better structured, somewhat “more professional”, and less centered around me. In particular, I think Pete’s becoming ED helped quite a bit. I also think the current regime (“holocracy”) has basically none of this, and is structured so as to predictably have basically none of this -- predictably, since there’s not much in the way of power imbalances now.
  • It’s plausible I’m wrong about how much of this happened, and how bad it was, in different eras. In particular, it is easy for those in power (e.g., me) to underestimate aspects of how bad it is not to have power; and I did not do much to try to work around the natural blindspot. If anyone wants to undertake a survey of CFAR’s past and present staff on this point (ideally someone folks know and can accurately trust to maintain their anonymity while aggregating their data, say, and then posting the results to LW), I’d be glad to get email addresses for CFAR’s past and present staff for the purpose.
  • I’m sure I did not describe my process as “implanting an engine of desperation”; I don’t remember that and it doesn’t seem like a way I would choose to describe what I was doing. “Implanting” especially doesn’t. As Eli notes (this hadn’t occurred to me, but might be what you’re thinking of?), I did talk some about trying to get in touch with one’s “quiet desperation”, and referenced Pink Floyd’s song “Time” and “the mass of men lead lives of quiet desperation” and developed concepts around that; but this was about accessing a thing that was already there, not "implanting" a thing. I also led many people in “internal double cruxes around existential risk”, which often caused fairly big reactions as people viscerally noticed “we might all die.”

Relatedly, the organization uses a technique called goal factoring during debugging which was in large part inspired by Geoff Anders' Connection Theory and was actually taught by Geoff at CFAR workshops at some point. This means that CFAR debugging in many ways resembles Leverage's debugging and the similarity in naming isn't just a coincidence of terms.

I disagree with this point overall. Goal-Factoring was first called “use fungibility”, a technique I taught within a class called “microeconomics 1” at the CFAR 2012 minicamps prior to Geoff doing any teaching. It was also discussed at times in some form at the old SingInst visiting fellows program, IIRC.
Geoff developed it, and taught it at many CFAR workshops in early years (2013-2014, I think). The choice that it was Goal-Factoring that Geoff taught (was he asked to teach it? did he want to teach it? I don’t actually remember; probably both) had, I think, partly to do with its resemblance to the beginning/repeated basic move in Connection Theory.

No one at CFAR was required to use the double-crux conversational technique for reaching agreement, but if a rank-and-file member refused to they were treated as if they were being intellectually dishonest, while if a leader refused to they were just exercising their right to avoid double-cruxing. While I believe the technique is epistemically beneficial, the uneven demands on when it is used biases outcomes of conversations.

My guess is that there were asymmetries like this, and that they were important, and that they were not worse than most organizations (though that’s really not the right benchmark). Insofar as you have experience at other organizations (e.g. mainstream tech companies or whatnot), or have friends with such experience who you can ask questions of, I am curious how you think they compare.

On my own list of “things I would do really differently if I was back in 2012 starting CFAR again”, the top-ranked item is probably:

  • Share information widely among staff, rather than (mostly unconsciously/not-that-endorsedly) using lack-of-information-sharing to try to control people and outcomes.
  • Do consider myself to have some duty to explain decisions and reply to questions. Not “before acting”, because the show must go on and attempts to reach consensus would be endless. And not “with others as an authority that can prevent me from acting if they don’t agree.” But yes with a sincere attempt to communicate my actual beliefs and causes of actions, and to hear others’ replies, insofar as time permits.

I don’t think I did worse than typical organizations in the wider world, on the above points.

I’m honestly uncertain how much this is/isn’t related to the quoted complaint.

There were required sessions of a social/relational practice called circling (which kind of has a cult of its own). It should be noted that circling as a practice is meant to be egalitarian and symmetric, but circling within the context of CFAR had a weird power dynamic because subordinates would circle with the organizational leaders. The whole point of circling is to create a state of emotional vulnerability and openness in the person who is being circled. This often required rank-and-file members to be emotionally vulnerable to the leadership who perhaps didn't actually have their best interests at heart.

Duncan’s reply here is probably more accurate to the actual situation at CFAR than mine would be. (I wrote much of the previous paragraphs before seeing his, but endorsing Duncan’s on this here seems best.) If Pete wants to weigh in I would also take his perspective quite seriously here. I don’t quite remember some of the details.

As Duncan noted, “creating a state of emotional vulnerability and openness” is really not supposed to be the point of circling, but it is a thing that happens pretty often and that a person might not know how to avoid.

The point of circling IMO is to break all the fourth walls that conversations often skirt around, let the subtext or manner in which the conversation is being done be made explicit text, and let it all thereby be looked at together.

A different thing that I in hindsight think was an error (that I already had on my explicit list of “things to do differently going forward”, and had mentioned in this light to a few people) was using circling in the way we did at AIRCS workshops, where some folks were there to try to get jobs. My current view, as mentioned a bit above, is that something pretty powerfully bad sometimes happens when a person accesses bits of their insides (in the way that e.g. therapy or some self-help techniques lead people to) while also believing they need to please an external party who is looking at them and has power over them.

(My guess is that well-facilitated circling is fine at AIRCS-like programs that are less directly recruiting-oriented. Also that circling at AIRCS had huge upsides. This is a can of worms I don’t plan to go into right now, in the middle of this comment reply, but flagging it to make my above paragraph not overgeneralized-from.)

The overall effect of all this debugging and circling was that it was hard to maintain the privacy and integrity of your mind if you were a rank-and-file employee at CFAR.

I believe this was your experience, and am sorry. My non-confident guess is that some others experienced this and most didn’t, and that the impact on folks’ mental privacy was considerably more invasive than it would’ve been at a standard workplace, and that the impact on folks’ integrity was probably less bad than my guess at many mainstream workplaces’ impact, but still a lot worse than the CFAR we ought to aim for.

Personally I am not much trying to maintain the privacy of my own mind at this point, but I am certainly trying to maintain its integrity, and I think being debugged by people with power over me would not be good for that.

The longer you stayed with the organization, the more it felt like your family and friends on the outside could not understand the problems facing the world, because they lacked access to the reasoning tools and intellectual leaders you had access to. This led to a deep sense of alienation from the rest of society. Team members ended up spending most of their time around other members and looking down on outsiders as "normies".

This wasn’t my experience at all, personally. I did have some feeling of distance when I first started caring about AI risk in ~2008, but it didn’t get worse across CFAR. I also stayed in a lot of contact with folks outside the CFAR / EA / rationalist / AI risk spheres through almost all of it. I don’t think I looked down on outsiders.

There was a rarity narrative around being part of the only organization trying to "actually figure things out", ignoring other organizations in the ecosystem working on AI safety and rationality and other communities with epistemic merit. CFAR/MIRI perpetuated the sense that there was nowhere worthwhile to go if you left the organization.

I thought CFAR and MIRI were part of a rare and important thing, but I did not think CFAR (nor CFAR + MIRI) was the only thing to matter. I do think there’s some truth in the “rarity narrative” claim, at CFAR, mostly via me and to a much smaller extent some others at CFAR having some of this view of MIRI.

There was a rarity narrative around the sharpness of Anna's critical thinking skills, which made it so that if Anna knew everything you knew about a concern and disagreed with you, there was a lot of social pressure to defer to her judgment.

I agree that this happened and that it was a problem. I didn’t consciously intend to set this up, but my guess is that I did a bunch of things to cause it anyhow. In particular, there’s a certain way I used to sort of take the ground out from under people when we talked, that I think contributed to this. (I used to often do something like: stay cagey about my own opinions; listen carefully to how my interlocutor was modeling the world; show bits of evidence that refuted some of their assumptions; listen to their new model; repeat; … without showing my work. And then they would defer to me, instead of having stubborn opinions I didn’t know how to shift, which on some level was what I wanted.)

People at current-CFAR respect my views still, but it actually feels way healthier to me now. Partly because I’m letting my own views and their causes be more visible, which I think makes it easier to respond to. And because I somehow have less of a feeling of needing to control what other people think or do via changing their views.

(I haven't checked the above much against others' perceptions, so would be curious for anyone from current or past CFAR with a take.)

There was rampant use of narrative warfare (called "narrativemancy" within the organization) by leadership to cast aspersions and blame on employees and each other. There was frequent non-ironic use of magical and narrative schemas which involved comparing situations to fairy-tales or myths and then drawing conclusions about those situations with high confidence. The narrativemancer would operate by casting various members of the group into roles and then using the narrative arc of the story to make predictions about how the relationship dynamics of the people involved would play out. There were usually obvious controlling motives behind the narrative framings being employed, but the framings were hard to escape for most employees.

I believe this was your experience, mostly because I’m pretty sure I know who you are (sorry; I didn’t mean to know and won’t make it public) and I can think of at least one over-the-top (but sincere) conversation you could reasonably describe at least sort of this way (except for the “with high confidence”, I guess, and the "frequent"; and some other bits), plus some repeated conflicts. I don’t think this was a common experience, or that it happened much at all (or at all at all?) in contexts not involving you, but it’s possible I’m being an idiot here somehow in which case someone should speak up. Which I guess is to say that the above bullet point seems to me, from my experiences/observations, to be mostly or almost-entirely false, but that I think you’re describing your experiences and guesses about the place accurately and that I appreciate you speaking up.

[all the other bullet points] I agree with parts and disagree with parts, but they seemed mostly less interesting than the above.

Anyhow, thanks for writing, and I’m sorry you had bad experiences at CFAR, especially about the fairly substantial parts of the above bad parts that were my fault.

I expect my reply will accidentally make some true points you’re making harder to see (as well as hopefully adding light to some other parts), and I hope you’ll push back in those places.

Replies from: AnnaSalamon, Duncan_Sabien, Viliam
comment by AnnaSalamon · 2021-10-19T11:53:54.681Z · LW(p) · GW(p)

Related to my reply to PhoenixFriend (in the parent comment), but hopping meta from it:

I have a question for whoever out there thinks they know how the etiquette of this kind of conversation should go. I had a first draft of my reply to PhoenixFriend, where I … basically tried to err on the side of being welcoming, looking for and affirming the elements of truth I could hear in what PhoenixFriend had written, and sort of emphasizing those elements more than my also-real disagreements. I ran it by a CFAR colleague at my colleague’s request, who said something like “look, I think your reply is pretty misleading; you should be louder and clearer about the ways your best guess about what happened differed from what’s described in PhoenixFriend’s comment. Especially since I and others at CFAR have our names on the organization too, so if you phrase things in ways that’ll cause strangers who’re skim-reading to guess that things at CFAR were worse than they were, you’ll inaccurately and unjustly mess with other peoples’ reputations too.” (Paraphrased.)

So then I went back and made my comments more disagreeable and full of details about where my and PhoenixFriend’s models differ. (Though probably still less than the amount that would've fully addressed my colleague's complaints.)

This… seems better in that it addresses my colleague’s pretty reasonable desire, but worse in that it is not welcoming to someone who is trying to share info and is probably finding that hard. I am curious if anyone has good thoughts on how this sort of etiquette should go, if we want to have an illuminating, get-it-all-out-there, non-misleading conversation.

Part of why I’m worried is that it seems to me pretty easy for people who basically think the existing organizations are good, and also that mainstream workplaces are non-damaging and so on, to upvote/downvote each new datum based on those priors plus a (sane and sensible) desire to avoid hurting others’ feelings and reputations without due cause, etc., in ways that, despite their reasonableness, may make it hard for real and needed conversations that are contrary to our current patterns of seeing to get started.

For example, I think PhoenixFriend indeed saw some real things at CFAR that many of those downvoting their comment did not see and mistakenly wouldn’t expect to see, but that also many of the details of PhoenixFriend’s comment are off, partly maybe because they were mis-generalizing from their experiences and partly because it’s hard to name things exactly (especially to people who have a bit of an incentive to mishear).

(Also, to try briefly and poorly to spell out why I’m rooting for a “get it all out on the table” conversation, and not just a more limited “hear and acknowledge the mostly blatant/known harms, correct those where possible, and leave the rest of our reputation intact” conversation: basically, I think there’s a bunch of built-up “technical debt”, in the form of confusion and mistrust and trying-not-to-talk-about-particular-things-because-others-will-form-“unreasonable”-conclusions-if-we-do and who-knows-why-we-do-that-but-we-do-so-there’s-probably-a-reason, that I’m hoping gets cleared out by the long and IMO relatively high-quality and contentful conversation that’s been happening so far. I want more of that if we can get it. I want culture and groups to be able to build around here without building on top of technical debt. I also want information about how organizations do/don’t work well, and, in terms of means of acquiring this information, I much prefer bad-looking conversations on LW to wasting another five years doing it wrong.)

Replies from: TekhneMakre, Duncan_Sabien
comment by TekhneMakre · 2021-10-19T13:32:33.852Z · LW(p) · GW(p)

Personally I am not much trying to maintain the privacy of my own mind at this point,

This sounds like an extreme and surprising statement. I wrote out some clarifying questions like "what do you mean by privacy here", but maybe it'd be better to just say:

I think it strikes me funny because it sounds sort of like a PR statement. And it sounds like a statement that could set up a sort of "iterations of the Matrix"-like effect. Where, you say "ok now I want to clear out all the miasma, for real", and then you and your collaborators do a pretty good job at that; but also, something's been lost or never gained, namely the logical common knowledge that there's probably-ongoing, probably difficult to see dynamics that give rise to the miasma of {ungrounded shared narrative, information cascades, collective blindspots, deferrals, circular deferrals, misplaced/miscalibrated trust, etc. ??}. In other words, since these things happened in a context where you and your collaborators were already using reflection, introspection, reasoning, communication, etc., we learn that the ongoing accumulation of miasma is a more permanent state of affairs, and this should be common knowledge. Common knowledge would for example help with people being able to bring up information about these dynamics, and expect their information to be put to good use.

(I notice an analogy between iterations of the Matrix and economic boom-bust cycles.)

“get it all out on the table” conversation

"technical debt" [...] I’m hoping gets cleared out

These statements also seem to imply a framing that potentially has the (presumably unintentional) effect of subtly undermining the common knowledge of ongoing miasma-or-whatever. Like, it sort of directs attention to the content but not the generator, or something; like, one could go through all the "stuff" and then one would be done.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-19T15:26:35.530Z · LW(p) · GW(p)

This sounds like an extreme and surprising statement.

Well, maybe I phrased it poorly; I don't think what I'm doing is extreme; "much" is doing a bunch of work in my "I am not much trying to..." sentence.

I mean, there's plenty I don't want to share, like a normal person. I have confidential info of other people's that I'm committed to not sharing, and plenty of my own stuff that I am private about for whatever reason. But in terms of rough structural properties of my mind, or most of my beliefs, I'm not much trying for privacy. Like when I imagine being in a context where a bunch of circling is happening or something (circling allows silence/ignoring questions/etc.; still, people sometimes complain that facial expressions leak through and they don't know how to avoid it), I'm not personally like "I need my privacy though." And I've updated some toward sharing more compared to what I used to do.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-20T08:38:43.037Z · LW(p) · GW(p)

Ok, thanks for clarifying. (To reiterate my later point, since it sounds like [LW(p) · GW(p)] you're considering the "narrative pyramid schemes" hypothesis: I think there is not common knowledge that narrative pyramid schemes happen, and that common knowledge might help people continuously and across contexts share more information, especially information that is pulling against the pyramid schemes, by giving them more of a true expectation that they'll be heard by a something-maximizing person rather than a narrative-executer [LW · GW]).

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-19T21:54:09.867Z · LW(p) · GW(p)

I have concrete thoughts about the specific etiquette of such conversations (they're not off the cuff; I've been thinking more-or-less continuously about this sort of thing for about eight years now).

However, I'm going to hold off for a bit because:

a) Like Anna, I was a part of the dynamics surrounding PhoenixFriend's experience, and so I don't want to seize the reins

b) I've also had a hard time coordinating with Anna on conversational norms and practices, both while at CFAR and recently

... so I sort of want to not-pretend-I-don't-have-models-and-opinions-here (I do) but also do something like "wait several days and let other people propose things first" or "wait until directly asked, having made it clear that I have thoughts if people want them" or something.

Replies from: Beckeck
comment by Beckeck · 2021-10-19T23:09:10.691Z · LW(p) · GW(p)

link to the essay if/when you write it? 

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-19T16:00:15.536Z · LW(p) · GW(p)

I endorse Anna's commentary.

comment by Viliam · 2021-10-19T12:35:08.828Z · LW(p) · GW(p)

Goal-Factoring was first called “use fungibility”, a technique I taught within a class called “microeconomics 1” at the CFAR 2012 minicamps prior to Geoff doing any teaching.

As a participant of Rationality Minicamp in 2012, I confirm this. Actually, found the old textbook, look here!

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-19T12:51:35.101Z · LW(p) · GW(p)

Okay, so, that old textbook does not look like a picture of goal-factoring, at least not on that page. But I typed "goal-factoring" into my google drive and pulled up these old notes that used the word while designing classes for the 2012 minicamps. A rabbit hole, but one I enjoyed, so maybe others will too.

comment by Davis_Kingsley · 2021-10-20T23:29:44.975Z · LW(p) · GW(p)

I worked for CFAR full-time from 2014 until mid-to-late 2016 and have continued working as a part-time employee or frequent contractor since. I'm sorry this was your experience. That said, it really does not mesh that much with what I've experienced and some of it is almost the opposite of the impressions that I got. Some brief examples:

  • My experience was that CFAR if anything should have used its techniques internally much more. Double crux for instance felt like it should have been used internally far more than it actually was -- one thing that vexed me about CFAR was a sense that there were persistent unresolved major strategic disagreements between staff members that the organization did not seem to prioritize resolving, where I think double crux would have helped.

    (I'm not talking about personal disagreements but rather things like "should X set of classes be in the workshop or not?")
  • Similarly, goal factoring didn't see much internal use (I again think it should have been used more!) and Leverage-style "charting" strikes me as really a very different thing from the way CFAR used this sort of stuff.
  • There was generally little internal "debugging" at all, which contrary to the previous two cases I think is mostly correct -- the environment of having your colleagues "debug" you seems pretty weird and questionable. I do think there was at least some of this, but I don't think it was pervasive or mandatory in the organization and I mostly avoided it.
  • Far from spending all my time with team members outside of work, I think I spent most of my leisure and social time with people from other groups, many outside the rationalist community. To some degree I (and I think some others) would have liked for the staff to be tighter-knit, but that wasn't really the culture. Most CFAR staff members did not necessarily know much about my personal life and I did not know much about theirs.
  • I do not much venerate the founding team or consider them to be ultimate masters or whatever. There was a period early on when I was first working there where I sort of assumed everyone was more advanced than they actually were, but this faded with time. I think what you might consider "lionizing parables" I might consider "examples of people using the techniques in their own lives". Here is an example of this type I've given many times at workshops as part of the TAPs class; the reader can decide whether it is a "lionizing parable" or not (note: exact wording may vary):
    • It can be useful to practice TAPs by actually physically practicing! I believe <a previous instructor's name> once wanted to set up a TAP involving something they wanted to do after getting out of bed in the morning, so they actually turned off all the lights in their room, got into bed as if they were sleeping, set an alarm to go off as if it were the morning, then waited in bed for the alarm to go off, got up, did the action they were practicing... and then set the whole thing up again and repeated!
  • I'm very confused by what you deem "narrativemancy" here. I have encountered the term before but I don't think it was intentionally taught as a CFAR technique or used internally as an explicit technique. IIRC the term also had at least somewhat negative valence.

I should clarify that I have been less involved in "day-to-day" CFAR stuff since mid-late 2016, though I have been at I believe a large majority of mainline workshops (I think I'm one of the most active instructors). It's possible that the things you describe were occurring but in ways that I didn't see. That said, they really don't match with my picture of what working at CFAR was like.

comment by Adam Scholl (adam_scholl) · 2021-10-19T06:43:20.224Z · LW(p) · GW(p)

I've worked at CFAR for most of the last 5 years, and this comment strikes me as so wildly incorrect and misleading that I have trouble believing it was in fact written by a current CFAR employee. Would you be willing to verify your identity with some mutually-trusted 3rd party, who can confirm your report here? Ben Pace has offered to do this for people in the past.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-19T06:46:30.601Z · LW(p) · GW(p)

I don't know if you trust me, but I confirmed privately that this person is a past or present CFAR employee.

Replies from: adam_scholl
comment by Adam Scholl (adam_scholl) · 2021-10-19T07:11:40.182Z · LW(p) · GW(p)

Sure, but they led with "I'm a CFAR employee," which suggests they are a CFAR employee. Is this true?

Replies from: Unreal
comment by Unreal · 2021-10-19T14:00:53.269Z · LW(p) · GW(p)

It sounds like they meant they used to work at CFAR, not that they currently do. 

Also given the very small number of people who work at CFAR currently, it would be very hard for this person to retain anonymity with that qualifier so... 

I think it's safe to assume they were a past employee... but they should probably update their comment to make that clearer because I was also perplexed by their specific phrasing. 

Replies from: steven0461, adam_scholl
comment by steven0461 · 2021-10-19T21:26:05.525Z · LW(p) · GW(p)

It sounds like they meant they used to work at CFAR, not that they currently do.

The interpretation of "I'm a CFAR employee commenting anonymously to avoid retribution" as "I'm not a CFAR employee, but used to be one" seems to me to be sufficiently strained and non-obvious that we should infer from the commenter's choice not to use clearer language that they should be treated as having deliberately intended for readers to believe that they're a current CFAR employee.

comment by Adam Scholl (adam_scholl) · 2021-10-19T22:43:19.167Z · LW(p) · GW(p)

I like the local discourse norm of erring on the side of assuming good faith, but like steven0461, in this case I have trouble believing this was misleading by accident. Given how obviously false, or at least seriously misleading, many of these claims are (as I think accurately described by Anna/Duncan/Eli), my lead hypothesis is that this post was written by a former staff member, who was posing as a current staff member to make the critique seem more damning/informed, who had some ax to grind and was willing to engage in deception to get it ground, or something like that...?

Replies from: PeterMcCluskey, Raemon, jessica.liu.taylor
comment by PeterMcCluskey · 2021-10-20T21:09:51.373Z · LW(p) · GW(p)

It seems misleading in a non-accidental way, but it seems fairly plausible that their main motive was to obscure their identity.

comment by Raemon · 2021-10-19T22:49:56.117Z · LW(p) · GW(p)

FYI I just interpreted it to mean "former staff member" automatically. (This is biased by my belief that CFAR has very few current staff members so of course it was highly unlikely to be one, but I don't think it was an unreasonably weird reading)

comment by jessicata (jessica.liu.taylor) · 2021-10-19T23:18:45.189Z · LW(p) · GW(p)

PhoenixFriend edited the comment.

comment by jimrandomh · 2021-10-19T21:51:12.568Z · LW(p) · GW(p)

Relatedly, the organization uses a technique called goal factoring during debugging which was in large part inspired by Geoff Anders' Connection Theory and was actually taught by Geoff at CFAR workshops at some point. This means that CFAR debugging in many ways resembles Leverage's debugging and the similarity in naming isn't just a coincidence of terms.

While it's true that there's some structural similarity between Goal Factoring and Connection Theory, and Geoff did teach Goal Factoring at some workshops (including one I attended), these techniques are more different than they are similar. In particular, goal factoring is taught as a solo technique for introspecting on what you want in a specific area, while Connection Theory is a therapy-like technique in which a facilitator tries to comprehensively catalog someone's values across multiple sessions going 10+ hours.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-19T21:56:07.864Z · LW(p) · GW(p)

Thanks for this reply, Jim; I winced a bit at my own "no resemblance whatsoever" and your comment is clearer and more accurate.

comment by Aella · 2021-10-22T19:08:17.222Z · LW(p) · GW(p)

I don't have an object-level opinion formed on this yet, but want to +1 this as more of the kind of description I find interesting, and isn't subject to the same critiques I had with the original post.

comment by Scott Alexander (Yvain) · 2021-10-21T09:41:55.431Z · LW(p) · GW(p)

Thanks for this.

I'm interested in figuring out more what's going on here - how do you feel about emailing me, hashing out the privacy issues, and, if we can get them hashed out, you telling me the four people you're thinking of who had psychotic episodes?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2022-07-03T23:17:34.821Z · LW(p) · GW(p)

Update: I interviewed many of the people involved and feel like I understand the situation better.

My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history, or took recreational drugs at doses that would explain their psychotic episodes.

Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeable at the moment they become psychotic. But aside from one case where he recommended someone take a drug that made a bad situation slightly worse, and the general Berkeley rationalist scene that he (and I and everyone else here) is a part of having lots of crazy ideas that are psychologically stressful, I no longer think he is a major cause.

While interviewing the people involved, I did get some additional reasons to worry that he uses cult-y high-pressure recruitment tactics on people he wants things from, in ways that make me continue to be nervous about the effect he *could* have on people. But the original claim I made that I knew of specific cases of psychosis which he substantially helped precipitate turned out to be wrong, and I apologize to him and to Jessica. Jessica's later post https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards [LW · GW] explained in more detail what happened to her, including the role of MIRI and of Michael and his friends, and everything she said there matches what I found too. Insofar as anything I wrote above produces impressions that differs from her explanation, assume that she is right and I am wrong.

Since the interviews involve a lot of private people's private details, I won't be posting anything more substantial than this publicly without a lot of thoughts and discussion. If for some reason this is important to you, let me know and I can send you a more detailed summary of my thoughts.

I'm deliberately leaving this comment in this obscure place for now while I talk to Michael and Jessica about whether they would prefer a more public apology that also brings all of this back to people's attention again.

Replies from: iceman, Benito, Richard_Kennaway
comment by iceman · 2022-07-09T14:52:12.771Z · LW(p) · GW(p)

I want to summarize what's happened from the point of view of a long time MIRI donor and supporter:

My primary takeaway from the original post was that MIRI/CFAR had cultish social dynamics, that this led to the spread of short-term AI timelines in excess of the evidence, and that voices such as Vassar's were marginalized (because listening to other arguments would cause them to "downvote Eliezer in his head"). The actual important parts of this whole story are a) the rationalistic health of these organizations, b) the (possibly improper) memetic spread of the short timelines narrative.

It has been months since the OP, but my recollection is that Jessica posted this memoir, got a ton of upvotes, then you posted your comment claiming that being around Vassar induced psychosis [LW(p) · GW(p)], the karma on Jessica's post dropped in half [LW(p) · GW(p)], while your comment that Vassar had magical psychosis-inducing powers is currently sitting at almost five and a half times the karma of the OP. At this point, things became mostly derailed into psychodrama about Vassar, drugs, whether transgender people have higher rates of psychosis, et cetera, instead of discussion about the health of these organizations and how short AI timelines came to be the dominant assumption in this community.

I do not actually care about the Vassar matter per se. I think you should try to make amends with him and Jessica, and I trust that you will attempt to do so. But all the personal drama is inconsequential next to the question of whether MIRI and CFAR have good epistemics and how the short timelines meme became widely believed. I would ask that any amends you try to make also address the fact that your comment derailed these very vital discussions.

comment by Ben Pace (Benito) · 2022-07-03T23:28:07.605Z · LW(p) · GW(p)

Thanks so much for talking to the folks involved and writing this note on your conclusions, I really appreciate that someone did this (who I trust to actually try to find out what happened and report their conclusions accurately).

comment by Richard_Kennaway · 2022-07-04T07:38:15.274Z · LW(p) · GW(p)

My main conclusion is that I was wrong about Michael making people psychotic.

...

Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeable at the moment they become psychotic.

This does not contradict "Michael making people psychotic". A bad therapist is not excused by the fact that his patients were already sick when they came to him.

Disclaimer: I do not know any of the people involved and have had no personal dealings with any of them.

comment by Vladimir_Nesov · 2021-10-19T12:55:47.747Z · LW(p) · GW(p)

outsiders as "normies"

I've seen the term used a few times on LW. Despite the denotational usefulness, it's very hard to keep it from connotationally being a slur, not without something like there being an existing slur and the new term getting defined to be its denotational non-slur counterpart (how it actually sounds also doesn't help).

So it's a good principle to not give it power by using it (at least in public).

comment by Unreal · 2021-10-19T14:43:59.491Z · LW(p) · GW(p)

You contributing to this conversation seems good, PhoenixFriend. Thanks for saying your piece. 

comment by jessicata (jessica.liu.taylor) · 2021-10-22T05:16:58.957Z · LW(p) · GW(p)

Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutional encouragement, to the rank-and-file. This makes it highly distressing that Michael is being singled out for his drug advocacy by people defending CFAR.

I remember someone who lived in Berkeley in 2016-2017, who wasn't a CFAR employee but was definitely talking extensively with CFAR people (collaborating on rationality techniques/instruction?) and had gone to a CFAR workshop, telling me something along the lines of "CFAR can't legally recommend that people try LSD, but..."; I don't remember what followed the "but", I don't think the specific wording was even intended to be remembered (to preserve plausible deniability?), but it gave me the impression that CFAR people may have recommended it if it were legal to do so, as implied by the "but". This was before I was talking with Michael Vassar extensively. This is some amount of Bayesian evidence for the above.

Replies from: adam_scholl
comment by Adam Scholl (adam_scholl) · 2021-10-22T09:45:56.930Z · LW(p) · GW(p)

It's true some CFAR staff have used psychedelics, and I'm sure they've sometimes mentioned that in private conversation. But CFAR as an institution never advocated psychedelic use, and that wasn't just because it was illegal, it was because (and our mentorship and instructor trainings emphasize this) psychedelics often harm people.

Replies from: Unreal
comment by Unreal · 2021-10-22T10:32:30.434Z · LW(p) · GW(p)

I'd be interested in hearing from someone who was around CFAR in the first few years to double check that the same norm was in place. I wasn't around before 2015. 

comment by Benquo · 2021-10-19T21:04:41.822Z · LW(p) · GW(p)

I had significant involvement with CFAR 2014-2015 and this is consistent with my impression.

Replies from: Davis_Kingsley
comment by Davis_Kingsley · 2021-10-20T02:02:35.861Z · LW(p) · GW(p)

What does "significant involvement" mean here? I worked for CFAR full-time during that period and to the best of my knowledge you did not work there -- I believe for some of that time you were dating someone who worked there, is that what you mean by significant involvement?

Replies from: Benquo
comment by Benquo · 2021-10-20T07:00:23.790Z · LW(p) · GW(p)

I remember being a "guest instructor" at one workshop, and talking about curriculum design with Anna and Kenzi. I was also at a lot of official and unofficial CFAR retreats/workshops/etc. I don't think I participated in much of the normal/official CFAR process, though I did attend the "train the trainers workshop", and in this range of contexts saw some of how decisions were made, how workshops were run, how people related to each other at parties.

As I recall it, what I observed first-hand and was told second-hand at the time confirms bullets 2, 4, and 6 of the top-level comment. Many of the others are about how people felt, and are consistent with what people I knew reported at the time. Nothing in the top-level comment seems dissonant with what I observed.

It seems like there was a lot of fragmentation (which is why we mostly didn't interact). I felt bad about exercising (a small amount of) unaccountable influence at the time through these mechanisms, but I was confused about so much relative to the rate at which I was willing to ask questions that I didn't end up asking about the info-siloing. In hindsight it seems intended to keep the true nature of governance obscure and therefore unaccountable. I did see or at least hear reports of Anna pretending to give different people authority over things and then intervening if they weren't doing the thing she expected, which is consistent with that hypothesis.

I'm afraid I don't remember a lot of details beyond this, I had a lot going on that year aside from CFAR.

My comment initially said 2014-2016 but IIRC my involvement was much less after 2015 so I edited it.

Replies from: elityre, Davis_Kingsley
comment by Eli Tyre (elityre) · 2021-10-21T06:23:09.295Z · LW(p) · GW(p)

As I recall it, what I observed first-hand and was told second-hand at the time confirms bullets 2, 4, and 6 of the top-level comment.

I would like a lot more elaboration about this, if you can give it. 

Can you say more specifically what you observed?

comment by Davis_Kingsley · 2021-10-21T10:17:57.825Z · LW(p) · GW(p)

Unfortunately I think the working relationship between Anna and Kenzi was exceptionally bad in some ways and I would definitely believe that someone who mostly observed that would assume the organization had some of these problems; however I think this was also a relatively unique situation within the organization.

(I suspect though am not certain that both Anna and Kenzi would affirm that indeed this was an especially bad dynamic.)

With respect to point 2, I do not believe there was major peer pressure at CFAR to use psychedelics and I have never used psychedelics myself. It's possible that there was major peer pressure on other people or it applied to me but I was oblivious to it or whatever, but I'd be surprised.

Psychedelic use was also one of a few things that were heavily discouraged (or maybe banned?) as conversation topics for staff at workshops -- like polyphasic sleep (another heavily discouraged topic), psychedelics were I believe viewed as potentially destabilizing and inappropriate to recommend to participants, plus there are legal issues involved. I personally consider recreational use of psychedelics to be immoral as well.

My comment initially said 2014-2016 but IIRC my involvement was much less after 2015 so I edited it.


Thanks for the clarification, I've edited mine too.

Replies from: Benquo, Benquo
comment by Benquo · 2021-11-21T00:58:41.688Z · LW(p) · GW(p)

What do you see as the main sorts of interventions CFAR was organized around? I feel like this is a "different worlds" thing where I ought to be pretty curious what the whole scene looked like to you, what it seemed like people were up to, what the important activities were, & where progress was being made (or attempted).

Replies from: Davis_Kingsley
comment by Davis_Kingsley · 2021-11-21T11:28:12.132Z · LW(p) · GW(p)

I think that CFAR, at least while I was there full-time from 2014 to sometime in 2016, was heavily focused on running workshops or other programs (like the alumni reunions or the MIRI Summer Fellows program). See for instance my comment here [LW(p) · GW(p)].

Most of what the organization was doing seemed to involve planning and executing workshops or other programs and teaching the existing curriculum. There were some developments and advancements to the curriculum, but they often came from the workshops or something around them (like followups) rather than a systematic development project. For example, Kenzi once took on the lion's share of workshop followups for a time, which led to her coming up with new curriculum based on her sense of what the followup participants were missing even after having attended the workshop.

(In the time before I joined there had been significantly more testing of curriculum etc. outside of workshops, but this seemed to have become less the thing by the time I was there.)

A lot of CFAR's internal focus was on improving operations capacity. There was at one time a narrative that the staff was currently unable to do some of the longer-term development because too much time was spent on last minute scrambles to execute programs, but once operations sufficiently improved, we'd have much more open time to allocate to longer-term development.

I was skeptical of this and I think ultimately vindicated -- CFAR made major improvements to its operations, but this did not lead to systematic research and development emerging, though it did allow for running more programs and doing so more smoothly.

comment by Benquo · 2021-11-21T01:09:09.415Z · LW(p) · GW(p)
comment by CronoDAS · 2021-10-17T17:07:46.279Z · LW(p) · GW(p)

One takeaway I got from this when combined with some other stuff I've read:

Don't do psychedelics. Seriously, they can fuck up your head pretty bad and people who take them and organizations that encourage taking them often end up drifting further and further away from normality and reasonableness until they end up in Cloudcuckooland.

Replies from: Eliezer_Yudkowsky, Kaj_Sotala, iceman, jessica.liu.taylor
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-17T20:30:19.998Z · LW(p) · GW(p)

I'm about ready to propose a group norm against having any subgroups or leaders who tell other people they should take psychedelics.  Maybe they have individually motivated uses - though I get the impression that this is, at best, a high-variance bet with significantly negative expectation.  But the track record of "rationalist-adjacent" subgroups that push the practice internally and would-be leaders who suggest to other people that they do them seems just way too bad.

I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc.  I still think it's not our community business to try to socially prohibit things like that on an individual level by exiling individuals like that from parties, I don't think we have or should have that kind of power over individual behaviors that neither pick pockets nor break legs.  But I think that when there's anything like a subgroup or a leader with those properties we need to be ready to say, "Yeah, that's not a group in good standing with the rest of us, don't go there."  This proposal is not mainly based on the advance theories by which you might suspect or guess that subgroups like that would end badly; it is motivated mainly by my sense of what the actual outcomes have been.

Since implicit subtext can also sometimes be bad for us in social situations, I should be explicit that concern about outcomes of psychedelic advocacy includes Michael Vassar, and concern on woo includes the alleged/reported events at Leverage.

Replies from: RobbBB, pktechgirl, Viliam, Vaniver, ChristianKl, Duncan_Sabien, Unreal, Chris_Leong
comment by Rob Bensinger (RobbBB) · 2021-10-18T01:55:44.136Z · LW(p) · GW(p)

Copying over a related Oct. 13-17 conversation from Facebook:

(context: someone posted a dating ad in a rationalist space where they said they like tarot etc., and rationalists objected)

_____________________________________________

Marie La:  As a cultural side note, most of my woo knowledge (like how to read tarot) has come from the rationalist community, and I wouldn't have learned it otherwise

_____________________________________________

Eliezer Yudkowsky:  @Marie La   Any ideas how we can stop that?

(+1 from Rob B)

_____________________________________________

Marie La:  Idk, it's an introspective technique that works for some people. Doesn't particularly work for me. Sounds like the concern is bad optics / PR rather than efficacy

(+1 from Rob B)

_____________________________________________

Shaked Koplewitz:  @Marie La   optics implies that the concern is with the impression it makes on outsiders, my concern here is the effect on insiders (arguably this is optics too, but a non-central example)

_____________________________________________

Rob Bensinger:  If the concern is optics, either to insiders or outsiders, then it seems vastly weaker to me than if the concern is about epistemic methods or conclusions. (Indeed, it might flip the sign for me.)

The rationality community should be about truth and winning, not about linking ourselves up to whatever is culturally associated with the word "rationality".

The first-order argument for trying weird things and seeing if they work (or just doing them for fun as a sort of game, etc.) makes sense. I'd rather focus on the question of whether that first-order case fails epistemically and/or instrumentally. Also: what does it fail for? Just saying 'woo' doesn't tell us whether, e.g., we should stop using IFS because it isn't normal-sounding enough.

(+1 from Marie L)

_____________________________________________

Marie La:  When we come across someone using weird mind trick X, we should figure out what it does and if we want the results. Being skilled at sorting out good weird mind tricks from bad, regardless of cultural coding, feels like an important rationalist skill.

Tarot is a set of fancy art cards that can be used in many ways, some that encourage magical thinking and some that provide useful introspective access

For the latter, I'd guess it's somewhat useful to some people, similar to the skill of flipping a coin and doing what you want anyway

(+1 from Rob B)

_____________________________________________

Eliezer Yudkowsky:  @Marie La   I disagree and think the woo has proven in empirical practice to be sufficiently destructive to people who can't see the destruction, to reach a level where it should not be tolerated by this group as a future subgroup norm, same as LSD use shouldn't be tolerated by us as a subgroup norm.

(+1 from Rob B, Marie L)

_____________________________________________

Marie La:  I'm interested in seeing more of your reasoning on this. Pointing out the harm model sounds useful to people who can't easily see it (or to the people around them) to help avoid further harm in the future

(+1 from Rob B)

_____________________________________________

Eliezer Yudkowsky:  The #1 reason why I think it's harmful isn't a theory by which I divined it in advance. Though there sure is a very obvious theory whereby the path of sanity is a narrow one and people who step a bit off it in what they fondly imagine to be a controlled way, fall quite a lot further once they're hanging around with crazier people, crazier ideas, and have already decided to let themselves go a little.

The #1 reason why I think it's harmful is the number of times you hear about somebody, or worse, some subgroup, that pushed a little woo on somebody, or offered them some psychedelics, and a few years later you're hearing about how far they went off the deep end. It seems to be destructive in practice and that's a far stronger reason to be wary than the obvious-seeming ways it could be destructive in principle.

(+1 from Anonymous, Rob B)

_____________________________________________

Anonymous:  Agree with this but surely it also matters how often that action seems to have that effect *out of the times it's done* - noting this because my impression (which however I don't have data for) is that psychedelic use might be locally common enough that it's only a small proportion of "rationalists who try LSD" who end up "going off the deep end". Whereas experimenting with woo-y beliefs seems more strongly associated with that kind of trajectory and for that I endorse your conclusion.

(+1 from Eliezer Y)

_____________________________________________

Aella:  @Eliezer Yudkowsky   On phone so thumb words but I notice I have a belief that this is predictable, and thus not dangerous? or rather, it's something like if you're religious and noticed some ppl have been drinking alcohol and then eventually losing their faith, you might be right to be wary of alcohol, but if you know that it's actually the doubt of their faith that *causes* the alcohol drinking, then you wouldn't be concerned if someone drinks alcohol but also isn't doubting their faith.

similarly I have some intuition here that the woo stuff is a symptom and not the cause, and that it's very possible to engage harmlessly with the symptom alone, and is a fine social norm if people can distinguish the alcoholism from doubting your faith.

I do agree seeing woo belief does up my probability they might end up going off the deep end tho

(+1 from Rob B, Eliezer Y)

_____________________________________________

Eliezer Yudkowsky:  My sense of "this seems to be ending very poorly on average" is much stronger for situations in which a Leader or a Discernible Subgroup has formed, who are going up to others and saying "why, you really should try some psychedelics / woo". Or where they wander up to individuals trying that, and put their arm around their shoulders all friendly-like.

Though I suppose that could also be because I'm much less likely to hear about the individual non-social cases even if they end poorly. And indeed, my sense of the individual cases, is that I have heard of a lot more individuals who took psychedelics a few times in a situation devoid of Leaders and Subgroups and nothing bad happened to them; compared to the case with woo, where it feels like I'm more likely to have heard that an individual who tried woo even in a situation without Leaders or Subgroups later went further off the deep end. Which has the very obvious explanation that some people ever do benefit from psychedelics and they are plausibly interesting to a healthy mind willing to risk itself, you can be sane and still try shrooms ever; while the woo thing requires a larger, more willing step off the strict rails of sanity.

But in the case of Subgroups or Leaders, neither woo nor psychedelics seems to end well.

And to be clear that this isn't just disguised "boo subgroups and leaders", let's be clear that, say, OpenPhil is for these purposes a Subgroup and Holden Karnofsky is a Leader, as are MIRI and myself; they are just Subgroups and Leaders which have not, to my knowledge, ever advocated LSD or tarot readings.

(+1 from Rob B, Anonymous, Marie L)

_____________________________________________

Marie La:  @Eliezer Yudkowsky   Strong agree that psychedelics + a strong leader context can enable great harm quickly.

It tends to make people vulnerable, and if there's a bad actor around this can be dangerous. This has been plenty weaponized for controlling people in cult-like groups.

I'd expand this to include most drugs, but especially classical psychs and mdma.

Here I'd blame the leader for seeking out vulnerable psychedelic users or encouraging people under them to use psychedelics, rather than the drugs themselves

(I can't say much about weaponized woo, don't know what that looks like as much)

(+1 from Anonymous)

_____________________________________________

Rob Bensinger:  https://slatestarcodex.com/2019/09/10/ssc-journal-club-relaxed-beliefs-under-psychedelics-and-the-anarchic-brain suggests that psychedelics permanently "relax" people's perceptual and epistemic priors, and that this is maybe why they can cause hallucinogen persisting perceptual disorder and crazy beliefs.

This seems maybe much worse for people whose starting priors are quite good. If your life is a wreck and you're terrible at figuring out what's true, then yeah, shaking things up might be great. But if rationalists are selected for having unusually *good* priors, then shaking things up will cause regression to the mean.

Cf. Jim Babcock's argument that you shouldn't be experimenting with new diets if you're already an unusually awesome, productive, etc. person, since then you risk breaking a complex system that's working well. It's the people whose status quo sucks who should be experimenting.

(+1 from Jim B, Eliezer Y, Anonymous, Marie L)

_____________________________________________

Jim Babcock:  My own impression is that the effect of LSD is not primarily a regression to the mean thing, but rather, that it temporarily enables some self-modification capabilities, which can be powerfully positive but which require a high degree of sanity and care to operate safely. When I see other people using psychedelics, very often I see them acting like Harry Potter experimenting with transfiguration, or worse, treating it as *entertainment*. And I want to yell at them, and point them at the scene in Dumbledore's laboratory where Dumbledore and Minerva go through a checklist and have a levels-of-precaution framework and have a step where they actually stop and think before they begin.

(+1 from Rob B, Marie L)

_____________________________________________

Eliezer Yudkowsky:  I worry that we're shooting ourselves in the foot by telling ourselves that psychedelics "temporarily enable some self-modification capabilities" rather than doing shit to the brain that we don't understand, and we know a bunch of people who seemed a lot more promising and sane before they did some psychedelics, and now they're not the people they were anymore and not in a good way, and there is not in fact any good way to be sure of who that happens to because it did not seem very predictable in advance at the time, and maybe you can roll the dice on that if you're tired of being yourself and want to take a bet with high variance and negative expected value, but you sure don't do it in little subgroups that put an arm around somebody's shoulder and make helpful offers.

(+1 from Rob B, Marie L, Jim B)

_____________________________________________

Jim Babcock: I suspect you may be underestimating the base rate of people using psychedelics discreetly and having a neutral or mild-positive effect that they don't talk about, and also underestimating the degree of stupidity that people bring to bear in their drug use.

At Burning Man, I saw a lot of stuff like: making dosing and mixing decisions while not sober; taking doses without a scale, not being justifiably confident that the dose is the right order of magnitude; taking their epistemically-untrustworthy friend's word that a substance they've never heard of before is safe for them. That sort of thing. And that's before even getting into the social stuff, where eg people really shouldn't be having conversations about crazymaking topics while high. And the legitimately hard stuff, like it's important to have a tripsitter who will break thought loops if you step on a trauma trigger, but also important that that tripsitter not be a significant other or cult leader and not be someone who will talk to you about crazymaking topics.

Meanwhile nearly everyone has been exposed to extremely unsubtle and substantially false anti-drug propaganda, which fails to survive contact with reality. So it's unfortunate but also unsurprising that the how-much-caution pendulum in their heads winds up swinging too far to the other side. The ideal messaging imo would leave most people feeling like planning an acid trip is more work than they personally will get around to, plus mild disdain towards impulsive usage and corner-cutting.

(+1 from Rob B, Marie L)

Replies from: ioannes_shade, Vaniver
comment by ioannes (ioannes_shade) · 2021-10-19T17:14:51.162Z · LW(p) · GW(p)


Jim Babcock's stance here is the most sensible one I've seen in this thread:


My own impression is that the effect of LSD is not primarily a regression to the mean thing, but rather, that it temporarily enables some self-modification capabilities, which can be powerfully positive but which require a high degree of sanity and care to operate safely.

...

Meanwhile nearly everyone has been exposed to extremely unsubtle and substantially false anti-drug propaganda, which fails to survive contact with reality. So it's unfortunate but also unsurprising that the how-much-caution pendulum in their heads winds up swinging too far to the other side. The ideal messaging imo would leave most people feeling like planning an acid trip is more work than they personally will get around to, plus mild disdain towards impulsive usage and corner-cutting.

comment by Vaniver · 2021-10-18T04:16:21.398Z · LW(p) · GW(p)

Somehow this reminds me of the time I did a Tarot reading for someone, whose only previous experience had been Brent Dill doing a Tarot reading, and they were... sort of shocked at the difference. (I prefer three card layouts with a simple context where both people think carefully about what each of the cards could mean; I've never seen his, but the impression I got was way more showmanship.)

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2021-10-18T12:27:23.211Z · LW(p) · GW(p)

If it works as a device to facilitate sub-conscious associations, then maybe an alternative should be designed that sheds the mystical baggage and comes with clear explanations of why and how it works. 

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-18T21:36:49.983Z · LW(p) · GW(p)

I'm generally very anti-woo, but I expect presenting it clearly and without baggage would make it stop working because the participant would be in a different mental state.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2021-10-18T22:30:03.727Z · LW(p) · GW(p)

Well, if that is true then that would be another avenue to research mental states. Something that is clearly needed.

But what I really wanted to say: You shouldn't do it if you can't formulate hypotheses and do experiments for it.

comment by Elizabeth (pktechgirl) · 2021-10-19T02:48:16.649Z · LW(p) · GW(p)

No greater sign that Eliezer isn't leading a cult than that my first reaction to this was "pfft, good luck", even when I misread it as "we should shame individuals for doing these things Elizabeth finds valuable" and not the more reasonable "leaders pushing this are suspect"

Replies from: ioannes_shade
comment by ioannes (ioannes_shade) · 2021-10-19T17:07:23.416Z · LW(p) · GW(p)

Big +1.

Really important to disambiguate the two:

"People shouldn't do psychedelics" is highly debatable and has to argue against a lot of research demonstrating their efficacy for improving mental wellness and treating psychiatric disorders.

"Leaders & subgroups shouldn't push psychedelics on their followers" seems straightforwardly correct.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-21T14:46:18.994Z · LW(p) · GW(p)

I haven't taken any psychedelics myself. I have the impression that best practice with LSD is not to take it alone but to have someone skillful as a trip sitter. I imagine having a fellow rationalist as a trip sitter is much better than having some new-agey person with sketchy epistemics.

comment by Viliam · 2021-10-17T21:29:52.955Z · LW(p) · GW(p)

Thank you for saying this!

I wonder where the line will be drawn with regards to the { meditation, Buddhism, post-rationality, David Chapman, etc. } cluster. On one hand, meditation -- when done without all the baggage, hypothetically -- seems like a useful tool. On the other hand, it simply invites all that baggage, because that is in the books, in the practicing communities, etc. Also, Christianity is an outgroup, but Buddhism is a fargroup, so people seem less averse to religious connotations; in my opinion, it's just a different flavor of the same poison. Buddhism is sometimes advertised as a kind of evidence-based philosophy, but then you read the books and they discuss the supernatural and describe the miracles done by Buddha. Plus the insights into your previous lives, into the ultimate nature of reality (my 200 Hz brain sees the quantum physics, yeah), etc.

Also, somewhat ironically...

Marcello and I developed a convention in our AI work: when we ran into something we didn’t understand, which was often, we would say “magic”—as in, “X magically does Y”—to remind ourselves that here was an unsolved problem, a gap in our understanding. It is far better to say “magic” than “complexity” or “emergence”; the latter words create an illusion of understanding. Wiser to say “magic,” and leave yourself a placeholder, a reminder of work you will have to do later.

This [LW · GW] made perfect sense in 2007, because whoever was reading these words, they knew you didn't mean "magic" literally. But now I see Anna's recent comment [LW(p) · GW(p)]:

For example, once I came to believe that an acquaintance was having a psychotic episode and suggested he see a psychiatrist; the psychiatrist agreed. A friend who'd observed most of the same data I had asked me how I'd known. I said it was several things, but that the bit where our acquaintance said God was talking to him through his cereal box was one of the tip-offs from my POV. My friend's response was "oh, I thought that was a metaphor." I know several different stories like this one, including a later instance where I was among those who missed what in hindsight was fairly blatant evidence that someone was psychotic, none of which involved weird group-level beliefs or practices.

...and I am not sure what we can and what we can't jokingly countersignal [LW · GW] anymore, even within the supposed rationalist (-adjacent) community. Especially with people in the neighborhood exorcising demons and clearing bad energy using crystals. Like, what the fuck happened to the sanity waterline; this feels like walking across a desert.

Maybe the extraordinary times require extraordinary care at using metaphors, because someone (perhaps someone hanging out with rationalists, and then blogging about their experience) will take them literally.

Or maybe you should move out of the Bay Area, a.s.a.p. (Like, half seriously, I wonder how much of this epistemic swamp is geographically determined. Not having the everyday experience, I don't know.)

Replies from: Holly_Elmore, RobbBB, wunan, Bjartur Tómas, steven0461
comment by Holly_Elmore · 2021-10-18T21:22:33.378Z · LW(p) · GW(p)

Western Buddhism tends to be more of a bag of wellness tricks than a religion, but it’s worth sharing that Buddhism proper is anti-life. It came out of a Hindu obsession with ending the cycle of reincarnation. Nirvana means “cessation.” The whole idea of meditation is to become tolerant of signals to action so you can let them pass without doing the things that replicate them or, ultimately, propagate any life-like process. Karma is described as a giant wheel that powers reincarnation and gains momentum whenever you act unconsciously. The goal is for the wheel to stop moving and the way is to unlearn your habit of kicking it. When the Buddha became enlightened under the Bodhi tree, it wasn’t actually complete enlightenment. He was “enlightened with residues”— he stopped making new karma but he was still burning off old karma. He achieved actual cessation when he died. To be straight up enlightened, you stop living. The whole project of enlightenment is to end life.

It’s a sinister and empty philosophy, IMO. A lot of the insights and tools are great but the thrust of (at least Theravada) Buddhism is my enemy.

Replies from: RobbBB, romeostevensit, Kaj_Sotala, Unreal
comment by Rob Bensinger (RobbBB) · 2021-10-18T22:26:33.802Z · LW(p) · GW(p)

I agree this is pretty sinister and empty. Traditional samsara includes some pretty danged nice places (the heavens), not just things that have Earth-like quantities or qualities of flourishing; so rejecting all of that sounds very anti-life.

 Some complicating factors:

  • It's not clear (to put it lightly) what parinirvana (post-death nirvana / escape from samsara) entails. Some early Buddhists seem to have thought of it as more like oblivion/cessation; others seem to have thought of it as more like perfectly blissful experience.

(Obviously, this becomes more anti-life when you get rid of supernaturalism -- then the only alternative to 'samsara' is oblivion. But the modern Buddhist can retreat to various mottes about what 'nirvana' is, such as embracing living nirvana (sopadhishesa-nirvana) while rejecting parinirvana.)

  • The Buddhists have a weird psychological theory according to which living in samsara inherently sucks. Liking or enjoying things is really just another species of bad.

The latter view is still pretty anti-life, but notably, it's a psychological claim ('this is what it's really like to experience things'), not a normative claim that we should reject life a priori. If a Buddhist updates away from thinking everything is dukkha, they aren't necessarily required to reject life anymore -- the life-rejection was contingent on the psych theory.

Replies from: Kaj_Sotala, sil-ver
comment by Kaj_Sotala · 2021-10-18T22:48:17.767Z · LW(p) · GW(p)

There are also versions of the psychological theory in which dukkha is not associated with all motivation, just the craving-based system [LW · GW], which is in a sense "extra"; it's a layer on top of [LW · GW] the primary motivation system, which would continue to operate even if all craving was eliminated. Under that model (which I think is the closest to being true), you could (in principle) just eliminate the unpleasant parts of human motivation, while keeping the ones that don't create suffering - and probably get humans who were far more alive as a result, since they would be far more willing to do even painful things if pain no longer caused them suffering. 

Pain would still be a disincentive in the same way that a reinforcement learner would generally choose to take actions that brought about positive rather than negative reward, but it would make it easier for people to voluntarily choose to experience a certain amount of pain in exchange for better achieving their values afterwards, for instance.

Replies from: MondSemmel
comment by MondSemmel · 2021-10-20T13:52:06.751Z · LW(p) · GW(p)

Related to this (?) is the notion that 'wanting' and 'liking' are separate systems. For instance, from a random paper:

Incentive salience or ‘wanting’, a form of motivation, is generated by large and robust neural systems that include mesolimbic dopamine. By comparison, ‘liking’, or the actual pleasurable impact of reward consumption, is mediated by smaller and fragile neural systems, and is not dependent on dopamine. The incentive-sensitization theory posits the essence of drug addiction to be excessive amplification specifically of psychological ‘wanting’, especially triggered by cues, without necessarily an amplification of ‘liking’. This is due to long-lasting changes in dopamine-related motivation systems of susceptible individuals, called neural sensitization.

In this perspective, a philosophy can say that 'wanting' is psychologically unhealthy while 'liking' is fine. I'm not sure if this is what Buddhists actually believe, but it is how I've interpreted notions like "desire leads to suffering", "letting go", "ego death", etc.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2021-10-20T18:39:49.527Z · LW(p) · GW(p)

There's that, but I think it would also be misleading to say that (all) Buddhists consider desire/wanting to be bad! (Though to be clear, it does seem like some of them do.)

I liked this article's take on the issue.

I also sometimes wonder whether it would help to distinguish more cleanly and explicitly between caring and clinging as different dimensions of experience. I, at least, have found it clarifying (who knows if it’s exegetically accurate) to think of the Buddha as centrally advocating that you let go of clinging, as understood above; and of many contemporary Buddhist practices and ideas as oriented towards this goal. That is, the aim is not, centrally, to care less about anything (though sometimes that’s appropriate too). Rather, the aim (or at least, one aim) is to care differently — without a certain kind of internal, experiential contraction. To untie a certain kind of knot; to let go of a certain type of denial/resistance towards what is or could be; and in doing so, to step more fully into the real world, and into a kind of sanity.

comment by Rafael Harth (sil-ver) · 2021-10-18T22:42:27.204Z · LW(p) · GW(p)

The Buddhists have a weird psychological theory according to which living in samsara inherently sucks. Liking or enjoying things is really just another species of bad

I don't think this is true, at least not insofar as it describes the original philosophy. You may be thinking about the first noble truth "The truth of Dukkha", but Dukkha is not correctly translated as suffering. A better translation is "unsatisfactoriness". For example, even positive sensations are Dukkha, according to the Buddha. I think the intention of the first noble truth is to say that worldly sensations, positive and negative, are inherently unsatisfactory.

The Buddha has also said pretty explicitly that a great happiness can be achieved through the noble path, which seems to directly contradict the idea that life inherently sucks, and that suffering can be overcome.

(However, there may be things he's said that support the quote; I'm definitely not claiming to have a full or even representative view.)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-18T23:35:44.079Z · LW(p) · GW(p)

I don't think this is true, at least not insofar as it describes the original philosophy. You may be thinking about the first noble truth "The truth of Dukkha", but Dukkha is not correctly translated as suffering. A better translation is "unsatisfactoriness". For example, even positive sensations are Dukkha, according to the Buddha. I think the intention of the first noble truth is to say that worldly sensations, positive and negative, are inherently unsatisfactory.

From https://www.lionsroar.com/forum-understanding-dukkha/:

Bhikkhu Bodhi: In the Pali suttas, the discourses of the Buddha, the word dukkha is used in at least three senses. One, which is probably the original sense of the word dukkha and was used in conventional discourse during the Buddha’s time, is pain, particularly painful bodily feelings. The Buddha also uses the word dukkha for the emotional aspect of human existence. There are a number of synonyms that comprise this aspect of dukkha: soka, which means sorrow; parideva, which is lamentation; domanassa, which is sadness, grief, or displeasure; and upayasa, which is misery, even despair. The deepest, most comprehensive aspect of dukkha is signified by the term sankhara-dukkha, which means the dukkha that is inherent in all conditioned phenomena simply by virtue of the fact that they are conditioned.

Followed by:

Konin Cardenas: In the Zen tradition, dukkha is often translated as “suffering,” although more often it means dissatisfaction or the nagging sense that something is off, or sometimes even existential angst. It seems that dukkha is discussed more explicitly in American Zen than it commonly has been elsewhere in the Zen world. In my experience, Japanese Zen tends to assume that people come to practice seeking enlightenment—I can’t think of a single time I heard the word “dukkha” used during my Japanese training.

I could buy that early Buddhists were using a word that basically meant 'suffering' or 'pain' metaphorically, but what's the argument that this wasn't the original word meaning at all? (I'm not a specialist on this topic, I'm just wary of 'rationalizing' tendencies for modern readers to try to retranslate concepts in ways that make them sound more obvious/intuitive/modern.)

The Buddha has also said pretty explicitly that a great happiness can be achieved through the noble path, which seems to directly contradict the idea that life inherently sucks, and that suffering can be overcome.

If you think great happiness can be achieved through the Noble Path and you should leave samsara anyway, that's an even more extreme anti-life position, because you're rejecting the best life has to offer.

I do agree that Buddhism claims you can get tons of great conventional bliss-states on the road to nirvana (see also the potential to reincarnate in the various heavens); but then it rejects those too, modulo the complications I noted in my upthread comment.

Replies from: sil-ver, Holly_Elmore
comment by Rafael Harth (sil-ver) · 2021-10-19T09:42:44.419Z · LW(p) · GW(p)

I 100% grant that you can find people, including Buddhist scholars, who will translate dukkha that way. I would generally trust Wikipedia to get a reasonable consensus on this, but in this case, it is also inconsistent, e.g. this quote from the article about Buddhism

The truth of dukkha is the basic insight that life in this mundane world, with its clinging and craving to impermanent states and things[53] is dukkha, and unsatisfactory.[55][66][web 1] Dukkha can be translated as "incapable of satisfying,"[web 5] "the unsatisfactory nature and the general insecurity of all conditioned phenomena"; or "painful."[53][54] Dukkha is most commonly translated as "suffering," but this is inaccurate, since it refers not to episodic suffering, but to the intrinsically unsatisfactory nature of temporary states and things, including pleasant but temporary experiences.[note 9] We expect happiness from states and things which are impermanent, and therefore cannot attain real happiness.

backs up what I just said, but from the article about dukkha:

Duḥkha (/ˈduːkə/; Sanskrit: दुःख; Pāli: dukkha) is an important concept in Hinduism and Buddhism, commonly translated as "suffering", "unhappiness", "pain", "unsatisfactoriness" or "stress".[1][2][3][4][5][6] It refers to the fundamental unsatisfactoriness and painfulness of mundane life.

I guess I have a strong opinion on this much like someone could have a strong opinion on what the bible says about abortion even if there are scholars on both sides. My main point is that [the idea that there is a path to overcome suffering in this life] is *not* a western invention. The Buddha may have also talked about rebirth and karma and stuff, but he has made this much clear at several points in pretty direct language, and he even talked about lasting happiness that can be achieved through the noble path. (I know he e.g. endorsed the claim that this kind of happiness has "no drawbacks"). Bottom line, I think it requires a very tortured reading of his statements to reconcile this with the idea that life on earth is necessarily negative well-being.

There's also the apparent contradiction in just the noble truths ("the truth of dukkha", "the origin of dukkha", "the end of dukkha", "the path to the end of dukkha") because (1) is usually phrased as "dukkha is an inherent part of the world", which would then contradict (3), unless you read (3) as only referring to the end via escaping the cycle of rebirth (which again I don't think can be reconciled with what the Buddha actually said). It's annoying, but you have to read dukkha as referring to different things if you want to make sense of this.

If you think great happiness can be achieved through the Noble Path and you should leave samsara anyway, that's an even more extreme anti-life position, because you're rejecting the best life has to offer.

Agreed. (And I would agree that this is more than enough reason not to defend original Buddhism as a philosophy without picking and choosing.)

comment by Holly_Elmore · 2021-10-19T03:24:39.583Z · LW(p) · GW(p)

It makes sense to me to use dukkha as "unsatisfactoriness" because it emphasizes that the issue is resisting the way things are or needing things to be different. 

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-19T05:52:56.831Z · LW(p) · GW(p)

I think it makes Buddhism higher-probability to translate dukkha that way. This on its own doesn't immediately make me confident that the original doctrines had that in mind.

For that, I'd want to hear more from Pāli experts writing articles that discuss standard meanings for dukkha at the time, and asks questions like "If by 'dukkha' early Buddhists just meant 'not totally satisfactory', then why did they choose that word (apparently mainly used for physical pain...?) rather than some clearer term? Were there no clearer options available?"

Replies from: Kaj_Sotala, Holly_Elmore, Slider, Holly_Elmore
comment by Kaj_Sotala · 2021-10-20T18:47:27.326Z · LW(p) · GW(p)

If by 'dukkha' early Buddhists just meant 'not totally satisfactory', then why did they choose that word (apparently mainly used for physical pain...?) rather than some clearer term? 

Note that Wikipedia gives the word's etymology as being something that actually does seem pretty analogous to 'not totally satisfactory':

The word is commonly explained as a derivation from Aryan terminology for an axle hole, referring to an axle hole which is not in the center and leads to a bumpy, uncomfortable ride. According to Winthrop Sargeant,

The ancient Aryans who brought the Sanskrit language to India were a nomadic, horse- and cattle-breeding people who travelled in horse- or ox-drawn vehicles. Su and dus are prefixes indicating good or bad. The word kha, in later Sanskrit meaning "sky," "ether," or "space," was originally the word for "hole," particularly an axle hole of one of the Aryan's vehicles. Thus sukha … meant, originally, "having a good axle hole," while duhkha meant "having a poor axle hole," leading to discomfort.[12]

Joseph Goldstein, American vipassana teacher and writer, explains the etymology as follows:

The word dukkha is made up of the prefix du and the root kha. Du means "bad" or "difficult". Kha means "empty". "Empty", here, refers to several things—some specific, others more general. One of the specific meanings refers to the empty axle hole of a wheel. If the axle fits badly into the center hole, we get a very bumpy ride. This is a good analogy for our ride through saṃsāra.

As I heard one meditation teacher put it, the modern analogy to this would be if you had one of those shopping carts where one of the wheels is stuck and doesn't quite go the way you'd like it - doesn't exactly kill you or cause you enormous suffering, but it's not a totally satisfactory shopping cart experience, either.

(Leigh Brasington also has a fun take.)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-20T23:23:27.367Z · LW(p) · GW(p)

I find arguments by etymology almost maximally unconvincing here, unless dukkha was a neologism? Like, those arguments make me update away from your conclusion, because they seem so not-of-the-correct-type. Normally, word etymologies are a very poor guide to meaning compared to looking at usage -- what do other sources actually mean when they say "dukkha" in totally ordinary contexts?

There's a massive tradition across many cultures of making sophistical arguments about words' 'true' or 'real' meaning based on (real or imagined) etymologies. This is even dicier when the etymology is as vague/uninformative as this one -- there are many different ways you can spin 'bad axle hole' to give exactly opposite glosses of dukkha.

I still don't find this 100% convincing/exacting, but the following account at least doesn't raise immediate alarm bells for me:

According to Pali-English Dictionary, dukkha (Sk. duḥkha) means unpleasant, painful, causing misery.[4] [...]

The other meaning of the word dukkha, given in Venerable Nyanatiloka written Buddhist Dictionary, is “ill”. As the first of the Four Noble Truths and the second of the three characteristics of existence (tilakkhaṇa), the term dukkha is not limited to painful experience (as “pain”, “painful feeling”, which may be bodily and mental), but refers to the unsatisfactory nature and the general insecurity of all conditioned phenomena which, on account of their impermanence, are all liable to suffering, and this includes also pleasurable experience. Hence “unsatisfactoriness” or “liability to suffering” would be more adequate renderings, if not for stylistic reasons.[6] Therefore, it can be said that dukkha is the lack of satisfaction.

Our modern words are too specialized, too limited, and usually too strong. Sukha and dukkha are ease and dis-ease (but we use disease in another sense); or wealth and ilth from well and ill (but we have now lost ilth); or wellbeing and ill-ness (but illness means something else in English). We are forced, therefore, in translation to use half synonyms, no one of which is exact. Dukkha is equally mental and physical. Pain is too predominantly physical, sorrow too exclusively mental, but in some connections they have to be used in default of any more exact rendering. Discomfort, suffering, ill, and trouble can occasionally be used in certain connections. Misery, distress, agony, affliction and woe are never right. They are all much too strong & are only mental.[7] As there is no word in English covering the same ground as dukkha does in Pali, I believe, the most appropriate translation equivalent of dukkha could be ‘stress’ (as distress and eustress).

Distress is a term of modern psychology which implies ‘great pain, anxiety, or sorrow; acute physical or mental suffering; affliction or trouble; that which causes pain, suffering, trouble, danger, etc.’ ‘It is state of extreme necessity or misfortune’, liability or exposure to pain, suffering, trouble, etc.; or danger’.[8]

The antonym of dukkha is sukha, which is agreeable, pleasant, blest. In Buddhist usage it is not merely sensual pleasure, it is the happy feeling in ordinary sense. But it is also used to convey an ethical import of doctrinal significance. The concept of dukkha necessarily includes the general insecurity of the whole of our experience.[9]

Using psychological terminology, it can be said that, sukha is the equivalent of eustress, which means a so-called positive tension or ‘good stress’. Eustress is derived from the ‘Greek eu 'well, good' + stress’ and means ‘stress that is deemed healthful or giving one the feeling of fulfillment[ ]'[10] or other positive feelings.

[...]

Dukkha in non-Buddhist belief-systems

Two ideas of great significance developed between the ninth and sixth centuries BCE, namely that beings are reincarnated into the world (saṃsāra) over and over again and that the results of action (karma) are reaped in future lives. This process of rebirth is one of suffering (duḥkha), escape from which can be achieved through the minimizing of action and through spiritual knowledge. Patañjali (second century BCE), a systematizer of yoga practice and philosophy, states that all is suffering to the spiritually discriminating person (vivekin). This doctrine that all life is suffering is common to the renouncer traditions.[13]

[...] Hinduism expects of followers accepting suffering as inevitable and inescapable consequence and as an opportunity for spiritual progress. Thus the soul or true self, which is eternally free of any suffering, may come to manifest itself in the person, who then achieves liberation (moksha). 

As I have already mentioned above, Hinduism is a complex mixture of religious movements. Concerning the relation between Ultimate Reality and evil, there are at least three major perspectives, given by (1) Vedas, (2) Upanishads and the whole corpus of pantheistic writings and (3) Epics and Puranas.

Suffering in Vedas also refers to theory of moral law of cause and effect. [...] In the hymns addressed to Varuna (Vedic god) evil is a matter of humans not fulfilling his laws or not performing the ritual properly. Often it has a moral significance, in that people are evil-minded or commit adultery. Those who commit evil deeds must repent before Varuna and try to repair their evil deeds through ritual sacrifices. In other hymns addressed to Indra, suffering or evil is personified by demons. Thus the fight against evil is a perpetual combat between personalized good and evil forces.[17]

The Upanishads ground a pantheistic perspective on Ultimate Reality and introduce karma as the explanation of evil in the world. Ignorance launches karma into action and karma brings suffering. As the manifestations and dissolutions of the world have no beginning and no end, so is karma, meaning that suffering is a part of the eternal cosmic cycle. Suffering in the present life is the natural consequence of past lives’ ignorance and it has to be endured without questioning.[18]

Hinduism holds that suffering is the fruit of karma, which goes accompanied by the inevitable shadow from personal unwholesome actions in one’s current life or in a past life. The monotheistic faiths must contemplate the problems of suffering, ill or evil within the context of god's authority and mercy.

[...]

More detailed overview of dukkha [in Buddhism]

A more comprehensive overview of what the term dukkha implies is given in the Saccavibhaṅga Sutta (An Analysis of the Truths) by Sāriputta:

“Now what, friends, is the noble truth of stress? Birth is stressful, aging is stressful, death is stressful; sorrow, lamentation, pain, distress, & despair are stressful; association with the unbeloved is stressful; separation from the loved is stressful; not getting what is wanted is stressful. In short, the five clinging-aggregates are stressful.

And what is birth (jāti)? Whatever birth, taking birth, descent, coming-to-be, coming-forth, appearance of aggregates, & acquisition of [sense] spheres of the various beings in this or that group of beings, that is called birth. 

And what is aging (jāra)? Whatever aging, decrepitude, brokenness, graying, wrinkling, decline of life-force, weakening of the faculties of the various beings in this or that group of beings, that is called aging.

And what is death (maraṇa)? Whatever deceasing, passing away, breaking up, disappearance, dying, death, completion of time, break up of the aggregates, casting off of the body, interruption in the life faculty of the various beings in this or that group of beings, that is called death.

And what is sorrow (soka)? Whatever sorrow, sorrowing, sadness, inward sorrow, inward sadness of anyone suffering from misfortune, touched by a painful thing, that is called sorrow.

And what is lamentation (parideva)? Whatever crying, grieving, lamenting, weeping, wailing, lamentation of anyone suffering from misfortune, touched by a painful thing, that is called lamentation.

And what is pain (dukkha)? Whatever is experienced as bodily pain, bodily discomfort, pain or discomfort born of bodily contact, that is called pain.

And what is distress (domanassa)? Whatever is experienced as mental pain, mental discomfort, pain or discomfort born of mental contact, that is called distress.

And what is despair (upāyāsa)? Whatever despair, despondency, desperation of anyone suffering from misfortune, touched by a painful thing that is called despair.

And what is the stress of association with the unbeloved? There is the case where undesirable, unpleasing, unattractive sights, sounds, aromas, flavors, or tactile sensations occur to one; or one has connection, contact, relationship, interaction with those who wish one ill, who wish for one's harm, who wish for one's discomfort, who wish one no security from the yoke. This is called the stress of association with the unbeloved.

And what is the stress of separation from the loved? There is the case where desirable, pleasing, attractive sights, sounds, aromas, flavors, or tactile sensations do not occur to one; or one has no connection, no contact, no relationship, no interaction with those who wish one well, who wish for one's benefit, who wish for one's comfort, who wish one security from the yoke, nor with one's mother, father, brother, sister, friends, companions, or relatives. This is called the stress of separation from the loved.

And what is the stress of not getting what is wanted (yam pi icchaṃ na labbati)? In beings subject to birth, the wish arises, 'O, may we not be subject to birth, and may birth not come to us.' But this is not to be achieved by wanting. This is the stress of not getting what is wanted. In beings subject to aging... illness... death... sorrow, lamentation, pain, distress, & despair, the wish arises, 'O, may we not be subject to aging... illness... death... sorrow lamentation, pain, distress, & despair, and may aging... illness... death... sorrow, lamentation, pain, distress, & despair not come to us.' But this is not to be achieved by wanting. This is the stress of not getting what is wanted.”[32]

The bailey of dukkha is that it's really bad -- like physical pain. And the older texts seem to generally embrace, indeed presuppose, this bailey -- the whole reason these sentences sound radical, revolutionary, concerning, is that they're saying something not-obvious and seemingly extreme about all ordinary experience. Not that it's literally physical pain, sure; but there's a deliberate line being drawn between physical pain, illness, dysfunction, suffering, badness, etc. and many other things.

My intuition is that these lines would have hit very differently if their first-pass meaning in the eyes of Sanskrit- or Pali-speakers had been "Birth isn't totally satisfying, aging isn't totally satisfying, death isn't totally satisfying..."

(I can much more easily buy that 'physical pain' is the obvious surface meaning for initiates, the attention-grabbing Buzzfeed headline; and something like 'not perfectly satisfying' is the truly-intended motte, meant for people to come to understand later. But in that case the etymology arguments are totally backwards, since etymology relates to common usage and not 'weird new esoteric meaning our religion is inventing here'.)

comment by Holly_Elmore · 2021-10-19T21:27:28.099Z · LW(p) · GW(p)

If by 'dukkha' early Buddhists just meant 'not totally satisfactory', then why did they choose that word (apparently mainly used for physical pain...?

I'm willing to believe, based on the totality of the Buddha's message, that he meant dukkha as "resisting how things are/wanting them to be different," i.e. being unsatisfied with reality. Look at our own word "suffering" in English. Today it connotes anguish, but it also means "enduring" or "putting up with." A word like "unsatisfied" in English has a mild connotation, but we could also say something like "tormented by desire" to ramp up the intensity without fundamentally changing the meaning. 

comment by Slider · 2021-11-03T14:32:14.759Z · LW(p) · GW(p)

I think even in current English there is an idiom for pain, i.e. "It pains me that I don't have food" vs. "I am hungry". One variant of the claim is that there is a way to be food-poor that is positive: "It delights me that I don't have food" or just "I don't have food".

comment by Holly_Elmore · 2021-10-25T04:56:38.064Z · LW(p) · GW(p)

I think it would be pretty hard to translate words like “annoying,” “irritating,” etc. to a very foreign audience without making reference to physical pain. It’s hard to infer connotations or intensity when looking at those older writings.

comment by romeostevensit · 2021-10-18T23:03:35.404Z · LW(p) · GW(p)

The set of metaphors that have come to the west is dominated by the early transmission of Buddhism, which occurred in the late 1800s and was carried out by Sanskrit scholars translating from Sanskrit sources. The Buddha specifically warned people against translating his teachings into Sanskrit for pretty much the sorts of reasons being passed off as genuine Buddhism here.

Replies from: ioannes_shade
comment by ioannes (ioannes_shade) · 2021-10-19T17:47:57.710Z · LW(p) · GW(p)

Quora for the curious: Did the Buddha forbid the translation of his teachings into Sanskrit? If so, did he mention why?

From my quick skim of those answers, it looks like he was more concerned about accessibility of the teachings rather than issues of interpretation.

comment by Kaj_Sotala · 2021-10-18T22:33:21.623Z · LW(p) · GW(p)

The whole idea of meditation is to become tolerant of signals to action so you can let them pass without doing the things that replicate them or, ultimately, propagate any life-like process.

I'm willing to grant that there are certain interpretations of Buddhism that take this view, but object pretty strongly to depicting it as the idea of meditation. Especially since there are many different varieties of meditation, with varying degrees of (in)compatibility with this goal; something like loving-kindness or shi-ne meditation both seem more appropriate for creating activity, for instance.

In my view, there are so many varieties and interpretations of Buddhism that pointing to some of them having an anti-life view always seems like a weird sleight of hand. By saying that Buddhism originates as an anti-life practice, one can then imply that all of its practices also tend to lead towards that goal, without needing to establish that that's actually the case.

After all, just because some of the people who developed such techniques wanted to create an anti-life practice doesn't mean that they actually succeeded in developing techniques that would be particularly well-suited for this goal. I agree that it's possible to use them for such a goal, especially if they're taught in the context of an ideology that frames the practice that way, but I don't think they're very effective for that goal even then.

Replies from: Holly_Elmore
comment by Holly_Elmore · 2021-10-19T21:20:51.658Z · LW(p) · GW(p)

I think if rationalists are interested in Buddhism as part of their quest to find truth, they should know that it has, at the very least, deathist origins. 

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2021-10-20T18:52:34.934Z · LW(p) · GW(p)

I agree that it's valuable to be aware of the life-denying aspects of the tradition, since those mindsets do affect some of its teachings, and it's good to be able to notice them and filter them out rather than accidentally absorbing them.

I do however object to characterizing "Buddhism proper" as anti-life, as it implies that any proper attempt to delve into or practice Buddhism will eventually just lead you into deathism.

comment by Unreal · 2021-10-18T21:41:34.413Z · LW(p) · GW(p)

This view is disputed and countered in the original texts. It is worth it to me to mention this, but I am not the right one to go into details. 

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-18T23:08:36.962Z · LW(p) · GW(p)

Some good (mainstream, scholarly) books on nirvana and historical Buddhism:

Excerpt (starting p. 69):

Consciousness is one of the Five Aggregates, and the Cessation of the Aggregates is final nirvana: consciousness, at least in this sense, cannot exist in nirvana.

[...] But for Buddhism, 'does not exist' is one of the four possibilities for the state of an enlightened person after his or her nirvanizing that are explicitly rejected. On the conceptual level there is an impasse, at least in the articulation of systematic thought: nirvana is the cessation of the consciousness Aggregate, but that is not equivalent to becoming non-existent: it is beyond designation.

[...] One might be tempted to say, as has indeed sometimes been said, in quasi-Buddhist terms, that apropos the Enlightened person 'in' nirvana, existence and non-existence here are two extremes, between which Buddhism proposes the Middle Way. But for a scholar to say only that would do no more than reproduce a cliche, putting on a Buddhist disguise and pretending to say something illuminating from a scholarly perspective. A better interpretive strategy, I suggest, is to see this as an example of the way silences within discourse are themselves part of the production of meaning.

[...] One can say that it is not non-existence, and it is a timeless bliss; to say more would be to rush in where Buddhas fear to tread.

 

Happiness

Since final nirvana is the cessation of the Aggregates, it is clear that just as there can be no consciousness in that sense, so there can be no Feeling, and no determinate Perception or Ideation, and so no happiness in any ordinary sense. At the same time, however, it is said to be a form of happiness, one of the standard list of three: those of mankind, of the gods, and of nirvana. Nirvana is repeatedly said to be the highest happiness. A passage repeated (with some variations) in a number of commentaries cites different canonical phrases to show that sukha can be used variously: inter alia, it denotes

(i) pleasurable feeling(s);

(ii) the 'roots of happiness', as in the phrase 'happy is the arising of Buddhas', or the 'cause of happiness', as in the phrase 'the accumulation of merit is [i.e., brings] happiness'; and

(iii) nirvana, as in the phrase 'nirvana is the highest happiness'.

[...] Commentaries explain:

"Here from the fourth Level onwards the feeling of neither suffering nor happiness (that occurs) is also said to be happiness in the sense that it is peaceful and sublime. Cessation [the ninth level] occurs as happiness in that it is the kind of happiness which is not a matter of feeling. For happiness that is a matter of feeling (occurs) through the five strands of sense-pleasure and through the eight (Meditation Level) attainments [i.e., as a feeling of happiness in nos. 1-3 and as the peaceful and sublime feeling of neither suffering nor happiness in nos. 4-8]. Cessation is (an example of) happiness that is not a matter of feeling. Whether the happiness be a matter of feeling or not, it is all happiness in that it is taken to be a state of non-suffering ... [The phrase in the texts] 'happiness exists' means that there exists either the happiness that is a matter of feeling or that which is not a matter of feeling. [The phrase] 'the Tathagata (the Buddha) assigns this or that to (The category of "happiness"' means that he assigns to happiness everything which is non-suffering."

Replies from: Unreal, RobbBB
comment by Unreal · 2021-10-19T01:44:52.134Z · LW(p) · GW(p)

But for Buddhism, 'does not exist' is one of the four possibilities for the state of an enlightened person after his or her nirvanizing that are explicitly rejected. On the conceptual level there is an impasse, at least in the articulation of systematic thought: nirvana is the cessation of the consciousness Aggregate, but that is not equivalent to becoming non-existent: it is beyond designation.

This section seems to say it well, highlighted bits in bold for easier reading. 

There is nothing pro-"nonexistence" in Buddhism. There is nothing pro-"ending or annihilating life." These takes are explicitly rejected in the Pali canon. 

It is very easy to misunderstand what Buddhism is saying, and the inferential gap is larger than I think most people imagine. The words / phrases do not have direct translations into common English. 

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-19T03:07:01.355Z · LW(p) · GW(p)

When someone claims something to be “beyond designation” or “beyond categorization” or any such thing, it’s a sure bet that they’re trying to slip one by you; in fact, the given thing belongs to a category which, if you recognized that membership, would lead you to reject it—and rightly.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-19T05:59:38.185Z · LW(p) · GW(p)

I think this is not true in full generality -- I think meditation does give people insights that are hard to verbalize, and does make some common verbal distinctions feel less joint-carving, so it makes sense for a tradition of meditators to say a lot in favor of 'things that are hard to verbalize' and 'things that can't be neatly carved up in the normal intuitive ways'.

I do think that once you have those insights, there's a strong temptation to lapse into sophistry or doublethink to defend whatever silly thing you feel like defending that day -- if someone doubts your claim that the Buddha lives on like a god or ghost after death, you can say that the Buddha's existence-status after death transcends concepts and verbalization.

When in fact the honest thing to say if you believed in immaterial souls would be 'I don't know what happened to the Buddha when he died', and the honest thing to say if you're an educated modern person is 'the Buddha was totally annihilated when he died, the exact same as anyone else who dies.'

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-19T06:34:28.949Z · LW(p) · GW(p)

I think meditation does give people insights that are hard to verbalize

What would the world look like if meditation only made people feel like they had insights that were hard to verbalize, without actually giving them any new insights?

(But also, “thing X is beyond designation” and “some fact(s) about thing X are hard to verbalize” are not the same thing.)

Replies from: Kaj_Sotala, Holly_Elmore
comment by Kaj_Sotala · 2021-10-19T10:46:00.632Z · LW(p) · GW(p)

If the world was one where meditation only made people feel like they had insights that were hard to verbalize, then I probably wouldn't have figured out ways to verbalize some of them [LW · GW] (mostly due to having knowledge of neuroscience etc. stuff that most historical Buddhists haven't had).

comment by Holly_Elmore · 2021-10-19T21:55:15.027Z · LW(p) · GW(p)

I admire koan practice in Zen as an attempt to make sure people are reaching genuine insights without being able to fully capture them in explicit words. 

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-19T22:08:44.789Z · LW(p) · GW(p)

Can you say more about this? I don’t think I quite follow.

Replies from: Holly_Elmore
comment by Holly_Elmore · 2021-10-25T05:05:43.065Z · LW(p) · GW(p)

Koans are “riddles” that are supposed to only be understandable by “insight,” a non-cognitive form of knowledge attained by entering “don’t know mind.” Meditating on koans “confuses the rational mind” so that it is easier to enter “don’t know mind.” Koan training consists of being given a koan by a master (the first one I ever received was “what is the meaning of [smacks hand into ground]?”), letting the koan confuse you and relaxing into that feeling, letting go of all the thoughts that try to explain, and then one day having the answer pop into your awareness (some schools have people concentrate on the koan, others say to just create the conditions for insight and it will come). If you explain your insight to a master and they think you’ve figured out (they often say “used up”) that koan, they give you a new one that’s even further from everyday thinking. And so it continues until you’ve gone through enough of the hundreds of koans in that lineage.

It’s a cool system because “getting” your koan is an objectively observable indicator of progress at meditation, which is otherwise quite difficult to assess.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-25T10:28:23.372Z · LW(p) · GW(p)

Ok, but how exactly does it “make sure people are reaching genuine insights”? Are there canonical correct answers to koans? (But that would seem to violate the “without being able to fully capture them in explicit words” clause…)

In other words, how do you know when you’ve correctly understood a koan? (When an answer pops into your awareness, how do you know it’s the right one?) And, what does it mean to correctly understand a koan? (What’s the difference between correctly understanding a koan and incorrectly understanding it?)

It’s a cool system because “getting” your koan is an objectively observable indicator of progress at meditation, which is otherwise quite difficult to assess.

Could you elaborate on this? I am confused by this point.

Replies from: Richard_Kennaway, Holly_Elmore
comment by Richard_Kennaway · 2021-10-25T14:43:44.691Z · LW(p) · GW(p)

Are there canonical correct answers to koans?

"The Sound of One Hand: 281 Zen Koans with Answers"

comment by Holly_Elmore · 2021-10-25T19:01:24.851Z · LW(p) · GW(p)

Masters have an oral tradition of assessing the answers to koans and whether they reflect genuine insight. They use the answers people give to guide their future training.

Having used up a few koans, I’d say the answers come to you pretty clearly. You get to a certain point in meditation and the koan suddenly makes sense in light of that.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-25T20:05:24.103Z · LW(p) · GW(p)

By what means do the masters assess whether the answers reflect “genuine insight”?

Is there a way for a non-master to evaluate whether a given answer to a koan is correct, or to show that the ostensibly-correct answer is correct? (Analogously to P vs. NP—if the correct answer is difficult to determine, is it nonetheless straightforward to verify?)

If the answer to the previous question is “no”, then how is one to know whether the ostensibly-correct answer is, in fact, actually correct?

Replies from: Holly_Elmore
comment by Holly_Elmore · 2021-10-28T21:09:26.032Z · LW(p) · GW(p)

It’s not really a question of being factually correct. The koan is designed to make sense on a non-cognitive, non-rational level. My experience was that I would have a certain insight on my own when I was meditating and then I would realize that that’s what the koan was talking about. What makes a good koan is that you’re totally stumped when you first hear it, but when it clicks you know that’s the right answer. That’s why one English translation is “riddle.” Some riddles have correct answers according to the terms they lay out, but really what makes a riddle is the recognition of a lateral thinking move, even if it’s as simple as a pun. Koans are “riddles” that require don’t-know mind.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-11-03T14:50:30.324Z · LW(p) · GW(p)

The koan is designed to make sense on a non-cognitive, non-rational level.

What is the content of whatever “insight” or “sense” it is that’s gained when you “get the right answer” to a koan? I do not see what it could mean to say that one has gained such an insight…

Some questions:

  1. Does it ever happen that someone “gets” a koan—it “clicks” for them, and they “know” that the answer they’ve got is “the right answer”—but actually, their answer differs from the canonically “correct” answer?

  2. Alternatively: does it ever happen that two different people both “get” a koan—it “clicks” for them both—but their answers differ?

  3. Do Zen teachers/masters ever disagree on what the “right” answer to a koan is? If so—how do they resolve this disagreement?

  4. Suppose I were to say to a Zen teacher: you say the answer to this koan is X, but I think it is actually Y. Please demonstrate to me that it is as you say, and not as I say. How might they do this?

comment by Rob Bensinger (RobbBB) · 2021-10-18T23:21:21.122Z · LW(p) · GW(p)

Excerpt (starting p. 155):

The awareness belonging to Buddha, then, is free from construction. But what, positively, could such awareness be like?

The digests are concerned to eliminate a number of possible errors in thinking about awareness without construction. The first of these is the error of judging such awareness to be identical with simple unconsciousness, a simple absence of mental activity. If it were, such absence would be easy to attain: a sharp blow to the head produces unconsciousness; and there are various meditational practices that dispose of many kinds of mental activity at an early stage of the practice of the path. But it is obvious that the absence of constructive activity that characterizes Buddha's awareness is not so easily attained. Neither is it the case, according to the digests, that Buddha's awareness is epistemically, phenomenally, or soteriologically as uninteresting as deep sleep or a drunken stupor.

More interestingly, the digests also negate the idea that unconstructed awareness is to be identified with a much more exalted meditative state called the 'attainment of cessation' (nirodhasamapatti) or the 'cessation of sensation and conceptualization' (samjnaveditanirodha). This is a condition attained by complex and difficult meditational practice, a condition wherein there are no mental events of any kind. It is not death; but it is not distinguishable from death by any phenomenal properties. The only difference between the two is that the attainment of cessation can be emerged from, while death cannot (not, at least, without various complications caused by the need to take on a new body and the like, complications that need not detain us here).

Buddha's construction-free awareness is distinct even from this exalted condition, and the digests put this in formal terms by denying that Buddha's awareness could be identified with the attainment of cessation, because if it were it would not be an instance of awareness (jnana) at all, which its name requires it to be, for awareness cannot occur where there are no mental events of any kind. The point here is the simple logical one that awareness is a species of mental event, from which it follows that no instance of awareness can be identified with a condition in which there are no mental events.

[... The digests negate] the claim that this awareness comprises any volitional turning of the mind toward its objects (alambanabhisamskara). This is not the same as denying that Buddha's awareness has content, or consists of events with phenomenal properties; it is simply a denial that the phenomenal properties of its apparent objects are, or can be, things with which it can be involved in a sustained and intentional way. In so far as what appears in the mind does so with phenomenal properties, those properties do not lead the Buddha-mind to fasten upon them, to follow after them, or to make judgments that a particular thing with particular properties is now being experienced.

For example, suppose Buddha sees a blue pot. One way of reading the negation described in the preceding paragraph is to say that Buddha has a spontaneous (that is, effortless, nonvolitional) moment of awareness (jnana) consisting of a mental object or image (alambana, nimitta) whose phenomenal properties (akara) consist of a complex list of things such as 'transient-blue-pot-here-now'; in English such an occurrence is best described adverbially by saying that Buddha is appeared to transient-blue-pot-here-now-ly.

[... T]he important distinction between Buddha's blue-pot awareness and mine is that Buddha neither does nor can judge that it is being appeared to blue-pot-ly, whereas I, other things being equal, inevitably do. Buddha, moreover, does not engage in the constructive activity of manipulating and massaging its mental images; it has no affective response to them, and, above all, no concern for their endurance, cessation, or repetition. The digests sometimes express this by saying that Buddha does not behave like an artist toward the objects of its awareness[.]

[...] If, in order to have phenomenal properties or modes of appearance, awareness must be characterized by effortful acts of attention toward specific objects (as it certainly must in most instances of ordinary awareness), then it is proper to say that Buddha's awareness is nirakara, 'free from modes of appearance.' But if possessing modes of appearance can be understood through the simile of reflections on the surface of a mirror, then it is reasonable to say that Buddha's awareness does have them -- for a mirror, like Buddha's awareness, does not engage itself with or focus upon specific 'reflectables'; it simply reflects, spontaneously, perfectly, and without distortion, everything that passes before it.

[...] The thrust of the digests toward presenting the Buddha as maximally great requires the scope of Buddha's awareness to be maximized: if it is good to have unconstructed awareness, then the temporal and spatial range of this awareness cannot be restricted or limited in Buddha's case: it must be, as the digests claim it to be, strictly universal in scope. Buddha must therefore be, in some important sense, omniscient[.]

[...] The digests generally agree that Buddha's universal awareness is not brought about by causes, since this would entail its contingency: if the proper causes had not obtained, its universal awareness would not have obtained. And this cannot be correct: Buddha's awareness has always (sada) and necessarily (avasyam) existed.

[... Many similar passages link] Buddha's permanence closely with its salvific actions. The limitless and perfect salvific efficacy that Buddha, understood as maximally great, must necessarily possess, requires that Buddha be present and active everywhere and at all times. Hence, Buddha must be permanent, without beginning or end in time.

The digests thus refuse to predicate any temporal properties of Buddha considered in se. Buddha is not earlier or later than anything, not temporally related to anything in any way. All Buddha's temporal properties are of the kind described in chapters four and five: seems to S to be P at t. Correlated with this refusal is a denial to Buddha of causal properties: Buddha is not caused to do anything, nor does Buddha cause any non-Buddha to do anything. Buddha is, metaphysically speaking, simply identical with all atemporal states of affairs.

comment by Rob Bensinger (RobbBB) · 2021-10-18T18:44:34.242Z · LW(p) · GW(p)

Regarding meditation, Kevin Fischer reported a surprising-to-me anecdote on FB yesterday:

I had one conversation with Soryu [the head of Monastic Academy / MAPLE] at a small party once. I mentioned that my feeling about meditation is that it’s really good for everyone when done for 15 minutes a day, and when done for much more than that forever, it’s much more complicated and sometimes harmful.

He straightforwardly agreed, and said he provides the environment for long term dedication to meditation because there is a market demand for that product. 🤷

Replies from: mr-hire, ioannes_shade
comment by Matt Goldenberg (mr-hire) · 2021-10-20T13:03:53.043Z · LW(p) · GW(p)

He straightforwardly agreed, and said he provides the environment for long term dedication to meditation because there is a market demand for that product. 🤷

 

FWIW as a resident of MAPLE, my sense is Soryu believes something like:

"Smaller periods of meditation will help you relax/focus and probably have only a very small risk of harm. Larger/longer periods of meditation come with deeper risks of harm,  but are also probably necessary to achieve awakening, which is important for the good of the world." 

 

But I am a newer resident and could easily be misunderstanding here.

comment by ioannes (ioannes_shade) · 2021-10-19T17:41:36.591Z · LW(p) · GW(p)

The correspondent's reply here is helpful color on how things can get more complicated (e.g. shifts in how you perceive the actions/intentions of yourself & others) and sometimes harmful (e.g. extended stays in Dark Night).

comment by wunan · 2021-10-18T00:16:39.822Z · LW(p) · GW(p)

On one hand, meditation -- when done without all the baggage, hypothetically -- seems like a useful tool. On the other hand, it simply invites all that baggage, because that is in the books, in the practicing communities, etc.

 

I think meditation should be treated similarly to psychedelics -- even for meditators who don't think of it in terms of anything supernatural, it can still have very large and unpredictable effects on the mind. The more extreme the style of meditation (e.g. silent retreats), the more likely this sort of thing is.

Any subgroups heavily using meditation seem likely to have the same problems as the ones Eliezer identified for psychedelics/woo/supernaturalism.

Replies from: Gunnar_Zarncke, Kenny
comment by Gunnar_Zarncke · 2021-10-18T12:49:08.421Z · LW(p) · GW(p)

I have pointed out the risks of meditation and meditation-like practices before. The last time was on the Shoulder Advisors [LW(p) · GW(p)] which does seem to fall on the boundary. I have experience with meditation and have been to extended silent meditation retreats with only positive results. Nonetheless, bad trips are possible - esp. without a supportive teacher and/or community. 

But I wouldn't make a norm against groups fostering meditation. Meditation depends on groups for support (though the same might be said about psychedelics). Meditation is also a known way to gain high levels of introspective awareness and to have many mental health benefits (many posts about that on LW I'm too lazy to find). The group norm about these things should be to require oversight by a Living Tradition of Knowledge [LW · GW] in the relevant area (for meditation e.g. an established - maybe even Buddhist - meditation school).

comment by Kenny · 2021-10-18T06:51:10.557Z · LW(p) · GW(p)

Psychedelics, woo, and meditation are very different things. They are often used in conjunction with each other because they're popular and tend to be discussed in the same contexts. Buddhism has incorporated meditation into its woo, while other religions have mostly delivered their woo through group-based services.

I like how some commenters have grouped psychedelics and meditation separately from the woo stuff, but it was a bit surprising to me to see Eliezer dismissing psychedelics along with woo in the same statements. He probably hasn't taken psychedelics before. Meditation is quite different as in it's more of a state of mind as opposed to an altered mentality. With psychedelics there is a clear distinction between when you are tripping and when you aren't tripping. With meditation, it's not so clear when you are meditating and when you aren't. Woo is just putting certain ideas into words, which has nothing to do with different mindsets/mentalities.

Replies from: Treszkai
comment by Laszlo_Treszkai (Treszkai) · 2021-10-18T14:34:19.926Z · LW(p) · GW(p)

Meditation is quite different as in it's more of a state of mind as opposed to an altered mentality. With psychedelics there is a clear distinction between when you are tripping and when you aren't tripping.

However, according to some, even meditation done properly can have negative effects, which would be similar to psychedelics but manifesting slower and through your own effort. Quoted from the book review:

Once you have meditated enough to reach the A&P Event, you’re stuck in the (very unpleasant) Dark Night Of The Soul until you can meditate your way out of it, which could take months or years.

Replies from: Kenny
comment by Kenny · 2021-10-18T15:52:44.149Z · LW(p) · GW(p)

I don't think I was advocating for either. I apologize if I came off as saying people should try psychedelics and meditation.

comment by Tomás B. (Bjartur Tómas) · 2021-10-18T14:53:28.143Z · LW(p) · GW(p)

Even in the case of Sam Harris, who seems relatively normal, he lost a decade of his life pursuing “enlightenment” through meditation - also notable is that this was spurred on by psychedelic use. Though I am sure he would not agree with the frame that it was a waste, I read his *Waking Up* as a bit of a horror story. For someone without his high IQ and indulgent parents, you could imagine more horrible ends.

I know of at least one person who was bright, had wild ambitious ideas, and now spends his time isolated from his family inwardly pursuing “enlightenment.” And this through the standard meditation + psychedelics combination. I find it hard to read this as anything other than wire-heading, and I think a good social norm would be one where we consider such behavior as about as virtuous as obsessive masturbation.

In general, for any drug that produces euphoria, especially spiritual euphoria, the user develops an almost romantic relationship with their drug, as the feelings it inspires are as intense as (and sometimes more intense than) familial love. One should at least be slightly suspicious of the benefits propounded by their users, who in many cases literally worship their drugs of choice.

Replies from: Aella, sil-ver
comment by Aella · 2021-10-18T18:54:30.252Z · LW(p) · GW(p)

fwiw as a data point here, I spent some time inwardly pursuing "enlightenment" with heavy and frequent doses of psychedelics for a period of 10 months and consider this to be one of the best things I've ever done. I believe it raised my resting set point happiness, among other good things, and I am still deeply altered (7 years later).

I do not think this is a good idea for everyone and lots of people who try would end up worse off. But I strongly object to this being seen as no more virtuous than obsessive masturbation. Sure, it might not be your thing, but this frame seriously misses a huge number of really important changes in my experience. And I get you might think I'm... brainwashed or something? by drugs? So I don't know what I could say that would convince you otherwise.

But I did have concrete things, like solving a pretty big section of childhood trauma (like: I had a burning feeling of rage in my chest before, and the burning feeling was gone afterwards), I had multiple other people comment on how different I was now (usually in regards to laughing easier and seeming more relaxed), I lost my anxiety around dying, my relationship to pain altered in such a way that I am significantly more mentally able to endure it than I was before, I also had a radically altered relationship to the physical environment (my living space looked very different before and after), and I produced a lot of art that I hadn't been producing before. Maybe this one is less concrete, but some part of me feels really deeply at peace, always, like it knows everything is going to be ok, and I didn't have that before.

There's a way in which I consider what I did wireheading, like really successful wireheading, but I think people often... fail to imagine wireheading properly? And the husk of wireheading, where you're sort of less of a person, is really terrifying. I agree that the husk-of-wireheading view makes psychedelics seem more sinister.

Replies from: Duncan_Sabien, Bjartur Tómas
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-18T19:00:37.680Z · LW(p) · GW(p)

In my culture, it's easy to look at "what happens at the ends of the bell curves" and "where's the middle of the bell curve" and "how tight vs. spread out is the bell curve (i.e. how different are the ends from the middle)" and "are there multiple peaks in the bell curves" and all of that, separately.

Like, +1 for the above, and I join the above in giving a reminder that rounding things off to "thing bad" or "thing good" is not just not required, it's actively unhelpful.

Policies often have to have a clear answer, such as the "blanket ban" policy that Eliezer is considering proposing.  But the black-or-white threshold of a policy should not be confused with the complicated thing underneath being evaluated.

comment by Tomás B. (Bjartur Tómas) · 2021-10-18T19:46:42.284Z · LW(p) · GW(p)

And I get you might think I'm... brainwashed or something? by drugs?

I'm not sure what you find implausible about that. Drugs do not literally propagandize the user, but many can hijack the reward system, and psychedelics in particular seem to alter beliefs in reliable ways. Psychedelics are also taken in a memetic context with many crystallized notions about what the psychedelic experience is, what enlightenment is, and that enlightenment itself is a mysterious but worthy pursuit.

The classic joke about psychedelics is they provide the feelings associated with profound insights without the actual profound insights. To the extent this is true, I feel this is pretty dangerous territory for a rationalist to tread.  

In your own case, unless I am misremembering, I believe on your blog you discuss LSD permanently ~~lowering your mathematical abilities~~ degrading your memory. This seems really, really bad to me…

Maybe this one is less concrete, but some part of me feels really deeply at peace, always, like it knows everything is going to be ok and I didn't have that before.

I’m glad your anxiety is gone, but I don't think everything is going to be alright by default. I would not like to modify myself to think that. It seems clearly untrue. 

Perhaps the masturbation line was going too far.  But the gloss of virtue that “seeking enlightenment” has strikes me as undeserved. 

Replies from: Aella, thoth-hermes
comment by Aella · 2021-10-18T19:55:41.799Z · LW(p) · GW(p)

Also fwiw, I took psychedelics in a relatively memetic-free environment. I'd been homeschooled and not exposed to hippie/drug culture, and especially not significant discussion around enlightenment. I consider this to be one of the reasons my experience was so successful; I didn't have it in relationship to those memes, and did not view myself as pursuing enlightenment (I know I said I was inwardly pursuing enlightenment in my above comment, but I was mostly riffing off your phrasing; in some sense I think it was true but it wasn't a conscious thing.)

LSD did not permanently lower my mathematical abilities, and if I suggested that I probably misspoke? I suspect it damaged my memory, though; my memory is worse now than before I took LSD. 

And sorry; by 'everything being ok' I didn't mean that I literally think situations will end up being the ones I want; I mean that I know I will be okay with whatever happens. Very related to my endurance of pain going up by quite a lot, and my anxiety about death disappearing.

Separately, I do think that a lot of the memes around psychedelics are... incomplete? It's hard to find a good word. Naive? Something around the difference between the aesthetic of a thing and the thing itself? And in that I might agree with you somewhere that "seeking enlightenment" isn't... virtuous or whatever.

Replies from: ChristianKl, Bjartur Tómas
comment by ChristianKl · 2021-10-20T08:45:29.744Z · LW(p) · GW(p)

LSD did not permanently lower my mathematical abilities, and if I suggested that I probably misspoke? I suspect it damaged my memory, though; my memory is worse now than before I took LSD. 

7 years seems like a long time, and most people's memory gets worse as they age. Was it also significantly worse directly after the 10 months of you being on that quest than before those 10 months?

comment by Tomás B. (Bjartur Tómas) · 2021-10-18T20:04:37.832Z · LW(p) · GW(p)

LSD did not permanently lower my mathematical abilities, and if I suggested that I probably misspoke? I suspect it damaged my memory, though; my memory is worse now than before I took LSD. 

Thanks. Corrected; I probably conflated the two. But my feelings towards that change are the same, so the line otherwise remains unchanged. I should probably organize my opinions/feelings on this topic and write an effortpost or something rather than hash it out in the comments.

comment by Thoth Hermes (thoth-hermes) · 2023-04-03T13:46:39.917Z · LW(p) · GW(p)

This is an interesting class of opinions; I wonder if believing the following:

I’m glad your anxiety is gone, but I don't think everything is going to be alright by default. I would not like to modify myself to think that. It seems clearly untrue. 

is at all correlated with also having this belief:

The classic joke about psychedelics is they provide the feelings associated with profound insights without the actual profound insights. To the extent this is true, I feel this is pretty dangerous territory for a rationalist to tread.

"Everything is not going to be alright by default" is sort of a vague belief to have, so is it worth having? I don't think this is necessarily either an anomalous belief nor a common-place belief. Admittedly, I have a hard time figuring out how I would modify myself to have this belief. I guess I am not that way by nature, but others can be. It would be interesting to find out what accounts for that difference. Ultimately, if it's more of an axiomatic belief, it would require a lot of argument about what kinds of other beliefs it leads to that are more beneficial for one to use over their lifetimes. 

About the profound insights, the way to check to see if they are actually profound is:

  1. Can it be articulated?
  2. Can you explain it in further detail from subsequent experiences?
  3. Does it remain with you even once the psychedelics or the "elevated" experience has worn off?

From personal experience, there are insights you can have which satisfy all three. I think lessened anxiety (which will be accompanied by reasons, though too long for this comment) is one of them.

comment by Rafael Harth (sil-ver) · 2021-10-24T20:49:08.601Z · LW(p) · GW(p)

Even in the case of Sam Harris, who seems relatively normal, he lost a decade of his life pursuing “enlightenment” through meditation

What kind of a cost-benefit analysis is this?

If you start from the assumption that something isn't useful, of course spending time on that thing is a waste. As far as I can see, this is the totality of your argument. You can do this for just about anyone, e.g.:

Even in the case of Scott Garrabrant who seems relatively normal, he lost a decade of his life pursuing "AI alignment" through the use of mathematics.

I happen to think that Scott did amazing work at MIRI, but objectively speaking, it is significantly harder to justify his time spent doing research at MIRI than that of Sam Harris pursuing enlightenment in India. Sam has released the Waking Up app, which is effectively a small company making a ton of money, donating 10% of its income to the most effective charities (arguably that alone is more than enough to pay for one decade of Sam's time), and it has thousands of people reporting enormous psychological benefits. I'm one of them; in terms of productivity alone, I'd say my time working has increased by at least 20% and has gotten at least 10% more effective, at a fairly low cost of time, negligible cost of money, and no discernible downside or risk. (I've never taken psychedelics.)

I get that you think Enlightenment is bullshit. (Or at least I assume that's what you think; correct me if this is wrong.) I strongly sympathize with this position because I think it's the logical conclusion if you evaluate the question via pattern matching. But the person you just cited is enormously successful & personally credits his decade in meditation for that, and he created a product directly causally downstream of that decade which has thousands more people reporting similar things. (And makes a ton of money.) I don't fault you for still thinking that the entire project is bullshit, but obviously his case is Bayesian evidence against your position.

comment by steven0461 · 2021-10-18T20:35:08.905Z · LW(p) · GW(p)

Or maybe you should move out of the Bay Area, a.s.a.p. (Like, half seriously, I wonder how much of this epistemic swamp is geographically determined. Not having the everyday experience, I don't know.)

I wonder what the rationalist community would be like if, instead of having been forced to shape itself around risks of future superintelligent AI in the Bay Area, it had been artificial computing superhardware in Taiwan, or artificial superfracking in North Dakota, or artificial shipping supercontainers in Singapore, or something. (Hypothetically, let's say the risks and opportunities of these technologies were equally great and equally technically and philosophically complex as those of AI in our universe.)

comment by Vaniver · 2021-10-17T20:53:54.404Z · LW(p) · GW(p)

I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc.

Hmm. I can't tell if the second half is supposed to be pointing at my position on Tarot [LW · GW], or the thing that's pretending to be my position but is actually confused?

Like, I think the hitrate for 'woo' is pretty low, and so I spend less time dredging there than I do other places which are more promising, but also I am not ashamed of the things that I've noticed that do seem like hits there. Like, I haven't delivered on my IOU [LW · GW] to explain 'authenticity' yet, but I think Circling is actually a step above practices that look superficially similar in a way we could understand rigorously, even if Circling is in a reference class that is quite high in woo, and many Circlers like the flavor of woo.

That said, I could also see an argument that's like "look, we really have to implement rules like this at a very simple level or they will get bent to hell, and it's higher EV to not let in woo."

Replies from: Holly_Elmore
comment by Holly_Elmore · 2021-10-17T22:21:59.842Z · LW(p) · GW(p)

Would it be acceptable to regard practices like self-reflective tarot and circling and other woo-adjacent stuff as art rather than an attempt at rationality? I think it is a danger sign when people are claiming those highly introspective and personal activities as part of their aspiring to rationality. Can we just do art and personal emotional and creative discovery and not claim that it’s directly part of the rationalist project?

Replies from: Vaniver
comment by Vaniver · 2021-10-17T23:26:20.430Z · LW(p) · GW(p)

I mean, I also do things that I would consider 'art' that I think are distinct from rationality. But, like, just like I wouldn't really consider 'meditation' an art project instead of 'inner work' or 'learning how to think' or w/e, I wouldn't really consider Circling an art project instead of those things.

Replies from: Holly_Elmore
comment by Holly_Elmore · 2021-10-18T06:36:06.283Z · LW(p) · GW(p)

I would consider meditation and circling to have the same relationship to “discovering the truth” as art. The insights can be real and profound but are less rigorous and much more personal.

Replies from: Benquo
comment by Benquo · 2021-10-25T04:15:13.698Z · LW(p) · GW(p)

I think we need more than two categories here. We can't allocate credit for input, only output. People can learn things by carefully observing stuff, but we shouldn't get to mint social capital as rationalists for hours meditating any more than Darwin's reputation should depend directly on time spent thinking about tortoises.

Discerning investors might profit by recognizing leading indicators of high productivity, but that only works if incentives are aligned, which means, eventually, objective tests. In hindsight it seems very unfortunate that MIRI was not mostly funded by relevant-to-its-expertise prediction markets.

Good art seems like it should make people sexy, not credible.

comment by ChristianKl · 2021-10-18T09:19:06.684Z · LW(p) · GW(p)

Instead of declaring group norms, I think it would be worth it to have posts that actually lay out the case in a convincing manner. In general there are plenty of contrarian rationalists for whom "it's a group norm" is not enough of a reason to not do something. Declaring a group norm against drugs might get them to be more secretive about it, which is bad.

Trying to solve issues about people doing the wrong things with group norms instead of with deep arguments doesn't seem to be the rationalist way.

Replies from: Gunnar_Zarncke, TekhneMakre
comment by Gunnar_Zarncke · 2021-10-18T12:50:08.106Z · LW(p) · GW(p)

Can you propose a norm that avoids the pitfalls?

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T15:12:35.574Z · LW(p) · GW(p)

Have the important conversations about why you shouldn't take drugs / engage in woo openly on LessWrong, instead of having them only privately where they don't reach many people. Then confront people who suggest something in that direction with those posts.

comment by TekhneMakre · 2021-10-18T11:11:12.078Z · LW(p) · GW(p)

+1

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-17T20:54:20.301Z · LW(p) · GW(p)

There are some potential details that might swing one way or the other (Vaniver's comment points at some), but as-written above, and to the best of my ability to predict what such a proposal would actually look like once Eliezer had put effort into it:

I expect I would wholeheartedly and publicly endorse it, and be a signatory/adopter.

comment by Unreal · 2021-10-18T19:59:00.402Z · LW(p) · GW(p)

I feel tempted to mostly agree with Eliezer here... 

Umm. To relay a trad Buddhist perspective: you're not (traditionally) supposed to make a full-blown attempt at 'enlightenment' or 'insight' until you've spent a fairly extensive amount of time working on personal ethics & discipline. I think an unnamed additional step is to establish your basic needs, like good community, health, food, shelter, etc. It's also recommended that you avoid drugs, alcohol, and even sex.

There's also an important sense I get from trad Buddhism, which is: If you hold a nihilistic view, things will go sideways. A subtle example of nihilism is the sense that "It doesn't matter what I do or think because it's relatively inconsequential in the scheme of things, so whatever." or a deeper hidden sense of "It doesn't really matter if everyone dies." or "I feel it might be better if I just stopped existing?" or "I can think whatever I want inside my own head, including extensive montages of murder and rape, because it doesn't really affect anything." 

These views seem not uncommon among modern people, and subtler forms seem very common. Afaict from reading biographies, modern people have more trouble with nihilistic or self-hating views like these than people who grew up in e.g. Thai forest villages 100+ years ago. Because of this, I recommend a lot more caution with meditation and similar things. 

Modern Westerners and their flavors of non-trad Buddhism are more Wild West. Maybe not surprising. They don't really like the traditional, renunciate, careful, slow aspects. They add in the drugs and the spirit of 'move fast, break things'. They get addicted to the sense of progress and acceleration.

I would... personally... warn against that. Although... it's complicated, since it does seem helpful to various people. People also seem really into their drugs, sex, and alcohol. ^_^; 

I get ... mildly squicked out by people trying to get enlightened using lots of drugs. I get squicked out if people seem pressured or anxious about trying to get enlightened asap. I have good friends who are in these categories. I am not qualified to know what's truly good for them. But I'd personally feel a bit more comfortable if they eventually moved off of drug use. 

While I personally strive for enlightenment in my own meditation practice at MAPLE, I am coming to realize the folly of "trying to make it happen as quickly as possible." I am very wary of using drugs as a way to try to speed up "the process". I am also wary of using pressure or brute force as a way to speed up "the process". Seems like a form of hubris. 

Rationalists seem... susceptible?... to hubris. :/ and... nihilism :\ :/ :\ 
I consider myself an example. ! 

comment by Chris_Leong · 2021-10-18T02:46:54.345Z · LW(p) · GW(p)

I guess I'd suggest thinking about targets carefully. A lot of people are going to experiment with psychedelics anyway and it's safer for people to do so within a group, assuming the group is actually trustworthy and not attempting to brainwash people.

comment by Kaj_Sotala · 2021-10-17T20:26:51.865Z · LW(p) · GW(p)

OTOH a significant number of (seemingly sane) people credit psychedelics for important personal insights and mental health/trauma healing. Psychedelics seem to be showing enough promise for that for the psychiatric establishment to be getting interested in them again [1, 2] despite them having been stigmatized for decades, and AFAIK the existing medical literature generally finds them to be low-risk [3, 4].

Replies from: ioannes_shade, Avi Weiss, None
comment by ioannes (ioannes_shade) · 2021-10-19T17:24:00.196Z · LW(p) · GW(p)

It's interesting that a lot of the discussion about psychedelics here is arguing from intuitions and personal experience, rather than from the trial results that have been coming out. 

I do think that psychedelic experiences vary a lot from person-to-person and trip-to-trip, and that psychedelics aren't for everyone. (This variability probably isn't fully captured by the trial results because study participants are carefully screened for lots of factors that may be contraindicated.)

comment by Avi (Avi Weiss) · 2021-10-18T08:40:57.307Z · LW(p) · GW(p)

Psilocybin-based psychedelics are indeed considered low-risk both in terms of addiction and overdose. This chart sums things up nicely, and is a good thing to 'pin on your mental fridge':

https://upload.wikimedia.org/wikipedia/commons/thumb/a/a5/Drug_danger_and_dependence.svg/1920px-Drug_danger_and_dependence.svg.png

You want to stay as close as possible to the bottom left corner of that graph!

Replies from: None
comment by [deleted] · 2021-10-18T15:05:37.143Z · LW(p) · GW(p)

This graph shows death and addiction potential, but it doesn't say anything about sanity.

Replies from: Avi Weiss
comment by Avi (Avi Weiss) · 2021-10-18T15:10:23.642Z · LW(p) · GW(p)

Correct - but they are low-risk for those factors (addiction and/or overdose).

comment by [deleted] · 2021-10-18T15:04:22.374Z · LW(p) · GW(p)

EDIT: while I still think this is true for most people, I want to contextualize this by saying that people should always practice harm-reduction when taking psychedelics: do their own research, consider their own history and risk profile, and start with a low dose. I don't mean to make anyone feel threatened by arguing in favor of psychedelic use. Your boundaries are sacred and my opinion is just an opinion.

Psychedelics don't have any inherent positive or negative effect; they just make you more open to suggestion. They increase your learning rate. New evidence (i.e. your current lifestyle) will start weighing more on you than your prior (i.e. everything you've learned since you were a child).

If you are in a context that promotes healthy ideas, then psychedelics will help you absorb them faster. If you are in a cult, they'll make you go off the rails faster.
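(One rough way to cash out the "learning rate" framing above -- a sketch of the metaphor, not a claim about pharmacology: model a belief or habit $b_t$ as an exponentially weighted average of experience $x_t$ with learning rate $\eta$:

$$b_{t+1} = (1 - \eta)\,b_t + \eta\,x_t \quad\Longrightarrow\quad b_{t+1} = \eta \sum_{k=0}^{t} (1-\eta)^k\, x_{t-k} + (1-\eta)^{t+1}\, b_0.$$

A larger $\eta$ down-weights the long tail of prior history and up-weights recent experience, which is why the surrounding context matters so much: the update is faster in whatever direction the current environment pushes.)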

I take them all the time and I'm better for it, but I would never take them in Berkeley.

Replies from: Holly_Elmore, mr-hire
comment by Holly_Elmore · 2021-10-18T21:30:59.125Z · LW(p) · GW(p)

This is not true. Some people are significantly less robust to the effects of psychedelics. Even a meditation retreat was enough to make me go off the rails— I would never take psychedelics. But some people can’t feel anything at those retreats, and it seems like psychedelics just open them up a bit. The same predispositions that lead people to develop schizophrenia and bipolar disorder make them vulnerable to destabilization from psychedelics.

Replies from: None
comment by [deleted] · 2021-10-22T12:54:14.669Z · LW(p) · GW(p)

I wanted to dig up some numbers that put your claim in context. I have also seen a small minority of people that respond badly to psychedelics and even to meditation retreats (especially vipassana). But the first few studies did not find a connection between psychedelic use and mental health issues. I still feel this needs to be investigated.

However, I insist that my claim is true for a large majority, and an overcorrection toward universally recommending against psychedelics would be net-negative (yes, really). Instead we should be investigating how to ensure that these negative responses don't happen.

I'll make my model more precise: besides increasing your learning rate, psychedelics open you up to negative emotional stimuli that are being habitually suppressed because they would otherwise destabilize you. Naturally this opening up brings some destabilization along with it, which requires some skill to navigate. A good shaman or teacher will be able to teach you those skills, but a bunch of videos (like in vipassana) or just drinking the kool-aid with a bunch of friends won't do.

If I may ask (you don't have to answer): what retreat did you go to and how did you go off the rails?

comment by Matt Goldenberg (mr-hire) · 2021-10-18T18:57:20.122Z · LW(p) · GW(p)

This may be slightly overconfident. My guess is that the effects can vary wildly depending on the individual.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-18T19:02:20.653Z · LW(p) · GW(p)

It's definitely overconfident.  Source: twenty years of listening to a wide range of stories from my mother's experiences as a mental health nurse in a psychiatric emergency room.  Some of those psychedelic-related cases involved all sorts of confounding factors, and some of them just didn't.

comment by iceman · 2021-10-17T19:39:19.807Z · LW(p) · GW(p)

I want to second this. I worked for an organization where one of the key support people took psychedelics and just... broke from reality. This was both a personal crisis for him and an organizational crisis for the company, which had to deal with the sudden departure of a bus-factor-1 employee.

I suspect that psychedelic damage happens more often than we think because there's a whole lobby which buys the expand-your-mind narrative.

comment by jessicata (jessica.liu.taylor) · 2021-10-17T17:09:23.108Z · LW(p) · GW(p)

I don't regret having used psychedelics, though I understand why people might take what I've written as a reason not to try psychedelics.

Replies from: CronoDAS
comment by CronoDAS · 2021-10-17T19:50:59.592Z · LW(p) · GW(p)

The most horrific case I know of LSD being involved in a group's downward spiral from weird and kinda messed up to completely disconnected from reality and really fucking scary is the Manson family, but that's far from a typical example. But if you do want to be a cult leader, LSD does seem to do something that makes the job a lot easier.

comment by rationalistthrowaway · 2021-10-19T03:28:26.749Z · LW(p) · GW(p)

(Note: I feel nervous posting this under my own name, in part because my Dad is considering transitioning at the moment and I worry he'd read it as implying some hurtful thing I don't mean, but I do want to declare the conflict of interest that I work at CFAR or MIRI).

The large majority of folks described in the OP as experiencing psychosis are transgender. Given the extremely high base rate of mental illness in this demographic, my guess is this is more explanatorily relevant than the fact that they interacted with rationalist institutions or memes. 

I do think the memes around here can be unusually destabilizing. I have personally experienced significant psychological distress thinking about s-risk scenarios, for example, and it feels easy to imagine how this distress could have morphed into something serious if I'd started with worse mental health. 

But if we're exploring analogies between what happened at Leverage and these rationalist social circles, it strikes me as relevant to ask why each of these folks were experiencing poor mental health. My impression from reading Zoe's writeup is that she thinks her poor mental health resulted from memes/policies/conversations that were at best accidentally mindfucky, and often intentionally abusive and manipulative.

In contrast, my impression of what happened in these rationalist social circles is more like "friends or colleagues earnestly introduced people (who happened to be drawn from a population with unusually high rates of mental illness) to upsetting plausible ideas."

Replies from: Benquo, jessica.liu.taylor, jessica.liu.taylor
comment by Benquo · 2021-10-19T16:27:03.995Z · LW(p) · GW(p)

As I understand it you're saying:

At Leverage people were mainly harmed by people threatening them, whether intentionally or not. By contrast, in the MIRI/CFAR social cluster, people were mainly harmed by plausible upsetting ideas. (Implausible ideas that aren't also threats couldn't harm someone because there's no perceived incentive to believe them.)

An example of a threat is Roko's Basilisk. An example of an upsetting plausible idea was the idea in early 2020 that there was going to be a huge pandemic soon. Serious attempts were made to suppress the former meme and promote the latter.

If someone threatens me I am likely to become upset. If someone informs me about something bad, I am also likely to become upset. Psychotic breaks are often a way of getting upset about one's prior situation. People who transition genders are also usually responding to something in their prior situation that they were upset about.

Sometimes people get upset in productive ways. When Justin Shovelain called me to tell me that there was going to be a giant pandemic, I called up some friends and talked through self-quarantine thresholds, resulting in this blog post. Later, some friends and I did some other things to help out people in danger from COVID, because we continued to be upset about the problem. Zack Davis's LessWrong posts on epistemology also seem like a productive way to get upset.

Sometimes people get upset in unproductive ways. Once, a psychotic friend peed on their couch "in order to make things worse" (their words). People getting upset in unproductive ways is an important common unsolved problem.

The rate at which people are getting upset unproductively is an interesting metric but a poor target because while it is positively related to how bad problems are, it is also inversely related to the flow of information about problems. But that means it can be inversely related to the rate at which problems are getting solved and therefore to the rate at which things are getting better.

comment by jessicata (jessica.liu.taylor) · 2021-10-19T04:42:43.395Z · LW(p) · GW(p)

The large majority of folks described in the OP as experiencing psychosis are transgender.

That would be, arguably, 3 of the 4 cases of psychosis I knew about (if Zack Davis is included as transgender) and not the case of jail time I knew about. So 60% total. [EDIT: See PhoenixFriend's comment [LW(p) · GW(p)], there were 4 cases who weren't talking with Michael and who probably also weren't trans (although that's unknown); obviously my own knowledge is limited to my own social circle and people including me weren't accounting for this in statistical inference]

My impression from reading Zoe’s writeup is that she thinks her poor mental health resulted from memes/policies/conversations that were at best accidentally mindfucky, and often intentionally abusive and manipulative.

In contrast, my impression of what happened in these rationalist social circles is more like “friends or colleagues earnestly introduced people (who happened to be drawn from a population with unusually high rates of mental illness) to upsetting plausible ideas.”

These don't seem like mutually exclusive categories? Like, "upsetting plausible ideas" would be "memes" and "conversations" that could include things like AI probably coming soon, high amounts of secrecy being necessary, and the possibility of "mental objects" being transferred between people, right?

Even people not at the organizations themselves were an important cause; everyone was in a similar social context and responding to it, e.g. a lot of what Michael Vassar said was in response to, and critical of, lots of ideas institutional people had.

It seems like something strange is happening here: some ideas that differed from the mainstream get labeled as "memes", while others, some of which run counter to the first set of ideas and some of which run counter to mainstream understanding, get labeled as "upsetting plausible ideas", with more causal attribution to the second class.

If a certain scene is a "cult" and people who "exit the cult" have higher rates of psychosis than people who don't even attempt to "exit the cult" then this is consistent with the observations so far. Which could happen in part due to the ontological shift necessary to "exit the cult" and also because exiting would increase social isolation (increasing social dependence on a small number of people), which is a known risk factor.

Both Zoe and I were at one time "in a cult" and at a later time "out of the cult" with some in-between stage of "believing what we were in was a cult", where both being "in the cult" and "coming to believe what we were in was a cult" involved "memes" and "upsetting plausible ideas", which doesn't seem like enough to differentiate.

Overall this doesn't immediately match my subjective experience and seems like it's confusing a lot of things.

[EDIT: The case I'm making here is even stronger given PhoenixFriend's comment.]

Replies from: Viliam
comment by Viliam · 2021-10-19T11:55:15.494Z · LW(p) · GW(p)

exiting would increase social isolation (increasing social dependence on a small number of people), which is a known risk factor

If exiting makes you socially isolated, it means that (before exiting) all/most of your contacts were within the group.

That suggests that the safest way to exit is to gradually start meeting new people outside the group, start spending more time with them and less time with other group member, until the majority of your social life happens outside the group, which is when you should quit.

Cults typically try to prevent you from doing this, to keep the exit costly and dangerous. One method is to monitor you and your communications all the time. (For example, Jehovah's Witnesses are always out there in pairs, because they have a sacred duty to snitch on each other.) Another way is to keep you at the group compound where you simply can't meet non-members. Yet another way is to establish a duty to regularly confess what you did and who you talked to, and to chastise you for spending time with unbelievers. Another method is simply to keep you so busy all day long that you have no time left to interact with strangers.

To invert this: a healthy group will provide you with enough private free time. (With the emphasis on all three words: "free", "private", and "enough".)

Both Zoe and I were at one time "in a cult"

We know that Zoe had little free time, she had to spend a lot of time reporting her thoughts to her supervisors, and she was pressured to abandon her hobbies and not socialize.

2–6hr long group debugging sessions in which we as a sub-faction (Alignment Group) would attempt to articulate a “demon” which had infiltrated our psyches from one of the rival groups, its nature and effects, and get it out of our systems using debugging tools.

it was suggested I cancel my intended trip to Europe to show my commitment, which I did.

There was no vacation policy, which seemed good, but in reality panned out in my having no definitively free, personal time that couldn’t be infringed upon by expectations of project prioritization.

One day, I was debugging with a supervisor and we got to the topic of my desire to perform as an actor. [...] he thought that wanting to do acting [...] was “honestly sociopathic.”

We were kept extremely busy. [...] Here are four screenshots of my calendar, showing an average month in my last 6 months at Leverage.

I was regularly left with the feeling that I was low status, uncommitted, and kind of useless for wanting to socialize on the weekends or in the evenings.

Also, the group belief that if you meet outsiders they may "mentally invade you", the rival groups (does this refer to rationalists and EAs? not sure) will "infiltrate" you with "demons", and ordinary people will intentionally or subconsciously "leave objects" in you... does not sound like it would exactly encourage you to make friends outside the group, to put it mildly.

Now, do you insist that your experience in MIRI/CFAR was of the same kind? -- Like, what was your schedule, approximately? Did you have free weekends? Were you criticized for socializing with people outside MIRI/CFAR, especially with "rival groups"? Did you have to debug your thoughts and exorcise the mental invasions left by your interaction with nonmembers? If possible, please be specific.

Replies from: Vaniver, jessica.liu.taylor, Linch
comment by Vaniver · 2021-10-19T18:12:12.729Z · LW(p) · GW(p)

Were you criticized for socializing with people outside MIRI/CFAR, especially with "rival groups"?

As a datapoint, while working at MIRI I started dating someone working at OpenAI, and never felt any pressure from MIRI people to drop the relationship (and he was welcomed at the MIRI events that we did, and so on), despite Eliezer's tweets discussed here [LW · GW] representing a pretty widespread belief at MIRI. (He wasn't one of the founders, and I think people at MIRI saw a clear difference between "founding OpenAI" and "working at OpenAI given that it was founded", so idk if they would agree with the frame that OpenAI was a 'rival group'.)

comment by jessicata (jessica.liu.taylor) · 2021-10-19T13:52:04.252Z · LW(p) · GW(p)

That suggests that the safest way to exit is to gradually start meeting new people outside the group, start spending more time with them and less time with other group member, until the majority of your social life happens outside the group, which is when you should quit.

This is what I did, it was just still a pretty small social group, and getting it and "quitting" were part of the same process.

(does this refer to rationalists and EAs? not sure)

I think it was other subgroups at Leverage, at least primarily. So "mental objects" would be a consideration in favor of making friends outside of the group. Unless one is worried about spreading mental objects to outsiders.

Now, do you insist that your experience in MIRI/CFAR was of the same kind?

Most of this is answered in the post, e.g. I made it clear that the over-scheduling issue was not a problem for me at MIRI, which is an important difference. I was certainly spending a lot of time outside of work doing psychological work, and I noted friendships including one with a housemate formed around a shared interest in such work (Zoe notes that a lot of things on her schedule were internal psychological work). There wasn't active prevention of talking to people outside the community, but people tend not to do so much anyway, which is influenced by soft social pressure (e.g. looking down on people as "normies"). Zoe also is saying a lot of the pressure at Leverage was soft/nonexplicit, e.g. "being looked down on" for taking normal weekends.

I do remember Nate Soares, who was executive director at the time, telling me that "work-life balance is overrated/not really necessary", and if I'd been more sensitive to this I might have spent a lot more time on work. (I'm not even sure he's "wrong", in that the way "normal people" do this has a lot of problems and integrating different domains of life can help sometimes; it still could have been taken as encouragement in the direction of working on weekends etc.)

comment by Linch · 2021-10-19T23:21:48.855Z · LW(p) · GW(p)

Just want to register that this comment seemed overly aggressive to me on a first read, even though I probably have many sympathies in your direction (that Leverage is importantly disanalogous to MIRI/CFAR).

comment by jessicata (jessica.liu.taylor) · 2021-10-19T16:36:19.426Z · LW(p) · GW(p)

The following recent Twitter thread by Eliezer is interesting in the context of the discussion of whether "upsetting but plausible ideas" are coming from central or non-central community actors, and Eliezer's description of Michael Vassar as "causing psychotic breaks":

if you actually knew how deep neural networks were solving your important mission-critical problems, you'd never stop screaming

(no, I don't know how they're doing it either, I just know that you'd update in a predictable net direction if you found out)

(in reply to "My model of Eliezer is not so different from his constantly screaming, silently to himself, at all times, pausing only to scream non-silently to others, so he doesn't have to predictably update in the future.":)

This state of affairs sounds indistinguishable from coherent Bayesian thought inside a world like this one, so I suppose that's confirmation, yes.

A few takeaways from this:

  1. Obviously, Eliezer is saying that there is a plausible but extremely upsetting idea that could be learned by studying neural networks sufficiently competently. [EDIT: Maybe I'm wrong that this is indicating neural nets being powerful and is just indicating them being unreliable for mission-critical applications? Both interpretations seem plausible...]

  2. This statement, itself, is plausible and upsetting, though presumably less upsetting than if one actually knew the thing that could be learned about neural networks.

  3. Someone who was "constantly screaming" would be considered, by those around them, to be having a psychotic break (or an even worse mental health problem), and be almost certain to be psychiatrically incarcerated.

  4. Eliezer is, to all appearances, trying to convey these upsetting ideas on Twitter.

  5. It follows that, to the extent that Eliezer is not "causing psychotic breaks", it's only because he's insufficiently capable of causing people to believe "upsetting but plausible ideas" that he thinks are true, i.e. because he's failing (or perhaps not-really-trying, only pretending to try) to actually convey them.

Replies from: TurnTrout, Viliam
comment by TurnTrout · 2021-10-19T18:19:50.803Z · LW(p) · GW(p)

This does not seem like the obvious reading of the thread to me.

Obviously, Eliezer is saying that there is a plausible but extremely upsetting idea that could be learned by studying neural networks sufficiently competently.

I think Eliezer is saying that if you understood on a gut level how messy deep networks are, you'd realize how doomed prosaic alignment is. And that would be horrible news. And that might make you scream, although perhaps not constantly. 

After all, Eliezer is known to use... dashes... of colorful imagery. Do you really think he is literally constantly screaming silently to himself? No? Then he was probably also being hyperbolic about how he truly thinks a person would respond to understanding a deep network in great detail.

That's why I feel that your interpretation is grasping really hard at straws. This is a standard "we're doomed by inadequate AI alignment" thread from Eliezer.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-19T18:37:41.830Z · LW(p) · GW(p)

Even though it's an exaggeration, Eliezer is, with this exaggeration, trying to indicate an extremely high level of fear, off the charts compared with what people are normally used to, as a result of really taking in the information. Such a level of fear is not clearly lower than the level of fear experienced by the psychotic people in question, who experienced e.g. serious sleep loss due to fear.

Replies from: dxu
comment by dxu · 2021-10-19T18:58:58.401Z · LW(p) · GW(p)

I strong-upvoted both of Jessica's comments in this thread despite disagreeing with her interpretation in the strongest possible terms. I did so because I think it is important to note that, for every "common-sense" interpretation of a community leader's words, there will be some small minority who interpret it in some other (possibly more damaging) way. And while I think (importantly) this does not imply it is the community leader's responsibility to manage their words in such a way that no misinterpretation is possible (which I think is simply completely unfeasible), I am nonetheless in favor of people sharing their (non-standard) interpretations, given the variation in potential responses.

As Eliezer once said (I'm paraphrasing from memory here, so the following may not be word-for-word accurate, but I am >95% confident I'm not misremembering the thrust of what he said), "The question I have to ask myself is, will this drive more than 5% of my readers insane?"

EDIT: I have located the text of the original comment [LW(p) · GW(p)]. I note (with some vindication) that once again, it seems that Eliezer was sensitive to this concern way ahead of when it actually became a thing.

comment by Viliam · 2021-10-19T18:54:50.036Z · LW(p) · GW(p)

Hm, I thought that the upsetting thing is how neural networks work in general. Like the ones that can correctly classify pictures with 99% probability... and then you slightly adjust a few pixels in such a way that a human sees no difference, but the neural network suddenly makes a completely absurd claim with high certainty.

And, if you are using neural networks to solve important problems, and become aware of this, then you realize that despite them doing a great job in 99% of situations and a random stupid thing in the remaining 1%, there is actually no limit to how insanely wrong they can get, and that it can happen in circumstances that would seem perfectly harmless to you. That the underlying logic is just... inhuman.

(To make an analogy, imagine that you hire a human to translate from French to English. The human is pretty good but not perfect, which means that he gets 99% right. In the remaining 1% he either translates the word incorrectly or says that he doesn't know. These two options are the only results you expect. -- Now instead of a human, you hire a robot. He also translates 99% correctly and 1% incorrectly or with no output. But in addition to this, if you give him a specifically designed input, he will say a complete absurdity. Like, he would translate "UN CHAT" as "A CAT", but when you strategically add a few dots and make it "ỤN ĊHAṬ", he will suddenly insist that it means "CENTRUM FOR APPLIED RATIONALITY" and will assign a 99.9999999% certainty to this translation. Note that this is not its usual reaction to dots; the input papers usually contain some impurities or random dots, and the algorithm has always successfully ignored them... until now. -- The answer is not just wrong but absurdly wrong; it happened in a situation where you felt quite sure nothing could go wrong, and the robot didn't even feel uncertain.)
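For readers who haven't seen how the "strategically added dots" are computed: the standard construction is the fast gradient sign method (FGSM). Below is a minimal sketch, assuming PyTorch; the tiny untrained linear "classifier" and random input are stand-ins of my own, not anything from this thread, so the specific numbers mean nothing. With real trained deep networks an imperceptibly small perturbation is typically enough.

```python
# Minimal FGSM sketch: nudge every input value slightly in the direction that
# most increases the loss for the originally predicted class.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in classifier over flattened 28x28 "images" (untrained, illustrative only).
model = torch.nn.Linear(28 * 28, 10)

x = torch.rand(1, 28 * 28, requires_grad=True)   # the original "UN CHAT" input
orig_label = model(x).argmax(dim=1)

loss = F.cross_entropy(model(x), orig_label)
loss.backward()

epsilon = 0.25                                    # perturbation size (large, for the toy model)
x_adv = (x + epsilon * x.grad.sign()).detach()    # the "ỤN ĊHAṬ" input

probs_adv = F.softmax(model(x_adv), dim=1)
print("original label:         ", orig_label.item())
print("label after perturbation:", probs_adv.argmax(dim=1).item())
print("confidence in new label: ", probs_adv.max().item())
```

The label often flips even for this toy model, and the network reports high confidence in the new answer, which is the "didn't even feel uncertain" part of the analogy.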

Obviously, Eliezer is saying that there is a plausible but extremely upsetting idea that could be learned by studying neural networks sufficiently competently.

So, I think that you got this part wrong (and that putting "obviously" in front of it makes this weirdly ironic in the given context), and the following conclusions are therefore also wrong.

Eliezer is simply saying (not "constantly screaming") "do not trust neural networks, they randomly make big errors". That message, even if perceived 100% correctly, should not cause a psychotic break in the average listener.

Replies from: Vaniver, jessica.liu.taylor
comment by Vaniver · 2021-10-19T22:17:49.005Z · LW(p) · GW(p)

they randomly make big errors

I think it's important that the errors are not random; I think you mean something more like "they make large opaque errors."

comment by jessicata (jessica.liu.taylor) · 2021-10-19T19:26:25.635Z · LW(p) · GW(p)

Given what else Eliezer has said, it's reasonable to infer that the screaming is due to the possibility of everyone dying due to neural network based AIs being powerful but unalignable, not merely that your AI application might fail unexpectedly.

It's really strange to think the idea isn't upsetting when Eliezer says understanding it would cause "constant screaming". Even if that's an exaggeration, really??????? Maybe ask someone who doesn't read LW regularly whether Eliezer is saying the idea you could get by knowing how neural nets work is upsetting, I think they would agree with me.

Replies from: localdeity
comment by localdeity · 2021-10-21T21:05:22.116Z · LW(p) · GW(p)

He specified "mission-critical".  An AI's ability to take over other machines in the network, take over the internet, manufacture grey goo, etc. (choose your favorite doomsday scenario), is not really related to how mission-critical its original task was.  (In fact, someone's AI to choose the best photo filters to match the current mood on Instagram to maximize "likes" seems both more likely to have arbitrary network access and less likely to have careful oversight than a self-driving car AI.)  Therefore I do think his comment was about the likelihood of failure in the critical task, and not about alignment.

I think he meant something like this:  The neural net, used e.g. to recognize cars on the road, makes most of its deductions based on accidental correlations and shortcuts in the training data—things like "it was sunny in all the pictures of trucks", or "if it recognizes the exact shape and orientation of the car's mirror, then it knows which model of car it is, and deduces the rest of the car's shape and position from that, rather than by observing the rest of the car".  (Actually they'd be lower-level and less human-legible than this.  It's like someone parsing tables out of Wikipedia pages' HTML, but instead of matching th/tr/td elements, it just counts "<" characters, and God help us if one of the elements has an extra < due to holding a link or something.)  If you understood just how fragile and divorced from reality the shortcuts were, while you were sitting in such a car rushing down the highway, you would scream.

(The counterargument to screaming, it seems to me, is that it's relying on 100 different fragile accidental correlations, any 70 of which are sufficient—and it's unlikely that more than 10 of them will break at once, especially if the neural net gets updated every few months, so the ensemble is robust even though the parts are not.  I expect one could develop confidence in this by measuring just how overdetermined the "this is a car" deductions are, and how much they vary.  But that requires careful measurement and calculation, and many people might not get past the intuitive "JFC my life depends on the equivalent of 100 of those reckless HTML-parsing shortcuts, I'm going to die".  And I expect there are plenty of applications where the ensemble really is fragile and has a >10% chance of serious failure within a few months.)

(NB. I've never worked on neural nets.)
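For what it's worth, the ensemble counterargument above can be made quantitative with a quick back-of-the-envelope calculation. A rough sketch (plain Python, standard library only) using the illustrative numbers from the parenthetical: 100 cues, any 70 sufficient, and an assumed per-cue break probability. It also assumes the cues break independently, which is exactly the thing careful measurement would need to check; correlated breakage is where the reassurance falls apart.

```python
# Chance that fewer than `need` of `n` fragile cues survive, if each cue
# independently breaks with probability `p_break` (binomial tail).
from math import comb

def p_catastrophe(n=100, need=70, p_break=0.10):
    p_survive = 1.0 - p_break
    return sum(
        comb(n, k) * p_survive**k * (1.0 - p_survive)**(n - k)
        for k in range(need)  # k = number of surviving cues, 0..need-1
    )

for p_break in (0.10, 0.20, 0.30):
    print(f"per-cue break probability {p_break:.0%}: "
          f"overall failure chance ≈ {p_catastrophe(p_break=p_break):.2e}")
```

On these assumed numbers the ensemble looks very safe at a 10% per-cue break rate and alarming near 30%, which is roughly the point: whether screaming is warranted depends on measurements most people in the passenger seat haven't made.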

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-21T21:10:07.466Z · LW(p) · GW(p)

Ok, I see how this is plausible. I do think that the reply to Zvi adds some context where Zvi is basically saying "Eliezer is always screaming, taking pauses to scream at others", and the thing Eliezer is usually expressing fear about is AI killing everyone. I see how it could go either way though.

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-19T22:13:00.012Z · LW(p) · GW(p)

One thing that has been bothering me a lot is that it seems like it's really likely that people don't realize just how distinct CFAR and MIRI are.

I've worked at each org for about three years total.

Some things which make it reasonable to lump them together and use the label "CFAR/MIRI":

  • They both descend from what was at one time a single organization.
  • They had side-by-side office spaces for many years, including a shared lunch table in the middle where people from both orgs would hang out and chat.
  • There are a lot of people common to both orgs (e.g. Anna does work for both orgs, I moved from CFAR to MIRI).
  • CFAR ran many explicit programs on MIRI's behalf (e.g. MSFP, or less directly but still pretty clearly AIRCS).
  • Most MIRI staff have been to a CFAR workshop.  Most MIRI staff have participated in at least one debugging session with a CFAR staff member (this was a service CFAR explicitly offered for a while).
  • Both orgs are explicitly concerned with navigating existential risk from unaligned artificial intelligence.
  • If MIRI "needed help," CFAR would be there.  If CFAR "needed help," MIRI would be there.  They are explicitly friendly, allied orgs.
  • The "most CFARish" MIRI employee has a lot in common, both in their own traits and in their historical experiences, with the "most MIRIish" CFAR employee.

That's the motte.  All of that is true, and reasonable, and relevant, and probably people could name a couple of other similarly true and reasonable and relevant points.

The bailey:

  • The overlap of people like Anna is representative of the overlap between the two orgs; MIRI people and CFAR people see themselves as being in a single bucket.
  • The median CFAR employee and the median MIRI employee interact frequently.
  • CFAR and MIRI share a single memetic pool; what's being discussed in one org is being discussed in the other; what's trendy in one org is what's trendy in the other.
  • CFAR and MIRI coordinate on goals as a rule, as opposed to when it's convenient; they're working together on a single plan.
  • The day-to-day experience of a CFAR employee strongly resembles the day-to-day experience of a MIRI employee.
  • The experience of a CFAR employee is representative of the experience of a MIRI employee, and vice versa; one is a good proxy for the other.
  • The two orgs have identical, overlapping, or solidly compatible internal cultures.  The rules and norms of one org are very similar to the rules and norms of the other.
  • Somebody who's been to a CFAR workshop understands the vibe at MIRI; somebody who's been to a MIRI-X understands the vibe at CFAR.
  • Other relevant individuals and groups in the rationalist and EA communities think of CFAR and MIRI as being essentially the same thing.

None of the above is even close to true, as far as I can see.  And that seems really relevant—just as Eli was pointing out that it would be bad to wrongly bucket CFAR and Leverage together, so too do I claim that it would be bad to wrongly bucket MIRI and CFAR together.

I think that it's entirely valid to point out commonalities between them, or ways in which the culture or norms of one might resemble or reinforce the culture or norms of the other.

I think it's extremely invalid to assume those commonalities as a matter of course.

And I think it's quite bad (though I don't think anyone has done this intentionally) to cause people to start assuming those commonalities, or cause people to think that everyone knows that they exist.

I think that Alex Vermeer and Malo Bourgon are extremely unlike Tim Telleen-Lawton and Pete Michaud (even though they've all worked together a lot and have a lot of respect for each other); I think that the experience of Jack Carroll is extremely unlike the experience of Carson Jones; I think that a CFAR staff retreat is extremely unlike a MIRI research retreat; I think that what's expected as a matter of course from a MIRI researcher is extremely unlike what's expected as a matter of course from a CFAR theorist; I think that what constitutes proprietary information or informational security at each org is extremely different, etc. etc. etc.

I want to make space here for people to just plainly state the overlap; I don't think this is offensive and I do think it's relevant.

But until this groundwork was laid, I had the feeling that people would feel quite scared or cautious or unlikely-to-be-believed or suspected-of-ulterior-motives if they dared point at the disoverlap, which in my experience is much larger.

Replies from: AnnaSalamon, Scott Garrabrant, elityre
comment by AnnaSalamon · 2021-10-19T23:25:11.628Z · LW(p) · GW(p)

I agree with all of the above. And yet a third thing, which Jessica also discusses in the OP, is the community near MIRI and/or CFAR, whose ideology has been somewhat shaped by the two organizations.

There are some good things to be gained from lumping things together (larger datasets on which to attempt inference) and some things that are confusing.

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2021-10-24T01:14:44.114Z · LW(p) · GW(p)

I know you're busy with all this and other things, but how is this statement

One thing that has been bothering me a lot is that it seems like it’s really likely that people don’t realize just how distinct CFAR and MIRI are.

[...]

I agree with all of the above

compatible with this statement [LW(p) · GW(p)]?

As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong.

Actually, that was true for the last few years (with an ambiguous in-between time during covid), but it is not true now.

This thread is agreeing the orgs are completely different, but elsewhere you agreed that CFAR functions as a funnel into MIRI. I ask this out of personal interest in CFAR and MIRI going forwards and because I'm currently much more confused about how the two work than I was a week ago.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-24T02:18:00.803Z · LW(p) · GW(p)

In the 2015 - 2018 era, CFAR mostly did not serve as a funnel into MIRI, in terms of total effort, programs, the curriculum of those programs, etc., but:

  • CFAR ran some specific programs intended to funnel promising people toward MIRI, such as MSFP
  • CFAR "kept its eyes out" during its regular programs for people who looked promising and might be interested in getting more involved with MIRI or MIRI-adjacent work

Toward the 2018 - 2020 era, some CFAR staff incubated the AIRCS program, which was a lot like CFAR workshops except geared toward bridging between the AI risk community and various computer scientist bubbles, with a strong eye toward finding people who might work on MIRI projects.  AIRCS started as a more-or-less independent project that occasionally borrowed CFAR logistical support, but over time CFAR decided to contribute more explicit effort to it, until it eventually became (afaik) straightforwardly one of the two or three most important "things going on at CFAR," according to CFAR.

Staff who were there at the time (this was as I was phasing out) might correct this summary, but I believe it's right in its essentials.

In the last two years, CFAR hasn't done much outward-facing work at all, due to COVID, and so has neither been a MIRI funnel nor definitively not a MIRI funnel.

Replies from: AnnaSalamon, tomcatfish
comment by AnnaSalamon · 2021-10-25T04:28:05.704Z · LW(p) · GW(p)

In the last two years, CFAR hasn't done much outward-facing work at all, due to COVID, and so has neither been a MIRI funnel nor definitively not a MIRI funnel.

Yes, but I would predict that we won't be the same sort of MIRI funnel going forward. This is because MIRI used to have specific research programs that it needed to hire for, and it was sponsoring AIRCS (covering direct expenses plus loaning us some researchers to help run the thing) in order to recruit for that, and those research programs have been discontinued and so AIRCS won't be so much of a thing anymore.

This has been the main part of why no AIRCS post vaccines, not just COVID.

I, and I would guess some others at CFAR, am interested in running AIRCS-like programs going forward, especially if there are groups that want to help us pay the direct expenses for those programs and/or researchers that want to collaborate with us on such programs. (Message me if you're reading this and in one of those categories.) But it'll be less MIRI-specific this time, since there isn't that recruiting angle.

Also, more broadly, CFAR has adopted different structures for organizing ourselves internally. We are bigger now into "if you work for CFAR, or are a graduate of our instructor training program, and you have a 'telos' that you're on fire to do, you can probably do it with CFAR's venue/dollars/collaborations of some sorts" (we're calling this "platform CFAR"; Elizabeth Garrett invented it and set it up maybe about a year ago, can't remember). We are also into doing hourly rather than salaried work in general (so we don't feel an obligation to fill time with some imagined 'supposed to do CFAR-like activity' vagueness, so that we can be mentally free), and into taking more care not to have me or anyone speak for others at CFAR or organize people into a common imagined narrative one must pretend to believe, but rather into letting people do what we each believe in, and try to engage each other where sensible. Which makes it a bit harder to know what CFAR will be doing going forward, and also leaves me thinking it'll have a bit more variety in it. Probably.

comment by Alex Vermillion (tomcatfish) · 2021-10-24T02:54:21.316Z · LW(p) · GW(p)

Ah, so I should take the first statement as being strictly NOW, like 2021? That clears things up a lot, thanks!

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-24T03:00:50.132Z · LW(p) · GW(p)

I think Anna was saying "it is true that in the 2018 - 2020 era, CFAR was about 60% a hiring ground and only 40% something else, but that is not true currently."

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2021-10-24T04:50:10.193Z · LW(p) · GW(p)

If this is the case, I do understand now, but I think the comment claiming that it's not true at the literal current moment of October 2021 is useless in a misleading (though probably not intentional) way.

I think it is important to the CFAR-aligned folks that CFAR is not "bad" in the way noted in that comment, but to everyone else, the important thing is whether or not that criticism is true. It was my initial failure to realize that we were looking at the same fact from different angles that led to the confusion.

(Also, I'm not continuing this out of a desire to show that "I'm right" or something, but just to explain why I cared since I now understand the mistake and can explain it. I'm happy to flesh it out more if this wasn't very clear)

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-24T05:32:06.330Z · LW(p) · GW(p)

TBC it easily may also be that CFAR made strategic shifts during COVID that make the statement true in a non-trivial way; I simply wouldn't know that fact and so can't speak to it.

comment by Scott Garrabrant · 2021-10-19T23:48:29.845Z · LW(p) · GW(p)

Mostly agree. I especially agree about the organizational structure being very different.

I would not have said that "The median CFAR employee and the median MIRI employee interact frequently" is not even close to true, but it depends on the operationalization of "frequently". But according to my operationalization, the lunch table alone makes it close to true.

I would also not have said "I think that a CFAR staff retreat is extremely unlike a MIRI research retreat." (e.g. we have attempted to Circle at a research retreat more than once.) (I haven't actually been to a CFAR staff retreat, but I have been to some things that I imagine are somewhat close, like workshops where a majority of attendees are CFAR staff). 

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-20T02:13:34.144Z · LW(p) · GW(p)

I think "we've attempted to circle at a research retreat more than once" is only a little stronger evidence of overlap than "we also ate food at our retreat."

Fair point about the lunch table, although it's my sense that a strict majority of MIRI employees were almost never at the lunch table and for the first two years of my time at CFAR we didn't share a lunch table.

Replies from: Linch, Scott Garrabrant
comment by Linch · 2021-10-22T04:18:58.837Z · LW(p) · GW(p)

If you pick a randomly selected academic or hobby conference, I will be much more surprised that they had circling than if they had food.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-22T06:49:20.793Z · LW(p) · GW(p)

Yeah.  I am more pointing at "the very fact that Scott seems to think that 'trying to circle more than once' is sufficient to posit substantial resemblance between MIRI research retreats and CFAR staff retreats is strong evidence that Scott has no idea what the space of CFAR staff retreats is like."

Replies from: Linch
comment by Linch · 2021-10-22T09:34:33.492Z · LW(p) · GW(p)

To clarify, are you saying that CFAR staff retreats don't involve circling?

Replies from: AnnaSalamon, Duncan_Sabien
comment by AnnaSalamon · 2021-10-22T10:09:16.246Z · LW(p) · GW(p)

CFAR staff retreats often involve circling. Our last one, a couple weeks ago, had this, though as an optional evening thing that some but not most took part in.

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-22T16:36:33.507Z · LW(p) · GW(p)

I'm saying they involved circling often while I was there but that fact was something like 3-15% of their "character" (and probably closer to 3% imo) and so learning that some other thing also involves circling tells you very little about the overall resemblance of the two things.

comment by Scott Garrabrant · 2021-10-20T04:16:37.220Z · LW(p) · GW(p)

Surprised by the circling comment, but it doesn't seem worth going deep on a nitpick.

comment by Eli Tyre (elityre) · 2021-10-20T00:00:47.650Z · LW(p) · GW(p)

All this sounds broadly correct to me, modulo some nitpicks that are on the whole smaller than Scott's objections [LW(p) · GW(p)] (for a sense of scale).

comment by AnnaSalamon · 2021-10-16T22:23:14.121Z · LW(p) · GW(p)

FWIW, the above matches my own experiences/observations/hearsay at and near MIRI and CFAR, and seems to me personally like a sensible and correct way to put it together into a parsable narrative. The OP speaks for me. (Clarifying at a CFAR colleague's request that here and elsewhere, I'm speaking just for myself and not for CFAR or anyone else.)

(I of course still want other conflicting details and narratives that folks may have; my personal 'oh wow this puts a lot of pieces together in a parsable form that yields basically correct predictions' level is high here, but insofar as I'm encouraging anything because I'm in a position where my words are loud invitations, I want to encourage folks to share all the details/stories/reactions pointing in all the directions.) I also have a few factual nitpicks that I may get around to commenting on, but they don’t subtract from my overall agreement.

I appreciate the extent to which you (Jessicata) manage to make the whole thing parsable and sensible to me and some of my imagined readers. I tried a couple times to write up some bits of experience/thoughts, but had trouble managing to say many different things A without seeming to also negate other true things A’, A’’, etc., maybe partly because I’m triggered about a lot of this / haven’t figured out how to mesh different parts of what I’m seeing with some overall common sense, and also because I kept anticipating the same in many readers.

Replies from: elityre, AnnaSalamon, crabman, BrienneYudkowsky
comment by Eli Tyre (elityre) · 2021-10-20T05:54:30.920Z · LW(p) · GW(p)

The OP speaks for me.

Anna, I feel frustrated that you wrote this. Unless I have severely misunderstood you, this seems extremely misleading.

For context, before this post was published Anna and I discussed the comparison between MIRI/CFAR and Leverage. 

At that time, you, Anna, posited a high level dynamic involving "narrative pyramid schemes" accelerating, and then going bankrupt, at about the same time. I agreed that this seemed like it might have something to it, but emphasized that, despite some high level similarities, what happened at MIRI/CFAR was meaningfully different from, and much much less harmful than, what Zoe described in her post.

We then went through a specific operationalization of one of the claimed parallels (the frequency and oppressiveness of superior-to-subordinate debugging), and you agreed that the CFAR case was, quantitatively, an order of magnitude better than what Zoe describes. We talked more generally about some of the other parallels, and you generally agreed that the specific harms were much greater in the Leverage case.

(And just now, I talked with another CFAR staff member who reported that the two of you went point by point, and for each one you agreed that the CFAR/MIRI situation was much less bad than the Leverage case. [edit: I misunderstood. They only went through 5 points, out of many, but out of those 5 Anna agreed that the Leverage case was broadly worse.])

I think that you believe, as I do, that there were some high-level structural similarities between the dynamics at MIRI/CFAR and at Leverage, and also what happened at Leverage was substantially worse* than what happened at MIRI/CFAR.

Do you believe that?

If so, can you please clearly say so? 

I feel like not clearly stating that second part is extremely and damagingly misleading. 

What is at stake is not just the abstract dynamics, but also the concrete question of how alarmed, qualitatively, people around here should be. It seems to me that you, with this comment, are implying that it is appropriate to be about as alarmed by Zoe's report of Leverage as by this description of MIRI/CFAR. Which seems wrong to me.

[Edit: * - This formerly read "an order of magnitude worse".

I think this is correct for a number of common-sense metrics (i.e., there were at least 10x as many hours of superior-subordinate debugging at Leverage, where this seemed to be an institutionalized practice making up a lot of a person's day, compared to CFAR, where this did happen sometimes but wasn't a core feature of the org). (This is without taking into account the differences in how harmful those hours were. The worst case of this happening at CFAR that I'm aware of was less harmful than Zoe's account.)

I think across most metrics named, Leverage had a worse or stronger version of the thing, with a few exceptions. MIRI's (but not CFAR's, mostly) narrative had more urgency about it than Leverage's, for instance, because of AI timeline considerations, and overall the level of "intensity" or "pressure" around MIRI and Leverage might have been similar? I'm not close enough to either org to say with confidence.

But overall, I think it is weird to talk about "orders of magnitude" without referring to a specific metric, since it has the veneer of rigor without really adding much substance. I'm hoping that this edit adds some of that substance, and I'm walking my claim back to the vaguer "substantially worse", with the caveat that I am generally in favor of, and open to, sharing more specific quantitative estimates on specific operationalizations if asked.]

 

Replies from: AnnaSalamon, adam_scholl
comment by AnnaSalamon · 2021-10-20T09:02:27.754Z · LW(p) · GW(p)

I think that you believe, as I do, that there were some high-level structural similarities between the dynamics at MIRI/CFAR and at Leverage, and also what happened at Leverage was an order of magnitude worse than what happened at MIRI/CFAR.

Leverage_2018-2019 sounds considerably worse than Leverage 2013-2016.

My current guess is that if you took a random secular American to be your judge, or a random LWer, and you let them watch the life of a randomly chosen member of the Leverage psychology team from 2018-2019 (which I’m told is the worst part) and also of a randomly chosen staff member at either MIRI or CFAR, they would be at least 10x more horrified by the experience of the one in the Leverage psychology team.

I somehow don’t know how to say in my own person “was an order of magnitude worse”, but I can say the above. The reason I don’t know how to say “was an order of magnitude worse” is because it honestly looks to me (as to Jessica in the OP) like many places are pretty bad for many people, in the sense of degrading their souls via deceptions, manipulations, and other ethical violations. I’m not sure if this view of mine will sound over-the-top/dismissible or we-all-already-know-that/dismissible, or something else, but I have in mind such things as:

  • It seems to me that many many kids enter school with a desire to learn and an ability to trust their own mind, and leave school with a weird kind of “make sure you don’t get it wrong” that inhibits trying and doing. Some of this is normal aging, but my best guess is that an important chunk is more like cultural damage.

  • Many teenagers can do philosophy, stretch, try to think about the world. Most of the same folks at 30 or 40 can’t, outside of the ~one specific discipline in which they’re a professional. They don’t let themselves.

  • Lots of upper middle class adults hardly know how to have conversations, in the “talk from the person inside who is actually home, asking what they want to know instead of staying safe, hitting new unpredictable thoughts/conversations” sense. This is a change from childhood. Again, this is probably partly aging, but I suspect cultural damage, and I’ve been told a couple times (including by folks who have no contact with Vassar or anyone else in this community) that this is less true for working class folks than for upper middle class folks, which if true is evidence for it being partly cultural damage, though I should check this better.

  • Some staff IMO initially expect that folks at CFAR or Google or the FDA or wherever will be trying to do something real, and then come to later relate to it more like belief-in-belief, and to lots of other things too, with language coming to seem more like a mechanism for coordinating our belief-in-beliefs, and less like light with which one can talk and reason. And with things in general coming to seem kind of remote and as though you can’t really hope for anything real.

Anyhow. This essay wants to be larger than I’m willing to make this comment-reply before sleeping, so I’ll just keep doing it poorly/briefly, and hope to have more conversation later not necessarily under Jessica’s OP. But my best guess is that both CFAR of most of the last ten years, and the average workplace, are:

a) On the one hand, quite a bit less overtly hellish than the Leverage psychology teams of 2018-2019; but nevertheless maybe full of secret bits of despair and giving-up-on bits of our birthrights, in ways that are mostly not consciously noticed;

b) More than 1/10th as damaging to most employees’ basic human capacities, compared to Leverage_2018-2019.

Why do I think b? Partly because of my observations on what happens to people in the broader world (including control groups of folks who do their own thing among good people and end up fine, but I might be rigging my data and playing “no true Scotsman” games to get rid of the rest, and misconstruing natural aging or something). And partly because I chatted with several people in the past week who spent time at Leverage, and they all seemed like they had intact souls, to me, although my soul-ometer is not necessarily that accurate etc.

But, anyhow, I agree that most people would see what you’re saying; I’m just seeing something else, and I care about it, and I’m sorry if I said it in a confusing/misleading way, but it is actually pretty hard to talk about.

Epistemic status of all this: scratchwork, alas.

Replies from: hg00, cousin_it
comment by hg00 · 2021-10-20T23:00:44.559Z · LW(p) · GW(p)

These claims seem rather extreme and unsupported to me:

  • "Lots of upper middle class adults hardly know how to have conversations..."

  • "the average workplace [is] more than 1/10th as damaging to most employees’ basic human capacities, compared to Leverage_2018-2019."

I suggest if you write a toplevel post, you search for evidence for/against them.

Elaborating a bit on my reasons for skepticism:

  • It seems like for the past 10+ years, you've been mostly interacting with people in CFAR-adjacent contexts. I'm not sure what your source of knowledge is on "average" upper middle class adults/workplaces. My personal experience is normal people are comfortable having non-superficial conversations if you convince them you aren't weird first, and normal workplaces are pretty much fine. (I might be overselecting on smaller companies where people have a sense of humor.)

    • A specific concrete piece of evidence: Joe Rogan has one of the world's most popular podcasts, and the episodes I've heard very much seem to me like they're "hitting new unpredictable thoughts". Rogan is notorious for talking to guests about DMT, for instance.
  • The two observations seem a bit inconsistent, if you'll grant that working class people generally have worse working conditions than upper middle class people -- you'd expect them to experience more workplace abuse and therefore have more trauma. (In which context would an abusive boss be more likely to get called out successfully: a tech company or a restaurant?)

  • I've noticed a pattern where people like Vassar will make extreme claims without much supporting evidence and people will respond with "wow, what an interesting guy" instead of asking for evidence. I'm trying to push back against that.

  • I can imagine you'd be tempted to rationalize that whatever pathological stuff is/was present at CFAR is also common in the general population / organizations in general.

Replies from: Unreal, Douglas_Knight
comment by Unreal · 2021-10-21T16:37:17.052Z · LW(p) · GW(p)

RE: "Lots of upper middle class adults hardly know how to have conversations..."

I will let Anna speak for herself, but I have evidence of my own to bring... maybe not directly about the thing she's saying but nearby things. 

  • I have noticed friends who jumped up to upper middle class status due to suddenly coming into a lot of wealth (prob from crypto stuff). I noticed that their conversations got worse (from my POV). 
    • In particular: They were more self-preoccupied. They discussed more banal things. They spent a lot of time optimizing things that mostly seemed trivial to me (like what to have for dinner). When I brought up more worldly topics of conversation, someone expressed a kind of "wow I haven't thought about the world in such a long time, it'd be nice to think about the world more." Their tone was a tad wistful and they looked at me like they could learn something from me, but also they weren't going to try very hard and we both knew it. I felt like they were in a wealth/class bubble that insulated them from many of the world's problems and suffering. It seemed like they'd lost touch with their real questions and deep inner longings. I don't think this was as true of them before, but maybe I wasn't paying sufficient attention before, I dunno. 
    • It's like their life path switched from 'seeking' to 'maintaining'. They walked far enough, and they picked a nice spot, and now that's where they at. 
  • I used to work in tech. My coworkers were REALLY preoccupied with trivial things like Pokemon Go, sports, video games, what to eat/drink, new toys and gadgets, how to make more money, Marvel movies, career advancement. Almost to the point of obsession. It was like an adult playground atmosphere... pretty fun, pretty pleasant, and pretty banal. Our job was great. The people were great. The money was great. And I personally had to get the f out of there. 
    • This isn't to say that they aren't capable of having 'real conversations' about the world at times. But on the day-to-day level, I sensed an overwhelming force trying to keep them from looking at what the world is actually like, the part they're playing in it, what really matters, etc. It felt like a dream world. 
    • They also tended to have an alcohol or drug 'habit' or 'hobby' of some kind. Pot or alcohol; take your pick. 
  • My more NY-flavored / finance-or-marketing-or-whatever-flavored friends like to drink, own nice watches, wear nice suits, have nice apartments, etc. Different flavor from the West Coast tech scene, but the same thing going on. They appear happy, happier than before. But also... eh. Their preoccupations again seem not-very-alive and have an artificial smell. They seem a bit blocked from having interesting and life-changing thoughts. 

I don't really judge the people I am talking about. I am sad about the situation but don't feel like they're doing something wrong. 

I think the upper middle class capitalist dream is not all it is cracked up to be, and I would encourage people to try it out if they want to... but also to get over it once they're done trying it? It's nice for a while, and I like my friends having nice things and having money and stuff. But I don't think it's very character-building or teaching them new things or answering their most important questions. I also don't like the way it insulates people from noticing how much death, suffering, and injustice there is going on. 

Replies from: Unreal, Viliam, ESRogs
comment by Unreal · 2021-10-21T16:42:12.617Z · LW(p) · GW(p)

Oh yeah they also spent a lot of time trying to have the right or correct opinions. So they would certainly talk about 'the world' but mostly for the sake of having "right opinions" about it. Not so that they could necessarily, like, have insights into it or feel connected to what was happening. It was a game with not very high or real stakes for them. They tended to rehash the SAME arguments over and over with each other. 

comment by Viliam · 2021-10-21T21:36:37.591Z · LW(p) · GW(p)

This all sounds super fascinating to me, but perhaps a new post would be better for this.

My current best guess is that some people are "intrinsically" interested in the world, and for others the interest is only "instrumental". The intrinsically interested are learning things about the real world because it is fascinating and because it is real. The instrumentally interested are only learning about things they assume might be necessary for satisfying their material needs. Throwing lots of money at them will remove chains from the former, but will turn off the engine for the latter.

For me, another shocking thing about people in tech is how few of them are actually interested in the tech. Again, this seems to be the intrinsic/instrumental distinction. The former group studies Haskell or design patterns or whatever. The latter group is only interested in things that can currently increase their salary, and even there they are mostly looking for shortcuts. Twenty years ago, programmers were considered nerdy. These days, programmers who care about e.g. clean code are considered too nerdy by most programmers.

I also don't like the way it insulates people from noticing how much death, suffering, and injustice there is going on.

I often communicate with people outside my bubble, so my personal wealth does not isolate me from hearing about their suffering. If I won a lottery, I would probably spend more time helping people, because that's the type of thing I sometimes do, and I would now have more free time for that. I would expect this to be even stronger for any effective altruist.

(There is a voice in my head telling me that this all might be a fundamental attribution error, that I am assuming fixed underlying personality traits that only get better expressed as people get rich, and underestimate the effect of the environment, such as peer pressure of other rich people.)

Your next comment (people for whom having "right opinions" is super important) sounds to me like managers. Having an opinion different from other managers is a liability; it signals that you are either not flexible enough or can't see what your superiors want you to think.

comment by ESRogs · 2021-10-22T00:04:07.234Z · LW(p) · GW(p)

When I brought up more worldly topics

Bit of a nitpick, but FYI I think you're using "worldly" here in almost the opposite of the way it's usually used. It seems like you mean "weighty" or "philosophical" or something to do with the big questions in life. Whereas traditionally, the term means:

of or concerned with material values or ordinary life rather than a spiritual existence

On that definition I'd say it was your friends who wanted to talk about worldly stuff, while you wanted to push the conversation in a non-worldly direction! (As I understand, the meaning originally comes from contrasting "the world" and the church.)

Replies from: Unreal
comment by Unreal · 2021-10-22T01:22:07.131Z · LW(p) · GW(p)

Oh, hmmmmm. Sorry for lack of clarity. I don't remember exactly what the topic I brought up was. I just know it wasn't very 'local'. Could have been philosophical / deep. Could have been geopolitical / global / big picture. 

comment by Douglas_Knight · 2021-11-16T16:37:53.076Z · LW(p) · GW(p)

A couple books suggesting that white collar workplaces are more traumatic than blue collar ones are Moral Mazes (cited by Jessica) and Bullshit Jobs.

comment by cousin_it · 2021-10-20T12:54:00.919Z · LW(p) · GW(p)

I used to think the ability to have deep conversations is an indicator of how "alive" a person is, but now I think that view is wrong. It's better to look at what the person has done and is doing. Surprisingly there's little correlation: I often come across people who are very measured in conversation, but turn out to have amazing skills and do amazing things.

Replies from: iceman
comment by iceman · 2021-10-20T15:37:29.408Z · LW(p) · GW(p)

Assuming that language is about coordination instead of object level world modeling, why should we be surprised that there's little correlation between these two very different things?

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-20T23:30:22.098Z · LW(p) · GW(p)

Because object level world modeling is vastly easier and more unconstrained when you can draw on the sight of other minds, so a live world-modeler who can't talk to people has something going wrong (whether in them or in the environment).

comment by Adam Scholl (adam_scholl) · 2021-10-20T07:13:04.111Z · LW(p) · GW(p)

I also feel really frustrated that you wrote this, Anna. I think there are a number of obvious and significant disanalogies between the situations at Leverage versus MIRI/CFAR. There's a lot to say here, but a few examples which seem especially salient:

  • To the best of my knowledge, the leadership of neither MIRI nor CFAR has ever slept with any subordinates, much less many of them.
  • While I think staff at MIRI and CFAR do engage in motivated reasoning sometimes wrt PR, neither org engaged in anything close to the level of obsessive, anti-epistemic reputational control alleged in Zoe's post. MIRI and CFAR staff were not required to sign NDAs agreeing they wouldn't talk badly about the org—in fact, at least in my experience with CFAR, staff much more commonly share criticism of the org than praise. CFAR staff were regularly encouraged to share their ideas at workshops and on LessWrong, to get public feedback. And when we did mess up, we tried quite hard to publicly and accurately describe our wrongdoing—e.g., Anna and I spent low-hundreds of hours investigating/thinking through the Brent affair, and tried so hard to avoid accidentally doing anti-epistemic reputational control (this was our most common topic of conversation during this process) that in my opinion, our writeup about it actually makes CFAR seem much more culpable than I think it was.
  • As I understand it, there were ~3 staff historically whose job descriptions involved debugging in some way which you, Anna, now feel uncomfortable with/think was fucky. But to the best of your knowledge, these situations caused much less harm than e.g. Zoe seems to have experienced, and the large majority of staff did not experience this—in general staff rarely explicitly debugged each other, and when it did happen it was clearly opt-in, and fairly symmetrical (e.g., in my personal conversations with you Anna, I'd guess the ratio of you something-like-debugging me to the reverse is maybe 3/2?).
  • CFAR put really a lot of time and effort into trying to figure out how to teach rationality techniques, and how to talk with people about x-risk, without accidentally doing something fucky to people's psyches. Our training curriculum for workshop mentors includes extensive advice on ways to avoid accidentally causing psychological harm. Harm did happen sometimes, which was why our training emphasized it so heavily. But we really fucking tried, and my sense is that we actually did very well on the whole at establishing institutional and personal knowledge about how to be gentle with people in these situations; personally, it's the skillset I'd most worry about the community losing if CFAR shut down and more events started being run by other orgs.

Insofar as you agree with the above, Anna, I'd appreciate you stating that clearly, since I think saying "the OP speaks for me" implies you think the core analogy described in the OP was non-misleading.

Replies from: AnnaSalamon, Duncan_Sabien, Benquo
comment by AnnaSalamon · 2021-10-20T07:42:52.969Z · LW(p) · GW(p)

Yeah, sorry. I agree that my comment “the OP speaks for me” is leading a lot of people to false views that I should correct. It’s somehow tricky because there’s a different thing I worry will be obscured by my doing this, but I’ll do it anyhow as is correct and try to come back for that different thing later.

To the best of my knowledge, the leadership of neither MIRI nor CFAR has ever slept with a subordinate, much less many of them.

Agreed.

While I think staff at CFAR and MIRI probably engaged in motivated reasoning sometimes wrt PR, neither org engaged in anything close to the level of obsessive, anti-epistemic reputational control alleged in Zoe's post. CFAR and MIRI staff were certainly not required to sign NDAs agreeing they wouldn't talk badly about the org—in fact, in my experience CFAR staff much more commonly share criticism of the org than praise.  CFAR staff were regularly encouraged to share their ideas at workshops and on LessWrong, to get public feedback. And when we did mess up, we tried extremely hard to publicly and accurately describe our wrongdoing—e.g., Anna and I personally spent hundreds of hours investigating/thinking about the Brent affair, and tried so hard to avoid accidentally doing anti-epistemic reputational control that in my opinion, our writeup about it actually makes CFAR seem much more culpable than I think it was.

I agree that there’s a large difference in both philosophy of how/whether to manage reputation, and amount of control exhibited/attempted about how staff would talk about the organizations, with Leverage doing a lot of that and CFAR doing less of it than most organizations.

As I understand it, there were ~3 staff historically whose job description involved debugging in some way which you, Anna, now feel uncomfortable with/think was fucky. But to the best of your knowledge, these situations caused much less harm than e.g. Zoe seems to have experienced, and the large majority of staff did not experience this—in general staff rarely explicitly debugged each other, and when it did happen it was clearly opt-in, and fairly symmetrical (e.g., in my personal conversations with you Anna, I'd guess the ratio of you something-like-debugging me to the reverse is maybe 3/2?).

I think this understates both how many people it happened with, and how fucky it sometimes was. (Also, it was their job but not their "job description", although I think Zoe's was "job description"). I think this one was actually worse in some of the early years, vs your model of it. My guess is indeed that it involved fewer hours than Zoe's, and was overall less deliberately part of a dynamic quite as fucky as Zoe's, but as I mentioned to you on the phone, an early peripheral staff member left CFAR for a mental institution in a way that seemed plausibly related to how debugging and trials worked, and definitely related to workplace stress of some sort, as well as to a preexisting condition they entered with and didn't tell us about. (We would've handled this better later, I think.) There are some other situations that were also, I think, pretty fucked up, in the sense of "I think the average person would experience some horror/indignation if they took in what was happening."

I can also think of stories of real scarring outside the three people I was counting.

I… do think it was considerably less weird looking, and less overtly fucked-up looking, than the descriptions I have (since writing my “this post speaks for me” comment) gotten of Leverage in the 2018-2019 era.

Also, most people at CFAR, especially in recent years, I think suffered none or nearly none of this. (I believe the same was true for parts of Leverage, though not sure.)

So, if we are playing the “compare how bad Leverage and CFAR are along each axis” game (which is not the main thing I took the OP to be doing, at all, nor the main thing I was trying to agree with, at all), I do think Leverage is worse than CFAR on this axis but I think the “per capita” damage of this sort that hit CFAR staff in the early years (“per capita” rather than cumulative, because Leverage had many more people) was maybe about a tenth of my best guess at what was up in the near-Zoe parts of Leverage in 2018-2019, which is a lot but, yes, different.

CFAR put really a lot of time and effort into trying to figure out how to teach rationality techniques, and how to talk with people about x-risk, without accidentally doing something fucky to people's psyches. Our training curriculum for workshop mentors includes extensive advice on ways to avoid accidentally causing psychological harm. Harm did happen sometimes, which was why our training emphasized it so heavily. But we really fucking tried, and my sense is that we actually did very well on the whole at establishing institutional and personal knowledge about how to be gentle with people in these situations; personally, it's the skillset I'd most worry about the community losing if CFAR shut down and more events started being run by other orgs.

We indeed put a lot of effort into this, and got some actual skill and good institutional habits out.

Replies from: Viliam
comment by Viliam · 2021-10-20T11:35:19.389Z · LW(p) · GW(p)

Perhaps this is an opportunity to create an internal document on "unhealthy behaviors" that would list the screwups and the lessons learned, and to read it together regularly, like safety training? (Analogously to how organizations that get their computers hacked or documents stolen describe how it happened as part of their safety training.) Perhaps with anonymous feedback on whether someone has a concern that MIRI or CFAR is slipping into some bad pattern again.

Also, it might be useful to hire an external psychologist who would, at regular intervals, have a discussion with MIRI/CFAR employees, and to provide this document to the psychologist, so they know what risks to focus on. (Furthermore, I think the psychologist should not be a rationalist, to provide a better outside view.)

For starters, someone could create the first version of the document by extracting information from this debate.

EDIT: Oops, on second reading of your comment, it seems like you already have something like this. Uhm, maybe a good opportunity to update/extend the document?

*

As a completely separate topic, it would be nice to have a table with the following columns: "Safety concern", "What happened in MIRI/CFAR", "What happened in Leverage (as far as we know)", "Similarities", "Differences". But this is much less important in the long term.

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-21T23:59:00.564Z · LW(p) · GW(p)

I endorse Adam's commentary, though I did not feel the frustration Eli and Adam report, possibly because I know Anna well enough that I reflexively did the caveating in my own brain rather than modeling the audience.

comment by Benquo · 2021-10-20T08:08:30.654Z · LW(p) · GW(p)

To the best of my knowledge, the leadership of neither MIRI nor CFAR has ever slept with a subordinate, much less many of them.

This issue doesn't seem particularly important to me but the comparison you're making is a good example of a more general problem I want to talk about.

My impression is that the decision structure of CFAR was much less legible & transparent than that of Leverage, so that it would be harder to determine who might be treated as subordinate to whom in what context. In addition, my impression from the years I was around is that Leverage didn't preside over as much of an external scene: Leverage followers had formalized roles as members of the organization, while CFAR had a "community," many of whom were workshop alumni.

And when we did mess up, we tried extremely hard to publicly and accurately describe our wrongdoing—e.g., Anna and I personally spent hundreds of hours investigating/thinking about the Brent affair, and tried so hard to avoid accidentally doing anti-epistemic reputational control that in my opinion, our writeup about it actually makes CFAR seem much more culpable than I think it was.

Am I missing something here? The communication I read from CFAR seemed like it was trying to reveal as little as it could get away with, gradually saying more (and taking a harsher stance towards Brent) in response to public pressure, not like it was trying to help me, a reader, understand what had happened.

Replies from: ESRogs, adam_scholl, Puxi Deek
comment by ESRogs · 2021-10-21T23:49:40.544Z · LW(p) · GW(p)

Anna and I personally spent hundreds of hours investigating/thinking about the Brent affair... our writeup about it...

 

Am I missing something here? The communication I read from CFAR seemed like it was trying to reveal as little as it could get away with...

FWIW, I think you and Adam are talking about two different pieces of communication. I think you are thinking of the communication leading up to the big community-wide discussion that happened in Sept 2018, while Adam is thinking specifically of CFAR's follow-up communication months after that — in particular this post. (It would have been in between those two times when Adam and Anna did all that thinking that he was talking about.)

Replies from: adam_scholl
comment by Adam Scholl (adam_scholl) · 2021-10-22T00:48:09.965Z · LW(p) · GW(p)

Yeah, this was the post I meant.

comment by Adam Scholl (adam_scholl) · 2021-10-20T09:22:03.765Z · LW(p) · GW(p)

I agree manager/staff relations have often been less clear at CFAR than is typical. But I'm skeptical that's relevant here, since as far as I know there aren't really even borderline examples of this happening. The closest example to something like this I can think of is that staff occasionally invite their partners to attend or volunteer at workshops, which I think does pose some risk of fucky power dynamics, albeit dramatically less risk imo than would be posed by "the clear leader of an organization, who's revered by staff as a world-historically important philosopher upon whose actions the fate of the world rests, and who has unilateral power to fire any of them, sleeps with many employees."

Am I missing something here? The communication I read from CFAR seemed like it was trying to reveal as little as it could get away with, gradually saying more (and taking a harsher stance towards Brent) in response to public pressure, not like it was trying to help me, a reader, understand what had happened.

As lead author on the Brent post, I felt bummed reading this. I tried really hard to avoid letting my care for/interest in CFAR affect my descriptions of what happened, or my choices about what to describe. Anna and I spent quite large amounts of time—at least double-digit hours, I think probably triple-digit—searching for ways our cognition might be biased or motivated or PR-like, and trying to correct for that. We debated and introspected about it, ran drafts by friends of ours who seemed unusually likely to call us on bullshit, etc.

Looking back, my sense remains that we basically succeeded—i.e., that we described the situation about as accurately and neutrally as we could have. If I'm wrong about this... well, it wasn't for lack of trying.

Replies from: Thrasymachus
comment by Thrasymachus · 2021-10-20T11:27:26.967Z · LW(p) · GW(p)

Looking back, my sense remains that we basically succeeded—i.e., that we described the situation about as accurately and neutrally as we could have. If I'm wrong about this... well, all I can say is that it wasn't for lack of trying.

I think CFAR ultimately succeeded in providing a candid and good faith account of what went wrong, but the time it took to get there (i.e. 6 months between this and the initial update/apology) invites adverse inferences like those in the grandparent. 

A lot of the information ultimately disclosed in March would definitely have been known to CFAR in September, such as Brent's prior involvement as a volunteer/contractor for CFAR, his relationships/friendships with current staff, and the events at ESPR. The initial responses remained coy on these points, and seemed apt to give the misleading impression that CFAR's mistakes were (relatively) much milder than they in fact were. I (among many) contacted CFAR leadership to urge them to provide a more candid and complete account when I discovered some of this further information independently.

I also think, similar to how it would be reasonable to doubt 'utmost corporate candour' back then given initial partial disclosure, it's reasonable to doubt CFAR has addressed the shortcomings revealed given the lack of concrete follow-up. I also approached CFAR leadership when CFAR's 2019 Progress Report and Future Plans [LW · GW] initially made no mention of what happened with Brent, nor what CFAR intended to improve in response to it. What was added in is not greatly reassuring:

And after spending significant time investigating our mistakes with regard to Brent, we reformed our hiring, admissions and conduct policies, to reduce the likelihood such mistakes reoccur.

A cynic would note this is 'marking your own homework', but cynicism is unnecessary to recommend more self-scepticism. I don't doubt the Brent situation indeed inspired a lot of soul searching and substantial, sincere efforts to improve. What is more doubtful (especially given the rest of the morass of comments) is whether these efforts actually worked. Although there is little prospect of satisfying me, more transparency over what exactly has changed - and perhaps third party oversight and review - may better reassure others.   

comment by Puxi Deek · 2021-10-20T08:17:52.876Z · LW(p) · GW(p)

It would help if they actually listed and gave examples of exactly what kind of mental manipulation they were doing to people, other than telling them to take drugs. These comments seem to dance around the exact details of what happened and only talk about the group dynamics between people that resulted from these mysterious actions/events.

comment by AnnaSalamon · 2021-10-17T01:12:28.508Z · LW(p) · GW(p)

To be clear, a lot of what I find so relaxing about Jessica’s post is that my experience reading it is of seeing someone who is successfully noticing a bunch of details in a way that, relative to what I’m trying to track, leaves room for lots of different things to get sorted out separately.

I just got an email that led me to sort of triggeredly worry that folks will take my publicly agreeing with the OP to mean that I e.g. think MIRI is bad in general. I don’t think that; I really like MIRI and have huge respect and appreciation for a lot of the people there; I also like many things about the CFAR experiment and love basically all of the people who worked there; I think there’s a lot to value across this whole space.

I like the detailed specific points that are made in the OP (with some specific disagreements; though also with corroborating detail I can add in various places); I think this whole “how do we make sense of what happens when people get together into groups? and what happened exactly in the different groups?” question is an unusually good time to lean on detail-tracking and reading comprehension.

Replies from: BrienneYudkowsky
comment by LoganStrohl (BrienneYudkowsky) · 2021-10-17T05:50:21.874Z · LW(p) · GW(p)

[I deleted a comment in this thread because I realized it belonged in a different thread. Just being clumsy, sry.]

comment by philip_b (crabman) · 2021-10-17T10:12:10.260Z · LW(p) · GW(p)

To my understanding, since the time when the events described in the OP took place, MIRI and CFAR have been very close and getting closer and closer. As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong. Since you're one of the leaders of CFAR, that makes you one of the leading people behind all those things the OP is critical of.

The OP even writes that she thought and thinks CFAR was corrupt in 2017:

Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz. (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; ...)

Here she mentions Ziz also thinking that CFAR was corrupt, and I remember that in her blog, Ziz described you as being at the center of said corruption.

So, how is all this compatible with you agreeing with the OP?

Replies from: AnnaSalamon, AnnaSalamon
comment by AnnaSalamon · 2021-10-20T09:23:53.072Z · LW(p) · GW(p)

Since you're one of the leaders of CFAR, that makes you one of the leading people behind all those things the OP is critical of.

Yes.

So, how is all this compatible with you agreeing with the OP?

Basically because I came to see I’d been doing it wrong.

Happy to try to navigate follow-up questions if anyone has any.

Replies from: TurnTrout, anonce
comment by TurnTrout · 2021-10-20T11:16:15.493Z · LW(p) · GW(p)

Happy to try to navigate follow-up questions if anyone has any.

PhoenixFriend wrote:

Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutional encouragement, to the rank-and-file.

Is this true?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-22T10:07:26.369Z · LW(p) · GW(p)

Basically no. Can't say a plain "no," but can say "basically no." I'm not willing to give details on this one. I'm somehow fretting on this one, asking if "basically no" is true from all vantage points (it isn't, but it's true from most), looking for a phrase similar to that but slightly weaker, considering e.g. "mostly no", but something stronger is true. I think this'll be the last thing I say in this thread about this topic.

comment by anonce · 2021-11-09T03:09:43.990Z · LW(p) · GW(p)

What does "corrupt" mean in this context?  What are some examples of noncorrupt employers?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-11-11T07:21:53.691Z · LW(p) · GW(p)

A CFAR board member asked me to clarify what I meant about “corrupt”, also, in addition to this question.

So, um. Some legitimately true facts the board member asked me to share, to reduce confusion on these points:

  • There hasn’t been any embezzlement. No one has taken CFAR’s money and used it to buy themselves personal goods.
  • I think if you took non-profits that were CFAR’s size + duration (or larger and longer-lasting), in the US, and ranked them by “how corrupt is this non-profit according to observers who people think of as reasonable, and who got to watch everything by video and see all the details”, CFAR would on my best guess be ranked in the “less corrupt” half rather than in the “more corrupt” half.

This board member pointed out that if I call somebody “tall” people might legitimately think I mean they are taller than most people, and if I agree with an OP that says CFAR was “corrupt” they might think I’m agreeing that CFAR was “more corrupt” than most similarly sized and durationed non-profits, or something.

The thing I actually think here is not that. It's more that I think CFAR's actions were far from the kind of straightforward, sincere attempt to increase rationality that people might have hoped for from us, or that a relatively untraumatized 12-year-old up-and-coming-LWer might expect to see from adults who said they were trying to save the world from AI via learning how to think. (IMO, this came about mostly via a bunch of people doing reasoning that they told themselves was intended to help with existential risk or with rationality or at least to help CFAR or do their jobs, but that was not as much that as the thing a kid might've hoped for. I think I, in my roles at CFAR, was often defensive and power-seeking and reflexively flinching away from things that would cause change; I think many deferred to me in cases where their own sincere, Sequences-esque reasoning would not have thought this advisable; I think we fled from facts where we should not have, etc.).

I think this is pretty common, and that many of us got it mostly from mimicking others at other institutions ("this is how most companies do management/PR/whatever; let's dissociate a bit until we can 'think' that it's fine"). But AFAICT it is not compatible (despite being common) with the kinds of impact we were and are hoping to have (which are not common), nor with the thing that young or sincere readers of the Sequences, who were orienting more from "what would make sense" and less from "how do most organizations act," would have expected. And I think it had the result of wasting a bunch of good people's time and money, and of making it look as though the work we were attempting is intrinsically low-reward, low-yield, without actually checking to see what would happen if we tried to locate rationality/sanity skills in a simple way.

I looked at the Wikipedia article on corruption to see if it had helpful ontology I could borrow. I would say that the kind of corruption I am talking about is “systemic” corruption rather than individual, and involved “abuse of discretion”.

A lot of what I am calling "corruption" — i.e., a lot of the systematic divergence between the actions CFAR was taking, and the actions that a sincere, unjaded, able-to-actually-talk-to-each-other version of us would've chosen for CFAR to take, as a best guess for how to further our missions — came via me personally, since I was in a leadership role manipulating the staff of CFAR by giving them narratives about how the world would be more saved if they did such-and-such (different narratives for different folks), and looking to see how they responded to these narratives in order to craft different ones. I didn't say things I believed false, but I did choose which things to say in a way that was more manipulative than I let on, and I hoarded information to have more control of people and what they could or couldn't do in the way of pulling on CFAR's plans in ways I couldn't predict, and so on. Others on my view chose to go along with this, partly because they hoped I was doing something good (as did I), partly because it was way easier, partly because we all got to feel as though we were important via our work, partly because none of us were fully conscious of most of this.

This is “abuse of discretion” in that it was using places in which my and our judgment had institutional power because people trusted me and us, and making those judgments via a process that was predictably going to have worse rather than better outcomes, basically in my case via what I’ve lately been calling narrative addiction [LW · GW].

I love the people who work at CFAR, both now and in the past, and predict that most would make your house or organization or whatnot better if you live with or hire them or similar. They're bringing a bunch of sincere goodwill, willingness to try what is uncomfortable (not fully, but more than most, and enough that I admire it and am impressed a lot), attempts at better epistemic practices than I see in most places where they know how to, etc. I'm afraid to say paragraphs like the ones preceding this one lest I cause people who are quite good (as people in our social class go), and who sacrificed at my request in many cases, to look bad.

But in addition to the common human pastime of ranking all of us relative to each other, figuring out who to scapegoat and who to pass other relative positive or negative judgments on, there is a different endeavor I care very much about: one of trying to see the common patterns that're keeping us stuck. Including patterns that may be pretty common in our time and place, but that (I think? citation needed, I'll grant) may have been pretty uncommon in the places where progress historically actually occurred.

And that is what I was so relieved to see Jessica's OP opening a beginning of a space for us to talk about. I do not think Jessica was saying CFAR was unusually bad; she estimates it was on her best guess a less traumatizing place than Google. She just also tries to see through-lines between patterns across places, in ways I found very relieving and hopeful. Patterns I strongly resisted seeing for most of the last six years. It's the amount of doublethink I found in myself on the topic, more than almost any of the rest of it, that most makes me think "yes there is a non-trivial insight here, that Jessica has and is trying to convey and that I hope eventually does get communicated somehow, despite all the difficulties of talking about it so far."

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-11-11T08:15:34.193Z · LW(p) · GW(p)

I have strong-upvoted this comment, which is not a sentence I think people usually ought to leave as its own reply, but which seems relevant given my relationship to Anna and CFAR and so forth.

comment by AnnaSalamon · 2021-10-20T09:24:10.395Z · LW(p) · GW(p)

As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong.

Actually, that was true for the last few years (with an ambiguous in-between time during covid), but it is not true now. Partly because MIRI abandoned the research direction we’d most been trying to help them recruit for. CFAR will be choosing its own paths going forward more.

comment by Aryeh Englander (alenglander) · 2021-10-18T12:21:31.055Z · LW(p) · GW(p)

I see that many people are commenting how it's crazy to try to keep things secret between coworkers, or to not allow people to even mention certain projects, or that this kind of secrecy is psychologically damaging, or the like.

Now, I imagine this is heavily dependent on exactly how it's implemented, and I have no idea how it's implemented at MIRI. But just as a relevant data point - this kind of secrecy is totally par for the course for anybody who works for certain government and especially military-related organizations or contractors. You need extensive background checks to get a security clearance, and even then you can't mention anything classified to someone else unless they have a valid need to know, you're in a secure classified area that meets a lot of very detailed guidelines, etc. Even within small groups, there are certain projects that you simply are not allowed to discuss with other group members, since they do not necessarily have a valid need to know. If you're not sure whether something is classified, you should be talking to someone higher up who does know. There are projects that you cannot even admit that they exist, and there are even words that you cannot mention in connection to each other even though each word on its own is totally normal and unclassified. In some places like the CIA or the NSA, you're usually not even supposed to admit that you work there.

Again, this is probably all very dependent on exactly how the security guidelines are implemented. I am also not commenting at all on whether or not the information that MIRI tries to keep secret should in fact be kept secret. I am just pointing out that if some organization thinks that certain pieces of information really do need to be kept secret, and if they implement secrecy guidelines in the proper way, then as far as I could tell everything that's been described as MIRI policies seems pretty reasonable to me.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-19T22:08:04.537Z · LW(p) · GW(p)

Some secrecy between coworkers could be reasonable. Including secrecy about what secret projects exist (e.g. "we're combining AI techniques X and Y and applying them to application Z first as a test").

What seemed off is that the only information concealed by the policy in question (that researchers shouldn't ask each other what they're working on) is who is and isn't recently working on a secret project. That isn't remotely enough information to derive AI insights to any significant degree. Doing detective work on "who started saying they had secrets at the same time" to derive AI insights is a worse use of time than just reading more AI papers.

The policy in question is strictly dominated by an alternative policy, of revealing that you are working on a secret project but not which one. When I see a policy that is this clearly suboptimal for the stated goal, I have to infer alternative motives, such as maintaining domination of people by isolating them from each other. (Such a motive could be memetic/collective, partially constituted by people copying each other, rather than serving anyone's individual interest, although personal motives are relevant too)

Mainstream organizations being secretive at the level MIRI was isn't a particularly strong argument. As we learned with COVID, many mainstream organizations are opposing their stated mission. Zack Davis points out [LW(p) · GW(p)] that controlling people into acting against their interests is a common function of mainstream policies (this is especially obvious in the military). Such control is especially counterproductive for FAI research, where a large part of the problem is to make AI act on human values rather than false approximations of them. Revealing actual human value requires freedom to act according to revealed preferences, not just pre-specified models of goals. (In other words: if everything in an organization is organized around pursuing a legible goal that is only an instrumental goal of human value, that org is either a UFAI or is not a general intelligence)

If mainstream policies were sufficient, there wouldn't be any need for MIRI, since other AI orgs already use mainstream policies.

Replies from: tomcatfish, Duncan_Sabien
comment by Alex Vermillion (tomcatfish) · 2021-10-24T01:30:32.127Z · LW(p) · GW(p)

There are a few parts in here that seem fishy enough to me to try to red flag them.

Mainstream organizations being secretive at the level MIRI was isn’t a particularly strong argument. As we learned with COVID, many mainstream organizations are opposing their stated mission.

This is fair as a detraction to the sorta appeal to authority it is in reply to, but is also not a very good proof that secrecy is a bad idea. To boil it down smaller, the argument went "Secrecy works well for many existing organizations" and you replied "Many existing organizations did a bad job during Covid". Strictly speaking, doing a bad job during Covid means that not everything is going well, but this is still a pretty weird and weak argument.

This whole paragraph:

Zack Davis points out that controlling people into acting against their interests is a common function of mainstream policies (this is especially obvious in the military). Such control is especially counterproductive for FAI research, where a large part of the problem is to make AI act on human values rather than false approximations of them. Revealing actual human value requires freedom to act according to revealed preferences, not just pre-specified models of goals. (In other words: if everything in an organization is organized around pursuing a legible goal that is only an instrumental goal of human value, that org is either a UFAI or is not a general intelligence)

also makes next to no sense to me. Please correct me if I'm wrong (which I kinda think I might be), but I read this as

  1. Mainstream organizations make people act against their own values
  2. We want AI to act on human values
  3. Only agents acting on human values can develop an AI that acts on human values
  4. By 1 and 3, mainstream organizations act against human values
  5. By 3 and 4, mainstream organizations cannot develop FAI

Which seems to not follow in any way to me.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-24T02:53:30.303Z · LW(p) · GW(p)

There's a big difference between "optimizing poorly" and "pessimizing", i.e. making the problem worse in ways that require some amount of cleverness. Mainstream institutions' handling of COVID was a case of pessimizing, not just optimizing poorly, e.g. banning tests, telling people masks don't work, and seizing mask shipments.

I don't think you're mis-stating the argument here; it really is a thing I'm arguing that institutions that make people act against their values can't build FAI. As an example, you could imagine an institution that optimized for some utility function U that was designed by committee. That U wouldn't be the human utility function (unless the design-by-committee process is a reliable value loader), so forcing everyone to optimize U means you aren't optimizing the human utility function; it has the same issues as a paperclip maximizer.

What if you try setting U = "get FAI"? Too bad, "FAI" is a lisp token, for it to have semantics it has to connect with human value somehow, i.e. someone actually wanting a thing and being assisted in getting it.

Maybe you can have a research org where some people are slaves and some aren't, but for this to work you'd need a legible distinction between the two classes, so you don't get confused into thinking you're optimizing the slave's utility function by enslaving them.

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2021-10-24T03:01:02.238Z · LW(p) · GW(p)

With a bit more meat, I can see what you're referring to better.

I still don't agree I think, but I can see why you would build that belief much better than I could before. I appreciate the clarification, thank you.

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-24T02:12:53.143Z · LW(p) · GW(p)

You have by far more information than me about what it's like on the ground as a MIRI researcher.

But one thing missing so far is that my sense was that a lot of researchers preferred the described level of secretiveness as a simplifying move?

e.g. "It seems like I could say more without violating any norms, but I have a hard time tracking where the norms are and it's easier for me to just be quiet as a general principle.  I'm going to just be quiet as a general principle rather than being the-maximum-cooperative-amount-of-open, which would be a burden on me to track with the level of conscientiousness I would want to apply."

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-24T02:33:20.939Z · LW(p) · GW(p)

The policy described was mandated, it wasn't just on a voluntary basis. Anyway, I don't really trust something optimizing this badly to have a non-negligible shot at FAI, so the point is kind of moot.

comment by So8res · 2021-10-20T22:53:38.366Z · LW(p) · GW(p)

First and foremost: Jessica, I'm sad you had a bad late/post-MIRI experience. I found your contributions to MIRI valuable (Quantilizers and Reflective Solomonoff Induction spring to mind as some cool stuff), and I personally wish you well.

A bit of meta before I say anything else: I'm leery of busting in here with critical commentary, and thereby causing people to think they can't air dirty laundry without their former employer busting in with critical commentary. I'm going to say a thing or two anyway, in the name of honest communication. I'm open to suggestions for alternative ways to handle this tradeoff.

Now, some quick notes: I think Jessica is truthfully reporting her experiences as she recalls them. I endorse orthonormal's comment [LW(p) · GW(p)] as more-or-less matching my own recollections. That said, in a few of Jessica's specific claims, I believe I recognize the conversations she's referring to, and I feel misunderstood and/or misconstrued. I don't want to go through old conversations blow-by-blow, but for a sense of the flavor, I note that in this comment Jessica seems to me to misconstrue some of Eliezer's tweets [LW(p) · GW(p)] in a way that feels similar to me. Also, as one example from the text, looking at the part of the text that names me specifically:

Nate Soares frequently [...] [said] that we must create a human emulation using nanotechnology that is designed by a "genie" AI [...]

I wouldn't personally use a phrase like "we [at MIRI] must create a human emulation using nanotech designed by a genie AI". I'd phrase that claim more like "my current best concrete idea is to solve narrow alignment sufficient for a limited science/engineering AGI to safely design nanotech capable of, eg, uploading a human". This difference is important to me. In contrast with the connotations I read into Jessica's account, I didn't/don't have particularly high confidence in that specific plan (and I wrote contemporaneously about how plans don't have truth values, and that the point of having a concrete plan isn't to think it will work). Also, my views in this vicinity are not particularly MIRI-centric (I have been a regular advocate of all AGI research teams thinking concretely and specifically about pivotal acts and how their tech could be used to end the acute risk period). Jessica was perhaps confused by my use of we-as-in-humanity instead of we-as-in-MIRI. I recall attempting to clarify that during our conversation, but perhaps it didn't stick.

My experience conversing with Jessica, in the time period before she departed MIRI, was one of regular miscommunications, similar in flavor to the above two examples.

(NB: I'm not currently planning to engage in much back-and-forth.)

Replies from: jessica.liu.taylor, jessica.liu.taylor, hg00
comment by jessicata (jessica.liu.taylor) · 2021-10-21T00:17:46.202Z · LW(p) · GW(p)

Thanks, I appreciate you saying that you're sorry my experience was bad towards the end (I notice it actually makes me feel better about the situation), that you're aware of how criticizing people the wrong way can discourage speech and are correcting for that, and that you're still concerned enough about misconstruals to correct them where you see fit. I've edited the relevant section of the OP to link to this comment. I'm glad I had a chance to work with you even if things got really confusing towards the end.

comment by jessicata (jessica.liu.taylor) · 2021-10-21T01:09:24.990Z · LW(p) · GW(p)

With regard to the specific misconstruals:

  • I don't think the OP asserted that this specific plan was fixed; it was an example of a back-chaining plan, but I see how "a world-saving plan" could imply that it was this specific plan, which it wasn't.
  • I didn't specify which small group was taking over the world, I didn't mean to imply that it had to be MIRI specifically, maybe the comparison with Leverage led to that seeming like it was implied?
  • I still don't understand how I'm misconstruing Eliezer's tweets; it seems very clear to me that he's saying that something about how neural nets work would be very upsetting if learned about, and I don't see what else he could be saying.

Replies from: Connor_Flexman
comment by Connor_Flexman · 2021-10-23T05:46:22.950Z · LW(p) · GW(p)

Regarding Eliezer's tweets, I think the issue is that he is joking about the "never stop screaming". He is using humor to point at a true fact, that it's really unfortunate how unreliable neural nets are, but he's not actually saying that if you study neural nets until you understand them then you will have a psychotic break and never stop screaming.

comment by hg00 · 2021-10-21T07:11:24.708Z · LW(p) · GW(p)

I'm not sure I agree with Jessica's interpretation of Eliezer's tweets, but I do think they illustrate an important point about MIRI: MIRI can't seem to decide if it's an advocacy org or a research org.

"if you actually knew how deep neural networks were solving your important mission-critical problems, you'd never stop screaming" is frankly evidence-free hyperbole, of the same sort activist groups use (e.g. "taxation is theft"). People like Chris Olah have studied how neural nets solve problems a lot, and I've never heard of them screaming about what they discovered.

Suppose there was a libertarian advocacy group with a bombastic leader who liked to go tweeting things like "if you realized how bad taxation is for the economy, you'd never stop screaming". After a few years of advocacy, the group decides they want to switch to being a think tank. Suppose they hire some unusually honest economists, who study taxation and notice things in the data that kinda suggest taxation might actually be good for the economy sometimes. Imagine you're one of those economists and you're gonna ask your boss about looking into this more. You might have second thoughts like: Will my boss scream at me? Will they fire me? The organizational incentives don't seem to favor truthseeking.

Another issue with advocacy is you can get so caught up in convincing people that the problem needs to be solved that you forget to solve it, or even take actions that are counterproductive for solving it. For AI safety advocacy, you want to convince everyone that the problem is super difficult and requires more attention and resources. But for AI safety research, you want to make the problem easy, and solve it with the attention and resources you have.

In The Algorithm Design Manual, Steven Skiena writes:

In any group brainstorming session, the most useful person in the room is the one who keeps asking “Why can’t we do it this way?”; not the nitpicker who keeps telling them why. Because he or she will eventually stumble on an approach that can’t be shot down... The correct answer to “Can I do it this way?” is never “no,” but “no, because. . . .” By clearly articulating your reasoning as to why something doesn’t work, you can check whether you have glossed over a possibility that you didn’t think hard enough about. It is amazing how often the reason you can’t find a convincing explanation for something is because your conclusion is wrong.

Being an advocacy org means you're less likely to hire people who continually ask "Why can’t we do it this way?", and those who are hired will be discouraged from this behavior if it's implied that a leader might scream if they dislike the proposed solution. The activist mindset tends to favor evidence-free hyperbole over carefully checking if you glossed over a possibility, or wondering if an inability to convince others means your conclusion is wrong.

I dunno if there's an easy solution to this -- I would like to see both advocacy work and research work regarding AI safety. But having them in the same org seems potentially suboptimal.

Replies from: Scott Garrabrant, Aella
comment by Scott Garrabrant · 2021-10-21T13:41:17.682Z · LW(p) · GW(p)

MIRI can't seem to decide if it's an advocacy org or a research org.

MIRI is a research org. It is not an advocacy org. It is not even close. You can tell by the fact that it basically hasn't said anything for the last 4 years. Eliezer's personal twitter account does not make MIRI an advocacy org.

(I recognize this isn't addressing your actual point. I just found the frame frustrating.)

comment by Aella · 2021-10-22T00:11:25.366Z · LW(p) · GW(p)

as a tiny, mostly-uninformed data point, i read "if you realized how bad taxation is for the economy, you'd never stop screaming" to have a very diff vibe from Eliezer's tweet, cause he didn't use the word bad. I know it's a small diff but it hits diff. Something in his tweet was amusing because it felt like it was pointing to a presumably neutral thing and making it scary? whereas saying the same thing about a clearly moralistic point seems like it's doing a different thing. 

Again - a very minor point here, just wanted to throw it in.

comment by gallabytes · 2021-10-25T01:08:03.396Z · LW(p) · GW(p)

There's this general problem of Rationalists splitting into factions and subcults with minor doctrinal differences, each composed of relatively elite members of The Community, each with a narrative of how they're the real rationalists and the rest are just posers and/or parasites. And, they're kinda right. Many of the rest are posers; we have a mop problem.

There’s just one problem. All of these groups are wrong. They are in fact only slightly more special than their rival groups think they are. In fact, the criticisms each group makes of the epistemics and practices of other groups are mostly on-point.

Once people have formed a political splinter group, almost anything they write will start to contain a subtle attempt to slip in the doctrine they're trying to push. With sufficient skill, you can make it hard to pin down where the frame is getting shoved in.

I have at one point or another been personally involved with a quite large fraction of the rationalist subcults. This has made the thread hard to read - I keep feeling a tug of motivation to jump into the fray, to take a position in the jostling for credibility or whatever it is being fought over here, which is then marred by the realization that this will win nothing. Local validity isn't a cure for wrong questions. The tug of political defensiveness that I feel, and that many commenters are probably also feeling, is sufficient to show that whatever question is being asked here is not the right one.

Seeing my friends behave this way hurts. The defensiveness has at this point gone far enough that it contains outright lies.

I'm stuck with a political alignment because of history and social ties. In terms of political camps, I've been part of the Vassarites since 2017. It's definitely a faction, and its members obviously know this at some level, despite their repeated insistence to me of the contrary over the years.

They’re right about a bunch of stuff, and wrong about a bunch of stuff. Plenty of people in the comments are looking to scapegoat them for trying to take ideas seriously instead of just chilling out and following somebody’s party line. That doesn’t really help anything. When I was in the camp, people doing that locked me in further, made outsiders seem more insane and unreachable, and made public disagreement with my camp feel dangerous in the context of a broader political game where the scapegoaters were more wrong than the Vassarites.

So I’m making a public declaration of not being part of that camp anymore, and leaving it there. I left earlier this year, and have spent much of the time since trying to reorient / understand why I had to leave. I still count them among my closest friends, but I don't want to be socially liable for the things they say. I don't want the implicit assumption to be that I'd agree with them or back them up.

I had to edit out several lines from this comment because they would just be used as ammunition against one side or another. The degree of truth-seeking in the discourse is low enough that any specific information has to be given very carefully so it can’t be immediately taken up as a weapon.

This game sucks and I want out.

Replies from: Benquo
comment by Benquo · 2021-10-25T03:07:44.780Z · LW(p) · GW(p)

I still count them among my closest friends, but I don’t want to be socially liable for the things they say. I don’t want the implicit assumption to be that I’d agree with them or back them up.

Same. I don't think I can exit a faction by declaration without joining another, but I want many of the consequences of this. I think I get to move towards this outcome by engaging nonfactional protocols more, not by creating political distance between me & some particular faction.

Replies from: lex
comment by lex · 2021-10-28T00:42:28.901Z · LW(p) · GW(p)

Without disagreeing with any specific logical statement you have made, I call bullshit on this. You have quoted a short segment such that technically what you're saying is not false, but you're drawing a broader equivalence & request for social credit around "not wanting to be in factions" which is not valid in context of the fact that you are blatantly participating in a faction and doing factional protocols. People are usually on board with the idea of it being better to just talk rather than do politics, and I acknowledge & appreciate the sense in which you want to want to not do politics, but there is a game here which you are playing in and I wish you would own up to that.

Replies from: jessica.liu.taylor, Benquo
comment by jessicata (jessica.liu.taylor) · 2021-10-28T02:07:39.493Z · LW(p) · GW(p)

If Ben says: "I desire X, and I could get that by doing less faction stuff", that implies that he is doing faction stuff. But you're taking it as implying that he isn't.

The only way I could understand your criticism is as making a revealed-preference critique, where Ben is expressing a preference for doing non-faction stuff but is still doing faction stuff. That doesn't seem like a strong critique, though, since doing less faction stuff is somewhat difficult, and noticing the problem is the first step to fixing it.

comment by Benquo · 2021-10-28T02:05:53.832Z · LW(p) · GW(p)

Seems like you agree with what I actually said, and are claiming to find some implied posture objectionable, but aren't willing to criticize me explicitly enough for me or anyone else to learn from. ¯\_(ツ)_/¯

comment by temporary_visitor_account · 2021-10-18T16:38:59.460Z · LW(p) · GW(p)

I want to provide an outside view that people might find helpful. This is based on my experience as a high school teacher (6 months total experience), a professor at an R1 university (eight years total experience), and someone who has mentored extraordinarily bright early-career scientists (15 years experience).

It’s very clear to me that the rationalist community is acting as a de facto school and system of interconnected mentorship opportunities. In some cases (CFAR, e.g.) this is explicit.

Academia also does this. It has ~1000 years of experience, dating from the founding of the University of Cambridge, and has learned a few things in that time.

An important discovery is that there are serious responsibilities that come with attending to "young" minds (young in quotes; generically the first quarter of life; depending on era, that's <15 up to, today, around <30). These minds are considered inherently vulnerable, needing to be protected from manipulation, boundary violations, etc. It's been discovered that making this a blanket and non-negotiable rule has significant positive epistemic and moral effects that haven't been replicated with alternatives.

Even before academic institutions, this is seen: consider the extensive discussions in the Socratic dialogues. In later eras it is implicit in the phrase in loco parentis. Historically this has appeared as concern for the soul, or (in the post-religious era) psychological health. Importantly, the young minds are not allowed to waive this concern.

I see two places where the rationalist community has absolutely failed in implementing this responsible practice. The outcomes have been not sane.

  1. Small: it’s clear that there is a repeat abuser associated with multiple breakdowns and drug abuse. However, even in a post that’s describing this, he’s described as charismatic, fun, enlightening by people with positions of status and influence.

Reading Scott Alexander's comment above, the paragraph beginning with "I want to clarify that I don't dislike Vassar", my feeling is that I don't care if powerful/influential people in the community are ok with the person, or think he's well-intentioned. Simply having high-status people say "he's a friend" mostly cancels out the effect of formal disinvitations, but it's clear that multiple vulnerable people, knowingly or not, are reporting harm.

  2. Large: the health and well-being of young minds is continually subordinated to a higher goal (preventing AI apocalypse) that is allowed to trump basic principles of care. Whether or not the people with power say this is what's happening, or even publicly disavow it, it's clear it's being allowed to happen. Vulnerable people are getting wrapped up in "accelerated timelines" (etc.) that are leading them to make bad personal decisions, and nobody is calling this out as a systematic problem.

I do not buy AI risk on this scale/urgency. But even if I did, I would consider the immediate duty of care to override these concerns. If I didn’t want to do that, I would not work with vulnerable young minds.

A final remark. When I was a high school teacher, it was a residential setting. A colleague decided to start a personality cult among the young men. It got extraordinarily messed up and abusive extraordinarily quickly (two weeks). The man was a sociopath; the young men were not, but they engaged in sexual abuse. This happened because of lax oversight from the principal who was in the process of retiring/handing over the reins.

I hope this helps. I wish potentially vulnerable young people (in this era, everyone under thirty) who see the rationalist community as a source of guidance and mentorship to take care of each other and demand more from influential people.

Please contact me privately if you have any concerns about what I’ve written above.

Edit: minor edits; miscounted years since my first faculty appointment.

Replies from: gwillen, frank-bellamy, Sniffnoy, Linch
comment by gwillen · 2021-10-18T23:24:38.715Z · LW(p) · GW(p)

Upvoted for thoughtful dissent and outside perspective.

I ... have some complicated mixed feelings here. LW has a very substantial contingent of "gifted kids", who spent a decent chunk of their (...I suppose I should say "our") lives being frustrated that the world would not take them seriously due to age. Groups like that are never going to tolerate norms saying that young age is a reason to talk down to someone. And guidelines for protecting younger people from older people, to the extent that they involve disapproval or prevention of apparently-consensual choices by younger people, are going to be tricky that way. Any concern that "young minds are not allowed to waive" will be (rightly) seen as condescending, especially if you extend "young" to age 30. This does not really become less true if the concern is accurate.

This is extra-true here, because the "rationalist community" is not a single organization with a hierarchy, or indeed (I claim) even really a single community. So you can't make enforceable global rules of conduct, and it's very hard to kick someone out entirely (although I would say it's effectively been done a couple of times.)

You might be relieved to learn that, at least from where I'm standing, a substantial fraction of the community is not in fact working towards (or necessarily even believing strongly in) the higher goal of preventing the AI apocalypse. (I am not personally working towards it; I would not say that I have a firm resolution either way on how much I believe in it, but I tend towards being skeptical of most specific forms that I have seen described.)

And, not to "tu quoque" exactly, I hope, but... my sense is that academia is not great along this axis? I have never been a grad student, but I would say at least half my grad student friends have had significant mental health problems directly related to their work. And a small but substantial number have had larger problems stemming directly from abusive or (more often) incompetent advisors. In most cases, the latter seemed to have very little recourse against their advisors, especially the truly abusive ones, which seems like exactly the sort of thing that you're calling out here. There were always theoretically paths they could take to deal with the problem, but in practice the advisor has so much more power in the relationship that it would usually involve major bridge-burning to use them, and in some cases it's not clear it would have helped even then.

This latter problem -- of theoretical escalation paths around your manager existing, but being unusable in practice -- seems pretty similar, to me, between academia and industry. But my impression is that academia has much worse "managers", on average, because advisors are selected primarily for research skill, and often have poor management skills.

This is all to say -- coming back around to the point -- that I think academia has lots of people who behave in ways similar to how Michael Vassar is described here. (I have not met him personally, and cannot speak to that description myself.) Granted, academia has rules of conduct that would prevent some of the things seen here. I expect it would be very rare for an advisor to get their advisees into psychedelic drugs. But on the flip side, people in Vassar's "orbit" who grow disillusioned with him are free to leave. Grad students generally cannot do that, without a significant risk of losing years of work, and their hopes of an academic career.

If anything, I think the ability to say "this person is a terrible influence, and also we can acknowledge the good they have done" may be protective from a failure mode that I have anecdotally heard of in academia multiple times: the PI who is abusive in some way, and the "grapevine" is somewhat aware of this, but whose work is too valuable (e.g. in terms of grant money) to do anything about.

comment by River (frank-bellamy) · 2021-10-19T02:05:46.686Z · LW(p) · GW(p)

I find this position rather disturbing, especially coming from someone working at a university. I have spent the last sixish years working mostly with high school students, occasionally with university students, as a tutor and classroom teacher. I can think of many high school students who are more ready to make adult decisions than many adults I know, whose vulnerability comes primarily from the inferior status our society assigns them, rather than any inherent characteristic of youth. 

As a legal matter (and I believe the law is correct here), your implication that someone acts in loco parentis with respect to college students is simply not correct (with the possible exception of the rare genius kid who attends college at an unusually young age). College students are full adults, both legally and morally, and should be treated as such. College graduates even more so. You have no right to impose a special concern on adults just because they are 18-30.

I think one of the particular strengths of the rationalist/EA community is that we are generally pretty good at treating young adults as full adults, and taking them and their ideas seriously. 

comment by Sniffnoy · 2021-10-19T08:35:38.372Z · LW(p) · GW(p)

I want to more or less second what River said. Mostly I wouldn't have bothered replying to this... but your line of "today around <30" struck me as particularly wrong.

So, first of all, as River already noted, your claim about "in loco parentis" isn't accurate. People 18 or over are legally adults; yes, there used to be a notion of "in loco parentis" applied to college students, but that hasn't been current law since about the 60s.

But also, under 30? Like, you're talking about grad students? That is not my experience at all. Undergrads are still treated as kids to a substantial extent, yes, even if they're legally adults and there's no longer any such thing as "in loco parentis". But in my experience grad students are, absolutely, treated as adults, and I haven't heard of things being otherwise. Perhaps this varies by field (I'm in math) or location or something, I don't know, but I at least have never heard of that before.

comment by Linch · 2021-10-19T01:10:53.053Z · LW(p) · GW(p)

Thanks for the outside perspective. If you're willing to go into more detail, I'm interested in a more detailed account from you on both what academia's safeguards are and (per gwillen's comment [LW(p) · GW(p)]) where you think academia's safeguards fall short and how that can be fixed. 

This is decision-relevant to me as I work in a research organization outside of academia (though not working on AI risk specifically), and I would like us to both be more productive than typical in academia and have better safeguards against abuse.

If it helps, we have about 15 researchers now, we're entirely remote, and we hire typically from people who just finished their PhDs or have roughly equivalent research experience, although research interns/fellows are noticeably younger (maybe right after undergrad is the median). 

Replies from: temporary_visitor_account
comment by temporary_visitor_account · 2021-10-19T20:09:26.913Z · LW(p) · GW(p)

Sure. I'm really glad to hear. This is not my community, but you did explicitly ask.

This is just off the top of my head, and I don't mean it to be a final complete and correct list. It's just to give you a sense of some things I've encountered, and to help you and your org think about how to empower people and help them flourish. Academia uses a lot of these to avoid the geek-MOP-sociopath cycle.

I'm assuming your institution wants to follow an academic model, including teaching, mentorship, hierarchical student-teacher relationships, etc.

An open question is when you have a duty of care. My rule of thumb is (1) when you or the org is explicitly saying "I'm your teacher", "I'm your mentor"; (2) when you feel a power imbalance with someone because this relationship has arisen implicitly; (3) when someone is soliciting this role from you, whether you want it or not.

If you're a business making money, that's quite different, just say "we're going to use your body and mind to make money" and you've probably gotten your informed consent. :)

* Detection

1. Abuse is non-Gaussian. A small number of people may experience a great deal, while the majority see nothing wrong. That means that occasional random sampling is not going to identify problems. There are a lot of comments here saying "XYZ org (etc) was great, I saw nothing bad" — this is not a good signal.

2. Women and people from marginalized groups are at much higher risk. They're less able to trust a random stranger, and they're also less able to appeal to social norms or law enforcement. They also are at higher risk if they do report.

Somebody in the comments said that many of the people reporting abuse are trans, and "trans people suffer from mental illness more", so maybe they're just crazy and everything was actually pretty OK.

Hopefully this reasoning looks as crazy to you as it does to me; in the 1970s people would have said the same about gay people, but now we realize that a lot of that was due to homophobia (etc), and a lot of it was due to the fact that gay people, being marginalized, made soft targets for manipulation, blackmail, etc.

3. Always take reports seriously, even if "the person seems weird".

* Prevention

4. The most obvious thing is use common sense. If they didn't need it at Solvay in 1927, you probably don't need it now.

For example, avoid weird hyper-personal psychological interventions (circling, debugging, etc). Therapy is a regulated profession for good reasons, and the evidence-based therapies we know about have safeguards (e.g., asymmetric privacy, therapeutic alliance, regulations about sexual activity and business relationships, boards that manage complaints, etc.)

5. Obey the law. Don't allow underage drinking, illegal drug use, drunk driving, etc., and don't allow others to allow it. Have a zero tolerance policy on this (if that feels like a buzzkill, you can say it has to do with liability).

The reason for this is (IMO) actually quite interesting. It's not that the law is necessarily a good guide to morals. It's more that abusers tend to be out of control (because they have a psychological disorder, because they think they're above or beyond ordinary requirements, or because they're abusing drugs themselves, etc), and violating the law is a sign of this.

The extreme example I know of is the high school personality cult I mentioned above. The colleague in question was (in retrospect) terrifying: he engaged in animal abuse and setting fires (two of the Macdonald triad), and the young men in his cult engaged in sexual abuse.

In the end, however, he was "busted" (fired) for statutory rape. The other stuff going on was too fuzzy, gradual, and excusable to hit people's radar at first (think boiling frog). But SR is a bright line, and if someone's crazy enough to cross that line, it's a signal that other things are off as well.

6. Preserve personal-professional boundaries with students/mentees. A baseline assumption is that you shouldn't really know much about anyone's personal life -- who they're dating, what their mental problems are, what kind of sex they like. It's not forbidden knowledge, but if you (or someone else in the org) does, you might ask: to what end? Is this helping them thrive?

Similarly, respect when someone wants those boundaries, or when they want to re-establish them.

Dating and sexual relationships across the student-teacher boundary should be completely out.

* Mitigation

7. When powerful people in your group say that an abusive person is an advisor, that sends a message to vulnerable people that they ought to, or need to, tolerate abuse by that person in other contexts. If you believe a person is abusing vulnerable people to whom you, or the org, owe a duty of care, you ought to cut communication with that person.

8. Don't give charismatic or "high performing" people a pass. There's no real correlation between excellence and being abusive -- if anything, abusiveness's positive correlation with drug and mental-health problems points toward a negative one. Meanwhile, the same thing that can enable abuse (dark triad traits) can also appear as high performance.

9. Done right, none of this requires drama. Among other things, if your org is aware of (1) through (6), abusers will go elsewhere. Having zero tolerance also makes it a lot easier to help good-faith people not abuse unintentionally -- you can step in before things go off the rails, when the stakes are low, and save important relationships.

EDIT: since you asked where academia is falling short. I'd say it falls short in (1) and (2), is sort of OK in (3) in part because of Title IX and similar things, is good in (4) in part because there are long-standing traditions of common sense, and in (5) because lawyers, and falls short in (6), (7), and (8).

What's being described here seems to be violating all eight rules at different levels. Most obvious to me from the outside is (1), (4), (5), (6) and (7).

EDIT2: since this came up. Good practice is that the vulnerable person can't waive these concerns.

For example, the answer to "but I want to do [intense weird psychological thing] with my mentor" should be "not as long as you or this mentor remains with the org", or at the very least "not as long as this mentor remains with the org with a duty of care towards you".

Replies from: philh, Linch, ChristianKl
comment by philh · 2021-10-22T22:03:25.404Z · LW(p) · GW(p)

Somebody in the comments said that many of the people reporting abuse are trans, and “trans people suffer from mental illness more”, so maybe they’re just crazy and everything was actually pretty OK.

Hopefully this reasoning looks as crazy to you as it does to me; in the 1970s people would have said the same about gay people, but now we realize that a lot of that was due to homophobia (etc), and a lot of it was due to the fact that gay people, being marginalized, made soft targets for manipulation, blackmail, etc.

So, I think this is not a fair reading of the comment in question. Not a million miles away from it, but far enough that I wanted to point it out.

But also, you seem to be saying something like: "consider that maybe trans people's rates of mental illness are downstream of them being trans and society being transphobic, not that their transness is downstream of mental illness".

And, okay, but...

Consider a hypothetical trans support forum. If rationalistthrowaway is right, you'd expect the members of that forum to have higher than average rates of mental illness, possibly leading to high profile events like psychotic breaks and suicides. And it sounds like you don't disagree with this?

(Like, it sounds like you might want to add "some of what gets diagnosed in mental illness in trans people is just the diagnostic machinery being transphobic, and that accounts for some of the increase". Sure, stipulated. But it doesn't sound like you'd say that accounts for all of the increase.)

Then someone sees these high profile events, and wonders what's going on here? Is it something about the forum that's triggering them? And someone else points out that trans people have a high baseline rate of mental illness, and that seems highly relevant.

It seems to me that your reply would be just as fitting there. Which is to say, I think it misunderstands the point being made; and also (through social punishment) makes it less likely that people will be able to figure out what's going on and be able to make things better.

comment by Linch · 2021-10-19T22:18:55.242Z · LW(p) · GW(p)

Thanks so much for the response! I really appreciate it.

I'm assuming your institution wants to follow an academic model, including teaching, mentorship, hierarchical student-teacher relationships, etc.

I think we have more of a standard manager-managee hierarchical relationship, with the normal corporate guardrails plus a few more. We also have explicit lines of reporting for abuse or other potential issues to people outside of the organization to minimize potential coverups.

Here are my general thoughts:

An open question is when you have a duty of care

I'm kind of confused. Surely organizations by default have a power dynamic over employees, and managers over reports, and abusing this is bad? Maybe I'm confused and you mean something stronger by "duty of care".

  1. Seems straightforwardly true to me, though I think you're maybe underestimating correlates of direct harm. (eg I expect in many of the cases cited, there's things like megalomania, insufficient humility, insufficient willingness to listen to contrary evidence, caring more about charismatic personalities than object-level arguments, etc)
  2. Speaking as someone in the subset of "women and minorities", I'd be pretty concerned about any form of special treatments or affordances given because "women and minorities" are at higher risk, aside from really obvious ones like being moderately more careful about male supervisor/female supervisee.
    1. In particular, this creates bad dynamics/incentive structures, like making it less likely to provide honest/critical feedback to "marginalized" groups, which is one of the things I was warned against in management training.
  3. This seems correct. Also you want multiple trusted points of contact outside the organization, which I think both academia and rationality are failing at.
    1. EA organizations often have Julia Wise, but she's stretched too thin and thus has (arguably) made significant mistakes as a result, as pointed out in a different thread.
  4. This seems right to me. I think "common sense" should be dereferenced a little for people coming from different cultures, but the company culture of the Anglo-American elite seems not-crazy as a starting point. 
  5. I think it's Very Bad to allow most forms of lawbreaking on "work time." But I think you're implying something much stronger than that, and (speaking as someone who thinks all recreational drugs are dumb and straightforwardly do not pass any cost-benefit analysis, and has consumed less than a bottle of wine in my entire life) I really don't think it's the job of a workplace to police employees' time off, regardless of whether it's doing recreational drugs or listening to pirated music. 
    1. maybe it's different if jobs are in person?
      1. But I once worked at a company which had in its code of conduct that employees can't drink at parties with other employees, and even though I had no inclination to drink, I still thought that was clearly too crazy/controlling
  6. This seems right. Most companies have rules against managers dating subordinates, and I think for probably good reasons.
  7. This sounds right, though "if you believe" is a probabilistic claim, and if I think the base rate is 5%, I'm not sure whether you think cutting communication should kick in at 15% (already ~3x elevated risk!) or 75% or 95%.
  8. I think I agree? But I think your reasoning is shoddy here. "There's no real correlation between excellence and being abusive" is a population claim, but obviously what people are evaluating is usually individuals.
  9. "Among other things, if your org is aware of (1) through (6), abusers will go elsewhere" One thing I'm confused about is if an organization has credible Bayesian evidence (say 40% is the cutoff) that an employee abuses their reports, it may make sense for the organization to fire them, way before there's enough evidence to convict in a court of law. But it's unclear what you should do in the broader ecosystem. 
    1. In academia my impression is that professors often switch universities after coming under suspicion, which seems not ideal and not what I'd want to replicate.
Replies from: temporary_visitor_account
comment by temporary_visitor_account · 2021-10-19T22:39:02.098Z · LW(p) · GW(p)

This seems like the beginning of a very good discussion, but:

  1. I want to be clear that I'm not a member of the LW community, and I don't want to take up space here.
  2. There are complex and interesting ideas in play on both sides that are hard to communicate in a back-and-forth, and are perhaps better saved for a structured long-form presentation.

To that end, I'll suggest that if you like we chat offline. I'm in NYC, for example, and you're welcome to get in touch via PM.

Replies from: Linch
comment by Linch · 2021-10-19T23:11:22.054Z · LW(p) · GW(p)

What I'm talking about is a system of moral duties and obligations connected to an explicitly academic mission. Academia is older than the corporation, and is a separate world. It's very important not to confuse them, and I wish that corporations (and "research labs" associated with corporations) would state very clearly "we are in no way an academic institution".

To be clear, my own organization is a nonprofit. We are not interested in making money, nor in doing other things of low moral value. 

I currently think emulating the culture of normal companies is a better starting template than academia or other research nonprofits (many of whom have strong positions that they want to believe and research that oh-so-interestingly happen to justify their pre-existing beliefs), though of course different cultures have different poisons that are more or less salient to different people. 

But yeah, let's take this offline.

comment by ChristianKl · 2021-10-20T08:30:37.072Z · LW(p) · GW(p)

Women and people from marginalized groups are at much higher risk. They're less able to trust a random stranger, and they're also less able to appeal to social norms or law enforcement. They also are at higher risk if they do report.

That seems to me doubtful. Relative to victimization survey numbers, reported rape figures suggest that women are much more willing to report it if they get raped than men are. 

A woman who reports sexual harassment from a male mentor has it radically easier than a man who reports sexual harassment from a female mentor.

(this does not diminish the fact that it's worth listening to reports from women, but the mental model behind believing that it's easy for men to report is wrong)

comment by Unreal · 2021-10-18T16:28:04.285Z · LW(p) · GW(p)

Attempt to get shared models on "Variations in Responses":

Quote from another comment by Mr. Davis Kingsley:

My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present weren't really the main thing.

I bid: 

This counts as counter-evidence, but it's unfortunately not very strong counter-evidence. Or at least it's weaker than one might naively believe. 

Why?

It is true of many groups that even while most of a group's activities or even the main point of a group's activities might be wholesome, above board, above water, beneficial, etc., it is possible that this is still secretly enabling the abuse of a silent or hidden minority. The minority that, in the end, is going to be easiest to dismiss, ridicule, or downplay. 

It might even be only ONE person who takes all the abuse. 

I think this dynamic is so fucked that most people don't want to admit that it's a real thing. How can a community or group that is mostly wholesome and good and happy be hiding atrocious skeletons in their closet? (Not that this is true of CFAR or MIRI, I'm not making that claim. I do get a 'vibe' from Zoe's post that it's what Leverage 1.0 might have been like. /grimace) 

An aside: I am somewhat angry with people's responses to jessicata in this comment section... (esp Viliam and somewhat Eli Tyre) I guess because I am smelling a dynamic where jessicata might be drawing some "blame-y" energy towards her (unintentionally), and people are playing into it. But uhhh I notice that my own "drama triangle rescuer" is playing into my reaction soo. D: Not sure what to do.

Anyway, to properly investigate matters like this, it's pretty important to be willing to engage the hidden / silent  / unpopular minority. Ideally, without taking on all their baggage (since they prob have some baggage). 

If we're not ready to engage that minority (while skillfully discerning what's delusion and what isn't), then we shouldn't force it. imo. 

Michael Vassar, it seems, is ... I notice that we collectively are having a hard time thinking about him clearly. At different points he is being painted a cartoon villain but also there's a weird undertone from his defenders of ... really wanting to downplay his involvement? That smells funky. Like, sure, maybe he wasn't very directly involved and stuff but... do you NOT consider him to be a thought leader for your group? Why are they referred to as Vassarites? Are you SURE you're not doing something slippery inside your minds? It kind of feels like something is mentally walled off in your own heads, ... :l :l 

On the other side of it, why do people seem TOO DETERMINED to turn him into a scapegoat? Most of you don't sound like you really know him at all. And while it's nice to have a bunch of one-time impressions of a guy, this is not a great foundation for judging his character either. And other people seem a little too eager to use unfounded rumors as evidence. 

I will admit that I don't particularly trust psychiatric institutions or mainstream narratives about psychology, and so I have some bias against Scott Alexander's take (with no ill will towards the man himself). 

I also mentioned in a different comment that I suspect there's some 'poison' inside Vassarite narratives about narrative control, society, institutions, etc. But I feel hopeful about the potential for a different framing that doesn't have the poison in it. 

... 

I am advocating for a lot more discernment and self-awareness in this discussion. And the things Anna mentioned in another comment, like caring, compassion, and curiosity. 

Replies from: Viliam, Vaniver, Unreal
comment by Viliam · 2021-10-18T23:38:57.075Z · LW(p) · GW(p)

Please allow me to point out one difference between the Rationalist community and Leverage that is so obvious and huge that many people possibly have missed it.

The Rationalist community has a website called LessWrong, where people critical of the community can publicly voice their complaints and discuss them. For example, you can write an article accusing their key organizations of being abusive, and it will get upvoted and displayed on the front page, so that everyone can add their part of the story. The worst thing the high-status members of the community will do to you is publicly post their disagreement in a comment. In turn, you can disagree with them; and you will probably get upvoted, too.

Leverage Research makes you sign an NDA, preventing you from talking about your experience there. Most Leverage ex-members are in fact afraid to discuss their experience. Leverage even tries (unsuccessfully) to suppress the discussion of Leverage on LessWrong.

Considering this, do you find it credible that the dynamics of both groups is actually very similar? Because that seems to be the narrative of the post we are discussing here -- the very post that got upvoted and is displayed publicly to insiders and outsiders alike. I do strongly object against making this kind of false equivalence.

it's pretty important to be willing to engage the hidden / silent  / unpopular minority

The hidden / silent / unpopular minority members can post their criticism of MIRI/CFAR right here, and most likely it will get upvoted. No legal threats whatsoever. No debugging sessions with their supervisor. Yes, some people will probably disagree with them, and those will get upvoted, too.

You know, this reminds me of comparison between dictatorships and democracies. In a dictatorship, the leader officially has a 100% popular support. In a democracy, maybe 50% of people say that the country sucks and the leadership is corrupt. Should we take these numbers at the face value? Should we even discount them both to the same degree and say "if the dictatorship claims to have 100% popular support, but in fact only 20% of people are happy with the situation, then if the democracy claims to have 50% popular support, we should apply the same ratio and conclude that only 10% of people are happy?".

Because it seems to me that you are making a similar claim here. We know that some people are afraid to talk publicly about their experience in Leverage. You seem to assume that there must be a similar group of people afraid to talk publicly about their experience in MIRI/CFAR. I think this is unlikely. I assume that if someone is unhappy about MIRI/CFAR doing something, there is probably a blog post about it somewhere (not necessarily on LessWrong) already.

On the other side of it, why do people seem TOO DETERMINED to turn [Michael Vassar] into a scapegoat?

Do you disagree with specific actions being attributed to Michael? Do you disagree with the conclusion that it is a good reason to avoid him and also tell all your friends to avoid him?

Replies from: Unreal, Vladimir_Nesov
comment by Unreal · 2021-10-19T02:13:56.651Z · LW(p) · GW(p)

Considering this, do you find it credible that the dynamics of both groups is actually very similar?

I'm a little unsure where this is coming from. I never made explicitly this comparison. 

That said, I was at a CFAR staff reunion recently where one of the talks was on 'narrative control' and we were certainly interested in the question about institutions and how they seem to employ mechanisms for (subtly or not) keeping people from looking at certain things or promoting particular thoughts or ideas. (I am not the biggest fan of the framing, because it feels like it has the 'poison'—a thing I've described in other comments.)

I'd like to be able to learn about these and other such mechanisms, and this is an inquiry I'm personally interested in. 

I do strongly object against making this kind of false equivalence.

I mostly trust that you, myself, and most readers can discern the differences that you're worried about conflating. But if you genuinely believe that a false equivalence might rise to prominence in our collective sense-making, I'm open to the possibility. If you check your expectations, do you expect that people will get confused about the gap between the Leverage situation and the CFAR/MIRI thing? Most of the comments so far seem unconfused on this afaict.

You seem to assume that there must be a similar group of people afraid to talk publicly about their experience in MIRI/CFAR.

Sorry, I think I wasn't being clear. I am not assuming this. 

My claim is that comments similar to the one Davis is making don't serve as a general strong counter-argument for situations where there might be a hidden minority. 

I am not (right now) claiming CFAR/MIRI has such a hidden minority. Just that the kind of evidence Davis was trying to provide doesn't strike me as very STRONG evidence, given the nature of the dynamics of this type of thing. 

Where the dynamics of this kind of thing can create polarized experiences, where a minority of people have a really BAD time, while most people do not notice or have it rise to the right level of conscious awareness. I am trying to add weight to Zoe's section in her post on "variations in responses." Even though Leverage was divided into subgroups and the workshops were chill and all that, I don't think the subgroup divisions are the only force behind why there's a lot of variation in responses. 

I think even without subgroups, this 'class division' thing might have turned up in Leverage. Because it's actually not very hard to create a hidden minority, even in plain sight. 

And y'know what, even though CFAR is certainly not as bad as Leverage, and I'm not trying to bucket the two together... I will put forth that a silent minority has existed at CFAR, in the past, and that their experience was difficult and pretty traumatic for them. And I have strong reasons to believe they're still 'not over it'. They're my friends, and I care about them. I do not think CFAR is to blame or anything (again, I'm uninterested in the blame game). 

I hope it is fine for me to try to investigate the nature of these group dynamics. I don't really buy that my comments are contributing to a wild conflation between Leverage and CFAR. If anything, I think investigating on this level will contribute to greater understanding of the underlying patterns at play. 

Replies from: Viliam
comment by Viliam · 2021-10-19T10:06:39.797Z · LW(p) · GW(p)

The conflation between Leverage and CFAR is made by the article. Most explicitly here...

Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).

...and generally, the article goes like "Zoe said that X happens in Leverage. A kinda similar thing happens in MIRI/CFAR, too." The entire article (except for the intro) is structured as a point-by-point comparison with Zoe's article.

Most commenters don't buy it. But I imagine (perhaps incorrectly) that if a person unfamiliar with MIRI/CFAR and rationalist community in general would read the article, their impression would be that the two are pretty similar. This is why I consider it quite important to explain, very clearly, that they are not. This debate is public... and I expect it to be quote-mined (by RationalWiki and consequently Wikipedia).

I hope it is fine for me to try to investigate the nature of these group dynamics. 

Sure, go ahead!

I will put forth that a silent minority has existed at CFAR, in the past, and that their experience was difficult and pretty traumatic for them. And I have strong reasons to believe they're still 'not over it'.

I would be happy to hear about their experience. Generally, the upvotes here are pretty much guaranteed. Specific accusations can be addressed -- either by "actually, you got this part wrong" or by "oops, that was indeed a mistake, and here is what we are going to do to prevent this from happening again".

(And sometimes by plain refusal, like "no, if you believe that you are possessed by demons and need to exorcise them, the rationalist community will not play along; but we can recommend a good therapist". Similarly, if you like religion, superstition, magic, or drugs, please keep them at home, do not bring them to community activities, especially not in a way that might look like the community endorses this.)

Dear silent minority, if you are reading this, what can we do to allow you to speak about your experience? If you need anonymity, you can create a throwaway account. If you need a debate where LessWrong moderators cannot interfere, one of you can create an independent forum and advertise it here. If you are afraid of some, dunno, legal action or whatever, could you please post a proposal of a public commitment that MIRI/CFAR should take to allow you to speak freely?

(I might regret giving this advice but heck, just contact David Gerard from RationalWiki, he will be more than happy to hear and publish any dirt you have on MIRI/CFAR or anyone in the rationalist community.)

Any other proposals, what specifically could MIRI/CFAR do, or stop doing, to allow the silent minority to talk about their difficult and traumatic experience with the rationalist community and its organizations?

Replies from: Unreal
comment by Unreal · 2021-10-19T13:55:29.677Z · LW(p) · GW(p)

But I imagine (perhaps incorrectly) that if a person unfamiliar with MIRI/CFAR and rationalist community in general would read the article, their impression would be that the two are pretty similar.

I seem less concerned about this than you do. I don't see the consequences of this being particularly bad, in expectation. It seems you believe it is important, and I hear that. 

I would be happy to hear about their experience.

I'm frustrated by the way you are engaging in this... there's a strangely blithe tone, and I am reading it as somewhat mean? 

If you want to engage in a curious, non-judgy, and open conversation about the way this conversation is playing out, I could be up for that (in a different medium, maybe email or text or a phone call or something). Continuing on the object level like this is not working for me. You can DM me if you want... but obviously fine to ignore this also. If I know you IRL, it is a little more important to me, but if I don't know you, then I'm fine with whatever happens. Well wishes. 

comment by Vladimir_Nesov · 2021-10-19T14:56:36.032Z · LW(p) · GW(p)

This comment mostly makes good points in their own right, but I feel it's highly misleading to imply that those points are at all relevant to what Unreal's comment discussed. A policy doesn't need to be crucial to be good. A working system doesn't need to be worse than terrible to get attention to its remaining flaws. Inaccuracy of a bug report should provoke a search for its better form, not nullify its salience.

comment by Vaniver · 2021-10-18T17:36:10.605Z · LW(p) · GW(p)

On the other side of it, why do people seem TOO DETERMINED to turn him into a scapegoat? Most of you don't sound like you really know him at all.

A blogger I read sometimes talks about his experience with lung cancer (decades ago), where people would ask his wife "so, he smoked, right?" and his wife would say "nope" and then they would look unsettled. He attributed it to something like "people want to feel like all health issues are deserved, and so their being good / in control will protect them." A world where people sometimes get lung cancer without having pressed the "give me lung cancer" button is scarier than the world where the only way to get it is by pressing the button.

I think there's something here where people are projecting all of the potential harm onto Michael, in a way that's sort of fair from a 'driving their actions' perspective (if they're worried about the effects of talking to him, maybe they shouldn't talk to him), but which really isn't owning the degree to which the effects they're worried about are caused by their instability or the them-Michael dynamic.

[A thing Anna and I discussed recently is, roughly, the tension between "telling the truth" and "not destabilizing the current regime"; I think it's easy to see there as being a core disagreement about whether or not it's better to see the way in which the organizations surrounding you are ___, and Michael is being thought of as some sort of pole for the "tell the truth, even if everything falls apart" principle.]

Replies from: Unreal
comment by Unreal · 2021-10-18T17:57:53.870Z · LW(p) · GW(p)

+1 to your example and esp "isn't owning the degree to which the effects they're worried about are caused by their instability or the them-Michael dynamic." 

I also want to leave open the hypothesis that this thing isn't a one-sided dynamic, and Michael and/or his group is unintentionally contributing to it. Whereas the lung cancer example seems almost entirely one-sided. 

comment by Unreal · 2021-10-18T16:39:04.408Z · LW(p) · GW(p)

Sorry if my tone about "something slippery" was way too confronting. I have simultaneously a lot of compassion and a lot of faith in people's ability to 'handle difficult truths' or something like that. But that nuanced tone is hard to get across on the internet. 

If you feel negatively impacted by my comment here, you are welcome to challenge me or confront me about it here or elsewhere. 

comment by lwanon · 2021-10-18T16:24:23.487Z · LW(p) · GW(p)

I don't live in the Bay anymore and haven't been on LessWrong for a while, but was informed of this thread by a friend.

I have only one thing to say, and will not be commenting any further due to an NDA.

Stay away from Geoff Anders and whatever nth iteration of "Leverage" he's on now.

Replies from: Freyja
comment by Freyja · 2021-10-18T17:38:51.693Z · LW(p) · GW(p)

You might not be able to say this, but I’m wondering whether it’s one of the NDAs Zoe references Geoff pressuring people to sign at the end of Leverage 1.0 in 2019.

comment by Unreal · 2021-10-19T16:01:16.742Z · LW(p) · GW(p)

(This is not a direct response to PhoenixFriend's comment but I am inspired because of that comment, and I recommend reading theirs first.) 

Note: CFAR recently had a staff reunion that I was present for. I made updates, including going from "Anna is avoidant, afraid, and tries to control more than she ought" to "Anna is in the process of updating, seeking feedback, and has reaffirmed honesty as a guiding principle." Given this, I feel personally relaxed about CFAR being in good hands for now; otherwise, maybe I'd be more agitated about CFAR. 

I'm not interested in questions of CFAR's virtue or lack thereof or fighting over its reputation. So I'm just gonna talk about general group dynamics with CFAR as an example, and people can join on this segment of the convo if they want. 

I don't think CFAR is a cult, and things did not seem comparably bad to Leverage. This is almost a meaningless sentence? But let's get it out of the way? 

RE: Class distinctions within CFAR

So... my sense of the CFAR culture, even though it was indeed a small group of 12-ish people, was that there was a social hierarchy. Because as monkeys, of course, we would fall into such a pattern. 

I felt an uncomfortable tension between the stated 'egalitarian / flat' thing and the actual, living, breathing group dynamics we were in. I saw people like Duncan and Eli embodying the egalitarian principles and treating everyone as equal peers. I admire them for this. 

But I also think some (most?) people create a bubble field effect—where by their way of being, they create the reality around them, and this reality tends to align with their expectations / view of reality. And people even act differently as a result of being in their particular bubble field. (Reality distortion field is sometimes used, but I feel it has a negative connotation I'm not trying to bring in.) 

So Duncan had a bubble field that I claim was kind of nice. Not universally pleasant but. Generally wholesome. Well, I claim it was better than mine. 

My own bubble field at CFAR was like... "I'm a victim and nobody cares about me, and I hate everyone. I am impossible to understand, but if they bothered trying and succeeded, they would be on my side." My mindset was more adversarial and scarcity-based. (Not as much these days! Good for me!) 

There were other people with similarly... saddening? bubble fields. Sad to see. Don't love it for them. 

Victim-minded people tended to, I claim, be at the bottom of the social hierarchy. And other people... either didn't really dispute that trend, or they didn't know how to do anything about it, or they tried things but mostly helplessly and without significant effect. 

...

Also, I claim a lot of people at CFAR and myself (and the world, let's be honest) had the unfortunate quality of "cowardice"—or to expand: 

A lack of trust in their own judgment and sense of right/wrong combined with various personal fears, which led to being avoidant of difficult truths and triggering social situations ... and being passive in the face of potential wrongs. Sometimes this manifested as physical dissociation, being overwhelmed, or freeze states. This passivity was sometimes rationalized as 'it's not my business' or 'that's their problem' or 'there's nothing I can do about that'... but in practice, it looked like people had bad social norms which led to them being bad 'neighbors, friends, colleagues'. I further claim certain 'small town' folk or 'community-oriented' folk do not have this problem. 

No one is to blame for this quality; rather, I view it as a collective problem that the world needs to solve in our local and global culture. I don't expect CFAR to 'solve' it for the world, but it could be a noble rationality project. 

Some people at CFAR did not have this quality, which was a boon. But they may have had other blindspots that contributed to ... not-great social norms. But 'not-great' here doesn't mean 'not-great' compared to most orgs or groups... for this analysis, I think CFAR's norms are probably about on par with the average EA org.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-19T16:33:38.168Z · LW(p) · GW(p)

I endorse Unreal's commentary.

I more and more feel like it was a mistake to turn down my invitation to the recent staff reunion/speaking-for-the-dead, but I continue to feel like I could not, at the time, have convinced myself, by telling myself only true things, that it was safe for me to be there or that I was in fact welcome.

I re-mention this here because it accords with and marginally confirms:

going from "Anna is avoidant, afraid, and tries to control more than she ought" to "Anna is in the process of updating, seeking feedback, and has reaffirmed honesty as a guiding principle."

Like, "Duncan felt unsafe because of the former, and is now regretting his non-attendance because of signals and bits of information which are evidence of the latter."

comment by AnnaSalamon · 2021-10-16T22:53:04.205Z · LW(p) · GW(p)

Here is a thread for detail disagreements, including nitpicks and including larger things, that aren’t necessarily meant to connect up with any particular claim about what overall narratives are accurate. (Or maybe the whole comment section is that, because this is LessWrong? Not sure.)

I’m starting this because local validity semantics [LW · GW] are important, and because it’s easier to get details right if I (and probably others) can consider those details without having to pre-compute whether those details will support correct or incorrect larger claims [? · GW].

For me personally, part of the issue is that though I disagree with a couple of the OPs details, I also have some other details that support the larger narrative which are not included in the OP, probably because I have many experiences in the MIRI/CFAR/adjacent communities space that Jessicata doesn’t know and couldn’t include. And I keep expecting that if I post details without these kinds of conceptualizing statements, people will use this to make false inferences about my guesses about higher-order-bits of what happened.

Replies from: habryka4, BrienneYudkowsky, habryka4, AnnaSalamon, Viliam
comment by habryka (habryka4) · 2021-10-17T02:32:13.759Z · LW(p) · GW(p)

The post explicitly calls for thinking about how this situation is similar to what is happening/happened at Leverage, and I think that's a good thing to do. I do think that I do have specific evidence that makes me think that what happened at Leverage seemed pretty different from my experiences with CFAR/MIRI.

Like, I've talked to a lot of people about stuff that happened at Leverage in the last few days, and I do think that overall, the level of secrecy and paranoia about information leaks at Leverage seemed drastically higher than anywhere else in the community that I've seen, and I feel like the post is trying to draw some parallel here that fails to land for me (though it's also plausible it is pointing out a higher level of information control than I thought was present at MIRI/CFAR).

I have also had my disagreements with MIRI being more secretive, and think it comes with a high cost that I think has been underestimated by at least some of the leadership, but I haven't heard of people being "quarantined from their friends" because they attracted some "set of demons/bad objects that might infect others when they come into contact with them", which feels to me like a different level of social isolation, and is part of the thing that happened in Leverage near the end. Whereas I’ve never heard of anything even remotely like this happening at MIRI or CFAR.

To be clear, I think this kind of purity dynamic is also present in other contexts, like high-class/low-class dynamics, and various other problematic common social dynamics, but I haven't seen anything that seems to result in as much social isolation and alienation, in a way that seemed straightforwardly very harmful to me, and more harmful than anything comparable I've seen in the rest of the community (though not more harmful than what I have heard from some people about e.g. working at Apple or the U.S. military, which seem to have very similarly strict procedures and also a number of quite bad associated pathologies). 

The other biggest thing that feels important to distinguish between what happened at Leverage and the rest of the community is the actual institutional and conscious optimization that has gone into PR control. 

Like, I think Ben Hoffman's point about "Blatant lies are the best kind!" [LW · GW] is pretty valid, and I do think that other parts of the community (including organizations like CEA and to some degree CFAR) have engaged in PR control in various harmful but less legible ways, but I do think there is something additionally mindkilly and gaslighty about straightforwardly lying, or directly threatening adversarial action to prevent people from speaking ill of someone, in the way Leverage has. I always felt that the rest of the rationality community had a very large and substantial dedication to being very clear about when they denotatively vs. connotatively disagree with something, and to have a very deep and almost religious respect for the literal truth (see e.g. a lot of Eliezer's stuff around the wizard's code and meta honesty), and I think the lack of that has made a lot of the dynamics around Leverage quite a bit worse. 

I also think it makes understanding the extent of the harm and ways to improve it a lot more difficult. I think the number of people who have been hurt by various things Leverage has done is really vastly larger than the number of people who have spoken out so far, in a ratio that I think is very different from what I believe is true about the rest of the community. As a concrete example, I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting), and I feel pretty confident that I would feel very different if I had similarly bad experiences with CFAR or MIRI, based on my interactions with both of these organizations. 

I think this kind of information control feels like what ultimately flips things into the negative for me, in this situation with Leverage. Like, I think I am overall pretty in favor of people gathering together and working on a really intense project, investing really hard into some hypothesis that they have some special sauce that allows them to do something really hard and important that nobody else can do. I am also quite in favor of people doing a lot of introspection and weird psychology experiments on themselves, and to try their best to handle the vulnerability that comes with doing that near other people, even though there is a chance things will go badly and people will get hurt. 

But the thing that feels really crucial in all of this is that people can stay well-informed and can get the space they need to disengage, can get an external perspective when necessary, and somehow stay grounded all throughout this process. Which feels much harder to do in an environment where people are directly lying to you, or where people are making quite explicit plots to discredit you, or harm you in some other way, if you do leave the group, or leak information. 

I do notice that in the above I make various accusations of lying or deception by Leverage without really backing it up with specific evidence, which I apologize for, and I think people reading this should overall not take comments like mine at face value before having heard something pretty specific that backs up the accusations in them. I have various concrete examples I could give, but do notice that doing so would violate various implicit and explicit confidentiality agreements I made, that I wish I had not made, and I am still figuring out whether I can somehow extract and share the relevant details, without violating those agreements in any substantial way, or whether it might be better for me to break the implicit ones of those agreements (which seem less costly to break, given that I felt like I didn't really fully consent to them), given the ongoing pretty high cost.

Replies from: ChristianKl, Unreal
comment by ChristianKl · 2021-10-17T16:55:47.741Z · LW(p) · GW(p)

When it comes to agreements preventing disclosure of information, often there's no agreement to keep the existence of the agreement itself secret. If you don't think you can ethically (and given other risks) share the content that's protected by certain agreements, it would be worthwhile to share more about the agreements and with whom you have them. This might also be accompanied by a request to those parties to agree to lift the agreement. It's worthwhile to know who thinks they need to be protected by secrecy agreements.

comment by Unreal · 2021-10-19T20:46:22.321Z · LW(p) · GW(p)

It has taken me about three days to mentally update more fully on this point. It seems worth highlighting now, using quotes from Oli's post: 

  • I've talked to a lot of people about stuff that happened at Leverage in the last few days, and I do think that overall, the level of secrecy and paranoia about information leaks at Leverage seemed drastically higher than anywhere else in the community that I've seen
  • I think the number of people who have been hurt by various things Leverage has done is really vastly larger than the number of people who have spoken out so far, in a ratio that I think is very different from what I believe is true about the rest of the community.

I am beginning to suspect that, even in the total privacy of their own minds, there are people who went through something at Leverage who can't have certain thoughts, out of fear. 

I believe it is not my place (or anyone's?) to force open a locked door, especially locked mental doors. 

Zoe's post may have initially given me the wrong impression—that other ex-Leverage people would also be able to articulate their experiences clearly and express their fears in a reasonable and open way. I guess I'm updating away from that initial impression. 

//

I suspect 'combining forces' with existing heavy-handed legal systems can sometimes be used in such a dominant manner that it damages people's epistemics and health. And this is why a lot of 'small-time' orgs and communities try to avoid attention of heavy-handed bureaucracies like the IRS, psych wards, police depts, etc., which are often only called upon in serious emergencies. 

I have a wonder about whether a small-time org willing to use (way above weight class) heavy-handed legal structures (like, beyond due diligence, such as actual threats of litigation) is evidence of that org acting in bad faith or doing something bad to its members. 

I've signed an NDA at MAPLE to protect donor information, but it's pretty basic stuff, and I have zero actual fear of litigation from MAPLE, and the NDA itself is not covering things I expect I'll want to do (such as leak info about funders). I've signed NDAs in the past for keeping certain intellectual property safe from theft (e.g. someone's inventing a new game and don't want others to get their idea). These seem like reasonable uses of NDAs. 

When I went to my first charting session at Leverage, they ... also asked me to sign some kind of NDA? As a client. It was a little weird? I think they wanted to protect intellectual property of their ... I kind of don't really remember honestly. Maybe if I'd tried to publish a paper on Connection Theory or Charting or Belief Reporting, they would have asked me to take it down. ¯\_(ツ)_/¯ 

maybe an unnecessary or heavy-handed integration between an org and legal power structures is a wtf kind of sign and seems good to try to avoid? 

Replies from: Freyja
comment by Freyja · 2021-10-20T15:56:33.273Z · LW(p) · GW(p)

I really don’t know about the experience of a lot of the other ex-Leveragers, but the time it took her to post it, the number and kind of allies she felt she needed before posting it, and the hedging qualifications within the post itself detailing her fears of retribution, plus just how many peoples’ initial responses to the post were to applaud her courage, might give you a sense that Zoe’s post was unusually, extremely difficult to make public, and that others might not have that same willingness yet (she even mentions it at the bottom, and presumably she knows more about how other ex-Leveragers feel than we do).

comment by LoganStrohl (BrienneYudkowsky) · 2021-10-17T05:49:23.756Z · LW(p) · GW(p)

I, um, don't have anything coherent to say yet. Just a heads up. I also don't really know where this comment should go.

But also I don't really expect to end up with anything coherent to say, and it is quite often the case that when I have something to say, people find it worthwhile to hear my incoherence anyway, because it contains things that underlay their own confused thoughts, and after hearing it they are able to un-confuse some of those thoughts and start making sense themselves. Or something. And I do have something incoherent to say. So here we go.

I think there's something wrong with the OP. I don't know what it is, yet. I'm hoping someone else might be able to work it out, or to see whatever it is that's causing me to say "something wrong" and then correctly identify it as whatever it actually is (possibly not "wrong" at all).

On the one hand, I feel familiarity in parts of your comment, Anna, about "matches my own experiences/observations/hearsay at and near MIRI and CFAR". Yet when you say "sensible", I feel, "no, the opposite of that".

Even though I can pick out several specific places where Jessicata talked about concrete events (e.g. "I believed that I was intrinsically evil" and "[Michael Vassar] was commenting on social epistemology"), I nevertheless have this impression that I most naturally conceptualize as "this post contained no actual things". While reading it, I felt like I was gazing into a lake that is suspended upside down in the sky, and trying to figure out whether the reflections I'm watching in its surface are treetops or low-hanging clouds. I felt like I was being invited into a mirror-maze that the author had been trapped in for... an unknown but very long amount of time.

There's something about nearly every phrase (and sentence, and paragraph, and section) here that I just, I just want to spit out, as though the phrase itself thinks it's made of potato chunks but in fact, out of the corner of my eye, I can tell it is actually made out of a combination of upside-down cloud reflections and glass shards.

Let's try looking at a particular, not-very-carefully-chosen sentence.

As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.

I have so many questions. "As a consequence" seems fine; maybe that really is potato chunks. But then, "the people most mentally concerned" happens, and I'm like, Which people were most mentally concerned? What does it mean to be mentally concerned? How could the author tell that those people were mentally concerned? Then we have "with strange social metaphysics", and I want to know "what is social metaphysics?", "what is it for social metaphysics to be strange or not strange?" and "what is it to be mentally concerned with strange social metaphysics"? Next is "were marginalized". How were they marginalized? What caused the author to believe that they were marginalized? What is it for someone to be marginalized? And I'm going to stop there because it's a long sentence and my reaction just goes on this way the whole time.

I recognize that it's possible to ask this many questions of this kind about absolutely any sentence anyone has ever uttered. Nevertheless, I have a pretty strong feeling that this sentence calls for such questions, somehow, much more loudly than most sentences do. And the questions the sentences call for are rarely answered in the post. It's like a tidal wave of... of whatever it is. More and more of these phrases-calling-for-questions pile up one after another, and there's no time in between to figure out what's going on, if you want to follow the post whatsoever.

There are definitely good things in here. A big part of my impression of the author, based on this post, is that they're smart and insightful, and trying to make the world better. I just, also have this feeling like something... isn't just wrong here, but is going wrong, and maybe the going has momentum, and I wonder how many readers will get temporarily trapped in the upside down mirror maze while thinking they're eating potatoes, unless they slow way way down and help me figure out what on earth is happening in this post.

Replies from: Vladimir_Nesov, Benquo, farp
comment by Vladimir_Nesov · 2021-10-17T14:24:31.419Z · LW(p) · GW(p)

This matches my impression in a certain sense. Specifically, the density of gears in the post (elements that would reliably hold arguments together, confer local validity, or pin them to reality) is low. It's a work of philosophy, not investigative journalism. So there is a lot of slack in shifting the narrative in any direction, which is dangerous for forming beliefs (as opposed to setting up new hypotheses), especially if done in a voice that is not your own. The narrative of the post is coherent and compelling, it's a good jumping-off point for developing it into beliefs and contingency plans, but the post itself can't be directly coerced into those things, and this epistemic status is not clearly associated with it.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-17T16:50:46.402Z · LW(p) · GW(p)

How do you think Zoe's post, or mainstream journalism about the rationalist community (e.g. Cade Metz's article, perhaps there are other better ones I don't know about) compare on this metric? Are there any examples of particularly good writeups about the community and its history you know about?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-10-17T17:02:23.219Z · LW(p) · GW(p)

I'm not saying that the post isn't good (I did say it's coherent and compelling), and I'm not at this moment aware of something better on its topic (though my ability to remain aware of such things is low, so that doesn't mean much). I'm saying specifically that gear density is low, so it's less suitable for belief formation than hypothesis setup. This is relevant as a more technical formulation of what I'm guessing LoganStrohl is gesturing at.

I think investigative journalism is often terrible, as is philosophy, but the concepts are meaningful in characterizing types of content with respect to gear density, including high quality content.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-17T17:04:03.921Z · LW(p) · GW(p)

I am intending this more as a contribution of relevant information and initial models than as firm conclusions; conclusions are easier to reach the more different relevant information and models are shared by different people, so I suppose I don't have a strong disagreement here.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-10-17T17:30:04.872Z · LW(p) · GW(p)

Sure, and this is clear to me as a practitioner of the yoga of taking in everything only as a hypothesis/narrative, mining it for gears, and separately checking what beliefs happen to crystallize out of this, if any. But for someone who doesn't always make this distinction, not having a clear indication of the status of the source material needlessly increases epistemic hygiene risks, so it's a good norm to make epistemic status of content more legible. My guess is that LoganStrohl's impression is partly of violation of this norm (which I'm not even sure clearly happened), shared by a surprising number of upvoters.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-17T17:51:21.028Z · LW(p) · GW(p)

Do you predict Logan's comment would have been much different if I had written "[epistemic status: contents of memory banks, arranged in a parseable semicoherent narrative sequence, which contains initial models that seem to compress the experiences in a Solomonoff sense better than alternative explanations, but which aren't intended to be final conclusions, given that only a small subset of the data has been revealed and better models are likely to be discovered in the future]"? I think this is to some degree implied by the title which starts with "My experience..." so I don't think this would have made a large difference, although I can't be sure about Logan's counterfactual comment.

Replies from: Vladimir_Nesov, tomcatfish
comment by Vladimir_Nesov · 2021-10-17T18:05:11.668Z · LW(p) · GW(p)

I'm not sure, but the hypothesis I'm chasing in this thread, intended as a plausible steelman of Logan's comment, thinks so. One alternative that is also plausible to me is motivated cognition that would decry undesirable source material for low gear density; that alternative predicts little change in response to a more legible epistemic status.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-17T18:08:02.828Z · LW(p) · GW(p)

I expect the alternative hypothesis to be true given the difference between the responses to this post and Zoe's post.

comment by Alex Vermillion (tomcatfish) · 2021-10-24T02:57:46.975Z · LW(p) · GW(p)

If you are genuinely asking, I think cutting that down into something slightly less clinical sounding (because it sounds sarcastic when formalized) would probably take a little steam out of that type of opposition, yes.

comment by Benquo · 2021-10-17T14:40:16.832Z · LW(p) · GW(p)

This reads like you feel compelled to avoid parsing the content of the OP, and instead intend to treat the criticisms it makes as a Lovecraftian horror the mind mustn't engage with. Attempts to interpret this sort of illegible intent-to-reject as though it were well-intentioned criticism end up looking like:

I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. I was catatonic for multiple days, afraid that by moving I would cause harm to those around me.

Very helpful to have a crisp example of this in text.

ETA: I blanked out the first few times I read Jessica's post on anti-normativity, but interpreted that accurately as my own intent to reject the information rather than projecting my rejection onto the post itself, treated that as a serious problem I wanted to address, and was able to parse it after several more attempts.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-17T17:14:13.010Z · LW(p) · GW(p)

I understood the first sentence of your comment to be something like "one of my hypotheses about Logan's reaction is that Logan has some internal mental pressure to not-parse or not-understand the content of what Jessica is trying to convey."

That makes sense to me as a hypothesis, if I've understood you, though I'd be curious for some guesses as to why someone might have such an internal mental pressure, and what it would be trying to accomplish or protect.

I didn't follow the rest of the comment, mostly due to various words like "this" and "it" having ambiguous referents.  Would you be willing to try everything after "attempts" again, using 3x as many words?

Replies from: Benquo
comment by Benquo · 2021-10-17T18:14:15.266Z · LW(p) · GW(p)

Summary:

Logan reports a refusal to parse the content of the OP. Logan locates a problem nonspecifically in the OP, not in Logan's specific reaction to it. This implies a belief that it would be bad to receive information from Jessica.

Logan reports a refusal to parse the content of the OP

But then, "the people most mentally concerned" happens, and I'm like, Which people were most mentally concerned? What does it mean to be mentally concerned? How could the author tell that those people were mentally concerned? Then we have "with strange social metaphysics", and I want to know "what is social metaphysics?", "what is it for social metaphysics to be strange or not strange?" and "what is it to be mentally concerned with strange social metaphysics"? Next is "were marginalized". How were they marginalize? What caused the author to believe that they were marginalized? What is it for someone to be marginalized?

Most of this isn't even slightly ambiguous, and Jessica explains most of the things being asked about, with examples, in the body of the post.

Logan locates a nonspecific problem in the OP, not in Logan's response to it.

I just, also have this feeling like something... isn't just wrong here, but is going wrong, and maybe the going has momentum, and I wonder how many readers will get temporarily trapped in the upside down mirror maze while thinking they're eating potatoes, unless they slow way way down and help me figure out what on earth is happening in this post.

This isn't a description of a specific criticism or disagreement. This is a claim that the post is nonspecifically going to cause readers to become disoriented and trapped.

This implies a belief that it would be bad to receive information from Jessica.

If the objection isn't that Jessica is mistaken but that she's "going wrong," that implies that the contents of Jessica's mind are dangerous to interact with. This is the basic trope of Lovecraftian horror - that there are some real things the human mind can't handle and therefore wants to avoid knowing. If something is dangerous, like nuclear waste or lions, we might want to contain it or otherwise keep it at a distance.

Since there's no mechanism suggested, this looks like an essentializing claim. If the problem isn't something specific that Jessica is doing or some specific transgression she's committing, then maybe that means Jessica's just intrinsically dangerous. Even if not, if Jessica were going to take this concern seriously, without a theory of how what she's doing is harmful, she would have to treat all of her intentions as dangerous and self-contain.

In other words, she'd have to proceed as though she might be intrinsically evil ("isn't just wrong here, but is going wrong, and maybe the going has momentum"), is in a hell of her own creation ("I felt like I was being invited into a mirror-maze that the author had been trapped in for... an unknown but very long amount of time."), and ought to avoid taking actions, i.e. become catatonic.

Replies from: Viliam, Duncan_Sabien
comment by Viliam · 2021-10-17T22:31:52.879Z · LW(p) · GW(p)

I also don't know what "social metaphysics" means.

I get the mood of the story. If you look at specific accusations, here is what I found, maybe I overlooked something:

there were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis.  There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events including a relatively exclusive AI-focused one.

There are even cases of suicide in the Berkeley rationality community [...] associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption

a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years.

MIRI became very secretive about research.  Many researchers were working on secret projects, and I learned almost nothing about these.  I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact.  Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.

Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness

Years before, MIRI had a non-disclosure agreement that members were pressured to sign, as part of a legal dispute with Louie Helm.

Anna Salamon said that Michael was causing someone else at MIRI to "downvote Eliezer in his head" and that this was bad because it meant that the "community" would not agree about who the leaders were, and would therefore have akrasia issues due to the lack of agreement on a single leader in their head telling them what to do.

MIRI had a "world-saving plan". [...] Nate Soares frequently talked about how it was necessary to have a "plan" to make the entire future ok, to avert AI risk; this plan would need to "backchain" from a state of no AI risk and may, for example, say that we must create a human emulation using nanotechnology that is designed by a "genie" AI, which does a narrow task rather than taking responsibility for the entire future; this would allow the entire world to be taken over by a small group including the emulated human.

Our task was to create an integrated, formal theory of values, decisions, epistemology, self-improvement, etc ("Friendliness theory"), which would help us develop Friendly AI faster than the rest of the world combined was developing AGI (which was, according to leaders, probably in less than 20 years).  It was said that a large part of our advantage in doing this research so fast was that we were "actually trying" and others weren't.  It was stated by multiple people that we wouldn't really have had a chance to save the world without Eliezer Yudkowsky.

I heard that "political" discussions at CFAR (e.g. determining how to resolve conflicts between people at the organization, which could result in people leaving the organization) were mixed with "debugging" conversations, in a way that would make it hard for people to focus primarily on the debugged person's mental progress without imposing pre-determined conclusions.  Unfortunately, when there are few people with high psychological aptitude around, it's hard to avoid "debugging" conversations having political power dynamics, although it's likely that the problem could have been mitigated.

I recall talking to a former CFAR employee who was scapegoated and ousted after failing to appeal to the winning internal coalition; he was obviously quite paranoid and distrustful, and another friend and I agreed that he showed PTSD symptoms.

This is like 5-10% of the text. A curious thing is that it is actually the remaining 90-95% of the text that evokes bad feelings in the reader, at least in my case.

To compare, when I was reading Zoe's article, I was shocked by the described facts. When I was reading Jessica's article, I was shocked by the horrible things that happened to her, but the facts felt... most of them boring... the most worrying part was about a group of people who decided that CFAR was evil, spent some time blogging against CFAR, then some of them killed themselves; which is very sad, but I fail to see how exactly CFAR is responsible for this, when it seems like the anti-CFAR group actually escalated the underlying problems to the point of suicide. (This reminds me of XiXiDu describing [LW · GW] how fighting against MIRI causes him health problems; I feel bad about him having the problems, but I am not sure what MIRI could possibly do to stop this.)

Jessica's narrative is that MIRI/CFAR is just like Leverage, except less transparent. Yet when she mentions specific details, it often goes somewhat like this: "Zoe mentioned that Leverage did X. CFAR does not do X, but I feel terrible anyway, so it is similar. Here is something vaguely analogical." Like, how can you conclude that not doing something bad is even worse than doing it, because it is less transparent?! Of course it is less transparent if it, you know, actually does not exist.

Or maybe I'm tired and failing at reading comprehension. I wish someone would rewrite the article, to focus on the specific accusations against MIRI/CFAR, and remove all those analogies-except-not-really with Zoe; just make it a standalone list of specific accusations. Then let's discuss that.

Replies from: elityre
comment by Eli Tyre (elityre) · 2021-10-18T03:38:02.202Z · LW(p) · GW(p)

This comment was very helpful. Thank you.

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-17T18:23:01.876Z · LW(p) · GW(p)

Thanks for the expansion!  Mulling.

comment by farp · 2021-10-17T16:26:25.497Z · LW(p) · GW(p)

Thanks for this articulate and vulnerable writeup. I do think we might all agree that the experience you are describing seems like a very good description of what somebody in a cult would go through while facing information that would trigger disillusionment. 

I am not asserting you are in a cult; maybe I should use more delicate language. But in context I would like to point out this (to me) obvious parallel.

comment by habryka (habryka4) · 2021-10-17T18:51:05.288Z · LW(p) · GW(p)

I feel like one really major component that is missing from the story above, in particular from a number of the psychotic breaks, is any mention of Michael Vassar and a bunch of the people he tends to hang out with. I don't have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them were in pretty close proximity to having spent a bunch of time (and in some of the cases after taking psychedelic drugs) with Michael.

I think this is important because Michael has, I think, a very large psychological effect on people, and also has some bad tendencies to severely outgroup people who are not part of his very local social group, and also some history of very viciously attacking outsiders who behave in ways he doesn't like, including making quite a lot of very concrete threats (things like "I hope you will be guillotined, and the social justice community will find you and track you down and destroy your life, after I do everything I can to send them onto you"). I personally have found those threats to very drastically increase the stress I experience from interfacing with Michael (and some others in his social group), and also my models of how these kinds of things happen have a lot to do with dynamics where this kind of punishment is expected if you deviate from the group norm.

I am not totally confident that Michael has played a big role in all of the bad psychotic experiences listed above, but my current best guess is that he has, and I do indeed pretty directly encourage people to not spend a lot of time with Michael (though I do think talking to him occasionally is actually great and I have learned a lot of useful things from talking to him, and also think he has helped me see various forms of corruption and bad behavior in my environment that I am genuinely grateful to have noticed, but I very strongly predict that I would have a very intensely bad experience if I were to spend more time around Michael, in a way I would not endorse in the long run). 

Replies from: jessica.liu.taylor, Chris_Leong, Gunnar_Zarncke
comment by jessicata (jessica.liu.taylor) · 2021-10-17T19:20:12.653Z · LW(p) · GW(p)

I don’t have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them were in pretty close proximity to having spent a bunch of time (and in some of the cases after taking psychedelic drugs) with Michael.

Of the 4 hospitalizations and 1 case of jail time I know about, 3 of those hospitalized (including me) were talking significantly with Michael, and the others weren't afaik (and neither were the 2 suicidal people), though obviously I couldn't know about all conversations that were happening. Michael wasn't talking much with Leverage people at the time.

I hadn't heard of the statement about guillotines, that seems pretty intense.

I talked with someone recently who hadn't been in the Berkeley scene specifically but who had heard that Michael was "mind-controlling" people into joining a cult, and decided to meet him in person, at which point he concluded that Michael was actually doing some of the unique interventions that could bring people out of cults, which often involve causing them to notice things they're looking away from. It's common for there to be intense psychological reactions to this (I'm not even thinking of the psychotic break as the main one, since that didn't proximately involve Michael; there have been other conversations since then that have gotten pretty emotionally/psychologically intense), and it's common for people to not want to have such reactions, although clearly at least some people think they're worth having for the value of learning new things.

Replies from: habryka4, Benito
comment by habryka (habryka4) · 2021-10-17T19:35:34.370Z · LW(p) · GW(p)

IIRC the one case of jail time also had a substantial interaction with Michael relatively shortly before the psychotic break occurred. Though someone else might have better info here and should correct me if I am wrong. I don't know of any 4th case, so I believe you that they didn't have much to do with Michael. This makes the current record 4/5 to me, which sure seems pretty high.

Michael wasn't talking much with Leverage people at the time.

I did not intend to indicate Michael had any effect on Leverage people, or to say that all or even a majority of the difficult psychological problems that people had in the community are downstream of Michael. I do think he had a large effect on some of the dynamics you are talking about in the OP, and I think any picture of what happened/is happening seems very incomplete without him and the associated social cluster.

I think the part about Michael helping people notice that they are in some kind of bad environment seems plausible to me, though doesn't have most of my probability mass (~15%), and most of my probability mass (~60%) is indeed that Michael mostly just leverages the same mechanisms for building a pretty abusive and cult-like ingroup that are common, with some flavor of "but don't you see that everyone else is completely crazy and evil" thrown into it. 

I think it is indeed pretty common for abusive environments to start with "here is why your current environment is abusive in this subtle way, and that's also why it's OK for me to do these abusive-seeming things, because it's not worse than anywhere else". I think this was a really large fraction of what happened with Brent, and I also think a pretty large fraction of what happened with Leverage. I also think it's a large fraction of what's going on with Michael.

I do want to reiterate that I do assign substantial probability mass (~15%) to your proposed hypothesis being right, and am interested in more evidence for it.

Replies from: andrew-rettek-1, jessica.liu.taylor
comment by Andrew Rettek (andrew-rettek-1) · 2021-10-17T23:25:33.780Z · LW(p) · GW(p)

IIRC the one case of jail time also had a substantial interaction with Michael relatively shortly before the psychotic break occurred

I was pretty involved in that case after the arrest and for several months afterward, and spoke to MV about it, and AFAICT that person and Michael Vassar only met maybe once, casually. I think he did spend a lot of time with others in MV's clique though.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-18T02:16:38.138Z · LW(p) · GW(p)

Ah, yeah, my model is that the person had spent a lot of time with MV's clique, though I wasn't super confident they had talked to Michael in particular. Not sure whether I would still count this as being an effect of Michael's actions; it seems murkier than I made it out to be in my comment.

comment by jessicata (jessica.liu.taylor) · 2021-10-17T19:48:46.510Z · LW(p) · GW(p)

I think one of the ways of disambiguating here is to talk to people outside your social bubble, e.g. people who live in different places, people with different politics, people in different subcultures or on different websites (e.g. Twitter or Reddit), people you run into in different contexts, people who have had experience in different mainstream institutions (e.g. different academic departments, startups, mainstream corporations). Presumably, the more of a culty bubble you're in, the more prediction error this will generate, and the harder it will be to establish communication protocols across the gap. This establishes a point of comparison between people in bubble A vs B.

I spent a long part of the 2020 quarantine period with Michael and some friends of his (and friends of theirs) who were previously in a non-bay-area cult, which exposed me to a lot of new perspectives I didn't know about (not just theirs, but also those of some prison reform advocates and religious people), and made Michael seem less extremal or insular in comparison, since I wasn't just comparing him to the bubble of people who I already knew about.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-17T20:04:20.778Z · LW(p) · GW(p)

Hmm, I've tried to read this comment for something like 5 minutes, but I can't really figure out its logical structure. Let me give it a try in a more written format: 

I think one of the ways of disambiguating here

Presumably this is referring to distinguishing the hypothesis that Michael is kind of causing a bunch of cult-like problems, from the hypothesis that he is helping people see problems that are actually present.

here is to talk to people outside your social bubble, e.g. people who live in different places, people with different politics, people in different subcultures or on different websites (e.g. Twitter), people you run into in different contexts, people who have had experience in different mainstream institutions. Presumably, the more of a culty bubble you're in, the more prediction error this will generate, and the harder it will be to establish communication protocols across the gap.

I don't understand this part. Why would there be a monotonic relationship here? I agree with the bubble part, and while I expect there to be a vague correlation, it doesn't feel like it measures anything like the core of what's going on. I wouldn't measure the cultishness of an economics department based on how good they are at talking to improv students. It might still be good for them to get better at talking to improv students, but failure to do so doesn't feel like particularly strong evidence to me (compared to other dimensions, like the degree to which they feel alienated from the rest of the world, or have psychotic breaks, or feel under a lot of social pressure to not speak out, or many other things that seem similarly straightforward to measure but feel like they get more at the core of the thing). 

But also, I don't understand how I am supposed to disambiguate things here? Like, maybe the hypothesis here is that by doing this myself I could understand how insular my own environment is? I do think that seems like a reasonable point of evidence, though I also think my experiences have been very different from people at MIRI or CFAR. I also generally don't have a hard time establishing communication protocols across these kinds of gaps, as far as I can tell.

who were previously in a non-bay-area cult, which exposed me to a lot of new perspectives I didn't know about (not just theirs, but also those of some prison reform advocates and religious people), and made Michael seem less extremal or insular in comparison, since I wasn't just comparing him to the bubble of people who I already knew about.

This is interesting, and definitely some evidence, and I appreciate you mentioning it. 

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-17T20:08:12.036Z · LW(p) · GW(p)

If you think the anecdote I shared is evidence, it seems like you agree with my theory to some extent? Or maybe you have a different theory for how it's relevant?

E.g. say you're an econ student, and there's this one person in the econ department who seems to have all these weird opinions about social behavior and think body language is unusually important. Then you go talk to some drama students and find that they have opinions that are even more extreme in the same direction. It seems like the update you should make is that you're in a more insular social context than the person with opinions on social behavior, who originally seemed to you to be in a small bubble that wasn't taking in a lot of relevant information.

(basically, a lot of what I'm asserting constitutes "being in a cult" is living in a simulation of an artificially small, closed world)

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-17T20:13:59.751Z · LW(p) · GW(p)

The update was more straightforward, based on "I looked at some things that are definitely cults, what Michael does seems less extremal and insular in comparison, therefore it seems less likely for Michael to run into the same problems". I don't think that update required agreeing with your theory to any substantial degree.

I do think your paragraph still clarified things a bit for me, though with my current understanding, presumably the group to compare yourself against is less cults, and more just like, average people who are somewhat further out on some interesting dimension. And if you notice that average people seem really crazy and cult-like to you, then I do think this is something to pay attention to (though like, average people are also really crazy on lots of topics, like schooling and death and economics and various COVID-related things that I feel pretty confident in, and so I don't think this is some kind of knockdown argument, though I do think having arrived at truths that large fractions of the population don't believe definitely increases the risks from insularity). 

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-17T20:18:10.392Z · LW(p) · GW(p)

I definitely don't want to imply that agreement with the majority is a metric, rather the ability to have a discussion at all, to be able to see part of the world they're seeing and take that information into account in your own view (which might be called "interpretive labor" or "active listening").

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-17T20:21:47.850Z · LW(p) · GW(p)

Agree. I do think the two are often kind of entwined (like, I am not capable of holding arbitrarily many maps of the world in my mind at the same time, so when I arrive at some unconventional belief that has broad consequences, the new models based on that belief will often replace more conventional models of the domain, and I will have to spend time regenerating the more conventional models and beliefs in conversation with someone who doesn't hold the unconventional belief, which does frequently make the conversation kind of harder, and which I still don't think is evidence of something going terribly wrong)

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T18:44:53.149Z · LW(p) · GW(p)

Oh, something that might not have been clear is that talking with other people Michael knows made it clear that Michael was less insular than MIRI/CFAR people (who would have been less able to talk with such a diverse group of people, afaict), not just that he was less insular than people in cults.

comment by Ben Pace (Benito) · 2021-10-17T19:39:06.079Z · LW(p) · GW(p)

Do you know if the 3 people who were talking significantly with Michael did LSD at the time or with him?

Erm... feel free to keep plausible deniability. Taking LSD seems to me like a pretty worthwhile thing to do in lots of contexts, and I'm willing to put a substantial amount of resources toward defending against legal attacks (or supporting you in the face of them) that are caused by you replying openly here. (I don't know if that's plausible; I've not thought about it much, so I mentioned it anyway.)

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-17T19:54:50.701Z · LW(p) · GW(p)

I had taken a psychedelic previously with Michael; one other person probably had; the other probably hadn't; I'm quite unsure of the latter two judgments. I'm not going to disambiguate about specific drugs.

comment by Chris_Leong · 2021-10-18T02:22:00.313Z · LW(p) · GW(p)

What kinds of things was he attacking people for?

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-18T02:26:04.053Z · LW(p) · GW(p)

I am not fully sure. I have heard him say very similar things to the above directed at Holden (and have heard reports of the things I put in quotes above). I think in that case the thing he is attacking him for is leveraging people's desire to be a morally good person in a way that they don't endorse (and that plays into various guilt narratives), to get them to give him money, and to get them to dedicate their life towards Effective Altruism, and via that technique preventing a substantial fraction of the world's top talent from dedicating themselves to actually important problems, and also causing them various forms of psychological harm.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-19T16:05:44.910Z · LW(p) · GW(p)

I am not fully sure. I have heard him say very similar things to the above directed at Holden (and have heard reports of the things I put in quotes above). 

Do you have an idea of when those things were directed at Holden?

comment by Gunnar_Zarncke · 2021-10-18T12:59:50.710Z · LW(p) · GW(p)

UPDATE: I mostly retract this comment. It was clarified that the threat was made in a mostly public context which changes the frame for me significantly. 

I think it is problematic to post a presumably very private communication (the threat) to such a broad audience. Even when it is correctly attributed, it lacks all the context of the situation it was uttered in. It lacks any amends that may or may not have been made, and exposes many people to the dynamics of the narrative resulting from the posting here. I'm not saying you shouldn't post it. I don't know the context and what you know either. But I think you should take ownership of the consequences of citing it and any way it might escalate from here (a norm proposed by Scott Adams a while ago). 

Replies from: habryka4, Gunnar_Zarncke
comment by habryka (habryka4) · 2021-10-19T03:50:02.436Z · LW(p) · GW(p)

I don't think the context in which I heard about this communication was very private. There was a period where Michael seemed to try to get people to attack GiveWell and Holden quite loudly, and the above was part of the things I heard from that time. The above did not strike me as a statement intended to be very private, and also my model of Michael has norms that encourage sharing this kind of thing, even if it happens in private communication. 

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2021-10-19T08:13:33.476Z · LW(p) · GW(p)

Thank you for the clarification. I think it is valuable to include this context in your comment.

I will adjust my comment accordingly.

comment by Gunnar_Zarncke · 2021-10-18T22:36:42.807Z · LW(p) · GW(p)

Can somebody give me some hints as to which norms this could be downvoted under?

Replies from: None
comment by [deleted] · 2021-10-19T02:43:20.961Z · LW(p) · GW(p)

I didn't downvote, but I almost did because it seems like it's hard enough to reveal that kind of thing without also having to worry about social disapproval.

comment by AnnaSalamon · 2021-10-16T23:00:08.882Z · LW(p) · GW(p)

I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes". This part of the plan was the same.

Re: “this part of the plan was the same”: IMO, some at CFAR were interested in helping some subset of people become Elon Musk, but this is different from the idea that everyone is supposed to become Musk and that that is the plan. IME there was usually mostly (though not invariably, which I expect led to problems; and for all I know “usually” may also have been the case in various parts and years of Leverage) acceptance for folks who did not wish to try to change themselves much.

Replies from: elityre, Gunnar_Zarncke
comment by Eli Tyre (elityre) · 2021-10-18T03:00:28.095Z · LW(p) · GW(p)

Yeah, I very strongly don't endorse this as a description of CFAR's activities or of CFAR's goals, and I'm pretty surprised to hear that someone at CFAR said something like this (unless it was Val, in which case I'm less surprised). 

Most of my probability mass is on the CFAR instructor taking "become Elon Musk" to be a sort of generic, hyperbolic term for "become very capable."

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T03:11:28.809Z · LW(p) · GW(p)

The person I asked was Duncan. I suggested the "Elon Musk" framing in the question. I didn't mean it literally, I meant him as an archetypal example of an extremely capable person. That's probably what was meant at Leverage too.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-18T04:37:42.485Z · LW(p) · GW(p)

I do not doubt Jessica's report here whatsoever.

I also have zero memory of this, and it is not the sort of sentiment I recall holding in any enduring fashion, or putting forth elsewhere.

I suspect I intended my reply pretty casually/metaphorically, and would have similarly answered "yes" if someone had asked me if we were trying to improve ourselves to become any number of shorthand examples of "happy, effective, capable, and sane."

2016 Duncan apparently thought more of Elon Musk than 2021 Duncan does.

comment by Gunnar_Zarncke · 2021-10-18T22:40:10.469Z · LW(p) · GW(p)

Related Tweet by Mason:

One of the weirdest ideas in Bay Area rationalist/adjacent circles is that you become someone like e.g. Elon Musk, hyper-productive and motivated, by introspecting a ton

comment by Viliam · 2021-10-17T16:07:22.089Z · LW(p) · GW(p)

Okay, here goes the nitpicking...

There was an atmosphere of psycho-spiritual development, often involving Kegan stages.

I am confused, because I assumed that Kegan stages are typically used by people who believe they are superior to LW-style rationalists. You know, "the rationalists believe in objective reality, so they are at Kegan level 4, while I am a post-rationalist who respects deep wisdom and religion, so I am at Kegan level 5."

Replies from: jessica.liu.taylor, habryka4
comment by jessicata (jessica.liu.taylor) · 2021-10-17T16:16:34.491Z · LW(p) · GW(p)

Here are some examples of long-time LW posters who think Kegan stages are important:

Though I can't find an example of him posting on LessWrong, Ethan Dickinson is in the Berkeley rationality community and is mentioned here [LW · GW] as introducing people to Kegan stages. There are multiple others, these are just the people who it was easy to find Internet evidence about.

There's a lot of overlap in people posting about "rationalism" and "postrationalism"; it's often a matter of self-identification rather than actual use of different methods to think, e.g. lots of "rationalists" are into meditation, lots of "postrationalists" use approximately Bayesian analysis when thinking about e.g. COVID. I have noticed that "rationalists" tend to think the "rationalist/postrationalist" distinction is more important than the "postrationalists" do; "postrationalists" are now on Twitter using vaguer terms like "ingroup" or "TCOT" (this corner of Twitter) for themselves.

I also mentioned a high amount of interaction between CFAR and Monastic Academy in the post.

Replies from: Duncan_Sabien, Kaj_Sotala
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-17T16:59:01.816Z · LW(p) · GW(p)

To speak a little bit on the interaction between CFAR and MAPLE:

My understanding is that none of Anna, Val, Pete, Tim, Elizabeth, Jack, etc. (current or historic higher-ups at CFAR) had any substantial engagement with MAPLE.  My sense is that Anna has spoken with MAPLE people a good bit in terms of total hours, but not at all a lot when compared with how many hours Anna spends speaking to all sorts of people all the time—much much less, for instance, than Anna has spoken to Leverage folks or CEA folks or LW folks.

I believe that Renshin Lee (née Lauren) began substantially engaging with MAPLE only after leaving their employment at CFAR, and drew no particular link between the two (i.e. was not saying "MAPLE is the obvious next step after CFAR" or anything like that, but rather was doing what was personally good for them).

I think mmmmaybe a couple other CFAR alumni or people-near-CFAR went to MAPLE for a meditation retreat or two?  And wrote favorably about that, from the perspective of individuals?  These (I think but do not know for sure) include people like Abram Demski and Qiaochu Yuan, and a small number of people from CFAR's hundreds of workshop alumni, some of whom went on to engage with MAPLE more fully (Alex Flint, Herschel Schwartz).

But there was also strong pushback from CFAR staff alumni (me, Davis Kingsley) against MAPLE's attempted marketing toward rationalists, and its claims of being an effective charity or genuine world-saving group.  And there was never AFAIK a thing which fifty-or-more-out-of-a-hundred-people would describe as "a high amount of interaction" between the two orgs (no co-run events, no shared advertisements, no endorsements, no long ongoing back and forth conversations between members acting in their role as members, no trend of either group leaking members to the other group, no substantial exchange of models or perspectives, etc).  I think it was much more "nodding respectfully to each other as we pass in the hallway" than "sitting down together at the lunch table."

I could be wrong about this.  I was sort of removed-from-the-loop of CFAR in late 2018/early 2019.  It's possible there was substantial memetic exchange and cooperation after that point.

But up until that point, there were definitely no substantive interactions, and nothing ever made its way to my ears in 2019 or 2020 that made me think that had changed.

I'm definitely open to people showing me I'm wrong, here, but given my current state of knowledge the claim of "high interaction between CFAR and Monastic Academy" is just false.

(Where it would feel true to claim high interaction between CFAR and MIRI, or CFAR and LW, or CFAR and CEA, or CFAR and SPARC, or even CFAR and Leverage.  The least of these is, as far as I can tell, an order of magnitude more substantial than the interaction between CFAR and MAPLE.)

Replies from: Unreal, jessica.liu.taylor, elityre, habryka4
comment by Unreal · 2021-10-17T18:39:13.795Z · LW(p) · GW(p)

This is Ren, and I was like "?!?" at the sentence in the post: "There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy." 

I am having trouble engaging with LW comments in general so thankfully Duncan is here with #somefacts. I pretty much agree with his list of informative facts. 

More facts:

  • Adom / Quincy did a two-month apprenticeship at MAPLE, a couple years after being employed by CFAR. He and I are the only CFAR employees who've trained at MAPLE. 
  • CFAR-adjacent people visit MAPLE sometimes, maybe for about a week in length. 
  • Some CFAR workshop alums have trained at MAPLE or Oak as apprentices or residents, but I would largely not call them "people who worked with or at CFAR." There are a lot of CFAR alums, and there are also a lot of MAPLE alums. 
  • MAPLE and Oak have applied for EA grants in the past, which have resulted in them communicating with some CFAR-y type people like Anna Salamon, but this does not feel like a central example of "interaction" of the kind implied. 

The inferential gap between the MAPLE and rationalist worldview is pretty large. There's definitely an interesting "thing" about ex-CFAR staff turning to trad religion that you might want to squint at (I am one example, out of, I believe, three total), but I don't like the way the OP tacks this sentence onto a section as though it were some kind of argument or evidence for some vague something something. And I think that's why my reaction was "?!?" and not just "hmm." 

But also, I cannot deny that the intuition jessicata has about MAPLE is not entirely off either. It gives off the same smells. But I still don't like the placement of the sentence in the OP because I think it assumes too much. 

comment by jessicata (jessica.liu.taylor) · 2021-10-17T17:02:09.892Z · LW(p) · GW(p)

Thanks, this adds helpful details. I've linked this comment in the OP.

comment by Eli Tyre (elityre) · 2021-10-18T01:56:15.140Z · LW(p) · GW(p)

As someone who was more involved with CFAR than Duncan was from 2019 on, all this sounds correct to me.

comment by habryka (habryka4) · 2021-10-17T17:54:36.579Z · LW(p) · GW(p)

I was also planning to leave a comment with a similar take.

comment by Kaj_Sotala · 2021-10-18T23:01:26.801Z · LW(p) · GW(p)

FWIW I wouldn't necessarily say that Kegan stages are important - they seem like an interesting model in part because they feel like they map quite well to some of the ways in which my own thought has changed over time. But I still only consider them to be at the level of "this is an interesting and intuitively plausible model"; there hasn't been enough research on them to convincingly show that they'd be valid in the general population as well.

comment by habryka (habryka4) · 2021-10-17T18:57:00.483Z · LW(p) · GW(p)

There was a period, in something like 2016-2017, when some rationalists in the Bay Area were playing around with Kegan stages. Most people I knew weren't huge fans of them, though the then-ED of CFAR (Pete Michaud) did have a tendency of bringing them up from time to time in a way I found quite annoying. It was a model a few people used from time to time, though my sense is that it never got much traction in the community. The "often" in the above quoted sentence definitely feels surprising to me, though I don't know how many people at MIRI were using them at the time, and maybe it was more than in the rest of my social circle at the time. I still hear them brought up sometimes, but usually in a pretty subdued way, more referencing the general idea of people being able to place themselves in a broader context, but in a much less concrete and less totalizing way than the way I saw them being used in 2016-2017.

Replies from: Holly_Elmore, Vaniver
comment by Holly_Elmore · 2021-10-17T22:13:12.407Z · LW(p) · GW(p)

I was very peripheral to the Bay Area rationality community at that time, and I heard about Kegan levels enough to rub me the wrong way. It seemed bizarre to me that one man’s idiosyncratic theory of development would be taken so seriously by a community I generally thought was more discerning. That’s why I remember so clearly that it came up many times.

Replies from: Linch
comment by Linch · 2021-10-18T04:06:54.975Z · LW(p) · GW(p)

+1, except I was more physically and maybe socially close. 

comment by Vaniver · 2021-10-17T20:40:37.684Z · LW(p) · GW(p)

It was a model a few people used from time to time, though my sense is that it never got much traction in the community.

FWIW I think this understates the influence of Kegan levels. I don't know how much people did differently because of it, which is maybe what you're pointing at, but it was definitely a thing people had heard of and expected other people to have heard of and some people targeted directly.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-17T20:45:57.071Z · LW(p) · GW(p)

Huh, some chance I am just wrong here, but to me it didn't feel like Kegan levels had more prominence or expectation of being understood than e.g. land value taxes, which is also a topic some people are really into, but doesn't feel to me like it's very core to the community.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-10-17T20:54:29.250Z · LW(p) · GW(p)

Datapoint: I understand neither Kegan levels nor land value taxes.

comment by Zack_M_Davis · 2021-10-17T03:51:21.288Z · LW(p) · GW(p)

I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact. Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.

Trying to maintain secrecy within the organization like this (as contrasted to secrecy from the public) seems nuts to me. Certainly, if you have any clever ideas about how to build an AGI, you wouldn't want to put them on the public internet, where they might inspire someone who doesn't appreciate the difficulty of the alignment problem to do something dangerous.

But one would hope that the people working at MIRI do appreciate the difficulty of the alignment problem (as a real thing about the world, and not just something to temporarily believe because your current employer says so). If you want the alignment-savvy people to have an edge over the rest of the world (!), you should want them to be maximally intellectually productive, which naturally requires the ability to talk to each other without the overhead of seeking permission from a designated authority figure. (Where the standard practice of bottlenecking information and decisionmaking on a designated authority figure makes sense if you're a government or a corporation trying to wrangle people into serving the needs of the organization against their own interests, but I didn't think "we" were operating on that model.)

Replies from: Eliezer_Yudkowsky, Vaniver, Vladimir_Nesov, Chris_Leong, ChristianKl
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-17T18:18:40.896Z · LW(p) · GW(p)

Secrecy is not about good trustworthy people who get to have all the secrets versus bad untrustworthy people who don't get any.  This frame may itself be part of the problem; a frame like that makes it incredibly socially difficult to implement standard practices.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-18T07:06:22.817Z · LW(p) · GW(p)

To attempt to make this point more legible:

Standard best practice in places like the military and intelligence organizations, where lives depend on secrecy being kept from outsiders - but not insiders - is to compartmentalize and maintain "need to know." Similarly, in information security, the best practice is to give people access only to what they need, to granularize access to different services / data, and to differentiate read / write / delete access.  Even in regular organizations, lots of information is need-to-know - HR complaints, future budgets, estimates of profitability of a publicly traded company before quarterly reports, and so on. This is normal, and even though it's costly, those costs are needed. 

This type of granular control isn't intended to stop internal productivity; it is to limit the extent of failures in secrecy and of attempts to exploit the system by leveraging non-public information, both of which are inevitable, since costs to prevent failures grow very quickly as the risk of failure approaches zero. For all of these reasons, the ideal is to have trustworthy people who have low but non-zero probabilities of screwing up on secrecy. Then, you ask them not to share things that are not necessary for others' work. You only allow limited exceptions and discretion where it is useful. The alternative, of "good trustworthy people [who] get to have all the secrets versus bad untrustworthy people who don't get any," simply doesn't work in practice.
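
To make the granularity concrete, here is a minimal illustrative sketch (hypothetical roles and resources, not any real organization's system) of tracking, per person and per resource, which of read / write / delete are allowed:

```python
# Minimal sketch of need-to-know access control.
# Hypothetical roles and resources, purely illustrative.

from dataclasses import dataclass, field


@dataclass
class Grant:
    resource: str
    actions: set  # subset of {"read", "write", "delete"}


@dataclass
class Person:
    name: str
    grants: dict = field(default_factory=dict)  # resource -> Grant

    def allow(self, resource, *actions):
        # Grant only the specific actions this person's work requires.
        g = self.grants.setdefault(resource, Grant(resource, set()))
        g.actions.update(actions)

    def can(self, resource, action):
        g = self.grants.get(resource)
        return g is not None and action in g.actions


# Each person gets only the grants their work requires, not blanket trust.
researcher = Person("researcher_a")
researcher.allow("project_x_notes", "read", "write")

ops = Person("ops_b")
ops.allow("budget_2021", "read")

assert researcher.can("project_x_notes", "write")
assert not researcher.can("budget_2021", "read")   # no need to know
assert not ops.can("budget_2021", "delete")        # read-only access
```

The point of the sketch is only that access attaches to specific resources and actions, rather than to a binary trusted/untrusted status for each person.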

Replies from: Zack_M_Davis, ChristianKl
comment by Zack_M_Davis · 2021-10-18T18:06:52.364Z · LW(p) · GW(p)

Thanks for the explanation. (My comment was written from my idiosyncratic perspective of having been frequently intellectually stymied by speech restrictions, and not having given much careful thought to organizational design.)

comment by ChristianKl · 2021-10-18T07:36:29.462Z · LW(p) · GW(p)

I would imagine that most military and intelligence organizations do have psychiatrists and therapists on staff whom employees can access when they run into psychological trouble due to their work projects, and with whom they can share information about their project. 

Especially when operating in an environment that puts people in contact with issues that have caused some people to be institutionalized, having only a superior to share information with, but nobody to deal with the psychological issues arising from the work, seems like a flawed system.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-18T16:27:12.802Z · LW(p) · GW(p)

I agree that there is a real issue here that needs to be addressed, and I wasn't claiming that there is no reason to have support - just that there is a reason to compartmentalize.

And yes, US military use of mental health resources is off-the-charts. But in the intelligence community there are some really screwed up incentives, in that having a mental health issue can get your clearance revoked - and you won't necessarily lose your job, but the impact on a person's career is a great reason to avoid mental health care, and my (second-hand, not reliable) understanding is that there is a real problem with this.

Replies from: anon03
comment by anon03 · 2021-10-18T17:31:34.907Z · LW(p) · GW(p)

Seconding this: When I did classified work at a USA company, I got the strong impression that (1) If I have any financial problems or mental health problems, I need to tell the security office immediately; (2) If I do so, the security office would immediately tell the military, and then the military would potentially revoke my security clearance. Note that some people get immediately fired if they lose their clearance. That wasn't true for me—but losing my clearance would have certainly hurt my future job prospects.

My strong impression was that neither the security office nor anyone else had any intention to help us employees with our financial or mental health problems. Nope, their only role was to exacerbate personal problems, not solve them. There's an obvious incentive problem here; why would anyone disclose their incipient financial or mental health problems to the company, before they blow up? But I think from the company's perspective, that's a feature not a bug. :-P

(As it happens, neither myself nor any of my close colleagues had financial or mental health problems while I was working there. So it's possible that my impressions are wrong.)

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-18T18:19:35.983Z · LW(p) · GW(p)

I don't specifically know about mental health, but I do know specific stories about financial problems being treated as security concerns - and I don't think I need to explain how incredibly horrific it is to have an employee say to their employer that they are in financial trouble, and be told that they lost their job and income because of it.

comment by Vaniver · 2021-10-17T04:02:22.047Z · LW(p) · GW(p)

I didn't think "we" were operating on that model.

I think it's actually quite hard to have everyone in an organization trust everyone else in an organization, or to only hire people who would be trusted by everyone in the organization. So you might want to have some sort of tiered system, where (perhaps) the researchers all trust each other, but only trust the engineers they work with, and don't trust any of the ops staff; this means you only need one researcher to trust an engineer in order to hire them.

[On net I think the balance is probably still in favor of "internal transparency, gated primarily by time and interests instead of security clearance", but it's less obvious than it originally seems.]

comment by Vladimir_Nesov · 2021-10-17T13:52:49.095Z · LW(p) · GW(p)

The steelman that comes to mind is that by the time you actually know that you have a dangerous secret, it's either too late or risky to set up a secrecy policy. So it's useful to install secrecy policies in advance. The downsides that might be currently apparent are bugs that you still have the slack to resolve.

comment by Chris_Leong · 2021-10-17T13:47:37.180Z · LW(p) · GW(p)

It depends. For example, if you have an intern program, then the interns probably aren't especially trusted, as these decisions generally don't receive the same degree of scrutiny as employment.

And ops people probably don't need to know the details of the technical research.

comment by ChristianKl · 2021-10-18T07:29:22.550Z · LW(p) · GW(p)

If it becomes known to any of a few powerful intelligence agencies that MIRI is working on an internal project that they believe is likely to create an AGI in one or two years, that intelligence agency will hack/surveil MIRI to get all the secrets.

To the extent that MIRI's theory of change is that they are going to build an AGI on their own, independent of any outside organization, a high degree of secrecy is likely necessary for that plan to work.

I think it's highly questionable that MIRI will be able to develop AGI faster (especially when researchers don't talk to each other) than organizations like DeepMind, and thus it's unclear to me whether the plan makes sense, but it seems hard to imagine that plan without secrecy.

comment by daig · 2021-10-22T23:53:27.856Z · LW(p) · GW(p)

I’d like to offer some data points without much justification; I hope they might spur some thought/discussion without needing to be taken on faith:

  • I’m not a “vassarite”
  • I’ve never met Vassar
  • I’ve independently come to many of the same observations, before encountering the rationality community, that are typically attributed to Vassar’s crazy-making, e.g. around an “infinitely corrupt” and fundamentally coercive/conformist world. I’m not sure what to make of this yet, but at the least it makes insistence that we all just decide to talk things over rationally a bit tiring. I think this realization interacts poorly with the rationalist starting point of “save the world at all costs” and clearcut views of cooperation vs defection - which contributed to a lot of the breakdowns around the Vassar folks.
  • My impression is that he’s very angry at the world, has a strong sense of personal importance and agency, and a disregard for others’ psychological well-being. I suspect he relishes ”breaking” those unprepared to integrate his personal truth, because at least then they can’t be used in service of the “infinitely corrupt”.
  • I think the way he was expelled from the community relied on some pretty underhanded social gaming, denied the agency of those compelled by him, and completely missed the way in which he was actually harmful, such that the harm now proliferates in other forms hidden from discussion.
  • I have met Jessica and found her one of the clearest thinkers in the rationalist extended community by a large margin. (Not to say she is the only one; there are others more prominent.)
  • I’ve never had a psychotic break or instability otherwise, and yet
  • Her account of psychosis is the most lucid I’ve encountered - and her current epistemic integration of it seems really sound.
  • I think psychosis is quite complicated, and represents a willingness to leave consensus territory more than a magnitude of absolute wrongness (although it is obviously characterized by non-reality-tracking beliefs). In particular I see many commonly accepted beliefs in the world (some among the rationalists) that are both more wrong and more harmful than most of what psychotics come up with. I think the impact and source of the wrong beliefs should be considered, rather than treating psychosis as a magical departure from agency.
  • I’ve seen prominent members of the rationalist community consistently employing similar tactics around attentional misdirection, disowned power asymmetry, and commitment escalation to those they accuse their villains of: Vassar, Geoff, Brent, Ziz, etc.
  • Jessica seems committed to honest truth-seeking detached from political agenda in a way I really appreciate, and to a degree I haven’t seen elsewhere at similar levels in the community.
  • I’ve noticed a semi-systematic pattern of “very sane” people goading people in an unstable space of uncharted territory into being more crazy than they would be otherwise, by dismissing their concerns in ways that are obviously erroneous, preemptively treating them as dangerous, etc. - driving them further from consensus rather than helping them reintegrate.
  • The less “weird” an organization, and the more members relying on its credibility, the more costly it is to speak against it - so criticism tends to come only from those who can hold their own or who have nothing left to lose (frequently from a damaged psychological state). Leverage is easier to call out than MIRI/CFAR, and there are still more upstanding factions in the same boat. Onlookers might do well to take this availability bias into account when sizing up the accounts on each side.
  • The tightly knit rationalist power hubs have a near monopoly on “weird x-risk mitigation” funding and intellectual capital - so the kind of people who take x-risk deathly seriously may see no other option than to submit to the official narrative for fear of getting blacklisted. EDIT: I would also like to add that I’m generally optimistic about the individuals involved with CFAR/MIRI, especially in light of the discussion here, and despite having had similar experiences to Jessica’s with them in the past. I do worry that nitpicking the specifics of “who’s worse” or hunting “bad actors” detracts from understanding the root dynamics that lead to so many dramatic incidents in the extended community.
Replies from: clone of saturn, AnnaSalamon
comment by clone of saturn · 2021-10-23T09:02:28.224Z · LW(p) · GW(p)

Could you explain what it means to be "infinitely" corrupt?

comment by AnnaSalamon · 2021-10-23T00:23:58.220Z · LW(p) · GW(p)

Thank you. I disagree with "... relishes 'breaking' others", and probably some other points, but a bunch of this seems really right and like content I haven't seen written up elsewhere. Do share more if you have it. I'm also curious where you got this stuff from.

comment by Rafael Harth (sil-ver) · 2021-10-27T03:31:57.501Z · LW(p) · GW(p)

One thing I'd like to say at this point is that I think you (jessicata) have shown very high levels of integrity in responding to comments. There's been some harsh criticism of your post, and regardless of how justified it is, it takes character not to get defensive, especially given the subject matter. To me, this is also a factor in how I think about the post itself.

comment by Connor_Flexman · 2021-10-23T09:45:38.760Z · LW(p) · GW(p)

I want to bring up a concept I found very useful for thinking about how to become less susceptible to these sorts of things.

(NB that while I don't agree with much of the criticism here, I do think "the community" does modestly increase psychosis risk, and the Ziz and Vassar bubbles do so to extraordinary degrees. I also think there's a bunch of low-hanging fruit here, so I'd like us to take this seriously and get psychosis risk lower than baseline.)

(ETA because people bring this up in the comments: law of equal and opposite advice applies. Many people seem to not have the problems that I've seen many other people really struggle with. That's fine. Also I state these strongly—if you took all this advice strongly, you would swing way too far in the opposite direction. I do not anticipate anyone will do that but other people seem to be concerned about it so I will note that here. Please adjust the tone and strength-of-claim until it feels right to you, unless you are young and new to the "community" and then take it more strongly than feels right to you.)

Anyways, the concept: I heard the word “totalizing” on Twitter at some point (h/t to somebody). It now seems fundamental to my understanding of these dynamics. “Totalizing” was used in the sense of a “totalizing ideology”. This may just be a subculture term without a realer definition, but it means something like “an ideology that claims to affect/define meaning for all parts of your life, rather than just some”—and implicitly also that this ideology has a major effect and causes some behaviors at odds with default behavior.

This definition heavily overlaps with the stuff people typically associate with cults. For example, discouraging contact with family/outside, or having a whole lot hanging on whether the leaders approve of you. Both of these clearly affect how much you can have going on in your "outside" life.

Note that obviously totalization is on an axis. It's not just about time spent on an ideology, but how much mental space that ideology takes up.

I think some of the biggest negative influences on me in the rationality community also had the trait of pushing towards totalization, though were unalike in many other ways. One was ideological and peer pressure to turn socializing/parties/entertainment into networking/learning, which meant that part of my life also could become about the ideology. Another was the idea of turning my thoughts/thinking/being into more fodder to think about thinking processes and self-improve, which cannibalized more of my default state.

I think engaging with new, more totalizing versions of the ideology or culture is a major way that people get more psychotic. Consider the maximum-entropy model of psychosis, so named because you aren't specifying any of the neural or psychological mechanisms; you're taking strictly what you can verify and being maximally agnostic about it. In this model, you might define psychosis as when “thought gets too far away from normal, and your new mental state is devoid of many of the guardrails/protections/negative-feedback-loops/sanity-checks that your normal mental states have." (This model gels nicely with the fact that psychosis can be treated so well via drinking water, doing dishes, not thinking for a while, tranquilizers, socializing, etc. (h/t anon).) In this max-ent model of psychosis, it is pretty obvious how totalization leads to psychosis: changing more state, reducing more guardrails, rolling your own psychological protections that are guaranteed to have flaws, and cutting out all the normal stuff in your life that resets state. (Changing a bunch of psychological stuff at once is generally a terrible idea for the same reason, though that's a general psychosis tip rather than a totalization-related one.)

I still don't have a concise or great theoretical explanation for why totalization seems so predictive of ideological damage. I have a lot of reasons for why it seems clearly bad regarding your belief-structure, and some other reasons why it may just be strongly correlated with overreach in ways that aren't perfectly causal. But without getting into precisely why, I think it's an important lens to view the rationalist "community" in.

So I think one of the main things I want to see less of in the rationalist/EA "communities" is totalization.

This has a billion object-level points, most of which will be left as an exercise to the reader:

  • Don’t proselytize EA to high schoolers. Don’t proselytize other crazy ideologies without guardrails to young people. Only do that after your ideology has proven to make a healthy community with normal levels of burnout/psychosis. I think we can get there in a few years, but I don't think we're there yet. It just actually takes time to evolve the right memes, unfortunately.
  • To repeat the perennial criticism... it makes sense that the rationality community ends up pretty insular, but it seems good for loads of reasons to have more outside contact and ties. I think at the very least, encouraging people to hire outside the community and do hobbies outside the community are good starting points.
  • I've long felt that at parties and social events (in the Bay Area anyways) less time should be spent on model-building and networking and learning, and more time should be spent on anything else. Spending your time networking or learning at parties is fine if those are pretty different than your normal life, but we don't really have that luxury.
  • Someone recently tried to tell me they wanted to put all their charitable money into AI safety specifically, because it was their comparative advantage. I disagree with this even on a personal basis with small amounts. Making donations to other causes helps you take them seriously, in the way that trading with real-but-trivial amounts of money instead of paper trading moves you strongly from Far Mode into Near Mode. I think paying 10% overhead of charitable money to lower-EV causes is going to be much better for AI safety in the long-run due to seriousness-in-exploration, AND I shouldn’t even have to justify it as such—I should be able to say something like “it’s just unvirtuous to put all eggs in one basket, don’t do it”. I think the old arguments about obviously putting all your money into the highest-EV charity at a given time are similarly wrong.
  • I love that Lightcone has a bunch of books outside the standard rationalist literature, about Jobs, Bezos, LKY, etc etc.
  • In general, I don’t like when people try to re-write social mechanisms (I’m fine with tinkering, small experiments, etc). This feels to me like one of the fastest ways to de-stabilize people, as well as the dumbest Chesterton’s fence to take down because of how socializing is in the wheelhouse of cultural gradient descent and not at all remotely in the wheelhouse of theorizing.
  • I’m much more wary of psychological theorizing, x-rationality, etc due to basically the exact points in the bullet above—your mind is in the wheelhouse of gradient descent, not guided theorizing. I walk this one—I quit my last project in part because of this. Other forms of tinkering-style psychological experimentation or growth are likely more ok. But even “lots of debugging” seems bad here, basically because it gives you too much episteme of how your brain works and not enough techne or metis to balance it out. You end up subtly or not-subtly pushing in all sorts of directions that don’t work, and it causes problems. I think the single biggest improvement to debugging (both for ROI and for health) is if there was a culture of often saying “this one’s hopeless, leave it be” much earlier and explicitly, or saying “yeah almost all of this is effectively-unchangeable”. Going multiple levels down the tree to solve a bug is going too far. It’s too easy to get totalized by the bug-fixing spirit if you regard everything as mutable.
  • As dumb as jobs are, I’m much more pro-job than I used to be for a bunch of reasons. The core reasons are obviously not about psychosis, but other facets of totalization-escape seem like a major deal.
  • As dumb as regular schedules are, ditto. Having things that you repeatedly have to succeed in doing genuinely leaves you much less room for going psychotic. Being nocturnal and such are also offenders in this category.
  • I'd like to see Bay Area rationalist culture put some emphasis on real holidays rather than only rolling their own. E.g. Solstice instead of Christmas seems fine, but also we should have a lot of emphasis on Christmas too? I had a housemate who ran amazing Easter celebrations in Event Horizon that were extremely positive, and I loved that they captured the real spirit of Easter rather than trying to inject the spirit of Rationality into the corpse of Easter to create some animated zombie holiday. In this vein I also love Petrov Day but slightly worry that we focus much less on July 4th or Thanksgiving or other holidays that are more shared with others. I guess maybe I should just be glad we haven't rationalized those...
  • Co-dependency and totalizing relationships seem relevant here although not much new to say.

Anna's crusade for hobbies over the last several years has seemed extremely useful on this topic directly and indirectly.

I got one comment on a draft of this about how someone basically still endorsed, years later, the totalization they went through after their CFAR workshop. I think this is sort of fine—very excitable and [other characterizations] people can easily become fairly totalized when entering a new world. However, I still think that a culture which totalized them somewhat less would have been better.

Also, lots of people totalize themselves—I was one of those people who got very excited about rationality and wanted to push it to new heights and such, unendorsed by anyone in the "community" (and even disendorsed). So this isn't a question of "leadership" of some kind asking too much from people (except Vassar)—it's more a question of building a healthy culture. Let us not confuse blame with seeking to become better.

Replies from: Benquo, Unreal
comment by Benquo · 2021-10-23T13:14:37.209Z · LW(p) · GW(p)

Rationality ought to be totalizing. https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory [LW · GW]

Replies from: RobbBB, Connor_Flexman, Unreal
comment by Rob Bensinger (RobbBB) · 2021-10-23T19:05:25.544Z · LW(p) · GW(p)

Yeah, I think this points at a thing that bothers me about Connor's list, even though it seems clear to me that Connor's advice should be "in the mix".

Some imperfect ways of trying to point at the thing:

 

1.  'Playing video games all the time even though this doesn't feel deeply fulfilling or productive' is bad. 'Forcing yourself to never have fun and thereby burning out' is also bad. Outside of the most extreme examples, it can be hard to figure out exactly where to draw the line and what's healthy, what conduces to flourishing, etc. But just tracking these as two important failure modes, without assuming one of these error categories is universally better than the other, can help.

(I feel like "flourishing" is a better word than "healthy" here, because it's more... I want to say, "transhumanist"? Acknowledges that life is about achieving good things, not just cautiously avoiding bad things?)

 

2.  I feel like a lot of Connor's phrasings, taken fully seriously, almost risk... totalizing in the opposite direction? Insofar as that's a thing. And totalizing toward complacency, mainstream-conformity, and non-ambition leads to sad, soft, quiet failure modes, the absence of many good things; whereas totalizing in the opposite direction leads to louder, more Reddit-viral failure modes; so there is a large risk that we'll be less able to course-correct if we go too far in the 'stability over innovation' direction.

 

3.  I feel like the Connor list would be a large overcorrection for most people, since this advice doesn't build in a way to tell whether you're going too far in this direction, and most people aren't at high risk for psychosis/mania/etc.

I sort of feel like adopting this full list (vs. just having it 'in the mix') would mean building a large share of rationalist institutions, rituals, and norms around 'let's steer a wide berth around psychosis-adjacent behavior'.

It seems clear to me that there are ways of doing the Rationality Community better, but I guess I don't currently buy that this particular problem is so... core? Or so universal?

What specifically is our evidence that in absolute terms, psychosis-adjacent patterns are a larger rationality-community problem than depression-adjacent patterns, OCD-adjacent patterns, dissociation-adjacent patterns, etc., etc.?

 

4.  Ceteris paribus, it's a sign of rationality if someone compartmentalizes less, is better able to make changes to their lives in response to new information (including, e.g., installing trigger-action plans), takes more actions that are good for their long-term values and not just short-term rewards, etc.

I worry that a culture built around your suggestions, Connor (vs. one that just has those in the mix as considerations), would pathologize a lot of 'signs of rationality' and drive away or regress-to-the-mean the people who make this community different from a randomly selected community.

This paragraph especially raised this worry for me:

Also, lots of people totalize themselves—I was one of those people who got very excited about rationality and wanted to push it to new heights and such, unendorsed by anyone in the "community" (and even disendorsed). So this isn't a question of "leadership" of some kind asking too much from people (except Vassar)—it's more a question of building a healthy culture. Let us not confuse blame with seeking to become better.

I don't know anything about what things you wanted to push for, and with that context I assume I'd go 'oh yeah, that is obviously unhealthy and unreasonable'?

But as written, without the context, this reads to me like it's pathologizing rationality, treating ambition and 'just try things' as unhealthy, etc.

I really worry about a possible future version of the community that treats 'getting very excited about rationality and wanting to push it to new heights' as childishly naive, old hat / obviously could never work, or (worse!) as a clear sign of an "unhealthy" mind.

(Unless, like, we actually reach the point of confidence that we've run out of big ways to improve our rationality. If we run out of improvements, then I want to believe we've run out of improvements. But I don't think that's our situation today.)

 

5.  There's such a thing as being too incautious, adventurous, and experimental; there's also such a thing as being too cautious and unadventurous, and insufficiently experimental. I actually think that the rationalists have a lot of both problems, rather than things being heavily stacked in the 'too incautious' category. (Though maybe this is because I interact with a different subset of rationalists.)

 

An idea in this space that makes me feel excited rather than worried, is Anna's description of a "Center for Bridging between Common Sense and Singularity Scenarios [LW(p) · GW(p)]" and her examples and proposals in Reality-Revealing and Reality-Masking Puzzles [LW · GW].

I'm excited about the idea of figuring out how to make a more "grounded" rationalist community, one that treats all the crazy x-risk, transhumanism, Bayes, etc. stuff as "just more normality" (or something like that). But I'm more wary of the thing you're pointing at, which feels more to me like "giving up on the weird stuff" or "trying to build a weirdness-free compartment in your mind" than like trying to integrate the weird rationalist stuff into being a human being.

Replies from: RobbBB, SaidAchmiz, TekhneMakre
comment by Rob Bensinger (RobbBB) · 2021-10-23T19:29:10.317Z · LW(p) · GW(p)

I think this is also a case of 'reverse all advice you hear'. No one is at the optimum on most dimensions, so a lot of people will benefit from the advice 'be more X' and a lot of people will benefit from the advice 'be less X'. I'm guessing your (Connor's) advice applies perfectly to lots of people, but for me...

  • Even after working at MIRI and living in the Bay for eight years, I don't have any close rationalist friends who I talk to (e.g.) once a week, and that makes me sad.

    I have non-rationalist friends who I do lots of stuff with, but in those interactions I mostly don't feel like I can fully be 'me', because most of the things I'm thinking about moment-to-moment and most of the things that feel deeply important to me don't fit the mental schemas non-rationalists round things off to. I end up feeling like I have to either play-act at fitting a more normal role, or spend almost all my leisure time bridging inferential gap after inferential gap. (And no, self-modifying to better fit mainstream schemas does not appeal to me!)
     
  • I'd love to go to these parties you're complaining about that are focused on "model-building and... learning"!

    Actually, the thing I want is more extreme than that: I'd love to go to more 'let's do CFAR-workshop-style stuff together' or 'let's talk about existential risk' parties.

    I think the personal problem I've had is the opposite of the one you're pointing at: I feel like (for my idiosyncratic preferences) there's usually not enough social affordance to talk about "real stuff" at rationalist-hosted parties, versus talking about pleasantries. This makes me feel like I'm playing a role / reading a script, which I find draining and a little soul-crushing.

    In contrast, events where I don't feel like there's a 'pretend to be normal' expectation (and where I can talk about my bizarre actual goals and problems) feel very freeing and fulfilling to me, and like they're feeding me nutrients I've been low on rather than empty calories.
     
  • "Making donations to other [lower-EV] causes helps you take them seriously, in the way that trading with real-but-trivial amounts of money instead of paper trading moves you strongly from Far Mode into Near Mode"

    OK, but what about the skills of 'doing the thing you think is highest-EV', 'trying to figure out what the highest-EV thing is', or 'developing deeper and more specialized knowledge on the highest-EV things (vs. flitting between topics)'? I feel like those are pretty important skills too, and more neglected by the world at large; and they have the advantage of being good actions on their own terms, rather than relying on a speculative theory that says this might help me do higher-EV things later.

    I feel especially excited about trying to come up with new projects that might be extremely-high-EV, rather than just evaluating existing stuff.

    I again feel like in my own life, I don't have enough naive EA conversations about humanity's big Hamming problems / bottlenecks. (Which is presumably mostly my fault! Certainly it's up to me to fix this stuff. But if the community were uniformly bad in the opposite direction, then I wouldn't expect to be able to have this problem.)
     
  • "I'd like to see Bay Area rationalist culture put some emphasis on real holidays rather than only rolling their own."

    Rationalist solstice is a real holiday! 😠

    I went to a mostly-unironic rationalist July 4 party that I liked a lot, which updates me toward your view. But I think I still mostly come down on the opposite side of this tradeoff, if I were only optimizing for my own happiness.

    'No Christmas' feels sad and cut-off-from-mainstream-culture to me, but 'pantomiming Christmas without endorsing its values or virtues' feels empty to me. "Rationalizing" Christmas feels like the perfect approach here (for me personally): make a new holiday that's about things I actually care about and value, that draws out neglected aspects of Christmas (or precursor holidays like Saturnalia). I'd love to attend a rationalist seder, a rationalist Easter, a rationalist Chanukkah, etc. (Where 'rationalist' refers to changing the traditions themselves, not just 'a bunch of rationalists celebrating together in a way that studiously tries to avoid any acknowledgment of anything weird about us'.)
comment by Said Achmiz (SaidAchmiz) · 2021-10-23T21:40:10.180Z · LW(p) · GW(p)

I think that many people (and I have not decided yet if I am one such) may respond to this with “one man’s modus tollens is another’s modus ponens”.

That is, one might read things like this:

I sort of feel like adopting this full list (vs. just having it ‘in the mix’) would mean building a large share of rationalist institutions, rituals, and norms around ‘let’s steer a wide berth around psychosis-adjacent behavior’.

… and say: “yes, exactly, that’s the point”.

Or, one might read this:

  1. Ceteris paribus, it’s a sign of rationality if someone compartmentalizes less, is better able to make changes to their lives in response to new information (including, e.g., installing trigger-action plans), takes more actions that are good for their long-term values and not just short-term rewards, etc.

… and say: “yes, exactly, and that’s bad”.

(Does that seem absurd to you? But consider that one might not take at face value the notion that the change in response to new information is warranted, that the “long-term values” have been properly apprehended—or even real, instead of confabulated; etc.)

One might read this:

I worry that a culture built around your suggestions, Connor (vs. one that just has those in the mix as considerations), would pathologize a lot of ‘signs of rationality’ and drive away or regress-to-the-mean the people who make this community different from a randomly selected community.

… and say: “yes, just so, and this is good, because many of the ways in which this community is different from a randomly selected community are bad”.

This paragraph especially raised this worry for me:

Also, lots of people totalize themselves—I was one of those people who got very excited about rationality and wanted to push it to new heights and such, unendorsed by anyone in the “community” (and even disendorsed). So this isn’t a question of “leadership” of some kind asking too much from people (except Vassar)—it’s more a question of building a healthy culture. Let us not confuse blame with seeking to become better.

I don’t know anything about what things you wanted to push for, and with that context I assume I’d go ‘oh yeah, that is obviously unhealthy and unreasonable’?

But is this unhealthy and unreasonable, or is it actually prudent? In other words—to continue the previous pattern—one might read this:

But as written, without the context, this reads to me like it’s pathologizing rationality, treating ambition and ‘just try things’ as unhealthy, etc.

… and say: “yes, we have erred much too far in the opposite direction, this is precisely a good change to make”.

We can put things in this way: you are saying, essentially, that Connor’s criticisms and recommendations indicate changes that would undermine the essence of the rationalist community. But might one not say, in response: “yes, and that’s the point, because the rationalist community is fundamentally a bad idea and does more harm than good by existing”? (Note that this is different from saying that rationality, either as a meme or as a personal principle, is bad or harmful somehow.)

Replies from: RobbBB, Connor_Flexman
comment by Rob Bensinger (RobbBB) · 2021-10-24T01:54:37.942Z · LW(p) · GW(p)

Yeah, I disagree with that view.

To keep track of the discussion so far, it seems like there are at least three dimensions of disagreement:

 

1.  Mainstream vs. Rationalists Cage Match

1A. Overall, the rationality community is way better than mainstream society.

1B. The rationality community is about as good as mainstream society.

1C. The rationality community is way worse than mainstream society.

 

My model is that I, Connor, Anna, and Vassar agree with 1A, and hypothetical-Said-commenter agrees with 1C. (The rationalists are pretty weird, so it makes sense that 1B would be a less common view.)

 

2. Psychoticism vs. Anti-Psychoticism

2A. The rationality community has a big, highly tractable problem: it's way too high on 'broadly psychoticism-adjacent characteristics'.

2B. The rationality community has a big, highly tractable problem: it's way too low on those characteristics.

2C. The rationality community is basically fine on this metric. Like, we should be more cautious around drugs, but aside from drug use there isn't a big clear thing it makes sense for most community members to change here.

 

My model is that Connor, Anna, and hypothetical-Said-commenter endorse 2A, Vassar endorses 2B, and I currently endorse 2C. (I think there are problems here, but more like 'some community members are immunocompromised and need special protections', less like 'there's an obesity epidemic ravaging the community'.)

Actually, I feel a bit confused about Anna's view here, since she seems very critical of mainstream society's (low-psychoticism?) culture, but she also seems to think the rationalist community is causing lots of unnecessary harm by destabilizing community members, encouraging overly-rapid changes of belief and behavior, etc.

If I had to speculate (probably very wrongly) about Anna's view here, maybe it's that there's a third path where you take ideas incredibly seriously, but otherwise are very low-psychoticism and very 'grounded'?

The mental image that comes to mind for me is a 60-year-old rural east coast libertarian with a very 'get off my lawn, you stupid kids' perspective on mainstream culture. Relatively independent, without being devoid of culture/tradition/community; takes her own ideology very seriously, and doesn't compromise with the mainstream Modesty-style; but also is very solid, stable, and habit-based, and doesn't constantly go off and do wild things just because someone tossed the idea out there.

(My question would then be whether you can have all those things plus rationality, or whether the rationality would inherently ruin it because you keep having to update all your beliefs, including your beliefs about your core identity and values. Also, whether this is anything remotely like what Anna or anyone else would advocate?)

 

3. Rationality Community: Good or Bad?

There are various ways to operationalize this, but I'll go with:

3A. The rationality community is doing amazing. There isn't much to improve on. We're at least as cool as Dath Ilan teenagers, and plausibly cooler.

3B. The rationality community is doing OK. There's some medium-sized low-hanging fruit we could grab to realize modest improvements, and some large high-hanging fruit we can build toward over time, but mostly people are being pretty sensible and the norms are fine (somewhere between "meh" and "good").

3C. The rationality community is doing quite poorly. There's large, known low-hanging fruit we could use to easily transform the community into a way way better (happier, more effective, etc.) entity.

3D. The rationality community is irredeemably bad, isn't doing useful stuff, should dissolve, etc.

 

My model is that I endorse 3B ('we're doing OK'); Connor, Anna, and Vassar endorse 3C ('we're doing quite poorly'); and hypothetical-Said-commenter endorses 3D.

This maps pretty well onto people's views-as-modeled-by-me in question 2, though you could obviously think psychoticism isn't a big rationalist problem while also thinking there are other huge specific problems / low-hanging fruit for the rationalists.

I guess I'm pretty sympathetic to 3C. Maybe I'd endorse 3C instead in a different mood. If I had to guess at the big thing rationalists are failing at, it would probably be 'not enough vulnerability / honesty / Hamming-ness' and/or 'not enough dakka / follow-through / commitment'?

 

I probably completely mangled some of y'all's views, so please correct me here.

Replies from: Unreal, SaidAchmiz
comment by Unreal · 2021-10-24T21:44:29.064Z · LW(p) · GW(p)

A lot of the comments in response to Connor's point are turning this into a single axis with 'mainstream norms' on one side and 'weird/DIY norms' on the other and trying to play tug-of-war, but I actually think the thing is way more nuanced than this suggests.

Proposal: 

  • Investigate the phenomenon of totalization. Where does it come from, what motivates it, what kinds of people fall into it... To what extent is it coming from external vs internal pressure? Are there 'good' kinds of totalizing and 'bad' kinds? 
  • Among people who totalize, what kinds of vulnerabilities do they experience as a result? Do they get exploited more by bad actors? Do they make common sense mistakes? Etc.

I am willing to bet there is a 'good' kind of totalizing and a 'bad' kind. And I think my comment about elitism was one of the bad kinds. And I think it's not that hard to tell which is which? I think it's hard to tell 'from the inside' but I... think I could tell from the outside with enough observation and asking them questions? 

A very basic hypothesis is: To the extent that a totalizing impulse is coming from addiction (underspecified term here, I don't want to unpack rn), it is not healthy. To the extent that a totalizing impulse is coming from an open-hearted, non-clingy, soulful conviction, it is healthy. 

I would test that hypothesis, if it were my project. Others may have different hypotheses. 

comment by Said Achmiz (SaidAchmiz) · 2021-10-24T03:28:51.880Z · LW(p) · GW(p)

I want to note that the view / reasoning given in my comment applies (or could apply) quite a bit more broadly than the specific “psychoticism” issue (and indeed I took Connor’s top-level comment to be aimed more broadly than that). (I don’t know, actually, that I have much to say about that specific issue, beyond what I’ve already said elsethread here.)

I do like the “rural east coast libertarian” image. (As far as “can you have that and also rationality” question, well, why not? But perhaps the better question is “can you have that and Bay Area rationalist culture”—to which the answer might be, “why would you want to?”)

comment by Connor_Flexman · 2021-10-23T22:42:41.442Z · LW(p) · GW(p)

(I would not take this modus tollens, I don't think the "community" is even close to fundamentally bad, I just think some serious reforms are in order for some of the culture that we let younger people build here.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-23T22:53:16.509Z · LW(p) · GW(p)

Indeed, I did not suspect that you would—but (I conjecture?) you also do not agree with Rob’s characterizations of the consequences of your points. It’s one who agrees with Rob’s positive take, but opposes his normative views on the community, that would take the other logical branch here.

comment by TekhneMakre · 2021-10-24T03:14:43.133Z · LW(p) · GW(p)

> a larger rationality-community problem than depression-adjacent patterns, OCD-adjacent patterns, dissociation-adjacent patterns

Well, Connor's list would probably help with most of these as well. (Not that I disagree with your point.)

comment by Connor_Flexman · 2021-10-23T22:16:54.086Z · LW(p) · GW(p)

But the "community" should not be totalizing.

(Also, I think rationality should still be less totalizing than many people take it to be, because a lot of people replace common sense with rationality. Instead one should totalize themselves very slowly, over years, watching for all sorts of mis-steps and mistakes, and merge their past life with their new life. Sure, rationality will eventually pervade your thinking, but that doesn't mean at age 22 you throw out all of society's wisdom and roll your own.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-23T23:14:54.487Z · LW(p) · GW(p)

Reservationism is the proper antidote to the (prematurely) totalizing nature of rationality.

That is: take whatever rationality tells you, and judge it with your own existing common sense, practical reason, and understanding of the world. Reject whatever seems to you to be unreasonable. Take on whatever seems to you to be right and proper. Excise or replace existing parts of your epistemology and worldview only when it genuinely seems to you that those parts are dysfunctional or incorrect, regardless of what the rationality you encounter is telling you about them.

(Don’t take this quick summary as a substitute for reading the linked essay; read it yourself, judge it for yourself.)

Note, by the way, that rationality—as taught in the Sequences—already recommends this! If anyone fails to approach the practice of rationality in this proper way, they are failing to do that which we have explicitly been told to do! If your rationality is “prematurely totalizing”, then you’re doing it wrong.

Consider also how many times we have heard a version of this: “When I read the Sequences, the ideas found therein seemed so obvious—like they’d put into words things I’ve always somehow known or thought, but had never been able to formulate so clearly and concisely!”. This is not a coincidence! If you learn of a “rationality”-related idea, and it seems to you to be obviously correct, such that you find that not only is it obvious that you should integrate it into your worldview, but indeed that you’ve already integrated it (so naturally and perfectly does it fit)—well, good! But if you encounter an idea that is strange, and counterintuitive, then examine it well, before you rush to integrate it; examine it with your existing reason—which will necessarily include all the “rationality” that you have already carefully and prudently integrated.

(And this, too, we have already been told.)

comment by Unreal · 2021-10-23T15:12:16.421Z · LW(p) · GW(p)

I don't think there's actually a contradiction between Eliezer's post and Connor's comment. But maybe you should bring up specifics if you think there is one. 

comment by Unreal · 2021-10-23T15:06:30.125Z · LW(p) · GW(p)

I like everything you say here. Hear hear. 

I resonate as someone who wanted to 'totalize' themselves when I lived in the Bay Area rationalist scene. One hint as to why: I have felt, from a young age, compelled towards being one of the elite. I don't think this is the case for most rationalists or anything, but noting my own personal motivation in case this helps anyone introspect on their own motivations more readily.

It was important for my identity / ego to be "one of the top / best people" and to associate with the best people. I had a natural way of dismissing anyone I thought was "below" my threshold of worthiness—I basically "didn't think about them" and had no room in my brain for them. (I recognize the problematic-ness of that now? Like these kinds of thoughts lead to genocide, exploitation, runaway power, slavery, and a bunch of other horrible things. As such, I now find this 'way of seeing' morally repulsive.)

The whole rationality game was of egoic interest to me, because it seemed like a clear and even correct way of distinguishing the elite from the non-elite. Obviously Eliezer and Anna and others were just better than other people and better at thinking which is hugely important obviously and AI risk is something that most people don't take seriously oh my god what is wrong with most people ahhh we're all gonna die. (I didn't really think thoughts like this or feel this way. But it would take more for me to give an accurate representation so I settled for a caricature. I hope you're more charitable to your own insides.) 

When a foolish ego wants something like this, it basically does everything it can to immerse themselves in it, and while it's very motivating and good for learning, it is compelled towards totalization and will make foolish sacrifices. In the same way, perhaps, that young pretty Koreans sacrifice for the sake of becoming an idol in the kpop world. 

MAPLE is like rehab for ego addicts. I find myself visiting my parents each year (after not having spoken to them for a decade) and valuing 'just spending time together with people'. Like going bowling for the sake of togetherness, more than for the sake of bowling. And people here have a plethora of hobbies like woodworking, playing music, juggling, mushroom picking, baking, etc. Some people here want to totalize but are largely frustrated by their inability to do so to the extent they want to, and I think it's an unhealthy addiction pattern that they haven't figured out how to break. :l 

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-23T22:10:51.558Z · LW(p) · GW(p)

I note that the things which you're resonating with, which Connor proposes and which you expect would have helped you, or helped protect you...

...protect you from things which were not problems for me.

Which is not to say that those things are bad.  Like, saving people from problems they have (that I do not have) sounds good to me.

But it does mean that there is [a good thing] for at least [some people] already, and while it may be right to trade off against that, I would want us to be eyes-open that it might be a tradeoff, rather than assuming that sliding in the Connor-Unreal direction is strictly and costlessly good.

Replies from: Unreal, Connor_Flexman
comment by Unreal · 2021-10-24T21:17:39.375Z · LW(p) · GW(p)

Hmm, I want to point out I did not say anything about what I expected would have helped me or helped 'protect' me. I don't see anything on that in my comment... 

I also don't think it'd be good for me to be saved from my problems...? but maybe I'm misunderstanding what you meant. 

I definitely like Connor's post. My "hear hear" was a kind of friendly encouragement for him speaking to something that felt real. I like the totalization concept. Was a good comment imo. 

I do not particularly endorse his proposal... It seems like a non-starter. A better proposal might be to run some workshops or something that try to investigate this 'totalization' phenomenon in the community and what's going on with it. That sounds fun! I'd totally be into doing this. Prob can't though. 

comment by Connor_Flexman · 2021-10-23T22:46:36.209Z · LW(p) · GW(p)

I agree with most of this point. I've added an ETA to the original to reflect this. My quibble (that I think is actually important) is that I think it should be less of a tradeoff and more of an {each person does the thing that is right for them}. 

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-24T00:37:41.491Z · LW(p) · GW(p)

Endorsed, but that means when we're talking about setting group norms and community standards, what we're really shooting for is stuff that makes all the options available to everyone, and which helps people figure out what would be good for them as individuals.

Where one attractor near what you were proposing (i.e. not what you were proposing but what people might hear in your proposal, or what your proposal might amount to in practice) is "new way good, old way bad."

Instead of "old way insufficient, new way more all-encompassing and cosmopolitan."

Replies from: Connor_Flexman
comment by Connor_Flexman · 2021-10-24T01:07:47.943Z · LW(p) · GW(p)

Yeah, ideally would have lampshaded this more. My bad.

The part that gets extra complex is that I personally think ~2/3+ of people who say totalization is fine for them are in fact wrong and are missing out on tons of subtle things that you don't notice until longer-term. But obviously the most likely thing is that I'm wrong about this. Hard to tell either way. I'd like to point this out more somehow so I can find out, but I'd sort of hoped my original comment would make things click for people without further time. I suppose I'll have to think about how to broach this further.

comment by Nisan · 2021-10-17T21:21:30.885Z · LW(p) · GW(p)

The psychotic break you describe sounds very scary and unpleasant, and I'm sorry you experienced that.

comment by Unreal · 2021-10-17T19:48:25.506Z · LW(p) · GW(p)

I am impressed and appreciative towards Logan for trying to say things on this post despite not being very coherent. I am appreciative and have admiration towards Anna for making sincere attempts to communicate out of a principled stance in favor of information sharing. I am surprised and impressed by Zoe's coherence on a pretty triggering and nuanced subject. I enjoy hearing from jessicata, and I appreciate the way her mind works; I liked this post, and I found it kind of relieving.

I am a bit crestfallen at my own lack of mental skillfulness in response to reading posts like this one. 

While this feels like a not-very-LW-y way to go about things, I will just try to make a list... of .... things ... 

  • I don't like LW's discussion norms or the structure of its website. I think it favors left-brained non-indexical dialogue—which I believe compounds on many of the major problems of science today. I want to holistically appreciate the role of emotions, intuitions, physical sensations, facial expressions, felt senses, identity, and background context on truth-seeking. LW feels like it wants to strip that away or makes it very hard to bring them in. I don't blame LW, its creators, or whatever for any of that. It's fine, it's fine.
  • My reaction here doesn't seem very skillful, but I basically feel super blocked on engaging on LW for the above reason. I don't like it! My communication style is pretty nonverbal and seems like it only works in person, and thisisnobody'sfault eerg. 
  • I appreciate Michael Vassar for the work that he's done; I think he's helped people. I think he's made certain things more clear that are difficult to see. That said, I also detect a poison in the framing (that I associate with him and his group) that is corrosively seeping into certain narratives about trust, narrative control, institutions, etc. Namely, the poison takes the form of cynicism or "trust no one" or paranoia. I feel like yelling about it. 
  • I don't think psychosis is a good thing to induce in a person, ever. Period. I don't care if they have 'consented' to it, I don't think it's a wise or loving thing to do to oneself or to another. I believe similarly about suicide. That said, if people are determined to go through with it, then my "seeming non-acceptance" about it is more likely to cause harm to the fabric of community. At bottom, I accept people. And I distinguish this from accepting certain actions that people might take. But I get that people conflate their personhood with their behaviors, so. Here we are. 
    • I accept you no matter what. Sorry if my words feel like a rejection. 
  • This "person X infecting person Y with a demon" framing seems to have the 'paranoia' poison mentioned above, and therefore, I don't like it. It's not 'entirely wrong / inaccurate' but I want people to frame the thing in a way that feels less mentally destabilizing to internalize. 
  • You might be thinking that reality is just mentally destabilizing when you get down to it, and so maybe people doing the truth-seeking thing... some of them are just gonna go mad, if they're doing it right. But friends, I wanna make a claim. It's possible to build mental capacity such that difficult truths can be seen without being terrible for you. (Monastic training has been helpful for me.) Some people are maybe not willing to go through extensive mental training before getting to learn forbidden truths, but ... my take: The end of the world doesn't need your brain to explode on top of all its other issues. Manic insight trips are not what we need. Beware a sense of urgency that tells you that you need to make drastic changes to your psyche or that you need to take drugs in order to "accelerate your progress". 
    • It's not worth risking your mind; your mind is precious. Also you are not special. Mutant super soldier Elon-Musk experimentation that risks your death or insanity is not gonna make the world a better place, and I claim... for at least some of you, you're probably attracted to it in the first place because you secretly hope for your own annihilation. This is foolishness. 
  • OK, end rant. I made a bunch of claims and definitely didn't try to explain them. Sorry, LW. I am happy for you to make your own discoveries about these claims. I have done my own investigations into them over a long period of time and a lifetime of mistakes; unfortunately I am pretty bad at legibility. ... Some knowledge might be best discovered by walking the path for yourself. 
  • I think I would be more legible in a one-on-one convo, with a fair amount of patience. Scaling is hard. 
  • Open to feedback. Open to changing my mind. 
  • Not open to drama
Replies from: BrienneYudkowsky
comment by LoganStrohl (BrienneYudkowsky) · 2021-10-17T20:32:39.620Z · LW(p) · GW(p)

oh man sounds like we have a really similar relationship with LW for the same reasons

comment by Chris_Leong · 2021-10-17T02:09:48.577Z · LW(p) · GW(p)

I am very sorry to hear about your experiences. I hope you've found peace and that the organizations can take your experiences on board.

On one hand you seem to want there to be more open discussion around mental health, whilst on the other you are criticising MIRI and CFAR for having people have mental health issues in their orbit. These seem somewhat in tension with each other.

I think one of the factors is that the mission itself is stressful. For example, air traffic control and the police are high stress careers, yet we need both.

Another issue is that rationality is in some ways more welcoming of (at least some subset of) people whom society would deem weird, especially since certain conditions can be paired with great insight or drive. It seems like the less a community appreciates the silver-lining of mental health issues, the better they'd score according to your metric.

Regarding secrecy, I'd prefer for AI groups to lean too much towards maintaining precautions about info-hazards rather than too little. (I'm only referring to technical research, not misbehaviour.) I think it's perfectly valid for donors to decide that they aren't going to give money without transparency, but there should also be respect for people who are willing to trust/make the leap of faith.

It's not the environment for everyone, but then again, the same could be said about the famously secretive Apple or, say, working in national security. That seems like less of an organizational problem than one of personal fit (other alignment organizations and grant opportunities seem less secretive). One question: Would your position have changed if you felt that they had been more upfront about the stresses of the job?

Another point: I put the probability of us being screwed without Eliezer very high. This is completely different from putting him on a pedestal and pretending he's perfect or that no-one can ever exceed his talents. Rather, my model is that without him it likely would have taken a number of additional years for AI safety research to really kick off. And in this game even a few years can really matter. Now other people will have a more detailed model of the history than me and may come to different conclusions. But it's not really a claim that's very far out there. Of course, I don't know who said it or the context, so it's hard for me to update on this.

Regarding people being a better philosopher than Kant or not, we have to take into account that we have access to much more information today. Basic scientific knowledge and understanding of computers resolves many of the disputes that philosophers used to argue over. Further, even if someone hasn't read Kant directly, they may very well have read people who have read Kant. So I actually think that there would be a huge number of people floating around who would be better philosophers than Kant given these unfair advantages. Of course, it's an entirely different question if we gave Kant access to today's knowledge, but then again, given population growth, it wouldn't surprise me if there were still a significant number of them.

One point I strongly agree with you on is that rationalists should pay more attention to philosophy. In fact, my most recent theory of logical counterfactuals was in part influenced by Kant - although I haven't had time to read his Critique of Pure Reason yet.

Replies from: jessica.liu.taylor, ESRogs
comment by jessicata (jessica.liu.taylor) · 2021-10-17T02:34:21.782Z · LW(p) · GW(p)

I am very sorry to hear about your experiences. I hope you’ve found peace and that the organizations can take your experiences on board.

Thanks, I appreciate the thought.

On one hand you seem to want there to be more open discussion around mental health, whilst on the other you are criticising MIRI and CFAR for having people have mental health issues in their orbit. These seem somewhat in tension with each other.

I don't see why these would be in tension. If there is more and better discussion then that reduces the chance of bad outcomes. (Partially, I brought up the mental health issues because it seemed like people were criticizing Leverage for having people with mental health issues in their orbit, but it seems like Leverage handled the issue relatively well all things considered.)

I think one of the factors is that the mission itself is stressful. For example, air traffic control and the police are high stress careers, yet we need both.

I basically agree.

It seems like the less a community appreciates the silver-lining of mental health issues, the better they’d score according to your metric.

I don't think so. I'm explicitly saying that talking about weird perceptions people might have, such as mental subprocess implantation, is better than the alternative; this is more likely to realize the benefits of neuro-atypicality, by allowing people to recognize when non-neurotypicals are having accurate perceptions, and reduce the risk of psychiatric hospitalization or other bad outcomes.

Regarding secrecy, I’d prefer for AI groups to lean too much on the side of maintaining precautions about info-hazards than too much.

I don't think it makes sense to run a "biased" expected value calculation that over-weights costs relative to benefits of information sharing. There are significant negative consequences when discussion about large topics is suppressed, which include failure to have the kind of high-integrity conversations that could lead to actual solutions.

Would your position have changed if you felt that they had been more upfront about the stresses of the job?

I don't think the issue was just that it was stressful, but that it was stressful in really unexpected ways. I think me from when I started work would be pretty surprised reading this post.

Rather, my model is that without him it likely would have taken a number of additional years for AI safety research to really kick off. And in this game even a few years can really matter.

That doesn't seem wrong to me. However there's a big difference between saying it saves a few years vs. causes us to have a chance at all when we otherwise wouldn't. (I can't rule out the latter claim, but it seems like most of the relevant ideas were already in the memespace, so it's more an issue of timing and consolidation/problem-framing.)

Regarding people being a better philosopher than Kant or not, we have to take into account that we have access to much more information today.

I think when people are comparing philosophers they're usually trying to compare novel contributions the person made relative to what came before, not how much raw philosophical knowledge they possess.

One point I strongly agree with you on is that rationalists should pay more attention to philosophy.

Yes, I've definitely noticed a trend where rationalists are mostly continuing from Hume and Turing, neglecting e.g. Kant as a response to Hume.

Replies from: SaidAchmiz, Zack_M_Davis, Chris_Leong
comment by Said Achmiz (SaidAchmiz) · 2021-10-17T03:47:43.824Z · LW(p) · GW(p)

One point I strongly agree with you on is that rationalists should pay more attention to philosophy.

Yes, I’ve definitely noticed a trend where rationalists are mostly continuing from Hume and Turing, neglecting e.g. Kant as a response to Hume.

I’ve yet to see a readable explanation of what Kant had to say (in response to Hume or otherwise) that’s particularly worth paying attention to (despite my philosophy classes in college having covered Kant, and making some attempts later to read him). If you (or someone else) were to write an LW post about this, I think this might be of great benefit to everyone here.

Replies from: RobbBB, jessica.liu.taylor, BrienneYudkowsky, Chris_Leong
comment by Rob Bensinger (RobbBB) · 2021-10-18T02:11:10.234Z · LW(p) · GW(p)

I don't know what Kant-insights Jessica thinks LW is neglecting, but I endorse Allen Wood's introduction to Kant as a general resource.

(Partly because Wood is a Kant scholar who loves Kant but talks a bunch about how Kant was just being sloppy / inconsistent in lots of his core discussions of noumena, rather than assuming that everything Kant says reflects some deep insight. This makes me less worried about IMO one of the big failure modes of philosopher-historians, which is that they get too creative with their novel interpretations + treat their favorite historical philosophers like truth oracles.)

BTW, when it comes to transcendental idealism, I mostly think of Arthur Schopenhauer as 'Kant, but with less muddled thinking and not-absolutely-horrible writing style'. So I'd usually rather go ask what Schopenhauer thought of a thing, rather than what Kant thought. (But I mostly disagree with Kant and Schopenhauer, so I may be the wrong person to ask about how to properly steel-man Kant.)

comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:25:33.322Z · LW(p) · GW(p)

I've been working on a write-up on and off for months, which I might or might not ever get around to finishing.

The basic gist is that, while Hume assumes you have sense-data and are learning structures like causation from this sense-data, Kant is saying you need concepts of causation to have sense-data at all.

The Transcendental Aesthetic is a pretty simple argument if applied to Solomonoff induction. Suppose you tried to write an AI to learn about time, which didn't already have time. How would it structure its observations, so it could learn about time from these different observations? That seems pretty hard, perhaps not really possible, since "learning" implies past observations affecting how future observations are interpreted.

In Solomonoff induction there is a time-structure built in, which structures observations. That is, the inductor assumes a priori that its observations are structured in a sequence.

Kant argues that space is also a priori this way. This is a somewhat suspicious argument given that vanilla Solomonoff induction doesn't need a priori space to structure its observations. But maybe it's true in the case of humans, since our visual cortexes have a notion of spatial observation already built in. (That is, when we see things, we see them at particular locations.)

Other than time and space structuring observations, what else has to be there? To see the same object twice there has to be a notion that two observations could be of the same object. But that is more structure than simply spacetime; there's also a structure of connection between different observations so they can be of the same object.

Solomonoff induction might learn this through compression. Kant, unfortunately, doesn't explicitly discuss compression all that much. However, even Solomonoff induction makes a priori assumptions beyond spacetime, namely, that the universe is a Turing machine. This is a kind of causal assumption. You couldn't get "this runs on a Turing machine" by just looking at a bunch of data, without having some kind of prior that already contains Turing machines. It is, instead, assumed a priori that there's a Turing machine causing your observations.
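(As a minimal sketch of that last point, in the standard Solomonoff setup: the prior is defined relative to a fixed universal Turing machine $U$, summing over programs $p$ whose output extends the observation sequence $x$,

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|},$$

so both the sequential structure of $x$ and the restriction to computable generators are assumed before any data comes in, rather than learned from it.)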

The book is mostly a lot of stuff like this, what thought structures we must assume a priori to learn from data at all.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-18T05:19:07.305Z · LW(p) · GW(p)

The basic gist is that, while Hume assumes you have sense-data and are learning structures like causation from this sense-data, Kant is saying you need concepts of causation to have sense-data at all.

Hmm. Both of these ideas seem very wrong (though Kant’s, perhaps, more so). Is there anything else of value? If this (and similar things) are all that there is, then maybe rationalists are right to mostly ignore Kant…

comment by LoganStrohl (BrienneYudkowsky) · 2021-10-17T04:29:23.118Z · LW(p) · GW(p)

I've yet to see a readable explanation of what Kant had to say (in response to Hume or otherwise) that's particularly worth paying attention to

As an undergrad, instead of following the actual instructions and writing a proper paper on Kant, I thought it would be more interesting and valuable to simply attempt to paraphrase what he actually said, paragraph by paragraph. It's the work of a young person with little experience in either philosophy or writing, but it certainly seems to have had a pretty big influence on my thinking over the past ten years, and I got an A. So, mostly for your entertainment, I present to you "Kant in [really not nearly as plain as I thought at the time] English". (It's just the bit on apperception.)

Replies from: FeepingCreature, SaidAchmiz
comment by FeepingCreature · 2021-10-17T20:02:24.057Z · LW(p) · GW(p)

I think this is either basic psychology or wrong.¹

For one, Kant seems to be conflating the operation of a concept with its perception:

Since the concept of “unity” must exist for there to be combination (or “conjunction”) in the first place, unity can’t come from combination itself. The whole-ness of unified things must be a product of something beyond combination.

This seems to say that the brain cannot unify things unless it has a concept of combination. However, just as an example, reinforcement learning in AI shows this to be false: unification can happen as a mechanistic consequence of the medium in which experiences are embedded, and an understanding of unification - a perception as a concept - is wholly unnecessary.

Then okay, concepts are generalizations (compressions?) of sense data, and there's an implied world of which we become cognizant by assuming that the inner structure matches the outer structure. So far, so Simple Idea Of Truth. But then he does the same thing again with "unity", where he assumes that persistent identity-perception is necessary for judgment. Which I think any consideration of a nematode would disprove: judgment can also happen mechanistically.

I mean, I don't believe that the self is objectively unified, so Kant's view would be a problem for me. But I also just think that the model where most mental capabilities are caused by aspects of the medium and nonreflective computation, and consciousness only reacts to them in "hindsight", seems a lot more convincing to me in light of my pop understanding of neuroscience and introspection of my own mind.

edit: In summary, I think Kant's model cannot separate, and thus repeatedly mixes up, cognition and consciousness.

¹ Okay, let me be fair: I think this is spectacularly correct and insightful for its time. But I don't think people who have read the Sequences will get closer to the truth from it.

comment by Said Achmiz (SaidAchmiz) · 2021-10-17T04:31:40.988Z · LW(p) · GW(p)

Interesting, thanks—I will read this ASAP!

Replies from: shminux
comment by Shmi (shminux) · 2021-10-17T07:09:17.345Z · LW(p) · GW(p)

If you manage to get through that, maybe you can summarize it? Even Logan's accessible explanation makes my eyes glaze over.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-10-17T14:04:14.049Z · LW(p) · GW(p)

I got through that page and… no, I really can’t summarize it. I don’t really have any idea what Kant is supposed to have been saying, or why he said any of those things, or the significance of any of it…

I’m afraid I remain as perplexed as ever.

comment by Chris_Leong · 2021-10-17T12:04:57.480Z · LW(p) · GW(p)

I appreciate Kant's idea that certain things may arise from how we see and interpret the world. I think it's plausible that this is an accurate high-level description of things like counterfactuals and probability.

(I'm a bit busy atm so I haven't provided much detail)

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2021-10-18T03:40:31.317Z · LW(p) · GW(p)

'how-level' would be easier to parse

Replies from: Chris_Leong, Chris_Leong
comment by Chris_Leong · 2021-10-19T12:13:06.684Z · LW(p) · GW(p)

Oops, I meant "high-level"

comment by Chris_Leong · 2021-10-19T02:21:36.424Z · LW(p) · GW(p)

"How-level"?

Replies from: Taran
comment by Taran · 2021-10-19T08:51:09.042Z · LW(p) · GW(p)

They're suggesting that you should have written "...this is an accurate how-level description of things like..."  It's a minor point but I guess I agree.

comment by Zack_M_Davis · 2021-10-17T04:17:33.197Z · LW(p) · GW(p)

there's a big difference between saying it saves a few years vs. causes us to have a chance at all when we otherwise wouldn't. [...] it seems like most of the relevant ideas were already in the memespace

I was struck by the 4th edition of AI: A Modern Approach [LW · GW] quoting Norbert Wiener writing in 1960 (!), "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire."

It must not have seemed like a pressing issue in 1960, but Wiener noticed the problem! (And Yudkowsky didn't notice, at first [LW · GW].) How much better off are our analogues in the worlds where someone like Wiener (or, more ambitiously, Charles Babbage) did treat it as a pressing issue? How much measure do they have?

Replies from: Chris_Leong
comment by Chris_Leong · 2021-10-17T12:02:38.911Z · LW(p) · GW(p)

Yeah, it's quite plausible that it might have taken another decade (then again I don't know if Bostrom thought super-intelligence was possible before encountering Eliezer)

comment by Chris_Leong · 2021-10-17T12:22:02.547Z · LW(p) · GW(p)

I don't think so. I'm explicitly saying that talking about weird perceptions people might have, such as mental subprocess implantation, is better than the alternative; this is more likely to realize the benefits of neuro-atypicality, by allowing people to recognize when non-neurotypicals are having accurate perceptions, and reduce the risk of psychiatric hospitalization or other bad outcomes.


I guess my point was that a community that excludes anyone who has mental health issues would score well on your metric, while a community that is welcoming would score poorly.

I think when people are comparing philosophers they're usually trying to compare novel contributions the person made relative to what came before, not how much raw philosophical knowledge they possess.

Another possibility is that they might be comparing their ability to form correct philosophical opinions. This isn't the same as raw knowledge, but I suspect that our epistemic position makes it much easier, not only because of more information, but also because modern philosophy tends to be much clearer and more explicit than older philosophy, so people can use it as an example to learn how to think clearly.

comment by ESRogs · 2021-10-17T09:13:46.304Z · LW(p) · GW(p)

Regarding secrecy, I'd prefer for AI groups to lean too much on the side of maintaining precautions about info-hazards than too much.

Was one of the much's in this sentence supposed to be a 'little'?

(My guess is that you meant to say that you want orgs to err on the side of being overly cautious rather than being overly reckless, but wanted to double-check.)

Replies from: Chris_Leong
comment by Chris_Leong · 2021-10-17T12:01:37.657Z · LW(p) · GW(p)

I'd prefer too much rather than too little.

comment by Dawn Drescher (Telofy) · 2021-11-11T00:19:52.588Z · LW(p) · GW(p)

I just want to send some sympathy your way. Everything you’ve gone through and all the self-doubt and everything else that I can’t put a name to must be very stressful and exhausting. Reading and responding to hundreds of comments, often very critical ones, is very exhausting too. And who knows what else is going on in your life at the same time. Yet your comments show none of the exhaustion that I would’ve felt in your situation.

I’d also like to second what Rafael [LW(p) · GW(p)] already said!

It seems it’s been a few weeks since most of these discussions happened, so I hope you’ve had a chance to relax and recover in the meantime, or I hope that you’ll have some very soon!

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-11-12T01:58:10.768Z · LW(p) · GW(p)

I appreciate that you're thinking about my well-being! While I found it stressful to post this and then read and respond to so many comments, I didn't have much else going on at the time so I did manage to rest a lot. I definitely feel better after having gotten this off my chest.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-11-12T05:33:15.235Z · LW(p) · GW(p)

I'm glad to hear that you had time to rest a lot while this thread was going on.

comment by ChristianKl · 2021-10-17T14:38:52.341Z · LW(p) · GW(p)

As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR

[...]

As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.

That sounds to me like you are saying that people who were talking about demons got marginalized. To me that's not a sign of MIRI/CFAR being culty, but what most people would expect from a group of rationalists. It might have been a wrong decision not to take people who talk about demons more seriously to address their issues, but it doesn't match the error type of what's culty.

If I'm misunderstanding what you are saying, can you clarify?

Replies from: Benquo, jessica.liu.taylor
comment by Benquo · 2021-10-17T15:08:27.620Z · LW(p) · GW(p)

There's an important problem here which Jessica described in some detail in a more grounded way than the "demons" frame:

As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors ("know-how") and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver's conscious goals. According to IFS-like psychological models, it's common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it's hard to rule out based on physics or psychology.

If we're confused about a problem like Friendly AI, it's preparadigmatic & therefore most people trying to talk about it are using words wrong [LW · GW]. Jessica is reporting a perverse optimization where people are penalized more for talking confusedly about important problems they're confused about, than for simply ignoring the problems. 

Replies from: Dacyn
comment by Dacyn · 2021-11-22T11:35:37.903Z · LW(p) · GW(p)

-"Jessica is reporting a perverse optimization where people are penalized more for talking confusedly about important problems they’re confused about, than for simply ignoring the problems."

I feel like "talking confusedly" here means "talking in a way that no one else can understand". If no one else can understand, they cannot give feedback on your ideas. That said, it is not clear that penalizing confused talk is a solution to this problem.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-11-22T20:51:27.708Z · LW(p) · GW(p)

At least some people were able to understand, though. This led to a sort of social division where some people were much more willing/able to talk about certain social phenomena than other people were.

comment by jessicata (jessica.liu.taylor) · 2021-10-17T15:05:07.203Z · LW(p) · GW(p)

It's not particularly a sign of being "culty" but my main point was that it worked out worse for the people involved, overall, so it doesn't make that much sense to think Leverage did worse overall with their mental health issues and weird metaphysics.

I do think that Bayesian virtue, taken to its logical conclusion, would consider these hypotheses to the point of thinking about whether they explain sensory data better than alternative hypotheses, and not reject them because they're badly-formalized and unproven at the start; there is an exploratory stage in generating new theories, where the initial explanations are usually wrong in important places, but can lead to more refined theories over time.

Replies from: Freyja
comment by Freyja · 2021-10-18T17:32:46.824Z · LW(p) · GW(p)

It seems like one of the problems with ‘the Leverage situation’ is that collectively, we don’t know how bad it was for people involved. There are many key Leverage figures who don’t seem to have gotten involved in these conversations (anonymously or not) or ever spoken publicly or in groups connected to this community about their experience. And, we have evidence that some of them have been hiding their post-Leverage experiences from each other.

So I think making the claim that the MIRI/CFAR related experiences were ‘worse’ because there exists evidence of psychiatric hospitalisation etc is wrong and premature.

And also? I’m sort of frustrated that you’re repeatedly saying that -right now-, when people are trying to encourage stories from a group of people who we might expect to have felt insecure, paranoid, and gaslit about whether anything bad ‘actually happened’ to them.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T18:41:19.611Z · LW(p) · GW(p)

It's a guess based on limited information, obviously. I tagged it as an inference. It's not just based on public information; it's also based on having talked with some ex-Leverage people. I don't like that you're considering it really important for ex-Leverage people to say things were "really bad" for them while discouraging me from saying how bad my own (and others') experiences were; that's optimizing for a predetermined conclusion in opposition to actually listening to people (which could reveal unexpected information). I'll revise my estimate if I get sufficient evidence in the other direction.

comment by AlexMennen · 2021-10-17T17:23:04.470Z · LW(p) · GW(p)

I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous

Does anyone actually believe and/or want to defend this? I have a strong intuition that public-facing discussion of AI timelines within the rationalist and AI alignment communities is highly unlikely to have a non-negligible effect on AI timelines, especially in comparison to the potential benefit it could have for the AI alignment community being better able to reason about something very relevant to the problem they are trying to solve. (Ditto for probably most but not all topics regarding AGI that people interested in AI alignment may be tempted to discuss publicly.)

Replies from: habryka4, Vaniver, steve2152
comment by habryka (habryka4) · 2021-10-17T18:04:09.240Z · LW(p) · GW(p)

I kind of believe this, but it's not a huge effect. I do think that the discussion around short timelines had some effect on the scaling laws research, which I think had some effect on OpenAI going pretty hard on aggressively scaling models, which accelerated progress by a decent amount.

My guess is the benefits of public discussion are still worth more, but given our very close proximity to some of the world's best AI labs, I do think the basic mechanism of action here is pretty plausible.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2021-10-17T18:48:12.294Z · LW(p) · GW(p)

Your comment makes sense to me as a consideration for someone writing on LW in 2017. It doesn't really make sense to me as a consideration for someone writing on LW in 2021. (The horse has left the barn.) Do you agree?

Replies from: habryka4, Zack_M_Davis
comment by habryka (habryka4) · 2021-10-17T19:04:37.870Z · LW(p) · GW(p)

No, I think the same mechanism of action is still pretty plausible, even in 2021 (attracting more researchers and encouraging more effort to go into blindly-scaling-type research), so I think additional research here could have similar effects. As Gwern has written about extensively, for some reason the vast majority of AI companies are still not taking the scaling hypothesis seriously, so there is lots of room for more AI companies going in on it. 

I also think there is a broader reference class of "having important ideas about how to build AGI" (of which the scaling hypothesis is one), that due to our proximity to top AI labs, does seem like it could have a decently sized effect. 

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2021-10-17T19:49:23.372Z · LW(p) · GW(p)

As in my comment [LW(p) · GW(p)], I think saying "Timelines are short because the path to AGI is (blah blah)" is potentially problematic in a way that saying "Timelines are short" is not. In particular, it's especially problematic (1) if "(blah blah)" is an obscure line of research, or (2) if "(blah blah)" is a well-known but not widely-accepted line of research (e.g. the scaling hypothesis) AND the post includes new concrete evidence or new good arguments in favor of it.

If neither of those is applicable, then I want to say there's really no problem. Like, if some AI Company Leader is not betting on the scaling hypothesis, not after GPT-2, not after GPT-3, not after everything that Gwern and OpenAI etc. have said about the topic … well, I have a hard time imagining that yet another LW post endorsing the scaling hypothesis would be what tips the balance for them.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-17T20:08:10.036Z · LW(p) · GW(p)

I have updated over the years on how many important people in AI read and follow LessWrong and the associated meme-space. I agree marginal discussion does not make a big difference. I also think overall all discussion still probably didn't make enough of a difference to make it net-negative, but it was substantial enough to cause me to think for quite a while on whether it was worth it overall. 

I agree with you that the future costs seem marginally lower, but not low enough to make me not think hard and want to encourage others to think hard about the tradeoff. My estimate of the tradeoff came out on the net-positive side, but I wouldn't think it would be crazy for someone's tradeoff to come out on the net-negative side.

comment by Zack_M_Davis · 2021-10-17T18:54:13.586Z · LW(p) · GW(p)

There could be more than one horse.

comment by Vaniver · 2021-10-17T21:10:05.557Z · LW(p) · GW(p)

Does anyone actually believe and/or want to defend this? 

I believe this. For example, one of my benign beliefs in ~2014 was "songs in frequency space are basically just images; you can probably do interesting things in the music space by just taking off-the-shelf image stuff (like style transfer) and doing it on songs."

The first paper doing something similar that I know of came out in 2018. If I had posted about it in 2014, would it have happened sooner? Maybe--I think there's a sort of weird thing going on in the music space where all the people with giant libraries of music want to maintain their relationships with the producers of music, and so there's not much value for them in doing research like this, and so there might be unusually little searching for fruit in that corner of the orchard. But also maybe my idea was bad, or wouldn't really help all of that much, or no one would have done it just because they read it. (I don't think that paper worked in wavelet space, but didn't look too closely.)
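A minimal sketch of the "spectrograms as images" idea (assuming the librosa library; the file path and parameters are placeholders, not anything from the 2018 paper):

```python
import numpy as np
import librosa

# Load an audio file (placeholder path) and take a short-time Fourier transform.
y, sr = librosa.load("song.wav", sr=22050)
spectrogram = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Convert to a log (dB) scale so the dynamic range looks more like a natural image.
log_spectrogram = librosa.amplitude_to_db(spectrogram, ref=np.max)

# log_spectrogram is a 2D array (frequency bins x time frames), so at this point
# it can be handed to off-the-shelf image machinery (style transfer, CNNs, etc.)
# like any grayscale image. Getting audio back out requires inverting the
# transform, e.g. with Griffin-Lim.
print(log_spectrogram.shape)
```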


I'm much less certain that the net effect is "you shouldn't talk about such things." The more important the consequences of sharing a belief seem to you ("oh, if you just put together X and Y you can build unsafe AGI"), the more important for your models that you're right ("oh, if that doesn't work I think we have five more years").

comment by Steven Byrnes (steve2152) · 2021-10-17T17:41:04.225Z · LW(p) · GW(p)

It's possible for someone to believe "Timelines are short because the path to AGI is (blah blah)", in which case they might hesitate to publicly justify their timelines, and this might indirectly bleed into a hesitation to bring it up in the first place. I agree that merely stating a belief about timelines publicly on LW, per se, seems pretty harmless right now, unless there's something I'm not thinking of.

(Update: if you're a famous AI person or politician publishing a high-profile op-ed that it's feasible for a focused project to make AGI today, that would be a bit different, that would require some thought about whether you're contributing to a worldwide competitive sprint to AGI. But a LW post today wouldn't move the needle on that, I think.)

Replies from: AlexMennen
comment by AlexMennen · 2021-10-17T19:51:34.266Z · LW(p) · GW(p)

Timelines are short because the path to AGI is (blah blah)

This requires a high degree of precision about your knowledge of the path to AGI, which makes it seem not that plausible, unless timelines are very short no matter what you say because others will stumble their way through the path you've identified soon anyway.

comment by Adam Zerner (adamzerner) · 2021-10-17T22:12:58.905Z · LW(p) · GW(p)

Some of the mental health issues seem like they might be due to individual people not acting as appropriately as they should, but a lot of it seems to me to be due to the inherent stresses of trying to save the world. And if this is indeed the case, then we should probably have some sort of system in place, or training, to prepare people for these psychological stresses before they dive in.

I started musing on this idea earlier in Preparing For Ambition [LW · GW]. In that post I focused on my anxiety as a startup founder, but I think it applies to various fields. For example, recently I came across the following excerpt from My Emotions as CEO:

I felt lonely every day – maybe not constantly, but definitely every day for 9+ years. I haven’t talked to a CEO who didn’t feel extreme loneliness. For the first time in my life I didn’t feel like I could be friends, even work friends, with anyone else on the team. That might have been my own baggage or a consequence of struggling to bring my whole self to work. The loneliness driver I’ve heard of most from other CEOs is the inability to talk with people about the emotional rollercoaster that’s inherent to the role.

It seems like in being a CEO, there are certain emotions that are extremely common, almost unavoidable. So then, it'd probably be a good idea to do what you can to prepare for these emotions beforehand. Maybe you can't avoid them entirely, but I'd think that in preparing beforehand, you'd be able to mitigate them a good amount. Sorta like learning to fall before learning to run, maybe.

comment by cousin_it · 2021-10-17T12:54:36.296Z · LW(p) · GW(p)

Maybe at Google or some other corporation you'd have a more pleasant time, because many employees view it as "just putting food on the table", which stabilizes things. It has some bureaucratic and Machiavellian stuff for sure, but to me it feels less psychologically pressuring than having everything be about the mission all the time.

Just for disclosure, I was a MIRI research associate for a short time, long ago, remotely, and the experience mostly just passed me by. I only remember lots of email threads about AI strategy, nothing about psychology. There was some talk about having secret research, but when joining I said that I wouldn't work on anything secret, so all my math / decision theory stuff is public on LW.

comment by Avi (Avi Weiss) · 2021-10-22T07:47:09.791Z · LW(p) · GW(p)

FYI - Geoff will be talking about the history of Leverage and related topics on Twitch tomorrow (Saturday, October 23rd 2021) starting at 10am PT (USA West Coast Time). Apparently Anna Salamon will be joining the discussion as well.

Geoff's Tweet

Text from the Tweet (for those who don't use Twitter):

"Hey folks — I'm going live on Twitch, starting this Saturday. Join me, 10am-1pm PT:
twitch.tv/geoffanders
This first stream will be on the topic of the history of my research institute, Leverage Research, and the Rationality community, with @AnnaWSalamon as a guest."

comment by bn22 · 2021-10-17T18:01:29.752Z · LW(p) · GW(p)

None of the arguments in this post seem as if they actually indict anything about MIRI or CFAR. The first claim, that CFAR/MIRI somehow motivated 4 suicides, provides no evidence that CFAR is unique in this regard or conducive to this kind of outcome, and it seems like a bizarre framing of events, considering that stories about things like someone committing suicide out of suspicion over the post office's nefarious agenda generally aren't seen as an issue on the part of the postal service.

Additionally, the focus on Roko's-Basilisk-esque "info hazards" as a part of MIRI/CFAR reduces the credibility of this point, seeing as the original basilisk thought experiment was invented as a criticism of SIAI, and according to every LDT the basilisk has no incentive to actually carry out any threats. The second part is even weaker: it essentially posits a non-argument for how the formation of a conspiracy mindset would be a foreseeable hazard of one's coworkers disagreeing with them on something important for possibly malevolent reasons and there being secrecy in a workplace. The point that someone other than CFAR calling the police on CFAR-opposed people who were doing something illegal to them was evidence of authoritarianism on the part of CFAR and the broader rationality community is, charitably speaking, a bizarre argument to stake a claim on.

The section on world-saving/scarcity narratives again provides no counterarguments to the case for why MIRI would be right to consider alignment especially important, nor any evidence for how this sense of importance is on aggregate especially different from that of people who enjoy recycling or are seriously concerned about anthropogenic global warming.

The evidence presented for the existence of a scarcity narrative is similarly weak: it essentially amounts to a statement that the author imagined that the people around them would incorrectly disagree about how good a philosopher someone was, plus the assertion, for no given reason, that the works of Kant, as opposed to say Dan Brown, are systematically overlooked and extremely important for AI alignment.

There are other issues I have with this post, such as the argument that an organization's members being motivated by mental models of their leadership is evidence of it being cult-like, which is weak because it applies to almost every organization that exists, but I don't feel like writing more about this subject than I already have.

comment by Jarred Filmer (4thWayWastrel) · 2021-10-17T22:16:28.139Z · LW(p) · GW(p)

Agree or disagree: "There may be a pattern wherein rationalist types form an insular group to create and apply novel theories of cognition to themselves, and it gets really weird and intense leading to a rash of psychological breaks."

Replies from: Viliam
comment by Viliam · 2021-10-18T15:33:30.031Z · LW(p) · GW(p)

Is "rationalist types" an euphemism for aspergers? In that case, "aspergers creating a new theory of cognition, applying it on themselves, and only getting feedback from other aspergers studying the same theory" sounds like something that could easily spiral out of control.

comment by mukashi (adrian-arellano-davin) · 2021-10-18T02:45:59.691Z · LW(p) · GW(p)

As someone who is pretty much an outsider to this community, I think it is interesting that a major drive for many people here seems to be tackling the most important problems in the world. I am not saying it is a bad thing, I am just surprised. In my case, I work in academia not so much because of the impact I can have working here, but mainly because it allows me to have a more balanced life with a flexible time schedule.

Replies from: Linch
comment by Linch · 2021-10-22T04:42:32.265Z · LW(p) · GW(p)

I'm actually pretty surprised by this; the people I personally know in academia who aren't community members tend to a) be true believers about their impact, b) really love the problems they work on or their subfields, or c) feel kind of burned. Liking academia for work-life balance reasons seems very surprising to me; even my friends in fields with a fair amount of free time (e.g. theoretical CS) usually believe that they could have an easier life elsewhere.

comment by vV_Vv · 2021-10-19T12:21:48.110Z · LW(p) · GW(p)

Dusting off this old account of mine just to say I told you so [LW(p) · GW(p)].

 

Now, some snark:

"Leverage is a cult!"

"No, MIRI/CFAR is a cult!"

"No, the Vassarites are a cult!"

"No, the Zizians are a cult!"

Scott: if you believe that people have auras that can implant demons into your mind then you're clearly insane and you should seek medical help.

Also Scott: beware this charismatic Vassar guy, he can give you psychosis!

Scott 2015: Universal love, said the cactus person

Scott 2016: uncritically signal-boosts Aella talking about her inordinate drug use.

Scott 2018: promotes a scamcoin by Aella and Vinay Gupta, a differently sane tech entrepreneur-cum-spiritual guru, who apparently burned his brain during a “collaborative celebration” session.

Scott 2021: why do rationalists take so many psychedelic drugs? Must be Vassar's bad influence.

 

Btw, I hate to pick on Scott, since he's likely the sanest person in the whole community, but he's also one of the most influential people there, possibly even more so than Eliezer, so I find his lack of self-awareness disturbing.

That's all folks

Replies from: Dach, sil-ver
comment by Dach · 2021-10-19T13:17:56.743Z · LW(p) · GW(p)

Scott: if you believe that people have auras that can implant demons into your mind then you're clearly insane and you should seek medical help.

Also Scott: beware this charismatic Vassar guy, he can give you psychosis!

These so obviously aren't the same thing- what's your point here? If just general nonsense snark, I would be more inclined to appreciate it if it weren't masquerading as an actual argument.

People do not have auras that implant demons into your mind, and alleging so is... I wish I could be more measured somehow. But it's insane and you should probably seek medical help. On the other hand, people who are really charismatic can in fact manipulate others in really damaging ways, especially when combined with drugs etc. These are both simultaneously true, and their relationship is superficial.

Scott 2015: Universal love, said the cactus person

Scott 2016: acritically signal boosts Aella talking about her inordinate drug use.

Scott 2018: promotes a scamcoin by Aella and Vinay Gupta, a differently sane tech entrepreneur-cum-spiritual guru, who apparently burned his brain during a “collaborative celebration” session.

Personally, when I read the cactus person thing I thought it was a joke about how using drugs to seek "enlightenment" was dumb, and aside from that it was just entertainment? That Aella thing is a single link in a sea of 40 from 5 years ago, so I don't care. I don't know who Vinay Gupta is- from reading Scott's comments on that thread I get the impression he also didn't really know who he was?

I'll add a fourth silly piece of evidence to this list for laughs. In Unsong, the prominent villain known as the Drug Lord is evil and brainwashes people. Must be some sort of hidden message about Michael Vassar, huh? He warned us in advance!

comment by Rafael Harth (sil-ver) · 2021-10-19T13:04:13.714Z · LW(p) · GW(p)

I'm surprised that this comment is receiving positive reception. I don't know the author, but I strong-downvoted the comment as it seems like a sufficiently uncharitable (e.g., calling one mention in Scott's link posts "uncritically signal-boosting") and low-evidence (e.g., calling Luna a scam) take that it wouldn't be taken seriously if it were written by an unknown person. If there is something important to be said here, it could have been done much better.

Replies from: AnnaSalamon, Viliam
comment by AnnaSalamon · 2021-10-19T13:12:14.206Z · LW(p) · GW(p)

I enjoyed it (and upvoted) for humor plus IMO having a point. Humor is great after a thread this long.

Replies from: Ruby
comment by Ruby · 2021-10-19T15:39:37.579Z · LW(p) · GW(p)

I appreciate the pointing out of apparent inconsistency but feel the humor is kind of mean-spirited/attacky, which maybe we should have some amount of. I wouldn't want to see comments trending in this direction of snark too much.

I didn't vote either way.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-19T17:28:18.226Z · LW(p) · GW(p)

I was gonna weak-upvote because I enjoyed the sass, but then the number of false/misleading claims got too high for me and I downvoted. Scott practically has a sequence about why he's wary of psychedelics (and 'Universal Love' is sort of part of that sequence, riffing on the question of secret unverifiable revelations), and vV_Vv could have mentioned that!

comment by Viliam · 2021-10-19T17:41:11.896Z · LW(p) · GW(p)

I was like: "finally a snarky comment that I can upvote". :D

To be able to laugh at yourself and criticism of yourself is a mark of mental health. I am happy this community still has it. Especially in the context of discussing cultishness, suppression of criticism, mental health, etc.

What is true is already true; if we are hypocritical about something, I would prefer to be aware of it. Furthermore, I would prefer to be told so in a friendly way and here, rather than in a hostile way and somewhere else. Because sooner or later someone else will notice it, too.

To address specific points, well... cactus person is a fiction, not a biography. MIRI/CFAR is not a cult; some things were problematic, but they can talk about them openly and fix them. Leverage... needs a separate, longer conversation instead of a sidenote to a snarky comment; and frankly we do not have a lot of data about them, though that fact itself is also some kind of evidence. Vassarites are not a cult, but Michael is a person you should not introduce to people you care about. Zizians... I didn't even know they existed until I read this article; probably also not a cult, but a very unhealthy community nonetheless (for some reason they remind me of the "incel" subreddits, with all that self-reinforcing, all-encompassing negativity).

I believe that the more "mentally fragile" someone is, the easier it is to push them over the edge. Vassar seems to seek out fragile people. So rather than "avoid Vassar, he can give you psychosis", it is "if you are biologically prone to have a psychosis someday, avoid Vassar, he can trigger it, and he considers that the right thing to do". Otherwise, talk to him freely, you may find him impressive or boring; either way, say no to the drugs he will recommend to you.

The drugs seem to be in the Bay Area water supply (metaphorically or literally? no one really knows for sure), that is another reason to move somewhere else sooner rather than later. In Bay Area, you probably can't avoid meeting junkies every day, this shifts your "Overton window" -- you assume that if you take 100 times less drugs than them, you are okay; but you don't realize it is still 100 times more than the rest of the planet. Sometimes the easiest way to change yourself is to change your environment.

Replies from: sil-ver, vV_Vv, ioannes_shade, IlyaShpitser
comment by Rafael Harth (sil-ver) · 2021-10-19T18:38:13.235Z · LW(p) · GW(p)

To be able to laugh at yourself and criticism of yourself is a mark of mental health. I am happy this community still has it. Especially in the context of discussing cultishness, suppression of criticism, mental health, etc.

Yeah, but I don't see how you get from there to "therefore, we should invite/promote/incentivize unfair criticism". And we definitely don't do this in general, so there has to be something special about vV_Vv's comment. I guess it's probably the humor that I'm honestly not seeing in this case. The comment just seems straight-forwardly spiteful to me.

Replies from: Viliam
comment by Viliam · 2021-10-19T19:49:11.812Z · LW(p) · GW(p)

Yes, humor makes the difference between "unfair" and "hyperbolic". (Or the hyperbole makes the humor. Uhhh... explaining humor isn't my forte.)

However, countersignaling is risky [LW · GW], and your reaction is an evidence for that.

Also, there is a chance that my perception is wrong. I made my decision unconsciously; my System 1 decided so for reasons not completely transparent to me, and then it took some effort to also see it from the opposite perspective. (I suppose the line "now, some snark" is something that an actually hostile person would not write; they would just do it, without labeling it as such.)

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-10-19T21:32:02.908Z · LW(p) · GW(p)

That's actually very interesting. Up until this reply I didn't realize that you would go as far as to call it counter-signaling, implying that the intent of the comment wasn't to be mean (?). I assumed that your model was "the person was mean but also funny, and that makes it ok". (When you said it's important to be able to laugh at oneself, did you mean to say that vV_Vv's comment was doing that? That doesn't seem right given that they're not really a part of the community.)

I tend to think that I have unusually good sensors for whether someone did or didn't intend to be mean. (I think it's related to having status-regulating emotions.) Even after rereading the comment, I still get fairly strong "this person was trying to be mean" vibes.

In general, I think being easily offended correlates with not being successful in life, as does feeling status-regulating emotions. I have this pet theory that both are underrepresented among rationalists, which would make me an extreme exception. So maybe what was going on is that most people read it and misinterpreted it as counter-signaling, whereas a few unlucky easily-offended people like me interpreted it correctly. But it's also possible that I'm totally wrong.

Replies from: Viliam
comment by Viliam · 2021-10-20T16:00:14.174Z · LW(p) · GW(p)

Hah, seems like I was wrong after all! Wow, I am a little disappointed -- not for being wrong, but for the comment not being as funny as I believed it was. :( Because people should sometimes make jokes in situations like this, in my opinion.

comment by vV_Vv · 2021-10-20T15:41:47.645Z · LW(p) · GW(p)

My interpretation of the Cactus Person post is that it was a fictionalized account of personal experiences and an expression of frustration about not being able to gather any real knowledge out of them, which is therefore entertained as a reasonable hypothesis to have in the first place. If I'm mistaken then I apologize to Scott; however, the post is ambiguous enough that I'm likely not the only person to have interpreted it this way.

He also wrote one post about the early psychedelicists that ends with "There seems to me at least a moderate chance that [ psychedelics ] will make you more interesting without your consent – whether that is a good or a bad thing depends on exactly how interesting you want to be.", and he linked to Aella describing her massive LSD use, which he commented as "what happens when you take LSD once a week for a year?" (it should have been "what happens when this person takes LSD once a week for a year, don't try this at home, or you might end up in a padded cell or a coffin").

I've never interacted with the rationalist community IRL, and in fact for the last 5 or so years my exposure to them was mostly through SSC/ACX + the occasional tweet from rat-adjacent accounts that I follow, but my impression is that psychedelic drug use was rampant in the community, with leading figures, including Scott, either partaking themselves or at least knowing about it and condoning it as nothing more than an interesting quirk. Therefore, blaming it all on a single person sounds like scapegoating, which I thought was worth noting in a funny way.

As you say, psychedelics might be just a Bay Area thing, and maybe Vassar and his Vassarites were taking it to a different level compared to the rat/Bay Aryan baseline, I don't know them so it could be possible, in which case the finger pointing would make more sense. Still, whenever you have a formal or informal norm, you're going to have excesses at the tails of the distribution. If your norm is "no illegal drugs, only alcohol in moderation", the excesses will be some people who binge drink or smoke joints, if your norm is "psychedelics in moderation", the excesses will be people who fry their brains with LSD.

 

As for the cultish aspects, I get the impression that, while not overall a cult, the IRL rat community tends to naturally coalesce into very tightly-knit subcommunities of highly non-neurotypical (and possibly "mentally fragile") people who hang out together with few boundaries between workplace, cohabitation, friendship, dating, and "spiritual" mentorship, with a prevalence of questionable therapy/bonding practices ("debugging", "circling") and isolation from outsiders ("normies"). These subcommunities gravitate around charismatic individuals (e.g. Eliezer, Anna, Geoff, Vassar, Ziz) with very strong opinions that they argue forcefully, and who are regarded as infallible leaders by their followers. I don't know to what extent these leaders encourage this idolatry deliberately and to what extent they just find themselves in the eye of the storm, so to speak, but in any case, looking from the outside, whether you call it cultish or not, it doesn't look like a healthy social dynamic.

comment by ioannes (ioannes_shade) · 2021-10-19T18:11:25.444Z · LW(p) · GW(p)

The drugs seem to be in the Bay Area water supply (metaphorically or literally? no one really knows for sure), that is another reason to move somewhere else sooner rather than later. In Bay Area, you probably can't avoid meeting junkies every day, this shifts your "Overton window"

In a bunch of comments on this post, people are giving opinions about "drugs." I think this is the wrong level of abstraction, sorta like having an opinion about whether food is good or bad.

Different drugs have wildly different effect and risk profiles – it doesn't make sense to lump them all together into one category.

Replies from: vV_Vv, Viliam
comment by vV_Vv · 2021-10-20T19:08:13.417Z · LW(p) · GW(p)

No offense, but the article you linked is quite terrible because it compares total deaths while completely disregarding the base rates of use. By the same logic, cycling is more dangerous than base jumping.

This said, yes, some drugs are more dangerous than others, but good policies need to be simple, unambiguous and easy to enforce. A policy of "no illegal drugs" satisfies these criteria, while a policy of "do your own research and use your own judgment" in practice means "junkies welcome".

comment by Viliam · 2021-10-20T16:42:08.228Z · LW(p) · GW(p)

Technically, yes.

On the meta level, this "hey, not all drugs are bad, I can find some research online, and decide which ones are safe" way of thinking seems like what gave us the problem.

Replies from: ioannes_shade
comment by ioannes (ioannes_shade) · 2021-10-25T22:31:38.862Z · LW(p) · GW(p)

I think something like Jim's point [LW(p) · GW(p)] of overcorrecting from a coarse view of "all drugs are bad" to a coarse view of "hey, the authorities lied to us about drugs and they're probably okay to use casually" is closer to what gave us the problem.

comment by IlyaShpitser · 2021-10-21T02:03:33.506Z · LW(p) · GW(p)

"MIRI/CFAR is not a cult."

What does being a cult space monkey feel like from the inside?

This entire depressing thread is reminding me a little of how long it took folks who watch Rick and Morty to realize Rick is an awful abusive person, because he's the show's main character, and isn't "coded" as a villain.

Replies from: Benito, dxu
comment by Ben Pace (Benito) · 2021-10-22T19:07:38.490Z · LW(p) · GW(p)

Ilya, I respect your expertise in causal modeling, and I appreciate when you make contributions to the site sharing things you've learned and helping others see the parts of the world you understand, like this [LW(p) · GW(p)] and this [LW(p) · GW(p)] and this [LW(p) · GW(p)]. In contrast your last 5 comments on the site have net negative karma scores, getting into politics and substance-less snark on the community. A number of your comments are just short and snarky and political (example [LW(p) · GW(p)], example [LW(p) · GW(p)]) or gossiping about the site (many of your comments are just complaining about Eliezer and rationalists).

I'm pretty excited about your contributions to the site when they're substantive Ilya, but this comment is a warning that if you continue to have a string of net negative comments without contributing a lot of great stuff too, the most likely outcome is that I'll ban you.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-22T19:44:54.432Z · LW(p) · GW(p)

1. Thank you for all your effort to make LW valuable.

2. I think there's something pretty valuable about this particular comment of Ilya's. I'm not tracking all the tradeoffs, thinking through what it would be like if every comment this rude and judgemental were allowed, etc.; so I'll just try to say what I think is valuable about it, without trying to make an overall judgement. It's something like, from inside [the thing Ilya is calling a cult, insofar as it's a thing at all], we're at risk of feedback loopy dynamics. For example, (mis)information cascades, where people keep updating on each other's judgements (which are only discretized summaries of previous evidence), rather than updating on each other's observations exactly once (which would be harder). For example, narrative pyramid schemes, where stories about where a group will put its effort gain political capital in a way disconnected from object-level evaluations of consequences of plans. For example, fear of retribution materializing out of nothing, by people seeing each other act as though they're afraid of retribution and inferring that they themselves have something to fear.

So, these feedback dynamics are bad, and also very natural. It seems valuable to have some Ilyas, who are rightly viewed as having some weight by our own supposed values, and who will break all frames of political "respect". Frames of political respect sometimes become mechanisms for propagating pyramid schemes, and sometimes cause people to infer that those around them are deferring out of fear and so the leaders are to be feared rather than reasoned about. So, political frames contribute to mostly bad feedback dynamics, and Ilyas break political frames.

Speaking more phenomenologically and less theoretically: sometimes an Ilya says something that gives me a "jolt", and then I seem to suddenly have more access to peeking behind certain things, or to being able to occupy, maybe temporarily, an outlook that's like "a different Normal". And this feels basically good to me; it seems less like I'm being tugged around, and more like I've jumped to another spot and now I can get triangulation / parallax on more things.

Replies from: dxu, Benito
comment by dxu · 2021-10-22T20:47:18.662Z · LW(p) · GW(p)

Strong-upvoted for raising this consideration to mind; this is exactly the sort of object-level analysis I hoped to see more of on this topic.

My sense here is that the thing you're talking about is quite valuable; I think the main source of divergence is likely to be located in considerations like: (1) how effectively comments like Ilya's actually provide [the thing], and (2) how possible it is to get [the thing] without the corresponding negatives associated with comments like Ilya's. I recognize that these points may be part of the "tradeoffs" you explicitly said you weren't trying to track in your comment, so I don't intend this as a request that you pivot to discussing said points (though naturally I would be thrilled if somebody did).

The thing that comes to mind, that might get us a bit of insight into (1) and (2) simultaneously, is in attempting to craft a comment that serves a similar purpose as that described in your comment, preserving the positives while (ideally) mitigating the negatives. I admit to not having a strong sense of how to create the "jolt" effect you describe, but I'll start anyway. (I have a strong expectation that whatever I come up with be imperfect along many axes, so I welcome feedback.)

For reference, Ilya's original comment:

"MIRI/CFAR is not a cult."

What does being a cult space monkey feel like from the inside?

This entire depressing thread is reminding me a little of how long it took folks who watch Rick and Morty to realize Rick is an awful abusive person, because he's the show's main character, and isn't "coded" as a villain.

My model is that there are two components here that are critical to creating a sufficiently sharp "jolt": the short, pithy nature of the comment, and the obvious way in which it disregards what you (TekhneMakre) characterized as "political frames of respect". I think these are obviously tied to, but not identical with, the norm-violating aspects of the comment that I perceived as problematic; here is my attempt at a similar comment that preserves both components without the norm-violating aspects:

"MIRI/CFAR is not a cult."

Saying this does not make it true.

I think this comment scores approximately as well as Ilya's on the "ignoring political frames" axis, and actually scores quite a bit better on the "short and pithy" axis, all while being significantly less norm-violating than his original comment.

(To be clear, I would still expect such a comment to receive negative karma, but I would expect it to receive substantially less negative karma, and also would likely not have prompted me to initiate a discussion of potential moderator action. (Though that second part would also depend heavily on whether Ilya's counterfactual comment history were substantially different, since my decision to call him out was based [in part] on multiple past observed norm violations.))

I'd be interested in learning whether my version of the comment hits the same note for you (or other users who shared your original sense that Ilya's comment was doing something valuable). I think there's strong potential for updates here, especially since (as I mentioned) I don't have a strong model of [the thing].


(Note that this exercise is not intended as a suggestion that Ilya or likeminded commenters try to post fewer things like Ilya's original comment, and more things like mine. Not only do I expect such a suggestion to be ineffective; my model of Ilya is that he is unlikely to be moved by the concerns outlined here, because I expect that [the thing] TekhneMakre was pointing to was not in fact a deliberate aim of Ilya's comment--to the extent that it produced a positive reaction from at least some users, I expect that to be largely coincidental. The exercise is really not about Ilya, and much more about the [potentially valuable] reactions that his comment produced, intentional or otherwise.)

Replies from: philh
comment by philh · 2021-10-24T19:08:27.877Z · LW(p) · GW(p)

I think your suggestion doesn't work as well.

That phrase annoys me in general. "Saying this does not make it true" can be a reply to just about any descriptive claim someone can make? Taken at face value, it suggests the previous speaker was being dumb in a weirdly specific way that, in this case (and IME usually when the phrase is used), we have no reason to think they were. Like, Villiam might be wrong about whether MIRI/CFAR is a cult, but he's probably not wrong because he thinks saying they're not a cult makes them not a cult.

My sense is that it's mostly used to just mean "[citation needed]" but it comes across more condescending than that?

But "[citation needed]" isn't as good either, because Ilya's original comment was pointing at a specific possible failure mode, that I'd put in long form as something like: "people who are in cults think they're not in cults too, so this is the kind of thing we should be especially careful about not simply believing on the strength of the normal sorts of evidence that make people believe things". I do think this is in general good to remember. (But just because Villiam didn't specifically acknowledge it doesn't mean he'd forgotten it. Also, some people might go full "well I guess I'll never know if my monthly casual D&D meet is a cult, it doesn't feel like one but" and that would be a mistake.)

To compress again, my suggested replacement for Ilya's comment would be simply: "what does being in a cult feel like from the inside?" Which, yeah, I still wouldn't like as a comment, it's still dismissive and I think not very insightful. But I think it's at least less aggressive, and still gets across what value I think is there.

(Possibly relevant: I don't recognize the term "space monkey" and don't know what it means either denotatively or connotatively, except that the connotation is clearly negative. Something drug related?)

Replies from: clone of saturn
comment by clone of saturn · 2021-10-25T09:58:48.888Z · LW(p) · GW(p)

(Possibly relevant: I don’t recognize the term “space monkey” and don’t know what it means either denotatively or connotatively, except that the connotation is clearly negative. Something drug related?)

I would guess it's a reference to the movie Fight Club.

comment by Ben Pace (Benito) · 2021-10-22T21:55:03.455Z · LW(p) · GW(p)

Thanks for this comment, upvote. Currently just writing a short comment, with some hope to reply more this weekend. Agree there's strong positive from having commenters who will break all frames of "political respect".

comment by dxu · 2021-10-22T01:11:47.195Z · LW(p) · GW(p)

This comment brings to mind an interesting question, which is: to what lengths does a commenter have to go, to what extent do they have to make it clear that they are not interested in the least in contributing to productive discussion (and moreover are very interested in detracting from it), before the moderation team of LW decides to take coordinated action?

I ask, not as a thinly veiled attempt to suggest that Ilya be banned (though I will opine that, were he to be banned, he would not much be missed), but because his commenting pattern is the most obvious example I can think of in recent memory of something that is clearly against, not just the stated norms of LW, but the norms of any forum interested in anything like collaborative truthseeking. It is an invitation to turn the comments section into something like a factionalized battleground, something more closely resembling the current iteration of Reddit than any vision anyone might have of something better. The fact that these invitations have so far been ignored does not change the fact that that is clearly and obviously what they are.

So I think this is an excellent opportunity to inquire into LW moderation policy. If such things as Ilya's "contributions" to this thread are not considered worthy of moderator action, what factors might actually be sufficient to prompt such action? (This is not a rhetorical question.)

Replies from: Zack_M_Davis, hg00, Ruby
comment by Zack_M_Davis · 2021-10-22T02:21:12.696Z · LW(p) · GW(p)

were he to be banned, he would not much be missed

False!—I would miss him. I agree that comments like the grandparent are not great, but Ilya is a bona fide subject matter expert (his Ph.D. advisor was Judea Pearl), so when he contributes references [LW(p) · GW(p)] or explanations [LW(p) · GW(p)], that's really valuable. Why escalate to banning a user when individual bad comments can be safely downvoted to invisibility?

Replies from: dxu
comment by dxu · 2021-10-22T02:58:14.610Z · LW(p) · GW(p)

I'm aware of Ilya's subject matter expertise (as well as his connection to Pearl), yes. My decision to avoid mentioning said expertise was motivated in part by precisely a curiosity as to whether it would be brought up as a relevant factor in replies (for the record: I predicted that it would), and indeed it seems my prediction was borne out.

Now, recognizing that you (Zack) obviously don't speak for the moderation team, I'd nonetheless like to ask you (and any other bystanders or--indeed--moderators who might happen to be reading this): what role do you think things like subject matter expertise ought to play in deciding whether to evict a user from an online forum?

Note 1: Despite the somewhat snide-sounding tone of the above, I do intend my question as a genuine, non-rhetorical question; I am open to the answer being something other than "none whatsoever". I do think, however, that whatever the correct norm is here, it would benefit from being made common knowledge, even if that involves making somewhat ugly-sounding statements like "The LW moderation team will treat you differently if you are a subject matter expert compared to if you are not."

Note 2: I also don't mean to imply that, if Ilya were not a subject matter expert who occasionally contributes comments of real value, the comments he made here would in and of themselves be ban-worthy. This is the other part of the reason why I avoided talking about Ilya's credentials until it was brought up by someone else: I'm entirely open to the answer to my original question ("what does it take to get a LW mod to ban somebody") being something like, "We have a bunch of red lines, which have nothing to do with credentials, and also which Ilya's comments entirely fail to cross, such that our decision not to ban (or take any other administrative action against) him would hold even if Ilya was not a subject matter expert."

Note 3: Having said that, suppose it is the case that whether a commenter is ban-worthy is dependent, not purely on whether they cross some set of red lines, but on some kind of cost-benefit calculation. Then to what degree do a commenter's non-constructive (or outright destructive) comments have to outnumber their productive contributions before the scales are considered to have tipped? Looking at Ilya's recent comment history, the ratio of "useless" comments to "useful" comments seems quite heavily skewed in favor of "useless", and that's without counting the slew of comments he's left on this post. Is the argument here that "useless" comments are in some sense "okay", because they can all be "downvoted into invisibility", such that the implied ratio is actually infinite, i.e. someone can make as many terrible comments as they want, as long as they've made at least one positive contribution in the past? Or is it merely that the ratio is some really large number? Or something else entirely?

Note 4: Perhaps the ratio isn't the right way to think about it at all. Perhaps the idea is simply that banning a user from LW is a really serious thing to do (which somewhat lines up with Zack calling it an "escalation"), and each instance of a ban requires an in-depth discussion (cf. the decision to ban Brent's account), such that the effort involved isn't worth it unless the harms are really huge and obviously visible?

It's not clear to me what the right way is to think about this. What I do know is that the impulse which triggered my initial comment was a thought along the lines of "If this was my personal blog or Facebook wall, I would consider multiple comments as bad as Ilya's to be a ban-worthy offense." To the extent that LW moderation norms differ from those of a personal blog or Facebook wall (and again, I am entirely open to the idea that they do differ, for sensible, important reasons!), I think it's useful to have an open, transparent discussion of how, where, and why.

Replies from: Benito, Zack_M_Davis
comment by Ben Pace (Benito) · 2021-10-22T19:09:40.512Z · LW(p) · GW(p)

I do a cost-benefit calculation.

If someone's producing lots of great ideas and posts on the site, but they're sometimes aggressive or rude or spiky, then I will put in more effort to give them feedback and give them a lot more rope than if (on the other end of the spectrum) they're a first time poster. If Ilya's comment was an account's first comment, I'd ban the account and delete the comment. That sort of new user growth is bad for the site.

Responding to this situation in particular: I had the perception that Ilya had in the past contributed substantially to the site (in large part on the topic of causal modeling), and have (in my head) been giving him leeway for that. Also, I met him once at a LessWrong thing in Cambridge, UK when I was 16 and he was friendly, and that gave me a sense he would be open to conversation and feedback if it came. That said, looking over his past comments, the ratio of heat to light was much worse than I expected (lots more random unpleasant and rude comments and way fewer substantive contributions), so I am a bit surprised.

I've now given Ilya a warning upthread [LW(p) · GW(p)].

Replies from: dxu
comment by dxu · 2021-10-22T19:19:13.842Z · LW(p) · GW(p)

Thanks for replying; strong-upvote for displaying transparency.

comment by Zack_M_Davis · 2021-10-22T03:49:21.575Z · LW(p) · GW(p)

Personally, I lean laissez-faire on moderation: I consider banning a non-spam user from the whole website to be quite serious, and I think the karma system makes a decently large (but definitely not infinite!) ratio of useless-to-useful comments acceptable. Separately from that, I admit that applying different rules to celebrities would be pretty unprincipled, but I fear that my gut feeling actually is leaning that way.

comment by hg00 · 2021-10-22T04:11:40.666Z · LW(p) · GW(p)

It is an invitation to turn the comments section into something like a factionalized battleground

If you want to avoid letting a comments section descend into a factionalized battleground, you also might want to avoid saying that people "would not much be missed" if they are banned. From my perspective, you're now at about Ilya's level, but with a lot more words (and a lot more people in your faction).

Replies from: dxu
comment by dxu · 2021-10-22T04:48:56.925Z · LW(p) · GW(p)

From my perspective, the commenters here have, with very few exceptions, performed admirably at not turning this thread into a factionalized battleground. (Note that my use of "admirably" here is only in relation to my already-high expectations for LW users; in the context of the broader Internet a more proper adverb might be "incredibly".) You may note, for example, that prior to my comment, Ilya's comment had not received a single response, indicating that no one found his bait worth biting on. Given this, I was (and remain) quite confident that my statement that Ilya "would not much be missed" would not have the factionalizing effect you imply it might have had; and indeed the resulting comments would seem to favor my prediction over yours.

Furthermore, since this observation is something you cannot possibly have missed prior to writing your comment, it seems to me quite likely that you wrote what you did for rhetorical effect; but I confess myself unclear on what, precisely, you intended to suggest with your rhetorical approach here. A surface-level reading would seem to suggest an interpretation along the lines that what I wrote and what Ilya wrote were equally bad; is this in fact what you meant? If so, I find that claim... implausible, to say the least; you may note that nowhere in my comment, for example, did I refer to Ilya as a "cult space monkey", or attempt to draw conclusions about his character based on a televised American cartoon.

To the extent that you mean to suggest that the contents of these two comments are literally equivalent, I would submit that you need to provide (much) more argument in favor of that conclusion than you did. To the extent that you meant to equate them not in degree, but in kind... well, I suppose I can grant that to some extent; certainly I did not intend my statement that Ilya "would not much be missed" in a friendly way. But even if that's so, I think you can agree that I'm being stunningly generous with this interpretation; without the benefit of such generosity I think it's fair to say that your comment erases such distinctions quite badly. (This tendency to erase distinctions is a pattern I have observed from you in other threads as well, to be clear; though in your case I didn't think your engagement style was quite bad enough to be worth calling out explicitly, at least until you basically elicited it with your reply here.)

Replies from: dxu, hg00
comment by dxu · 2021-10-22T05:47:56.563Z · LW(p) · GW(p)

Re-reading my comments in this thread, I think there's a topic that's worth treating more deeply here, without that treatment being contained within (and fettered to) a confrontational context. To be clear, I still endorse my above reply to hg00, whose comment I continue to think was bad and deserved to be called out--but I also feel there's an analysis here that can't be conducted while simultaneously responding to someone else's (conflict-oriented) comment.

I'll start with this bit in particular, since I suspect this is the part that hg00 (and others who share their concerns) would consider most directly relevant:

I ask, not as a thinly veiled attempt to suggest that Ilya be banned (though I will opine that, were he to be banned, he would not much be missed)

I think it's entirely fair to say that the inclusion of the parenthetical clause was unnecessary, in the sense that my point could have been made just as well without it, and (as long as we're on the topic) it was moreover likely a slight-to-moderate impediment to the advancement of my broader goal (initiating a discussion of LW's moderation policy), since it diverts the readers' mental cycles in an unproductive direction. It's also fair to say that, at the time of writing my initial comment, such considerations largely did not factor into my decision to include said clause.

What did factor into my decision? I think there's a part I endorse and a part I don't (which is why, on the whole, I don't think I can say I fully regret writing what I did). The part I don't endorse is pretty simple, so I'll start with that: it was a sense of tit-for-tat, of defecting-to-punish-defection, where "defection" in this case is intended to indicate [something like] making an obviously adversarial remark with no purpose other than to be adversarial. I don't endorse this because it's negative-sum: repeated iterations of this action burn the commons, without trading it for anything I'd consider worthwhile.

The part I do endorse, on the other hand, is something like... I'd call it "stating the obvious"? "Identifying what others refuse to identify"? I don't quite like that second phrase, because it makes the whole thing sound weirdly heroic and messianic, in a way that I really don't think it is; my view is that I'd like this behavior to become more common, and that second phrase kind of construes it as the exact opposite of that. But still, I think it captures something important, which is... like...

What does being a cult space monkey feel like from the inside?

This entire depressing thread is reminding me a little of how long it took folks who watch Rick and Morty to realize Rick is an awful abusive person, because he's the show's main character, and isn't "coded" as a villain.

I think this comment is terrible. Full-stop. Like, it seems uniquely terrible to me, in a way that the supermajority of comments on LessWrong are not. There's no attempt at all to disguise this as something resembling productive criticism; it transparently and nakedly presents itself as exactly what it is: a series of ad hominem attacks with no merit whatsoever. I think, given that I'm going to talk about this at all, it would feel almost... dishonest? ... to not include a part somewhere that just outright states, "Yes, this is terrible. It's not just your imagination; I'm not going to dance around it or awkwardly imply that I dislike it less than I do; it is simply and straightforwardly terrible, and I would not miss it if it were gone."

The alternative, it seems to me, is that some minority of commenters (including Ilya) continue to post terrible comments, and somehow despite how manifestly terrible they are it never becomes common knowledge how terrible they are (because no one ever outright says it--just downvotes and moves on, or worse, replies politely and inquisitively and in a way that never at all suggests that posting comments like this on the regular isn't okay), and it just keeps happening over and over, death-by-a-thousand-cuts style, and meanwhile I'm standing here on the sidelines shouting HEY Y'ALL WHAT THE FUCK ARE YOU DOING-- [LW · GW]

Anyway. I don't regret that part. I think that if a commenter (especially a well-respected one! especially a credentialed one, especially one with "celebrity" status) starts to post comments that, if they came from a new account with zero karma, would get them a moderator warning almost immediately, and somehow manage to continue doing so for years on end without so much as a single comment asking "hey what's going on here is this okay?", it is absolutely predictable that there will be people looking at the situation and saying to themselves, "hmm, I wonder if that kind of behavior just... passes, around these parts?" And if someone (not necessarily me, I'd have been thrilled if it wasn't me) were to finally step in and call attention to the thing, and if in the process they included a rather impolitely worded remark to the effect that they "wouldn't miss you if you were gone"... I can't bring myself to entirely disendorse that behavior.

Replies from: Richard_Kennaway, dxu
comment by Richard_Kennaway · 2021-10-22T08:43:08.797Z · LW(p) · GW(p)

I'm standing here on the sidelines shouting HEY Y'ALL WHAT THE FUCK ARE YOU DOING--

Curiously, "HEY Y'ALL WHAT THE FUCK ARE YOU DOING" is how I read Ilya's comment.

I'm not interested in the c-word, but the more this goes on, the more wary I am of having anything to do with MIRI, CFAR, Leverage 2.0, and any related organisations, as well as some of the individuals spoken of. Not that I ever have done, but until now that was only because I'm on another continent, and I don't do community anyway.

comment by dxu · 2021-10-22T05:57:20.796Z · LW(p) · GW(p)

And I think it's also reasonable, given the above context, to squint suspiciously at anyone who looks at the two initial comments in question in sequence, and then says something like this:

From my perspective, you're now at about Ilya's level, but with a lot more words

(I already took this sentiment apart in the grandparent, of course, but the additional context should make it clear why I laser-focused on that part of the comment. And of course, it's also helpful to have the same sentiment expressed without the cloaking of a snide, adversarial framing.)

comment by hg00 · 2021-10-22T06:25:34.239Z · LW(p) · GW(p)

It's not obvious to me that Ilya meant his comment as aggressively as you took it. We're all primates and it can be useful to be reminded of that, even if we're primates that go to space sometimes. Asking yourself "would I be responding similar to how I'm responding now if I was, in fact, in a cult" seems potentially useful. It's also worth remembering that people coded as good aren't always good.

Your comment was less crass than Ilya's, but it felt like you were slipping "we all agree my opponent is a clear norm violator" into a larger argument without providing any supporting evidence. I was triggered by a perception of manipulativeness and aggressive conformism, which put me in a more factionalistic mindset.

Replies from: dxu
comment by dxu · 2021-10-22T07:23:44.795Z · LW(p) · GW(p)

So, there are a number of things I want to say to this. It might first be meaningful to establish the following, however:

Asking yourself "would I be responding similar to how I'm responding now if I was, in fact, in a cult" seems potentially useful.

I don't think I'm in a cult. (Separately, I don't think the MIRI/CFAR associated social circle is a cult.)

The reason I include the qualifier "separately" is because, in my case, these are very much two separate claims: I do not live in the Bay Area or any other rationalist community "hot spot", I have had (to my knowledge) no physical contact with any member of the rationalist community, "core" or otherwise, and the surrounding social fabric I'm embedded in is about as far from cult-like as you can get. So even if MIRI/CFAR were a cult--that is to say, even if the second of my claims were false--they could not have transmitted their "cultishness" to me except by means of writing stuff on the Internet... and at that point I very much dispute that "cultishness" is even the right framing to be using.

(Yes, memes are proof that ideas can propagate without a supporting social fabric. However, I have seen little evidence that the idea cluster associated with MIRI is particularly "memetically fit", except in the entirely ordinary sense that the ideas they peddle seemingly make sense to quite a lot of people who aren't physically part of the rationalist community--which you would also observe if they were just, y'know, true.)

There's more I want to say about your framing; I think it misses the mark in several other ways, the most prominent of which is the amount of emphasis you give to the meta level as opposed to the object level [LW(p) · GW(p)] (and in fact the comment I just linked is also downthread of a reply to you). But I think it's best to circumscribe different topics of discussion to different threads, so if you have anything to say about that topic, I'd ask that you reply to the linked comment instead of this one.

As far as the topic of this comment thread is concerned... no, I don't think your impression was mistaken. That is to say, the thing you sensed from me, which you described as

"we all agree my opponent is a clear norm violator"

is something I intended to convey with my comment... well, not really the part about Ilya being my "opponent" (and I'm also not sure what you mean by "slipping [it] into a larger argument", mind you)--but the part about norm violations is absolutely a correct reading of my intentions. I think (and continue to think) that Ilya's comment was in blatant violation of a bunch of norms, and I maintain that calling it out was the right thing to do [LW(p) · GW(p)]. There is no plausible interpretation I can imagine under which calling somebody a "cult space monkey" can remotely be construed as non-norm-violating; to the extent that you disagree with this, I think you are simply, straightforwardly incorrect.

(It sounds like you view statements like the above as an expression of "aggressive conformism". I could go on about how I disagree with that, but instead I'll simply note that under a slight swap of priors, one could easily make the argument that it was the original comment by Ilya that's an example of "aggressive conformism". And yet I note that for some reason your perception of aggressive conformism was only triggered in response to a comment attacking a position with which you happen to agree, rather than by the initial comment itself. I think it's quite fair to call this a worrisome flag--by your own standards, no less.)

To be absolutely clear: it sounds as though you are under the impression that I criticized Ilya's comment because he called MIRI/CFAR a cult, and since I disagreed with that, I tried to label him a "norm violator" in order to invalidate his assertion. (This would make sense of your use of the word "opponent", and also nails down the "larger argument" I presume you presumed I was insinuating.) This is not the case. I criticized Ilya's comment because (not to put too fine a point on it) it was a fucking terrible comment, and because I don't visit LW so I can see people compare each other to characters from Rick and Morty (or call people fascists [LW(p) · GW(p)], or accuse people of health fraud [LW(p) · GW(p)], or whatever the hell this is [LW(p) · GW(p)]). Contrary to what you may be inclined to think, not everyone here selectively levels criticism at things they disagree with.

Replies from: hg00
comment by hg00 · 2021-10-22T09:55:24.330Z · LW(p) · GW(p)

Separately, I don't think the MIRI/CFAR associated social circle is a cult.

Nor do I. (I've donated money to at least one of those organizations.) [Edit: I think they might be too tribal for their own good -- many groups are -- but the word "cult" seems too strong.]

I do think MIRI/CFAR is to some degree an "internet tribe". You've probably noticed that those can be pathological.

Anyway, you're writing a lot of words here. There's plenty of space to propose or cite a specific norm, explain why you think it's a generally good norm, and explain why Ilya violated it. I think if you did that, and left off the rest of the rhetoric, it would read as more transparent and less manipulative to me. A norm against "people [comparing] each other to characters from Rick and Morty" seems suspiciously specific to this case (and also not necessarily a great norm in general).

Basically I'm getting more of an "ostracize him!" vibe than a "how can we keep the garden clean?" vibe -- you were pretending to do the second one in your earlier comment, but I think the cursing here makes it clear that your true intention is more like the first. I don't like mob justice, even if the person is guilty. (BTW, proposing specific norms also helps keep you honest, e.g. if your proposed norm was "don't be crass", cursing would violate that norm.)

(It sounds like you view statements like the above as an expression of "aggressive conformism". I could go on about how I disagree with that, but instead I'll simply note that under a slight swap of priors, one could easily make the argument that it was the original comment by Ilya that's an example of "aggressive conformism". And yet I note that for some reason your perception of aggressive conformism was only triggered in response to a comment attacking a position with which you happen to agree, rather than by the initial comment itself. I think it's quite fair to call this a worrisome flag--by your own standards, no less.)

Ilya's position is not one I agree with.

I'm annoyed by aggressive conformism wherever I see it. When it comes to MIRI/CFAR, my instinct is to defend them in venues where everyone criticizes them, and criticize them in venues where everyone defends them.

I'll let you have the last word in this thread. Hopefully that will cut down on unwanted meta-level discussion.

Replies from: dxu, dxu
comment by dxu · 2021-10-22T16:32:20.997Z · LW(p) · GW(p)

Basically I'm getting more of an "ostracize him!" vibe than a "how can we keep the garden clean?" vibe -- you were pretending to do the second one in your earlier comment, but I think the cursing here makes it clear that your true intention is more like the first.

I didn't respond to this earlier, but I think I'd also like to flag here that I don't appreciate this (inaccurate) attempt to impute intentions to me. I will state it outright: your reading of my intention is incorrect, and also seems to me to be based on a very flimsy reasoning process.

(To expand on that last part: I don't believe "cursing" acts as a valid item of evidence in favor of any assertion in particular. Certainly I intended my words to have a certain rhetorical effect there, else I would not have chosen the words I did--but the part where you immediately draw from that some conclusion about my "true intention" seems to me invalid, both in general and in this specific case.)

Replies from: dxu
comment by dxu · 2021-10-22T17:17:53.575Z · LW(p) · GW(p)

META: I debated with myself for a while about whether to post the parent comment, and--if I posted it--whether to adjust the wording to come across as less sharp. In the end, I judged that posting the comment I did was the best option given the circumstances, but I'd like to offer some further commentary on my thought process here.

From my perspective, conversations that occur under an adversarial framing are (mostly) not productive, and it was (and remains) quite obvious to me that my reply above is largely adversarial. I mostly view this as an inescapable cost of replying in this case; when someone alleges that your comments have some nefarious intention behind them, the adversarial framing is pretty much baked in, and if you want to defuse that framing there's really no way to do it outside of ignoring the allegation entirely... which I did contemplate doing. (Which is why my other, earlier reply was short, and addressed only what I saw as the main concern.)

I ultimately decided against remaining silent here because I judged that the impact of allowing such an allegation to stand would be to weaken the impact of all of my other comments in this subthread, including ones that make points I think are important. I am nonetheless saddened that there is no way to address such a claim without shifting the conversation at least somewhat back towards the adversarial frame, and thusly I am annoyed and frustrated that such a conversational move was rendered necessary. (If anyone has suggestions for how to better navigate this tradeoff in the future, I am open to hearing them.)


Separately: I suspect a large part of the adversarial interpretation here in fact arises directly from the role of the person posting the comment. When I wrote the parent comment, I attempted to include some neutral observations on the reasoning of the grandparent (e.g. "I don't believe 'cursing' acts as a valid item of evidence in favor of any assertion in particular"). And I'm quite confident that, had this remark been made by a third party, it would be interpreted for the most part as a neutral observation. But I anticipate that, because the remark in question was made by me (the person against whom the initial allegation was leveled), it will acquire a subtext that it would not otherwise possess.

I currently also see this as a mostly unavoidable consequence of the framing here. I don't see a good way to circumvent this, but at the same time I find myself rather keenly aware (and, if I'm to be honest, slightly resentful) of the way in which this prevents otherwise ordinary commentary from having the same effect it normally would. The net effect of this dynamic, I expect, is to discourage people from posting "neutral observations" in situations where they might reasonably expect that those observations will come across as adversarially coded.

Again: I don't have a good model of how to mitigate this effect (ideally while retaining the benefits of the heuristic in question); it's plausible to me that this may be intractable as long as we're dealing with humans. It nonetheless feels particularly salient to me at the moment, so I think I want to draw attention to it.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-22T17:46:19.584Z · LW(p) · GW(p)

When I wrote the parent comment, I attempted to include some neutral observations on the reasoning of the grandparent (e.g. "I don't believe 'cursing' acts as a valid item of evidence in favor of any assertion in particular"). And I'm quite confident that, had this remark been made by a third party, it would be interpreted for the most part as a neutral observation.

I'm happy to endorse the content of the parent comment. I'm a fan of (constructive, gentle-but-firm) pushback against people making large assertions about the contents of other people's thoughts and intentions, without much more substantial evidence.

comment by dxu · 2021-10-22T14:51:08.929Z · LW(p) · GW(p)

Anyway, you're writing a lot of words here. There's plenty of space to propose or cite a specific norm, explain why you think it's a generally good norm, and explain why Ilya violated it. I think if you did that, and left off the rest of the rhetoric, it would read as more transparent and less manipulative to me. A norm against "people [comparing] each other to characters from Rick and Morty" seems suspiciously specific to this case (and also not necessarily a great norm in general).

Okay, sure. I think LW should (and for the most part, does) have a norm against personal attacks. I think LW should also (and again, for the most part, does) have a norm against low-effort sniping. I think Ilya's comment[ing pattern] runs afoul of both of these norms (and does so rather obviously to boot), neither of which (I claim) is "suspiciously specific" in the way you describe.

comment by Ruby · 2021-10-22T02:51:10.075Z · LW(p) · GW(p)

IlyaShpitser's comments are worthy of moderator attention; I'm looking at them now.

The recent community discussion threads, this one alone at 741 comments, have exceeded the team's (or at least my) capacity to read and review every comment. Maybe we should set up a way for us to at least review every negative karma comment.

Replies from: dxu
comment by dxu · 2021-10-22T03:05:55.616Z · LW(p) · GW(p)

Thanks for your reply. I didn't intend my comment to impose any kind of implicit obligation on you or any other member of the mod team (especially if your capacity is as strained as it is), so to the extent that my initial comment came across as exerting social pressure for you to shift your priorities away from other more pressing concerns, I regret wording things the way I did, and hereby explicitly disavow that interpretation.

Replies from: Ruby
comment by Ruby · 2021-10-22T03:14:42.243Z · LW(p) · GW(p)

I appreciate the considerateness! 

These are important questions, though, that you've raised. I consider it a piece of "integrity debt" (as Ray would call it) that we don't have clear transparent moderation policies posted anywhere. I hope to get to that soonish and hopefully I can at least answer some of the questions you raised tomorrow.

comment by Charlie Steiner · 2021-10-17T05:04:32.599Z · LW(p) · GW(p)

Thanks. This puts the social dynamics at play in a different light for me - or rather it takes things I had heard about but not understood and puts them in any kind of light at all.

I am liking the AI Insights writeup so far.

I feel a strong sympathy for people who think they are better philosophers than Kant.

comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-17T10:26:39.088Z · LW(p) · GW(p)

everything I knew about how to be hired would point towards having little mental resistance to organizational narratives

Can you elaborate a little on this?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-17T14:59:52.321Z · LW(p) · GW(p)

At university, for example, you'll generally get a better grade if you let the narrative you're being told be the basic structure of your thinking, even if you have specific disagreements in places where you have specific evidence. In Rao's terminology, people who are Clueless are hired for, in an important sense, actually believing the organizational narrative at some level (even if there is some amount of double-think), and being manipulable by others around them who are maintaining the simulation.

If I showed too much disagreement with the narrative without high ability to explain myself in terms of the existing narrative, it would probably have seemed less desirable to hire me.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-17T15:40:06.255Z · LW(p) · GW(p)

I'm not sure whether you're talking about hiring in most organizations or hiring in MIRI in particular?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-17T15:52:29.973Z · LW(p) · GW(p)

It applies to most organizations including MIRI. There are some differences in the MIRI case like the ideology being more altruistic-focused and ambitious, and also more plausible in a lot of ways.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2021-10-18T05:40:51.115Z · LW(p) · GW(p)

It seems to me like MIRI hiring, especially of researchers in 2015-2017, but also in general, reliably produced hires with a certain philosophical stance (i.e. people who like UDASSA, TDT, etc.) and people with a certain kind of mathematical taste (i.e. people who like reflective oracles, Löb, Haskell, etc.).


I think that it selects pretty strongly for the above properties, and doesn't have much room for "little mental resistance to organizational narratives" (beyond any natural correlations).

I think there is also some selection on trustworthiness (e.g. following through with commitments) that is not as strong as the above selection, and that trustworthiness is correlated with altruism (and the above philosophical stance).

I think that altruism, ambition, timelines, agreement about the strategic landscape, agreement about probability of doom, little mental resistance to organizational narratives, etc. are/were basically rounding errors compared to selection on philosophical competence, and thus, by proxy, philosophical agreement (specifically a kind of philosophical agreement that things like agreement about timelines is not a good proxy for). 

(Later on, there was probably more selection on opinions about information security, but I don't think that applies much to you being hired.)

(Perhaps there is a large selection that is happening in the "applying for a job" side of the hiring pipeline. I don't feel like I can rule that out.)

(I will not be offended by a comment predicting that I believe this largely because of "little mental resistance to organizational narratives", even if the comment has no further justification.)

(I would also guess that I am somewhere in the bottom quartile of "mental resistance to organizational narratives" among MIRI employees.)

Replies from: Benquo, hg00
comment by Benquo · 2021-10-18T18:36:48.516Z · LW(p) · GW(p)

I will not be offended by a comment predicting that I believe this largely because of “little mental resistance to organizational narratives”, even if the comment has no further justification.

This isn't a full answer, but I suspect you believe this largely because you don't know what someone as smart as you who doesn't have "little mental resistance to organizational narratives" looks like, because mostly you haven't met them. They kind of look like very smart crazy people.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2021-10-18T19:38:24.010Z · LW(p) · GW(p)

Hmm, so this seems plausible, but in that case, it seems like the base rate for "little mental resistance to organizational narratives" is very low, and the story should not be "Hired people probably have little mental resistance because they were hired" but should instead be "Hired people probably have little mental resistance because basically everyone has little mental resistance." (These are explanatory uses of "because", not causal uses.)

This second story seems like it could be either very true or very false, for different values of "little", so it doesn't seem like it has a truth value until we operationalize "little".

Even beyond the base rates, it seems likely that a potential hire could be dismissed because they seem crazy, including at MIRI, but I would predict that MIRI is pretty far on the "willing to hire very smart crazy people" end of the spectrum.

comment by hg00 · 2021-10-21T00:55:13.372Z · LW(p) · GW(p)

It seems quite possible to me that the philosophical stance + mathematical taste you're describing aren't "natural kinds" (e.g. the topics you listed don't actually have a ton in common, besides being popular MIRI-sphere topics).

If that's the case, selecting for people with the described philosophical stance + mathematical taste could basically be selecting for "people with little resistance to MIRI's organizational narrative" (people who have formed their opinions about math + philosophy based on opinions common in/around MIRI).

selection on philosophical competence, and thus, by proxy, philosophical agreement

It sounds like you're saying that at MIRI, you approximate a potential hire's philosophical competence by checking to see how much they agree with you on philosophy. That doesn't seem great for group epistemics?

I've been following MIRI for many years now. I've sometimes noticed conversations that I'm tempted to summarize as "patting each other on the back for how right we all are". ("No one else is actually trying" has that flavor. Here [LW(p) · GW(p)] is a comment I wrote recently which might also be illustrative. You could argue this is a universal human tendency, but when I look back at different organizations where I've worked, I don't think any of them had it nearly as bad as MIRI does. Or at least, how bad MIRI used to have it. I believe it's gotten a bit better in recent years.)

I think MIRI is doing important work on important problems. I also think it would be high value of information for MIRI to experiment with trying to learn from people who don't share the "typical MIRI worldview" -- people interested in topics that MIRI-sphere people don't talk about much, people who have a somewhat different philosophical stance, etc. I think this could make MIRI's research significantly stronger. The marginal value of talking to / hiring a researcher who's already ~100% in agreement with you seems low compared to the marginal value of talking to / hiring a researcher who brings something new to the table.

If you're still in the mode of "searching for more promising paths", I think this sort of exploration strategy could be especially valuable. Perhaps you could establish some sort of visiting scholars program. This could maximize your exposure to diverse worldviews, and also encourage new researchers to be candid in their disagreements, if their employment is for a predetermined, agreed-upon length. (I know that SIAI had a visiting fellows program in years past that wasn't that great. If you want me to help you think about how to run something better I'm happy to do that.)

Another thought is it might be helpful to try & articulate precisely what makes MIRI different from other AI safety organizations, and make sure your hiring selects for that and nothing else. When I think about what makes MIRI different from other AI safety orgs, there are some broad things that come to mind:

But there are also some much more specific things, like the ones you mentioned -- interest in specific, fairly narrow mathematical & philosophical topics. From the outside it looks kinda like MIRI suffers from "not invented here" syndrome.

My personal guess is that MIRI would be a stronger org, and the AI safety ecosystem as a whole would be stronger, if MIRI expanded their scope to the bullet points I listed above and tried to eliminate the influence of "not invented here" on their hiring decisions. (My reasoning is partially based on the fact that I can't think of AI safety organizations besides MIRI which match the bullet points I listed. I think this proposal would be an expansion into neglected research territory. I'd appreciate a correction if there are orgs I'm unaware of / not remembering.)

Replies from: Scott Garrabrant, Scott Garrabrant, Scott Garrabrant, Scott Garrabrant
comment by Scott Garrabrant · 2021-10-21T03:04:52.185Z · LW(p) · GW(p)

It seems quite possible to me that the philosophical stance + mathematical taste you're describing aren't "natural kinds" (e.g. the topics you listed don't actually have a ton in common, besides being popular MIRI-sphere topics).

 

So, I believe that the philosophical stance is a natural kind. I can try to describe it better, but note that I won't be able to point at it perfectly:

I would describe it as "taking seriously the idea that you are a computation [Edit: an algorithm]." (As opposed to a collection of atoms, or a location in spacetime, or a Christian soul, or any number of other things you could identify with.)

I think that most of the selection for this philosophical stance happens not in MIRI hiring, but instead in being in the LW community. I think that the sequences are actually mostly about the consequences of this philosophical stance, and that the sequences pipeline is largely creating a selection for this philosophical stance. 

One can have this philosophical stance without a bunch of math ability, (many LessWrongers do) but when the philosophical stance is combined with math ability, it leads to a lot of agreement in taste in math-philosophy models, which is what you see in MIRI employees.

To make a specific (but hard to verify) claim, I think that if you were to take MIRI employees, intervene at a point before they found LessWrong, and show them a lot of things like UDASSA, TDT, and reflective oracles, they would be very interested in them relative to other math/philosophy ideas. Further, if you were to take people in 2000, before the existence of LW, and filter on being interested in some of these ideas, you would find people interested in many of these ideas.

(I listed ideas that came from MIRI, but there are many ideas that did not come from MIRI that people with this philosophical stance (and math ability) tend to be interested in: Logic, Probability, Game Theory, Information Theory, Algorithmic Information Theory)

(I used to not believe this. When I first started working at MIRI, I felt like I was lucky to have all of these mathematical and philosophical interests converge to the same place. I attributed it to a coincidence, but now think it has a common natural cause.)

(I think that this philosophical stance is really not enough to cause people to converge on many strategic questions. For example, I think Eliezer Yudkowsky, Jessica Taylor, Paul Christiano, and Andrew Critch all score very highly on this philosophical stance, and have a wide range of different views on timelines, probability of doom, and the strategic landscape.)

Replies from: hg00, sil-ver
comment by hg00 · 2021-10-21T04:52:31.807Z · LW(p) · GW(p)

The most natural shared interest for a group united by "taking seriously the idea that you are a computation" seems like computational neuroscience, but that's not on your list, nor do I recall it being covered in the sequences. If we were to tell 5 random philosophically inclined STEM PhD students to write a lit review on "taking seriously the idea that you are a computation" (giving them that phrase and nothing else), I'm quite doubtful we would see any sort of convergence towards the set of topics you allude to (Haskell, anthropics, mathematical logic).

As a way to quickly sample the sequences, I went to Eliezer's userpage, sorted by score [LW · GW], and checked the first 5 sequence posts:

IMO very little of the content of these 5 posts fits strongly into the theme of "taking seriously the idea that you are a computation". I think this might be another one of these rarity narrative things (computers have been a popular metaphor for the brain for decades, but we're the only ones who take this seriously, same way we're the only ones who are actually trying).

the sequences pipeline is largely creating a selection for this philosophical stance

I think the vast majority of people who bounce off the sequences do so either because it's too longwinded or they don't like Eliezer's writing style. I predict that if you ask someone involved in trying to popularize the sequences, they will agree.

In this post [LW · GW] Eliezer wrote:

I've written about how "science" is inherently public...

But that's only one vision of the future. In another vision, the knowledge we now call "science" is taken out of the public domain—the books and journals hidden away, guarded by mystic cults of gurus wearing robes, requiring fearsome initiation rituals for access—so that more people will actually study it.

I assume this has motivated a lot of the stylistic choices in the sequences and Eliezer's other writing: the 12 virtues of rationality, the litany of Gendlin/Tarski/Hodgell, parables and fables, Jeffreyssai and his robes/masks/rituals.

I find the sequences to be longwinded and repetitive. I think Eliezer is a smart guy with interesting ideas, but if I wanted to learn quantum mechanics (or any other academic topic the sequences cover), I would learn it from someone who has devoted their life to understanding the subject and is widely recognized as a subject matter expert.

From my perspective, the question is how anyone gets through all 1800+ pages of the sequences. My answer is that the post I linked [LW · GW] is right. The mystical presentation, where Eliezer plays the role of your sensei who throws you to the mat out of nowhere if you forgot to keep your center of gravity low, really resonates with some people (and really doesn't resonate with others). By the time someone gets through all 1800+ pages, they've invested a significant chunk of their ego in Eliezer and his ideas.

Replies from: Scott Garrabrant, Scott Garrabrant
comment by Scott Garrabrant · 2021-10-21T06:00:22.774Z · LW(p) · GW(p)

I agree that the phrase "taking seriously the idea that you are a computation" does not directly point at the cluster, but I still think it is a natural cluster. I think that computational neuroscience is in fact high up on the list of things I expect less wrongers to be interested in. To the extent that they are not as interested in it as other things, I think it is because it is too hard to actually get much that feels like algorithmic structure from neuroscience.

I think that the interest in anthropics is related to the fact that computations are the kind of thing that can be multiply instantiated. I think logic is a computational-like model of epistemics. I think that haskell is not really that much about this philosophy, and is more about mathematical elegance. (I think that liking elegance/simplicity is mostly different from the "I am a computation" philosophy, and is also selected for at MIRI.)

I think that a lot of the sequences (including the first and third and fourth posts in your list) are about thinking about the computation that you are running in contrast and relation to an ideal (AIXI-like) computation.

I think that That Alien Message is directly about getting the reader to imagine being a subprocess inside an AI, and thinking about what they would do in that situation.

I think that the politics post is not that representative of the sequences, and it bubbled to the top by karma because politics gets lots of votes.

(It does feel a little like I am justifying the connection in a way that could be used to justify false connections. I still believe that there is a cluster very roughly described as "taking seriously the idea that you are a computation" that is a natural class of ideas that is the heart of the sequences)

I think the vast majority of people who bounce off the sequences do so either because it's too longwinded or they don't like Eliezer's writing style. I predict that if you ask someone involved in trying to popularize the sequences, they will agree.

I agree, but I think that the majority of people who love the sequences do so because they deeply share this philosophical stance, and don't find it much elsewhere, more so than because they e.g. find a bunch of advice in it that actually works for them.

I think the effect you describe is also part of why people like the sequences, but I think that a stronger effect is that there are a bunch of people who had a certain class of thoughts prior to reading the sequences, didn't see thoughts of this type before finding LessWrong, and then saw these thoughts in sequences. (I especially believe this about the kind of people who get hired at MIRI.) Prior to the sequences, they were intellectually lonely in not having people to talk to that shared this philosophical stance, that is a large part of their worldview.

I view the sequences as a collection of thoughts similar to things that I was already thinking, that was then used as a flag to connect me with people who were also already thinking the same things, more so than something that taught me a bunch of stuff. I predict a large portion of karma-weighted lesswrongers will say the same thing. (This isn't inconsistent with your theory, but I think would be evidence of mine.)

My theory about why people like the sequences is very entangled with the philosophical stance actually being a natural cluster, and thus something that many different people would have independently.

I think that MIRI selects for the kind of person who likes the sequences, which under my theory is a philosophical stance related to being a computation, and under your theory seems entangled with little mental resistance to (some kinds of) narratives.

comment by Scott Garrabrant · 2021-10-21T06:19:00.751Z · LW(p) · GW(p)

I notice I like "you are an algorithm" better than "you are a computation", since "computation" feels like it could point to a specific instantiation of an algorithm, and I think that algorithm as opposed to instantiation of an algorithm is an important part of it.

Replies from: Linch
comment by Linch · 2021-10-22T10:25:10.094Z · LW(p) · GW(p)

This sounds right to me. FDT feels more natural when I think of myself as an algorithm than when I think of myself as a computation, for example.

Replies from: Linch
comment by Linch · 2021-10-23T20:02:46.500Z · LW(p) · GW(p)

To be slightly more precise, I think I historically felt like I identified with like 60% of framings in the general MIRI cluster (at least the way it appears in public outputs) and now I'm like 80%+, and part of the difference here was that I already was pretty into stuff like empiricism, materialism, Bayesianism, etc., but I previously (not very reflectively) had opinions and intuitions in the direction of thinking of myself as a computational instance, and these days I can understand the algorithmic framing much better (even though it's still not very intuitive/natural to me).

(Numbers made up and not well thought out)

comment by Rafael Harth (sil-ver) · 2021-10-21T07:13:01.806Z · LW(p) · GW(p)

Datapoint: I've read the sequences and am familiar with lots of MIRI-related math and philosophy, and very much think humans are atoms. I think this is compatible with 95%+ (but not 100%) of Eliezer's writing.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2021-10-21T08:21:36.218Z · LW(p) · GW(p)

Interesting. I just went and looked at some old survey results hoping I would find a question like this one. I did not find a similar question. (The lack of a question about this is itself evidence against my theory.)

(Agreement among LessWrongers is not that crux-y for my belief that it is both a natural cluster and is highly selected for at MIRI, but I am still interested in the question about LW.)

comment by Scott Garrabrant · 2021-10-21T03:21:28.404Z · LW(p) · GW(p)

It sounds like you're saying that at MIRI, you approximate a potential hire's philosophical competence by checking to see how much they agree with you on philosophy. That doesn't seem great for group epistemics?

 

I did not mean to imply that MIRI does this any more than e.g. philosophy academia. 

When you don't have sufficient objective things to use to judge competence, you end up having to use agreement as a proxy for competence. This is because when you understand a mistake, you can filter for people who do not make that mistake, but when you do not understand a mistake you are making, it is hard to filter for people that do not make that mistake. 

Sometimes, you interact with someone who disagrees with you, and you talk to them, and you learn that you were making a mistake that they did not make, and this is a very good sign for competence, but you can only really get this positive signal about as often as you change your mind, which isn't often.

Sometimes, you can also disagree with someone, and see that their position is internally consistent, which is another way you can observe some competence without agreement.

I think that personally, I use a proxy that is something like "How much do I feel like I learn(/like where my mind goes) when I am talking to the person," which I think selects for some philosophical agreement (their concepts are not so far from my own that I can't translate), but also some philosophical disagreement (their concepts are better than my own at making at least one thing less confusing). (This condition does not feel necessary for me. I feel like having a coherent plan is also a great sign, even if I do not feel like I learn when I am talking to the person.)

comment by Scott Garrabrant · 2021-10-21T03:42:57.805Z · LW(p) · GW(p)

If that's the case, selecting for people with the described philosophical stance + mathematical taste could basically be selecting for "people with little resistance to MIRI's organizational narrative"

 

So, I do think that MIRI hiring does select for people with "little resistance to MIRI's organizational narrative," through the channel of "You have less mental resistance to narratives you agree with" and "You are more likely to work for an organization when you agree with their narrative." 

I think that additionally people have a score on "mental resistance to organizational narratives" in general, and I was arguing that MIRI does not select against this property (very strongly). (Indeed, I think they select for it, but not as strongly as they select for philosophy.) I think that when the OP was thinking about how much to trust her own judgement, this is the more relevant variable, and the variable she was referring to.

comment by Scott Garrabrant · 2021-10-21T04:08:47.131Z · LW(p) · GW(p)

I don't want to speak for/about MIRI here, but I think that I personally do the "patting each other on the back for how right we all are" more than I endorse doing it. I think the "we" is less likely to be MIRI, and more likely to be a larger group that includes people like Paul.

I agree that it would be really really great if MIRI can interact with and learn from different views. I think mostly everyone agrees with this, and has tried, and in practice, we keep hitting "inferential distance" shaped walls, and become discouraged, and (partially) give up. To be clear, there are a lot of people/ideas where I interact with them and conclude "There probably isn't much for me to learn here," but there are also a lot of people/ideas where I interact with them and become sad because I think there is something for me to learn there, and communicating across different ontologies is very hard.

I agree with your bullet points descriptively, but they are not exhaustive.

I agree that MIRI has a strong (statistical) bias towards things that were invented internally. It is currently not clear to me how much of this statistical bias is also a mistake vs the correct reaction to how well internally invented things seem to fit our needs, and how hard it is to find the good stuff that exists externally when it exists. (I think there are a lot of great ideas out there that I really wish I had, but I don't have a great method for filtering for them in the sea of irrelevant stuff.)

Replies from: dxu, hg00
comment by dxu · 2021-10-21T04:35:36.478Z · LW(p) · GW(p)

I agree that MIRI has a strong (statistical) bias towards things that were invented internally. It is currently not clear to me how much of this statistical bias is also a mistake vs the correct reaction to how well internally invented things seem to fit our needs, and how hard it is to find the good stuff that exists externally when it exists. (I think there are a lot of great ideas out there that I really wish I had, but I don't have a great method for filtering for them in the sea of irrelevant stuff.)

Strong-upvoted for this paragraph in particular, for pointing out that the strategy of "seeking out disagreement in order to learn" (which obviously isn't how hg00 actually worded it, but seems to me descriptive of their general suggested attitude/approach) has real costs, which can sometimes be prohibitively high.

I often see this strategy contrasted with a group's default behavior, and when this happens it is often presented as [something like] a Pareto improvement over said default behavior, with little treatment (or even acknowledgement) given to the tradeoffs involved. I think this occurs because the strategy in question is viewed as inherently virtuous (which in turn I fundamentally see as a consequence of epistemic learned helplessness run rampant, leaking past the limits of any particular domain and seeping into a general attitude towards anything considered sufficiently "hard" [read: controversial]), and attributing "virtuousness" to something often has the effect of obscuring the real costs and benefits thereof.

Replies from: hg00, Benito, TekhneMakre
comment by hg00 · 2021-10-21T08:31:00.747Z · LW(p) · GW(p)

which in turn I fundamentally see as a consequence of epistemic learned helplessness run rampant

Not sure I follow. It seems to me that the position you're pushing, that learning from people who disagree is prohibitively costly, is the one that goes with learned helplessness. ("We've tried it before, we encountered inferential distances, we gave up.")

Suppose there are two execs at an org on the verge of building AGI. One says "MIRI seems wrong for many reasons, but we should try and talk to them anyways to see what we learn." The other says "Nah, that's epistemic learned helplessness, and the costs are prohibitive. Turn this baby on." Which exec do you agree with?

This isn't exactly hypothetical, I know someone at a top AGI org (I believe they "take seriously the idea that they are a computation/algorithm") who reached out to MIRI and was basically ignored. It seems plausible to me that MIRI is alienating a lot of people this way, in fact. I really don't get the impression they are spending excessive resources engaging people with different worldviews.


Anyway, one way to think about it is that talking to people who disagree is just a much more efficient way to increase the accuracy of your beliefs. Suppose the population as a whole is 50/50 pro-Skub and anti-Skub. Suppose you learn that someone is pro-Skub. This should cause you to update in the direction that they've been exposed to more evidence for the pro-Skub position than the anti-Skub position. If they're trying to learn facts about the world as quickly as possible, their time is much better spent reading an anti-Skub book than a pro-Skub book, since the pro-Skub book will have more facts they already know. An anti-Skub book also has more decision-relevant info. If they read a pro-Skub book, they'll probably still be pro-Skub afterwards. If they read an anti-Skub book, they might change their position and therefore change their actions.

Talking to an informed anti-Skub in person is even more efficient, since the anti-Skub person can present the very most relevant/persuasive evidence that is the very most likely to change their actions.

Applying this thinking to yourself, if you've got a particular position you hold, that's evidence you've been disproportionately exposed to facts that favor that position. If you want to get accurate beliefs quickly you should look for the strongest disconfirming evidence you can find.
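
To make the arithmetic concrete, here's a minimal toy simulation of the point above (all numbers and the belief rule are made up purely for illustration): if people form positions by tallying whichever facts they happen to have seen, then conditional on someone being pro-Skub, an anti-Skub book contains more material that is new to them.

```python
import numpy as np

rng = np.random.default_rng(0)

n_people = 20_000
n_pro_facts, n_anti_facts = 50, 50   # hypothetical pools of distinct facts
facts_seen = 30                      # each person has randomly encountered 30 of the 100 facts

# Indices 0-49 are pro-Skub facts, 50-99 are anti-Skub facts.
seen = np.array([rng.choice(100, size=facts_seen, replace=False) for _ in range(n_people)])
pro_seen = (seen < n_pro_facts).sum(axis=1)
anti_seen = facts_seen - pro_seen

# Crude belief rule: you lean pro-Skub if you've seen more pro facts than anti facts.
is_pro = pro_seen > anti_seen

# Among pro-Skub people, how many facts in each book would be new to them?
print("new facts in a pro-Skub book: ", (n_pro_facts - pro_seen[is_pro]).mean())
print("new facts in an anti-Skub book:", (n_anti_facts - anti_seen[is_pro]).mean())
# The anti-Skub book reliably contains more facts a pro-Skub reader hasn't already seen.
```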

None of this discussion even accounts for confirmation bias, groupthink, or information cascades! I'm getting a scary [LW · GW] "because we read a website that's nominally about biases, we're pretty much immune to bias" vibe from your comment. Knowing about a bias and having implemented an effective, evidence-based debiasing intervention for it are very different.

BTW this is probably the comment that updated me the most in the direction that LW will become / already is a cult.

Replies from: Scott Garrabrant, Scott Garrabrant
comment by Scott Garrabrant · 2021-10-21T11:43:07.541Z · LW(p) · GW(p)

So I think my orientation on seeking out disagreement is roughly as follows. (This is going to be a rant I write in the middle of the night, so might be a little incoherent.)

There are two distinct tasks: 1) Generating new useful hypotheses/tools, and 2) Selecting between existing hypotheses/filtering out bad hypotheses.

There are a bunch of things that make people good at both these tasks simultaneously. Further, each of these tasks is partially helpful for doing the other. However, I still think of them as mostly distinct tasks. 

I think skill at these tasks is correlated in general, but possibly anti-correlated after you filter on enough g correlates, in spite of the fact that they are each common subtasks of the other. 

I don't think this (anti-correlated given g) very confidently, but I do think it is good to track your own and others' skill in the two tasks separately, because it is possible to have very different scores (and because judging generators on reliability might, as a side effect, make them less generative out of fear of being wrong, and similarly vice versa).
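
As an aside, here is a toy simulation of one mechanism that could produce this "anti-correlated after filtering" pattern (a Berkson-style selection effect; this is my illustration, not something the comment spells out), assuming the two skills are positively related only through a shared factor and that the filter rewards a composite of both:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
g = rng.normal(size=n)                    # shared "g"-like factor
generate = 0.7 * g + rng.normal(size=n)   # skill at generating hypotheses
select = 0.7 * g + rng.normal(size=n)     # skill at selecting/filtering hypotheses

# Positive correlation (~0.33) in the full population, driven entirely by g.
print(round(np.corrcoef(generate, select)[0, 1], 2))

# Filter on a composite that rewards both skills (e.g. a hiring bar), keeping the top 1%.
composite = generate + select + rng.normal(size=n)
top = composite > np.quantile(composite, 0.99)
print(round(np.corrcoef(generate[top], select[top])[0, 1], 2))  # negative among the selected
```

Within the selected pool, clearing the bar despite middling generation skill is evidence of strong selection skill, and vice versa, which is what flips the sign; whether anything like this actually operates in any real hiring process is of course an empirical question.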

I think that seeking out disagreement is especially useful for the selection task, and less useful for the generation task. I think that echo chambers are especially harmful for the selection task, but can sometimes be useful for the generation task. Working with someone who agrees with you on a bunch of stuff and shares your ontology allows you to build deeply faster. Someone with a lot of disagreement with you can cause you to get stuck on the basics and not get anywhere. (Sometimes disagreement can also be actively helpful for generation, but it is definitely not always helpful.)

I spend something like 90+% of my research time focused on the generation task. Sometimes I think my colleagues are seeing something that I am missing, and I seek out disagreement, so that I can get a new perspective, but the goal is to get a slightly different perspective on the thing I am working on, and not really on filtering based on which view is more true. I also sometimes do things like double-crux with people with fairly different world views, but even there, it feels like the goal is to collect new ways to think, rather than to change my mind. I think that for this task a small amount of focusing on people who disagree with you is pretty helpful, but even then, I think I get the most out of people who disagree with me a little bit, because I am more likely to be able to actually pick something up. Further, my focus is not really on actually understanding the other person; I just want to find new ways to think, so I will often translate things to something nearby in my own ontology, and thus learn a lot, but still not be able to pass an ideological Turing test.

On the other hand, when you are not trying to find new stuff, but instead e.g. evaluate various different hypotheses about AI timelines, I think it is very important to try to understand views that are very far from your own, and take steps to avoid echo chamber effects. It is important to understand the view, the way the other person understands it, not just the way that conveniently fits with your ontology. This is my guess at the relevant skills, but I do not actually identify as especially good at this task. I am much better at generation, and I do a lot of outside-view style thinking here.

However, I think that currently, AI safety disagreements are not about two people having mostly the same ontology and disagreeing on some important variables, but rather trying to communicate across very different ontologies. This means that we have to build bridges, and the skills start to look more like generation skill. It doesn't help to just say, "Oh, this other person thinks I am wrong, I should be less confident." You actually have to turn that into something more productive, which means building new concepts, and a new ontology in which the views can productively dialogue. Actually talking to the person you are trying to bridge to is useful, but I think so is retreating to your echo chamber, and trying to make progress on just becoming less confused yourself.

For me, there is a handful of people who I think of as having very different views from me on AI safety, but who are still close enough that I feel like I can understand them at all. When I think about how to communicate, I mostly think about bridging the gap to these people (which already feels like an impossibly hard task), and not as much the people that are really far away. Most of these people I would describe as sharing the philosophical stance I said MIRI selects for, but probably not all.

If I were focusing on resolving strategic disagreements, I would try to interact a lot more than I currently do with people who disagree with me. Currently, I am choosing to focus more on just trying to figure out how minds work in theory, which means I only interact with people who disagree with me a little. (Indeed, I currently also only interact with people who agree with me a little bit, and so am usually in an especially strong echo chamber, which is my own head.)

However, I feel pretty doomy about my current path, and might soon go back to trying to figure out what I should do, which means trying to leave the echo chamber. Often when I do this, I neither produce anything great nor change my mind, and eventually give up and go back to doing the doomy thing where at least I make some progress (at the task of figuring out how minds work in theory, which may or may not end up translating to AI safety at all).

Basically, I already do quite a bit of the "Here are a bunch of people who are about as smart as I am, and have thought about this a bunch, and have a whole bunch of views that differ from me and from each other. I should be not that confident" (although I should often take actions that are indistinguishable from confidence, since that is how you work with your inside view.) But learning from disagreements more than that is just really hard, and I don't know how to do it, and I don't think spending more time with them fixes it on its own. I think this would be my top priority if I had a strategy I was optimistic about, but I don't, and so instead, I am trying to figure out how minds work, which seems like it might be useful for a bunch of different paths. (I feel like I have some learned helplessness here, but I think everyone else (not just MIRI) is also failing to learn (new ontologies, rather than just noticing mistakes) from disagreements, which makes me think it is actually pretty hard.)

comment by Scott Garrabrant · 2021-10-21T09:54:30.834Z · LW(p) · GW(p)

Not sure I follow. It seems to me that the position you're pushing, that learning from people who disagree is prohibitively costly, is the one that goes with learned helplessness. ("We've tried it before, we encountered inferential distances, we gave up.")

 

I believe they are saying that cheering for seeking out disagreement is learned helplessness as opposed to doing a cost-benefit analysis about seeking out disagreement. I am not sure I get that part either. 

I was also confused reading the comment, thinking that maybe they copied the wrong paragraph, and meant the 2nd paragraph.

I am interested in the fact that you find the comment so cult-y though, because I didn't pick that up.

Replies from: hg00
comment by hg00 · 2021-10-21T10:53:57.940Z · LW(p) · GW(p)

I am interested in the fact that you find the comment so cult-y though, because I didn't pick that up.

It's a fairly incoherent comment which argues that we shouldn't work to overcome our biases or engage with people outside our group, with strawmanning that seems really flimsy... and it has a bunch of upvotes. Seems like curiosity, argument, and humility are out, and hubris is in.

comment by TekhneMakre · 2021-10-21T07:27:45.090Z · LW(p) · GW(p)
which in turn I fundamentally see as a consequence of epistemic learned helplessness run rampant

I don't understand this, but for some reason I'm interested. Could you say a couple sentences more? How does rampant learned helplessness about having correct beliefs make it more appealing to seek new information by seeking disagreement? Are you saying that there's learned helplessness about a different strategy for relating to potential sources of information?

Replies from: dxu
comment by dxu · 2021-10-21T16:56:22.051Z · LW(p) · GW(p)

So, my model is that "epistemic learned helplessness" essentially stems from an inability to achieve high confidence in one's own (gears-level [? · GW]) models. Specifically, by "high confidence" here I mean a level of confidence substantially higher than one would attribute to an ambient hypothesis in a particular space--if you're not strongly confident that your model [in some domain] is better than the average competing model [in that domain], then obviously you'd prefer to adopt an exploration-based strategy (that is to say: one in which you seek out disagreeing hypotheses in order to increase the variance of your information intake) with respect to that domain.

I think this is correct, so far as it goes, as long as we are in fact restricting our focus to some domain or set of domains. That is to say: as humans, naturally it's impossible to explore every domain in sufficient depth that we can form and hold high confidence in a gears-level model for said domain, which in turn means there will obviously be some domains in which "epistemic learned helplessness" is simply the correct attitude to take. (And indeed, the original blog post in which Scott introduced the concept of "epistemic learned helplessness" does in fact contextualize it using history books as an example.)

Where I think this goes wrong, however, is when the proponent of "epistemic learned helplessness" fails to realize that this attitude's appropriateness is actually a function of one's confidence in some particular domain, and instead allows the attitude to seep into every domain. Once that happens, "inability to achieve confidence in one's own models" ceases to be a rational reaction to a lack of knowledge, and instead turns into an omnipresent fog clouding over everything you think and do. (And the exploration-based strategy I outlined above ceases to be a rational reaction to a lack of confidence, and instead turns into a strategy that's always correct and virtuous.)

This is the sense in which I characterized the result as

a consequence of epistemic learned helplessness run rampant, leaking past the limits of any particular domain and seeping into a general attitude towards anything considered sufficiently "hard"

(Note the importance of the disclaimer "hard". For example, I've yet to encounter anyone whose "epistemic learned helplessness" is so extreme that they stop to question e.g. whether they are in fact capable of driving a car. But that in itself is not particularly reassuring, especially when domains we care about include stuff labeled "hard".)


Now for the rub: I think anyone working on AI alignment (or any technical question of comparable difficulty) mustn't exhibit this attitude with respect to [the thing they're working on]. If you have a problem where you're not able to achieve high confidence in your own models of something (relative to competing ambient models), you're not going to be able to follow your own thoughts far enough to do good work--not without being interrupted by thoughts like "But if I multiply the probability of this assumption being true, by the probability of that assumption being true, by the probability of that assumption being true..." and "But [insert smart person here] thinks this assumption is unlikely to be true, so what probability should I assign to it really?"

I think this is very bad. And since I think it's very bad, naturally I will strongly oppose attempts to increase pressure in that particular direction--especially since I think pressure to think this way in this particular community is already ALARMINGLY HIGH. I think "epistemic learned helplessness" (which sometimes goes by more innocuous names as well, like fox epistemology or modest epistemology) is epistemically corrosive once it has breached quarantine, and by and large I think it has breached quarantine for a dismayingly large number of people (though thankfully my impression is that this has largely not occurred at MIRI).

Replies from: hg00
comment by hg00 · 2021-10-22T10:10:50.990Z · LW(p) · GW(p)

It seems like you wanted me to respond to this comment, so I'll write a quick reply.

Now for the rub: I think anyone working on AI alignment (or any technical question of comparable difficulty) mustn't exhibit this attitude with respect to [the thing they're working on]. If you have a problem where you're not able to achieve high confidence in your own models of something (relative to competing ambient models), you're not going to be able to follow your own thoughts far enough to do good work--not without being interrupted by thoughts like "But if I multiply the probability of this assumption being true, by the probability of that assumption being true, by the probability of that assumption being true..." and "But [insert smart person here] thinks this assumption is unlikely to be true, so what probability should I assign to it really?"

This doesn't seem true for me. I think through details of exotic hypotheticals all the time.

Maybe others are different. But it seems like maybe you're proposing that people self-deceive in order to get themselves confident enough to explore the ramifications of a particular hypothesis. I think we should be a bit skeptical of intentional self-deception. And if self-deception is really necessary, let's make it a temporary suspension of belief sort of thing, as opposed to a life belief that leads you to not talk to those with other views.

It's been a while since I read Inadequate Equilibria. But I remember the message of the book being fairly nuanced. For example, it seems pretty likely to me that there's no specific passage which contradicts the statement "hedgehogs make better predictions on average than foxes".

I support people trying to figure things out for themselves, and I apologize if I unintentionally discouraged anyone from doing that -- it wasn't my intention. I also think people consider learning from disagreement to be virtuous for a good reason, not just due to "epistemic learned helplessness". Also, learning from disagreement seems importantly different from generic deference -- especially if you took the time to learn about their views and found yourself unpersuaded. Basically, I think people should account for both known unknowns (in the form of people who disagree whose views you don't understand) and unknown unknowns, but it seems OK to not defer to the masses / defer to authorities if you have a solid grasp of how they came to their conclusion (this is my attempt to restate the thesis of Inadequate Equilibria as I remember it).

I don't deny that learning from disagreement has costs. Probably some people do it too much. I encouraged MIRI to do it more on the margin, but it could be that my guess about their current margin is incorrect, who knows.

Replies from: dxu
comment by dxu · 2021-10-22T15:26:04.832Z · LW(p) · GW(p)

Thanks for the reply.

But it seems like maybe you're proposing that people self-deceive in order to get themselves confident enough to explore the ramifications of a particular hypothesis. I think we should be a bit skeptical of intentional self-deception.

I want to clarify that this is not my proposal, and to the extent that it had been someone's proposal, I would be approximately as wary about it as you are. I think self-deception is quite bad on average, and even on occasions when it's good, that fact isn't predictable in advance, making choosing to self-deceive pretty much always a negative expected-value action.

The reason I suspect you interpreted this as my proposal is that you're speaking from a frame where "confidence in one's model" basically doesn't happen by default, so to get there people need to self-deceive, i.e. there's no way for someone [in a sufficiently "hard" domain] to have a model and be confident in that model without doing [something like] artificially inflating their confidence higher than it actually is.

I think this is basically false. I claim that having (real, not artificial) confidence in a given model (even of something "hard") is entirely possible, and moreover happens naturally, as part of the process of constructing a gears-level model to begin with. If your gears-level model actually captures some relevant fraction of the problem domain, I claim it will be obviously the case that it does so--and therefore a researcher holding that model would be very much justified in placing high confidence in [that part of] their model.

How much should such a researcher be swayed by the mere knowledge that other researchers disagree? I claim the ideal answer is "not at all", for the same reason that argument screens off authority [LW · GW]. And I agree that, from the perspective of somebody on the outside (who only has access to the information that two similarly-credentialed researchers disagree, without access to the gears in question), this can look basically like self-deception. But (I claim) from the inside the difference is very obvious, and not at all reminiscent of self-deception.

(Some fields do not admit good gears-level models at all, and therefore it's basically impossible to achieve the epistemic state described above. For people in such fields, they might plausibly imagine that all fields have this property. But this isn't the case--and in fact, I would argue that the division of the sciences into "harder" and "softer" is actually pointing at precisely this distinction: the "hardness" attributed to a field is in fact a measure of how possible it is to form a strong gears-level model.)

Does this mean "learning from disagreement" is useless? Not necessarily; gears-level models can also be wrong and/or incomplete, and one entirely plausible (and sometimes quite useful) mechanism by which to patch up incomplete models is to exchange gears with someone else, who may not be working with quite the same toolbox as you. But (I claim) for this process to actually help, it should done in a targeted way: ideally, you're going into the conversation already with some idea of what you hope to get out of it, having picked your partner beforehand for their likeliness to have gears you personally are missing. If you're merely "seeking out disagreement" for the purpose of fulfilling a quota, that (I claim) is unlikely to lead anywhere productive. (And I view your exhortations for MIRI to "seek out more disagreement on the margin" as proposing essentially just such a quota.)


(Standard disclaimer: I am not affiliated with MIRI, and my views do not necessarily reflect their views, etc.)

comment by hg00 · 2021-10-21T08:45:18.162Z · LW(p) · GW(p)

Thanks, this is encouraging.

I think mostly everyone agrees with this, and has tried, and in practice, we keep hitting "inferential distance" shaped walls, and become discouraged, and (partially) give up.

I've found an unexpected benefit of trying to explain my thinking and overcome the inferential distance is that I think of arguments which change my mind. Just having another person to bounce ideas off of causes me to look at things differently, which sometimes produces new insights. See also the book passage I quoted here [LW(p) · GW(p)].

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2021-10-21T09:43:14.595Z · LW(p) · GW(p)

Note that I think the inferential distance often takes the form of trying to communicate across different ontologies. Sometimes a person will even correctly get the arguments of their discussion partner to the point where they can internally inhabit that point of view, but it is still hard to get the argument to dialogue productively with their other views because the two viewpoints have such different ontologies.

comment by James_Miller · 2021-10-18T17:37:12.619Z · LW(p) · GW(p)

As a college professor who has followed from physically afar the rationality community from the beginning, here are my suggestions:

  1. Illegal drugs are, on average, very bad.  How about a policy that if you use illegal drugs you are presumptively considered not yet good enough to be in the community?
  2. My college has a system of deans that help students deal with all kinds of problems.  Every student has the same dean for all her time at my school, so the dean gets to know the student.  Perhaps trusted people could become rationality deans and hold office hours open to their charges.
  3. Taking AI risks seriously is necessary for the mission of the rationalist community but is going to take an emotional toll on lots of people.  Outsiders should be more understanding of problems that arise in this community than they would be of problems at, say, a college campus.
Replies from: ioannes_shade, Avi Weiss, habryka4, ozziegooen
comment by ioannes (ioannes_shade) · 2021-10-19T18:00:58.444Z · LW(p) · GW(p)

Illegal drugs are, on average, very bad.  How about a policy that if you use illegal drugs you are presumptively considered not yet good enough to be in the community?

The risk profile of a drug isn't correlated with its legal status, largely because our current drug laws were created for political purposes in the 1970s.  A quote from Nixon advisor John Ehrlichman:

“The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people. You understand what I’m saying? We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.”

 

A 2010 analysis concluded that psychedelics are causing far less harm than legal drugs like alcohol and tobacco. (Psychedelics still carry substantial risks, aren't for everybody, and should always be handled with care.)

Replies from: Linch, James_Miller
comment by Linch · 2021-10-21T10:05:05.699Z · LW(p) · GW(p)

A 2010 analysis concluded that psychedelics are causing far less harm than legal drugs like alcohol and tobacco. (Psychedelics still carry substantial risks, aren't for everybody, and should always be handled with care.)


? This is total harm, not per use. More people die of car crashes than from rabid wolves, but I still find myself more inclined to ride cars than ride rabid wolves as a form of transportation.

Replies from: Linch, ioannes_shade
comment by Linch · 2021-10-21T10:11:52.493Z · LW(p) · GW(p)

I'm confused why there were ~40 comments in this subthread without anybody else pointing out this pretty glaring error of logical inference (unless I'm misunderstanding something)

Replies from: Viliam
comment by Viliam · 2021-10-21T22:02:51.561Z · LW(p) · GW(p)

I was going to say something similar, that "how dangerous is substance X" only makes sense when you specify how much of substance X you consume and how often.

Like, when you calculate "the danger of alcohol", are you describing those who drink one glass of wine each year on their birthday, or those who start every morning by drinking a cup of vodka, or some weighted average? Same question for every other substance.

And if the answer is "the danger of how the average user consumes substance X", well, what makes you sure that this number will apply to you? (Are you really going to make sure that your use is average, in both amount and frequency? Do you even know what those averages are?)

Then consider the fact that different people can react to the same substance differently. If you specify the "danger" as one number, what is the underlying probability distribution? If substance X causes serious-but-not-crippling problems in 50% of users, and substance Y completely destroys 5% of users, which one is "more dangerous"?

Replies from: Linch
comment by Linch · 2021-10-21T22:37:13.244Z · LW(p) · GW(p)

Agreed, there are two different errors here. One is conflating total harm with per-individual harm. The other, more subtle point you're alluding to is that a lot of the relative harm of alcohol/tobacco/etc. has to do with frequency of use, which is a different question from whether doing X once in an individual or community setting is advisable.
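
To make both distinctions concrete, here's a tiny back-of-the-envelope sketch with entirely made-up numbers (not real statistics for any substance): total harm, per-user harm, and per-use harm can rank the same two substances differently.

```python
# All numbers below are hypothetical, chosen only to illustrate the arithmetic.
substances = {
    #              (annual deaths, users,         uses per user per year)
    "alcohol":     (3_000_000,     2_000_000_000, 200),
    "substance_x": (10_000,        10_000_000,    5),
}

for name, (deaths, users, uses_per_user) in substances.items():
    total_uses = users * uses_per_user
    print(f"{name:12s} total deaths/yr: {deaths:>9,}   "
          f"per user/yr: {deaths / users:.4%}   "
          f"per use: {deaths / total_uses:.6%}")

# With these made-up numbers, "alcohol" dominates on total harm (3,000,000 vs 10,000 deaths)
# mostly because it is used vastly more often, while per use it is the *less* risky of the
# two (0.00075% vs 0.02%). Which number matters depends on the question being asked.
```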

comment by ioannes (ioannes_shade) · 2021-10-25T22:27:01.845Z · LW(p) · GW(p)

Good point, though I think current evidence [LW(p) · GW(p)] as a whole (anti-addictive; efficacy as a therapeutic modality; population surveys finding psychedelic use anticorrelated with psychological distress) pushes towards psychedelics' risk profile being less harmful though higher variance than alcohol and tobacco per use.

comment by James_Miller · 2021-10-19T18:31:57.150Z · LW(p) · GW(p)

The World Health Organization has estimated that in 2016, one in twenty deaths world-wide was caused by alcohol.  Smoking has been estimated to take ten years off your life.  Consequently, psychedelics can be horrible and still not as bad as alcohol and tobacco.

Replies from: ioannes_shade, Kaj_Sotala
comment by ioannes (ioannes_shade) · 2021-10-19T18:40:13.677Z · LW(p) · GW(p)

Consequently, psychedelics can be horrible and still not as bad as alcohol and tobacco.

They could be, but current evidence shows that psychedelic-assisted therapy is efficacious for PTSD, depression, end-of-life anxiety, smoking cessation, and probably alcoholism.

Psychedelic experiences have been rated as extremely meaningful by healthy volunteers [1, 2], and psychedelic use is associated with decreased psychological distress and suicidality in population surveys.

Replies from: James_Miller
comment by James_Miller · 2021-10-19T19:26:42.613Z · LW(p) · GW(p)

Perhaps I am improperly estimating the harm of psychedelics by lumping them in with other illegal drugs.  But from the first sentence of [1] "When administered under supportive conditions, psilocybin occasioned experiences similar to spontaneously-occurring mystical experiences that, at 14-month follow-up, were considered by volunteers to be among the most personally meaningful and spiritually significant of their lives."    My read of this is that it made them less rational.  Plus, it would fill me with horror if a drug so hijacked my brain that it became what my brain perceived as its most significant experience ever.  Yes, please wirehead me when I'm feeble and in a nursing home, but not before.

Replies from: Avi Weiss
comment by Avi (Avi Weiss) · 2021-10-19T19:31:54.804Z · LW(p) · GW(p)

How does someone thinking that they had a meaningful experience make them less rational?

Replies from: Duncan_Sabien, James_Miller
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-19T20:11:04.063Z · LW(p) · GW(p)

Look, all experiences take place in the mind, in a very real way that's not just a clever conversational trick.

So whatever your most meaningful and spiritually significant moment, it's going to be "in your head."

But on a set of very reasonable priors, we would expect your most meaningful and spiritually significant head-moment to be correlated with and causally linked to some kind of unusual thing happening outside your head.  An activity, an interaction with other people, a novel observation.

Sometimes, a therapist says a few words, and a person has an internal cascade of thoughts and emotions and everything changes, and we wouldn't blink too hard at the person saying that moment was their most meaningful and spiritually significant.

It's not that the category of "just sitting there quietly thinking thoughts" is suspect.

And indeed, with the shakeup stimulus of a psychedelic, it's reasonable to imagine that people would successfully produce just such a cascade, some of the time.

But like ...

"Come on"?

The preconditions for the just-sitting-there-with-the-therapist moment to be so impactful are pretty substantial.  Someone has to have been all twisted up inside, and confused, and working on intractable problems that were causing them substantial distress.

If just sitting there and just taking a drug is itself enough to produce "holy crap, most important moment ever," then it seems to me, given my current model resolution, that one must additionally posit either

a) a supermajority of people have the precursors for the just-sitting-there-with-the-therapist moment, or something substantively similar, such that taking the drug allows them to reshuffle all the pieces and make an actual breakthrough

or

b) the drug is producing a "fake" sense of meaningfulness that's unrelated to the person's actual goals or experiences, and they're just not critically reviewing it with anything like rational/skeptical introspection.

One of these additional premises feels much more likely to me than the other, especially having read accounts of e.g. strict atheists reporting that they saw their minds being willing to believe in god while tripping.

It seems to me that if [rational] then [would be skeptical of the spiritual magnitude of just taking a drug and thinking for a bit], and that if not [skeptical, etc.] then, reasonably, an update against [rational].

Replies from: nostalgebraist, Kaj_Sotala, Benquo, Avi Weiss
comment by nostalgebraist · 2021-10-21T04:13:40.760Z · LW(p) · GW(p)

But on a set of very reasonable priors, we would expect your most meaningful and spiritually significant head-moment to be correlated with and causally linked to some kind of unusual thing happening outside your head.  An activity, an interaction with other people, a novel observation.

This doesn't feel plausible at all to me.  (This is one of two key places where I disagree with your framing)

Like, this is a huge category: "experiences that don't involve anything unusual happening around you."  It includes virtually all of the thinking we do -- especially the kind of thinking that demands concentration.  For most (all?) of us, it includes moments of immense terror and immense joy.  Fiction writers commonly spend many hours in this state, "just sitting there" and having ideas and figuring out how they fit together, before they ever commit a single word of those ideas to (digital) paper.  The same goes for artists of many other kinds.  This is where theorems are proven, where we confront our hidden shames and overcome them, (often) where we first realize that we love someone, or that we don't love someone, or . . .

The other place where I disagree with your framing: it seems like you are modeling human minds at a kind of coarse resolution, where people have mostly-coherent beliefs, with a single global "map" or world model that all the beliefs refer to,  and the beliefs have already been (at least approximately) "updated" to reflect all the person's actual experiences, etc.

That coarse-grained model is often helpful, but in this case, I think things make more sense if you "zoom in" and model human minds as very complicated bundles of heuristics, trying to solve a computationally expensive problem in real time, with lots of different topic-specific maps that sometimes conflict, and a lot of reliance on simplifying assumptions that we don't always realize we're making.

And indeed, this is much of why (just) thinking can be so interesting and meaningful: it gives us the ability to process information slower than realtime, digesting it with less aggressive reliance on cheap heuristics.  We "turn things over in our heads," disabling/re-enabling different heuristics, flipping through our different maps, etc.

I think a part of what psychedelics do is to produce a more intense version of "turning things over in one's head," disabling some of the more-ingrained heuristics that you usually forget about, getting you to apply a style of thinking X to a topic Y when you'd always normally think of Y in style Z, changing which things you mentally bin together vs. split apart.  This can yield real insights that are outside of your normal "search space," but even if not, it exposes you to a lot of potential ways of looking at things that you can use later if you deem them valuable.

(I have used psychedelics a number of times, and I have the impression that some of this use led to personal growth, although it might have been growth that would have occurred soon anyway.  I did find these experiences "meaningful," mostly in a way unrelated to "having breakthroughs" or "learning/realizing things" during the experience -- more to do with the cognitive/emotional presentation-of-new-possibilities I described in the previous paragraph.  And for the "art-like" aspect of the experience, the way I'd call a moving work of fiction or music "meaningful to me.")

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-21T06:13:17.602Z · LW(p) · GW(p)

I just can't get past what reads to me as tremendous typical mind fallacy in this comment?

Like, I think I would just straightforwardly agree with you, if you had caveatted that you were talking about LWers exclusively, or something similar.

But the whole thing above seems to think it's not about, I dunno, a normal curve of people centered on IQ 125 or something.

So much of what you're arguing falls apart once you look at the set of humans instead of the set of [fiction writers + artists + theorem provers + introspecters + people who do any kind of deliberate or active thinking at all on the regular].

As for the second bit: I'm not modeling human minds as having mostly-coherent beliefs or a single global map.

comment by Kaj_Sotala · 2021-10-20T19:45:27.431Z · LW(p) · GW(p)

a) a supermajority of people have the precursors for the just-sitting-there-with-the-therapist moment, or something substantively similar, such that taking the drug allows them to reshuffle all the pieces and make an actual breakthrough

I think that there are structures in the human mind that tend to generate various massive blind spots by default (some of them varying between people, some of them as close to universal as anything in human minds ever is), so I would consider the "a supermajority of people have the precursors for the just-sitting-there-with-the-therapist moment, or something substantively similar" hypothesis completely plausible even if nobody had ever done any drugs and we didn't have any evidence suggesting that drugs might trigger any particular insights.

A weak datapoint would be that out of the ~twelve people I've facilitated something-like-IFS for, at least five have reported it being a significantly meaningful experience based on just a few sessions (in some cases just one), even if not the most meaningful in their life. And I'm not even among the most experienced or trained IFS facilitators in the world.

Also some of people's trip reports do sound like the kind of thing that you might get from deep enough experiential therapy (IFS and the like; thinking of personal psychological insights more than the 'contact with God' stuff).

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-10-20T20:15:43.365Z · LW(p) · GW(p)

Upvoted, but I would posit that there's an enormous filter in place before Kaj encounters these twelve people and they ask him to facilitate them in something-like-IFS.

I find the supermajority hypothesis weakly plausible.  I don't think it's true, but would not be really surprised to find out that it is.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2021-10-20T20:37:22.070Z · LW(p) · GW(p)

I would posit that there's an enormous filter in place before Kaj encounters these twelve people and they ask him to facilitate them in something-like-IFS.

That's certainly true. 

comment by Benquo · 2021-10-20T15:43:20.133Z · LW(p) · GW(p)

(a) seems implied by Thoreau's opinion, which a lot of people reported finding plausible well before psychedelics, so it's not an ad hoc hypothesis:

The mass of men lead lives of quiet desperation. What is called resignation is confirmed desperation. From the desperate city you go into the desperate country, and have to console yourself with the bravery of minks and muskrats. A stereotyped but unconscious despair is concealed even under what are called the games and amusements of mankind.

A lot of recent philosophers report that people are basically miserable, and psychiatry reports that a lot of people have diagnosable anxiety or depression disorders. This seems consistent with (a).

This is also consistent with my impression, and with the long run improvements in depression - it seems like for a lot of people psychedelics allow them to become conscious of ways they were hurting themselves and living in fear / conflict.

comment by Avi (Avi Weiss) · 2021-10-20T07:45:56.970Z · LW(p) · GW(p)

In my personal and anecdotal experience, for the people who have a positive experience with psychedelics it really is more your 'a' option.

Psychedelics are less about 'thinking random thoughts that seem meaningful' and more about what you describe there - reflecting on their actual life and perspectives with a fresh/clear/different perspective.

comment by James_Miller · 2021-10-19T19:41:07.673Z · LW(p) · GW(p)

Being spiritual and mystical seems antithetical to rationality.

Replies from: Avi Weiss
comment by Avi (Avi Weiss) · 2021-10-20T07:47:33.303Z · LW(p) · GW(p)

I would agree with you there.

I wouldn't agree that describing an experience as 'meaningful' is antithetical to rationality, though.

Replies from: James_Miller
comment by James_Miller · 2021-10-20T11:34:29.359Z · LW(p) · GW(p)

Finding meaning in life felt extremely important to me, until I had a kid and then I stopped thinking about it.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2021-10-20T13:51:39.368Z · LW(p) · GW(p)

I suppose one hypothesis here is that having a kid is dangerously mind warping on the same level as psychedelics.

Replies from: algekalipso, James_Miller
comment by algekalipso · 2021-10-20T22:24:20.777Z · LW(p) · GW(p)

This is substantiated by data in "Logarithmic Scales of Pleasure and Pain [EA · GW]" (quote):

Birth of children

I have heard a number of mothers and father say that having kids was the best thing that ever happened to them. The survey showed this was a very strong pattern, especially among women. In particular, a lot of the reports deal with the very moment in which they held their first baby in their arms for the first time. Some quotes to illustrate this pattern:

The best experience of my life was when my first child was born. I was unsure how I would feel or what to expect, but the moment I first heard her cry I fell in love with her instantly. I felt like suddenly there was another person in this world that I cared about and loved more than myself. I felt a sudden urge to protect her from all the bad in the world. When I first saw her face it was the most beautiful thing I had ever seen. It is almost an indescribable feeling. I felt like I understood the purpose and meaning of life at that moment. I didn’t know it was possible to feel the way I felt when I saw her. I was the happiest I have ever been in my entire life. That moment is something that I will cherish forever. The only other time I have ever felt that way was with the subsequent births of my other two children. It was almost a euphoric feeling. It was an intense calm and contentment.
—————
I was young and had a difficult pregnancy with my first born. I was scared because they had to do an emergency c-section because her health and mine were at risk. I had anticipated and thought about how the moment would be when I finally got to hold my first child and realize that I was a mother. It was unbelievably emotional and I don’t think anything in the world could top the amount of pleasure and joy I had when I got to see and hold her for the first time.
—————
I was 29 when my son was born. It was amazing. I never thought I would be a father. Watching him come into the world was easily the best day of my life. I did not realize that I could love someone or something so much. It was at about 3am in the morning so I was really tired. But it was wonderful nonetheless.
—————
I absolutely loved when my child was born. It was a wave of emotions that I haven’t felt by anything before. It was exciting and scary and beautiful all in one.

No luck for anti-natalists… the super-strong drug-like effects of having children will presumably continue to motivate most humans to reproduce no matter how strong the ethical case against doing so may be. Coming soon: a drug that makes you feel like “you just had 10,000 children”.

comment by James_Miller · 2021-10-20T14:01:47.750Z · LW(p) · GW(p)

Yes or "Nothing in Biology Makes Sense Except in the Light of Evolution" and brains of many adults without kids generate the "your life is meaningless" feeling.

Replies from: ChristianKl, mr-hire
comment by ChristianKl · 2021-10-20T14:44:53.999Z · LW(p) · GW(p)

Yes or "Nothing in Biology Makes Sense Except in the Light of Evolution" and brains of many adults without kids generate the "your life is meaningless" feeling.

Only to the extent that they don't have something else that gives their lives meaning. One person I know recently became a mother and said that it was less impactful for her than for other women because she already had meaning beforehand.

comment by Matt Goldenberg (mr-hire) · 2021-10-21T00:22:53.525Z · LW(p) · GW(p)

It seems like both of these are the same hypothesis.

Replies from: James_Miller
comment by James_Miller · 2021-10-21T00:42:28.881Z · LW(p) · GW(p)

"warping" means shifting away from the intended shape so since evolution "programed" us to have kids the effect of having kids on the brain should not be considered "mind warping".

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2021-10-21T01:32:30.417Z · LW(p) · GW(p)

I guess it depends whether you care about evolution's goals or your own.  If the way that evolution did it was to massively change what you care about/what's meaningful after you have children, then it seems it did it in a way that's mind warping.

comment by Kaj_Sotala · 2021-10-19T19:06:46.698Z · LW(p) · GW(p)

If people who use psychedelics should be considered not yet good enough for the community, and alcohol and tobacco are worse than psychedelics, does that mean people who use alcohol or tobacco should also be considered not yet good enough for the community?

Replies from: James_Miller
comment by James_Miller · 2021-10-19T19:19:00.030Z · LW(p) · GW(p)

Not necessarily, since a willingness to violate drug laws is likely a negative signal about someone. If I were in charge of the rationalist world, I would put anti-smoking and anti-drinking way ahead of eliminating biases as priorities for self-improvement.

Replies from: Avi Weiss, ChristianKl, Kaj_Sotala, mr-hire
comment by Avi (Avi Weiss) · 2021-10-19T19:29:57.247Z · LW(p) · GW(p)

Most of the wildly successful people that exist in the western world today display current, or displayed prior, 'willingness to violate drug laws'.

comment by ChristianKl · 2021-10-20T08:38:10.785Z · LW(p) · GW(p)

I don't know of any rationalists who smoke. On the other hand I do know of rationalists who drink while I personally never drank any alcohol.

If you feel strongly about the alcohol topic, it might be worth writing a top-level post for it to make the case to more people.

Replies from: Viliam, James_Miller, Puxi Deek
comment by Viliam · 2021-10-20T15:48:08.466Z · LW(p) · GW(p)

I don't know of any rationalists who smoke.

It is interesting to put this in contrast with Objectivists. As far as I know, smoking was considered rational and high-status among them.

That might suggest that even in organizations that try to be rational, the instinct to copy high-status people is too strong, and members rationalize copying the personal quirks of the leaders as "doing the rational thing". We need a rationality movement started by a heavy drinker who also happens to be a furry, and see what their followers will consider the most rational way of life.

But maybe this is just about generations and geography. For our generation, especially in Bay Area, smoking is uncool, experimenting with drugs is cool. Occam's razor. ("Hey, not all drugs! Only the safe ones that my friends approve of, not the really harmful ones..." Exactly.)

comment by James_Miller · 2021-10-20T11:31:19.947Z · LW(p) · GW(p)

I feel much more strongly about tobacco, as it likely caused my father's death, and I never met my dad's dad, who smoked and died of lung cancer before I was born.

comment by Puxi Deek · 2021-10-20T11:46:33.038Z · LW(p) · GW(p)

I don't know of any rationalist who is addicted to food. It's not like eating more would make you healthier or increase your mental capacity even temporarily; maybe it would if they followed a certain strict diet, but I doubt that's the case for them.

Replies from: Puxi Deek
comment by Puxi Deek · 2021-10-20T23:45:46.929Z · LW(p) · GW(p)

This must've hurt like a fucking bitch.

Replies from: Benito, Ruby
comment by Ben Pace (Benito) · 2021-10-21T00:05:28.973Z · LW(p) · GW(p)

(User banned for a year. Am choosing to leave content up for transparency about mod action in this thread in particular, though have deleted some of the account's low-quality comments on other posts.)

comment by Ruby · 2021-10-21T00:54:13.980Z · LW(p) · GW(p)
comment by Kaj_Sotala · 2021-10-20T19:22:27.709Z · LW(p) · GW(p)

Not necessarily since a willingness to violate drug laws is likely a negative signal about someone.

I would think that this'd depend on what a reasonable person looking at the existing research about the drugs in question would conclude about their effect.

In the specific case of psychedelics, I think a reasonable conclusion based on the existing research would be that they do involve some risks, but can be of positive value in expectation if used responsibly.

If that's a reasonable conclusion to draw, then I wouldn't think that a person drawing that conclusion and using psychedelics as a result would be a negative signal about the person.

(In another comment, you mention the destructive effect that drugs have had on Mexico as a reason to avoid them. I'm not very familiar with the situation there, but Wikipedia tells me that the drugs traded by the Mexican drug cartels include cannabis, cocaine, methamphetamine, and heroin. Notably missing from the list are psychedelics such as psilocybin or LSD.)

Replies from: James_Miller
comment by James_Miller · 2021-10-21T00:38:40.927Z · LW(p) · GW(p)

"If that's a reasonable conclusion to draw, then I wouldn't think that a person drawing that conclusion and using psychedelics as a result would be a negative signal about the person."   I agree if you know the person only used the drugs after doing a serous analysis. 

I know very little about the sale of psychedelics, but if they are being sold by criminal organizations, my guess (and it is just a guess) is that the gangs with the most firepower are getting a cut.

comment by Matt Goldenberg (mr-hire) · 2021-10-20T13:49:32.779Z · LW(p) · GW(p)

a willingness to violate drug laws is likely a negative signal about someone.

 

I'm curious where you're getting this from. What's your evidence?

Replies from: James_Miller
comment by James_Miller · 2021-10-20T14:06:36.911Z · LW(p) · GW(p)

The illegal drug trade inflicts massive misery on the world; just look at what the drug gangs in Mexico do. A person's willingness to add to this misery to increase his short-term pleasure, in a manner that also likely harms his health, is, for me at least, a huge negative signal about him.

Replies from: Benquo, Avi Weiss
comment by Benquo · 2021-10-20T15:36:49.232Z · LW(p) · GW(p)

This would seem to be a good argument for not paying taxes or helping the US government, or in particular an argument for excluding employees of the FBI, CIA, and DEA, since they are the institutions that have engaged in active violence to cause and perpetuate this situation. It doesn't seem like a plausible argument that it's wrong to take illegal drugs, except in the "there is no ethical consumption under capitalism" sense.

comment by Avi (Avi Weiss) · 2021-10-20T14:34:40.915Z · LW(p) · GW(p)

I have to say, your extreme/rigid opposition to any form of whatever you're currently defining as 'illegal drugs' reminds me of religious people who have similarly rigid and uncompromising views on things.

Ironically, this also seems to me to be antithetical to rationality...

comment by Avi (Avi Weiss) · 2021-10-18T18:45:14.082Z · LW(p) · GW(p)

In reference to point 1, how would you define 'illegal drugs' (as defined by which country/state)?

My understanding is that if you applied that rule (people who have used or currently use 'illegal drugs' are not 'good enough' to be in the community), it would rule out at least ~90% of the humans I've ever interacted with.

Replies from: James_Miller
comment by James_Miller · 2021-10-18T18:57:50.386Z · LW(p) · GW(p)

I'm a legal realist, so I would not count marijuana in California: even though its consumption is illegal under federal law, a judge is not going to punish you for consuming it. I wrote "use", not "used".

Replies from: Linch
comment by Linch · 2021-10-21T22:43:37.325Z · LW(p) · GW(p)

Are you including productivity/prescription drugs like off-label use of Adderall or modafinil, or only recreational drugs?

I think the former is substantially less dangerous, as, among other things, there's at least in theory substantially less motivated reasoning in users wanting reasons to justify their use.

Replies from: James_Miller
comment by James_Miller · 2021-10-22T01:10:50.672Z · LW(p) · GW(p)

I'm not including prescription drugs used off-label, like Adderall or Modafinil, as I do indeed think they can increase productivity (for some) and buying them doesn't enrich drug gangs.

comment by habryka (habryka4) · 2021-10-19T03:54:53.104Z · LW(p) · GW(p)

I noticed I had downvoted this comment, and kind of felt bad about it. I think this is a reasonable suggestion to make, but also think it is a bad suggestion for a variety of reasons. Generally I prefer voting systems to reward comments I think are good to have been made, and punish comments that seem to detract from the conversation; despite my disagreement with the proposed policy, I do think this comment overall made things better. So I changed my downvote to an upvote, and am now leaving this comment to definitively disambiguate.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-19T06:15:52.285Z · LW(p) · GW(p)

I noticed the comment was in the negatives and strong-upvoted it because it seemed fine, though I disagree with it. :P I'll leave the strong upvote so as not to mess up others' votes.

comment by ozziegooen · 2021-10-18T20:45:01.426Z · LW(p) · GW(p)

Thanks for the opinion, and I find the take interesting.

I'm not a fan of the line, "How about a policy that if you use illegal drugs you are presumptively considered not yet good enough to be in the community?", in large part because of the phrase "not yet good enough". This is a really thorny topic that seems to have several assumptions baked into it that I'm uncomfortable with.

I also think that many here like at least some drugs that are "technically illegal", in part, because the FDA/federal rules move slowly. Different issue though.

I like points 2 and 3; I imagine if you had a post with just those two, it would have gotten way more upvotes.

Replies from: James_Miller
comment by James_Miller · 2021-10-18T22:52:45.803Z · LW(p) · GW(p)

Thanks for the positive comment on (2) and (3); I probably should have written them in a separate comment from (1). While I'm far from an expert on drugs or the California rationalist community, the comments on this post seem to scream "huge drug problem." I hope leaders in the community at least consider evaluating the drug situation. I agree with you about the FDA.

comment by philip_b (crabman) · 2021-10-17T09:55:09.022Z · LW(p) · GW(p)

I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who become convinced by their experience to do fake research instead), although I don't know the specifics as well. EAs generally think that the vast majority of charities are doing low-value and/or fake work.

Do I understand correctly that here by "fake" you mean low-value or only pretending to be aimed at solving the most important problems of the humanity, rather than actual falsifications going on, publishing false data, that kind of thing?

Replies from: Linch, jessica.liu.taylor
comment by Linch · 2021-10-21T22:48:14.771Z · LW(p) · GW(p)

As an example of the difficulties of the illusion of transparency: when I first read the post, my first interpretation of "largely fake research" was neither what you said nor what jessicata clarified below; I simply assumed that "fake research" => "untrue," in the sense that people who updated on >50% of the research from those orgs would on average have a worse Brier score on related topics. This didn't seem unlikely to me on the face of it, since random error, motivated reasoning, and other systemic biases can all contribute to having bad models of the world.

Since 3 people can have 4 different interpretations of the same phrase, this makes me worried that there are many other semantic confusions I didn't spot.
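
(For concreteness, the Brier score is just the mean squared error of probabilistic forecasts against yes/no outcomes, so a "worse Brier score" means less accurate probability estimates. A minimal illustrative sketch in Python; the forecasts and outcomes below are made up for the example, not taken from anything in this thread:)

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes; lower is better."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: two forecasters answer the same three yes/no questions.
outcomes = [1, 0, 1]
well_calibrated = [0.8, 0.2, 0.7]  # probabilities close to what actually happened
misled = [0.4, 0.6, 0.3]           # beliefs pushed in the wrong direction

print(brier_score(well_calibrated, outcomes))  # ~0.057
print(brier_score(misled, outcomes))           # ~0.403 (worse)
```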

comment by jessicata (jessica.liu.taylor) · 2021-10-17T15:02:52.333Z · LW(p) · GW(p)

I mean pretending to be aimed at solving the most important problems, and also creating organizational incentives for actual bias in the data. For example, I heard from someone at GiveWell that, when they created a report saying that a certain intervention had (small) health downsides as well as upsides, their supervisor said that the fact that these downsides were investigated at all (even if they were small) decreased the chance of the intervention being approved, which creates an obvious incentive for not investigating downsides.

There's also a divergence between GiveWell's internal analysis and their more external presentation and marketing; for example, while SCI is and was listed as a global health charity, GiveWell's analysis found that, while there was a measurable positive effect on income, there wasn't one on health metrics.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-17T15:16:07.620Z · LW(p) · GW(p)

while SCI is and was listed as a global health charity, GiveWell's analysis found that, while there was a measurable positive effect on income, there wasn't one on health metrics

That doesn't sound right? They say there is strong evidence that deworming kills the parasites, and weaker evidence that it both improves short term health and leads to higher incomes later in life. But in as much as it improves income, it pretty much has to be doing that via making health better: there isn't really any other plausible path from deworming to higher income. https://www.givewell.org/international/technical/programs/deworming

Replies from: Benquo
comment by Benquo · 2021-10-17T15:26:32.010Z · LW(p) · GW(p)

I'd expect that to show up in some long-run health metrics if that were the mechanism, though.

One way this could be net neutral is that it helps kids with worms but hurts kids without worms. They don't test for high parasitic load before administering these pills; they give them to all the kids (using coercive methods).

But also, killing foreign creatures living in the body is often bad for health. This is a surprising fact: on first principles I'd have predicted that mass administration of antibiotics would improve health by killing off gut bacteria, but this seems not to be generically true, and sometimes we even suffer from the missing gut bugs. (E.g. probiotics, and, more directly relevant, helminth therapy.)

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-17T15:49:21.793Z · LW(p) · GW(p)

I'd expect that to show up in some long-run health metrics

GiveWell discusses this here: https://www.givewell.org/international/technical/programs/deworming

Summary:

  • ~0.1kg weight increase

  • Unusably noisy data on hemoglobin levels

  • No effect on height

While other metrics might show a change, if collected carefully, I think all we know at this point is that no one has done that research? Which is very different from saying that we do know that there is no effect on health?

Replies from: Benquo
comment by Benquo · 2021-10-17T18:30:42.492Z · LW(p) · GW(p)

While other metrics might show a change, if collected carefully, I think all we know at this point is that no one has done that research? Which is very different from saying that we do know that there is no effect on health?


Neither Jessica nor I said there was no effect on health. It seems like maybe we agree that there was no clearly significant, actually measured effect on long-run health. And GiveWell's marketing presents its recommendations as reflecting a justified high level of epistemic confidence in the benefit claims of its top charities.

We know that people have looked for long-run effects on health and failed to find anything more significant than the levels that routinely fail replication. With an income effect that huge attributable to health, I'd expect a huge, p<.001 improvement in some metric like reaction times or fertility, or a reduction in the incidence of some well-defined, easy-to-measure malnutrition-related disease.

Worth noting that antibiotics (in a similar epistemic reference class to dewormers for reasons I mentioned above) are used to fatten livestock, so we should end up with some combination of:

  • Skepticism of weight gain as evidence of benefit.
  • Increased credence that normal humans can individually get bigger and healthier by taking antibiotics to kill off their gut bacteria.

I mostly favor the former, because when I was prescribed antibiotics for acne as a kid they made me feel miserable, which I would not describe as an improvement in health, and because in general it seems like people trying to know about this stuff think antibiotics are bad for you, and only worth it if you have an unusually harmful bacterial infection.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-18T00:24:30.321Z · LW(p) · GW(p)

Neither Jessica nor I said there was no effect on health

I had read "GiveWell's analysis found that, while there was a measurable positive effect on income, there wasn't one on health metrics" as "there was an effect on income that was measurable and positive, but there wasn't an effect on health metrics". Rereading, I think that's probably not what Jessica meant, though? Sorry!

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T00:40:33.923Z · LW(p) · GW(p)

Yeah, I meant there wasn't a measurable positive health effect.

comment by Jonas Hallgren · 2021-10-26T11:22:07.485Z · LW(p) · GW(p)

It feels kind of weird that this post only has 50 upvotes and is hidden in the layers of LessWrong like some skeleton in the closet waiting to strike at an opportune time. A lot of big names commented on this post, and even though it's not entirely true and misrepresents what happened to an extent, it would make sense to promote this type of post anyway. It sets a bad example if we don't promote it, since we then show that we don't encourage criticism, which seems very anti-rational. Maybe a summary article of this incident could be written and put on the main website? It doesn't make sense to me that a post with a whopping 900 comments should be this hidden, and it sure doesn't look good from an outside perspective.

Replies from: sil-ver, ChristianKl, TurnTrout, None
comment by Rafael Harth (sil-ver) · 2021-10-27T03:21:26.791Z · LW(p) · GW(p)

Note that the post had over 100 karma and then lost over half of it, probably because substantial criticism emerged in the comments. I've never seen that kind of a shift happen before, but it seems to show that people are thoughtful with their upvotes.

comment by ChristianKl · 2021-10-27T12:00:00.575Z · LW(p) · GW(p)

50 upvotes is more than the average post on LessWrong gets. If someone wants to write a summary of Jessica's post and the 900 comments, I think there's a good chance that it will be well received.

Part of why this post doesn't get more comments is that it's not just criticism; it was perceived as trying to interfere with the Leverage debate. If all the references to Zoe and Leverage weren't in this post, it would likely have been better received.

comment by TurnTrout · 2021-10-26T11:36:34.015Z · LW(p) · GW(p)

If someone has something new to say in a top-level post, they can say it; I would guess someone will make such a post in the next month or two. I don't think any top-down action is necessary, beyond people's natural interest in discussion.

Also, I would hardly call a post "hidden" if it has accrued 900 comments. It's been in "recently commented" almost the entire time since its posting, and it was on the front page for several days before naturally falling off due to lower karma + the passage of time.

Personally, I think it's good that people are starting to talk about other things. I don't find this interesting enough to occupy weeks of community attention. 

comment by [deleted] · 2021-10-27T22:10:13.908Z · LW(p) · GW(p)

Strong upvoted. I disagree that there's an active attempt at suppression (I agree with the other comments) but the last time I tried to dig into "is miri/cfar a cult" it was nearly impossible to do more than verify a few minor claims.

Some of that may just have been me being a few years late, but still. It would be nice if information on something so important was easier to find, rather than hidden (even if the hiding mechanism is the product of apathy instead of malice).

comment by Alexander (alexander-1) · 2021-11-20T05:51:46.000Z · LW(p) · GW(p)

This is very insightful and matches my personal experience and the experiences of some friends:

Sociology is less inherently subjective and meta than psychology, having intersubjectively measurable properties such as events in human lifetimes and social network graph structures.

I have not done too much meditation myself, but some friends who've gone very deep into that rabbit hole reported that too much meta-cognition made them hyperaware to an unhealthy extent.

I have noticed myself oscillating between learning how to make my cognition more effective (introspection, debugging, etc.) and taking breaks by just reading history, anthropology, literature, appreciating art or something more crafty/active.

I very much appreciate the sense of purpose and solidarity I get out of learning more about the humanities.

comment by agrippa · 2021-10-17T17:20:33.322Z · LW(p) · GW(p)

Thank you SO MUCH for writing this. 

The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with.  Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.

I think this is so well put and important.

I think that your fear of extreme rebuke from publishing this stuff is obviously reasonable when dealing with a group that believes itself to be world-saving. Any such org is going to need to proactively combat this fear if they want people to speak out. To me this is totally obvious. 

Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization.  Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR.  (This diffusion of responsibility, of course, doesn't help when there are actual crises, mental health or otherwise.)

I feel that this is a very important point.

I want to hear more experiences like yours. That's not "I want to hear them [before I draw conclusions]." I just want to hear them. I think this stuff should be known. 

I think most of LW believes we should not risk ostracizing a group (with respect to the rest of the world) that might save the world, by publicizing a few broken eggs. If that's the case, much discussion is completely moot. I personally kinda think that the world's best shot is the one where MIRI/CFAR type orgs don't break so many eggs. And I think transparency is the only realistic mechanism for course correction. 

Replies from: Vaniver, ChristianKl
comment by Vaniver · 2021-10-19T22:33:24.262Z · LW(p) · GW(p)

I think most of LW believes we should not risk ostracizing a group (with respect to the rest of the world) that might save the world, by publicizing a few broken eggs. If that's the case, much discussion is completely moot. I personally kinda think that the world's best shot is the one where MIRI/CFAR type orgs don't break so many eggs. And I think transparency is the only realistic mechanism for course correction. 

FWIW, I (former MIRI employee and current LW admin) saw a draft of this post before it was published, and told jessicata that I thought she should publish it, roughly because of that belief in transparency / ethical treatment of people.

comment by ChristianKl · 2021-10-18T09:48:25.106Z · LW(p) · GW(p)

It is a sign of most cults that they have a clear interior/exterior distinction. Whether or not someone is a Scientologist is, for example, very clear. The fact that CFAR doesn't have that is an indication against it being a cult.

comment by DPiepgrass · 2023-09-25T23:37:50.485Z · LW(p) · GW(p)

Though I don't remember people saying explicitly that Eliezer Yudkowsky was a better philosopher than Kant, I would guess many would have said so.

Reminds me of a Yudkowsky quote [LW · GW]:

Science isn't fair.  That's sorta the point.  An aspiring rationalist in 2007 starts with a huge advantage over an aspiring rationalist in 1957.  It's how we know that progress has occurred.

To me the thought of voluntarily embracing a system explicitly tied to the beliefs of one human being, who's dead, falls somewhere between the silly and the suicidal. 

So it's not that Eliezer is a better philosopher. Kant might easily have been a better philosopher, though it's true I haven't read Kant. But I expect Eliezer to be more advanced by having started from a higher baseline.

(However, I do suspect that Eliezer (like most of us) isn't skilled enough at the art he described, because as far as I've seen, the chain of reasoning in his expectation of ruinous AGI [LW · GW] on a short timeline seems, to me, surprisingly incomplete and unconvincing. My P(near-term doom) is shifted upward as much based on his reputation as anything else, which is not how it should be. Though my high P(long-term doom) is more self-generated and recently shifted down by others.)

comment by seed · 2021-10-17T08:53:26.109Z · LW(p) · GW(p)

Scott Aaronson, for example, blogs about "blank faced" non-self-explaining authoritarian bureaucrats being a constant problem in academia. Venkatesh Rao writes about the corporate world, and the picture presented is one of a simulation constantly maintained through improv.

Well, I once met a person in academia who was convinced she'd be utterly bored anywhere outside academia. 

If you want an unbiased perspective on what life is like outside the rationality community, you should talk to people not associated with the rationality community. (Yes, Venkatesh Rao doesn't blog here as far as I can tell, but he is repeatedly mentioned on LW, so counts as "associated" for the purpose of this exercise.)

comment by EI · 2021-10-21T01:56:14.985Z · LW(p) · GW(p)

I wouldn't recommend psychedelics to anyone. If I had the choice to have never taken them, I would've chosen that over where I am today. You learn quite a bit about reality and your own life, but at the end of the day, it's not really going to help you find a meaning in life that would ultimately be healthy for your own good as a mortal being. For me, things just seem quite meaningless. Before, life was still enjoyable in its own way. They threaten me with more good times, yet I can't see how my life would be any different after that point. Oh, maybe I get better sleep, maybe I get more human interactions, or just more choices in terms of which distractions I choose for the day. Before psychedelics I had my own preferences for the distractions, but now they all seem equally fine. When you lose yourself like that, it leads to depression. The easiest way to get out of it is to find meaning again, but that's easier said than done. The way people have constructed these scenarios for me has been on the basis of a normal person's paradigm. They never worked because they can't really measure just how far away from that norm I am. Without psychedelics, I would probably still be able to find something to hold on to.

It's like you meditate so you won't hit that dude who stepped on you. Sure, you don't hit him anymore, but is life still just as enjoyable, or is it different now? Life used to be much more flavorful, but now that you've achieved more emotional stability/peace, life has become a lot more bland. They both have their pros and cons, which is why I think they have so much material on the afterlife for people to focus on. Something to look forward to, no more good times.

I'll probably just focus on developing empathy with my wife, something that I feel more worthwhile doing than anything else in the world. Meanwhile I was thinking of getting into Rust again, but it feels so pointless. The main difference between psychedelics and no psychedelics is whether I'm looking forward to it. Normally a person would look forward to something good happening in their lives, but if you feel like your life isn't that bad at all, what difference does it make? That's the key to the difference of having a flavorful life vs a bland life. I say this because it feels like something new since it hasn't happened yet, but who knows how I'd feel a year down the road. I've always considered these in terms of time, compared to my own experiences I've had so far. Like when I was living with my wife, I had so much enthusiasm about so many different things. I think my day following the pattern of from the worst to the best has to do with how much I have to focus on. When you have nothing you feel strongly about, your days start off pretty badly, whereas you normally would just focus on things that you focused on the day before. Now everyday is a new day to me. I might focus on making music for a few days or whatever, but they never last longer than a month. That's one thing I kept track of. When I look at other people who just usually do the same thing everyday without a hint of complaint, it makes me a bit jealous of how good of a life they are having instead of this bullshit of having to find something to do every single day. If I had a bit more emotional investment in things, then I would've been just like them, and I don't have to look for shit everyday anymore. My wisdom tells me that having my wife in my life isn't really going to change this very much. It'll just be a hype for awhile and then the emotional investment will reach some baseline level. Hopefully regular physical and emotional intimacy can keep the baseline level rather high. Good thing for my wife that I don't really feel like playing video games everyday anymore, but I'd rather have more flavor in my life than having to lick off the dried on food of a broken pot.

A good way to put it is that now I'm forced to look at the big picture regarding almost everything that I put any of my emotions toward. Before psychedelics, I could just stay in the small picture and get on with my life. Now I can't help myself having to go through the whole process of looking at everything at all different levels. At first it seems exciting because it's a new skill you've developed, but once it becomes a habit that you can't get out of even when you know that you'd be better off not thinking too much, the usefulness becomes much more doubtful. The skill is still very useful if you just want to have a good way to analyze things, but when it encompasses your entire life, you just wish you can find some enjoyment of staying foolish for awhile. The psychedelics themselves won't do this to you. You'd have to intentionally practice this, but the drugs definitely help in guiding you how you want to develop a new habit. I just didn't know that by developing such habit, I would lose interest in everything. The only thing the drug does is to help you to be more self-aware while tripping. I believe I said this awhile ago: if you want the work you do while tripping to have any impact, you have to keep in mind/remember what it was like while tripping and carry on the same type of mental work while you aren't tripping. I had a lot of time on my hands for the last couple of years, and I tried to follow the same pattern of thoughts I was having while tripping when I wasn't tripping. Here I am today. I've had all kinds of ups and downs with different emotional investments as I recall different periods of my life. If you have things that you care about, little annoyances in life would quickly be forgotten about, but if you have nothing, you end up thinking too much about them. Whether they are worthwhile to think about or not, I have to consciously remove myself through self-awareness, which takes effort. Life just becomes so much more work than before where you just go about your day and let natural distractions guide you through life.

For a while I got a lot of motivation from developing my skills and getting good at doing things, which is why I put so much time into music. Once you've reached a certain plateau of satisfaction, you look at how far you've come and how far there is still to go, and you think, what's the point of climbing even higher? What difference would that make? If you have an ego, that'd probably be different, but I'm doing it purely to seek out my own meaning. So I end up switching and finding something else to get good at, and then you realize you are just gonna be doing the same thing again. Which is why I've stopped learning about quantum physics and Rust, knowing that there is quite a bit of work to do there but feeling it's quite meaningless at the same time. Sometimes I take pleasure in knowing that I still suck at things that I've put a lot of work into. The idea of "use it or lose it" can still bring meaning to making progress. Now it's just making progress for progress's sake.

comment by Kenny · 2021-10-18T19:13:12.054Z · LW(p) · GW(p)

I don't think psychedelics really do much for most people. I think those who say they have been fundamentally altered by them most likely had a preconceived notion/prior before getting into the whole spiel. It's just a means to an end for them. Thinking that psychedelics would change them fundamentally made it easier to give in to the notion that they've fundamentally changed as a result of taking psychedelics, rather than seeing the psychedelics as one part of an entire psychological journey they were going through, which would have happened regardless of whether psychedelics were involved. Psychedelics are well known for making people open to suggestion. I think that's ultimately what happened. If you weren't going to suggest it to yourself in the first place, or have someone else suggest it to you, you wouldn't have thought of the trip as something special.

Replies from: viljami-virolainen
comment by Viljami (viljami-virolainen) · 2021-10-18T19:51:37.165Z · LW(p) · GW(p)

You seem to be claiming that without somebody giving you suggestions, people would not think of psychedelic trips as something special. 

Well, as the discoverer of the substance, Hofmann surely did not have any preconceptions, since the first time he was exposed to LSD it was an accident, and he had no idea of its psychedelic properties.

His account is freely available online here: https://www.hallucinogens.org/hofmann/child1.htm

A quote where he describes the second exposure, which was an intentional experiment: "This self-experiment showed that LSD-25 behaved as a psychoactive substance with extraordinary properties and potency. There was to my knowledge no other known substance that evoked such profound psychic effects in such extremely low doses, that caused such dramatic changes in human consciousness and our experience of the inner and outer world."

Replies from: Kenny
comment by Kenny · 2021-10-18T21:02:20.039Z · LW(p) · GW(p)

That is exactly what I said in another comment about changing your state of mind and nothing else. Suggestions are outside of that change in state of mind. You seem to be confusing the effects of psychedelics with voodoo/woo/spiritual stuff. I know that viewing psychedelics as something related to spirituality is a popular rhetoric among both users and nonusers. The spirituality is what I mean by suggestion. You are suggesting something that has nothing to do with the mechanism of action of the drug.

Replies from: viljami-virolainen
comment by Viljami (viljami-virolainen) · 2021-10-19T11:38:40.514Z · LW(p) · GW(p)

Set, setting and suggestions can affect the experience for sure. Personal values, culture, religion etc. can make a difference in how the experience progresses, how it is interpreted and integrated and so on.

But my understanding from both the scientific literature and the anecdotal reports is that the nature of the mechanism of action of these drugs indeed is such that they can result in mystical experiences in people who take it.

See for example:

>Psychedelic drugs have long been known to be capable of inducing mystical or transcendental experiences. However, given the common “recreational” nature of much present-day psychedelic use, with typical doses tending to be lower than those commonly taken in the 1960s, the extent to which illicit use of psychedelics today is associated with mystical experiences is not known. Furthermore the mild psychedelic MDMA (“Ecstasy”) is more popular today than “full” psychedelics such as LSD or psilocybin, and the contribution of illicit MDMA use to mystical experiences is not known. The present study recruited 337 adults from the website and newsletter of the Multidisciplinary Association for Psychedelic Studies (MAPS), most of whom reported use of a variety of drugs both licit and illicit including psychedelics. Although only a quarter of the sample reported “spiritual” motives for using psychedelics, use of LSD and psilocybin was significantly positively related to scores on two well-known indices of mystical experiences in a dose-related manner, whereas use of MDMA, cannabis, cocaine, opiates and alcohol was not. Results suggest that even in today's context of “recreational” drug use, psychedelics such as LSD and psilocybin, when taken at higher doses, continue to induce mystical experiences in many users.
"Illicit Use of LSD or Psilocybin, but not MDMA or Nonpsychedelic Drugs, is Associated with Mystical Experiences in a Dose-Dependent Manner"
https://www.tandfonline.com/doi/abs/10.1080/02791072.2012.736842
 

Replies from: Kenny
comment by Kenny · 2021-10-20T00:13:10.671Z · LW(p) · GW(p)

The study's approach to mysticism seems to be more qualitative than quantitative, based on self-reporting and questionnaires, mostly from members of MAPS, who probably have certain variables that aren't really controlled for compared to the general population.

Mysticism Scale. This 32-item questionnaire (Hood 1975) contains items that ask participants about past mystical experiences (if any). The Mysticism Scale has been used in research on the psychology of religion (Spilka et al. 2003) but has only previously been applied to drug experiences by Griffiths and colleagues (2006), who used it to assess psychedelic drug (psilocybin) experiences. The Mysticism Scale yields a total score based on three dimensions of mystical experience: noetic quality (e.g., "I have never experienced anything to be divine," reverse-scored); introvertive mysticism (e.g., "I have never had an experience which I was unable to express adequately through language," reverse-scored); and extrovertive mysticism (e.g., "I have had an experience in which I felt everything in the world to be part of the same whole"). The items are rated on a nine-point scale ranging from −4 = "this description is extremely not true of my own experience or experiences" through 0 = "I cannot decide" to +4 = "this description is extremely true of my own experience or experiences." The psychometric properties of this scale have been reported to be sound (Reinert & Steifler 1993).
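
(A minimal sketch of how a total score on a scale with reverse-scored items is typically computed; the item names and responses below are invented for illustration and are not taken from the actual instrument:)

```python
# Hypothetical scoring of a questionnaire with reverse-scored items on a
# -4..+4 response scale; the item names and ratings here are made up.
responses = {
    "noetic_1": 3,          # -4 = "extremely not true" ... +4 = "extremely true"
    "introvertive_1": -2,
    "extrovertive_1": 4,
}
reverse_scored = {"noetic_1", "introvertive_1"}  # negatively keyed items

def total_score(responses, reverse_scored):
    """Sum the ratings, flipping the sign of reverse-scored items."""
    return sum(-r if item in reverse_scored else r for item, r in responses.items())

print(total_score(responses, reverse_scored))  # -3 + 2 + 4 = 3
```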

comment by TAG · 2021-10-17T01:47:23.128Z · LW(p) · GW(p)

Someone in the community told me that for me to think AGI probably won’t be developed soon, I must think I’m better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness

In relation to a massive claim of his own specialness...

comment by [deleted] · 2021-10-17T21:24:45.900Z · LW(p) · GW(p)

As someone who only sticks around here out of morbid fascination, congrats on realizing that all of this is not okay.  

Replies from: habryka4, Viliam, IlyaShpitser
comment by habryka (habryka4) · 2021-10-18T02:13:14.839Z · LW(p) · GW(p)

I think that's not really what the OP said, at least not in the naive way you seem to express here. You might disagree with them, but the OP is very specifically saying that they thought what they experienced was better than what they would have experienced in most other places in the world, or most other paths they could have taken.

I think judging the author to be wrong about their preferences, given their experiences, is not a totally crazy thing to do, given the situation, but your comment seems to somewhat misrepresent the author. To be clear, I do think the author believes that this was all not okay, but in a way that working at almost any job is not okay, in that the whole world is kind of crazy about how coercive it is towards people.

comment by Viliam · 2021-10-18T15:54:10.782Z · LW(p) · GW(p)

congrats on realizing that all of this is not okay.

Ironically, some people realized something similar long ago [LW(p) · GW(p)], but...

comment by IlyaShpitser · 2021-10-18T15:37:44.205Z · LW(p) · GW(p)

+1 to all this.

comment by jdp · 2021-10-17T02:35:00.945Z · LW(p) · GW(p)

Replies from: SaidAchmiz