My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

post by jessicata (jessica.liu.taylor) · 2021-10-16T21:28:12.427Z · LW · GW · 950 comments

Contents

  Background: choosing a career
  Trauma symptoms and other mental health problems
  Why do so few speak publicly, and after so long?
  Strange psycho-social-metaphysical hypotheses in a group setting
  World-saving plans and rarity narratives
  Debugging
  Other issues
  Conclusion

I appreciate Zoe Curzi's revelations of her experience with Leverage.  I know how hard it is to speak up when no or few others do, and when people are trying to keep things under wraps.

I haven't posted much publicly about my experiences working as a researcher at MIRI (2015-2017) or around CFAR events, to a large degree because I've been afraid.  Now that Zoe has posted about her experience, I find it easier to do so, especially after the post was generally well-received by LessWrong.

I felt moved to write this, not just because of Zoe's post, but also because of Aella's commentary:

I've found established rationalist communities to have excellent norms that prevent stuff like what happened at Leverage. The times where it gets weird is typically when you mix in a strong leader + splintered, isolated subgroup + new norms. (this is not the first time)

Upon reading it, this seemed to me definitely false.  Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).

I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases.  I also caution against blame in general, in situations like these, where many people (including me!) contributed to the problem, and have kept quiet for various reasons.  With good reason, it is standard for truth and reconciliation events to focus on restorative rather than retributive justice, and include the possibility of forgiveness for past crimes.

As a roadmap for the rest of the post, I'll start by describing some background, describe some trauma symptoms and mental health issues I and others have experienced, and describe the actual situations that these mental events were influenced by and "about" to a significant extent.

Background: choosing a career

After I finished my CS/AI Master's degree at Stanford, I faced a choice of what to do next.  I had a job offer at Google for machine learning research and a job offer at MIRI for AI alignment research.  I had also previously considered pursuing a PhD at Stanford or Berkeley; I'd already done undergrad research at CoCoLab, so this could have easily been a natural transition.

I'd decided against a PhD on the basis that research in industry was a better opportunity to work on important problems that impact the world; since then I've gotten more information from insiders that academia is a "trash fire" (not my quote!), so I don't regret this decision.

I was faced with a decision between Google and MIRI.  I knew that at MIRI I'd be taking a pay cut.  On the other hand, I'd be working on AI alignment, an important problem for the future of the world, probably significantly more important than whatever I'd be working on at Google.  And I'd get an opportunity to work with smart, ambitious people, who were structuring their communication protocols and life decisions around the content of the LessWrong Sequences.

These Sequences contained many ideas that I had developed or discovered independently, such as functionalist theory of mind, the idea that Solomonoff Induction was a formalization of inductive epistemology, and the idea that one-boxing in Newcomb's problem is more rational than two-boxing.  The scene attracted thoughtful people who cared about getting the right answer on abstract problems like this, making for very interesting conversations.

Research at MIRI was an extension of such interesting conversations to rigorous mathematical formalism, making it very fun (at least for a time).  Some of the best research I've done was at MIRI (reflective oracles, logical induction, others).  I met many of my current friends through LessWrong, MIRI, and the broader LessWrong Berkeley community.

When I began at MIRI (in 2015), there were ambient concerns that it was a "cult"; this was a set of people with a non-mainstream ideology that claimed that the future of the world depended on a small set of people that included many of them.  These concerns didn't seem especially important to me at the time.  So what if the ideology is non-mainstream as long as it's reasonable?  And if the most reasonable set of ideas implies high impact from a rare form of research, so be it; that's been the case at times in history.

(Most of the rest of this post will be negative-valenced, like Zoe's post; I wanted to put some things I liked about MIRI and the Berkeley community up-front.  I will be noting parts of Zoe's post and comparing them to my own experience, which I hope helps to illuminate common patterns; it really helps to have an existing different account to prompt my memory of what happened.)

Trauma symptoms and other mental health problems

Back to Zoe's post.  I want to disagree with a frame that says that the main thing that's bad was that Leverage (or MIRI/CFAR) was a "cult".  This makes it seem like what happened at Leverage is much worse than what could happen at a normal company.  But, having read Moral Mazes and talked to people with normal corporate experience (especially in management), I find that "normal" corporations are often quite harmful to the psychological health of their employees, e.g. causing them to have complex PTSD symptoms, to see the world in zero-sum terms more often, and to have more preferences for things to be incoherent.  Normal startups are commonly called "cults", with good reason.  Overall, there are both benefits and harms of high-demand ideological communities ("cults") compared to more normal occupations and social groups, and the specifics matter more than the general class of something being "normal" or a "cult", although the general class affects the structure of the specifics.

Zoe begins by listing a number of trauma symptoms she experienced.  I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.

The psychotic break was in October 2017, and involved psychedelic use (as part of trying to "fix" multiple deep mental problems at once, which was, empirically, overly ambitious); although people around me to some degree tried to help me, this "treatment" mostly made the problem worse, so I was placed in 1-2 weeks of intensive psychiatric hospitalization, followed by 2 weeks in a halfway house.  This was followed by severe depression lasting months, and less severe depression from then on, which I still haven't fully recovered from.  I had PTSD symptoms after the event and am still recovering.

During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation.  I was catatonic for multiple days, afraid that by moving I would cause harm to those around me.  This is in line with scrupulosity-related post-cult symptoms.

Talking about this is to some degree difficult because it's normal to think of this as "really bad".  Although it was exceptionally emotionally painful and confusing, the experience taught me a lot, very rapidly; I gained and partially stabilized a new perspective on society and my relation to it, and to my own mind.  I have much more ability to relate to normal people now, who are, for the most part, also traumatized.

(Yes, I realize how strange it is that I was more able to relate to normal people by occupying an extremely weird mental state where I thought I was destroying the world and was ashamed and suicidal regarding this; such is the state of normal Americans, apparently, in a time when suicidal music is extremely popular among youth.)

Like Zoe, I have experienced enormous post-traumatic growth.  To quote the song "I Am Woman": "Yes, I'm wise, but it's wisdom born of pain.  I guess I've paid the price, but look how much I've gained."

While most people around MIRI and CFAR didn't have psychotic breaks, there were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis.  There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events including a relatively exclusive AI-focused one.

I heard that the paranoid person in question was concerned about a demon inside him, implanted by another person, trying to escape.  (I knew the other person in question, and their own account was consistent with attempting to implant mental subprocesses in others, although I don't believe they intended anything like this particular effect).  My own actions while psychotic later that year were, though physically nonviolent, highly morally confused; I felt that I was acting very badly and "steering in the wrong direction", e.g. in controlling the minds of people around me or subtly threatening them, and was seeing signs that I was harming people around me, although none of this was legible enough to seem objectively likely after the fact.  I was also extremely paranoid about the social environment, being unable to sleep normally due to fear.

There are even cases of suicide in the Berkeley rationality community associated with scrupulosity and mental self-improvement (specifically, Maia Pasek/SquirrelInHell [LW · GW], and Jay Winterford/Fluttershy [LW · GW], both of whom were long-time LessWrong posters; Jay wrote an essay about suicidality, evil, domination, and Roko's basilisk months before the suicide itself).  Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz.  (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)

The cases discussed are not always of MIRI/CFAR employees, so they're hard to attribute to the organizations themselves, even if they were clearly in the same or a nearby social circle.  Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization.  Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR.  (This diffusion of responsibility, of course, doesn't help when there are actual crises, mental health or otherwise.)

Obviously, for every case of poor mental health that "blows up" and is noted, there are many cases that aren't.  Many people around MIRI/CFAR and Leverage, like Zoe, have trauma symptoms (including "cult after-effect symptoms") that aren't known about publicly until the person speaks up.

Why do so few speak publicly, and after so long?

Zoe discusses why she hadn't gone public until now.  She first cites fear of response:

Leverage was very good at convincing me that I was wrong, my feelings didn't matter, and that the world was something other than what I thought it was. After leaving, it took me years to reclaim that self-trust.

Clearly, not all cases of people trying to convince each other that they're wrong are abusive.  There's an extra dimension to institutional gaslighting: people telling you something you have no reason to expect they actually believe, people being defensive and blocking information, giving implausible counter-arguments, trying to make you doubt your account and agree with their bottom line.

Jennifer Freyd writes about "betrayal blindness", a common problem where people hide from themselves evidence that their institutions have betrayed them.  I experienced this around MIRI/CFAR.

Some background on AI timelines: At the Asilomar Beneficial AI conference, in early 2017 (after AlphaGo was demonstrated in late 2016), I remember another attendee commenting on a "short timelines bug" going around.  Apparently a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years.

This trend in belief included MIRI/CFAR leadership; one person commented that he noticed his timelines trending only towards getting shorter, and decided to update all at once.  I've written [LW · GW] about AI timelines in relation to political motivations before (long after I actually left MIRI).

Perhaps more important to my subsequent decisions, the AI timelines shortening triggered an acceleration of social dynamics.  MIRI became very secretive about research.  Many researchers were working on secret projects, and I learned almost nothing about these.  I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact.  Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.

I had disagreements with the party line, such as on when human-level AGI was likely to be developed and about security policies around AI, and there was quite a lot of effort to convince me of their position, that AGI was likely coming soon and that I was endangering the world by talking openly about AI in the abstract (not even about specific new AI algorithms). Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness [EDIT: Eliezer himself and Sequences-type thinking, of course, would aggressively disagree [LW(p) · GW(p)] with the epistemic methodology advocated by this person].  I experienced a high degree of scrupulosity about writing anything even somewhat critical of the community and institutions (e.g. this post).  I saw evidence of bad faith around me, but it was hard to reject the frame for many months; I continued to worry about whether I was destroying everything by going down certain mental paths and not giving the party line the benefit of the doubt, despite its increasing absurdity.

Like Zoe, I was definitely worried about fear of response.  I had paranoid fantasies about a MIRI executive assassinating me.  The decision theory research I had done came to life, as I thought about the game theory of submitting to a threat of a gun, in relation to how different decision theories respond to extortion.

This imagination, though extreme (and definitely reflective of a cognitive error), was to some degree re-enforced by the social environment.  I mentioned the possibility of whistle-blowing on MIRI to someone I knew, who responded that I should consider talking with Chelsea Manning, a whistleblower who is under high threat.  There was quite a lot of paranoia at the time, both among the "establishment" (who feared being excluded or blamed) and "dissidents" (who feared retaliation by institutional actors).  (I would, if asked to take bets, have bet strongly against actual assassination, but I did fear other responses.)

More recently (in 2019), multiple masked protesters at a CFAR event (handing out pamphlets critical of MIRI and CFAR) had a SWAT team called on them by camp administrators (not CFAR people, although a CFAR executive had previously called the police about this group); they were arrested and are now facing the possibility of long jail time.  While this group of people (Ziz and some friends/associates) chose an unnecessarily risky way to protest, hearing about this made me worry about violently authoritarian responses to whistleblowing, especially since I first heard the story under the impression that a CFAR-adjacent person had called the cops to say the protesters had a gun (which they didn't have).

Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past events secretively.  This matches my experience.

Like Zoe, I care about the people I interacted with during the time of the events (who are, for the most part, colleagues who I learned from), and I don't intend to cause harm to them through writing about these events.

Zoe discusses an unofficial NDA people signed as they left, agreeing not to talk badly of the organization.  While I wasn't pressured to sign an NDA, there were significant security policies discussed at the time (including the one about researchers not asking each other about research).  I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous (the blog post in question would have been similar to the AI forecasting work I did later, here and here; judge for yourself how dangerous this is).  This made it hard to talk about the silencing dynamic; if you don't have the freedom to speak about the institution and limits of freedom of speech, then you don't have freedom of speech.

(Is it a surprise that, after over a year in an environment where I was encouraged to think seriously about the possibility that simple actions such as writing blog posts about AI forecasting could destroy the world, I would develop the belief that I could destroy everything through subtle mental movements that manipulate people?)

Years before, MIRI had a non-disclosure agreement that members were pressured to sign, as part of a legal dispute with Louie Helm.

I was certainly socially discouraged from revealing things that would harm the "brand" of MIRI and CFAR, by executive people.  There was some discussion at the time of the possibility of corruption in EA/rationality institutions (e.g. Ben Hoffman's posts criticizing effective altruism, GiveWell, and the Open Philanthropy Project); a lot of this didn't end up on the Internet due to PR concerns.

Someone who I was collaborating with at the time (Michael Vassar) was commenting on social epistemology and the strengths and weaknesses of various people's epistemology and strategy, including people who were leaders at MIRI/CFAR.  Subsequently, Anna Salamon said that Michael was causing someone else at MIRI to "downvote Eliezer in his head" and that this was bad because it meant that the "community" would not agree about who the leaders were, and would therefore have akrasia issues due to the lack of agreement on a single leader in their head telling them what to do.  (Anna says, years later, that she was concerned about bias in selectively causing downvotes rather than upvotes; however, at the time, based on what was said, I had the impression that the primary concern was about coordination around common leadership rather than bias specifically.)

This seemed culty to me and some friends; it's especially evocative in relation to Julian Jaynes' writing about bronze age cults, which details a psychological model in which idols/gods give people voices in their head telling them what to do.

(As I describe these events in retrospect they seem rather ridiculous, but at the time I was seriously confused about whether I was especially crazy or in-the-wrong, and the leadership was behaving sensibly.  If I were the type of person to trust my own judgment in the face of organizational mind control, I probably wouldn't have been hired in the first place; everything I knew about how to be hired would point towards having little mental resistance to organizational narratives.)

Strange psycho-social-metaphysical hypotheses in a group setting

Zoe gives a list of points showing how "out of control" the situation at Leverage got.  This is consistent with what I've heard from other ex-Leverage people.

The weirdest part of the events recounted is the concern about possibly-demonic mental subprocesses being implanted by other people. As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors ("know-how") and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver's conscious goals. According to IFS-like psychological models, it's common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it's hard to rule out based on physics or psychology.

As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR. These strange experiences are, as far as I can tell, part of a more general social phenomenon around that time period; I recall a tweet commenting that the election of Donald Trump convinced everyone that magic was real.

Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community.  While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible.  (I noted at the time that there might be a sense in which different people have "auras" in a way that is no less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.)

As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.

The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with.  Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.

Alternatively, like me, they can explore these metaphysics while:

Being able to discuss somewhat wacky experiential hypotheses (like the possibility of people spreading mental subprocesses to each other) in a group setting, and having the concern actually taken seriously as something that could seem true from some perspective (and which is hard to definitively rule out), seems much more conducive to people's mental well-being than refusing to have that discussion, leaving people to struggle with (what they think is) mental subprocess implantation on their own.  Leverage definitely had large problems with these discussions, and perhaps tried to reach more intersubjective agreement about them than was plausible (leading to over-reification, as Zoe points out), but these seem less severe than the problems resulting from refusing to have them, such as psychiatric hospitalization and jail time.

"Psychosis" doesn't have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Laing's work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.

World-saving plans and rarity narratives

Zoe cites the fact that Leverage has a "world-saving plan" (which included taking over the world) and considered Geoff Anders and Leverage to be extremely special, e.g. Geoff being possibly the best philosopher ever:

Within a few months of joining, a supervisor I trusted who had recruited me confided in me privately, “I think there’s good reason to believe Geoff is the best philosopher who’s ever lived, better than Kant. I think his existence on earth right now is an historical event.”

Like Leverage, MIRI had a "world-saving plan".  This is no secret; it's discussed in an Arbital article written by Eliezer Yudkowsky.  Nate Soares frequently talked about how it was necessary to have a "plan" to make the entire future ok, to avert AI risk; this plan would need to "backchain" from a state of no AI risk and may, for example, say that we must create a human emulation using nanotechnology that is designed by a "genie" AI, which does a narrow task rather than taking responsibility for the entire future; this would allow the entire world to be taken over by a small group including the emulated human. [EDIT: See Nate's clarification [LW(p) · GW(p)], the small group doesn't have to be MIRI specifically, and the upload plan is an example of a plan rather than a fixed super-plan.]

I remember taking on more and more mental "responsibility" over time, noting the ways in which people other than me weren't sufficient to solve the AI alignment problem, and I had special skills, so it was uniquely my job to solve the problem.  This ultimately broke down, and I found Ben Hoffman's post on responsibility to resonate (which discusses the issue of control-seeking).

The decision theory of backchaining and taking over the world is somewhat beyond the scope of this post.  There are circumstances where back-chaining is appropriate, and "taking over the world" might be necessary, e.g. if there are existing actors already trying to take over the world and none of them would implement a satisfactory regime.  However, there are obvious problems with multiple actors each attempting to control everything, which are discussed in Ben Hoffman's post.

This connects with what Zoe calls "rarity narratives".  There were definitely rarity narratives around MIRI/CFAR.  Our task was to create an integrated, formal theory of values, decisions, epistemology, self-improvement, etc ("Friendliness theory"), which would help us develop Friendly AI faster than the rest of the world combined was developing AGI (which was, according to leaders, probably in less than 20 years).  It was said that a large part of our advantage in doing this research so fast was that we were "actually trying" and others weren't.  It was stated by multiple people that we wouldn't really have had a chance to save the world without Eliezer Yudkowsky (obviously implying that Eliezer was an extremely historically significant philosopher).

Though I don't remember people saying explicitly that Eliezer Yudkowsky was a better philosopher than Kant, I would guess many would have said so.  No one there, as far as I know, considered Kant worth learning from enough to actually read the Critique of Pure Reason in the course of their research; I only did so years later, and I'm relatively philosophically inclined.  I would guess that MIRI people would consider a different set of philosophers relevant, e.g. would include Turing and Einstein as relevant "philosophers", and I don't have reason to believe they would consider Eliezer more relevant than these, though I'm not certain either way.  (I think Eliezer is a world-historically-significant philosopher, though not as significant as Kant or Turing or Einstein.)

I don't think it's helpful to oppose "rarity narratives" in general.  People need to try to do hard things sometimes, and actually accomplishing those things would make the people in question special, and that isn't a good argument against trying the thing at all.  Intellectual groups with high information integrity, e.g. early quantum mechanics people, can have a large effect on history.  I currently think the intellectual work I do is pretty rare and important, so I have a "rarity narrative" about myself, even though I don't usually promote it.  Of course, a project claiming specialness while displaying low information integrity is, effectively, asking for more control and resources than it can beneficially use.

Rarity narratives can make a group of people more insular, concentrating relevance in the group rather than learning from other sources (in the past or the present), centering local social dynamics on a small number of special people, and increasing pressure on people to try to do (or pretend to try to do) things beyond their actual abilities; Zoe and I both experienced these effects.

(As a hint to evaluating rarity narratives yourself: compare Great Thinker's public output to what you've learned from other public sources; follow citations and see where Great Thinker might be getting their ideas from; read canonical great philosophy and literature; get a quantitative sense of how much insight is coming from which places throughout spacetime.)

The object-level specifics of each world-saving plan matter, of course; I think most readers of this post will be more familiar with MIRI's world-saving plan, especially since Zoe's post provides few object-level details about the content of Leverage's plan.

Debugging

Rarity ties into debugging; if what makes us different is that we're Actually Trying and the other AI research organizations aren't, then we're making a special psychological claim about ourselves, that we can detect the difference between actually and not-actually trying, and cause our minds to actually try more of the time.

Zoe asks whether debugging was "required"; she notes:

The explicit strategy for world-saving depended upon a team of highly moldable young people self-transforming into Elon Musks.

I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes".  This part of the plan was the same [EDIT: Anna clarifies [LW(p) · GW(p)] that, while some people becoming like Elon Musk was some people's plan, there was usually acceptance of people not changing themselves; this might to some degree apply to Leverage as well].

Self-improvement was a major focus around MIRI and CFAR, and at other EA orgs.  It often used standard CFAR techniques, which were taught at workshops.  It was considered important to psychologically self-improve to the point of being able to solve extremely hard, future-lightcone-determining problems.

I don't think these are bad techniques, for the most part.  I think I learned a lot by observing and experimenting on my own mental processes.  (Zoe isn't saying Leverage's techniques are bad either, just that you could get most of them from elsewhere.)

Zoe notes a hierarchical structure where people debugged people they had power over:

Trainers were often doing vulnerable, deep psychological work with people with whom they also lived, made funding decisions about, or relied on for friendship. Sometimes people debugged each other symmetrically, but mostly there was a hierarchical, asymmetric structure of vulnerability; underlings debugged those lower than them on the totem pole, never their superiors, and superiors did debugging with other superiors.

This was also the case around MIRI and CFAR.  A lot of debugging was done by Anna Salamon, head of CFAR at the time; Ben Hoffman noted that "every conversation with Anna turns into an Anna-debugging-you conversation", which resonated with me and others.

There was certainly a power dynamic of "who can debug whom"; to be the more advanced psychologist is to offer therapy to others, pointing out when they're being "defensive", while not accepting the same observations in return.  This power dynamic is also present in normal therapy, although the profession has norms, such as only getting therapy from strangers, which change the situation.

How beneficial or harmful this was depends on the details.  I heard that "political" discussions at CFAR (e.g. determining how to resolve conflicts between people at the organization, which could result in people leaving the organization) were mixed with "debugging" conversations, in a way that would make it hard for people to focus primarily on the debugged person's mental progress without imposing pre-determined conclusions.  Unfortunately, when there are few people with high psychological aptitude around, it's hard to avoid "debugging" conversations having political power dynamics, although it's likely that the problem could have been mitigated.

[EDIT: See PhoenixFriend's pseudonymous comment [LW(p) · GW(p)], and replies to it, for more on power dynamics including debugging-related ones at CFAR specifically.]

It was really common for people in the social space, including me, to have a theory about how other people are broken, and how to fix them, by getting them to understand a deep principle you do and they don't.  I still think most people are broken and don't understand deep principles that I or some others do, so I don't think this was wrong, although I would now approach these conversations differently.

A lot of the language from Zoe's post, e.g. "help them become a master", resonates.  There was an atmosphere of psycho-spiritual development, often involving Kegan stages.  There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy [EDIT: see Duncan's comment [LW(p) · GW(p)] estimating that the actual amount of interaction between CFAR and MAPLE was pretty low even though there was some overlap in people].

Although I wasn't directly financially encouraged to debug people, I infer that CFAR employees were, since instructing people was part of their job description.

Other issues

MIRI did have less time pressure imposed by the organization itself than Leverage did, despite the deadline implied by the AGI timeline; I had no issues with absurdly over-booked calendars.  I vaguely recall that CFAR employees were overworked especially around workshop times, though I'm pretty uncertain of the details.

Many people's social lives, including mine, were spent mostly "in the community"; much of this time was spent on "debugging" and other psychological work.  Some of my most important friendships at the time, including one with a housemate, were formed largely around a shared interest in psychological self-improvement.  There was, therefore, relatively little work-life separation (which has upsides as well as downsides).

Zoe recounts an experience with having unclear, shifting standards applied, with the fear of ostracism.  Though the details of my experience are quite different, I was definitely afraid of being considered "crazy" and marginalized for having philosophy ideas that were too weird, even though weird philosophy would be necessary to solve the AI alignment problem.  I noticed more people saying I and others were crazy as we were exploring sociological hypotheses that implied large problems with the social landscape we were in (e.g. people thought Ben Hoffman was crazy because of his criticisms of effective altruism). I recall talking to a former CFAR employee who was scapegoated and ousted after failing to appeal to the winning internal coalition; he was obviously quite paranoid and distrustful, and another friend and I agreed that he showed PTSD symptoms [EDIT: I infer scapegoating based on the public reason given being suspicious/insufficient; someone at CFAR points out that this person was paranoid and distrustful while first working at CFAR as well].

Like Zoe, I experienced myself and others being distanced from old family and friends, who didn't understand how high-impact the work we were doing was.  Since leaving the scene, I am more able to talk with normal people (including random strangers), although it's still hard to talk about why I expect the work I do to be high-impact.

An ex-Leverage person I know comments that "one of the things I give Geoff the most credit for is actually ending the group when he realized he had gotten in over his head. That still left people hurt and shocked, but did actually stop a lot of the compounding harm."  (While Geoff is still working on a project called "Leverage", the initial "Leverage 1.0" ended with most of the people leaving.) This is to some degree happening with MIRI and CFAR, with a change in the narrative about the organizations and their plans, although the details are currently less legible than with Leverage.

Conclusion

Perhaps one lesson to take from Zoe's account of Leverage is that spending relatively more time discussing sociology (including anthropology and history), and less time discussing psychology, is more likely to realize benefits while avoiding problems.  Sociology is less inherently subjective and meta than psychology, having intersubjectively measurable properties such as events in human lifetimes and social network graph structures.  My own thinking has certainly gone in this direction since my time at MIRI, to great benefit.  I hope this account I have written helps others to understand the sociology of the rationality community around 2017, and that this understanding helps people to understand other parts of the society they live in.

There are, obviously from what I have written, many correspondences, showing a common pattern for high-ambition ideological groups in the San Francisco Bay Area.  I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who became convinced by their experience to do fake research instead), although I don't know the specifics as well.  EAs generally think that the vast majority of charities are doing low-value and/or fake work.  I also know that San Francisco startup culture produces cult-like structures (and associated mental health symptoms) with regularity.  It seems more productive to, rather than singling out specific parties, think about the social and ecological forces that create and select for the social structures we actually see, which include relatively more and less cult-like structures.  (Of course, to the extent that harm is ongoing due to actions taken by people and organizations, it's important to be able to talk about that.)

It's possible that after reading this, you think this wasn't that bad.  Though I can only speak for myself here, I'm not sad that I went to work at MIRI instead of Google or academia after college.  I don't have reason to believe that either of these environments would have been better for my overall intellectual well-being or my career, despite the mental and social problems that resulted from the path I chose.  Scott Aaronson, for example, blogs about "blank-faced" non-self-explaining authoritarian bureaucrats being a constant problem in academia.  Venkatesh Rao writes about the corporate world, and the picture presented is one of a simulation constantly maintained through improv.

I did grow from the experience in the end.  But I did so in large part by being very painfully aware of the ways in which it was bad.

I hope that those that think this is "not that bad" (perhaps due to knowing object-level specifics around MIRI/CFAR justifying these decisions) consider how they would find out whether the situation with Leverage was "not that bad", in comparison, given the similarity of the phenomena observed in both cases; such an investigation may involve learning object-level specifics about what happened at Leverage.  I hope that people don't scapegoat; in an environment where certain actions are knowingly being taken by multiple parties, singling out certain parties has negative effects on people's willingness to speak without actually producing any justice.

Aside from whether things were "bad" or "not that bad" overall, understanding the specifics of what happened, including harms to specific people, is important for actually accomplishing the ambitious goals these projects are aiming at; there is no reason to expect extreme accomplishments to result without very high levels of epistemic honesty.


comment by Scott Alexander (Yvain) · 2021-10-17T22:08:56.321Z · LW(p) · GW(p)

I want to add some context I think is important to this.

Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.

Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don't think he thinks they're worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it's especially galling that they're just as bad).  Since then, he's tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combination of drugs and paranoia caused a lot of borderline psychosis, which the Vassarites mostly interpreted as success ("these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird"). Occasionally it would also cause full-blown psychosis, which they would discourage people from seeking treatment for, because they thought psychiatrists were especially evil and corrupt and traumatizing and unable to understand that psychosis is just breaking mental shackles.

(I am a psychiatrist and obviously biased here)

Jessica talks about a cluster of psychoses from 2017 - 2019 which she blames on MIRI/CFAR. She admits that not all the people involved worked for MIRI or CFAR, but kind of equivocates around this and says they were "in the social circle" in some way. The actual connection is that most (maybe all?) of these people were involved with the Vassarites or the Zizians (the latter being IMO a Vassarite splinter group, though I think both groups would deny this characterization). The main connection to MIRI/CFAR is that the Vassarites recruited from the MIRI/CFAR social network.

I don't have hard evidence of all these points, but I think Jessica's text kind of obliquely confirms some of them. She writes:

"Psychosis" doesn't have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Liang's work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.

RD Laing was a 1960s pseudoscientist who claimed that schizophrenia is how "the light [begins] to break through the cracks in our all-too-closed minds". He opposed schizophrenics taking medication, and advocated treatments like "rebirthing therapy" where people role-play fetuses going through the birth canal - for which he was stripped of his medical license. The Vassarites like him, because he is on their side in the whole "actually psychosis is just people being enlightened as to the true nature of society" thing. I think Laing was wrong, psychosis is actually bad, and that the "actually psychosis is good sometimes" mindset is extremely related to the Vassarites causing all of these cases of psychosis.

Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community.  While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible.  (I noted at the time that there might be a sense in which different people have "auras" in a way that is not less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.) As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.

Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she was borderline psychotic, and comparing this to Leverage, who she thinks did a better job by promoting an environment where people accepted these ideas. I think MIRI was correct to be concerned and (reading between the lines) telling her to seek normal medical treatment, instead of telling her that demons were real and she was right to worry about them, and I think her disagreement with this is coming from a belief that psychosis is potentially a form of useful creative learning. While I don't want to assert that I am 100% sure this can never be true, I think it's true rarely enough, and with enough downside risk, that treating it as a psychiatric emergency is warranted.

On the two cases of suicide, Jessica writes:

Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz.  (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)

Ziz tried to create an anti-CFAR/MIRI splinter group whose members had mental breakdowns. Jessica also tried to create an anti-CFAR/MIRI splinter group and had a mental breakdown. This isn't a coincidence - Vassar tried his jailbreaking thing on both of them, and it tended to reliably produce people who started crusades against MIRI/CFAR, and who had mental breakdowns. Here's an excerpt from Ziz's blog on her experience (edited heavily for length, and slightly to protect the innocent):

When I first met Vassar, it was a random encounter in an experimental group call organized by some small-brand rationalist. He talked for about an hour, and automatically became the center of conversation, I typed notes as fast as I could, thinking, “if this stuff is true it changes everything; it’s the [crux] of my life.” (It was true, but I did not realize it immediately.) Randomly, another person found the link, came in and said, “hi”. [Vassar] said “hi”, she said “hi” again, apparently for humor. [Vassar] said something terse I forget “well if this is what …”, apparently giving up on the venue, and disconnected without further comment. One by one, the other ~10 people besides her, including me, disconnected disappointedly, wordlessly or just about, right after. A wizard was gracing us with his wisdom and she fucked it up. And in my probably-representative case that was just about the only way I could communicate how frustrated I was at her for that.

[Vassar explained how] across society, the forces of gaslighting were attacking people’s basic ability to think and to a justice as a Schelling point until only the built-in Schelling points of gender and race remained, Vassar listed fronts in the war on gaslighting, disputes in the community, and included [local community member ZD] [...] ZD said Vassar broke them out of a mental hospital. I didn’t ask them how. But I considered that both badass and heroic. From what I hear, ZD was, probably as with most, imprisoned for no good reason, in some despicable act of, “get that unsightly person not playing along with the [heavily DRM’d] game we’ve called sanity out of my free world”.

I heard [local community member AM] was Vassar’s former “apprentice”. And I had started picking up jailbroken wisdom from them secondhand without knowing where it was from. But Vassar did it better. After Rationalist Fleet, I concluded I was probably worth Vassar’s time to talk to a bit, and I emailed him, carefully briefly stating my qualifications, in terms of ability to take ideas seriously and learn from him, so that he could get maximally dense VOI on whether to talk to me. A long conversation ensued. And I got a lot from it. [...]

Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve. And didn’t detransition. This all created an awful tension in me. The rationality community was kind of compromised as a rallying point for truthseeking. This was desperately bad for the world. [Vassar] was at the center of, largely the creator of a “no actually for real” rallying point for the jailbroken reality-not-social-reality version of this.

Ziz is describing the same cluster of psychoses Jessica is (including Jessica's own), but I think doing so more accurately, by describing how it was a Vassar-related phenomenon. I would add Ziz herself to the list of trans women who got negative mental effects from Vassar, although I think (not sure) Ziz would not endorse my description of her as having these.

What was the community's response to this? I have heard rumors that Vassar was fired from MIRI a long time ago for doing some very early version of this, although I don't know if it's true. He was banned from REACH (and implicitly rationalist social events) for somewhat unrelated reasons. I banned him from SSC meetups for a combination of reasons including these. For reasons I don't fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything's kind of been frozen in place since then.

I want to clarify that I don't dislike Vassar, he's actually been extremely nice to me, I continue to be in cordial and productive communication with him, and his overall influence on my life personally has been positive. He's also been surprisingly gracious about the fact that I go around accusing him of causing a bunch of cases of psychosis. I don't think he does the psychosis thing on purpose, I think he is honest in his belief that the world is corrupt and traumatizing (which, at the margin, shades into values of "the world is corrupt and traumatizing" which everyone agrees are true) and I believe he is honest in his belief that he needs to figure out ways to help people do better. There are many smart people who work with him and support him who have not gone psychotic at all. I don't think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.  My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.

EDIT/UPDATE: I got a chance to talk to Vassar, who disagrees with my assessment above. We're still trying to figure out the details, but so far, we agree that there was a cluster of related psychoses around 2017, all of which were in the same broad part of the rationalist social graph. Features of that part were - it contained a lot of trans women, a lot of math-y people, and some people who had been influenced by Vassar, although Vassar himself may not have been a central member. We are still trying to trace the exact chain of who had problems first and how those problems spread. I still suspect that Vassar unwittingly came up with some ideas that other people then spread through the graph. Vassar still denies this and is going to try to explain a more complete story to me when I have more time.

comment by devi · 2021-10-18T16:48:56.294Z · LW(p) · GW(p)

Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC

Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I'll try to explain some context for the record.

In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than concern about malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which in retrospect I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, reinforcing each other's views. I believe (less certain) I also broached the topic with Michael and/or Anna at some point, which probably went like a brief mutual acknowledgement of this hidden fact before continuing on to topics that were more important.

I don't think anyone mentioned above was being dishonest about what they thought or was acting from a desire to hurt trans people. Yet, the above exchanges did in retrospect cause me emotional pain, stress, and contributed to internalizing sexism and transphobia. I definitely wouldn't describe this as a main causal factor in my psychosis (that was very casual drug use that even Michael chided me for). I can't think of a good policy that would have been helpful to me in the above interactions. Maybe emphasizing bucket-errors [LW · GW] in this context more, or spreading caution about generalizing from abstract models to yourself, but I think I would have been too rash to listen.

I wouldn't say I completely moved past this until years following the events. I think the following things were helpful for that (in no particular order): the intersex brains model and associated brain imaging studies, everyday acceptance while living a normal life, not allowing myself concerns larger than renovations or retirement savings, getting to experience some parts of female socialization and mother-daughter bonding, full support from friends and family in cases where my gender has come into question, and the acknowledgement of a medical system that still has some gate-keeping aspects (note: I don't think this positive effect of a gate-keeping system at all justifies the negative of denying anyone morphological freedom).

Thinking back to these events, engaging with the LessWrong community, and even publicly engaging under my real name bring back fear and feelings of trauma. I'm not saying this to increase a sense of having been wronged but as an apology for this not being as long as it should be, or as well-written, and for the lateness/absence of any replies/followups.

comment by Zack_M_Davis · 2021-10-18T02:48:52.693Z · LW(p) · GW(p)

I talked and corresponded with Michael a lot during 2017–2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don't think you're being fair.

"jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself)

I'm confident this is only a Ziz-ism: I don't recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from him.

again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs [...] describing how it was a Vassar-related phenomenon

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)

But, well ... if you genuinely thought that institutions and a community that you had devoted a lot of your life to building up, were now failing to achieve their purposes, wouldn't you want to talk to people about it? If you genuinely thought that certain chemicals would make your friends' lives better, wouldn't you recommend them?

Michael is a charismatic guy who has strong views and argues forcefully for them. That's not the same thing as having mysterious mind powers to "make people paranoid" or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I'm sure he'd be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.

borderline psychosis, which the Vassarites mostly interpreted as success ("these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird")

I can't speak for Michael or his friends, and I don't want to derail the thread by going into the details of my own situation. (That's a future community-drama post, for when I finally get over enough of my internalized silencing-barriers to finish writing it.) But speaking only for myself, I think there's a nearby idea that actually makes sense: if a particular social scene is sufficiently crazy (e.g., it's a cult), having a mental breakdown is an understandable reaction. It's not that mental breakdowns are in any way good—in a saner world, that wouldn't happen. But if you were so unfortunate to be in a situation where the only psychologically realistic outcomes were either to fall into conformity with the other cult-members, or have a stress-and-sleep-deprivation-induced psychotic episode as you undergo a "deep emotional break with the wisdom of [your] pack" [LW · GW], the mental breakdown might actually be less bad in the long run, even if it's locally extremely bad.

My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.

I recommend hearing out the pitch and thinking it through for yourself. (But, yes, without drugs; I think drugs are very risky and strongly disagree with Michael on this point.)

ZD said Vassar broke them out of a mental hospital. I didn't ask them how.

(Incidentally, this was misreporting on my part, due to me being crazy at the time and attributing abilities to Michael that he did not, in fact, have. Michael did visit me in the psych ward, which was incredibly helpful—it seems likely that I would have been much worse off if he hadn't come—but I was discharged normally; he didn't bust me out.)

comment by Scott Alexander (Yvain) · 2021-10-18T10:43:13.426Z · LW(p) · GW(p)

I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop, and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their "it's correct to be freaking out about learning your entire society is corrupt and gaslighting" shtick.

comment by Scott Alexander (Yvain) · 2021-10-18T11:24:00.823Z · LW(p) · GW(p)

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)

[...]

Michael is a charismatic guy who has strong views and argues forcefully for them. That's not the same thing as having mysterious mind powers to "make people paranoid" or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I'm sure he'd be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.

I more or less Outside View agree with you on this, which is why I don't go around making call-out threads or demanding people ban Michael from the community or anything like that (I'm only talking about it now because I feel like it's fair for the community to try to defend itself after Jessica attributed all of this to the wider community instead of Vassar specifically). "This guy makes people psychotic by talking to them" is a silly accusation to go around making, and I hate that I have to do it!

But also, I do kind of notice the skulls and they are really consistent, and I would feel bad if my desire not to say this ridiculous thing resulted in more people getting hurt.

I think the minimum viable narrative here is, as you say, something like "Michael is very good at spotting people right on the verge of psychosis, and then he suggests they take drugs." Maybe a slightly more complicated narrative involves bringing them to a state of total epistemic doubt where they can't trust any institutions or any of the people they formerly thought were their friends, although now this is getting back into the "he's just having normal truth-seeking conversation" objection. He also seems really good at pushing trans people's buttons in terms of their underlying anxiety around gender dysphoria (see the Ziz post), so maybe that contributes somehow. I don't know how it happens. I'm sufficiently embarrassed to be upset about something which looks like "having a nice interesting conversation" from the outside, and I don't want to violate liberal norms that you're allowed to have conversations - but I think those norms also make it okay to point out the very high rate at which those conversations end in mental breakdowns.

Maybe one analogy would be people with serial emotionally abusive relationships - should we be okay with people dating Brent? Like yes, he had a string of horrible relationships that left the other person feeling violated and abused and self-hating and trapped. On the other hand, most of this, from the outside, looked like talking. He explained why it would be hurtful for the other person to leave the relationship or not do what he wanted, and he was convincing and forceful enough about it that it worked (I understand he also sometimes used violence, but I think the narrative still makes sense without it). Even so, the community tried to make sure people knew if they started a relationship with him they would get hurt, and eventually got really insistent about that. I do feel like this was a sort of boundary crossing of important liberal norms, but I think you've got to at least leave that possibility open for when things get really weird.

Replies from: mathenjoyer, Viliam
comment by mathenjoyer · 2021-10-22T02:17:08.384Z · LW(p) · GW(p)

Thing 0:

Scott.

Before I actually make my point I want to wax poetic about reading SlateStarCodex.

In some post whose name I can't remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, "Huh, I need to only be convinced by true things."

This is extremely relatable to my lived experience. I am a stereotypical "high-functioning autist." I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.

To the degree that "rationality styles" are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.

Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.

Thing 1:

Imagine two world models:

  1. Some people want to act as perfect nth-order cooperating utilitarians, but can't because of human limitations. They are extremely scrupulous, so they feel anguish and collapse emotionally. To prevent this, they rationalize and confabulate explanations for why their behavior actually is perfect. Then a moderately schizotypal man arrives and says: "Stop rationalizing." Then the humans revert to the all-consuming anguish.
  2. A collection of imperfect human moral actors who believe in utilitarianism act in an imperfect utilitarian way. An extremely charismatic man arrives who uses their scrupulosity to convince them they are not behaving morally, and then leverages their ensuing anguish to hijack their agency.

Which of these world models is correct? Both, obviously, because we're all smart people here and understand the Machiavellian Intelligence Hypothesis.

Thing 2:

Imagine a being called Omegarapist. It has important ideas about decision theory and organizations. However, it has an uncontrollable urge to rape people. It is not a superintelligence; it is merely an extremely charismatic human. (This is a refutation of the Brent Dill analogy. I do not know much about Brent Dill.)

You are a humble student of Decision Theory. What is the best way to deal with Omegarapist?

  1. Ignore him. This is good for AI-box reasons, but bad because you don't learn anything new about decision theory. Also, humans with strange mindstates are more likely to provide new insights, conditioned on them having insights to give (this condition excludes extreme psychosis).
  2. Let Omegarapist out. This is a terrible strategy. He rapes everybody, AND his desire to rape people causes him to warp his explanations of decision theory.

Therefore we should use Strategy 1, right? No. This is motivated stopping. Here are some other strategies.

1a. Precommit to only talk with him if he castrates himself first.

1b. Precommit to call in the Scorched-Earth Dollar Auction Squad (California law enforcement) if he has sex with anybody involved in this precommitment, then let him talk with anybody he wants.

I made those in 1 minute of actually trying.

Returning to the object level, let us consider Michael Vassar. 

Strategy 1 corresponds to exiling him. Strategy 2 corresponds to a complete reputational blank-slate and free participation. In three minutes of actually trying, here are some alternate strategies.

1a. Vassar can participate but will be shunned if he talks about "drama" in the rationality community or its social structure. 

1b. Vassar can participate but is not allowed to talk with anyone one-on-one, having always to be in a group of 3.

2a. Vassar can participate but has to give a detailed citation, or an extremely prominent low-level epistemic status mark, to every claim he makes about neurology or psychiatry. 

I am not suggesting any of these strategies, or even endorsing the idea that they are possible. I am asking: WHY THE FUCK IS EVERYONE MOTIVATED STOPPING ON NOT LISTENING TO WHATEVER HE SAYS!!!

I am a contractualist and a classical liberal. However, I recognize the empirical fact that there are large cohorts of people who relate to language exclusively for the purpose of predation and resource expropriation. What is a virtuous man to do?

The answer relies on the nature of language. Fundamentally, the idea of a free marketplace of ideas doesn't rely on language or its use; it relies on the asymmetry of a weapon. The asymmetry of a weapon is a mathematical fact about information processing. It exists in the territory. If you see an information source that is dangerous, build a better weapon.

You are using a powerful asymmetric weapon of Classical Liberalism called language. Vassar is the fucking Necronomicon. Instead of sealing it away, why don't we make another weapon? This idea that some threats are temporarily too dangerous for our asymmetric weapons, and have to be fought with methods other than reciprocity, is the exact same epistemology-hole found in diversity-worship.

"Diversity of thought is good."

"I have a diverse opinion on the merits of vaccination."

"Diversity of thought is good, except on matters where diversity of thought leads to coercion or violence."

"When does diversity of thought lead to coercion or violence?"

"When I, or the WHO, say so. Shut up, prole."

This is actually quite a few skulls, but everything has quite a few skulls. People die very often. 

Thing 3:

Now let me address a counterargument:

Argument 1: "Vassar's belief system posits a near-infinitely powerful omnipresent adversary that is capable of ill-defined mind control. This is extremely conflict-theoretic, and predatory."

Here's the thing: rationality in general is similar. I will use that same anti-Vassar counterargument as a steelman for sneerclub.

Argument 2: "The beliefs of the rationality community posit complete distrust in nearly every source of information and global institution, giving them an excuse to act antisocially. It describes human behavior as almost entirely Machiavellian, allowing them to be conflict-theoretic, selfish, rationalizing, and utterly incapable of coordinating. They 'logically deduce' the relevant possibility of eternal suffering or happiness for the human species (FAI and s-risk), and use that to control people's current behavior and coerce them into giving up their agency."

There is a strategy that accepts both of these arguments. It is called epistemic learned helplessness. It is actually a very good strategy if you are a normie. Metis and the reactionary concept of "traditional living/wisdom" are related principles. I have met people with 100 IQ who I would consider highly wise, due to skill at this strategy (and not accidentally being born religious, which is its main weak point).

There is a strategy that rejects both of these arguments. It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype. Very few people follow this strategy so it is hard to give examples, but I will leave a quote from an old Scott Aaronson paper that I find very inspiring. "In pondering these riddles, I don’t have any sort of special intuition, for which the actual arguments and counterarguments that I can articulate serve as window-dressing. The arguments exhaust my intuition."

THERE IS NO EFFECTIVE LONG-TERM STRATEGY THAT REJECTS THE SECOND ARGUMENT BUT ACCEPTS THE FIRST! THIS IS WHERE ALL THE FUCKING SKULLS ARE! Why? Because it requires a complex notion of what arguments to accept, and the more complex the notion, the easier it will be to rationalize around, apply inconsistently, or Goodhart. See "A formalist manifesto" by Moldbug for another description of this. (This reminds me of how UDT/FDT/TDT agents behave better than causal agents at everything, but get counterfactually mugged, which seems absurd to us. If you try to come up with some notion of "legitimate information" or "self-locating information" to prevent an agent from getting mugged, it will similarly lose functionality in the non-anthropic cases. [See the Sleeping Beauty problem for a better explanation.])

The only real social epistemologies are of the form:

"Free speech, but (explicitly defined but also common-sensibly just definition of ideas that lead to violence)."

Mine in particular is, "Free speech but no (intentionally and directly inciting panic or violence using falsehoods)."

To put it a certain way, once you get on the Taking Ideas Seriously train, you cannot get off. 

Thing 4:

Back when SSC existed, I got bored one day and looked at the blogroll. I discovered Hivewired. It was bad. Through Hivewired I discovered Ziz. I discovered the blackmail allegations while sick with a fever and withdrawing off an antidepressant. I had a mental breakdown, feeling utterly betrayed by the rationality community despite never even having talked to anyone in it. Then I rationalized it away. To be fair, this was reasonable considering the state in which I processed the information. However, the thought processes used to dismiss the worry were absolutely rationalizations. I can tell because I can smell them.

Fast forward a year. I am at a yeshiva to discover whether I want to be religious. I become an anti-theist and start reading rationality stuff again. I check out Ziz's blog out of perverse curiosity. I go over the allegations again. I find a link to a more cogent, falsifiable, and specific case. I freak the fuck out. Then I get to work figuring out which parts are actually true.

MIRI paid out to blackmail. There's an unironic Catholic working at CFAR and everyone thinks this is normal. He doesn't actually believe in god, but he believes in belief, which is maybe worse. CFAR is a collection of rationality workshops, not a coordinated attempt to raise the sanity waterline (Anna told me this in a private communication, and this is public information as far as I know), but has not changed its marketing to match. Rationalists are incapable of coordinating, which is basically their entire job. All of these problems were foreseen by the Sequences, but no one has read the Sequences because most rationalists are an army of sci-fi midwits who read HPMOR then changed the beliefs they were wearing. (Example: Andrew Rowe. I'm sorry but it's true, anyways please write Arcane Ascension book 4.)

I make contact with the actual rationality community for the first time. I trawl through blogs, screeds, and incoherent symbolist rants about morality written as a book review of The Northern Caves. Someone becomes convinced that I am an internet gangstalker who created an elaborate false identity of an 18-year-old gap year kid to make contact with them. Eventually I contact Benjamin Hoffman, who leads me to Vassar, who leads to the Vassarites.

He points out to me a bunch of things that were very demoralizing, and absolutely true. Most people act evil out of habituation and deviancy training, including my loved ones. Global totalitarianism is a relevant s-risk as societies become more and more hysterical due to a loss of existing wisdom traditions, and too low of a sanity waterline to replace them with actual thinking. (Mass surveillance also helps.)

I work on a project with him trying to create a micro-state within the government of New York City. During and after this project I am increasingly irritable and misanthropic. The project does not work. I effortlessly update on this, distance myself from him, then process the feeling of betrayal by the rationality community and inability to achieve immortality and a utopian society for a few months. I stop being a Vassarite. I continue to read blogs to stay updated on thinking well, and eventually I unlearn the new associated pain. I talk with the Vassarites as friends and associates now, but not as a club member.

What does this story imply? Michael Vassar induced mental damage in me, partly through the way he speaks and acts. However, as a primary effect of this, he taught me true things. With basic rationality skills, I avoided contracting the Vassar, then healed the damage to my social life and behavior caused by this whole shitstorm (most of said damage was caused by non-Vassar factors).

Now I am significantly happier, more agentic, and more rational.

Thing 5

When I said what I did in Thing 1, I meant it. Vassar gets rid of identity-related rationalizations. Vassar drives people crazy. Vassar is very good at getting other people to see the moon in finger pointing at the moon problems and moving people out of local optimums into better local optimums. This requires the work of going downwards in the fitness landscape first. Vassar's ideas are important and many are correct. It just happens to be that he might drive you insane. The same could be said of rationality. Reality is unfair; good epistemics isn't supposed to be easy. Have you seen mathematical logic? (It's my favorite field).

An example of an important idea that may come from Vassar, but is likely much older:

Control over a social hierarchy goes to a single person; this is a plurality preference aggregation system. In those, the best strategy is to vote only for one of the two blocs that "matter." Similarly, if you need to join a war and know you will be killed if your side loses, you should join the winning side. Thus humans are attracted to powerful groups of humans. This is a (grossly oversimplified) evolutionary origin of one type of conformity effect.

Power is the ability to make other human beings do what you want. There are fundamentally two strategies to get it: help other people so that they want you to have power, or hurt other people to credibly signal that you already have power. (Note the correspondence of these two to dominance and prestige hierarchies). Credibly signaling that you have a lot of power is almost enough to get more power.

However, if the people harming themselves to signal your power admit publicly that they are harming themselves, they can coordinate with neutral parties to move the Schelling point and establish a new regime. Thus there are two obvious strategies to achieving ultimate power: help people get what they want (extremely difficult), or make people publicly harm themselves while shouting how great they feel (much easier). The famous bad equilibrium of 8 hours of shocking oneself per day is an obvious example.

Benjamin Ross Hoffman's blog is very good, but awkwardly organized. He conveys explicit, literal models of these phenomena that are very useful and do not carry the risk of filling your head with whispers from the beyond. However, they have less impact because of it.

Thing 6:

I'm almost done with this mad effortpost. I want to note one more thing. Mistake theory works better than conflict theory. THIS IS NOT NORMATIVE.

Facts about the map-territory distinction and facts about the behaviors of mapmaking algorithms are facts about the territory. We can imagine a very strange world where conflict theory is a more effective way to think. One of the key assumptions of conflict theorists is that complexity, or attempts to undermine moral certainty, are usually mind control. Another key assumption is that entrenched power groups, or individual malign agents, will use these things to hack you.

These conditions are neither necessary nor sufficient for conflict theory to be better than mistake theory. I have an ancient and powerful technique called "actually listening to arguments." When I'm debating with someone who I know to use bad faith, I decrypt everything they say into logical arguments. Then I use those logical arguments to modify my world model. One might say adversaries can use biased selection and rationalization to make you less logical despite this strategy. I say, on an incurable hardware and wetware level, you are already doing this. (For example, any Bayesian agent of finite storage space is subject to the Halo Effect, as you described in a post once.) Having someone do it in a different direction can helpfully knock you out of your models and back into reality, even if their models are bad. This is why it is still worth decrypting the actual information content of people you suspect to be in bad faith.

Uh, thanks for reading, I hope this was coherent, have a nice day.

Replies from: Unreal, cousin_it, FeepingCreature, Hazard, xtz05qw
comment by Unreal · 2021-10-22T10:21:28.557Z · LW(p) · GW(p)

I enjoyed reading this. Thanks for writing it. 

One note though: I think this post (along with most of the comments) isn't treating Vassar as a fully real person with real choices. It (also) treats him like some kind of 'force in the world' or 'immovable object'. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I'm glad you yourself were able to "With basic rationality skills, avoid contracting the Vassar, then [heal] the damage to [your] social life." 

But I am worried about people treating him like a force of nature that you make contact with and then just have to deal with whatever the effects of that are. 

I think it's pretty immoral to de-stabilize people to the point of maybe-insanity, and I think he should try to avoid it, to whatever extent that's in his capacity, which I think is a lot. 

"Vassar's ideas are important and many are correct. It just happens to be that he might drive you insane."

I might think this was a worthwhile tradeoff if I actually believed the 'maybe insane' part was unavoidable, and I do not believe it is. I know that with more mental training, people can absorb more difficult truths without risk of damage. Maybe Vassar doesn't want to offer this mental training himself; that isn't much of an excuse, in my book, to target people who are 'close to the edge' (where 'edge' might be near a better local optimum) but who lack solid social support, rationality skills, mental training, or spiritual groundedness and then push them. 

His service is well-intentioned, but he's not doing it wisely and compassionately, as far as I can tell. 

Replies from: SaidAchmiz, mathenjoyer, Unreal
comment by Said Achmiz (SaidAchmiz) · 2021-10-22T10:52:36.746Z · LW(p) · GW(p)

I think that treating Michael Vassar as an unchangeable force of nature is the right way to go—for the purposes of discussions precisely like this one. Why? Because even if Michael himself can (and chooses to) alter his behavior in some way (regardless of whether this is good or bad or indifferent), nevertheless there will be other Michael Vassars out there—and the question remains, of how one is to deal with arbitrary Michael Vassars one encounters in life.

In other words, what we’ve got here is a vulnerability (in the security sense of the word). One day you find that you’re being exploited by a clever hacker (we decline to specify whether he is a black hat or white hat or what). The one comes to you and recommends a patch. But you say—why should we treat this specific attack as some sort of unchangeable force of nature? Rather we should contact this hacker and persuade him to cease and desist. But the vulnerability is still there…

Replies from: ChristianKl
comment by ChristianKl · 2021-10-22T13:09:49.932Z · LW(p) · GW(p)

I think you can either have a discussion that focuses on an individual, and if you do it makes sense to model them with agency, or you can have more general threat models.

If you however mix the two you are likely to get confused in both directions. You will project ideas from your threat model into the person and you will take random aspects of the individual into your threat model that aren't typical for the threat.

comment by mathenjoyer · 2021-10-23T02:48:10.540Z · LW(p) · GW(p)

I am not sure how much 'not destabilize people' is an option that is available to Vassar.

My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.

Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of "you are expected to behave better for status reasons look at my smug language"-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, "Vassar, just only say things that you think will have a positive effect on the person." 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.

In the pathological case of Vassar, I think the naive strategy of "just say the thing you think is true" is still correct.

Mental training absolutely helps. I would say that, considering that the people who talk with Vassar are literally from a movement called rationality, it is a normatively reasonable move to expect them to be mentally resilient. Factually, this is not the case. The "maybe insane" part is definitely not unavoidable, but right now I think the problem is with the people talking to Vassar, and not he himself.

I'm glad you enjoyed the post.

Replies from: Unreal, Benquo, ChristianKl
comment by Unreal · 2021-10-23T03:41:37.521Z · LW(p) · GW(p)

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication.

My suggestion for Vassar is not to 'try not to destabilize people' exactly. 

It's to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he's interacted with about what it's like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you're speaking to as though they're a full, real human—not a pair of ears to be talked into or a mind to insert things into). When he talks theory, I often get the sense he is talking "at" rather than talking "to" or "with". The listener practically disappears or is reduced to a question-generating machine that gets him to keep saying things. 

I expect this process could take a long time / run into issues along the way, and so I don't think it should be rushed. Not expecting a quick change. But claiming there's no available option seems wildly wrong to me. People aren't fixed points and generally shouldn't be treated as such. 

Replies from: mathenjoyer, ChristianKl
comment by mathenjoyer · 2021-10-23T06:00:36.850Z · LW(p) · GW(p)

This is actually very fair. I think he does kind of insert information into people.

I never really felt like a question-generating machine, more like a pupil at the foot of a teacher who is trying to integrate the teacher's information.

I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.

Thanks!

comment by ChristianKl · 2021-10-23T12:06:37.299Z · LW(p) · GW(p)

It's to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he's interacted with about what it's like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you're speaking to as though they're a full, real human—not a pair of ears to be talked into or a mind to insert things into). 

I think I interacted with Vassar four times in person, so I might get some things wrong here, but I think that he's pretty dissociated from his body, which closes a normal channel of perceiving impacts on the person he's speaking with. This looks to me like some bodily process generating stress / pain and being a cause for dissociation. It might need a bodyworker to fix whatever goes on there to create the conditions for perceiving the other person better.

Beyond that, Circling might be an environment in which one can learn to interact with others as humans who have their own feelings, but that would require opening up to the Circling frame.

comment by Benquo · 2021-10-23T03:08:17.466Z · LW(p) · GW(p)

I think this line of discussion would be well served by marking a natural boundary in the cluster "crazy." Instead of saying "Vassar can drive people crazy" I'd rather taboo "crazy" and say:

Many people are using their verbal idea-tracking ability to implement a coalitional strategy instead of efficiently compressing external reality. Some such people will experience their strategy as invalidated by conversations with Vassar, since he'll point out ways their stories don't add up. A common response to invalidation is to submit to the invalidator by adopting the invalidator's story. Since Vassar's words aren't selected to be a valid coalitional strategy instruction set, attempting to submit to him will often result in attempting obviously maladaptive coalitional strategies.

People using their verbal idea-tracking ability to implement a coalitional strategy cannot give informed consent to conversations with Vassar, because in a deep sense they cannot be informed of things through verbal descriptions, and the risk is one that cannot be described without the recursive capacity of descriptive language.

Personally I care much more, maybe lexically more, about the upside of minds learning about their situation, than the downside of mimics going into maladaptive death spirals, though it would definitely be better all round if we can manage to cause fewer cases of the latter without compromising the former, much like it's desirable to avoid torturing animals, and it would be desirable for city lights not to interfere with sea turtles' reproductive cycle by resembling the moon too much.

Replies from: pjemb, mathenjoyer
comment by pjen (pjemb) · 2021-10-29T21:11:40.532Z · LW(p) · GW(p)

My problem with this comment is it takes people who:

  • can't verbally reason without talking things through (and are currently stuck in a passive role in a conversation)

and who:

  • respond to a failure of their verbal reasoning
    • under circumstances of importance (in this case moral importance)
    • and conditions of stress, induced by
      • trying to concentrate while in a passive role
      • failing to concentrate under conditions of high moral importance

by simply doing as they are told - and it assumes they are incapable of reasoning under any circumstances.

It also then denies people who are incapable of independent reasoning the right to be protected from harm.

comment by mathenjoyer · 2021-10-23T03:13:49.372Z · LW(p) · GW(p)

EDIT: Ben is correct to say we should taboo "crazy."

This is a very uncharitable interpretation (entirely wrong). The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren't as positive utility as they thought. (entirely wrong)

I also don't think people interpret Vassar's words as a strategy and implement incoherence. Personally, I interpreted Vassar's words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away. I think the other people just no longer have a coalitional strategy installed and don't know how to function without one. This is what happened to me and why I repeatedly lashed out at others when I perceived them as betraying me, since I no longer automatically perceived them as on my side. I rebuilt my rapport with those people and now have more honest relationships with them. (still endorsed)

Beyond this, I think your model is accurate.

Replies from: SaidAchmiz, Benquo
comment by Said Achmiz (SaidAchmiz) · 2021-10-23T06:58:29.147Z · LW(p) · GW(p)

The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought.

“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.

And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.

If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of business is to do everything in one’s power to fix that (really quite severe and glaring) bug in one’s psyche—and only then to attempt any substantive projects in the service of world-saving, people-helping, or otherwise doing anything really consequential.

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-24T03:30:37.481Z · LW(p) · GW(p)

Thank you for echoing common sense!

comment by Benquo · 2021-10-24T00:35:07.326Z · LW(p) · GW(p)

What is psychological collapse?

For those who can afford it, taking it easy for a while is a rational response to noticing deep confusion [LW · GW], continuing to take actions based on a discredited model would be less appealing, and people often become depressed when they keep confusedly trying to do things that they don't want to do.

Are you trying to point to something else?

Personally, I interpreted Vassar’s words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away.

What specific claims turned out to be false? What counterevidence did you encounter?

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-24T03:30:04.991Z · LW(p) · GW(p)

Specific claim: the only nontrivial obstacle in front of us is not being evil

This is false. Object-level stuff is actually very hard.

Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)

This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person's language. Ideas and institutions have the agency; they wear people like skin.

Specific claim: this is how to take over New York.

Didn't work.

Replies from: Benquo, Benquo
comment by Benquo · 2021-11-21T00:53:22.197Z · LW(p) · GW(p)

Specific claim: this is how to take over New York.

Didn’t work.

I think this needs to be broken up into 2 claims:

1. If we execute strategy X, we'll take over New York.
2. We can use straightforward persuasion (e.g. appeals to reason, profit motive) to get an adequate set of people to implement strategy X.

2 has been falsified decisively. The plan to recruit candidates via appealing to people's explicit incentives failed, there wasn't a good alternative, and as a result there wasn't a chance to test other parts of the plan (1).

That's important info and worth learning from in a principled way. Definitely I won't try that sort of thing again in the same way, and it seems like I should increase my credence both that plans requiring people to respond to economic incentives by taking initiative to play against type will fail, and that I personally might be able to profit a lot by taking initiative to play against type, or investing in people who seem like they're already doing this, as long as I don't have to count on other unknown people acting similarly in the future.

But I find the tendency to respond to novel multi-step plans that would require someone to take initiative by sitting back and waiting for the plan to fail, and then saying, "see? novel multi-step plans don't work!" extremely annoying. I've been on both sides of that kind of transaction, but if we want anything to work out well we have to distinguish cases of "we / someone else decided not to try" as a different kind of failure from "we tried and it didn't work out."

Replies from: mathenjoyer
comment by mathenjoyer · 2021-12-18T10:39:41.306Z · LW(p) · GW(p)

This is actually completely fair. So is the other comment.

comment by Benquo · 2021-11-21T00:36:24.527Z · LW(p) · GW(p)

Specific claim: the only nontrivial obstacle in front of us is not being evil

This is false. Object-level stuff is actually very hard.

This seems to be conflating the question of "is it possible to construct a difficult problem?" with the question of "what's the rate-limiting problem?". If you have a specific model for how to make things much better for many people by solving a hard technical problem before making substantial progress on human alignment, I'd very much like to hear the details. If I'm persuaded I'll be interested in figuring out how to help.

So far this seems like evidence to the contrary, though, as it doesn't look like you thought you could get help making things better for many people by explaining the opportunity.

comment by ChristianKl · 2021-10-23T11:56:08.575Z · LW(p) · GW(p)

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, "Vassar, just only say things that you think will have a positive effect on the person." 1. He already does that. 2. That is advocating that Vassar manipulate people. 

You are making a false dichotomy here. You are assuming that everything that has a negative effect on a person is manipulation.

As Vassar himself sees the situation, people believe a lot of lies for reasons of fitting in socially. From that perspective, getting people to stop believing those lies will make it harder for them to fit into society.

If you got a Nazi guard at Auschwitz into a state where the moral issue of their job could no longer be dissociated, that would very predictably have a negative effect on that prison guard.

Vassar's position would be that it would be immoral to avoid talking about the truth about the nature of their job when talking with the guard, out of a motivation to make life easier for the guard.

comment by Unreal · 2021-10-22T10:24:01.911Z · LW(p) · GW(p)

To the extent I'm worried about Vassar's character, I am as equally worried about the people around him. It's the people around him who should also take responsibility for his well-being and his moral behavior. That's what friends are for. I'm not putting this all on him. To be clear. 

comment by cousin_it · 2021-10-22T08:59:01.724Z · LW(p) · GW(p)

I think it's a fine way of thinking about mathematical logic, but if you try to think this way about reality, you'll end up with views that make internal sense and are self-reinforcing but don't follow the grain of facts at all. When you hear such views from someone else, it's a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.

The people who actually know their stuff usually come off very different. Their statements are carefully delineated: "this thing about power was true in 10th century Byzantium, but not clear how much of it applies today".

Also, just to comment on this:

It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype.

I think it's somewhat changeable. Even for people like us, there are ways to make our processing more "fuzzy". Deliberately dimming [LW · GW] some things, rounding [LW(p) · GW(p)] others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; and interpersonally, the world is full of small spontaneous exchanges happening on the "warm fuzzy" level, it's not nearly so cold a place as it seems, and plugging into that market is so worth it.

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-23T03:08:48.215Z · LW(p) · GW(p)

On the third paragraph:

I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.)

Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See "Safety in numbers" by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe environment. Am getting to that.)

I have no problem enjoying warm fuzzies. I had problems with them after first talking with Vassar, but I re-equilibrated. Warm fuzzies are good, helpful, and worth purchasing. I am not a perfect utilitarian. However, it is important that when you buy fuzzies instead of utils, as Scott would put it, you know what you are buying. Many will sell fuzzies and market them as utils.

I sometimes round things, it is not inherently bad.

Dimming things is not good. I like being alive. From a functionalist perspective, the degree to which I am aroused (with respect to the senses and the mind) is the degree to which I am a real, sapient being. Dimming is sometimes terminally valuable as relaxation, and instrumentally valuable as sleep, but if you believe in Life, Freedom, Prosperity And Other Nice Transhumanist Things then dimming being bad in most contexts follows as a natural consequence.

On the second paragraph:

This is because people compartmentalize. After studying a thing for a long time, people will grasp deep nonverbal truths about that thing. Sometimes they are wrong; without the legibility of the elucidation, false ideas thus gained are difficult to destroy. Sometimes they are right! Mathematical folklore is an example: it is literally metis among mathematicians.

Highly knowledgeable and epistemically skilled people delineate. Sometimes the natural delineation is "this is true everywhere and false nowhere." See "The Proper Use of Humility," and for an example of how delineations often should be large, "Universal Fire."

On the first paragraph:

Reality is hostile through neutrality. Any optimizing agent naturally optimizes against most other optimization targets when resources are finite. Lifeforms are (badly) optimized for inclusive genetic fitness. Thermodynamics looks like the sort of Universal Law that an evil god would construct. According to a quick Google search approximately 3,700 people die in car accidents per day and people think this is completely normal. 

Many things are actually effective. For example, most places in the United States have drinkable-ish running water. This is objectively impressive. Any model must not be entirely made out of "the world is evil" otherwise it runs against facts. But the natural mental motion you make, as a default, should be, "How is this system produced by an aggressively neutral, entirely mechanistic reality?"

See the entire Sequence on evolution, as well as Beyond the Reach of God.

comment by FeepingCreature · 2021-10-22T10:15:07.827Z · LW(p) · GW(p)

I mostly see where you're coming from, but I think the reasonable answer to "point 1 or 2 is a false dichotomy" is this classic, uh, tumblr quote (from memory):

"People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail."

This goes especially if the thing that comes after "just" is "just precommit."

My expectation about interaction with Vassar is that the people who espouse 1 or 2 expect that the people interacting are incapable of precommitting to the required strength. I don't know if they're correct, but I'd expect them to be, because I think people are just really bad at precommitting in general. If precommitting were easy, I think we'd all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-23T03:28:49.861Z · LW(p) · GW(p)

This is a very good criticism! I think you are right about people not being able to "just."

My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or criticize popular institutions. Perhaps it is the case that people genuinely tried to make a strategy but automatically rejected my toy strategies as false. I do not think it is, based on "vibe" and on the arguments that people are making, such as "argument from cult."

I think you are actually completely correct about those strategies being bad. Instead, I failed to point out that I expect a certain level of mental robustness-to-nonsanity from people literally called "rationalists." This comes off as sarcastic but I mean it completely literally.

Precommitting isn't easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as "five minutes of actually trying" and alkjash's "Hammertime." Humans have a small component of behavior that is agentic, and a huge component of behavior that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social learning.) Many problems are solved immediately and almost effortlessly by just giving the reins to the small part.

Relatedly, to address one of your examples, I expect at least one of the following things to be true about any given competent rationalist.

  1. They have a physiological problem.
  2. They don't believe becoming fit to be worth their time, and have a good reason to go against the naive first-order model of "exercise increases energy and happiness set point."
  3. They are fit.

Hypocritically, I fail all three of these criteria. I take full blame for this failure and plan on ameliorating it. (You don't have to take Heroic Responsibility for the world, but you have to take it about yourself.)

A trope-y way of thinking about it is: "We're supposed to be the good guys!" Good guys don't have to be heroes, but they have to be at least somewhat competent, and they have to, as a strong default, treat potential enemies like their equals.

comment by Hazard · 2021-10-22T03:18:20.523Z · LW(p) · GW(p)

I found many things you shared useful. I also expect that because of your style/tone you'll get down voted :(

comment by xtz05qw · 2021-10-22T07:49:27.755Z · LW(p) · GW(p)

It's not just Vassar. It's how the whole community has excused and rationalized away abuse. I think the correct answer to the omega rapist problem isn't to ignore him but to destroy his agency entirely. He's still going to alter his decision theory towards rape even if castrated.

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-23T03:47:32.369Z · LW(p) · GW(p)

I think you are entirely wrong.

However, I gave you a double-upvote because you did nothing normatively wrong. The fact that you are being mass-downvoted just because you linked to that article and because you seem to be associated with Ziz (because of the gibberish name and specific conception of decision theory) is extremely disturbing.

Can we have LessWrong not be Reddit? Let's not be Reddit. Too late, we're already Reddit. Fuck.

You are right that, unless people can honor precommitments perfectly and castration is irreversible even with transhuman technology, Omegarapist will still alter his decision theory. Despite this, there are probably better solutions than killing or disabling him. I say this not out of moral ickiness, but out of practicality.

-

Imagine both you and Omegarapist are actual superintelligences. Then you can just do a utility-function merge to avoid the inefficiency of conflict, and move on with your day.

Humans have a similar form of this. Humans, even when sufficiently distinct in moral or factual position as to want to kill each other, often don't. This is partly because of an implicit assumption that their side, the correct side, will win in the end, and that this is less true if they break the symmetry and use weapons. Scott uses the example of a pro-life and pro-choice person having dinner together, and calls it "divine intervention."

There is an equivalent of this with Omegarapist. Make some sort of pact and honor it: he won't rape people, but you won't report his previous rapes to the Scorched Earth Dollar Auction squad. Work together on decision theory until the project is complete. Then agree either to utility-merge with him in the consequent utility function, or just shoot him. I call this "swordfighting at the edge of a cliff while shouting about our ideologies." I would be willing to work with Moldbug on Strong AI, but if we had to input the utility function, the person who would win would be determined by a cinematic swordfight. In a similar case with my friend Sudo Nim, we could just merge utilities.

If you use the "shoot him" strategy, Omegarapist is still dead. You just got useful work out of him first. If he rapes people, just call in the Dollar Auction squad. The problem here isn't cooperating with Omegarapist, it's thinking to oneself "he's too useful to actually follow precommitments about punishing" if he defects against you. This is fucking dumb. There's a great webnovel called Reverend Insanity which depicts what organizations look like when everyone uses pure CDT like this. It isn't pretty, and it's also a very accurate depiction of the real world landscape.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-10-23T05:06:09.697Z · LW(p) · GW(p)

Oh come on. The post was downvoted because it was inflammatory and low quality. It made a sweeping assertion while providing no evidence except a link to an article that I have no reason to believe is worth reading. There is a mountain of evidence that being negative is not a sufficient cause for being downvoted on LW, e.g. the OP.

Replies from: TekhneMakre, mathenjoyer
comment by TekhneMakre · 2021-10-23T05:32:58.792Z · LW(p) · GW(p)

(FYI, the OP has 154 votes and 59 karma, so it is both heavily upvoted and heavily downvoted.)

comment by mathenjoyer · 2021-10-23T06:35:53.423Z · LW(p) · GW(p)

You absolutely have a reason to believe the article is worth reading.

If you live coordinated with an institution, spending 5 minutes of actually trying (every few months) to see if that institution is corrupt is a worthy use of time.

Replies from: SaidAchmiz, sil-ver
comment by Said Achmiz (SaidAchmiz) · 2021-10-23T07:03:51.025Z · LW(p) · GW(p)

I read the linked article, and my conclusion is that it’s not even in the neighborhood of “worth reading”.

comment by Rafael Harth (sil-ver) · 2021-10-23T13:51:35.784Z · LW(p) · GW(p)

I don't think I live coordinated with CFAR or MIRI, but it is true that, if they are corrupt, this is something I would like to know.

However, that's not sufficient reason to think the article is worth reading. There are many articles making claims that, if true, I would very much like to know (e.g. someone arguing that the Christian Hell exists).

I think the policy I follow (although I hadn't made it explicit until now) is to ignore claims like this by default but listen up as soon as I have some reason to believe that the source is credible.

Which incidentally was the case for the OP. I have spent a lot more than 5 minutes reading it & replies, and I have, in fact, updated my view of CFAR and MIRI. It wasn't a massive update in the end, but it also wasn't negligible. I also haven't downvoted the OP, and I believe I also haven't downvoted any comments from jessicata. I've upvoted some.

Replies from: mathenjoyer
comment by mathenjoyer · 2021-10-24T03:26:40.599Z · LW(p) · GW(p)

This is fair, actually.

comment by Viliam · 2021-10-18T17:24:56.287Z · LW(p) · GW(p)

Michael is very good at spotting people right on the verge of psychosis

...and then pushing them.

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate. [EDIT: Or not. Zack makes a fair point.] He is not even hiding it, if you listen carefully.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2021-10-21T05:59:14.322Z · LW(p) · GW(p)

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate.

Because high-psychoticism people are the ones who are most likely to understand what he has to say.

This isn't nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn't like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky's writing (like spending $28,000 distributing copies of Harry Potter and the Methods to math contest winners) [EA · GW]: why, they're preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!

I mean, technically, yes. But in Yudkowsky and friends' worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they're going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?

Replies from: steven0461, Unreal
comment by steven0461 · 2021-10-21T20:29:09.862Z · LW(p) · GW(p)

There's a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don't have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.

If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you'd object to that targeting strategy even though they'd be able to make an argument structurally the same as your comment.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-21T20:47:54.055Z · LW(p) · GW(p)

Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it's even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky etc, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. people who think machine learning / deep learning are important.

In general this seems really expected and unobjectionable? "If I'm trying to convince people of X, I'm going to find people who already believe a lot of the pre-requisites for understanding X and who might already assign X a non-negligible prior". This is how pretty much all systems of ideas spread, I have trouble thinking of a counterexample.

I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?

Replies from: steven0461
comment by steven0461 · 2021-10-21T23:14:45.490Z · LW(p) · GW(p)

If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn't care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.

Replies from: petemichaud-1, jessica.liu.taylor
comment by PeteMichaud (petemichaud-1) · 2021-10-22T12:35:54.890Z · LW(p) · GW(p)

The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high-IQ systematizers is if I drain any normative valence from "psychotic," and imagine there is a spectrum from autistic to psychotic. In this spectrum the extreme autistic is exclusively focused on exactly one thing at a time, and is incapable of cognition that has to take into account context, especially context they aren't already primed to have in mind, and the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or another could be very helpful in different contexts.

See also: indexicality.

On the other hand, back in my reflective beliefs, I think psychosis is a much scarier failure mode than "autism," on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, a supporting infrastructure of some kind for processing the psychotic state without losing the plot (social or cultural would work, but whatever).

comment by jessicata (jessica.liu.taylor) · 2021-10-21T23:17:11.726Z · LW(p) · GW(p)

I wouldn't find it objectionable. I'm not really sure what morally relevant distinction is being pointed at here, apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.

Replies from: steven0461, dxu
comment by steven0461 · 2021-10-22T00:48:50.023Z · LW(p) · GW(p)

Well, I don't think it's obviously objectionable, and I'd have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like "we'd all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we're talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren't generally either truth-tracking or good for them" seems plausible to me. But I think it's obviously not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.

comment by dxu · 2021-10-21T23:29:49.941Z · LW(p) · GW(p)

I don't have well-formed thoughts on this topic, but one factor that seems relevant to me has a core that might be verbalized as "susceptibility to invalid methods of persuasion", which seems notably higher in the case of people with high "apocalypticism" than people with the other attributes described in the grandparent. (A similar argument applies in the case of people with high "psychoticism".)

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-21T23:36:40.808Z · LW(p) · GW(p)

That might be relevant in some cases but seems unobjectionable both in the psychoticism case and the apocalypse case. I would predict that LW people cluster together in personality measurements like OCEAN and Eysenck, it's by default easier to write for people of a similar personality to yourself. Also, people notice high rates of Asperger's-like characteristics around here, which are correlated with Jewish ethnicity and transgenderism (also both frequent around here).

comment by Unreal · 2021-10-21T15:34:21.395Z · LW(p) · GW(p)

It might not be nefarious. 

But it might also not be very wise. 

I question Vassar's wisdom, if what you say is indeed true about his motives. 

I question whether he's got the appropriate feedback loops in place to ensure he is not exacerbating harms. I question whether he's appropriately seeking that feedback rather than turning away from the kinds he finds overwhelming, distasteful, unpleasant, or doesn't know how to integrate. 

I question how much work he's done on his own shadow and whether it's not inadvertently acting out in ways that are harmful. I question whether he has good friends he trusts who would let him know, bluntly, when he is out of line with integrity and ethics or if he has 'shadow stuff' that he's not seeing. 

I don't think this needs to be hashed out in public, but I hope people are working closer to him on these things who have the wisdom and integrity to do the right thing. 

comment by ChristianKl · 2021-10-18T08:09:36.777Z · LW(p) · GW(p)

But, well ... if you genuinely thought that institutions and a community that you had devoted a lot of your life to building up, were now failing to achieve their purposes, wouldn't you want to talk to people about it? If you genuinely thought that certain chemicals would make your friends lives' better, wouldn't you recommend them?

Rumor has it that https://www.sfgate.com/news/bayarea/article/Man-Gets-5-Years-For-Attacking-Woman-Outside-13796663.php was due to drugs Vassar recommended. In the OP that case does get blamed on CFAR's environment without any mention of that part.

When talking about whether or not CFAR is responsible for that story, factors like that seem to me to matter quite a bit. I'd love to know whether anyone who's nearer can confirm/deny the rumor and fill in missing pieces.

Replies from: andrew-rettek-1, jimrandomh
comment by Andrew Rettek (andrew-rettek-1) · 2021-10-18T12:39:47.755Z · LW(p) · GW(p)

As I mentioned elsewhere, I was heavily involved in that incident for a couple months after it happened and I looked for causes that could help with the defense. AFAICT No drugs were taken in the days leading up to the mental health episode or arrest (or people who took drugs with him lied about it).

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-18T14:08:17.395Z · LW(p) · GW(p)

I, too, asked people questions after that incident and failed to locate any evidence of drugs.

comment by jimrandomh · 2021-10-18T23:55:07.356Z · LW(p) · GW(p)

As I heard this story, Eric was actively seeking mental health care on the day of the incident, and should have been committed before it happened, but several people (both inside and outside the community) screwed up. I don't think anyone is to blame for his having had a mental break in the first place.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-19T06:50:51.797Z · LW(p) · GW(p)

I now have some better-sourced information from a friend who's actually in good contact with Eric. Given that, I'm also quite certain that there were no drugs involved and that this isn't a case of any one person being mainly responsible for it happening, but of multiple people making bad decisions. I'm currently hoping that Eric will tell his side himself so that there's less indirection about the information sourcing, so I'm not saying more about the details at this point in time.

Replies from: EricB, Yvain
comment by EricB · 2021-10-19T17:00:14.728Z · LW(p) · GW(p)

Edit: The below is only one mid sized part of a much larger and weirder story. While it was significant, there were a lot of other things also going very wrong with my life, and without those vulnerability factors I would likely have not run into major problems from just the parts described in this post. (also minor changes to the 3rd to last bullet point)

I think it probably makes sense to clarify some parts of the story relevant to the discussions here.

  • My psychosis was brought on by many factors, particularly extreme physical and mental stressors and exposure to various intense memes. I have written a document explaining this in more detail which I have shared with some people privately, and can share with others who were involved or have reason to be interested, but covering it in detail is beyond the scope of this post (which I'll mostly keep to the Vassar-related aspects).
  • At the time of my psychotic break I believed that someone from Vassar's group had spiked me with LSD, though I no longer believe this (I can't totally rule it out, but it does seem implausible, and my physical and mental health were poor enough that my quite vivid experiences are explainable as placebo/the start of psychosis).
  • Vassar was central to my delusions, at the time of my arrest I had a notebook in which I had scrawled "Vassar is God" and "Vassar is the Devil" many times. This was partly due to a conversation I had had with him in which he said my "pattern must be erased from the world" in response to me defending EA, but was mostly due to indirect ripples of his influence flowing through someone who joined his group who I had much greater contact with.
  • This other person had been engaged in an intense psychological vortex with me in the months leading up to my psychotic break, and I had (in a greatly weakened state) been encouraged to form and engage with a model of her within my mind. She referred to part of our interaction as her "roleplaying an unfriendly AI", and this new part seemed extremely hostile to me at the time. Much of the reason I continued engaging was that I had hope of turning her to the side of light.
  • This person joined Vassar's group, and I tried to encourage her to question his clearly intense psychological maneuvering. She ended up telling me
    • "michael asked me if he should fix anna and make her see material reality.
      whether he should purge her green.
      i laughed happily at that
      marvelled at its glory"
  • This sent me into a spiral of fear for Anna (Salamon), whether to tell her or not, and a bunch of delusions linked to TDT-esque thinking, which ended up pushing me from merely highly unstable into actually-psychotic (though there was a continued descent through many levels of madness over the next ~36 hours before I reached the point where I would attempt suicide and assault the mental health worker I had requested).
  • One other topical note, since we seem to be doing the expose-lots-of-things-to-the-light thing: I visited Leverage the same day as the above, and spoke to several people while fairly clearly delusional. They did encourage me to be careful with my mind and do only small psychological works, but in an ideal world they might have sent me back to my friends, or noticed and flagged the paranoid delusion that my friends would mind control me and that I needed to rent a hotel rather than going back to them, or, even better, suggested professional help. Edit: I am told that I was advised to go back home by them, but don't remember this (which definitely does not mean it didn't happen, I was in a bad state and my memory is patchy).
  • In the hotel that night I fell apart fairly spectacularly after I performed a mental motion which I interpreted as giving my model of Vassar something like root access to my brain while trying to re-stabilize myself.
  • I could tell much more about the story, there are many weird details, but I think this covers the most important parts relevant to the conversation here.

I don't think Vassar is to blame in the sense that he either intended or could reasonably have foreseen these specific consequences, but his style of going for extremely high-impact psychological interventions does sometimes have negative ripples on the mental health of those in orbit. I hope he's learned this and is more careful now; in particular, I hope he's aware that he can do harm as well as good.

Replies from: Ruby, jessica.liu.taylor, Avi Weiss, elityre, Benquo
comment by Ruby · 2021-10-19T17:14:13.902Z · LW(p) · GW(p)

Thank you for sharing such personal details for the sake of the conversation.

comment by jessicata (jessica.liu.taylor) · 2021-10-19T17:27:27.669Z · LW(p) · GW(p)

Thanks for sharing the details of your experience. Fyi I had a trip earlier in 2017 where I had the thought "Michael Vassar is God" and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.

If I'm trying to put my finger on a real effect here, it's related to how Michael Vassar was one of the initial people who set up the social scene (e.g. running singularity summits and being executive director of SIAI), being on the more "social/business development/management" end relative to someone like Eliezer. So if you live in the scene, which can be seen as a simulacrum, the people most involved in setting up the scene/simulacrum have the most aptitude at affecting memes related to it, the way a world-simulator programmer has more aptitude at affecting the simulation than people within the simulation do (though to a much lesser degree, of course).

As a related example, Von Neumann was involved in setting up post-WWII US modernism, and is also attributed extreme mental powers by that modernism (e.g. extreme creativity in inventing a wide variety of fields); in creating the social system, he also had more memetic influence within that system, and could more effectively change its boundaries, e.g. by creating new fields of study.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-27T16:21:51.139Z · LW(p) · GW(p)

Fyi I had a trip earlier in 2017 where I had the thought "Michael Vassar is God" and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.

2017 would be the year Eric's episode happened as well. Did this result in multiple conversations about "Michael Vassar is God" that Eric might then have picked up when he hung around the group?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-27T20:24:20.086Z · LW(p) · GW(p)

I don't know, some of the people were in common between these discussions so maybe, but my guess would be that it wasn't causal, only correlational. Multiple people at the time were considering Michael Vassar to be especially insightful and worth learning from.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-27T21:17:02.276Z · LW(p) · GW(p)

I haven't used the word god myself nor have heard it used by other people to refer to someone who's insightful and worth learning from. Traditionally, people learn from prophets and not from gods.

comment by Avi (Avi Weiss) · 2021-10-19T17:16:11.573Z · LW(p) · GW(p)

Can someone please clarify what is meant in this context by 'Vassar's group', or the term 'Vassarites' used by others?

My intuition previously was that Michael Vassar had no formal 'group' or institution of any kind, and that it was just more like 'a cluster of friends who hung out together a lot', but this comment makes it seem like something more official.

Replies from: David Hornbein, Benquo
comment by David Hornbein · 2021-10-19T21:11:29.364Z · LW(p) · GW(p)

While "Vassar's group" is informal, it's more than just a cluster of friends; it's a social scene with lots of shared concepts, terminology, and outlook (although of course not every member holds every view and members sometimes disagree about the concepts, etc etc). In this way, the structure is similar to social scenes like "the AI safety community" or "wokeness" or "the startup scene" that coordinate in part on the basis of shared ideology even in the absence of institutional coordination, albeit much smaller. There is no formal institution governing the scene, and as far as I've ever heard Vassar himself has no particular authority within it beyond individual persuasion and his reputation.

Median Group is the closest thing to a "Vassarite" institution, in that its listed members are 2/3 people who I've heard/read describing the strong influence Vassar has had on their thinking and 1/3 people I don't know, but AFAIK Median Group is just a project put together by a bunch of friends with similar outlook and doesn't claim to speak for the whole scene or anything.

Replies from: Benquo
comment by Benquo · 2021-10-20T06:33:31.729Z · LW(p) · GW(p)

As a member of that cluster I endorse this description.

comment by Benquo · 2021-10-19T20:12:44.672Z · LW(p) · GW(p)

Michael and I are sometimes-housemates and I've never seen or heard of any formal "Vassarite" group or institution, though he's an important connector in the local social graph, such that I met several good friends through him.

comment by Eli Tyre (elityre) · 2021-10-20T06:47:58.164Z · LW(p) · GW(p)

Thank you very much for sharing. I wasn't aware of any of these details.

comment by Benquo · 2021-10-19T17:09:08.807Z · LW(p) · GW(p)

It sounds like you're saying that based on extremely sparse data you made up a Michael Vassar in your head to drive you crazy. More generally, it seems like a bunch of people on this thread, most notably Scott Alexander, are attributing spooky magical powers to him. That is crazy cult behavior and I wish they would stop it.

ETA: In case it wasn't clear, "that" = multiple people elsewhere in the comments attributing spooky mind control powers to Vassar. I was trying to summarize Eric's account concisely, because insofar as it assigns agency at all I think it does a good job assigning it where it makes sense to, with the person making the decisions.

Replies from: dxu, EricB
comment by dxu · 2021-10-19T22:06:25.387Z · LW(p) · GW(p)

Reading through the comments here, I perceive a pattern of short-but-strongly-worded comments from you, many of which seem to me to contain highly inflammatory insinuations while giving little impression of any investment of interpretive labor. It's not [entirely] clear to me what your goals are, but barring said goals being very strange and inexplicable indeed, it seems to me extremely unlikely that they are best fulfilled by the discourse style you have consistently been employing.

To be clear: I am annoyed by this. I perceive your comments as substantially lower-quality than the mean, and moreover I am annoyed that they seem to be receiving engagement far in excess of what I believe they deserve, resulting in a loss of attentional resources that could be used engaging more productively (either with other commenters, or with a hypothetical-version-of-you who does not do this). My comment here is written for the purpose of registering my impressions, and making it common-knowledge among those who share said impressions (who, for the record, I predict are not few) that said impressions are, in fact, shared.

(If I am mistaken in the above prediction, I am sure the voters will let me know in short order.)

I say all of the above while being reasonably confident that you do, in fact, have good intentions. However, good intentions do not ipso facto result in good comments, and to the extent that they have resulted in bad comments, I think one should point this fact out as bluntly as possible, which is why I worded the first two paragraphs of this comment the way I did. Nonetheless, I felt it important to clarify that I do not stand against [what I believe to be] your causes here, only the way you have been going about pursuing those causes.


(For the record: I am unaffiliated with MIRI, CFAR, Leverage, MAPLE, the "Vassarites", or the broader rationalist community as it exists in physical space. As such, I have no direct stake in this conversation; but I very much do have an interest in making sure discussion around any topics this sensitive are carried out in a mature, nuanced way.)

Replies from: Benquo
comment by Benquo · 2021-10-20T07:11:36.473Z · LW(p) · GW(p)

If you want to clarify whether I mean to insinuate something in a particular comment, you could ask, like I asked Eliezer [LW(p) · GW(p)]. I'm not going to make my comments longer without a specific idea of what's unclear, that seems pointless.

comment by EricB · 2021-10-19T17:29:39.310Z · LW(p) · GW(p)

Yes, saying I generated a model of him based on sparse data which was linked to me imploding so spectacularly is fair. However, I do think the reason I ended up doing that with him was related to his interactions with me and other people, though not because of spooky magical powers. Just an intense drive to break people out of something like the standard reality and a focus on learning how to do that as effectively as possible.

Which is not an entirely bad thing, it just happens that for some people it does end up tipping them over the edge into a bad place.

Replies from: Benquo, Benquo
comment by Benquo · 2021-10-20T07:15:15.610Z · LW(p) · GW(p)

Thanks for verifying. In hindsight my comment reads as though it was condemning you in a way I didn't mean to; sorry about that.

The thing I meant to characterize as "crazy cult behavior" was people in the comments here attributing things like what you did in your mind to Michael Vassar's spooky mind powers. You seem to be trying to be helpful and informative here. Sorry if my comment read like a personal attack.

comment by Benquo · 2021-10-20T16:39:08.724Z · LW(p) · GW(p)

This can be unpacked into an alternative to the charisma theory.

Many people are looking for a reference person to tell them what to do. (This is generally consistent with the Jaynesian family of hypotheses.) High-agency people are unusually easy to refer to, because they reveal the kind of information that allows others to locate them. There's sufficient excess demand that even if someone doesn't issue any actual orders, if they seem to have agency, people will generalize from sparse data to try to construct a version of that person that tells them what to do.

A more culturally central example than Vassar is Dr Fauci, who seems to have mostly reasonable opinions about COVID, but is worshipped by a lot of fanatics with crazy beliefs about COVID.

The charisma hypothesis describes this as a fundamental attribute of the person being worshipped, rather than a behavior of their worshippers.

comment by Scott Alexander (Yvain) · 2021-10-19T09:29:14.871Z · LW(p) · GW(p)

If this information isn't too private, can you send it to me? scott@slatestarcodex.com

Replies from: EricB
comment by EricB · 2021-10-19T17:41:48.777Z · LW(p) · GW(p)

I've forwarded you the document. It's kinda personal so I'd prefer it not be posted publicly, but I'm mostly okay with it being shared with individuals who have reason to want to understand better.

comment by jessicata (jessica.liu.taylor) · 2021-10-18T13:53:19.744Z · LW(p) · GW(p)

I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he's "causing psychotic breaks" and "jailbreaking people" through conversation, "that listening too much to Vassar [causes psychosis], predictably") isn't obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread [LW(p) · GW(p)] are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2021-10-18T19:30:08.780Z · LW(p) · GW(p)

Yes, I agree with you that all of this is very awkward.

I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.

But we have to admit at least small violations of it even to get the concept of "cult". Not just the sort of weak cults we're discussing here, but even the really strong cults like Heaven's Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven's Gate is bad for them, and leave. When we use the word "cult", we're implicitly agreeing that this doesn't always work, and we're bringing in creepier and less comprehensible ideas like "charisma" and "brainwashing" and "cognitive dissonance".

(and the same thing with the concept of "emotionally abusive relationship")

I don't want to call the Vassarites a cult because I'm sure someone will confront me with a Cult Checklist that they don't meet, but I think that it's not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it's weird that you can get that far without robes and chanting or anything, but I never claimed to really understand exactly how cults work, plus I'm sure the drugs helped.

I think believing cults are possible is different in degree if not in kind from Leverage "doing seances...to call on demonic energies and use their power to affect the practitioners' social standing". I'm claiming, though I can't prove it, that what I'm saying is more towards the "believing cults are possible" side.

I'm actually very worried about this! I hate admitting cults are possible! If you admit cults are possible, you have to acknowledge that the basic liberal model has gaps, and then you get things like if an evangelical deconverts to atheism, the other evangelicals can say "Oh, he's in a cult, we need to kidnap and deprogram him since his best self wouldn't agree with the deconversion." I want to be extremely careful in when we do things like that, which is why I'm not actually "calling for isolating Michael Vassar from his friends". I think in the Outside View we should almost never do this!

But you were the one to mention this cluster of psychotic breaks, and I am trying to provide what I think is a more accurate perspective on them. Maybe in the future we learn that this was because of some weird neuroinflammatory virus that got passed around at a Vassarite meeting and we laugh that we were ever dumb enough to think a person/group could transmit psychotic breaks. But until then, I think the data point that all of this was associated with Vassar and the Vassarites is one we shouldn't just ignore.

Replies from: ChristianKl, jessica.liu.taylor, Benquo
comment by ChristianKl · 2021-10-19T07:15:27.917Z · LW(p) · GW(p)

It seems to me like in the case of Leverage, working 75 hours per week reduced the time they could have used to use Reason to conclude that they were in a system that was bad for them.

That's very different from someone having a few conversations with Vassar and then adopting a new belief, spending a lot of time reasoning about it alone, and the belief being stable without being embedded in a strong environment that makes independent thought hard because it keeps people busy.

A cult is by its nature a social institution, not just a meme that someone can pass around via a few conversations.

Replies from: Viliam
comment by Viliam · 2021-10-19T08:36:57.592Z · LW(p) · GW(p)

Perhaps the proper word here might be "manipulation" or "bad influence".

Replies from: Holly_Elmore, ChristianKl
comment by Holly_Elmore · 2021-10-19T23:20:13.235Z · LW(p) · GW(p)

I think "mind virus" is fair. Vassar spoke a lot about how the world as it is can't be trusted. I remember that many of the people in his circle spoke, seemingly apropos of nothing, about how bad involuntary commitment is, so that by the time someone was psychotic their relationship with psychiatry and anyone who would want to turn to psychiatry to help them was poisoned. Within the envelope of those beliefs you can keep a lot of other beliefs safe from scrutiny. 

comment by ChristianKl · 2021-10-22T10:25:29.631Z · LW(p) · GW(p)

The thing with "bad influence" is that it's a pretty value-laden term. In a religious town, the biology teacher who tells the children about evolution and explains how it makes sense that our history goes back a lot further than a few thousand years is reasonably described as a bad influence by the parents.

That biology teacher gets the children to doubt the religious authorities. Those children can then also be a bad influence on others by getting them to doubt authorities. In a similar way, Vassar gets people to question other authorities and social conventions, and those ideas can then be passed on.

Vassar speaks about things like Moral Mazes [LW · GW]. Memes like that make people distrust institutions. They're the kind of bad influence that can get people to quit their jobs.

Talking about the biology teacher as if they intend to start an evolution cult feels a bit misleading.

comment by jessicata (jessica.liu.taylor) · 2021-10-21T00:50:06.577Z · LW(p) · GW(p)

It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.

Let's consider a disjunction: 1: There isn't a big effect here, 2: There is a big effect here.

In case 1:

  • It might make sense to discourage people from talking too much about "charisma", "auras", "mental objects", etc, since they're pretty fake, really not the primary factors to think about when modeling society.
  • The main problem with the relevant discussions at Leverage is that they're making grandiose claims of mind powers and justifying e.g. isolating people on the basis of these, not actual mental influence.
  • The case made against Michael, that he can "cause psychotic breaks" by talking with people sometimes (or, in the case of Eric B, by talking sometimes with someone who is talking sometimes with the person in question), has no merit. People are making up grandiose claims about Michael to justify scapegoating him, it's basically a witch hunt. We should have a much more moderated, holistic picture where there are multiple people in a social environment affecting a person, and the people closer to them generally have more influence, such that causing psychotic breaks 2 hops out is implausible, and causing psychotic breaks with only occasional conversation (and very little conversation close to the actual psychotic episode) is also quite unlikely.
  • There isn't a significant falsification of liberal individualism.

In case 2:

  • Since there's a big effect, it makes sense to spend a lot of energy speculating on "charisma", "auras", "mental objects", and similar hypotheses. "Charisma" has fewer details than "auras" which has fewer details than "mental objects"; all of them are hypotheses someone could come up with in the course of doing pre-paradigmatic study of the phenomenon, knowing that while these initial hypotheses will make mis-predictions sometimes, they're (in expectation) moving in the direction of clarifying the phenomenon. We shouldn't just say "charisma" and leave it at that, it's so important that we need more details/gears.
  • Leverage's claims about weird mind powers are to some degree plausible, there's a big phenomenon here even if their models are wrong/silly in some places. The weird social dynamics are a result of an actual attempt to learn about and manage this extremely important phenomenon.
  • The claim that Michael can cause psychotic breaks by talking with people is plausible. The claim that he can cause psychotic breaks 2 hops out might be plausible depending on the details (this is pretty similar to a "mental objects" claim).
  • There is a significant falsification of liberal individualism. Upon learning about the details of how mental influence works, you could easily conclude that some specific form of collectivism is much more compatible with human nature than liberal individualism.

(You could make a spectrum or expand the number of dimensions here, I'm starting with a binary here to make the poles obvious)

It seems like you haven't expressed a strong belief whether we're in case 1 or case 2. Some things you've said are more compatible with case 1 (e.g. Leverage worrying about mental objects being silly, talking about demons being a psychiatric emergency, it being appropriate for MIRI to stop me from talking about demons and auras, liberalism being basically correct even if there are exceptions). Some are more compatible with case 2 (e.g. Michael causing psychotic breaks, "cults" being real and actually somewhat bad for liberalism to admit the existence of, "charisma" being a big important thing).

I'm left with the impression that your position is to some degree inconsistent (which is pretty normal, propagating beliefs fully is hard) and that you're assigning low value to investigating the details of this very important variable.

(I myself still have a lot of uncertainty here; I've had the impression of subtle mental influence happening from time to time but it's hard to disambiguate what's actually happening, and how strong the effect is. I think a lot of what's going on is people partly-unconsciously trying to synchronize with each other in terms of world-model and behavioral plans, and there existing mental operations one can do that cause others' synchronization behavior to have weird/unexpected effects.)

Replies from: Yvain, Natália Mendonça
comment by Scott Alexander (Yvain) · 2021-10-21T01:04:00.887Z · LW(p) · GW(p)

I agree I'm being somewhat inconsistent, I'd rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I'm trying to figure out what went on in these cases in more details and will probably want to ask you a lot of questions by email if you're open to that.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-21T01:12:29.512Z · LW(p) · GW(p)

Yes, I'd be open to answering email questions.

comment by Natália Mendonça · 2021-10-21T02:05:21.694Z · LW(p) · GW(p)

This misses the fact that people’s ability to negatively influence others might vary very widely, making it so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.

Replies from: TekhneMakre, jessica.liu.taylor
comment by TekhneMakre · 2021-10-21T02:32:58.374Z · LW(p) · GW(p)

If it's reasonable to worry about the .01%, it's reasonable to ask how the ability varies. There's some reason, some mechanism. This is worth discussing even if it's hard to give more than partial, metaphorical hypotheses. And if there are these .01% of very strong influencers, that is still an exception to strong liberal individualism.

comment by jessicata (jessica.liu.taylor) · 2021-10-21T02:26:45.156Z · LW(p) · GW(p)

That would still admit some people at Leverage having significant mental influence, especially if they got into weird mental tech that almost no one gets into. A lot of the weirdness is downstream of them encountering "body workers" who are extremely good at e.g. causing mental effects by touching people's back a little; these people could easily be extremal, and Leverage people learned from them. I've had sessions with some post-Leverage people where it seemed like really weird mental effects are happening in some implicit channel (like, I feel a thing poking at the left side of my consciousness and the person says, "oh, I just did an implicit channel thing, maybe you felt that"), I've never experienced effects like that (without drugs, and not obviously on drugs either though the comparison is harder) with others including with Michael, Anna, or normal therapists. This could be "placebo" in a way that makes it ultimately not that important but still, if we're admitting that 0.01% of people have these mental effects then it seems somewhat likely that this includes some Leverage people.

Also, if the 0.01% is disproportionately influential (which, duh), then getting more detailed models than "charisma" is still quite important.

Replies from: EI
comment by EI · 2021-10-21T02:33:42.112Z · LW(p) · GW(p)

these people could easily be extremal and might spread their knowledge.

Yeah I've felt that too. Sometimes I don't know if things should've been said because they are mostly going to be taken out of context, but I felt like sharing because I'm seeking answers too. I probably won't find any, but I'm grateful that I still feel like it's worthwhile to try. If I don't even try anymore, then everything would be truly meaningless, since I would've given up on everything else before then. The last lizard in the universe. There is a point of no return, and I'm not sure if I've reached it or not. I think my annoyances of everyday life are more of a reflection of that. If I had emotional investment in anything, I would've just focused on that. Instead I have nothing to focus on, so I end up falling victim to getting into mini emotional loops over little things even though I know they are trivial, but I have to be self-aware enough to pull myself out of it. Like I said, emotional investment is probably the hardest thing to be self-aware about. It's like those ADHD focus episodes where you can become too focused on one thing and think too much about it. The threat of more good times was quite a bit of my world for a while. If I had something else that's meaningful to focus on, then I probably wouldn't have focused so much emotionally on getting out. The problem has to do with not aligning the emotions associated with the general blandness that I feel. They contradict each other. The only solution is to truly not care about those threats of more good times, but that's only because I've taken psychedelics. If I hadn't, then my thoughts would probably be more confined and easier to work with.

comment by Benquo · 2021-10-19T20:27:13.119Z · LW(p) · GW(p)

One important implication of "cults are possible" is that many normal-seeming people are already too crazy to function as free citizens of a republic.

In other words, from a liberal perspective, someone who can't make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren't competent to make their own life decisions. They're already not free, but in the grip of whatever attractor they found first.

Personally I bite the bullet and admit that I'm not living in a society adequate to support liberal democracy, but instead something more like what Plato's Republic would call tyranny. This is very confusing because I was brought up to believe that I lived in a liberal democracy. I'd very much like to, someday.

Replies from: Holly_Elmore, Jayson_Virissimo
comment by Holly_Elmore · 2021-10-19T23:22:30.821Z · LW(p) · GW(p)

I think there are less extreme positions here. Like "competent adults can make their own decisions, but they can't if they become too addicted to certain substances." I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.  

Replies from: Benquo
comment by Benquo · 2021-10-19T23:40:00.670Z · LW(p) · GW(p)

competent adults can make their own decisions, but they can’t if they become too addicted to certain substances

I think the principled liberal perspective on this is Bryan Caplan's: drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.

I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.

I don't think that many people are "fundamentally incapable of being free." But it seems like some people here are expressing grievances that imply that either they themselves or some others are, right now, not ready for freedom of association.

The claim that someone is dangerous enough that they should be kept away from "vulnerable people" is a declaration of intent to deny "vulnerable people" freedom of association for their own good. (No one here thinks that a group of people who don't like Michael Vassar shouldn't be allowed to get together without him.)

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-19T23:57:09.308Z · LW(p) · GW(p)

drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.

I really don't think this is an accurate description of what is going on in people's mind when they are experiencing drug dependencies. I've spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went through great lengths trying to tie himself to various masts to stop, and generally expressed a strong preference for somehow being able to self-modify the addiction away, but ultimately failed to do so. 

Of course, things might be different for different people, but at least in the one case where I have a very large amount of specific data, this seems like it's a pretty bad model of people's preferences. Based on the private notebooks of his that I found after his death, this also seemed to be his position in purely introspective contexts without obvious social desirability biases. My sense is that he would have strongly preferred someone to somehow take control away from him, in this specific domain of his life.

Replies from: Benquo, NancyLebovitz
comment by Benquo · 2021-10-20T00:06:50.689Z · LW(p) · GW(p)

This seems like some evidence that the principled liberal position is false - specifically, that it is not self-ratifying. If you ask some people what their preferences are, they will express a preference for some of their preferences to be thwarted, for their own good.

Contractarianism can handle this sort of case, but liberal democracy with inalienable rights cannot, and while liberalism is a political philosophy, contractarianism is just a policy proposal, with no theory of citizenship or education.

comment by NancyLebovitz · 2021-11-19T10:16:48.172Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Olivier_Ameisen

A sidetrack, but a French surgeon found that Baclofen (a muscle relaxant) cured his alcoholism by curing the craving. He was surprised to find that it cured compulsive spending when he didn't even realize he had a problem.

He had a hard time raising money for an official experiment, and it came out inconclusive, and he died before the research got any further.


 

comment by Jayson_Virissimo · 2021-10-19T20:50:48.060Z · LW(p) · GW(p)

This is more-or-less Aristotle's defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).

Replies from: Benquo
comment by Benquo · 2021-10-20T04:45:40.434Z · LW(p) · GW(p)

Aristotle seems (though he's vague on this) to be thinking in terms of fundamental attributes, while I'm thinking in terms of present capacity, which can be reduced by external interventions such as schooling.

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

*As far as I know I didn't know any such people before 2020; it's very easy for members of the educated class to mistake our bubble for statistical normality.

Replies from: Hazard, NancyLebovitz
comment by Hazard · 2021-10-20T13:28:41.688Z · LW(p) · GW(p)

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

This is very interesting to me! I'd like to hear more about how the two groups' behavior looks different, and also your thoughts on what's the difference that makes the difference: what are the pieces of "being brought up to go to college" that lead to one class of reactions?

Replies from: CharlieTheBananaKing, ChristianKl, Benquo
comment by CharlieTheBananaKing · 2021-10-21T19:28:22.992Z · LW(p) · GW(p)

I have talked to Vassar. While he has a lot of "explicit control over conversations", which could be called charisma, I'd hypothesize that the fallout is actually from his ideas (the charisma/intelligence making him able to credibly argue for them).

My hypothesis is the following:  I've met a lot of rationalists + adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of a lot of these people's identity ("I'm an EA person, thus I'm a good person doing important work"). Two anecdotes to illustrate this:
- I'd recently argued against a committed EA person. Eventually, I started feeling almost-bad about arguing (even though we're both self-declared rationalists!) because I'd realised that my line of reasoning questioned his entire life. His identity was built deeply on EA, his job was selected to maximize money to give to charity. 
- I'd had a conversation with a few unemployed rationalist computer scientists. I suggested we might start a company together. One answer I got: "Only if it works on the alignment problem; everything else is irrelevant to me."

Vassar very persuasively argues against EA and the work done at MIRI/CFAR (he doesn't disagree that alignment is a problem, AFAIK). Assuming people are largely defined by these ideas, one can see how that could be threatening to their identity. I've read "I'm an evil person" from multiple people relating their "Vassar-psychosis" experience. To me it's very easy to see how one could get there if the defining part of one's identity is "I'm a good person because I work on EA/alignment" and one is confronted with "EA/alignment is a scam" arguments.
It also makes Vassar look like a genius (God), because "why wouldn't the rest of the rationalists see the arguments?", while it's really just a group-bias phenomenon, where the social truth of the rationalist group is that obviously EA is good and AI alignment terribly important.

This would probably predict that the people experiencing "Vassar-psychosis" would have a stronger-than-average constructed identity based on EA/CFAR/MIRI?

Replies from: michael-chen
comment by Michael Chen (michael-chen) · 2021-10-22T18:14:36.525Z · LW(p) · GW(p)

What are your or Vassar's arguments against EA or AI alignment? This is only tangential to your point, but I'd like to know about it if EA and AI alignment are not important.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-23T16:03:26.588Z · LW(p) · GW(p)

The general argument is that EAs are not really doing what they say they do. One example from Vassar: when it comes to COVID-19, there seems to be relatively little effective work by EAs. In contrast, Vassar considered giving prisoners access to personal protective equipment the most important thing and organized effectively for that to happen.

EAs created at EA Global an environment where someone who wrote a good paper warning about the risks of gain-of-function research doesn't address that directly but only talks about it indirectly, focusing on more meta-issues. Instead of having conflicts with people doing gain-of-function research, the EA community mostly ignored its problems and funded work that's in less conflict with the establishment. There's nearly no interest in learning from those errors in the EA community, and people rather avoid conflicts.

If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information. 

AI alignment is important, but just because one "works on AI risk" doesn't mean that the work actually decreases AI risk. Tying your personal identity to being someone who works to decrease AI risk makes it hard to reason clearly about whether one's actions actually do. OpenAI is an organization where people who see themselves as "working on AI alignment" work, and you can look at the recent discussion about whether that work reduces or increases actual risk, which is in open debate.

In a world where human alignment doesn't work to prevent dangerous gain-of-function experiments from happening, thinking about AI alignment instead of the problem of human alignment, where it's easier to get feedback, might be the wrong strategic focus.

Replies from: NancyLebovitz, jkaufman
comment by NancyLebovitz · 2021-11-19T10:18:50.164Z · LW(p) · GW(p)

Did Vassar argue that existing EA organizations weren't doing the work they said they were doing, or that EA as such was a bad idea? Or maybe that it was too hard to get organizations to do it?

Replies from: jessica.liu.taylor, ChristianKl
comment by jessicata (jessica.liu.taylor) · 2021-11-19T15:26:07.128Z · LW(p) · GW(p)

He argued

(a) EA orgs aren't doing what they say they're doing (e.g. cost effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it's hard to get organizations to do what they say they do

(b) Utilitarianism isn't a form of ethics, it's still necessary to have principles, as in deontology or two-level consequentialism

(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn't well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved

(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact

comment by ChristianKl · 2021-11-19T17:31:32.918Z · LW(p) · GW(p)

If you, for example, want the criticism of GiveWell: Ben Hoffman was employed at GiveWell and had experiences that suggest that the process based on which their reports are made has epistemic problems. If you want the details, talk to him.

The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporate people, and the dynamics that the immoral maze sequence describes play themselves out.

Vassar's actions themselves are about doing altruistic work more directly, by looking for the most powerless people who need help and working to help them. In the COVID case he identified prisoners and then worked on making PPE available for them.

You might say his thesis is that "effective" in EA is about adding a management layer for directing interventions, and that management layer has the problems that the immoral maze sequence describes. According to Vassar, someone who wants to be altruistic shouldn't delegate their judgments of what's effective, and thus warrants support, to other people.

comment by jefftk (jkaufman) · 2021-10-24T00:57:42.374Z · LW(p) · GW(p)

If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information.

Link? I'm not finding it

Replies from: ChristianKl
comment by ChristianKl · 2021-10-24T07:28:01.101Z · LW(p) · GW(p)

https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=zqcynfzfKma6QKMK9

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-25T20:41:26.948Z · LW(p) · GW(p)

I think what you're pointing to is:

I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting)

I'm getting a bit pedantic, but I wouldn't gloss this as "CEA used legal threats to cover up Leverage related information". Partly because the original bit is vague, but also because "cover up" implies that the goal is to hide information.

For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.

Replies from: habryka4, ChristianKl, ChristianKl
comment by habryka (habryka4) · 2021-10-26T18:59:25.254Z · LW(p) · GW(p)

Yep, I think the situation is closer to what Jeff describes here, though I honestly don't actually know, since people tend to get cagey when the topic comes up.

comment by ChristianKl · 2021-10-25T23:02:06.342Z · LW(p) · GW(p)

In that timeframe, CEA and Leverage were running the Pareto Fellowship together. If you read the common knowledge post, you find people reporting that they were misled by CEA because the announcement didn't mention that the Pareto Fellowship was largely run by Leverage.

On their mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved in the Pareto Fellowship, saying only: "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers."

That does look to me like hiding information about the cooperation between Leverage and CEA. 

I do think that publicly presuming that people who hide information have something to hide is useful. If there's nothing to hide, I'd love to know what happened back then, or who thinks what happened should stay hidden. At the minimum, I do think that CEA withholding the information that the people who went to their programs spent their time in what now appears to be a cult is something that CEA should be open about on their mistakes page.

Replies from: habryka4, jkaufman
comment by habryka (habryka4) · 2021-10-26T19:04:30.226Z · LW(p) · GW(p)

Yep, I think CEA has in the past straightforwardly misrepresented things (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage's history with Effective Altruism. I think this was bad, and continues to be bad.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-27T18:06:52.816Z · LW(p) · GW(p)

My initial thought on reading this was 'this seems obviously bad', and I assumed this was done to shield CEA from reputational risk.

Thinking about it more, I could imagine an epistemic state I'd be much more sympathetic to: 'We suspect Leverage is a dangerous cult, but we don't have enough shareable evidence to make that case convincingly to others, or we aren't sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don't feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can't say anything we expect others to find convincing. So we'll have to just steer clear of the topic for now.'

Still seems better to just not address the subject if you don't want to give a fully accurate account of it. You don't have to give talks on the history of EA!

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-27T18:35:20.153Z · LW(p) · GW(p)

I think the epistemic state of CEA was some mixture of something pretty close to what you list here, and something that I would put closer to something more like "Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth".

Replies from: ChristianKl
comment by ChristianKl · 2021-10-27T19:39:22.838Z · LW(p) · GW(p)

"Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth"

That has the corollary: "We don't expect EAs to care enough about the truth/being transparent that this is a huge reputational risk for us."

comment by jefftk (jkaufman) · 2021-10-26T01:38:30.546Z · LW(p) · GW(p)

It does look weird to me that CEA doesn't include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:

Hi CEA,

On https://www.centreforeffectivealtruism.org/our-mistakes I see "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers. We realized during and after the program that senior management did not provide enough oversight of the program. For example, reports by some applicants indicate that the interview process was unprofessional and made them deeply uncomfortable."

Is there a reason that the mistakes page does not mention the involvement of Leverage in the Pareto Fellowship? [1]

Jeff

[1] https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=znudKxFhvQxgDMv7k [LW(p) · GW(p)]

Replies from: jkaufman
comment by jefftk (jkaufman) · 2021-10-27T19:18:29.147Z · LW(p) · GW(p)

They wrote back, linking me to https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QcdhTjqGcSc99sNN [LW(p) · GW(p)]

("we're working on a couple of updates to the mistakes page, including about this")

comment by ChristianKl · 2021-11-18T17:25:58.951Z · LW(p) · GW(p)

I talked with Geoff, and according to him there's no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.

Replies from: habryka4
comment by habryka (habryka4) · 2021-11-18T22:33:31.182Z · LW(p) · GW(p)

Huh, that's surprising, if by that he means "no contracts between anyone currently at Leverage and anyone at CEA". I currently still think it's the case, though I also don't see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees? 

Replies from: ChristianKl
comment by ChristianKl · 2021-11-19T08:34:37.241Z · LW(p) · GW(p)

What he said is compatible with ex-CEA people still being bound by the NDAs they signed while they were at CEA. I don't think anything happened that releases ex-CEA people from those NDAs.

The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if it had an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn't unilaterally lift the settlement contract.

Public pressure on CEA seems to be necessary to get the information out in the open.

comment by ChristianKl · 2021-10-21T09:46:32.847Z · LW(p) · GW(p)

Talking with Vassar feels very intellectually alive, maybe like a high density of insight porn. I imagine that the people Ben talks about wouldn't get much enjoyment out of insight porn either, so that emotional impact isn't there.

There's probably also an element that plenty of people who can normally follow an intellectual conversation can't keep up in a conversation with Vassar, and are left afterward with a bunch of different ideas that lack order in their minds. I imagine that sometimes there's an idea overload that prevents people from critically thinking through some of the ideas.

If you have a person who hasn't gone to college, they are used to encountering people who make intellectual arguments that go over their head, and they have a way to deal with that.

From meeting Vassar, I don't feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff). 

Replies from: Benquo
comment by Benquo · 2021-10-22T03:58:29.714Z · LW(p) · GW(p)

This seems mostly right; they're more likely to think "I don't understand a lot of these ideas, I'll have to think about this for a while" or "I don't understand a lot of these ideas, he must be pretty smart and that's kinda cool" than to feel invalidated by this and try to submit to him in lieu of understanding.

comment by Benquo · 2021-10-22T02:31:16.135Z · LW(p) · GW(p)

The people I know who weren't brought up to go to college have more experience navigating concrete threats and dangers, which can't be avoided through conformity, since the system isn't set up to take care of people like them. They have to know what's going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.

In general this means that they're much more comfortable with the kind of confrontation Vassar engages in, than high-class people are.

Replies from: Hazard
comment by Hazard · 2021-10-23T01:42:59.344Z · LW(p) · GW(p)

This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.

comment by NancyLebovitz · 2021-11-19T10:01:54.880Z · LW(p) · GW(p)

This is interesting to me because I was brought up to go to college, but I didn't take it seriously (plausibly from depression or somesuch), and I definitely think of him as a guy with an interesting perspective. Okay, a smart guy with an interesting perspective, but not a god.

It had never occurred to me before that maybe people who were brought up to assume they were going to college might generally have a different take on the world than I do.

comment by jessicata (jessica.liu.taylor) · 2021-10-17T22:27:05.562Z · LW(p) · GW(p)

I feel pretty defensive reading and responding to this comment, given a previous conversation with Scott Alexander where he said his professional opinion would be that people who have had a psychotic break should be on antipsychotics for the rest of their life (to minimize risks of future psychotic breaks). This has known severe side effects like cognitive impairment and brain shrinkage and lacks evidence of causing long-term improvement. When I was on antipsychotics, my mental functioning was much lower (noted by my friends) and I gained weight rapidly. (I don't think short-term use of antipsychotics was bad, in my case)

It is in this context that I'm reading that someone talking about the possibility of mental subprocess implantation ("demons") should be "treated as a psychological emergency", when the Eric Bryulant case had already happened, and talking about the psychological processes was necessary for making sense of the situation. I feared involuntary institutionalization at the time, quite a lot, for reasons like this.

If someone expresses opinions like this, and I have reason to believe they would act on them, then I can't believe myself to have freedom of speech. That might be better than them not sharing the opinions at all, but the social structural constraints this puts me under are obvious to anyone trying to see them.

Given what happened, I don't think talking to a normal therapist would have been all that bad in 2017, in retrospect; it might have reduced the overall amount of psychiatric treatment needed during that year. I'm still really opposed to the coercive "you need professional help" framing in response to sharing weird thoughts that might be true, instead of actually considering them, like a Bayesian.

Replies from: Yvain, sil-ver
comment by Scott Alexander (Yvain) · 2021-10-17T23:18:14.646Z · LW(p) · GW(p)

I don't remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. E.g. this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerously psychotic again, then lifetime antipsychotics are probably their best bet. In a case like that, I would want the patient's buy-in, i.e. if they were medicated after a psychotic episode I would advise them of the reasons why continued antipsychotic use was recommended in their case; if they said they didn't want it, we would explore why, given the very high risk level; and if they still said they didn't want it, then I would follow their direction.

I didn't get a chance to talk to you during your episode, so I don't know exactly what was going on. I do think that psychosis should be thought of differently than just "weird thoughts that might be true", as more of a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom. I think in mild psychosis it's possible to snap someone back to reality where they agree their weird thoughts aren't true, but in severe psychosis it isn't (I remember when I was a student I tried so hard to convince someone that they weren't royalty, hours of passionate debate, and it just did nothing). I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom. Analogy to eg someone having chest pain from a heart attack, and you give them painkillers for the pain but don't treat the heart attack.

(although there's a separate point where it would be wrong and objectifying to falsely claim someone who's just thinking differently is psychotic or pre-psychotic, given that you did end up psychotic it doesn't sound like the people involved were making that mistake)

My impression is that some medium percent of psychotic episodes end in permanent reduced functioning, and some other medium percent end in suicide or jail or some other really negative consequence, and this is scary enough that treating it is always an emergency, and just treating the symptom but leaving the underlying condition is really risky.

I agree many psychiatrists are terrible and that wanting to avoid them is a really sympathetic desire, but when it's something really serious like psychosis I think of this as like wanting to avoid surgeons (another medical profession with more than its share of jerks!) when you need an emergency surgery.

Replies from: jessica.liu.taylor, hg00, TekhneMakre
comment by jessicata (jessica.liu.taylor) · 2021-10-17T23:25:14.830Z · LW(p) · GW(p)

I don’t remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

Ok, the opinions you've described here seem much more reasonable than what I remember, thanks for clarifying.

I do think that psychosis should be thought of differently than just “weird thoughts that might be true”, since it’s a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom.

I agree, yes. I think what I was afraid of at the time was being called crazy and possibly institutionalized for thinking somewhat weird thoughts that people would refuse to engage with, and showing some signs of anxiety/distress that were in some ways a reaction to my actual situation. By the time I was losing sleep etc, things were quite different at a physiological level and it made sense to treat the situation as a psychiatric emergency.

If you can show someone that they're making errors that correspond to symptoms of mild psychosis, then telling them that and suggesting corresponding therapies to help with the underlying problem seems pretty reasonable.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2021-10-17T23:44:23.947Z · LW(p) · GW(p)

Thanks, if you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.

comment by hg00 · 2021-10-18T09:53:08.251Z · LW(p) · GW(p)

I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom.

If psychosis is caused by an underlying physiological/biochemical process, wouldn't that suggest that e.g. exposure to Leverage Research wouldn't be a cause of it?

If being part of Leverage is causing less reality-based thoughts and nudging someone into mild psychosis, I would expect that being part of some other group could cause more reality-based thoughts and nudge someone away from mild psychosis. Why would causation be possible in one direction but not the other?

I guess another hypothesis here is that some cases are caused by social/environmental factors and others are caused by biochemical factors. If that's true, I'd expect changing someone's environment to be more helpful for the former sort of case.

comment by TekhneMakre · 2021-10-17T23:38:25.826Z · LW(p) · GW(p)

[probably old-hat [ETA: or false], but I'm still curious what you think] My (background unexamined) model of psychosis-> schizophrenia is that something, call it the "triggers", sets a person on a trajectory of less coherence / grounding; if the trajectory isn't corrected, they just go further and further. The "triggers" might be multifarious; there might be "organic" psychosis and "psychic" psychosis, where the former is like what happens from lead poisoning, and the latter is, maybe, what happens when you begin to become aware of some horrible facts. If your brain can rearrange itself quickly enough to cope with the newly known reality, your trajectory points back to the ground. If it can't, you might have a chain reaction where (1) horrible facts you were previously carefully ignoring, are revealed because you no longer have the superstructure that was ignore-coping with them; (2) your ungroundedness opens the way to unepistemic beliefs, some of which might be additionally horrifying if true; (3) you're generally stressed out because things are going wronger and wronger, which reinforces everything.

If this is true, then your statement:

. I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that's kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom

is only true for some values of "guide them back to reality-based thoughts". If you're trying to help them go back to ignore-coping, you might partly succeed, but not in a stable way, because you only pushed the ball partway back up the hill (to mix metaphors): the ball is still on a slope and will roll back down when you stop pushing; the horrible fact is still revealed and will keep being horrifying. But there are other things you could do, like helping them find a non-ignore-cope for the fact, or showing them enough that they become convinced that the belief isn't true.

comment by Rafael Harth (sil-ver) · 2021-10-17T22:55:13.397Z · LW(p) · GW(p)

There is this basic idea (I think from an old blogpost that Eliezer wrote) that if someone says there are goblins in the closet, dismissing them outright is confusing rationality with trust in commonly held claims, whereas the truly rational thing is to just open the closet and look.

I think this is correct in principle but not applicable in many real-world cases. The real reason why even rational people routinely dismiss many weird explanations for things isn't that they have sufficient evidence against them, it's that the weird explanation is inconsistent with a large set of high confidence beliefs that they currently hold. If someone tells me that they can talk to their deceased parents, I'm probably not going to invest the time to test whether they can obtain novel information this way; I'm just going to assume they're delusional because I'm confident spirits don't exist.

That said, if that someone helped write the logical induction paper, I personally would probably hear them out regardless of how weird the thing sounds. Nonetheless, I think it remains true that dismissing beliefs without considering the evidence is often necessary in practice.

Replies from: TekhneMakre, CronoDAS
comment by TekhneMakre · 2021-10-17T23:11:37.541Z · LW(p) · GW(p)
If someone tells me that they can talk to their deceased parents, I'm probably not going to invest the time to test whether they can obtain novel information this way; I'm just going to assume they're delusional because I'm confident spirits don't exist.

This is failing to track ambiguity in what's being referred to. If there's something confusing happening--something that seems important or interesting, but that you don't yet have words to articulate well--then you try to say what you can (e.g. by talking about "demons"). In your scenario, you don't know exactly what you're dismissing. You can confidently dismiss, in the absence of extraordinary evidence, that (1) their parents' brains have been rotting in the ground, and (2) they are talking with their parents, in the same way you talk to a present friend; you can't confidently dismiss, for example, that they are, from their conscious perspective, gaining information by conversing with an entity that's naturally thought of as their parents (which we might later describe as: they have separate structure in them, not integrated with their "self", that encoded thought patterns from their parents, blah blah blah etc.). You can say "oh well yes of course if it's *just a metaphor* maybe I don't want to dismiss them", but the point is that from a partially pre-theoretic confusion, it's not clear what's a metaphor, and it requires further work to disambiguate what's a metaphor.

comment by CronoDAS · 2021-10-18T06:38:17.543Z · LW(p) · GW(p)

As the joke goes, there's nothing crazy about talking to dead people. When dead people respond, then you start worrying.

comment by nshepperd · 2021-10-18T02:22:59.509Z · LW(p) · GW(p)

I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.

Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.

Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve.

This is really, really serious. If this happened to someone closer to me I'd be out for blood, and probably legal prosecution.

Let's not minimize how fucked up this is.

Replies from: jessica.liu.taylor, devi
comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:28:23.066Z · LW(p) · GW(p)

Olivia, Devi and I all talked to people other than Michael Vassar, such as Anna Salamon. We gravitated towards the Berkeley community, which was started around Eliezer's writing. None of us are calling for blame, ostracism, or cancelling of Michael. Michael helped all of us in ways no one else did. None of us have a motive to pursue a legal case against him. Ziz's sentence you quoted doesn't implicate Michael in any crimes.

The sentence is also misleading given Devi didn't detransition afaik.

Replies from: Viliam, nshepperd
comment by Viliam · 2021-10-18T09:48:50.679Z · LW(p) · GW(p)

Jessicata, I will be blunt here. This article you wrote was [EDIT: expletive deleted] misleading. Perhaps you didn't do it on purpose; perhaps this is what you actually believe. But from my perspective, you are an unreliable narrator.

Your story, original version:

  • I worked for MIRI/CFAR
  • I had a psychotic breakdown, and I believed I was super evil
  • the same thing also happened to a few other people
  • conclusion: MIRI/CFAR is responsible for all this

Your story, updated version:

  • I worked for MIRI/CFAR
  • then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil
  • I actually used the drugs
  • I had a psychotic breakdown, and I believed I was super evil
  • the same thing also happened to a few other people
  • conclusion: I still blame MIRI/CFAR, and I am trying to downplay Vassar's role in this

If you can't see how these two stories differ, then... I don't have sufficiently polite words to describe it, so let's just say that to me these two stories seem very different.

Lest you accuse me of gaslighting, let me remind you that I am not doubting any of the factual statements you made. (I actually tried to collect them here [LW(p) · GW(p)], to separate them from the long stream of dark insinuations.) What I am saying is that you omitted a few "details", which perhaps seem irrelevant to you, but in my opinion fundamentally change the meaning of the story.

At this moment, we just have to agree to disagree, I guess.

In my opinion, the greatest mistake MIRI/CFAR made in this story, was being associated with Michael Vassar in the first place (and that's putting it mildly; at some moment it seemed like Eliezer was in love with him, he so couldn't stop praising his high intelligence... well, I guess he learned that "alignment is more important than intelligence" applies not just to artificial intelligences but also to humans), providing him social approval and easy access to people who then suffered as a consequence. They are no longer making this mistake. Ironically, now it's you, after having positioned yourself as a victim, who is blinded by his intelligence, and doesn't see the harm he causes. But the proper way to stop other people from getting hurt is to make it known that listening too much to Vassar does this, predictably. So that he can no longer use the rationalist community as a "social proof" to get people's trust.

EDIT: To explain my unkind words "after having positioned yourself as a victim", the thing I am angry about is that you publicly describe your suffering as a way to show people that MIRI/CFAR is evil. But when it turns out that Michael Vassar is more directly responsible for it, suddenly the angle changes and he actually "helped you".

So could you please make up your mind? Is having a psychotic breakdown and spending a few weeks catatonic in hospital a good thing or a bad thing? Is it trauma, or is it jailbreaking? Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for.

Replies from: Eliezer_Yudkowsky, TekhneMakre, Unreal, countingtoten, jessica.liu.taylor
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-18T15:33:05.471Z · LW(p) · GW(p)

I could be very wrong, but the story I currently have about this myself is that Vassar himself was a different and saner person before he used too much psychedelics. :( :( :(

Replies from: orthonormal, Viliam, jimrandomh
comment by orthonormal · 2021-10-19T01:32:01.556Z · LW(p) · GW(p)

Non-agenda'd question: about when did you notice changes in him?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-19T05:37:08.812Z · LW(p) · GW(p)

My autobiographical episodic memory is nowhere near good enough to answer this question, alas.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-19T07:21:05.730Z · LW(p) · GW(p)

Do you have a timeline of when you think that shift happened? That might make it easier for other people who knew Vassar at the time to say whether their observation matched yours.

comment by Viliam · 2021-10-18T17:35:40.503Z · LW(p) · GW(p)

That... must have hurt a lot.

(I hope your story is right.)

comment by jimrandomh · 2021-10-19T09:24:14.588Z · LW(p) · GW(p)

I saw him make some questionable drug-use decisions at Burning Man in 2011 and 2012, including larger-than-normal doses, and I don't think I saw all of it.

Replies from: Tenoke
comment by Tenoke · 2021-10-21T12:57:08.536Z · LW(p) · GW(p)

A lot of people take a lot of drugs at big events like Burning Man with little issue. In my observation, it's typically the overly frequent and/or targeted psychedelic use that causes such big changes, at least in those that start off fairly stable.

comment by TekhneMakre · 2021-10-18T11:00:18.968Z · LW(p) · GW(p)
you publicly describe your suffering as a way to show people that MIRI/CFAR is evil.

Could you expand more on this? E.g. what are a couple sentences in the post that seem most trying to show this.

Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for.

I appreciate the thrust of your comment, including this sentence, but also this sentence seems uncharitable, like it's collapsing down stuff that shouldn't be collapsed. For example, it could be that the MIRI/CFAR/etc. social field could set up (maybe by accident, or even due to no fault of any of the "central" people) the conditions where "psychosis" is the best of the bad available options; in which case it makes sense to attribute causal fault to the social field, not to a person who e.g. makes that clear to you, and therefore more proximately causes your breakdown. (Of course there's disagreement about whether that's the state of the world, but it's not necessarily incoherent.)

I do get the sense that jessicata is relating in a funny way to Michael Vassar, e.g. by warping the narrative around him while selectively posing as "just trying to state facts" in relation to other narrative fields; but this is hard to tell, since it's also what it might look like if Michael Vassar was systematically scapegoated, and jessicata is reporting more direct/accurate (hence less bad-seeming) observations.

comment by Unreal · 2021-10-18T13:16:02.437Z · LW(p) · GW(p)

Where did jessicata corroborate this sentence "then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil" ? 

comment by countingtoten · 2021-10-18T11:56:43.862Z · LW(p) · GW(p)

I should note that, as an outsider, the main point I recall Eliezer making in that vein is that he used Michael Vassar as a model for the character who was called Professor Quirrell. As an outsider, I didn't see that as an unqualified endorsement - though I think your general message should be signal-boosted.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T14:21:10.652Z · LW(p) · GW(p)

The claim that Michael Vassar is substantially like Quirrell seems strange to me. Where did you get the claim that Eliezer modelled Quirrell after Vassar?

To make the claim a bit more based on public data, take Vassar's TEDx talk. I think it gives a good impression of how Vassar thinks. There are some official statistics that back the life expectancy claim he makes about Jordan, so I think there's a good chance that Vassar here actually believes what he says.

If you look deeper, however, Jordan's life expectancy is not as high as Vassar asserts. Given that the video is in the public record, that's an error anybody can find who tries to check what Vassar is saying. I don't think it's in Vassar's interest to give a public talk like that with claims that are easily found to be wrong by fact-checking. Quirrell wouldn't have made an error like this; he is a lot more controlled.

Eliezer made Vassar president of the precursor of MIRI. That's a strong signal of trust and endorsement.

Replies from: countingtoten, Davis_Kingsley
comment by Davis_Kingsley · 2021-10-18T14:43:13.518Z · LW(p) · GW(p)

Eliezer has openly said Quirrell's cynicism is modeled after a mix of Michael Vassar and Robin Hanson.

comment by jessicata (jessica.liu.taylor) · 2021-10-18T14:10:17.022Z · LW(p) · GW(p)

But from my perspective, you are an unreliable narrator.

I appreciate you're telling me this given that you believe it. I definitely am in some ways, and try to improve over time.

then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil

I said in the text that (a) there were conversations about corruption in EA institutions, including about the content of Ben Hoffman's posts, (b) I was collaborating with Michael Vassar at the time, (c) Michael Vassar was commenting about social epistemology. I admit that connecting points (a) and (c) would have made the connection clearer, but it wouldn't have changed the text much.

In cases where someone was previously part of a "cult" and later says it was a "cult" and abusive in some important ways, there has to be a stage where they're thinking about how bad the social context was, and practically always, that involves conversations with other people who are encouraging them to look at the ways their social context is bad. So my having conversations where people try to convince me CFAR/MIRI are evil is expected given what else I have written.

Besides this, "in order to get a psychotic breakdown" is incredibly false about his intentions, as Zack Davis points out [LW · GW].

I actually used the drugs

This was not in the literally initial version of the post but was included within a few hours, I think, when someone pointed out to me that it was relevant.

But the proper way to stop other people from getting hurt is to make it known that listening too much to Vassar does this, predictably.

As I pointed out [LW(p) · GW(p)], this doesn't obviously attribute less "spooky mind powers" to Michael Vassar compared with what Leverage was attributing to people, where Leverage attributing this (and isolating people from each other on the basis of it) was considered crazy and abusive. Maybe he really was this influential, but logical consistency is important here.

But when it turns out that Michael Vassar is more directly responsible for it, suddenly the angle changes and he actually “helped you”.

In this comment [LW(p) · GW(p)] I'm saying he has an unclear and probably low amount of responsibility, so this is a misread.

So could you please make up your mind?

I was pretty clear in the text that there were trauma symptoms resulting from these events and they also had advantages such as gaining a new perspective, and that overall I don't regret working at MIRI. I was also clear that there are relatively better and worse social contexts in which to experience psychosis symptoms, and hospitalization indicates a relatively worse social context.

comment by nshepperd · 2021-10-18T02:42:24.988Z · LW(p) · GW(p)

None of us are calling for blame, ostracism, or cancelling of Michael.

What I'm saying is that the Berkeley community should be.

Ziz’s sentence you quoted doesn’t implicate Michael in any crimes.

Supplying illicit drugs is a crime (but perhaps the drugs were BYO?). IDK if doing so and negligently causing permanent psychological injury is a worse crime, but it should be.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:46:33.279Z · LW(p) · GW(p)

I'm not going to comment on drug usage in detail for legal reasons, except to note that some such drugs are legal in some places, such as marijuana in CA.

It doesn't make sense to attribute unique causal responsibility for psychotic breaks to anyone, except maybe to the person it's happening to. There are lots of people all of us were talking to in that time period who influenced us, and multiple people were advocating psychedelic use. Not all cases happened to people who were talking significantly with Michael around the time. As I mentioned in the OP, as I was becoming more psychotic, people tried things they thought might help, which generally didn't, and they could have done better things instead. Even causal responsibility doesn't imply blame, e.g. Eliezer had some causal responsibility due to writing things that attracted people to the Berkeley scene where there were higher-variance psychological outcomes. Michael was often talking with people who were already "not ok" in important ways, which probably affects the statistics.

comment by devi · 2021-10-18T16:59:17.952Z · LW(p) · GW(p)

Please see my comment on the grandparent [LW(p) · GW(p)].

I agree with Jessica's general characterization that this is better understood as multi-causal rather than the direct cause of actions by one person.

comment by jimrandomh · 2021-10-18T23:40:53.352Z · LW(p) · GW(p)

Relevant bit of social data: Olivia is the most irresponsible-with-drugs person I've ever met, by a sizeable margin; and I know of one specific instance (not a person named in your comment or any other comments on this post) where Olivia gave someone an ill-advised drug combination and they had a bad time (though not a psychotic break).

Replies from: Viliam
comment by Viliam · 2021-10-19T08:50:12.036Z · LW(p) · GW(p)

gave someone an ill-advised drug combination and they had a bad time

I don't remember specific names, but something similar happened at one of the first rationality minicamps. Technically, this was not about drugs but some supplements (i.e. completely legal things), but there was someone mixing various kinds of powders and saying "yeah, trust me, I have a lot of experience with this, I did a lot of research, it is perfectly safe to take a dose this high, really", and then an ambulance had to be called.

So, I assume you meant that Olivia goes even far beyond this, right?

Replies from: jimrandomh
comment by jimrandomh · 2021-10-19T09:49:33.127Z · LW(p) · GW(p)

My memory of the RBC incident you're referring to was that it wasn't supplements that did it, it was a caffeine overdose from energy drinks leading into a panic attack. But there were certainly a lot of supplements around and they could've played a role I didn't know about.

When I say that I believe Olivia is irresponsible with drugs, I'm not excluding the unscheduled supplements, but the story I referred to involved the scheduled kind.

comment by Scott Alexander (Yvain) · 2021-10-18T20:55:15.797Z · LW(p) · GW(p)

I've posted an edit/update above after talking to Vassar.

comment by gwern · 2021-10-18T02:03:44.503Z · LW(p) · GW(p)

A question for the 'Vassarites', if they will: were you doing anything like the "unihemispheric sleep" exercise (self-inducing hallucinations/dissociative personalities by sleep deprivation) the Zizians are described as doing?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:10:44.265Z · LW(p) · GW(p)

No. All sleep deprivation was unintentional (anxiety-induced in my case).

comment by ChristianKl · 2021-10-18T07:58:18.783Z · LW(p) · GW(p)

I banned him from SSC meetups for a combination of reasons including these

If you make bans like these, it would be worth communicating them to the people organizing SSC meetups. Especially when bans are made for the safety of meetup participants, not communicating those bans seems very strange to me.

After he left the Bay Area, Vassar lived in Berlin for a while, and for decisions about whether or not to make an effort to integrate someone like him (and invite him to LW and SSC meetups), that kind of information is valuable. Bay Area people not sharing it, while claiming to do anything that would work in practice like a ban, feels misleading.

For reasons I don't fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything's kind of been frozen in place since then.

I think Vassar left the Bay Area more than a year before COVID happened. As far as I remember, his stated reasoning was something along the lines of everyone in the Bay Area getting mindkilled by leftish ideology.

Replies from: Yvain, Viliam
comment by Scott Alexander (Yvain) · 2021-10-18T10:34:57.020Z · LW(p) · GW(p)

It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn't publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.

Replies from: ChristianKl, ChristianKl
comment by ChristianKl · 2021-10-18T10:59:49.110Z · LW(p) · GW(p)

If there are bans that are supposed to be enforced, mentioning that in the mails that go out to organizers for an ACX Everywhere event would make sense. I'm not 100% sure that I got all the mails, because Ruben forwarded mails for me (I normally organize LW meetups in Berlin and support Ruben with the SSC/ACX meetups), but in those there was no mention of the word ban.

I don't think it needs to be public, but having such information in a mail like the one from Aug 23 would likely be necessary for a good portion of the meetup organizers to know that there's an expectation that certain people aren't welcome.

comment by ChristianKl · 2021-10-18T22:28:56.626Z · LW(p) · GW(p)

https://www.lesswrong.com/posts/iWWjq5BioRkjxxNKq/michael-vassar-at-the-slatestarcodex-online-meetup [LW · GW] seems to have happened after that point in time. Vassar not only attended a Slate Star Codex meetup but was central to it, presenting his thoughts.

Replies from: JoshuaFox
comment by JoshuaFox · 2021-10-21T12:56:13.807Z · LW(p) · GW(p)

I organized that, so let me say that:

  • That online meetup, or the invitation to Vassar, was not officially affiliated to or endorsed by SSC. Any responsibility for inviting him is mine.
  • I have conversed with him a few times, as follows:
  • I met him in Israel around 2010. He was quite interesting, though he did try to get me to withdraw my retirement savings to invest with him. He was somewhat persuasive. During our time in conversation, he made some offensive statements, but I am perhaps less touchy about such things than the younger generation.
  • In 2012, he explained Acausal Trade to me, and that was the seed of this post [? · GW]. That discussion was quite sensible and I thank him for that.
  • A few years later, I invited him to speak at LessWrong Israel.  At that time I thought him a mad genius -- truly both.  His talk was verging on incoherence, with flashes of apparent insight.
  • Before the online meetup, in 2021, he insisted on a preliminary talk; he made statements that produced twinges of persuasiveness. (Introspecting on that is kind of interesting, actually.) I stayed with it for 2 or more hours before begging off, because it was fascinating in a way. I was able to analyze his techniques as Dark Arts. Apparently I am mature enough to shrug off such techniques.
  • His talk at my online meetup was even less coherent than any before, with multiple offensive elements. Indeed, I believe it was a mistake to have him on.

If I have offended anyone, I apologize, though I believe that letting someone speak is generally not something to be afraid of. But I wouldn't invite him again.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-21T14:07:20.991Z · LW(p) · GW(p)

It seems to me that despite organizing multiple SSC events you had no knowledge that Vassar was banned from SSC events. Nor did anyone reading the event announcement know, to the extent that they would have told you Vassar was banned before the event happened.

To me that suggests there's a problem with not sharing information about who's banned with those organizing meetups in an effective way, so that a ban has the consequences one would expect it to have.

comment by Viliam · 2021-10-18T10:31:13.463Z · LW(p) · GW(p)

It might be useful to have a global blacklist somewhere. Possible legal consequences, if someone decides to sue you for libel. (Perhaps the list should only contain the names, not the reasons?)

EDIT: Nevermind. There are more things I would like to say about this, but this is not the right place. Later I may write a separate article explaining the threat model I had in mind.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T15:14:20.825Z · LW(p) · GW(p)

Legal threats matter a great deal for what can be done in a situation like this.

When it comes to a "global blacklist" there's the question of governance: who decides who's on it and who isn't. When it comes to SSC or ACX meetups the governance question is clear: anybody who's organizing a meetup under those labels should follow Scott's guidance.

That however only works if that information is communicated to meetup organizers. 

comment by Desrtopa · 2021-10-18T01:55:31.851Z · LW(p) · GW(p)

So, it's been a long time since I actually commented on Less Wrong, but since the conversation is here...

Hearing about this is weird for me, because I feel like, compared to the opinions I heard about him from other people in the community, I kind of... always had uncomfortable feelings about Mike Vassar? And I say this without having had direct personal contact with him except, IIRC, maybe one meetup I attended where he was there and we didn't talk directly, although we did occasionally participate in some of the same conversations online.

 

By all accounts, it sounds like he's always been quite charismatic in person, and this isn't the first time I've heard someone describe him as a "wizard." But empirically, there are some people who're very charismatic who propagate some really bad ideas and whose impacts on the lives of people around them, or on society at large, can be quite negative. As of last I was paying attention to him, I wouldn't have expected Mike Vassar to have that negative an effect on the lives of the people around him, but I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought to be taken. He evoked in a lot of people that feeling of "if these ideas are true, this is really huge," but... there's no shortage of ideas you can say that about, and I was always confused by the degree of credence people gave that his ideas were worth taking seriously. He always gave me a cult leaderish impression, in a way that, say, Eliezer never did, by encouraging other people to take seriously ideas which I couldn't understand why they didn't treat with more skepticism.

I haven't thought about him in quite some time now, but I still distinctly remember that feeling of "why do these smart people around me take this person so seriously? I just don't see how his explanations of his ideas justify that."

Replies from: vanessa-kosoy, Viliam, CronoDAS
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-18T12:51:26.644Z · LW(p) · GW(p)

I met Vassar once. He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd. Something about his delivery was so captivating that it took me a while to "shake off the fairy dust" and realize just how silly some of his claims were, even when it should have been obvious from the start. Moreover, his worldview seemed heavily based on paranoid / conspiracy-theory types of thinking. So, yes, I'm not too surprised by Scott's revelations about him.

Replies from: Wei_Dai
comment by Wei_Dai · 2021-10-20T03:21:35.720Z · LW(p) · GW(p)

He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd.

Yeah, it definitely didn't work on me. I believe I wrote this thread [LW(p) · GW(p)] shortly after my one-and-only interaction with him, in which he said a lot of things that made me very skeptical but that I couldn't easily refute, or didn't have much time to think about, before he would move on to some other topic. (Interestingly, he actually replied in that thread even though I didn't mention him by name.)

It saddens me to learn that his style of conversation/persuasion "works" on many people who otherwise seem very smart and capable (and even self-selected for caring about being rational). It seems like pretty bad news as far as what kind of epistemic situation humanity is in (e.g., how easily we will be manipulated by even slightly-smarter-than-human AIs / human-AI systems).

Replies from: Wei_Dai, Kenny
comment by Wei_Dai · 2021-10-20T07:13:17.052Z · LW(p) · GW(p)

(Interestingly, he actually replied in that thread even though I didn’t mention him by name.)

Oh, this is because the OP that I was replying to [LW · GW] did mention him by name:

One of the things that makes Michael Vassar an interesting person to be around is that he has an opinion about everything. If you locked him up in an empty room with grey walls, it would probably take the man about thirty seconds before he'd start analyzing the historical influence of the Enlightenment on the tradition of locking people up in empty rooms with grey walls.

comment by Kenny · 2021-10-20T03:31:36.640Z · LW(p) · GW(p)

It should raise reasonable suspicion when you see that, most of the time, they link/reference internal posts or just an external post from some random blog talking about really obscure and abstract ideas without ever touching on anything concrete. Even when they reference an actual research paper, they don't bother reading the paper itself and just copy and paste the abstract, as if that's good enough to support their argument, like mysticism.

Vetting of sources that are referenced is important, but you can only do so much, especially in the information age. There is a lot of stuff out there. Everyone wants to pass on their genes, but in cults, people actually try to have sex with the women involved. Such a shame; it just becomes a means to an end for their personal greed.

comment by Viliam · 2021-10-18T10:26:40.363Z · LW(p) · GW(p)

I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought to be taken.

Heh, the same feeling here. I didn't have much opportunity to interact with him in person. I remember repeatedly hearing praise about how incredibly smart he is (from people whom I admired), then trying to find something smart written by him, and feeling unimpressed and confused, like maybe I wasn't reading the right texts or I failed to discover the hidden meaning that people smarter than me have noticed.

Hypothesis 1: I am simply not smart enough to recognize his greatness. I can recognize people one level above me, and they can recognize people one level above them, but when I try to understand someone two levels above me, it's all gibberish to me.

Hypothesis 2: He is more persuasive in person than in writing. (But once he impressed you in person, you will now see greatness in his writing, too. Maybe because of the halo effect. Maybe because now you understand the hidden layers of what he actually meant by that.) Maybe he is more persuasive in person because he can make his message optimized for the receiver; which might be a good thing, or a bad thing.

Hypothesis 3: He gives high-variance advice. Some of it amazingly good, some of it horribly wrong. When people take him seriously, some of them benefit greatly, others suffer. Those who benefitted will tell the story. (Those who suffered will leave the community.)

My probability distribution was gradually shifting from 1 to 3.

Replies from: AnnaSalamon, Avi Weiss
comment by AnnaSalamon · 2021-10-19T15:40:19.002Z · LW(p) · GW(p)

Not a direct response to you, but if anyone who hasn't talked to Vassar is wanting an example of Vassar-conversation that may be easier to understand or get some sense from than most examples would (though it'll have a fair bit in it that'll probably still seem false/confusing), you might try Spencer Greenberg's podcast with Vassar.

Replies from: elityre, Avi Weiss
comment by Eli Tyre (elityre) · 2021-10-20T00:14:25.382Z · LW(p) · GW(p)

As a datapoint: I listened to that podcast 4 times, and took notes 3 of those 4 times, to try and clearly parse what he's saying. I certainly did not fully succeed. 

My notes.

It seems like he said some straightforwardly contradictory things? For instance, that strong conflict theorists trust their own senses and feelings more, but also trust them less?

I would really like to understand what he's getting at by the way, so if it is clearer for you than it is for me, I'd actively appreciate clarification.

Replies from: Unreal
comment by Unreal · 2021-10-20T03:04:07.061Z · LW(p) · GW(p)

i tried reading / skimming some of that summary

it made me want to scream 

what a horrible way to view the world / people / institutions / justice 

i should maybe try listening to the podcast to see if i have a similar reaction to that 

Replies from: JenniferRM
comment by JenniferRM · 2021-10-28T18:53:46.927Z · LW(p) · GW(p)

Seeing as how you posted this 9 days ago, I hope you did not bite off more than you could chew, and I hope you do not want to scream anymore.

In Harry Potter the standard practice seems to be to "eat chocolate" and perhaps "play with puppies" after exposure to ideas that are both (1) possibly true, and (2) very saddening to think about.

Then there is Gendlin's Litany [LW · GW] (and please note that I am linking to a critique, not to unadulterated "yay for the litany" ideas) which I believe is part of Lesswrong's canon somewhat on purpose. In the critique there are second and third thoughts along these lines, which I admire for their clarity, and also for their hopefulness.

Ideally [a better version of the Litany] would communicate: “Lying to yourself will eventually screw you up worse than getting hurt by a truth,” instead of “learning new truths has no negative consequences.”

This distinction is particularly important when the truth at hand is “the world is a fundamentally unfair place that will kill you without a second thought if you mess up, and possibly even if you don’t.”

EDIT TO CLARIFY: The person who goes about their life ignoring the universe’s Absolute Neutrality is very fundamentally NOT already enduring this truth. They’re enduring part of it (arguably most of it), but not all. Thinking about that truth is depressing for many people. That is not a meaningless cost. Telling people they should get over that depression and make good changes to fix the world is important. But saying that they are already enduring everything there was to endure, seems to me a patently false statement, and makes your argument weaker, not stronger.

The reason to include the Litany (flaws and all?) in a canon would be specifically to try to build a system of social interactions that can at least sometimes talk about understanding the world as it really is. 

Then, atop this shared understanding of a potentially sad world, the social group with this litany as common knowledge might actually engage in purposive (and "ethical"?) planning processes that will work because the plans are built on an accurate perception of the barriers and risks of any given plan. In theory, actions based on such plans would mostly tend to "reliably and safely accomplish the goals" (maybe not always, but at least such practices might give one an edge) and this would work even despite the real barriers and real risks that stand between "the status quo" and "a world where the goal has been accomplished"... thus, the litany itself:

What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.

And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.

My personal experience, as a person with feelings, is that I can only work on "the hot stuff" in small motions, mostly/usually as a hobby, because otherwise the totalizing implications of some ideas threaten to cause an internal information cascade [LW · GW] that is probably abstractly undesirable, and if the cascade happens it might require the injection of additional cognitive and/or emotional labor of a very unusual sort in order to escape from the metaphorical "gravity well" of perspectives like this, which have internal logic that "makes as if to demand" that the perspective not be dropped, except maybe "at one's personal peril". 

Running away from the first hint of a non-trivial infohazard, especially an infohazard being handled without thoughtful safety measures, is a completely valid response in my book.

Another great option is "talk about it with your wisest and most caring grand parent (or parent)".

Another option is to look up the oldest versions of the idea, and examine their sociological outcomes (good and bad, in a distribution), and consider if you want to be exposed to that outcome distribution. 

Also, you don't have to jump in. You can take baby steps (one per week or one per month or one per year) and re-apply your safety checklist after each step?

Personally, I try not to put "ideas that seem particularly hot" on the Internet, or in conversations, by default, without verifying things about the audience, but I could understand someone who was willing to do so.

However also, I don't consider a given forum to be "the really real forum, where the grownups actually talk"... unless infohazards like this cause people to have some reaction OTHER than traumatic suffering displays (and upvotes of the traumatic suffering display from exposure to sad ideas).

This leads me to be curious about any second thoughts or second feelings you've had, but only if you feel ok sharing them in this forum. Could you perhaps reply with:
<silence> (a completely valid response, in my book)
"Mu." (that is, being still in the space, but not wanting to pose or commit)
"The ideas still make me want to scream, but I can afford emitting these ~2 bits of information." or 
"I calmed down a bit, and I can think about this without screaming now, and I wrote down several ideas and deleted a bunch of them and here's what's left after applying some filters for safety: <a few sentences with brief short authentic abstractly-impersonal partial thoughts>".

comment by Avi (Avi Weiss) · 2021-10-19T15:47:23.368Z · LW(p) · GW(p)

There's also these 2 podcasts which cover quite a variety of topics, for anyone who's interested:
You've Got Mel - With Michael Vassar
Jim Rutt Show - Michael Vassar on Passive-Aggressive Revolution

comment by Avi (Avi Weiss) · 2021-10-18T10:31:10.315Z · LW(p) · GW(p)

I haven't seen/heard anything particularly impressive from him either, but perhaps his 'best work' just isn't written down anywhere?

comment by CronoDAS · 2021-10-18T06:44:34.948Z · LW(p) · GW(p)

My impression as an outsider (I met him once and heard and read some things people were saying about him) was that he seemed smart but also seemed like kind of a kook...

comment by jessicata (jessica.liu.taylor) · 2021-12-18T21:00:14.973Z · LW(p) · GW(p)

I have replied to this comment in a top-level post [LW · GW].

comment by Dr_Manhattan · 2021-10-19T00:31:15.765Z · LW(p) · GW(p)

Since comments get occluded you should refer to an edit/update somewhere at the top if you want it to be seen by those who already read your original comment.

comment by Yoav Ravid · 2021-10-27T18:32:23.735Z · LW(p) · GW(p)

Is this the highest rated comment on the site?

comment by mingyuan · 2021-10-19T20:41:49.413Z · LW(p) · GW(p)

Okay, meta: This post has over 500 comments now and it's really hard to keep a handle on all of the threads. So I spent the last 2 hours trying to outline the main topics that keep coming up. Most top-level comments are linked to but some didn't really fit into any category, so a couple are missing; also apologies that the structure is imperfect.

Topic headers are bolded and are organized very roughly in order of how important they seem (both to me personally and in terms of the amount of air time they've gotten). 

Replies from: Ruby
comment by Ruby · 2021-10-19T20:56:14.153Z · LW(p) · GW(p)

This is hugely helpful, a great community service! Thanks so much, mingyuan.

comment by Aella · 2021-10-17T21:35:28.698Z · LW(p) · GW(p)

I find something in me really revolts at this post, so epistemic status… not-fully-thought-through-emotions-are-in-charge?

Full disclosure: I am good friends with Zoe; I lived with her for the four months leading up to her post, and was present to witness a lot of her processing and pain. I’m also currently dating someone named in this post, but my reaction to this was formed before talking with him.

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away. If the points in the post felt more compelling, then I’d probably be more down for an argument of “we should bin these together and look at this as a whole”, but as it stands the stuff listed in here feels like it’s describing something significantly less damaging, and of a different kind of damage. I’m also annoyed that this post relies so heavily on Zoe’s, and the comparison feels like it cheapens what Zoe went through. I keep having a recurring thought that the author must have utterly failed to understand the intensity of the very direct impact from Leverage’s operations on Zoe. Most of my emotions here come from a perception that this post is actively hurting a thing I value.

Second, I suspect this post makes a crucial mistake in mistaking symptoms for the cause. Or, rather, I think there’s a core inside of what made Leverage damaging, and it’s really really hard to name it. Zoe’s post seemed like a good effort to triangulate it, but this above post feels like it focuses on the wrong things, or a different brand of analogous things, without understanding the core of what Zoe was trying to get at. Missing the core of the post is an easy mistake to make, given how it's really hard to name directly, but in this case I'm particularly sensitive to the analogy seeming superficial, given how much this post seems to be relying on Zoe's post for validation.

One example for this is comparing Zoe’s mention of someone at Leverage having a psychotic break to the author having a psychotic break. But Zoe’s point was that Leverage treated the psychotic break as an achievement, not that the psychotic break happened. 

Third, and I think this has been touched on by other comments, is that this post feels… sort of dishonest to me? I feel like something is trying to get slipped into my brain without me noticing. Lots of parts of the post implicitly present things as important, or ask you to draw conclusions without explicitly pointing out those conclusions. I might be… overfitting or trying to see a thing because I’m emotionally charged, but I’m gonna attempt to articulate the thing anyway:

For example, the author summarizes Zoe as saying that Leverage considered Geoff Anders to be extremely special, e.g. Geoff being possibly a better philosopher than Kant.

In Zoe’s post, her actual quote is of a Leverage person saying “I think there’s good reason to believe Geoff is the best philosopher who’s ever lived, better than Kant. I think his existence on earth right now is an historical event.” 

This is small but an actually important difference, and has the effect of slightly downplaying Leverage.

The author here then goes on to say that she doesn’t remember anyone saying Eliezer was a better philosopher than Kant, but that she guesses people would say this, and then points out probably nobody at MIRI read Kant. 

The effect of this is it asks the reader to associate perception of Eliezer’s status with Geoff’s (both elevated) by drawing the comparison of Kant to Eliezer (that hadn’t actually been drawn before), and then implies rationalists being misinformed (not reading Kant).

This is arguably a really uncharitable read, and I’m not very convinced it’s ‘true’, but I think the ‘effect’ is true; as in, this is the impression I got when reading quickly the first time. And the impression isn’t supported in the rest of the words, of course - the author says they don’t have reason to believe MIRI people would view Eliezer as more relevant than philosophers they respected, and that nobody there really respected Kant. But the general sense I get from the overall post is this type of pattern, repeated over and over - a sensation of being asked to believe something terrible, and then when I squint the words themselves are quite reasonable. This makes it feel slippery to me, or I feel like I’ve been struck from behind and when I turn around there’s someone smiling as they’re reaching out to shake my hand.

And to be clear, I don’t think all the comparisons are wrong, or that there’s nothing of value here. It can be super hard to sensemake with confusing narrative stuff, and there’s inevitably going to be some clumsiness in attempting to do it. I think it’s worthwhile and important to be paying close attention to the ways organizations might be having adverse effects on their members, particularly in our type of communities, and I support pointing out even small things and don’t want people to feel like they’re making too big a deal out of something minor. But the way this is done here bothers me, and I feel defensive and have stories in me about this doing more harm than good.

Replies from: mingyuan, Eliezer_Yudkowsky, Ruby, jessica.liu.taylor, Benito, Benquo, romeostevensit, 4thWayWastrel, hg00, farp
comment by mingyuan · 2021-10-20T02:52:30.833Z · LW(p) · GW(p)

I want to note that this post (top-level) now has more than 3x the number of comments that Zoe's does (or nearly 50% more comments than the Zoe+BayAreaHuman posts combined, if you think that's a more fair comparison), and that no one has commented on Zoe's post in 24 hours. [ETA: This changed while I was writing this comment. The point about lowered activity still stands.]

This seems really bad to me — I think that there was a lot more that needed to be figured out wrt Leverage, and this post has successfully sucked all the attention away from a conversation that I perceive to be much more important. 

I keep deleting sentences because I don't think it's productive to discuss how upset this makes me, but I am 100% with Aella here. I was wary of this post to begin with and I feel something akin to anger at what it did to the Leverage conversation.

I had some contact with Leverage 1.0 — had some friends there, interviewed for an ops job there, and was charted a few times by a few different people. I have also worked for both CFAR and MIRI, though never as a core staff member at either organization; and more importantly, I was close friends with maybe 50% of the people who worked at CFAR from mid-2017 to mid-2020. Someone very close to me previously worked for both CFAR and Leverage. With all that backing me up: I am really very confident that the psychological harm inflicted by Leverage was both more widespread and qualitatively different than anything that happened at CFAR or MIRI (at least since mid-2017; I don't know what things might have been like back in, like, 2012). 

The comments section of this post is full of CFAR and MIRI employees attempting to do collaborative truth-seeking. The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!

CFAR and MIRI have their flaws, and several people clearly have legitimate grievances with them. I personally did not have a super great experience working for either organization (though that has nothing to do with anything Jessica mentioned in this post; just run-of-the-mill workplace stuff). Those flaws are worth looking at, not only for the edification of the people who had bad experiences with MIRI and CFAR, but also because we care about being good people building effective organizations to make the world a better place. They do not, however, belong in a conversation about the harm done by Leverage. 

(Just writing a sentence saying that Leverage was harmful makes me feel uncomfortable, feels a little dangerous, but fuck it, what are they going to do, murder me?)

Again, I keep deleting sentences, because all I want to talk about is the depth of my agreement with Aella, and my uncharitable feelings towards this post. So I guess I'll just end here.

Replies from: ChristianKl, Avi Weiss, AnnaSalamon, Viliam, Kenny
comment by ChristianKl · 2021-10-21T10:07:50.774Z · LW(p) · GW(p)

It seems like it's relatively easy for people to share information in the CFAR+MIRI conversation. On the other hand, for those people who actually have the most central information to share in the Leverage conversation, it's not as easy to share it. 

In many cases I would expect that private in-person conversations are needed to progress the Leverage debate, and that just takes time. Those people at Leverage who want to write up their own experiences likely benefit from time to do that.

Practically, helping Anna get an overview of the timeline of members and funders, and getting people to share stories with Aella, seems to be the way forward, and that's largely not about leaving LW comments.

comment by Avi (Avi Weiss) · 2021-10-20T07:52:02.824Z · LW(p) · GW(p)

I agree with the intent of your comment mingyuan, but perhaps the reason for the asymmetry in activity on this post is simply due to the fact that there are an order of magnitude (or several orders of magnitude?) more people with some/any experience and interaction with CFAR/MIRI (especially CFAR) compared to Leverage?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2021-10-22T09:13:14.952Z · LW(p) · GW(p)

I think some of it has got to be that it's somehow easier to talk about CFAR/MIRI, rather than a sheer number of people thing. I think Leverage is somehow unusually hard to talk about, such that maybe we should figure out how to be extraordinarily kind/compassionate/gentle to anyone attempting it, or something.

Replies from: Spiracular, Avi Weiss
comment by Spiracular · 2021-10-22T14:23:57.643Z · LW(p) · GW(p)

I agree that Leverage has been unusually hard to talk about bluntly or honestly, and I think this has been true for most of its existence.

I also think the people at the periphery of Leverage, are starting to absorb the fact that they systematically had things hidden from them. That may be giving them new pause, before engaging with Leverage as a topic.

(I think that seems potentially fair, and considerate. To me, it doesn't feel like the same concern applies in engaging about CFAR. I also agree that there were probably fewer total people exposed to Leverage, at all.)


...actually, let me give you a personal taste of what we're dealing with?

The last time I choose to talk straightforwardly and honestly about Leverage, with somebody outside of it? I had to hard-override an explicit but non-legal privacy agreement*, to get a sanity check. When I was honest about having done so shortly thereafter, I completely and permanently lost one of my friendships as a result.

Lost-friend says they were traumatized as a result of me doing this. That having "made the mistake of trusting me" hurt their relationships with other Leveragers. That at the time, they wished they'd lied to me, which stung.

I talked with the person I used as a sanity-check recently, and I get the sense that I still only managed to squeeze out ~3-5 sentences of detail at the time.

(I get the sense that I still did manage to convey a pretty balanced account of what was going through my head at the time. Somehow.)


It is probably safer to talk now, than it was then. At least, that's my current view. 2 year's distance, community support, a community that is willing to be more sympathetic to people who get swept up in movements, and a taste of what other people were going through (and that you weren't the only person going through this), does tend to help matters.

(Edit: They've also shared the Ecosystem Dissolution Information Arrangement, which I find a heartening move. They mention that it was intended to be more socially-enforced than legally-binding. I don't like all of their framing around it, but I'll pick that fight later.)

It wouldn't surprise me at all, if most of this gets sorted out privately for now. Depending a bit on how this ends (largely on whether I think this kind of harm is likely to recur or not), I might not even have an objection to that.

But when it comes to Leverage? These are some of the kinds of thoughts and feelings, that I worry we may later see played a role in keeping this quiet.

Replies from: Spiracular, Unreal, Spiracular
comment by Spiracular · 2021-11-01T20:26:07.903Z · LW(p) · GW(p)

I'm finally out about my story here [LW(p) · GW(p)]! But I think I want to explain a bit of why I wasn't being very clear, for a while.

I've been "hinting darkly" in public rather than "telling my full story" due to a couple of concerns:

  1. I don't want to "throw ex-friend under the bus," to use their own words! Even friend's Leverager partner (who they weren't allowed to visit, if they were "infected with objects") seemed more "swept-up in the stupidity" than "malicious." I don't know how to tell my truth, without them feeling drowned out. I do still care about that. Eurgh.

  2. Via models that come out of my experience with Brent: I think this level of silence, makes the most sense if some ex-Leveragers did get a substantial amount of good out of the experience (sometimes with none of the bad, sometimes alongside it), and/or if there's a lot of regrettable actions taken by people who were swept up in this at the time, by people who would ordinarily be harmless under normal circumstances. I recognize that bodywork was very helpful to my friend, in working through some of their (unrelated) trauma. I am more than a little reluctant to put people through the sort of mob-driven invalidation I felt, in the face of the early intensely-negative community response to the Brent expose?

Surprisingly irrelevant for me: I am personally not very afraid of Geoff! Back when I was still a nobody, I brute-forced my way out of an agonizing amount of social-anxiety through sheer persistence. My social supports range both wide and deep. I have pretty strong honesty policies. I am not currently employed, so even attacking my workplace is a no-go. I'm planning to marry someone cool this January. Truth be told? I pity any fool who tries to character-assassinate me.

...but I know that others are scared of Geoff. I have heard the phrase "Geoff will do anything to win" bandied about so often, that I view it as something of a stereotyped phrase among Leveragers. I am honestly not sure how concerned I actually should be about it! But it feels like evidence of a narrative that I find pretty concerning, although I don't know how this narrative emerged.

comment by Unreal · 2021-10-22T15:56:39.509Z · LW(p) · GW(p)

The last time I choose to talk straightforwardly and honestly about Leverage, with somebody outside of it? I had to hard-override a privacy concern*, to get a sanity check. When I was honest about having done so shortly thereafter, I completely lost one of my friendships as a result.

Lost-friend says they were traumatized as a result of me doing this. That having "made the mistake of trusting me" hurt their relationships with other Leveragers. That at the time, they wished they'd lied to me which stung.

Any thoughts on why this was coming about in the culture? 

If anyone feels that way (like the lost friend) and wants to talk to me about it, I'd be interested in learning more about it. 

comment by Spiracular · 2021-10-22T14:50:02.972Z · LW(p) · GW(p)

* I could tell that this had some concerning toxic elements, and I needed an outside sanity-check. I think under the circumstances, this was the correct call for me. I do not regret picking the particular person I chose as a sanity-check. I am also very sympathetic to other people not feeling able to pull this, given the enormous cost to doing it at the time.

This is not a strong systematic assessment of how I usually treat privacy agreements. My harm-assessment process is usually structured a bit like this [LW(p) · GW(p)], with some additional pressure from an "agreement-to-secrecy," and also factors in the meta-secrecy-agreements around "being able to be held to secrecy agreements" and "being honest about how well you can be held to secrecy agreements."

No, I don't feel like having a long discussion about privacy policies right now. But if you care? My thoughts on information-sharing policy were valuable enough to get me into the 2019 Review.

If you start on this here, I will ignore you.

comment by Avi (Avi Weiss) · 2021-10-22T09:31:03.770Z · LW(p) · GW(p)

The fact that the people involved apparently find it uniquely difficult to talk about is a pretty good indication that Leverage != CFAR/MIRI in terms of cultishness/harms etc.

comment by AnnaSalamon · 2021-10-22T09:09:12.194Z · LW(p) · GW(p)

Yes; I want to acknowledge that there was a large cost here. (I wasn't sure, from just the comment threads; but I just talked to a couple people who said they'd been thinking of writing up some observations about Leverage but had been distracted by this.)

I am personally really grateful for a bunch of the stuff in this post and its comment thread. But I hope the Leverage discussion really does get returned to, and I'll try to lend some momentum that way. Hope some others do too, insofar as some can find ways to actually help people put things together or talk.

comment by Viliam · 2021-10-20T10:34:56.576Z · LW(p) · GW(p)

Seems to me that, given the current situation, it would probably be good to wait maybe two more days until this debate naturally reaches its end. And then restart the debate about Leverage.

Otherwise, we risk having two debates running in parallel, interfering with each other.

The comments section of this post is full of CFAR and MIRI employees attempting to do collaborative truth-seeking. The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!

Then it is good that this debate happened. (Despite my shock when I saw it first.) It's just the timing with regards to the debate about Leverage that is unfortunate.

Replies from: Puxi Deek
comment by Puxi Deek · 2021-10-20T11:15:52.965Z · LW(p) · GW(p)

When everyone knows everyone else, it's more like Facebook than, say, Reddit. I don't know why so many real-life organizations are basing their discussions on these open forums online. Maybe they want to attract more people to think about certain problems. Maybe they want to spread their genes. Either way, normal academic research doesn't involve knocking on people's doors and asking them if they are interested in doing such-and-such research. To a lesser degree, researchers don't even ask their family and friends to join their research circle. When you befriend your coworkers in the corporate world, things can get real messy real quick, depending on to what extent they are involved in or interfering with your life outside of work. Maybe that's why these organizations are distinguishing themselves from your typical workplace.

Replies from: Viliam
comment by Viliam · 2021-10-20T15:31:11.031Z · LW(p) · GW(p)

MIRI and CFAR are non-profits; they need to approach fundraising and talent-seeking differently than universities or for-profit corporations do.

In addition, neither of them is a pure research institution. MIRI's mission includes making people who work on AI, or who make important decisions about AI, aware of the risks involved. CFAR's mission includes teaching rationality techniques. Both require communication with the public.

This doesn't explain all the differences, but at least some of them.

comment by Kenny · 2021-10-20T03:09:16.810Z · LW(p) · GW(p)

The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!

So much of this on this site, it's incredible. Makes me wonder if people are consciously doing it. If they are, then why would they even join this cult in the first place? Personally, I've observed that the people who easily join cults are very impressionable. Even my wife got duped by a couple of middle-aged men. It's a different type of intelligence and skill set than the stuff they employ at colleges and research institutions.

Replies from: Viliam
comment by Viliam · 2021-10-20T11:00:20.256Z · LW(p) · GW(p)

Uhh. Sadly, this attitude is quite common, so I will try to explain. Some people are in general more gullible or easier to impress, yes. But that is just one part of the equation. The remaining parts are:

  • everyone is more vulnerable to manipulation that is compatible with their already existing opinions and desires;
  • people are differently vulnerable at different moments of their lives, so it's a question of luck whether you encounter the manipulation at your strongest or weakest moment;
  • the environment can increase or decrease your resistance: how much free time you have, how many people make a coordinated effort to convince you, whether you have enough opportunity to meet other people or stay alone and reflect on what is happening, whether something keeps you worried and exhausted, etc.

So, some people might easily believe in Mother Gaia, but never in Artificial Intelligence, for other people it is the other way round. You can manipulate some people by appealing to their selfish desires, other people by appealing to their feelings of compassion.

Many people are just lucky that they never met a manipulative group targeting specifically their weaknesses, exactly at a vulnerable moment of their lives. It is easy to laugh at people whose weaknesses are different from yours, when they fail in a situation that exploits their weaknesses.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-17T23:30:15.458Z · LW(p) · GW(p)

By way of narrowing down this sense, which I think I share, if it's the same sense: leaving out the information from Scott's comment [LW(p) · GW(p)] about a MIRI-opposed person who is advocating psychedelic use and causing psychotic breaks in people, and particularly this person talks about MIRI's attempts to have any internal info compartments as a terrible dark symptom of greater social control that you need to jailbreak away from using psychedelics, and then those people have psychotic breaks - leaving out this info seems to be not something you'd do in a neutrally intended post written from a place of grave concern about community dynamics.  It's taking the Leverage affair and trying to use it to make a point, and only including the info that would make that point, and leaving out info that would distract from that point.  And I'm not going to posture like that's terribly bad inhuman behavior, but we can see it and it's okay to admit to ourselves that we see it.

And it's also okay for somebody to think that the original Leverage affair needed to be discussed on its own terms, and not be carefully reframed in exactly the right way to make a point about a higher-profile group the author wanted to discuss instead; or to think that Leverage did a clearly bad thing, and we need to have norms against that clearly bad thing and finish up on making those norms before it's proper for anyone to reframe the issue as really being about a less clear bad thing somewhere higher-profile; and then this post is going against that and it's okay for them to be unhappy about that part.

Replies from: Benquo
comment by Benquo · 2021-10-18T01:28:28.984Z · LW(p) · GW(p)

not something you'd do in a neutrally intended post written from a place of grave concern about community dynamics

 

I'm not going to posture like that's terribly bad inhuman behavior, but we can see it and it's okay to admit to ourselves that we see it

These have the tone of allusions to some sort of accusation, but as far as I can tell you're not actually accusing Jessica of any transgression here, just saying that her post was not "neutrally intended," which - what would that mean? A post where Gricean implicature was not relevant?

Can you clarify whether you meant to suggest Jessica was doing some specific harmful thing here or whether this tone is unendorsed?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-18T02:52:20.638Z · LW(p) · GW(p)

Okay, sure.  If what Scott says is true, and it matches my recollections of things I heard earlier - though I can attest to very little of it from my direct observation - then it seems like this post was written with knowledge of things that would make the overall story arc it showed look very different, and those things were deliberately omitted.  This is more manipulation than I myself would personally consider okay to use in a situation like this one, though I am ever mindful of Automatic Norms and the privilege of being more verbally facile than others in which facts I can include but still make my own points.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:53:59.000Z · LW(p) · GW(p)

See Zack's reply here [LW(p) · GW(p)] and mine here [LW(p) · GW(p)]. Overall I didn't think the amount of responsibility was high enough for this to be worth mentioning.

comment by Ruby · 2021-10-17T22:17:50.976Z · LW(p) · GW(p)

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away...

I want to second this reaction (basically your entire second paragraph). I have been feeling the same but hadn't worked up the courage to say it.

Replies from: Freyja, Viliam, Eliezer_Yudkowsky
comment by Freyja · 2021-10-18T16:58:00.273Z · LW(p) · GW(p)

I am also mad at what I see to be piggybacking on Zoe’s post, downplaying of the harms described in her post, and a subtle redirection of collective attention away from potentially new, timid accounts of things that happened to a specific group of people within Leverage and seem to have a lot of difficulty talking about it.

I hope that the sustained collective attention required to witness, make sense of and address the accounts of harm coming out of the psychology division of Leverage doesn’t get lost as a result of this post being published when it was.

comment by Viliam · 2021-10-18T10:40:08.271Z · LW(p) · GW(p)

For a moment I actually wondered whether this was a genius-level move by Leverage, but then I decided that I am just being paranoid. But it did derail the previous debate successfully.

On the positive side, I learned some new things. Never heard about Ziz before, for example.

EDIT:

Okay, this is probably silly, but... there is no connection between the Vassarites and Leverage, right? I just realized that my level of ignorance does not justify me dismissing a hypothesis so quickly. And of course, everyone knows everyone, but there are different levels of "knowing people", and... you know what I mean, hopefully. I will defer to the judgment of people from the Bay Area about this topic.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-19T07:24:03.032Z · LW(p) · GW(p)

Outside of "these people probably talked to each other like once every few months" I think there is no major connection between Leverage and the Vassarites that I am aware of.

Replies from: Viliam
comment by Viliam · 2021-10-19T08:56:55.120Z · LW(p) · GW(p)

Thanks.

I mostly assumed this; I suppose in the opposite case someone probably would have already mentioned that. But I prefer to have it confirmed explicitly.

comment by jessicata (jessica.liu.taylor) · 2021-10-17T21:51:21.078Z · LW(p) · GW(p)

The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away.

I'm assuming that sensemaking is easier, rather than harder, with more relevant information and stories shared. I guess if it's pulling the spotlight away, it's partially because it's showing relevant facts about things other than Leverage, and partially because people will be more afraid of scapegoating Leverage if the similarities to MIRI/CFAR are obvious. I don't like scapegoating, so I don't really care if it's pulling the spotlight away for the second reason.

If the points in the post felt more compelling, then I’d probably be more down for an argument of “we should bin these together and look at this as a whole”, but as it stands the stuff listed in here feels like it’s describing something significantly less damaging, and of a different kind of damage.

I don't really understand what Zoe went through, just reading her post (although I have talked with other ex-Leverage people about the events). You don't understand what I went through, either. It was really, really psychologically disturbing. I sound paranoid writing what I wrote, but this paranoia affected so many people. What I thought was a discourse community broke down into low-trust behavior and gaslighting, and I feared violence. Someone outside the central Berkeley community just messaged me saying it's really understandable that I'd fear retribution given how important the relevant people thought the project was; it was a real risk.

Or, rather, I think there’s a core inside of what made Leverage damaging, and it’s really really hard to name it.

I'm really interested in the core being described better in the Leverage case. It would be unlikely that large parts of such a core wouldn't apply to other cases even if not to MIRI/CFAR specifically. I know I haven't done the best job I could have nailing down what was fucky about the MIRI/CFAR environment in 2017, but I've tried harder (in the online space) than anyone but Ziz, AFAICT.

This is small but an actually important difference, and has the effect of slightly downplaying Leverage.

I agree, will edit the post accordingly. I do think the fact that people were saying we wouldn't have a chance to save the world without Eliezer shows that they consider him extremely historically special.

But the general sense I get from the overall post is this type of pattern, repeated over and over—a sensation of being asked to believe something terrible, and then when I squint the words themselves are quite reasonable.

Sorry, it's possible that I'm writing not nearly as clearly as I could, and the stress of what happened might contribute some to that. But it's hard for me to identify how I'm communicating unclearly from your or Logan's description, which are both pretty vague.

But the way this deal is made bothers me, and I feel defensive and have stories in me about this doing more harm than good.

I appreciate that you're communicating about your defensiveness and not just being defensive without signalling that.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T08:14:53.616Z · LW(p) · GW(p)

I don't really understand what Zoe went through, just reading her post (although I have talked with other ex-Leverage people about the events). You don't understand what I went through, either. It was really, really psychologically disturbing. I sound paranoid writing what I wrote, but this paranoia affected so many people. 

It would probably have been better if you had focused on your experience and dropped all of the talk about Zoe from this post. That would make it easier for the reader to just take the information value from your experience.

I think that your post is still valuable information, but that added narrative layer makes it harder to interact with than it would have been if it had focused more on your experience.

comment by Ben Pace (Benito) · 2021-10-17T23:19:21.647Z · LW(p) · GW(p)

One example for this is comparing Zoe’s mention of someone at Leverage having a psychotic break to the author having a psychotic break. But Zoe’s point was that Leverage treated the psychotic break as an achievement, not that the psychotic break happened. 

From the quotes in Scott's comment [LW · GW], it seems to me also the case that Michael Vassar also treated Jessica's and Ziz's psychoses as an achievement.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2021-10-18T03:28:11.893Z · LW(p) · GW(p)

it seems to me also the case that Michael Vassar also treated Jessica's [...] psycho[sis] as an achievement

Objection: hearsay. How would Scott know this? (I wrote a separate reply about the ways in which I think Scott's comment is being unfair.) [LW(p) · GW(p)] As some closer-to-the-source counterevidence against the "treating as an achievement" charge, I quote a 9 October 2017 2:13 p.m. Signal message in which Michael wrote to me:

Up for coming by? I'd like to understand just how similar your situation was to Jessica's, including the details of her breakdown. We really don't want this happening so frequently.

(Also, just, whatever you think of Michael's many faults, very few people are cartoon villains [LW · GW] that want their friends to have mental breakdowns.)

Replies from: Benito
comment by Ben Pace (Benito) · 2021-10-18T03:37:33.544Z · LW(p) · GW(p)

Thanks for the counter-evidence.

comment by Benquo · 2021-10-17T22:44:18.388Z · LW(p) · GW(p)

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away.

If we're trying to solve problems rather than attack the bad people, then the boundaries of the discussion should be determined by the scope of the problem, not by which people we're saying are bad. If you're trying to attack the bad people without standards or a theory of what's going on, that's just mob violence.

Replies from: Aella, Benito
comment by Aella · 2021-10-17T23:17:41.996Z · LW(p) · GW(p)

I... think I am trying to attack the bad people? I'm definitely conflict-oriented around Leverage; I believe that on some important level treating that organization or certain people in it as good-intentioned-but-misguided is a mistake, and a dangerous one. I don't think this is true for MIRI/CFAR; as is summed up pretty well in the last section of Orthonormal's post here. [LW(p) · GW(p)] I'm down for the boundaries of the discussion being determined by the scope of the problem, but I perceive the original post here to be outside the scope of the problem. 

I'm also not sure how to engage with your last sentence. I do have theories for what is going on (but regardless I'm not sure if you give a mob a theory that makes it not a mob).

Replies from: Benquo, Benquo, Benquo
comment by Benquo · 2021-10-18T18:05:05.386Z · LW(p) · GW(p)

This is explicitly opposed to Zoe's stated intentions.

Other people, including me and Jessica, also want to reveal and discuss bad behavior, but don't consent to violence in the name of our grievances.

Agnes Callard's article is relevant here: I Don’t Want You to ‘Believe’ Me. I Want You to Listen.

We want to reveal problems so that people can try to understand and solve those problems. Transforming an attempt to discuss abuse into a scapegoating movement silences victims, preventing others from trying to interpret and independently evaluate the content of what they are saying, simplifying it to a bid to make someone the enemy.

Historically, the idea that instead of trying to figure out which behaviors are bad and police them, we need to try to quickly attack the bad people, is how we get Holocausts and Stalinist purges. In this case I don't see any upside.

Replies from: Aella, Unreal
comment by Aella · 2021-10-18T18:35:49.302Z · LW(p) · GW(p)

I perceive you as doing a conversational thing here that I don't like, where you like... imply things about my position without explicitly stating them? Or talk from a heavy frame that isn't explicit? 

  1. Which stated intentions? Where she asks people 'not to bother those who were there'? What thing do you think I want to do that Zoe doesn't want me to do? 
  2. Are you claiming I am advocating violence? Or simply implying it?
  3. Are you trying to argue that I shouldn't be conflict oriented because Zoe doesn't want me to be? The last part feels a little weird for someone to tell me, as I'm good friends with Zoe and have talked with her extensively about this.
  4. I support revealing problems so people can understand and solve them. I also don't like whatever is happening in this original article due to reasons you haven't engaged with.
  5. You're saying transforming an attempt to discuss abuse into scapegoating silences victims, keeps other ppl from evaluating the content, and simplifies it to a bid to make someone the enemy. But in the comment you were responding to, I was talking about Leverage, not the author of this post. I view Leverage and co. as bad actors, but you sort of... reframe it to make it sound like I'm using a conflict mindset towards Jessica?
  6. You're also not engaging with the points I made, and you're responding to arguments I don't condone.

I don't really view you as engaging in good faith at this point, so I'm precommitting not to respond to you after this.

comment by Unreal · 2021-10-19T01:28:09.448Z · LW(p) · GW(p)

Flagging that... I somehow want to simultaneously upvote and downvote Benquo's comment here. 

Upvote because I think he's standing for good things. (I'm pretty anti-scapegoating, especially of the 'quickly' kind that I think he's concerned about.) 

Downvote because it seems weirdly in the wrong context, like he's trying to punch at some kind of invisible enemy. His response seems incongruous with Aella's actual deal.  

I have some probability on miscommunication / misunderstanding. 

But also ... why ? are you ? why are your statements so 'contracting' ? Like they seem 'narrowizing' of the discussion in a way that seems like it philosophically tenses with your stated desire for 'revealing problems'. And they also seem weirdly 'escalate-y' like somehow I'm more tense in my body as I read your comments, like there's about to be a fight? Not that I sense any anger in you, but I sense a 'standing your ground' move that seems like it could lead to someone trying to punch you because you aren't budging. 

This is all metaphorical language for what I feel like your communication style is doing here. 

Replies from: Benquo
comment by Benquo · 2021-10-19T21:19:10.556Z · LW(p) · GW(p)

Thanks for separating evaluation of content from evaluation of form. That makes it easy for me to respond to your criticism of my form without worrying so much that it's a move to suppress imperfectly expressed criticism.

The true causal answer is that when I perceive someone as appealing to a moralistic framework, I have a tendency to criticize their perspective from inside a moralistic frame, even though I don't independently endorse moralizing. While this probably isn't the best thing I could do if I were perfectly poised, I don't think this is totally pointless either. Attempts to scapegoat someone via moralizing rely on the impression that symmetric moral reasoning is being done, so they can be disrupted by insistent opposition from inside that frame.

You might think of it as standing in territory I think someone else has unjustly claimed, and drawing attention to that fact. One might get punched sometimes in such circumstances, but that's not so terrible; definitely not as bad as being controlled by fear, and it helps establish where recourse/justice is available and where it isn't, which is important information to have! Occasionally bright young people with a moral compass get in touch with me because they can see that I'm conspicuously behaving in a not-ethically-backwards way in proximity to something interesting but sketchy that they were considering getting involved with. Having clear examples to point to is helpful, and confrontation produces clear examples.

A contributing factor is that I (and I think Jessica too) felt time pressure here because it seems to me like there is an attempt to build social momentum against a specific target, which transforms complaints from complementary contributions to a shared map, into competing calls for action. I was seriously worried that if I didn't interrupt that process, some important discourse opportunities would be permanently destroyed. I endorse that concern.

Replies from: Unreal
comment by Unreal · 2021-10-19T22:00:39.940Z · LW(p) · GW(p)

The true causal answer is that when I perceive someone as appealing to a moralistic framework, I have a tendency to criticize their perspective from inside a moralistic frame, even though I don't independently endorse moralizing.

o

hmmm, well i gotta chew on that more but

Aella seems like a counter-productive person to stand your ground against. I sense her as mainly being an 'advocate' for Zoe. She claims wanting to attack the bad people, but compared with other commenters, I sense less 'mob violence' energy from her and ... maybe more fear that an important issue will be dropped / ignored. (I am not particularly afraid of this; the evidence against Leverage is striking and damning enough that it doesn't seem like it will readily be dropped, even if the internet stops talking about it. In fact I hope to see the internet talking about it a bit less, as more real convos happen in private.) 

I'm a bit worried about the way Scott's original take may have pulled us towards a shared map too quickly. There's also a general anti-jessicata vibe I'm getting from 'the room' but it's non-specific and has a lot to do with karma vote patterns. Naming these here for the sake of group awareness and to note I am with you in spirit, not an attempt to add more politics or fighting. 

I was seriously worried that if I didn't interrupt that process, some important discourse opportunities would be permanently destroyed. I endorse that concern.

Hmmmm I feel like advocating for a slightly different mental stance. Instead of taking it upon yourself to interrupt a process in order to gain a particular outcome, what if you did a thing in a way that inspires people to follow because you're being a good role model? If you're standing for what's right, it can inspire people into also doing the right thing. And if no one follows you, you accept that as the outcome; rather than trying to 'make sure' something happens? 

Attachment to an outcome (like urgently trying to avoid 'opportunities being permanently destroyed') seems like it subtly disempowers people and perpetuates more of the pattern that I think we both want less of in the world? Checking to see where a disagreement might be found... 

Replies from: Benquo
comment by Benquo · 2021-10-20T00:02:20.253Z · LW(p) · GW(p)

I think it seems hard to find a disagreement because we don't disagree about much here.

Aella seems like a counter-productive person to stand your ground against. I sense her as mainly being an ‘advocate’ for Zoe. She claims wanting to attack the bad people, but compared with other commenters, I sense less ‘mob violence’ energy from her

Aella was being basically cooperative in revealing some details about her motives, as was Logan. But that behavior is only effectively cooperative if people can use that information to build shared maps. I tried to do that in my replies, albeit imperfectly & in a way that picked a bit more of a fight than I ideally would have.

I feel like advocating for a slightly different mental stance. Instead of taking it upon yourself to interrupt a process in order to gain a particular outcome, what if you did a thing in a way that inspires people to follow because you’re being a good role model?

At leisure, I do this. I'm working on a blog post trying to explain some of the structural factors that cause orgs like Leverage to go wrong in the way Zoe described. I've written extensively about both scapegoating and mind control outside the context of particular local conflicts, and when people seem like they're in a helpable state of confusion I try to help them. I spent half an hour today using a massage gun on my belly muscles, which improved my reading comprehension of your comment and let me respond to it more intelligently.

But I'm in an adversarial situation. There are optimizing processes trying to destroy what I'm trying to build, trying to threaten people into abandoning their perspectives and capitulating to violence.

It seems like you're recommending that I build new capacities instead of defending old ones. If I'm deciding between those, I shouldn't always get either answer. Instead, for any process damaging me, I should compare these two quantities:

(A) The cost of replacement - how much would it cost me to repair the damage or build an equivalent amount of capacity elsewhere?

(B) The cost of preventing the damage.

I should work on prevention when B<A, and on building when A<B.

Since I expect my adversaries to make use of resources they seize to destroy more of what I care about, I need to count that towards the total expected damage caused (and therefore the cost of replacement).

If I'd been able to costlessly pause the world for several hours to relax and think about the problem, I would almost certainly have been able to write a better reply to Aella, one that would score better on the metric you're proposing, while perhaps still accomplishing my "defense" goals.

I'm taking Tai Chi lessons in large part because I think ability to respond to fights without getting triggered is a core bottleneck for me, so I'm putting many hours of my time into being able to perform better on that metric. But I'm not better yet, and I've got to respond to the situations I'm in now with the abilities I've got now.

Replies from: Unreal
comment by Unreal · 2021-10-20T02:45:23.172Z · LW(p) · GW(p)

Well I feel somewhat more relaxed now, seeing that you're engaging in a pretty open and upfront manner. I like Tai Chi :) 

The main disagreement I see is that you are thinking strategically and in a results-oriented fashion about actions you should take; you're thinking about things in terms of resource management and cost-benefit analysis. I do not advocate for that. Although I get that my position is maybe weird? 

I claim that kind of thinking turns a lot of situations into finite games. Which I believe then contributes to life-ending / world-ending patterns. 

... 

But maybe a more salient thing: I don't think this situation is quite as adversarial as you're maybe making it out to be? Or like, you seem to be adding a lot to an adversarial atmosphere, which might be doing a fair amount of driving towards more adversarial dynamics in the group in general. 

I think you and I are not far apart in terms of values, and so ... I kind of want to help you? But also ... if you're attached to certain outcomes being guaranteed, that's gonna make it hard... 

Replies from: Benquo
comment by Benquo · 2021-10-20T04:15:53.683Z · LW(p) · GW(p)

I don't understand where guarantees came into this. I don't understand how I could answer a question of the form "why did you do X rather than Y" without making some kind of comparison of the likely outcomes of X and Y.

I do know that in many cases people falsely claim to be comparing costs and benefits honestly, or falsely claim that some resource is scarce, as part of a strategy of coercion. I have no reason to do this to myself but I see many people doing it and maybe that's part of what turned you off from the idea.

On the other hand, there's a common political strategy where a dominant coalition establishes a narrative that something should be provided universally without rationing, or that something should be absolutely prevented without acknowledging taboo tradeoffs. Since this policy can't be implemented as stated, it empowers people in the position to decide which exceptions to make, and benefits the kinds of people who can get exceptions made, at the expense of less centrally connected people.

It seems to me like thinking about tradeoffs is the low-conflict alternative to insisting on guaranteed outcomes.

Generalizing from your objection to thinking about things in terms of resource management and cost-benefit analysis and your reaction to Eli's summary of Michael and Spencer's podcast [LW(p) · GW(p)], it seems like you're experiencing a strong aversion (though not an infinitely strong one, since you said you might try listening to the podcast) to assimilating information about conflict or resource constraints, which will make it hard for you to understand behaviors determined by conflicts or resource constraints, which is a LOT of behavior.*

If you can point out specific mistakes I'm making, or at least try to narrow down your sense that I'm falsely assuming adversariality, we can try to discuss it.


  • But not all. Sexual selection seems like a third thing, though it might only be common because it helps evolution find solutions to the other two - it would be surprising to see a lot of sexual selection across many species on a mature planet if it didn't pay rent somehow.
Replies from: Unreal
comment by Unreal · 2021-10-20T17:06:20.577Z · LW(p) · GW(p)

Uhhh sorry, the thing about 'guarantees' was probably a mis-speak. 

For reference, I used to be a competitive gamer. This meant I used to use resource management and cost-benefit analysis a lot in my thinking. I also ported those framings into broader life, including how to win social games. I am comfortable thinking in terms of resource constraints, and lived many years of my life in that mode. (I was very skilled at games like MTG, board games, and Werewolf/Mafia.) 

I have since updated to realize how that way of thinking was flawed and dissociated from reality.

I don't understand how I could answer a question of the form "why did you do X rather than Y" without making some kind of comparison of the likely outcomes of X and Y.

I wrote a whole response to this part, but ... maybe I'm missing you. 

Thinking strategically seems fine to the extent that one is aligned with love / ethics / integrity and not acting out of fear, hate, or selfishness. The way you put your predicament caused me to feel like you were endorsing a fear-aligned POV. 

"Since I expect my adversaries to make use of resources they seize to destroy more of what I care about," "But I'm in an adversarial situation. There are optimizing processes trying to destroy what I'm trying to build, trying to threaten people into abandoning their perspectives and capitulating to violence." 

The thing I should have said... was not about the strategy subplot, sorry, ... rather, I have an objection to the seeming endorsement of acting from a fear-aligned place. Maybe I was acting out of fear myself... and failed to name the true objection. 

... 

Those above quotes are the strongest evidence I have that you're assuming adversarial-ness in the situation, and I do not currently know why you believe those quoted statements. Like the phrase about 'adversaries' sounds like you're talking about theoretical ghosts to me. But maybe you have real people in mind. 

I'm curious if you want to elaborate. 

Replies from: Benquo
comment by Benquo · 2021-10-20T17:56:26.032Z · LW(p) · GW(p)

the phrase about ‘adversaries’ sounds like you’re talking about theoretical ghosts to me. But maybe you have real people in mind.

I'm talking about optimizing processes coordinating with copies of themselves, distributed over many people. My blog post Civil Law and Political Drama is a technically precise description of this, though Towards optimal play as Villager in a mixed game adds some color that might be helpful. I don't think my interests are opposed to the autonomous agency of almost anyone. I do think that some common trigger/trauma behavior patterns are coordinating against autonomous human agency.

The gaming detail helps me understand where you're coming from here. I don't think the right way to manage my resource constraints looks very much like playing a game of MtG. I am in a much higher-dimensional environment where most of my time should be spent playing/exploring, or resolving tension patterns that impede me from playing/exploring. My endorsed behavior pattern looks a little more like the process of becoming a good MtG player, or discovering that MtG is the sort of thing I want to get good at. (Though empirically that's not a game it made sense to me to invest in becoming good at - I chose Tai Chi instead for reasons!)

rather, I have an objection to the seeming endorsement of acting from a fear-aligned place.

I endorse using the capacities I already have, even when those capacities are imperfect.

When responding to social conflict, it would almost always be more efficient and effective for me to try to clarify things out of a sense of open opportunity, than from a fear-based motive. This can be true even when a proper decision-theoretic model of the situation would describe it as an adversarial one with time pressure; I might still protect my interests better by thinking in a free and relaxed way about the problem, than tensing up like a monkey facing a physical threat.

But a relaxed attitude is not always immediately available to me, and I don't think I want to endorse always taking the time to detrigger before responding to something in the social domain.

Part of loving and accepting human beings as they are, without giving up on intention to make things better, is appreciating and working with the benefits people produce out of mixed motives. There's probably some irrational fear-based motivation in Elon Musk's and Jeff Bezos's work ethic, and maybe they'd have found more efficient and effective ways to help the world if their mental health were better, but I'm really, really glad I get to use Amazon, and that Tesla and SpaceX and Starlink exist, and it's not clear to me that I'd want to advise younger versions of them to spend a lot of time working on themselves first. That seems like making purity the enemy of the good.

Replies from: Unreal
comment by Unreal · 2021-10-20T18:25:49.533Z · LW(p) · GW(p)

optimizing processes coordinating with copies of themselves, distributed over many people

Question about balance: how do you not end up reifying these in your mind, creating a paranoid sense of 'there be ghosts lurking in shadows' ? 

This question seems central to me because the poison I detect in Vassar-esque-speak is 

a) Memetically more contagious stories seem to include lurking ghosts / demons / shadows because adding a sense of danger or creating paranoia is sticky and salient. Vassar seems to like inserting a sense of 'hidden danger' or 'large demonic forces' into his theories and way of speaking about things. I'm worried this is done for memetic intrigue, viability, and stickiness, not necessarily because it's more true. It makes people want to listen to him for long periods of time, but I don't sense it being an openly curious kind of listening but a more addicted / hungry type of listening. (I can detect this in myself.) 

I guess I'm claiming Vassar has an imbalance between the wisdom/truth of his words and the power/memetic viability of his words. With too much on the side of power. 

b) Reifying these "optimizing processes coordinating" together, maybe "against autonomous human agency" or whatever... seems toxic and harmful for a human mind that takes these very seriously. Unless it comes with ample antidote in the form of (in my world anyway) a deep spiritual compassion / faith and a wisdom-oriented understanding of everyone's true nature, among other things in this vein. But I don't detect Vassar is offering this antidote, so it just feels like poison to me. One might call this poison a deep cynicism, lack of faith / trust, a flavor of nihilism, or "giving into the dark side." 

I do believe Vassar might, in an important sense, have a lot of faith in humanity... but nonetheless, his way of expressing gives off a big stench of everything being somehow tainted and bad. And the faith is not immediately detectable from listening to him, nor do I sense his love. 

I kind of suspect that there's some kind of (adversarial) optimization process operating through his expression, and he seems to have submitted to this willingly? And I am curious about what's up with that / whether I'm wrong about this. 

Replies from: Benquo
comment by Benquo · 2021-10-20T18:48:00.847Z · LW(p) · GW(p)

Question about balance: how do you not end up reifying these in your mind, creating a paranoid sense of ‘there be ghosts lurking in shadows’ ?

Mostly just by trying to think about this stuff carefully, and check whether my responses to it add up & seem constructive. I seem to have been brought up somehow with a deep implicit faith that any internal problem I have, I can solve by thinking about - i.e. that I don't have any internal infohazards. So, once I consciously notice the opportunity, it feels safe to be curious about my own fear, aggression, etc. It seems like many other people don't have this faith, which would make it harder for them to solve this class of problem; they seem to think that knowing about conflicts they're engaged in would get them hurt by making them blameworthy; that looking the thing in the face would mark them for destruction.

My impression is that insofar as I'm paranoid, this is part of the adversarial process I described, which seems to believe in something like ontologically fundamental threats that can't be reduced to specific mechanisms by which I might be harmed, and have to be submitted to absolutely. This model doesn't stand up to a serious examination, so examining it honestly tends to dissolve it.

I've found psychedelics helpful here. Psilocybin seems to increase the conscious salience of fear responses, which allows me to analyze them. In one of my most productive shrooms trips, I noticed that I was spending most of my time pretending to be a reasonable person, under the impression that an abstract dominator wouldn't allow me to connect with other people unless I passed as a simulacrum of a rational agent. I noticed that it didn't feel available to just go to the other room and ask my friends for cuddles because I wanted to, and I considered maybe just huddling under the blankets scared in my bedroom until the trip ended and I became a simulacrum again. Then I decided I had no real incentive to do this, and plenty of incentive to go try to interact with my friends without pretending to be a person, so I did that and it worked.

THC seems to make paranoid thoughts more conscious, which allows me to consciously work through their implications and decide whether I believe them.

I agree that stories with a dramatic villain seem more memetically fit and less helpful, and I avoid them when I notice the option to.

Replies from: Unreal
comment by Unreal · 2021-10-21T15:56:36.830Z · LW(p) · GW(p)

Thanks for your level-headed responses. At this point, I have nothing further to talk about on the object-level conversation (but open to anything else you want to discuss). 

For information value, I do want to flag that... 

I'm noticing an odd effect from talking with you. It feels like being under a weighted blanket or a 'numbing' effect. It's neither pleasant nor unpleasant.

My sketchpad sense of it is: Leaning on the support of Reason. Something wants me to be soothed, to be reassured, that there is Reasonableness and Order, and it can handle things. That most things can be Solved with ... correct thinking or conceptualization or model-building or something. 

So, it's a projection and all, but I don't trust this "thing" whatever it is, much. It also seems to have many advantages. And it may make it pretty hard for me to have a fully alive and embodied conversation with you. 

Curious if any of this resonates with you or with anyone else's sense of you, or if I'm off the mark. But um also this can be ignored or taken offline as well, since it's not adding to the overall conversation and is just an interpersonal thing. 

Replies from: Benquo
comment by Benquo · 2021-10-21T18:04:38.262Z · LW(p) · GW(p)

I did feel inhibited from having as much fun as I'd have liked to in this exchange because it seemed like while you were on the whole trying to make a good thing happen, you were somewhat scared in a triggered and triggerable way. This might have caused the distortion you're describing. Helpful and encouraging to hear that you picked up on that and it bothered you enough to mention.

Replies from: Unreal
comment by Unreal · 2021-10-21T18:23:33.964Z · LW(p) · GW(p)

Your response here is really perplexing to me and didn't go in the direction I expected at all. I am guessing there's some weird communication breakdown happening. ¯\_(ツ)_/¯ I guess all I have left is: I care about you, I like you, and I wish well for you. <3 

Replies from: Benquo
comment by Benquo · 2021-10-23T03:23:17.857Z · LW(p) · GW(p)

It seems like you're having difficulty imagining that I'm responding to my situation as I understand it, and I don't know what else you might think I'm doing.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2021-10-26T20:07:00.966Z · LW(p) · GW(p)

I read the comment you're responding to as suggesting something like "your impression of Unreal's internal state was so different from her own experience of her internal state that she's very confused".

Replies from: Benquo
comment by Benquo · 2021-10-18T01:29:33.538Z · LW(p) · GW(p)

What do you see as the scope of the problem? Are there standards you want to enforce here? If so, what are they?

comment by Benquo · 2021-10-18T17:56:42.353Z · LW(p) · GW(p)

This is explicitly opposed to Zoe's stated intentions.

This discourages people like me from revealing abuse we've suffered, since it tries to stop me from helping my friends by informing them about what they're doing wrong, providing the alternative behavior of hurting them because they're doing something wrong.

It seems like you are declaring an intent to act in favor of perpetuating and covering up patterns of abuse.

The idea that people, rather than behaviors, are bad and should be eliminated, is how we get Holocausts and Stalinist purges rather than behavioral standards and judicial investigations of conduct.

comment by Ben Pace (Benito) · 2021-10-21T07:41:03.223Z · LW(p) · GW(p)

What do you think the problem is that Jessica is trying to solve? (I'm also interested in what problem you think Zoe is trying to solve.)

comment by romeostevensit · 2021-10-18T06:34:02.414Z · LW(p) · GW(p)

One of the things that can feel like gaslighting in a community that attracts highly scrupulous people is when posting about your interpretation of your experience is treated as a contractual obligation to defend the claims and discuss any possible misinterpretations or consequences of what is a challenging thing to write in the first place.

Replies from: Aella, Duncan_Sabien
comment by Aella · 2021-10-18T18:17:36.224Z · LW(p) · GW(p)

I feel like, here and in so many other comments in this discussion, there are important and subtle distinctions being missed. I don't have any intention of conditionlessly accepting and supporting all accusations made (I have seen false accusations cause incredible harm and suicidality in people close to me). I do expect people who make serious claims about organizations to be careful about how they do it. I think Zoe's Leverage post easily met my standard, but this post here triggered a lot of warning flags for me, and I find it important to pay attention to those.

comment by Duncan_Sabien · 2021-10-18T06:45:19.020Z · LW(p) · GW(p)

Speaking of highly scrupulous...

I think that the phrases "treated as a contractual obligation" and "any possible misinterpretations or consequences" are both hyperbole, if they are (as they seem) intended as fair summaries or descriptions of what Aella wrote above.

I think there's a skipped step here, where you're trying to say that what Aella wrote above might imply those things, or might result in those things, or might be tantamount to those things, but I think it's quite important to not miss that step.

Before objecting to Aella's [A] by saying "[B] is bad!" I think one should justify or at least explicitly assert [A—>B].

Replies from: romeostevensit
comment by romeostevensit · 2021-10-18T15:49:40.383Z · LW(p) · GW(p)

Yes, and to clarify, I am not attempting to imply that there is something wrong with Aella's comment. It's more that this is a pattern I have observed and talked about with others. I don't think people playing a part in a pattern that has some negative side effects should necessarily have a responsibility frame around that, especially given that one literally can't track all the various possible side effects of one's actions. I see epistemic statuses as partially attempting to give people more affordance for thinking about the possible side effects of the multi-context nature of online comms, and that was used to good effect here; I likely would have had a more negative reaction to Aella's post if it hadn't included the epistemic status.

comment by 4thWayWastrel · 2021-10-17T22:11:50.241Z · LW(p) · GW(p)

I empathise with the feeling of slipperiness in the OP; I feel comfortable attributing that to the subject matter rather than malice.

If I had an experience that matched Zoe's to the degree jessicata's did (superficially or otherwise), I'd feel compelled to post it. I found it helpful for the question of whether "insular rationalist group gets weird and experiences rash of psychotic breaks" is a community problem, or just a problem with a stray dude.

Replies from: Aella
comment by Aella · 2021-10-17T23:32:10.564Z · LW(p) · GW(p)

Scott's comment [LW(p) · GW(p)] does seem to verify the "insular rationalist group gets weird and experiences rash of psychotic breaks" trend, but it seems to be a different group than the one named in the original post.

comment by hg00 · 2021-10-18T09:16:14.389Z · LW(p) · GW(p)

The community still seems in the middle of sensemaking around Leverage

Understanding how other parts of the community were similar/dissimilar to Leverage seems valuable from a sensemaking point of view.

Lots of parts of the post sort of implicitly present things as important, or ask you to draw conclusions without explicitly pointing out those conclusions.

I think you may be asking your reader to draw the conclusion that this is a dishonest way to write, without explicitly pointing out that conclusion :-) Personally, I see nothing wrong with presenting only observations.

comment by farp · 2021-10-18T00:14:55.554Z · LW(p) · GW(p)

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away.

Yeesh. I don't think we should police victims' timing. That seems really evil to me. We should be super skeptical of any attempts to tell people to shut up about their allegations, and "your timing is very insensitive to the real victims" really does not pass the smell test for me.

Replies from: Viliam, Aella, farp
comment by Viliam · 2021-10-18T13:01:15.575Z · LW(p) · GW(p)

Some context, please. Imagine the following scenario:

  • Victim A: "I was hurt by X."
  • Victim B: "I was hurt by Y."

There is absolutely nothing wrong with this, whether it happens the same day, the next day, or a week later. Maybe victim B was encouraged by (reactions to) victim A's message, maybe it was just a coincidence. Nothing wrong with that either.

Another scenario:

  • Victim A: "I was hurt by X."
  • Victim B: "I was also hurt by X (in a different way, on another day etc.)."

This is a good thing to happen; more evidence, encouragement for further victims to come out.

But this post is different in a few important ways. First, Jessicata piggybacks on Zoe's story a lot, insinuating analogies but providing very little actual data. (If you rewrote the article to avoid referring to Zoe, it would be 10 times shorter.) Second, Jessicata repeatedly makes comparisons between Zoe's experience at Leverage and her own experience at MIRI/CFAR, and usually concludes that Leverage was less bad (for reasons that seem weird to me, such as that their abuse was legible, or that they provided space for people to talk about demons and exorcise them). Here are some quotes:

I want to disagree with a frame that says that the main thing that's bad was that Leverage (or MIRI/CFAR) was a "cult".  This makes it seem like what happened at Leverage is much worse than what could happen at a normal company.  But, having read Moral Mazes and talked to people with normal corporate experience (especially in management), I find that "normal" corporations are often quite harmful to the psychological health of their employees, e.g. causing them to have complex PTSD symptoms, to see the world in zero-sum terms more often, and to have more preferences for things to be incoherent.

Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization.  Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR.

Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community.  While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible.

Leverage definitely had large problems with these discussions, and perhaps tried to reach more intersubjective agreement about them than was plausible (leading to over-reification, as Zoe points out), but they seem less severe than the problems resulting from refusing to have them, such as psychiatric hospitalization and jail time.

Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.

An ex-Leverage person I know comments that "one of the things I give Geoff the most credit for is actually ending the group when he realized he had gotten in over his head. That still left people hurt and shocked, but did actually stop a lot of the compounding harm."  (While Geoff is still working on a project called "Leverage", the initial "Leverage 1.0" ended with most of the people leaving.) This is to some degree happening with MIRI and CFAR, with a change in the narrative about the organizations and their plans, although the details are currently less legible than with Leverage.

I hope that those that think this is "not that bad" (perhaps due to knowing object-level specifics around MIRI/CFAR justifying these decisions) consider how they would find out whether the situation with Leverage was "not that bad", in comparison, given the similarity of the phenomena observed in both cases; such an investigation may involve learning object-level specifics about what happened at Leverage.  I hope that people don't scapegoat; in an environment where certain actions are knowingly being taken by multiple parties, singling out certain parties has negative effects on people's willingness to speak without actually producing any justice.

...uhm, does this sound a bit like a defense of Leverage, or at least like saying "Zoe, your experience at Leverage was not as bad as my experience at MIRI/CFAR"? That is in poor taste, especially when the debate about Zoe's experience hasn't finished yet.

Third, this comparison and downplaying is made even worse by the fact that many of the supposed analogies are not actually that analogous:

  • Zoe had mental trauma after her experience in Leverage. Jessicata had mental trauma after her experience in MIRI/CFAR, and after she started experimenting with drugs, inspired by critics of MIRI/CFAR.
  • Zoe had to sign an NDA covering a lot of what was happening at Leverage, and now she worries about possible legal consequences of talking about her abuse. Jessicata didn't have to sign anything... but hey, she was once discouraged from writing a blog post about AI timelines... which is just as bad, except much worse, because MIRI/CFAR is less transparent about being evil. (Sorry, I am too sarcastic here; I find it difficult to say these things with a straight face.)
  • Zoe was convinced by Leverage that everything that happened to her was her own fault. Jessicata joined a group of MIRI/CFAR haters who believed that everything was evil but especially MIRI/CFAR, and then she ended up believing that she was evil... yeah, again, fair analogy! Leverage at least tells you openly that you are a loser, but the insidious MIRI/CFAR uses some super complicated plot, manipulating their haters to convince you about the same thing.
  • etc. (I am out of time, and also being sarcastic is against the norms of LW, so I better end here.)

In summary, it is the combination of: piggybacking on another victim's story, making analogies that are not really analogies, and then downplaying the first victim's experience... plus the timing right in the middle of debating the first victim's experience... that makes it so bad.

comment by Aella · 2021-10-18T00:41:46.950Z · LW(p) · GW(p)

I don't think "don't police victims' timing" is an absolute rule; not policing the timing is a pretty good idea in most cases. I think this is an exception. 

And if I wasn't clear, I'll explicitly state my position here: I think it's good to pay close attention to negative effects communities have on its members, and I am very pro people talking about this, and if people feel hurt by an organization it seems really good to have this publicly discussed. 

But I believe the above post did not simply do that. It also did other things, namely frame things in ways I perceive as misleading, leave out key information relevant to the discussion (as per Eliezer's comment here), [LW(p) · GW(p)] and rely very heavily and directly on Zoe's account of Leverage to bring validity to its own claims, when I perceive Leverage as having been both significantly worse and worse in a different category of way. If the above post hadn't done these things, I don't think I would have any issue with the timing.

Replies from: farp
comment by farp · 2021-10-18T05:51:36.096Z · LW(p) · GW(p)

I hope that other people, when considering whether to come forward with allegations, do not worry about timing or pulling the spotlight away from other victims. Even if they think their allegations might be stupid or low quality (which is in fact a very common fear among victims).

Replies from: Duncan_Sabien
comment by Duncan_Sabien · 2021-10-18T06:13:12.768Z · LW(p) · GW(p)

Strong downvote for choosing to entirely ignore the points/claims/arguments that Aella laid out, in favor of reiterating your frame with no new detail, as if that were a rebuttal.

Seems like a cheap rhetorical trick designed to say "I'm on the side of the good, and if you disagree with me, well ..."

(Or, more precisely, I predict that if we polled one hundred humans on their takeaway from reading the thread, more than sixty of them would tick "yes" next to "to the best of your ability to judge, was this person being snide/passive-aggressive/trying to imply that Aella doesn't largely agree?"  Which seems pretty lacking in reasonable good faith, coming on the heels of her explicitly stating that not policing timing is a pretty good idea in most cases.)

comment by farp · 2021-10-18T00:16:17.215Z · LW(p) · GW(p)

I really doubt that Zoe takes great comfort in seeing other people getting strung up after making allegations.

Replies from: Aella
comment by Aella · 2021-10-18T00:46:23.330Z · LW(p) · GW(p)

I'm not sure what you're trying to do here - call on Zoe as an authority to disapprove of me? Would it update you at all if the answer was what you doubted?

Replies from: farp
comment by farp · 2021-10-18T05:49:17.418Z · LW(p) · GW(p)

I am making an obvious point that how we treat people who make allegations in one case will affect people's comfort in another case. 

I am not sure what I would conclude if in fact Zoe was glad that Jessica was receiving a negative response, but it would be surprising and interesting, and counter-evidence towards ^

Replies from: Aella
comment by Aella · 2021-10-18T18:22:25.973Z · LW(p) · GW(p)

As I mentioned in my post, I am good friends with Zoe and I sent her my comment here right after I posted it. She approved.

comment by Ben Pace (Benito) · 2021-10-17T00:27:59.345Z · LW(p) · GW(p)

Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness.

Just zooming in on this, which stood out to me personally as a particular thing I'm really tired of. 

If you're not disagreeing with people about important things then you're not thinking. There are many options for how to negotiate a significant disagreement with a colleague, including spending lots of time arguing about it, finding a compromise action, or stopping collaborating with the person (if it's a severe disagreement, which often it can be). But telling someone that by disagreeing they're claiming to be 'better' than another person in some way always feels to me like an attempt to 'control' the speech and behavior of the person you're talking to, and I'm against it.

It happens a lot. I recently overheard someone (who I'd not met before) telling Eliezer Yudkowsky that he's not allowed to have extreme beliefs about AGI outcomes. I don't recall the specific claim, just that EY's probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees with some other thoughtful people on that question, he shouldn't have such confidence.

(At the time, I noticed I didn't have to be around or listen to that person and just wandered away. Poor Eliezer stayed and tried to give a thoughtful explanation for why the argument seemed bad.)

Perhaps more important to my subsequent decisions, the AI timelines shortening triggered an acceleration of social dynamics.

I noticed this too. I thought a bunch of people were affected by it in a sort of herd behavior way (not focused so much on MIRI/CFAR, I'm talking more broadly in the rationality/EA communities). I do think key parts of the arguments about how to think about timelines and takeoff are accurate (e.g. 1 [LW · GW], 2 [LW · GW]), but I feel like many people weren't making decisions because of reasons; instead they noticed their 'leaders' were acting scared and then they also acted scared, like a herd. 

In both the Leverage situation and the AI timelines situation, I felt like nobody involved was really appreciating how much fuckery the information siloing was going to cause (and did cause) to the way the individuals in the ecosystem made decisions.

This was one of the main motivations behind my choice of example in the opening section of my 3.5 yr old post A Sketch of Good Communication [LW · GW] btw (a small thing but still meant to openly disagree with the seeming consensus that timelines determined everything). And then later I wrote about the social dynamics a bunch more [LW(p) · GW(p)] 2yrs ago when trying to expand on someone else's question on the topic.

Replies from: Eliezer_Yudkowsky, peter_hurford, elityre, alexander-1, Viliam
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-10-17T18:29:27.442Z · LW(p) · GW(p)

I affirm the correctness of Ben Pace's anecdote about what he recently heard someone tell me.

"How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" - is somebody trolling?  Have they never read anything I've written in my entire life?  Do they have no sense, even, of irony?  Yeah, sure, it's harder to be better at some things than me, sure, somebody might be skeptical about that, but then you ask for evidence or say "Good luck proving that to us all eventually!"  You don't be like, "Do you think you're special?"  What kind of bystander-killing argumentative superweapon is that?  What else would it prove?

I really don't know how I could make this any clearer.  I wrote a small book whose second half was about not doing exactly this.  I am left with a sense that I really went to some lengths to prevent this, I did what society demands of a person plus over 10,000% (most people never write any extended arguments against bad epistemology at all, and society doesn't hold that against them), I was not subtle.  At some point I have to acknowledge that other human beings are their own people and I cannot control everything they do - and I hope that others will also acknowledge that I cannot avert all the wrong thoughts that other people think, even if I try, because I sure did try.  A lot.  Over many years.  Aimed at that specific exact way of thinking.  People have their own wills, they are not my puppets, they are still not my puppets even if they have read some blog posts of mine or heard summaries from somebody else who once did; I have put in at least one hundred times the amount of effort that would be required, if any effort were required at all, to wash my hands of this way of thinking.

Replies from: jessica.liu.taylor, Benquo, lsusr, throwaway46237896
comment by jessicata (jessica.liu.taylor) · 2021-10-17T18:49:34.366Z · LW(p) · GW(p)

The irony was certainly not lost on me; I've edited the post to make this clearer to other readers.

comment by Benquo · 2021-10-17T19:06:04.377Z · LW(p) · GW(p)

I'm glad you agree that the behavior Jessica describes is explicitly opposed to the content of the Sequences, and that you clearly care a lot about this. I don't think anyone can reasonably claim you didn't try hard to get people to behave better, or could reasonably blame you for the fact that many people persistently try to do the opposite of what you say, in the name of Rationality.

I do think it would be a very good idea for you to investigate why & how the institutions you helped build and are still actively participating in are optimizing against your explicitly stated intentions. Anna's endorsement of this post seems like reasonably strong confirmation that organizations nominally committed to your agenda are actually opposed to it, unless you're actually checking. And MIRI/CFAR donors seem to for the most part think that you're aware of and endorse those orgs' activities.

When Jessica and another recent MIRI employee asked a few years ago for some of your time to explain why they'd left, your response was:

My guess is that I could talk over Signal voice for 30 minutes or in person for 15 minutes on the 15th, with an upcoming other commitment providing a definite cutoff point and also meaning that it wouldn't otherwise be an uninterrupted day.  That's not enough time to persuade each other things, but I suspect that neither would be a week, and hopefully it's enough that you can convey to me any information you want me to know and don't want to write down.  Again, for framing, this is a sort of thing I basically don't do anymore due to stamina limitations--Nate talks to people, I talk to Nate.

You responded a little bit by email, but didn't seem very interested in what was going on (you didn't ask the kinds of followup questions that would establish common knowledge about agreement or disagreement), so your interlocutors didn't perceive a real opportunity to inform you of these dynamics at that time.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-17T19:22:37.028Z · LW(p) · GW(p)

Anna's endorsement of this post seems like reasonably strong confirmation that organizations nominally committed to your agenda are actually opposed to it,

Presumably Eliezer's agenda is much broader than "make sure nobody tries to socially enforce deferral to high-status figures in an ungrounded way" though I do think this is part of his goals.

The above seems to me like it tries to equivocate between "this is confirmation that at least some people don't act in full agreement with your agenda, despite being nominally committed to it" and "this is confirmation that people are actively working against your agenda". These two really don't strike me as the same, and I really don't like how this comment seems like it tries to equivocate between the two.

Of course, the claim that some chunk of the community/organizations Eliezer created are working actively against some agenda that Eliezer tried to set for them is plausible. But calling the above a "strong confirmation" of this fact strikes me as a very substantial stretch.

Replies from: Benquo
comment by Benquo · 2021-10-18T18:11:06.073Z · LW(p) · GW(p)

It's explicitly opposition to core Sequences content, which Eliezer felt was important enough to write a whole additional philosophical dialogue about after the main Sequences were done. Eliezer's response when informed about it was:

is somebody trolling? Have they never read anything I’ve written in my entire life? Do they have no sense, even, of irony?

That doesn't seem like Eliezer agrees with you that someone got this wrong by accident, that seems like Eliezer agrees with me that someone identifying as a Rationalist has to be trying to get core things wrong to end up saying something like that.

Replies from: Sniffnoy
comment by Sniffnoy · 2021-10-19T08:50:50.169Z · LW(p) · GW(p)

I don't think this follows. I do not see how degree of wrongness implies intent. Eliezer's comment rhetorically suggests intent ("trolling") as a way of highlighting how wrong the person is; he is free to correct me if I am wrong, but I am pretty sure that is not an actual suggestion of intent, only a rhetorical one.

I would say moreover, that this is the sort of mistake [LW · GW] that occurs, over and over, by default, with no intent necessary. I might even say that it is avoiding, not committing, this sort of mistake, that requires intent. Because this sort of mistake is just sort of what people fall into by default, and avoiding it requires active effort.

Is it contrary to everything Eliezer's ever written? Sure! But reading the entirety of the Sequences, calling yourself a "rationalist", does not in any way obviate the need to do the actual work of better group epistemology, of noticing such mistakes (and the path to them) and correcting/avoiding them.

I think we can only infer intent like you're talking about if the person in question is, actually, y'know, thinking about what they're doing. But I think people are really, like, acting on autopilot a pretty big fraction of the time; not autopiloting takes effort, and while doing that work may be what a "rationalist" is supposed to do, it's still not the default. All I think we can infer from this is a failure to do the work of shifting out of autopilot and thinking. Bad group epistemology via laziness rather than via intent strikes me as the more likely explanation.

Replies from: Benquo
comment by Benquo · 2021-10-20T15:50:21.721Z · LW(p) · GW(p)

This seems exactly backwards: if someone makes uncorrelated errors, they are probably unintentional mistakes; if someone makes correlated errors, they are better explained as part of a strategy.

Once is happenstance. Twice is coincidence. Three times is enemy action.

I can imagine, after reading the sequences, continuing to have the epistemic modesty bias in my own thoughts, but I don't see how I could have been so confused as to refer to it in conversation as a valid principle of epistemology.

Replies from: TekhneMakre, Sniffnoy
comment by TekhneMakre · 2021-10-20T16:35:33.137Z · LW(p) · GW(p)

Behavior is better explained as strategy than as error, if the behaviors add up to push the world in some direction (along a dimension that's "distant" from the behavior, like how "make a box with food appear at my door" is "distant" from "wiggle my fingers on my keyboard"). If a pattern of correlated error is the sort of pattern that doesn't easily push the world in a direction, then that pattern might be evidence against intent. For example, the conjunction fallacy will produce a pattern of wrong probability estimates with a distinct character, but it seems unlikely to push the world in some specific direction (beyond whatever happens when you have incoherent probabilities). (Maybe this argument is fuzzy on the edges, like if someone keeps trying to show you information and you keep ignoring it, you're sort of "pushing the world in a direction" when compared to what's "supposed to happen", i.e. that you update; which suggests intent, although it's "reactive" rather than "proactive", whatever that means. I at least claim that your argument is too general, proves too much, and would be more clear if it were narrower.)

Replies from: Benquo
comment by Benquo · 2021-10-20T18:24:29.106Z · LW(p) · GW(p)

The effective direction the epistemic modesty / argument from authority bias pushes things, is away from shared narrative as something that dynamically adjusts to new information, and towards shared narrative as a way to identify and coordinate who's receiving instructions from whom.

People frequently make "mistakes" as a form of submission, and it shouldn't be surprising that other types of systematic error function as a means of domination, i.e. of submission enforcement.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-20T23:22:16.984Z · LW(p) · GW(p)

(I indeed find this a more clear+compelling argument and appreciate you trying to make this known.)

comment by Sniffnoy · 2021-10-20T18:44:10.624Z · LW(p) · GW(p)

This seems exactly backwards: if someone makes uncorrelated errors, they are probably unintentional mistakes. If someone makes correlated errors, they are better explained as part of a strategy.

I mean, there is a word for correlated errors, and that word is "bias"; so you seem to be essentially claiming that people are unbiased? I'm guessing that's probably not what you're trying to claim, but that is what I am concluding? Regardless, I'm saying people are biased towards this mistake.

Or really, what I'm saying is that it's the same sort of phenomenon that Eliezer discusses here [LW · GW]. So it could indeed be construed as a strategy as you say; but it would not be a strategy on the part of the conscious agent, but rather a strategy on the part of the "corrupted hardware" itself. Or something like that -- sorry, that's not a great way of putting it, but I don't really have a better one, and I hope that conveys what I'm getting at.

Like, I think you're assuming too much awareness/agency of people. A person who makes correlated errors, and is aware of what they are doing, is executing a deliberate strategy. But lots of people who make correlated errors are just biased, or the errors are part of a built-in strategy they're executing not deliberately but by default, without thinking about it, a strategy that requires effort not to execute.

We should expect someone calling themself a rationalist to be better, obviously, but, IDK, sometimes things go bad?

I can imagine, after reading the sequences, continuing to have this bias in my own thoughts, but I don't see how I could have been so confused as to refer to it in conversation as a valid principle of epistemology.

I mean, people don't necessarily fully internalize everything they read, and in some people the "hold on, what am I doing?" reflex can be weak? <shrug>

I mean I certainly don't want to rule out deliberate malice like you're talking about, but neither do I think this one snippet is enough to strongly conclude it.

Replies from: Benquo
comment by Benquo · 2021-10-20T18:53:38.934Z · LW(p) · GW(p)

In most cases it seems intentional but not deliberate. People will resist pressure to change the pattern, or find new ways to execute it if the specific way they were engaged in this bias is effectively discouraged, but don't consciously represent to themselves their intent to do it or engage in explicit means-ends reasoning about it.

Replies from: Sniffnoy
comment by Sniffnoy · 2021-10-26T06:52:13.012Z · LW(p) · GW(p)

Yeah, that sounds about right to me. I'm not saying that you should assume such people are harmless or anything! Just that, like, you might want to try giving them a kick first -- "hey, constant vigilance, remember?" :P -- and see how they respond before giving up and treating them as hostile.

comment by lsusr · 2021-10-17T18:35:33.668Z · LW(p) · GW(p)

"How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" reads to me as something Eliezer Yudkowsky himself would never write.

comment by throwaway46237896 · 2021-10-18T18:53:13.307Z · LW(p) · GW(p)

You also wrote a whole screed about how anyone who attacks you or Scott Alexander is automatically an evil person with no ethics, and walked it back only after backlash and only halfway. You don't get to pretend you're exactly embracing criticism there, Yud - in fact, it was that post that severed my ties to this community for good.

Replies from: ESRogs, hg00
comment by ESRogs · 2021-10-19T14:47:05.003Z · LW(p) · GW(p)

FWIW I believe "Yud" is a dispreferred term (because it's predominantly used by sneering critics), and your comment wouldn't have gotten so many downvotes without it.

Replies from: TurnTrout, Raven, throwaway46237896
comment by TurnTrout · 2021-10-20T11:22:57.070Z · LW(p) · GW(p)

I strong-downvoted because they didn't bother to even link to the so-called screed. (Forgive me for not blindly trusting throwaway46237896.)

Replies from: hg00, TAG
comment by hg00 · 2021-10-21T01:42:17.546Z · LW(p) · GW(p)

Something I try to keep in mind about critics is that people who deeply disagree with you are also not usually very invested in what you're doing, so from their perspective there isn't much of an incentive to put effort into their criticism. But in theory, the people who disagree with you the most are also the ones you can learn the most from.

You want to be the sort of person where if you're raised Christian, and an atheist casually criticizes Christianity, you don't reject the criticism immediately because "they didn't even take the time to read the Bible!"

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-24T02:20:37.226Z · LW(p) · GW(p)

I think I have a lot less (true, useful, action-relevant) stuff to learn from a random fundamentalist Christian than from Carl Shulman, even though I disagree vastly more with the fundamentalist than I do with Carl.

comment by TAG · 2021-10-20T12:36:07.793Z · LW(p) · GW(p)

The original "sneer club" comment?

comment by Evenflair (Raven) · 2021-10-20T04:11:49.873Z · LW(p) · GW(p)

Really? I do it because it's easier to type. Maybe I'm missing some historical context here.

Replies from: ESRogs
comment by ESRogs · 2021-10-20T11:12:42.495Z · LW(p) · GW(p)

Maybe I'm missing some historical context here.

For some reason a bunch of people started referring to him as "Big Yud" on Twitter. Here's some context regarding EY's feelings about it.

comment by throwaway46237896 · 2021-10-21T01:19:57.497Z · LW(p) · GW(p)

I'm a former member turned very hostile to the community represented here these days. So that's appropriate, I guess.

Replies from: hg00, ESRogs
comment by hg00 · 2021-10-21T01:37:21.571Z · LW(p) · GW(p)

Any thoughts on how we can help you be at peace?

comment by ESRogs · 2021-10-21T16:18:48.448Z · LW(p) · GW(p)

So that's appropriate, I guess.

I disagree that it's appropriate to use terms for people that they consider slurs because they're part of a community that you don't like.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2021-10-21T16:32:53.554Z · LW(p) · GW(p)

It's entirely appropriate! Expressing hostility is what slurs are for!

Replies from: Duncan_Sabien
comment by Duncan_Sabien · 2021-10-21T16:58:34.828Z · LW(p) · GW(p)

Prescriptive appropriateness vs. descriptive appropriateness.

ESRogs is pointing out a valuable item in a civilizing peace treaty; an available weapon that, if left unused, allows a greater set of possible cooperations to come into existence.  "Not appropriate" as a normative/hopeful statement, signaling his position as a signatory to that disarmament clause and one who hopes LW has already or will also sign on, as a subculture.

Zack is pointing out that, from the inside of a slur, it has precisely the purpose that ESRogs is labeling inappropriate.  For a slur to express hostility and disregard is like a hammer being used to pound nails.  "Entirely appropriate" as a descriptive/technical statement.

I think it would have been better if Zack had made that distinction, which I think he's aware of, but I'm happy to pop in to help; I suspect meeting that bar would've prevented him from saying anything at all in this case, which would probably have been worse overall.

comment by hg00 · 2021-10-20T23:11:57.867Z · LW(p) · GW(p)

What screed are you referring to?

Replies from: throwaway46237896
comment by throwaway46237896 · 2021-10-21T01:25:54.387Z · LW(p) · GW(p)

The rant (now somewhat redacted) can be found here, in response to the leaked emails of Scott more-or-less outright endorsing people like Steve Sailer re:"HBD". There was a major backlash against Scott at the time, resulting in the departure of many longtime members of the community (including me), and Eliezer's post was in response to that. It opened with:

it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics 

...which is, to put it mildly, absurd.

Replies from: philh, hg00
comment by philh · 2021-10-22T22:42:04.198Z · LW(p) · GW(p)

I think it would be good to acknowledge here Eliezer's edits. Like, do you think that

  • The thing you quote here is a reasonable way to communicate the thing Eliezer was trying to communicate, and that thing is absurd?
  • The thing you quoted is absurd, but not what Eliezer was trying to communicate, but the thing he was actually trying to communicate was also absurd?
  • The thing Eliezer was trying to communicate is defensible, but it's ridiculous that he initially tried to communicate it using those words?
  • What Eliezer initially said was a reasonable way to communicate what he meant, and his attempts to "clarify" are actually lying about what he meant?
  • Something else?

Idk, I don't want to be like "I'm fine with criticism but it's not really valid unless you deliver it standing on one foot in a bed of hot coals as is our local custom". And I think it's good that you brought this post into the conversation, it's definitely relevant to questions like how much does Eliezer tolerate criticism. No obligation on you to reply further, certainly. (And I don't commit to replying myself, if you do, so I guess take that into account when deciding if you will or not.)

But also... like, those edits really did happen, and I think they do change a lot.

I'm not sure how I feel about the post myself, there's definitely things like "I actually don't know if I can pick out the thing you're trying to point out" and "how confident are you you're being an impartial judge of the thing when it's directed at you". I definitely don't think it's a terrible post.

But I don't know what about the post you're reacting to, so like, I don't know if we're reacting different amounts to similar things, or you're reacting to things I think are innocuous, or you're reacting to things I'm not seeing (either because they're not there or because I have a blind spot), or what.

(The very first version was actually "...openly hates on Eliezer is probably...", which I think is, uh, more in need of revision than the one you quoted.)

Replies from: Duncan_Sabien, throwaway46237896
comment by Duncan_Sabien · 2021-10-22T23:00:29.115Z · LW(p) · GW(p)

Strong approval for the way this comment goes about making its point, and trying to bridge the inferential gap.

comment by throwaway46237896 · 2021-10-28T13:14:39.408Z · LW(p) · GW(p)

I think it would be good to acknowledge here Eliezer's edits.

 

I don't. He made them only after ingroup criticism, and that only happened because it was so incredibly egregious. Remember, this was the LAST straw for me - not the first.

The thing about ingroupy status-quo bias is that you'll justify one small thing after another, but when you get a big one-two punch - enough to shatter that bias and make you look at things from outside - your beliefs about the group can shift very rapidly. I had already been kind of leery about a number of things I'd seen, but the one-two-three punch of the Scott emails, Eliezer's response, and the complete refusal of anyone I knew in the community to engage with these things as a problem, was that moment for me.

Even if I did give him credit for the edit - which I don't, really - it was only the breaking point, not the sole reason I left.

Replies from: RobbBB, philh
comment by Rob Bensinger (RobbBB) · 2021-10-28T20:42:43.657Z · LW(p) · GW(p)

I believe Eliezer about his intended message, though I think it's right to dock him some points for phrasing it poorly -- being insufficiently self-critical is an attractor [? · GW] that idea-based groups have to be careful of, so if there's a risk of some people misinterpreting you as saying 'don't criticize the ingroup', you should at least take the time to define what you mean by "hating on", or give examples of the kinds of Topher-behaviors you have in mind.

There's a different attractor I'm worried about, which is something like "requiring community leaders to walk on eggshells all the time with how they phrase stuff, asking them to strictly optimize for impact/rhetoric/influence over blurting out what they actually believe, etc." I think it's worth putting effort into steering away from that outcome. But I think it's possible to be extra-careful about 'don't criticize the ingroup' signals without that spilling over into a generally high level of self-censorship.

Replies from: throwaway46237896
comment by throwaway46237896 · 2021-11-03T13:50:31.783Z · LW(p) · GW(p)

You can avoid both by not having leaders who believe in terrible things (like "black people are genetically too stupid to govern themselves") that they have to hide behind a veil of (im)plausible deniability.

comment by philh · 2021-10-29T22:55:18.057Z · LW(p) · GW(p)

Hm, so. Even just saying you don't give him credit for the edits is at least a partial acknowledgement in my book, if you actually mean "no credit" and not "a little credit but not enough". It helps narrow down where we disagree, because I do give him credit for them - I think it would be better if he'd started with the current version, but I think it would be much worse if he'd stopped with the first version.

But also, I guess I still don't really know what makes this a straw for you, last or otherwise. Like I don't know if it would still be a straw if Eliezer had started with the current version. And I don't really have a sense of what you think Eliezer thinks. (Or if you think "what Eliezer thinks" is even a thing it's sensible to try to talk about.) It seems you think this was really bad[1], worse than Rob's criticism (which I think I agree with) would suggest. But I don't know why you think that.

Which, again. No obligation to share, and I think what you've already shared is an asset to the conversation. But that's where I'm at.

[1]: I get this impression from your earlier comments. Describing it as "the last straw" kind of makes it sound like not a big deal individually, but I don't think that's what you intended?

Replies from: throwaway46237896
comment by throwaway46237896 · 2021-11-03T13:49:17.411Z · LW(p) · GW(p)

It would still be a straw if it started with the current version, because it is defending Scott for holding positions and supporting people I find indefensible. The moment someone like Steve Sailer is part of your "general theory of who to listen to", you're intellectually dead to me.

The last straw for me is that the community didn't respond to that with "wow, Scott's a real POS, time to distance ourselves from him and diagnose why we ever thought he was someone we wanted around". Instead, it responded with "yep that sounds about right". Which means the community is as indefensible as Scott is. And Eliezer, specifically, doing it meant that it wasn't even a case of "well maybe the rank and file have some problems but at least the leadership..."

comment by hg00 · 2021-10-21T01:35:23.249Z · LW(p) · GW(p)

Thanks. After thinking for a bit... it doesn't seem to me that Topher frobnitzes Scott, so indeed Eliezer's reaction seems inappropriately strong. Publishing emails that someone requested (and was not promised) privacy for is not an act of sadism.

Replies from: philh
comment by philh · 2021-10-22T10:09:36.886Z · LW(p) · GW(p)

I believe the idea was not that this was an act of frobnitzing, but that

  • Topher is someone who openly frobnitzes.
  • Now he's done this, which is bad.
  • It is unsurprising that someone who openly frobnitzes does other bad things too.

comment by Peter Wildeford (peter_hurford) · 2021-10-17T16:34:39.347Z · LW(p) · GW(p)

I don't recall the specific claim, just that EY's probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees with some other thoughtful people on that question, he shouldn't have such confidence.

 

I think people conflate the very reasonable "I am not going to adopt your 95-99% range because other thoughtful people disagree and I have no particular reason to trust you massively more than I trust other people" with the quite different "the fact that other thoughtful people disagree means there's no way you could arrive at 95-99% confidence", which is false. I think thoughtful people disagreeing with you is decent evidence that you are wrong, but it can still be outweighed.

comment by Eli Tyre (elityre) · 2021-10-18T04:09:52.947Z · LW(p) · GW(p)

If you're not disagreeing with people about important things then you're not thinking.

This is a great sentence. I kind of want it on a t-shirt.

comment by Alexander (alexander-1) · 2021-10-17T04:25:56.114Z · LW(p) · GW(p)

I sought a lesson we could learn from this situation, and your comment captured such a lesson well.

This is reminiscent of the message of the Dune trilogy. Frank Herbert warns about society's tendencies to "give over every decision-making capacity" to a charismatic leader. Herbert said in 1979:

The bottom line of the Dune trilogy is: beware of heroes. Much better rely on your own judgment, and your own mistakes.

comment by Viliam · 2021-10-17T14:27:49.989Z · LW(p) · GW(p)

If you're not disagreeing with people about important things then you're not thinking.

Indeed. And if people object to someone disagreeing with them, that would imply they are 100% certain of being right.

I recently overheard someone (who I'd not met before) telling Eliezer Yudkowsky that he's not allowed to have extreme beliefs about AGI outcomes.

On one hand, this suggests that the pressure to groupthink is strong. On the other hand, this is evidence of Eliezer not being treated as an infallible leader... which I suppose is good news in this avalanche of bad news.

(There is a method to reduce group pressure, by making everyone write their opinion first, and only then tell each other the opinions. Problem is, this stops working if you estimate the same thing repeatedly, because people already know what the group opinion was in the past.)

comment by Rob Bensinger (RobbBB) · 2021-10-17T22:23:10.164Z · LW(p) · GW(p)

Kate Donovan messaged me to say:

I think four people experiencing psychosis in a period of five years, in a community this large with high rates of autism and drug use, is shockingly low relative to base rates.

[...]

A fast pass suggests that my 1-3% for lifetime prevalence was right, but mostly appearing at 15-35.

And since we have conservatively 500 people in the cluster (a lot more people than that attended CFAR workshops or are in MIRI or CFAR's orbit), 4 is low. Given that I suspect the cluster is larger and I am pretty sure my numbers don't include drug induced psychosis, just primary psychosis.

The base rate seems important to take into account here, though per Jessica, "Obviously, for every case of poor mental health that 'blows up' and is noted, there are many cases that aren't." (But I'd guess that's true for the base-rate stats too?)

Replies from: jessica.liu.taylor, LGS, Gunnar_Zarncke
comment by jessicata (jessica.liu.taylor) · 2021-10-17T22:43:22.173Z · LW(p) · GW(p)

This is a good point regarding the broader community. I do think that, given that at least 2 cases were former MIRI employees, there might be a higher rate in that subgroup.

EDIT: It's also relevant that a lot of these cases happened in the same few years. 4 of the 5 cases of psychiatric hospitalization or jail time I know about happened in 2017; the other happened sometime in 2017-2019. I think these people were in the 15-35 age range, which spans 20 years.
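The base-rate arithmetic in this subthread can be made explicit. This is only a back-of-envelope sketch under crude assumptions (lifetime prevalence spread uniformly over the 20-year onset window, first onsets only, no drug-induced cases), using the figures quoted above:

```python
# Back-of-envelope: expected first-onset psychosis cases in a cluster,
# spreading lifetime prevalence evenly over the typical onset window.
def expected_cases(cluster_size, years_observed, lifetime_prevalence,
                   onset_window_years=20):
    annual_incidence = lifetime_prevalence / onset_window_years  # per person-year
    return cluster_size * years_observed * annual_incidence

# Figures from the thread: >=500 people, ~5 years, 1-3% lifetime prevalence,
# onset mostly at ages 15-35 (a 20-year window).
print(expected_cases(500, 5, 0.01))   # about 1.25 expected cases
print(expected_cases(500, 5, 0.03))   # about 3.75 expected cases
print(expected_cases(2000, 5, 0.02))  # about 10, if the cluster is larger
```

On these naive assumptions, a 500-person cluster over 5 years predicts roughly 1-4 onsets, so whether 4 observed cases is high or low turns almost entirely on the true cluster size; and the concentration of 4 cases in a single year is a separate anomaly that a uniform baseline model doesn't capture.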

comment by LGS · 2021-10-19T04:11:08.826Z · LW(p) · GW(p)

I'm a complete outsider looking in here, so here's an outsider's perspective (from someone in CS academia, currently in my early 30s).

I've never heard or seen anyone, in real life, ever have psychosis. I know of 0 cases. Yeah, I know that people don't share such things, but I've heard of no rumors either.

By contrast, depression/anxiety seems common (especially among grad students) and I know of a couple of suicides. There was even a murder! But never psychosis; without the internet I wouldn't even know it's a real thing.

I don't know what the official base rate is, but saying "4 cases is low" while referring to the group of people I'm familiar with (smart STEM types) is, from my point of view, absurd.

The rate you quote is high. There may be good explanations for this: maybe rationalists are more open about their psychosis when they get it. Maybe they are more gossipy so each case of psychosis becomes widely known. Maybe the community is easier to enter for people with pre-existing psychotic tendencies. Maybe it's all the drugs some rationalists use.

But pretending the reported rate of psychosis is low seems counterproductive to me.

Replies from: JenniferRM, mingyuan, romeostevensit
comment by JenniferRM · 2021-10-29T05:18:14.728Z · LW(p) · GW(p)

I lived in a student housing cooperative for 3 years during my undergrad experience. These were non-rationalists. I lived with 14 people, then 35, then 35 (somewhat overlapping) people.

In these 3 years I saw 3 people go through a period of psychosis.

Once it was because of whippets, basically, and updated me very very strongly away from nitrous oxide being safe (it potentiates response to itself, so there's a positive feedback loop, and positive feedback loops in biology are intrinsically scary). Another time it was because the young man was almost too autistic to function in social environments and then feared that he'd insulted a woman and would be cast out of polite society for "that action and also for overreacting to the repercussions of the action". The last person was a mixture of marijuana and having his Christianity fall apart after being away from the social environment of his upbringing.

A striking thing about psychosis is that up close it really seems more like a biological problem rather than a philosophic one, whereas I had always theorized naively that there would be something philosophically interesting about it, with opportunities to learn or teach in a way that connected to the altered "so-called mental state".

I saw two of the three cases somewhat closely, and it wasn't "this person believes something false, in a way that maybe they could be talked out of" (which was my previous model of "being crazy").  It was more like "this human body has a brain that is executing microinstructions that might be part of a human-like performance of some coherent motion of the soul, if it progressed statefully, but instead it is a highly stuttering, almost stateless loop of nearly pure distress, repeating words over and over, and forgetting things within 2 seconds of hearing them, and calming itself, but then forgetting why it calmed itself, and then forgetting that it forgot, and so on, with very very obvious dysfunction".

I rarely talk about any of it out of respect for their privacy, but this is so long ago that anyone who can figure out who I'm talking about at this point (from what I've said) probably also knows the events in question.

It seemed almost indecent to have observed it, and it feels wrong to discuss, out of respect for their personhood. Which maybe doesn't make sense, but that is simply part of the tone of these memories. Two of the three left college and never came back, and another took a week off in perhaps a hotel or something, with parental support. People who were there spoke of it in hushed tones. It was spiritually scary.

My understanding is that base rates for schizophrenia are roughly 1% or 2% cross culturally, and are often on the introverted side of things. Also I think that many people rarely talk about the experiences (that they saw others go though, or that they went through), so you could know people who saw or experienced such things... and they might be very unlikely to ever volunteer their observations.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-29T07:50:15.358Z · LW(p) · GW(p)

Thanks for this account.

it wasn't "this person believes something false, in a way that maybe they could be talked out of" (which was my previous model of "being crazy").  It was more like "this human body has a brain that is executing microinstructions

Feels like there's more to the story here. Two of the cases you gave do sound like they had some mental thing (Christianity, social fear) that precipitated the psychosis, even if the psychosis itself was non-mental.

comment by mingyuan · 2021-10-19T21:09:01.502Z · LW(p) · GW(p)

I agree with other commenters that you are just less likely to see psychosis even if it's there, both because it's not ongoing in the way that depression and anxiety are, and because people are less likely to discuss it. I was only one step away from Jessica in the social graph in October of 2017 and never had any inkling that she'd had a psychotic episode until just now. I also wasn't aware that Zack Davis had ever had a psychotic episode, despite having met him several times and having read his blog a bit. I also lived with Olivia during the time that she was apparently inspiring psychosis in others. 

In fact, the only psychotic episodes I've known about are ones that had news stories written about them, which suggests to me that you are probably underestimating the extent to which people keep quiet about the psychotic episodes of themselves and those close to them. It seems in quite poor taste to gossip about, akin to gossiping about friends' suicide attempts (which I also assume happen much more often than I hear about — I think one generally only hears about the ones that succeed or that are publicized to spread awareness).

Just for thoroughness, here are the psychotic episodes I've known about, in chronological order:

  1. Eric Bruylant's, which has been discussed in other comments. I was aware that he was in jail because my housemates were trying to support him by showing up to his trials and stuff, and we still got mail for him (the case had happened pretty recently when I moved in). I think I found out the details — including learning that psychosis was involved — from the news story though.
  2. I was on a sports team in college, and the year after I graduated, one of my teammates had a psychotic break. I only heard about this because he was wandering the streets yelling and ended up trying to attack some campus police officers with a metal pipe and got shot (thankfully non-fatally).
  3. It's unclear to me if what happened with Ziz&co at Westminster Woods was a psychotic episode, but in any case I knew about it at the time and only had the details clarified in the news story.

Replies from: LGS
comment by LGS · 2021-10-20T11:43:56.611Z · LW(p) · GW(p)

I feel like people keep telling me that there should be more psychosis around me than I hear about, which is irrelevant to my point: my point is that the frequency with which I hear about psychosis in the rationalist community is like an order of magnitude higher than the frequency with which I hear about it elsewhere.

It doesn't matter whether people hide psychosis among my social group; the observation to explain is why people don't hide psychosis in the rationalist community to the same extent.

For example, you mention 2 separate example of Bay Area rationalists making the news for psychosis. I know of no people in my academic community who have made the news for psychosis. Assuming equal background rates, what is left to explain is why rationalists are more likely to make the news when they get psychosis.

Another example: there have now been 1-2 people who have admitted to psychosis in blog posts intended as public callouts. I know of no people in my academic community who have written public callout blog posts in which they say they've had psychosis. Is there an explanation for why rationalists who've had psychosis are more likely to write public callout blog posts?

Anyway, this discussion feels kind of moot now that I've read Scott Alexander's update to his comment. He says that several people (who knew each other) all had psychosis around the same time in 2017. No reasonable person can think this is merely baseline; some kind of social contagion is surely involved (probably just people sharing drugs or drug recommendations).

Replies from: tomcatfish, Puxi Deek
comment by tomcatfish · 2021-10-23T04:01:04.051Z · LW(p) · GW(p)

I think part of it is that this isn't related to your social network, but your news habits and how your news sources cover your social network.

The newspapers you read are unlikely to write about your neighbor having any kind of "psychosis", but the forums you read will tell you when a rationalist does.

comment by Puxi Deek · 2021-10-20T11:51:12.615Z · LW(p) · GW(p)

Their leaving out the exact details of what went on with their groups makes the whole discussion sketchy. Maybe they just want to keep the conversation to themselves. If that's the case, why are they posting on LW?

comment by romeostevensit · 2021-10-19T04:42:30.649Z · LW(p) · GW(p)

Sampling error. Psychosis is not an ongoing thing, yielding many fewer chances to observe it than months- or years-long depression or anxiety. Psychosis often manifests when people are already isolated due to worsening mental health, whereas depression and anxiety can be exacerbated by exactly the situations in which you would observe them, i.e. socializing. Nor would people volunteer their experience, due to the much greater stigma.

Replies from: LGS, jessica.liu.taylor
comment by LGS · 2021-10-19T06:17:45.729Z · LW(p) · GW(p)

I am not comparing "number of psychosis among my friends" to "number of depression episodes among my friends". I am comparing "number of psychosis among my friends" to "number of psychosis among rationalists". Any sampling errors should apply equally to the rationalists (or if not, that demands an explanation).

The observation is that there's a lot more reported psychosis among rationalists than reported psychosis among (say) CS grad students. I don't have an explanation (and maybe there's an innocuous one), but I don't think people should be denying this fact.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-19T07:31:53.820Z · LW(p) · GW(p)

A hypothesis is that rationalists are a larger gossip community, so that e.g. you might hear about psychosis from 4 years ago in people you're nth-degree socially connected with, where maybe most other communities aren't like that?

Replies from: LGS
comment by LGS · 2021-10-19T09:41:51.770Z · LW(p) · GW(p)

Certainly possible! I mentioned this hypothesis upthread.

I wonder if there are ways to test it. For instance, do non-Bay-Arean rationalists also have a high rate of reported psychosis? I think not (not sure though), though perhaps most of the gossip centers on the Bay Area.

Are Bay Area rationalists also high in reported levels of other gossip-mediated things? I'm trying to name some, but most sexual ones are bad examples because of the polyamory confounder. How about: are Bay rationalists high in reported rates of plastic surgery? How about abortion? These seem like somewhat embarrassing things that you'd normally not find out about, but that people like to gossip about.

Or maybe people don't care to gossip about these things on the internet, because they are less interesting than psychosis.

Replies from: Freyja, TekhneMakre
comment by Freyja · 2021-10-19T16:45:32.240Z · LW(p) · GW(p)

I’m someone with a family history of psychosis and I spend quite a lot of time researching it—treatments, crisis response, cultural responses to it. There are roughly as many incidents of psychosis in my immediate-to-extended family as are described in this post in the extended rationalist community. Major predictive factors include stress, family history, and use of marijuana (and, to a lesser extent, other psychedelics). I don’t have studies to back this up, but I have an instinct, based on my own experience, that openness-to-experience and risk-of-psychosis are correlated as family risk factors. So given the drugs, stress, and genetic openness, I’d expect generic Bay Area smart people to already have a fairly high risk of psychosis compared to, say, people in more conservative areas.

comment by TekhneMakre · 2021-10-19T10:20:55.182Z · LW(p) · GW(p)
I mentioned this hypothesis upthread.

(Sort of; you did say "more gossipy -> more widely known", but I wanted to specifically add the word "larger", the point being that a small + extra gossipy community would have a higher than usual report rate, and so would a large + extra gossipy (+ memory-ful) community; but the larger one would have more raw numbers, so you'd get a wrong estimate of the proportional rate if you estimated the size of the relevant reference class using intuitions based on small gossip communities. And maybe even a less gossipy but larger network would still have this effect; like, I *never* hear gossip about people in communities I'm not a part of, even if I talk to some people from those communities, so there's more structure than just the rate of gossip. It's more a question of how large the "gossip-percolation connected component" is.)

comment by jessicata (jessica.liu.taylor) · 2021-10-19T05:22:39.157Z · LW(p) · GW(p)

See PhoenixFriend's comment [LW(p) · GW(p)], there were multiple cases I didn't know about, so a lot of people's thoughts about this post are recapitulating sampling bias from my own knowledge (which is from my own social network, e.g. oversampling trans people and people talking with Michael). This confirms that people are avoiding volunteering the information that they had a psychotic break.

Replies from: Duncan_Sabien
comment by Duncan_Sabien · 2021-10-19T16:59:04.033Z · LW(p) · GW(p)

PhoenixFriend alleges multiple cases you didn't know about, but so far no one else has affirmed that those cases existed or were closely connected with CFAR/MIRI.

I think it's entirely possible that those cases did exist and will be affirmed, but at the moment my state is "betting on skeptical."

comment by orthonormal · 2021-10-17T21:26:31.884Z · LW(p) · GW(p)

Thank you for writing this, Jessica. First, you've had some miserable experiences in the last several years, and regardless of everything else, those times sound terrifying and awful. You have my deep sympathy.

Regardless of my seeing a large distinction between the Leverage situation and MIRI/CFAR, I agree with Jessica that this is a good time to revisit the safety of various orgs in the rationality/EA space.

I almost perfectly overlapped with Jessica at MIRI from March 2015 to June 2017. (Yes, this uniquely identifies me. Don't use my actual name here anyway, please.) So I think I can speak to a great deal of this.

I'll run down a summary of the specifics first (or at least, the specifics I know enough about to speak meaningfully), and then at the end discuss what I see overall.

Claim: People in and adjacent to MIRI/CFAR manifest major mental health problems, significantly more often than the background rate.

I think this is true; I believe I know two of the first cases to which Jessica refers; and I'm probably not plugged-in enough socially to know the others. And then there's the Ziz catastrophe.

Claim: Eliezer and Nate updated sharply toward shorter timelines, other MIRI researchers became similarly convinced, and they repeatedly tried to persuade Jessica and others.

This is true, but non-nefarious in my genuine opinion, because it's a genuine belief and because given that belief, you'll have better odds of success if the whole team at least takes the hypothesis quite seriously.

(As for me, I've stably been at a point where near-term AGI wouldn't surprise me much, but the lack of it also wouldn't surprise me much. That's all it takes, really, to be worried about near-term AGI.)

Claim: MIRI started getting secretive about their research.

This is true, to some extent. Nate and Eliezer discussed with the team that some things might have to be kept secret, and applied some basic levels of it to things we thought at the time might be AGI-relevant instead of only FAI-relevant. I think that here, the concern was less about AGI timelines and more about the multipolar race caused by DeepMind vs OpenAI. Basically any new advance gets deployed immediately in our current world.

However, I don't recall ever being told I'm not allowed to know what someone else is working on, at least in broad strokes. Maybe my memory is faulty here, but it diverges from Jessica's. 

(I was sometimes coy about whether I knew anything secret or not, in true glomarization fashion; I hope this didn't contribute to that feeling.)

There are surely things that Eliezer and Nate only wanted to discuss with each other, or with a specific researcher or two.

Claim: MIRI had rarity narratives around itself and around Eliezer in particular.

This is true. It would be weird if, given MIRI's reason for being, it didn't at least have the institutional rarity narrative—if one believed somebody else were just as capable of causing AI to be Friendly, clearly one should join their project instead of starting one's own.

About Eliezer, there was a large but not infinite rarity narrative. We sometimes joked about the "bus factor": if researcher X were hit by a bus, how much would the chance of success drop? Setting aside that this is a ridiculous and somewhat mean thing to joke about, the usual consensus was that Eliezer's bus factor was the highest, but that a couple of MIRI's researchers put together exceeded it. (Nate's was also quite high.)

(My expectation is that the same would not have been said about Geoff within Leverage.)

Claim: Working at MIRI/CFAR made it harder to connect with people outside the community.

There's an extent to which this is true of any community that includes an idealistic job (e.g. a paid political activist probably has like-minded friends and finds it a bit more difficult to connect outside that circle). Is it true beyond that?

Not for me, at least. I maintained my ties with the other community I'd been plugged into (social dancing) and kept in good touch with my family (it helps that I have a really good family). As with the above example, the social path of least resistance would have been to just be friends with the same network of people in one's work orbit, but there wasn't anything beyond that level of gravity in effect for me.

Claim: CFAR got way too far into Shiny-Woo-Adjacent-Flavor-Of-The-Week.

This is an unfair framing... because I agree with Jessica's claim 100%. Besides Kegan Levels and the MAPLE dalliance, there was the Circling phase and probably much else I wasn't around for.

As for causes, I've been of the opinion that Anna Salamon has a lot of strengths around communicating ideas, but that her hiring has had as many hits as misses. There's massive churn, people come in with their Big Ideas and nobody to stop them, and also people come in who aren't in a good emotional place for their responsibilities. I think CFAR would be better off if Anna delegated hiring to someone else. [EDIT: Vaniver corrects me to say that Pete Michaud has been mostly in charge of hiring for the past several years, in which case I'm criticizing him rather than Anna for any bad hiring decisions during that time.]

Overall Thoughts

Essentially, I think there's one big difference between issues with MIRI/CFAR and issues at Leverage:

The actions of CFAR/MIRI harmed people unintentionally, as evidenced by the result that people burned out and left quickly and with high frequency. The churn, especially in CFAR, hurt the mission, so it was definitely not the successful result of any strategic process.

Geoff Anders and others at Leverage harmed people intentionally, in ways that were intended to maintain control over those people. And to a large extent, that seems to have succeeded until Leverage fell apart.

Specifically, [accidentally triggering psychotic mental states by conveying a strange but honestly held worldview without adding adequate safeties] is different from [intentionally triggering psychotic mental states in order to pull people closer and prevent them from leaving], which is Zoe's accusation. Even if it's possible for a mental breakdown to be benign under the right circumstances, and even if an unplanned one is more likely to result in very very wrong circumstances, I'm far more terrified of a group that strategically plans for its members to have psychosis with the intent of molding those members further toward the group's mission.

Unintentional harm is still harm, of course! It might have even been greater harm in total! But it makes a big difference when it comes to assessing how realistic a project of reform might be.

There are surely some deep reforms along these lines that CFAR/MIRI must consider. For one thing: scrupulosity, in the context of AI safety, seems to be a common thread in several of these breakdowns. I've taken this seriously enough in the past to post extensively on it here [? · GW]. I'd like CFAR/MIRI leadership to carefully update on how scrupulosity hurts both their people and their mission, and think about changes beyond surface-level things like adding a curriculum on scrupulosity. The actual incentives ought to change.

Finally, a good amount of Jessica's post (similarly to Zoe's post) concerns her inner experiences, on which she is the undisputed expert. I'm not ignoring those parts above; I just can't say anything about them, since as a third-person observer it's much easier to discuss the external realities than the internal ones. (Likewise with Zoe and Leverage.)

Replies from: Gunnar_Zarncke, orthonormal, Vaniver, vanessa-kosoy
comment by Gunnar_Zarncke · 2021-10-17T23:15:43.582Z · LW(p) · GW(p)

Claim: People in and adjacent to MIRI/CFAR manifest major mental health problems, significantly more often than the background rate.

I think this is true

My main complaint about this and the Leverage post is the lack of base-rate data. How many people develop mental health problems in a) normal companies, b) startups, c) small non-profits, d) cults/sects? So far, all I have seen are two cases. And in the startups I have worked at, I would also have been able to find mental health cases that could be tied to the company narrative. Humans being human, narratives get woven. And the internet being the internet, some will get blown out of proportion. That doesn't diminish the personal experience at all. I am updating only slightly on CFAR or MIRI, and basically not at all on "things look better from the outside than from the inside."

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-18T02:02:56.048Z · LW(p) · GW(p)

In particular, I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed depression or anxiety (link). Given the kind of undirected, often low-paid work that many have been doing for the last decade, I think that's the right reference class to draw from, and my current guess is we are roughly at that same level, or slightly below it (which is a crazy high number, and I think should give us a lot of pause). 

Replies from: Linch
comment by Linch · 2021-10-18T11:21:19.346Z · LW(p) · GW(p)

I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed [emphasis mine] depression or anxiety (link)

I'm confused about how you got to this conclusion, and think it is most likely false. Neither your link, the linked study, nor the linked meta-analysis in the linked study of your link says this. Instead, the abstract of the linked^3 meta-analysis says:

Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate of the proportion of students with depression was 0.24 (95% confidence interval [CI], 0.18-0.31; I2 = 98.75%). In a meta-analysis of the nine studies reporting the prevalence of clinically significant symptoms of anxiety across 15,626 students, the estimated proportion of students with anxiety was 0.17 (95% CI, 0.12-0.23; I2 = 98.05%).

Further, the discussion section of the linked^3 study emphasizes:

While validated screening instruments tend to over-identify cases of depression (relative to structured clinical interviews) by approximately a factor of two [67, 68], our findings nonetheless point to a major public health problem among Ph.D. students.

So I think there is at least two things going on here:

  1. Most people with clinically significant symptoms do not go get diagnosed, so "clinically significant symptoms of" depression/anxiety is a noticeably lower bar than "actually clinically diagnosed"
  2. As implied in the quoted discussion above, if everybody were to seek diagnosis, only ~half of the rate of symptomatic people would be clinically diagnosed as having depression/anxiety.
    1. For those keeping score, this is ~12% for depression and 8.5% for anxiety, with some error bars.

Separately, I also think:

my current guess is we are roughly at that same level, or slightly below it

is wrong. My guess is that xrisk reducers have worse mental health on average compared to grad students. (I also believe this, with lower confidence, about people working in other EA cause areas like animal welfare, global poverty, or non-xrisk longtermism, as well as serious rationalists who aren't professionally involved in EA cause areas).

Replies from: Gunnar_Zarncke, habryka4
comment by Gunnar_Zarncke · 2021-10-18T12:09:33.004Z · LW(p) · GW(p)

Note that the pooled prevalence is 24% (CI 18-31). But it differs a lot across studies, symptoms, and locations. In the individual studies, the range really runs from zero to 50% (or rather to 38% if you exclude a study with only 6 participants). I think a suitable reference class would be the University of California study, which has 3,190 participants and a prevalence of 38%.  

Replies from: Linch, habryka4
comment by Linch · 2021-10-18T21:00:38.119Z · LW(p) · GW(p)

Sorry, am I misunderstanding something? I think taking "clinically significant symptoms", specific to the UC system, as a given is wrong because it did not directly address either of my two criticisms:

1. Clinically significant symptoms =/= clinically diagnosed: even in a world where clinically significant symptoms mapped 1:1 onto would-be diagnoses, many people never actually get diagnosed

2. Clinically significant symptoms do not, in fact, have a 1:1 relationship with what would be clinically diagnosed.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2021-10-18T22:27:16.219Z · LW(p) · GW(p)

Well, I agree that the actual prevalence you have in mind would be roughly half of 38% i.e. ~20%. That is still much higher than the 12% you arrived at. And either value is so high that there is little surprise some severe episodes of some people happened in a 5-year frame. 

comment by habryka (habryka4) · 2021-10-18T17:20:22.285Z · LW(p) · GW(p)

The UC Berkeley study was the one that I had cached in my mind as generating this number. I will reread it later today to make sure that it's right, but it sure seems like the most relevant reference class, given the same physical location.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2021-10-18T19:50:54.417Z · LW(p) · GW(p)

I had a look at the situation in Germany and it doesn't look much better. 17% of students are diagnosed with at least one mental disorder. This is based on the health records of all students insured by one of the largest public health insurers in Germany (covering about ten percent of the population):

https://www.barmer.de/blob/144368/08f7b513fdb6f06703c6e9765ee9375f/data/dl-barmer-arztreport-2018.pdf 

comment by habryka (habryka4) · 2021-10-18T19:37:29.646Z · LW(p) · GW(p)

I feel like the paragraph you cited just seems like the straightforward explanation of where my belief comes from? 

Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate of the proportion of students with depression was 0.24 (95% confidence interval [CI], 0.18–0.31; I2 = 98.75%). In a meta-analysis of the nine studies reporting the prevalence of clinically significant symptoms of anxiety across 15,626 students, the estimated proportion of students with anxiety was 0.17 (95% CI, 0.12–0.23; I2 = 98.05%)

24% of people have depression, 17% have anxiety, resulting in something like 30%-40% having one or the other. 

I did not remember the section about the screening instruments over-identifying cases of depression/anxiety by approximately a factor of two, which definitely cuts down my number, and I should have adjusted it in my above comment. I do think that factor of ~2 does maybe make me think that we are doing a bit worse than grad students, though I am not super sure.
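(For the arithmetic behind "30%-40% having one or the other": the 24% and 17% pooled prevalences are from the quoted meta-analysis, but how much they overlap is unknown, so the combined rate can only be bounded. A minimal sketch, with the independence assumption being mine rather than the study's:)

```python
# Bounding the share of students with depression OR anxiety, given only
# the two marginal pooled prevalences from the meta-analysis.
depression = 0.24  # pooled prevalence, clinically significant depression symptoms
anxiety = 0.17     # pooled prevalence, clinically significant anxiety symptoms

# If the two groups never overlap, the union is the sum; if one group
# contains the other, the union is the larger marginal.
upper = depression + anxiety                         # 0.41
lower = max(depression, anxiety)                     # 0.24
# Assuming independence (an assumption, not a finding of the study):
independent = 1 - (1 - depression) * (1 - anxiety)   # 0.3692

print(f"either condition: {lower:.0%} to {upper:.0%} "
      f"(about {independent:.0%} if independent)")
```

So "30-40%" corresponds to the high-overlap-to-independence end of the range, and the factor-of-two screening correction would roughly halve whichever point in that range you pick.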

Replies from: Linch
comment by Linch · 2021-10-18T20:45:26.456Z · LW(p) · GW(p)

Sorry, maybe this is too nitpicky, but clinically significant symptoms =/= clinically diagnosed, even in worlds where the clinically significant symptoms are severe enough to be diagnosed as such.

If you had instead said "in population studies, 30-40% of graduate students have anxiety or depression severe enough to be clinically diagnosed as such were they to seek diagnosis," then I think this would be a normal misreading from not jumping through enough links.

Put another way, if someone in mid-2020 told me that they had symptomatic covid and were formally diagnosed with covid, I would expect that they had worse symptoms than someone who said they had covid symptoms and later tested positive for covid antibodies. This is because jumping through the hoops to get a clinical diagnosis is nontrivial Bayesian evidence of severity, not just of certainty, under most circumstances, and especially when testing is limited and/or gatekept (which was true for covid in many parts of the world in 2020, and is usually true in the US for mental health). 

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-18T20:51:16.945Z · LW(p) · GW(p)

Ah, sorry, yes. Me being unclear on that was also bad. The phrasing you give is the one I intended to convey, though I sure didn't do it.

Replies from: Linch
comment by Linch · 2021-10-18T21:03:20.349Z · LW(p) · GW(p)

Thanks, appreciate the update!

comment by orthonormal · 2021-10-17T21:35:32.681Z · LW(p) · GW(p)

Additionally, as a canary statement: I was also never asked to sign an NDA.

comment by Vaniver · 2021-10-17T23:23:52.319Z · LW(p) · GW(p)

I think CFAR would be better off if Anna delegated hiring to someone else.

I think Pete did (most of?) the hiring as soon as he became ED, so I think this has been the state of CFAR for a while (while I think Anna has also been able to hire people she wanted to hire).

Replies from: petemichaud-1
comment by PeteMichaud (petemichaud-1) · 2021-10-18T07:14:42.324Z · LW(p) · GW(p)

It's always been a somewhat group-involved process, but yes, I was primarily responsible for hiring from roughly 2016 through the end of 2017; after that it would have been Tim. But again, it's a small org, and the whole group always had some involvement. 

Replies from: elityre
comment by Eli Tyre (elityre) · 2021-10-18T08:11:54.178Z · LW(p) · GW(p)

Without denying that it is a small org and staff usually have some input over hiring, that input is usually informal.

My understanding is that in the period when Anna was ED, there was an explicit all-staff discussion when they were considering a hire (after the person had done a trial?). In the Pete era, I'm sure Pete asked for staff members' opinions, and if (for instance) I sent him an email with my thoughts on a potential hire, he would take that info into account, but there was no institutional group meeting. 

comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-17T22:27:16.006Z · LW(p) · GW(p)

if one believed somebody else were just as capable of causing AI to be Friendly, clearly one should join their project instead of starting one's own.

Nitpicking: there are reasons to have multiple projects; for example, it's convenient to be in the same geographic location, but not everyone can relocate to any place.

Replies from: orthonormal
comment by orthonormal · 2021-10-17T22:43:11.063Z · LW(p) · GW(p)

Sure - and MIRI/FHI are a decent complement to each other, the latter providing a respectable academic face to weird ideas. 

Generally though, it's far more productive to have ten top researchers in the same org rather than having five orgs each with two top researchers and a couple of others to round them out. Geography is a secondary concern to that.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-17T22:54:27.014Z · LW(p) · GW(p)

A "secondary concern" in the sense that we should work remotely? Or in the sense that everyone should relocate? Because the latter is unrealistic: people have families, friends, communities; not everyone can uproot themselves.

Replies from: orthonormal
comment by orthonormal · 2021-10-17T23:52:59.665Z · LW(p) · GW(p)

A secondary concern in that it's better to have one org that has some people in different locations, but everyone communicating heavily, than to have two separate organizations.

Replies from: Davidmanheim, vanessa-kosoy
comment by Davidmanheim · 2021-10-18T06:53:08.479Z · LW(p) · GW(p)

I think this is much more complex than you're assuming. As a sketch of why, costs of communication scale poorly, and the benefits of being small and coordinating centrally often beats the costs imposed by needing to run everything as one organization. (This is why people advise startups to outsource non-central work.)

comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-18T09:29:04.502Z · LW(p) · GW(p)

This might be the right approach, but notice that no existing AI risk org does that. They all require physical presence.

Replies from: novalinium
comment by novalinium · 2021-10-18T17:31:50.165Z · LW(p) · GW(p)

Anthropic does not require consistent physical presence.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-18T18:27:33.865Z · LW(p) · GW(p)

AFAICT, Anthropic is not an existential AI safety org per se, they're just doing a very particular type of research which might help with existential safety. But also, why do you think they don't require physical presence?

Replies from: novalinium, Vaniver
comment by novalinium · 2021-10-18T23:13:53.262Z · LW(p) · GW(p)

If you're asking why I believe that they don't require presence, I've been interviewing with them and that's my understanding from talking with them. The first line of copy on their website is

Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.

Sounds pretty much like a safety org to me.

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-19T11:24:47.756Z · LW(p) · GW(p)

If you're asking why I believe that they don't require presence, I've been interviewing with them and that's my understanding from talking with them.

Are you talking about "you can work from home and come to the office occasionally", or "you can live on a different continent"?

Sounds pretty much like a safety org to me.

I found no mention of existential risk on their web page. They seem to be a commercial company, aiming at short-to-mid-term applications. I doubt they have any intention of doing e.g. purely theoretical research, especially if it has no applications to modern systems. So, what they do can still be meritorious and relevant to reducing existential risk. But the context of this discussion is: can we replace all AI safety orgs with just one org? And Anthropic is too specialized to serve such a role.

comment by Vaniver · 2021-10-18T19:20:14.951Z · LW(p) · GW(p)

I believe Anthropic doesn't expect its employees to be in the office every day, but I think this is more pandemic-related than it is a deliberate organizational design choice; my guess is that most Anthropic employees will be in the office a year from now.

comment by nostalgebraist · 2021-10-17T21:01:48.911Z · LW(p) · GW(p)

First, thank you for writing this.

Second, I want to jot down a thought I've had for a while now, and which came to mind when I read both this and Zoe's Leverage post.

To me, it looks like there is a recurring phenomenon in the rationalist/EA world where people...

  • ...become convinced that the future is in their hands: that the fate of the entire long-term future ("the future light-cone") depends on the success of their work, and the work of a small circle of like-minded collaborators
  • ...become convinced that (for some reason) only they, and their small circle, can do this work (or can do it correctly, or morally, etc.) -- that in spite of the work's vast importance, in spite of the existence of billions of humans and surely at least thousands with comparable or superior talent for this type of work, it is correct/necessary for the work to be done by this tiny group
  • ...become less concerned with the epistemic side of rationality -- "how do I know I'm right? how do I become more right than I already am?" -- and more concerned with gaining control and influence, so that the long-term future may be shaped by their own (already-obviously-correct) views
  • ...spend more effort on self-experimentation and on self-improvement techniques, with the aim of turning themselves into a person capable of making world-historic breakthroughs -- if they do not feel like such a person yet, they must become one, since the breakthroughs must be made within their small group
  • ...become increasingly concerned with a sort of "monastic" notion of purity or virtue: some set of traits which few-to-no people possess naturally, which are necessary for the great work, and which can only be attained through an inward-looking process of self-cultivation that removes inner obstacles, impurities, or aversive reflexes ("debugging," making oneself "actually try")
  • ...suffer increasingly from (understandable!) scrupulosity and paranoia, which compete with the object-level work for mental space and emotional energy
  • ...involve themselves in extreme secrecy, factional splits with closely related thinkers, analyses of how others fail to achieve monastic virtue, and other forms of zero- or negative-sum conflict which do not seem typical of healthy research communities
  • ...become probably less productive at the object-level work, and at least not obviously more productive, and certainly not productive in the clearly unique way that would be necessary to (begin to) justify the emphasis on secrecy, purity, and specialness

I see all of the above in Ziz's blog, for example, which is probably the clearest and most concentrated example I know of the phenomenon.  (This is not to say that Ziz is wrong about everything, or even to say Ziz is right or wrong about anything -- only to observe that her writing is full of factionalism, full of concern for "monastic virtue," much less prone to raise the question "how do I know I'm right?" than typical rationalist blogging, etc.)  I got the same feeling reading about Zoe's experience inside Leverage.  And I see many of the same things reported in this post.

I write from a great remove, as someone who's socially involved with parts of the rationalist community, but who has never even been to the Bay Area -- indeed, as someone skeptical that AI safety research is even very important!  This distance has the obvious advantages and disadvantages.

One of the advantages, I think, is that I don't even have inklings of fear or scrupulosity about AI safety.  I just see it as a technical/philosophical research problem.  An extremely difficult one, yes, but one that is not clearly special or unique, except possibly in its sheer level of difficulty.

So, I expect it is similar to other problems of that type.  Like most such problems, it would probably benefit from a much larger pool of researchers: a lot of research is just perfectly-parallelizable brute-force search, trying many different things most of which will not work.

It would be both surprising news, and immensely bad news, to learn that only a tiny group of people could (or should) work on such a problem -- that would mean applying vastly less parallel "compute" to the problem, relative to what is theoretically available, and that when the problem is forbiddingly difficult to begin with.

Of course, if this were really true, then one ought to believe that it is true.  But it surprises me how quick many rationalists are to accept this type of claim, on what looks from the outside like very little evidence.  And it also surprises me how quickly the same people accept unproven self-improvement techniques, even ideas that look like wishful thinking ("I can achieve uniquely great things if I just actually try, something no one else is doing..."), as substitutes for what they lose by accepting insularity.  Ways to make up for the loss in parallel compute by trying to "overclock" the few processors left available.

From where I stand, this just looks like a hole people go into, which harms them while -- sadly, ironically -- not even yielding the gains in object-level productivity it purports to provide.  The challenge is primarily technical, not personal or psychological, and it is unmoved by anything but direct attacks on its steep slopes.

(Relevant: in grad school, I remember feeling envious of some of my colleagues, who seemed able to do research easily, casually, without any of my own inner turmoil.  I put far more effort into self-cultivation, but they were far more productive.  I was, perhaps, "trying hard to actually try"; they were probably not even trying, just doing.  I was, perhaps, "working to overcome my akrasia"; they simply did not have my akrasia to begin with.

I believe that a vast amount of good technical research is done by such people, perhaps even the vast majority of good technical research.  Some AI safety researchers are like this, and many people like this could do great AI safety research, I think; but they are utterly lacking in "monastic virtue" and they are the last people you will find attached to one of these secretive, cultivation-focused monastic groups.)

Replies from: Davis_Kingsley, cousin_it, hg00, TekhneMakre, Gunnar_Zarncke, pktechgirl, TAG
comment by Davis_Kingsley · 2021-10-18T07:34:59.684Z · LW(p) · GW(p)

I worked for CFAR full-time from 2014 until mid to late 2016, and have worked for CFAR part-time or as a frequent contractor ever since. My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present weren't really the main thing. I do think CFAR has not made as much research progress as I would like, but I think the reasoning for that is much more mundane and less esoteric than the pattern you describe here.

The fact of the matter is that for almost all the time I've been involved with CFAR, there just plain hasn't been a research team. Much of CFAR's focus has been on running workshops and other programs rather than on dedicated work towards extending the art; while there have occasionally been people allocated to research, in practice even these would often end up getting involved in workshop preparation and the like.

To put things another way, I would say it's much less "the full-time researchers are off unproductively experimenting on their own brains in secret" and more "there are no full-time researchers". To the best of my knowledge CFAR has not ever had what I would consider a systematic research and development program -- instead, the organization has largely been focused on delivering existing content and programs, and insofar as the curriculum advances it does so via iteration and testing at workshops rather than a more structured or systematic development process.

I have historically found this state of affairs pretty frustrating (and am working to change it), but I think that it's a pretty different dynamic than the one you describe above.


(I suppose it's possible that the systematic and productive full-time CFAR research team was so secretive that I didn't even know it existed, but this seems unlikely...)

comment by cousin_it · 2021-10-17T22:50:22.373Z · LW(p) · GW(p)

Maybe offtopic, but the "trying too hard to try" part rings very true to me. Been on both sides of it.

The tricky thing about work, I'm realizing more and more, is that you should just work. That's the whole secret. If instead you start thinking how difficult the work is, or how important to the world, or how you need some self-improvement before you can do the work effectively, these thoughts will slow you down, and surprisingly often they'll also be completely wrong. It always turns out later that your best work wasn't the one that took the most effort, or felt the most important at the time; you were just having a nose-down busy period, doing a bunch of things, and only the passage of time made clear which of them mattered.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2021-10-18T05:27:23.738Z · LW(p) · GW(p)

Without commenting on the object level, I am really happy to see someone lay this out in terms of patterns that apply to a greater or lesser extent.

comment by hg00 · 2021-10-18T09:30:25.896Z · LW(p) · GW(p)

Does anyone have thoughts about avoiding failure modes of this sort?

Especially in the "least convenient possible world" where some of the bullet points are actually true -- like, if we're disseminating principles for wannabe AI Manhattan Projects, and we're optimizing the principles for the possibility that one of the wannabe AI Manhattan Projects is the real deal, what principles should we disseminate?


Most of my ideas are around "staying grounded" -- spend significant time hanging out with "normies" who don't buy into your worldview, maintain your sense of humor, fully unplug from work at least one day per week, have hobbies outside of work (perhaps optimizing explicitly for escapism in the form of computer games, TV shows, etc.) Possibly live somewhere other than the Bay Area, someplace with fewer alternative lifestyles and a stronger sense of community. (I think Oxford has been compared favorably to Berkeley with regard to presence of homeless people, at least.)

But I'm just guessing, and I encourage others to share their thoughts. Especially people who've observed/experienced mental health crises firsthand -- how could they have been prevented?

EDIT: I'm also curious how to think about scrupulosity. It seems to me that team members for an AI Manhattan Project should ideally have more scrupulosity/paranoia than average, for obvious reasons. ("A bit above the population average" might be somewhere around "they can count on one hand the number of times they blacked out while drinking" -- I suspect communities like ours already select for high-ish levels of scrupulosity.) However, my initial guess is that instead of directing that scrupulosity towards implementation of some sort of monastic ideal, they should instead direct that scrupulosity towards trying to make sure their plan doesn't fail in some way they didn't anticipate, trying to make sure their code doesn't have any bugs, monitoring their power-seeking tendencies, seeking out informed critics to learn from, making sure they themselves aren't a single point of failure, making sure that important secrets stay secret, etc. (what else should be on this list?) But, how much paranoia/scrupulosity is too much?

Replies from: romeostevensit, abiggerhammer, ChristianKl, Avi Weiss
comment by romeostevensit · 2021-10-18T16:00:20.900Z · LW(p) · GW(p)

IMO, a large number of mental health professionals simply aren't a good fit for high-intelligence people having philosophical crises. People know this and intuitively avoid the large hassle and expense of sorting through many bad matches. Finding solid people to refer to who are not otherwise associated with the community in any way would be helpful.

Replies from: RobbBB, ozziegooen, Zian
comment by Rob Bensinger (RobbBB) · 2021-10-18T18:37:30.866Z · LW(p) · GW(p)

I know someone who may be able to help with finding good mental health professionals for those situations; anyone who's reading this is welcome to PM me for contact info.

comment by ozziegooen · 2021-10-18T20:18:14.380Z · LW(p) · GW(p)

There's an "EA Mental Health Navigator" now to help people connect to the right care.
https://eamentalhealth.wixsite.com/navigator

I don't know how good it is yet. I just emailed them last week, and we set up an appointment for this upcoming Wednesday. I might report back later, as things progress.

comment by Zian · 2021-10-19T23:43:20.065Z · LW(p) · GW(p)

Unfortunately, by participating in this community (LW/etc.), we've disqualified ourselves from asking Scott to be our doctor (should I call him "Dr. Alexander" when talking about him-as-a-medical-professional while using his alias when he's not in a clinical environment?).

I concur with your comment about having trouble finding a good doctor for people like us. p(find a good doctor) is already low given the small n (also known as the doctor shortage). If you combine that with p(doctor works well with people like us), the result may rapidly approach epsilon.

It seems that the best advice is to make n bigger by seeking care in a place with a large per capita population of the doctors you need. For example, by combining https://nccd.cdc.gov/CKD/detail.aspx?Qnum=Q600 with the US Census ACS 2013 population estimates (https://data.census.gov/cedsci/table?t=Counts,%20Estimates,%20and%20Projections%3APopulation%20Total&g=0100000US%240400000&y=2013&tid=ACSDT1Y2013.B01003&hidePreview=true&tp=true), we see that the following states had >=0.9 primary care doctors per 1,000 people:

  • District of Columbia (1.4)
  • Vermont (1.1)
  • Massachusetts (1.0)
  • Maryland (0.9)
  • Minnesota (0.9)
  • Rhode Island (0.9)
  • New York (0.9)
  • Connecticut (0.9)
comment by abiggerhammer · 2021-10-26T05:57:19.157Z · LW(p) · GW(p)

Does anyone have thoughts about avoiding failure modes of this sort?

Meredith from Status451 here. I've been through a few psychotic episodes of my own, often with paranoid features, for reasons wholly unrelated to anything being discussed at the object-level here; they're unpleasant enough, both while they're going on and while cleaning up the mess afterward, that I have strong incentives to figure out how to avoid these kinds of failure modes! The patterns I've noticed are, of course, only from my own experience, but maybe relating them will be helpful.

  • Instrumental scrupulousness is a fantastic tool. By "instrumental scrupulousness" I simply mean pointing my scrupulousness at trying to make sure I'm not doing something I can't undo. More or less what you describe in your edit, honestly. As for how much is too much, you absolutely don't want to paralyse yourself into inaction through constantly second-guessing yourself. Real artists ship, after all!
  • Living someplace with good mental health care has been super crucial for me. In my case that's Belgium. I've only had to commit myself once, but it saved my life and was, bizarrely, one of the most autonomy-respecting experiences I've ever had. The US healthcare system is caught in a horrifically large principal-agent problem, and I don't know if it can extricate itself. Yeeting myself to another continent was literally the path of least resistance for me to find adequate, trustworthy care.
  • Secrecy is overrated and most things are nothingburgers. I've learned to identify certain thought patterns -- catastrophisation, for example -- as maladaptive, and while it'll probably always be a work in progress, the worst thing that actually does happen is usually far less awful than I imagined.

The "quit trying so hard and just do it" approach that you and nostalgebraist are gesturing at pays rent, IMO. Christian's and Avi's advice about cultivating stable and rewarding friendships and family relationships also comports with my experience.

comment by ChristianKl · 2021-10-18T14:49:43.230Z · LW(p) · GW(p)

I do think that encouraging people to stay in contact with their family and to work on having good relationships with them is very useful. Family can provide a form of grounding that having small talk with normies while going dancing or pursuing other hobbies doesn't provide.

When deciding whether a personal development group is culty, I think a good test is to ask whether the work of the group leads to the average person in the group having better or worse relationships with their parents.

comment by Avi (Avi Weiss) · 2021-10-18T10:10:23.456Z · LW(p) · GW(p)

I agree, and think it's important to 'stay grounded' in the 'normal world' if you're involved in any sort of intense organization or endeavor.

You've made some great suggestions.

I would also suggest that having a spouse (preferably one who isn't too involved, or involved at all), and maybe even some kids, is another commonality among people who find it easier to avoid going too far down these rabbit holes. Also, having a family is positive in countless other ways, and is what I consider part of the 'good life' for most people.

comment by TekhneMakre · 2021-10-17T22:13:34.839Z · LW(p) · GW(p)
It would be both surprising news, and immensely bad news, to learn that only a tiny group of people could (or should) work on such a problem -- that would mean applying vastly less parallel "compute" to the problem, relative to what is theoretically available, and that when the problem is forbiddingly difficult to begin with.  

I have substantial probability on an even worse state: there's *multiple* people or groups of people, *each* of which is *separately* necessary for AGI to go well. Like, metaphorically, your liver, heart, and brain would each be justified in having a "rarity narrative". In other words, yes, the parallel compute is necessary--there's lots of data and ideas and thinking that has to happen--but there's a continuum of how fungible the compute is relative to the problems that need to be solved, and there's plenty of stuff at the "not very fungible but very important" end. Blood is fungible (though you definitely need it), but you can't just lose a heart valve, or your hippocampus, and be fine.

Replies from: nostalgebraist
comment by nostalgebraist · 2021-10-17T22:43:46.198Z · LW(p) · GW(p)

I didn't mention it in the comment, but having a larger pool of researchers is not only useful for doing "ordinary" work in parallel -- it also increases the rate at which your research community discovers and accumulates outlier-level, irreplaceable genius figures of the Euler/Gauss kind.

If there are some such figures already in the community, great, but there are presumably others yet to be discovered.  That their impact is currently potential, not actual, does not make its sacrifice any less damaging.

Replies from: TekhneMakre
comment by TekhneMakre · 2021-10-17T22:53:28.615Z · LW(p) · GW(p)

Yep. (And I'm happy this overall discussion is happening, partly because, assuming rarity narratives are part of what leads to all this destructive psychic stuff as you described, then if a research community wants to work with people about whom rarity narratives would actually be somewhat *true*, the research community has as an important subgoal to figure out how to have true rarity narratives in a non-harmful way.)

comment by Gunnar_Zarncke · 2021-10-17T23:01:54.809Z · LW(p) · GW(p)

Most of these bullet points seem to apply to some degree to every new and risky endeavor ever started. How risky things are is often unclear at the start. Such groups are built from committed people. Small groups develop their own dynamics. Fast growth leads to social growing pains. Lack of success leads to a lot of additional difficulties. Also: evaporative cooling. And if (partial) success happens, even more growth creates the need for a management layer, etc. And later: hindsight bias.

comment by Elizabeth (pktechgirl) · 2021-10-18T05:28:35.588Z · LW(p) · GW(p)

Without commenting on the object level, I am really happy to see someone lay this out in terms of patterns that apply to a greater or lesser extent, with correlations but not in lockstep.

comment by TAG · 2021-10-17T21:14:47.903Z · LW(p) · GW(p)

Best. Comment. Ever.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-18T02:09:52.155Z · LW(p) · GW(p)

Mod note: I don't think LessWrong is the right place for this kind of comment. Please don't leave more of these. I mean, you will get downvoted, but we might also ban you from this and similar threads if you do more of that.

Replies from: Duncan_Sabien, Benquo
comment by Duncan_Sabien · 2021-10-18T05:05:33.912Z · LW(p) · GW(p)

It seems worthwhile to give a little more of the "why" here, lest people just walk away with the confusing feeling that there are invisible electric fences that they need to creep and cringe away from.

I'll try to lay out the why, and if I'm wrong or off, hopefully one of the mods or regular users will elaborate.

Some reasons why this type of comment doesn't fit the LW garden:

  • Low information density.  We want readers to be rewarded for each comment that strays across their visual field.
  • Cruxless/opaque/nonspecific.  While it's quite valid to leave a comment in support of another comment, we want it to be clear to readers why the other comment was deserving of more-support-than-mere-upvoting.
  • Self-signaling.  We want LW to both be, and feel, substantially different from the generic internet-as-a-whole, meaning that some things which are innocuous but strongly reminiscent of run-of-the-mill internetting provoke a strong "no, not that" reaction.
  • Driving things toward "sides."  There's the good stuff and the bad stuff, the good people and the bad people.  Fundamental bucketing, less attention to detail and gradients and complexity.

Having just laid out this case, I now feel bad about a similar comment that I made today, and am going to go either edit or delete it, in the pursuit of fairness and consistency.

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-18T05:30:50.629Z · LW(p) · GW(p)

Ah, sorry, yeah, I agree my mod notice wasn't specific enough. Most of my mod notice was actually about a mixture of this comment and this other comment [LW(p) · GW(p)], which felt like it was written by the same generator, but which feels more obviously bad to me (and probably to others too).

Like, the other comment that TAG left on this post felt like it was really trying to just be some kind of social flag that is common on the rest of the internet. Like, it felt like some kind of semi-ironic "Boo, outgroup" comment, and this comment felt like it was a parallel "Yay, ingroup!" comment, both of which felt like two sides of the same bad coin.

I think occasional "woo, this is great!" comments seem kind of good to me, if they are generated by a genuine sense of excitement and compassion, though I also wouldn't want them to become as everpresent here as they are on the rest of the internet. But I feel like I would want those comments to not come from the same generator that then generates a snarky "oh, just like this idiot..." comment. And if I had to choose between having both or neither, I would choose neither.

comment by Benquo · 2021-10-18T18:17:14.339Z · LW(p) · GW(p)

Are you going to tell Eliezer the same thing? https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe#EJPSjPv7nNzsam947 [LW(p) · GW(p)]

Replies from: habryka4
comment by habryka (habryka4) · 2021-10-19T07:29:28.808Z · LW(p) · GW(p)

No, Eliezer's comment seems like a straightforward "I am making a non-anonymous upvote" which is indeed a functionality I also sometimes want, since sometimes the identity of the upvoter definitely matters. The comment above seems like it's doing something different, especially in combination with the other comment I linked to.

comment by Eli Tyre (elityre) · 2021-10-18T07:41:47.662Z · LW(p) · GW(p)

[Edit: I want to note that this represents only a fraction of my overall feelings and views on this whole thing.]

I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases.

I feel some annoyance at this sentence. I appreciate the stated goal of just trying to understand what happened in the different situations, without blaming or trying to evaluate which is worse.

But then the post repeatedly (in every section!) makes reference to Zoe's post, comparing her experience at Leverage to your (and others') experience at MIRI/CFAR, taking specific elements from her account and drawing parallels to your own. This is the main structure of the post!

Some more or less randomly chosen examples (ctrl-f "Leverage" or "Zoe" for lots more):

Zoe begins by listing a number of trauma symptoms she experienced.  I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.

...

Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past events secretively.  This matches my experience.

...

Zoe discusses an unofficial NDA people signed as they left, agreeing not to talk badly of the organization.  While I wasn't pressured to sign an NDA, there were significant security policies discussed at the time (including the one about researchers not asking each other about research). 

...

Like Zoe, I experienced myself and others being distanced from old family and friends, who didn't understand how high-impact the work we were doing was.

If the goal is just to clarify what happened and not at all to blame or compare, then why not...just state what happened at MIRI/CFAR without comparing to the Leverage case, at all?

You (Jessica) say, "I will be noting parts of Zoe's post and comparing them to my own experience, which I hope helps to illuminate common patterns; it really helps to have an existing different account to prompt my memory of what happened." But in that case, why not use her post as a starting point for organizing your own thoughts, but then write something about MIRI/CFAR that stands on its own terms?

. . . 

To answer my own question...

My guess is that you adopted this essay structure because you want to argue that the things that happened at Leverage were not a one-off random thing: they were structurally (not just superficially) similar to dynamics at MIRI/CFAR. That is, there is a common cause of the similar symptoms in those two cases.

If so, my impression is that this essay is going too fast, by introducing a bunch of new interpretation-laden data, and fitting that data into a grand theory of similarity between Leverage and MIRI all at once. Just clarifying the facts about what happened is a different (hard) goal than describing the general dynamics underlying those events. I think we'll make more progress if we do the first, well, before moving on to the second.

In effect, because the data is presented as part of some larger theory, I have to do extra cognitive work to evaluate the data on its own terms, instead of slipping into the frame of evaluating whether the larger theory is true or false, or whether my affect towards MIRI should be the same as my affect toward Leverage, or something. It made it harder instead of easier for me to step out of the frame of blame and "who was most bad?".

Replies from: elityre
comment by Eli Tyre (elityre) · 2021-10-18T08:02:57.565Z · LW(p) · GW(p)

This feels especially salient because a number of the specific criticisms, in my opinion, don't hold up to scrutiny, but this is obscured by the comparison to Leverage.

Like for any cultural characteristic X, there will be healthy and unhealthy versions. For instance, there are clearly good healthy versions of "having a culture of self improvement and debugging", and also versions that are harmful. 

For each point, Zoe contends that (at least some parts of) Leverage had a destructive version, and you point out that there was a similar thing at MIRI/CFAR. And for many (but not all) of those points, 1) I agree that there was a similar dynamic at MIRI/CFAR, and also 2) I think that the MIRI/CFAR version was much less harmful than what Zoe describes.

For instance,

Zoe is making the claim that (at least some parts of) Leverage had an unhealthy and destructive culture of debugging. You, Jessica, make the claim that CFAR had a similar culture of debugging, and that this is similarly bad. My current informed impression is that CFAR's self improvement culture both had some toxic elements and is/was also an order of magnitude better than what Zoe describes.

Assuming for a moment that my assessment of CFAR is true (of course, it might not be), your comparing debugging at CFAR to debugging at Leverage is confusing to the group cognition, because the two have been implicitly lumped together [? · GW].

Now, people's estimation of CFAR's debugging culture will rise or fall with their estimation of Leverage's debugging culture. And recognizing this, consciously or unconsciously, people are now incentivized to bias their estimation of one or the other (because they want to defend CFAR, or defend Leverage, or attack CFAR, or attack Leverage).

I'm under this weird pressure, because stating "Anna debugging with me while I worked at CFAR might seem bad, but it was actually mostly innocuous" is kind of awkward: it seems to imply that what happened at Leverage was also not so bad.

And on the flip side, I'll feel more cagey about talking about the toxic elements of CFAR's debugging culture, because in context, that seems to be implying that it was as bad as Zoe's account of Leverage. 

"Debugging culture" is just one example. For many of these points, I think further investigation might show that the thing that happened at one org was meaningfully different from the thing that happened at the other org, in which case, bucketing them together from the getgo seems counterproductive to me. 

Drawing the parallels between MIRI/CFAR and Leverage, point by point, makes it awkward to consider each org's pathologies on its own terms. It makes it seem like if one was bad, then the other was probably bad too, even though it is at least possible that one org had mostly healthy versions of some cultural elements and the other had mostly unhealthy versions of similar elements, or (even more likely) that they each had a different mix of pathologies.

I contend that if the goal is to get clear on the facts, we want to do the opposite thing: we want to, as much as possible, consider the details of the cases independently, attempting to do original seeing [LW · GW], so that we can get a good grasp of what happened in each situation. 

And only after we've clarified what happened might we want to go back and see if there are common dynamics in play.

Replies from: elityre, Vladimir_Nesov
comment by Eli Tyre (elityre) · 2021-10-22T08:20:37.464Z · LW(p) · GW(p)

Ok. After thinking further and talking about it with others, I've changed my mind about the opinion that I expressed in this comment, for two reasons.

1) I think there is some pressure to scapegoat Leverage, by which I mean specifically, "write off Leverage as reprehensible, treat it as 'an org that we all know is bad', and move on, while feeling good about ourselves for not being bad the way that they were".

Pointing out some ways that MIRI or CFAR are similar to Leverage disrupts that process. Anyone who both wants to scapegoat Leverage and also likes MIRI has to contend with some amount of cognitive dissonance. (A person might productively resolve this cognitive dissonance by recognizing what I contend are real disanalogies between the two cases, but they do at least have to contend with it.)

If you mostly want to scapegoat, this is annoying, but I think we should be making it harder, not easier, to scapegoat in this way.

2) My current personal opinion is that the worst things that happened at MIRI or CFAR are not in the same league as what was described as happening in (at least some parts of) Leverage in Zoe's post, both in terms of the deliberateness of the bad dynamics and the magnitude of the harm they caused.

I think that talking about MIRI or CFAR is mostly a distraction from understanding what happened at Leverage, and what things anyone here should do next. However, there are some similarities between Leverage on the one hand and CFAR or MIRI on the other, and Jessica had some data about the latter which might be relevant to people's view about Leverage.

Basically, there's an epistemic process happening in these comments, and on general principles it is better for people to share info that they think is relevant, so that the epistemic process has the option of disregarding it or not.


I do think that Jessica writing this post will predictably have reputational externalities that I don't like and I think are unjustified. 

Broadly, I think that onlookers not paying much attention would have concluded from Zoe's post that Leverage is a cult that should be excluded from polite society, and hearing of both Zoe's and Jessica's post, is likely to conclude that Leverage and MIRI are similarly bad cults.

I think that both of these views are incorrect simplifications. But I think that the second story is less accurate than the first, and so I think it is a cost if Jessica's post promotes the second view. I have some annoyance about that.

However, I think that we mostly shouldn't be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.


I still wish that this post had been written differently in a number of ways (such as emphasizing more strongly that, in Jessica's opinion, management in corporate America is worse than MIRI or Leverage), but I acknowledge that writing such a post is hard.

Replies from: Hazard
comment by Hazard · 2021-10-22T21:06:33.468Z · LW(p) · GW(p)

I'm not sure what writing this comment felt like for you, but from my view it seems like you've noticed a lot of the dynamics about scapegoating and info-suppression fields that Ben and Jessica have blogged about in the past (and occasionally pointed out in the course of these comments, though less clearly). I'm going to highlight a few things.

I do think that Jessica writing this post will predictably have reputational externalities that I don't like and I think are unjustified. 

Broadly, I think that onlookers not paying much attention would have concluded from Zoe's post that Leverage is a cult that should be excluded from polite society, and hearing of both Zoe's and Jessica's post, is likely to conclude that Leverage and MIRI are similarly bad cults.

I totally agree with this. I also think that the degree to which an "onlooker not paying much attention" concludes this is the degree to which they are habituated to engaging with discussions of wrongdoing as scapegoating games. This seems to be very common (though incredibly damaging) behavior. Scapegoating works on the associative/impressionistic logic of "looks", and Jessica's post certainly makes CFAR/MIRI "look" bad. This post can be used as "material" or "fuel" for scapegoating, regardless of Jessica's intent in writing it. Though it can't be used honestly to scapegoat (if there even is such a thing). Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph saying "HEY, DON'T USE THIS TO SCAPEGOAT", and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any relevant sense.

(aside, from both my priors on Jess and my reading of the post it was clear to me that Jess wasn't trying to scapegoat CFAR/MIRI. It also simply isn't in Jess's interests for them to be scapegoated)

Another thought: CFAR/MIRI already "look" crazy to most people who might check them out. UFAI, cryonics, and acausal trade are all things that "look" crazy. And yet we're all able to talk about them on LW without worrying about "how it looks", because many, many conversations, sequences, blog posts, comments, etc. have created a community with different common knowledge about what will result in people ganging up on you.

Something that we as a community don't talk a lot about is power structures, coercion, emotional abuse, manipulation, etc. We don't collectively build and share models on their mechanics and structure. As such, I think it's expected that when "things get real" people abandon commitment to the truth in favor of "oh shit, there's an actual conflict, I or others could be scapegoated, I am not safe, I need to protect my people from being scapegoated at all cost".

However, I think that we mostly shouldn't be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.

I totally agree, and I think if you explore this sense you already sorta see how commitment to making sure things "look okay" quickly  becomes a commitment to suppress information about what happened.

(aside, these are some of Ben's post that have been most useful to me for understanding some of this stuff)

Blame Games

Can Crimes Be Discussed Literally?

Judgement, Punishment, and Information-Suppression Fields

Replies from: jessica.liu.taylor, elityre
comment by jessicata (jessica.liu.taylor) · 2021-10-22T21:25:02.070Z · LW(p) · GW(p)

I appreciate this comment, especially that you noticed the giant upfront paragraph that's relevant to the discussion :)

One note on reputational risk: I think I took reasonable efforts to reduce it, by emailing a draft to people including Anna Salamon beforehand. Anna Salamon added Matt Graves (Vaniver) to the thread, and they both said they'd be happy with me posting after editing (Matt Graves had a couple specific criticisms of the post). I only posted this on LW, not on my blog or Medium. I didn't promote it on Twitter except to retweet someone who was already tweeting about it. I don't think such reputation risk reduction on my part was morally obligatory (it would be really problematic to require people complaining about X organization to get approval from someone working at that organization), just possibly helpful anyway.

Spending more than this amount of effort managing reputation risks would seriously risk important information not getting published at all, and too little of that info being published would doom the overall ambitious world-saving project by denying it relevant knowledge about itself. I'm not saying I acted optimally, just, I don't see the people complaining about this making a better tradeoff in their own actions or advising specific policies that would improve the tradeoff.

comment by Eli Tyre (elityre) · 2021-10-23T09:48:00.458Z · LW(p) · GW(p)

Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about "HEY, DON'T USE THIS TO SCAPEGOAT"

I think that's literally true, but the way you wrote this sentence implies that that is unusual or uncommon.

I think that's backwards. If a person was intentionally and deliberately motivated to scapegoat some other person or group, it is an effective rhetorical move to say "I'm not trying to punish them, I just want to talk freely about some harms."

By pretending that you're not attacking the target, you protect yourself somewhat from counterattack. Now you can cause reputational damage, and if people try to punish you for doing that, you can retreat to the motte of "but I was just trying to talk about what's going on. I specifically said not to punish anyone!"

and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.

This also seems too strong to me. I expect that many movement EAs will read Zoe's post and think "well, that's enough information for me to never have anything to do with Geoff or Leverage." This isn't because they're not interested in justice, it's because they don't have the time or the interest to investigate every allegation, so they're using some rough heuristics and policies such as "if something looks sufficiently like a dangerous cult, don't even bother giving it the benefit of the doubt."

Replies from: Hazard
comment by Hazard · 2021-10-23T16:12:03.549Z · LW(p) · GW(p)

When I was drafting my comment, the original version of the text you first quoted was, "Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about 'HEY DON'T USE THIS TO SCAPEGOAT' (which people are totally capable of ignoring)", guess I should have left that in there. I don't think it's uncommon to ignore such disclaimers, I do think it actively opposes behaviors and discourse norms I wish to see in the world.

I agree that putting a "I'm not trying to blame anyone" disclaimer can be a pragmatic rhetorical move for someone attempting to scapegoat. There's an alternate timeline version of Jessica that wrote this post as a well-crafted, well-defended rhetorical attack, where the literal statements in the post all clearly say "don't fucking scapegoat anyone, you fools" but all the associative and impressionistic "dark implications" (Vaniver's language) say "scapegoat CFAR/MIRI!" I want to draw your attention to the fact that for a potential dark implication to do anything, you need people who can pick up that signal. For it to be an effective rhetorical move, you need a critical mass of people who are well practiced in ignoring literal speech, who understand on some level that the details don't matter, and are listening in for "who should we blame?"

To be clear, I think there is such a critical mass! I think this is very unfortunate! (though not awkward, as Scott put it) There was a solid 2+ days where Scott and Vaniver's insistence on this being a game of "Scapegoat Vassar vs scapegoat CFAR/MIRI" totally sucked me in, and instead of reading the contents of anyone's comments I was just like "shit, whose side do I join? How bad would it be if people knew I hung out with Vassar once? I mean I really loved my time at CFAR, but I'm also friends with Ben and Jess. Fuck, but I also think Eli is a cool guy! Shit!" That mode of thinking I engaged in is a mode that can't really get me what I want, which is larger and larger groups of people that understand scapegoating dynamics and related phenomena.

This also seems too strong to me. I expect that many movement EAs will read Zoe's post and think "well, that's enough information for me to never have anything to do with Geoff or Leverage." This isn't because they're not interested in justice, it's because they don't have the time or the interest to investigate every allegation, so they're using some rough heuristics and policies such as "if something looks sufficiently like a dangerous cult, don't even bother giving it the benefit of the doubt."

Okay, I think my statement was vague enough to be mistaken for a statement I think is too strong. Though I expect you might consider my clarification too strong as well :)

I was thinking about the "in any way that matters" part. I can see how that implies a sort of disregard for justice that spans across time. Or more specifically, I can see how you would think it implies that certain conversations you've had with EA friends were impossible, or that they were lying/confabulating the whole convo, and you don't think that's true. I don't think that's the case either. I'm thinking about it as more piece-wise behavior. One will sincerely care about justice, but in that moment where they read Jess's post, ignore the giant disclaimer about scapegoating, and try to scapegoat MIRI/CFAR/Leverage, in that particular moment the cognitive processes generating their actions aren't aligned with justice, and are working against it. Almost like an "anti-justice traumatic flashback", but most of the time it's much more low-key and less intense than what you will read about in the literature on flashbacks. Malcolm Ocean does a great job of describing this sort of "falling into a dream" in his post Dream Mashups (his post is not about scapegoating, it's about ending up running a cognitive algo that hurts you without noticing).

To be clear, I'm not saying such behavior is contemptible, blameworthy, bad, or to-be-scapegoated. I am saying it's very damaging, and I want more people to understand how it works. I want to understand how it works more. I would love to not get sucked into as many anti-justice dreams where I actively work against creating the sort of world I want to live in.

So when I said "not aligned with justice in any important relevant way", that was more a statement about "how often and when will people fall into these dreams?" Sorta like the concept of a "fair weather friend", my current hunch is that people fall into scapegoating behavior exactly when it would be most helpful for them not to. Reading a post about "here's some problems I see in this institution that is at the core of our community" is exactly when it is most important for one's general atemporal commitment to justice to be present in one's actual thoughts and actions.

comment by Vladimir_Nesov · 2021-10-18T11:42:20.115Z · LW(p) · GW(p)

This works as a general warning against awareness of hypotheses that are close to but distinct from the prevailing belief. The goal should be to make this feasible, not to become proficient in noticing the warning signs and keeping away from this.

I think the feeling that this kind of argument is fair is a kind of motivated cognition that's motivated by credence. That is, if a cognitive move (argument, narrative, hypothesis) puts forward something false, there is a temptation to decry it for reasons that would prove too much, that would apply to good cognitive moves just as well if considered in their context, which credence-motivated cognition won't be doing.

comment by Vanessa Kosoy (vanessa-kosoy) · 2021-10-17T10:08:05.629Z · LW(p) · GW(p)

Full disclosure: I am a MIRI Research Associate. This means that I receive funding from MIRI, but I am not a MIRI employee and I am not privy to its internal operation or secrets.

First of all, I am really sorry you had these horrible experiences.

A few thoughts:

Thought 1: I am not convinced the analogy between Leverage and MIRI/CFAR holds up to scrutiny. I think that Geoff Anders is most likely a bad actor, whereas MIRI/CFAR leadership is probably acting in good faith. There seems to be significantly more evidence of bad faith in Zoe's account than in Jessica's account, and the conclusion is reinforced by adding evidence from other accounts. In addition, MIRI definitely produced some valuable public research whereas I doubt the same can be said of Leverage, although I haven't been following Leverage so I am not confident about the latter (ofc it's in principle possible for a deeply unhealthy organization to produce some good outputs, and good outputs certainly don't excuse abuse of personnel, but I do think good outputs provide some evidence against such abuse).

It is important not to commit the fallacy of gray [LW · GW]: it would risk both judging MIRI/CFAR too harshly and judging Leverage insufficiently harshly. The comparison Jessica makes to "normal corporations" reinforces this impression: I have much experience in the industry, and although it's possible I've been lucky in some ways, I still very much doubt the typical company is nearly as bad as Leverage.

Thought 2: From my experience, AI alignment is a domain of research that intrinsically comes with mental health hazards. First, the possibility of impending doom and the heavy sense of responsibility are sources of stress. Second, research inquiries often enough lead to "weird" metaphysical questions that risk overturning the (justified or unjustified) assumptions we implicitly hold to maintain a sense of safety in life. I think it might be the closest thing in real life to the Lovecraftian notion of "things that are best not to know because they will drive you mad". Third, the sort of people drawn to the area and/or having the necessary talents seem to often also come with mental health issues (I am including myself in this group).

This might be regarded as an argument to blame MIRI less for the mental health fallout described by Jessica, but this is also an argument to pay more attention to the problem. It would be best if we could provide the people working in the area with the tools and environment to deal with these risks.

Thought 3: The part that concerned me the most in Jessica's account (in part due to its novelty to me) is MIRI's internal secrecy policy. While it might be justifiable to have some secrets to which only some employees are privy, it seems very extreme to require going through an executive because even the mere fact that a secret project exists is too dangerous. MIRI's secrecy policy seemed questionable to me even before, but this new spin makes it even more dubious.

Overall, I wish MIRI was more transparent, so that for example its supporters would know about this internal policy. I realize there are tradeoffs involved, but I am not convinced MIRI chose the right balance. To me it feels like overconfidence about MIRI's ability to steer the right way without the help of external critique.

Moreover, I'm a little worried that MIRI's lack of transparency might pose a risk for the entire AI safety project. Tbh, one of my first thoughts when I saw the headline of the OP was "oh no, what if some scandal around MIRI blows up and the shockwave buries the entire community". And I guess some people might think this is a reason for more secrecy. IMO it's a reason for less secrecy (not necessarily less secrecy about technical AI stuff, but less secrecy about management and high-level plans). If we don't have any skeletons in the closet, we don't need to worry about the day they will come out. And eventually everything comes out, more or less. When most of everything is in the open, the community can find the right balance around it, and the reputation system is much more robust.

Thought 4: "Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness." I think (hope?) this is not at all a prevalent stance in the community (or at least in its leading echelons), but just for the record I want to note my strong position that the "someone" in this story is very misguided. Like I said, I don't think the community is currently comparable to Leverage, but this is the sort of thing that can push us in that direction.

Replies from: Dojan, ChristianKl, jessica.liu.taylor
comment by Dojan · 2021-10-17T14:37:40.041Z · LW(p) · GW(p)

Plus a million points for "IMO it's a reason for less secrecy"!

If you put a lid on something you might contain it in the short term, but only at the cost of increasing the pressure: And pressure wants out, and the higher the pressure the more explosive it will be when it inevitably does come out. 

I have heard too many accounts like this, in person and anecdotally, on the web and off, to currently be interested in working with, or even getting too closely involved with, any of the organizations in question. The only way to change this for me is for them to believably cultivate a healthy, transparent and supportive environment.

This made me go back and read "Every Cause wants to be a Cult" (Eliezer, 2007) [LW · GW], which includes quotes like this one:
"Here I just want to point out that the worthiness of the Cause does not mean you can spend any less effort in resisting the cult attractor. And that if you can point to current battle lines, it does not mean you confess your Noble Cause unworthy. You might think that if the question were, “Cultish, yes or no?” that you were obliged to answer, “No,” or else betray your beloved Cause."

comment by ChristianKl · 2021-10-17T17:15:04.753Z · LW(p) · GW(p)

Thought 2: From my experience, AI alignment is a domain of research that intrinsically comes with mental health hazards. First, the possibility of impending doom and the heavy sense of responsibility are sources of stress. Second, research inquiries often enough lead to "weird" metaphysical questions that risk overturning the (justified or unjustified) assumptions we implicitly hold to maintain a sense of safety in life. I think it might be the closest thing in real life to the Lovecraftian notion of "things that are best not to know because they will drive you mad". Third, the sort of people drawn to the area and/or having the necessary talents seem to often also come with mental health issues (I am including myself in this group).

That sounds like MIRI should have a counsellor on its staff.

Replies from: crabman
comment by philip_b (crabman) · 2021-10-17T17:59:28.035Z · LW(p) · GW(p)

That would make them more vulnerable to claims that they use organizational mind control on their employees, and at the same time make it more likely that they would actually use it.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T08:39:28.210Z · LW(p) · GW(p)

You would likely hire someone who's traditionally trained, credentialed and has work experience, instead of doing a bunch of your own psych-experiments, likely in a tradition like gestalt therapy that focuses on being nonmanipulative.

Replies from: benjamin-j-campbell
comment by benjamin.j.campbell (benjamin-j-campbell) · 2021-10-18T14:31:36.133Z · LW(p) · GW(p)

There's an easier solution that doesn't run the risk of being or appearing manipulative. You can contract external, independent counsellors and make them available to your staff anonymously. I don't know if there's anything comparable in the US, but in Australia they're referred to as Employee Assistance Programs (EAPs). Nothing you discuss with the counsellor can be disclosed to your workplace, although in rare circumstances there may be mandatory reporting to the police (e.g. if abuse of, or ongoing risk to, a minor is involved).

This also goes a long way toward creating a place where employees can talk about things they're worried will seem crazy in work contexts.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-18T15:08:51.954Z · LW(p) · GW(p)

Solutions like that might work, but it's worth noting that just having an average therapist likely won't be enough.

If you actually care about a level of security that protects secrets against intelligence agencies, operational security of the office of the therapist is a concern. 

Governments that have security clearances don't want their employees to talk about classified information with therapists who don't have the security clearances.

Talking nonjudgmentally with someone who has reasonable fears that humanity won't survive the next ten years because of fast AI timelines is not easy.

comment by jessicata (jessica.liu.taylor) · 2021-10-17T21:19:26.474Z · LW(p) · GW(p)

As far as I can tell, normal corporate management is much worse than Leverage. The kind of people from that world will, sometimes when prompted in private conversations, say things like:

  • Standard practice is to treat negotiations with other parties as zero-sum games.
  • "If you look around the table and can't tell who the sucker is, it's you" is a description of a common, relevant social dynamic in corporate meetings.
  • They have PTSD symptoms from working in corporate management, and are very threat-sensitive in general.
  • They learned from experience to treat social reality in general as fake, everything as an act.
  • They learned to accept that "there's no such thing as not being lost", like they've lost the ability to self-locate in a global map (I've experienced losing this to a significant extent).
  • Successful organizations get to be where they are by committing crimes, so copying standard practices from them is copying practices for committing crimes.

This is, to a large extent, them admitting to being bad actors, them and others having been made so by their social context. (This puts the possibility of "Geoff Anders being a bad actor" into perspective)

MIRI is, despite the problems noted in the post, as far as I can tell the most high-integrity organization doing AI safety research. FHI contributes some, but overall lower-quality research; Paul Christiano does some relevant research; OpenAI's original mission was actively harmful, and hasn't done much relevant safety research as far as I can tell. MIRI's public output in the past few years since I left has been low, which seems like bad sign for its future performance, but what it's done so far has been quite a large portion of the relevant research. I'm not particularly worried about scandals sinking the overall non-MIRI AI safety world's reputation, given the degree to which it is of mixed value.

Replies from: nostalgebraist, vanessa-kosoy
comment by nostalgebraist · 2021-10-18T00:11:33.554Z · LW(p) · GW(p)

As far as I can tell, normal corporate management is much worse than Leverage

Your original post drew a comparison between MIRI and Leverage, the latter of which has just been singled out for intense criticism.

If I take the quoted sentence literally, you're saying that "MIRI was like Leverage" is a gentler critique than "MIRI is like your regular job"?

If the intended message was "my job was bad, although less bad than the jobs of many people reading this, and instead only about as bad as Leverage Research," why release this criticism on the heels of a post condemning Leverage as an abusive cult?  If you believe the normally-employed among LessWrong readers are being abused by sub-Leverage hellcults, all the time, that seems like quite the buried lede!

Sorry for the intense tone, it's just ... this sentence, if taken seriously, reframes the entire post for me in a big, weird, bad way.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T00:49:08.984Z · LW(p) · GW(p)

I thought I was pretty clear, at the end of the post, that I wasn't sad that I worked at MIRI instead of Google or academia. I'm glad I left when I did, though.

The conversations I'm mentioning with corporate management types were surprising to me, as were the contents of Moral Mazes and Venkatesh Rao's writing. So "like a regular job" doesn't really communicate the magnitude of the harms to someone who doesn't know how bad normal corporate management is. It's hard for me to have strong opinions given that I haven't worked in corporate management, though. Maybe a lot of places are pretty okay.

I've talked a lot with someone who got pretty high in Google's management hierarchy, who seems really traumatized (and says she is) and who has a lot of physiological problems, which seem overall worse than mine. I wouldn't trade places with her, mental health-wise.

MIRI wouldn't make sense as a project if most regular jobs were fine; people who were really ok wouldn't have reason to build unfriendly AI. I discussed with some friends the benefits of working at Leverage vs. MIRI vs. the US Marines, and we agreed that Leverage and MIRI were probably overall less problematic, but the fact that the US Marines signal that they're going to dominate/abuse people is an important advantage relative to the alternatives, since it sets expectations more realistically.

Replies from: elityre, Vaniver, T3t
comment by Eli Tyre (elityre) · 2021-10-18T02:33:32.604Z · LW(p) · GW(p)

MIRI wouldn't make sense as a project if most regular jobs were fine, people who were really ok wouldn't have reason to build unfriendly AI.

I just want to note that this is a contentious claim. 

There is a competing story, and one much more commonly held among people who work for or support MIRI, that the world is heading towards an unaligned intelligence explosion due to the combination of a coordination problem and very normal motivated reasoning about the danger posed by lucrative and prestigious projects.

One could make the claim "healthy" people (whatever that means) wouldn't exhibit those behaviors, ie that they would be able to coordinate and avoid rationalizing. But that's a non-standard view. 

I would prefer that you specifically flag it as a non-standard view, and then either make the argument for that view over the more common one, or highlight that you're not going into detail on the argument and that you don't expect others to accept the claim.

As it is, it feels a little like this is being slipped in as if it is a commonly accepted premise.  

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-18T02:41:43.993Z · LW(p) · GW(p)

I agree this is a non-standard view.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2021-10-18T10:33:29.964Z · LW(p) · GW(p)

Yes, I would! Any pointers? 
(to avoid miscommunication, I'm reading this to say that people are more likely to build UFAI because of traumatizing environments vs. the normal reasons Eli mentioned)

comment by Vaniver · 2021-10-18T01:12:13.682Z · LW(p) · GW(p)

Note that there's an important distinction between "corporate management" and "corporate employment"--the thing where you say "yeesh, I'm glad I'm not a manager at Google" is substantially different from the thing where you say "yeesh, I'm glad I'm not a programmer at Google", and the audience here has many more programmers than managers.

[And also Vanessa's experience [LW(p) · GW(p)] matches my impressions, tho I've spent less time in industry.]

[EDIT: I also thought it was clear that you meant this more as a "this is what MIRI was like" than "MIRI was unusually bad", but I also think this means you're open to nostalgebraist's objection, that you're ordering things pretty differently from how people might naively order them.]

Replies from: iceman
comment by iceman · 2021-10-18T13:45:49.552Z · LW(p) · GW(p)

My experience was that if you were T-5 (Senior), you had some overlap with PM and management games, and at T-6 (Staff), you were often in them. I could not handle the politics to get to T-7. Programmers below T-5 are expected to earn promotions or to leave.

Google's a big company, so it might have been different elsewhere internally. My time at Google certainly traumatized me, but probably not to the point of anything in this or the Leverage thread.

Replies from: jkaufman