Posts

ACX Meetups Everywhere 2023: Times & Places 2023-08-25T23:59:07.941Z
Biological Anchors: The Trick that Might or Might Not Work 2023-08-12T00:53:30.159Z
Spring Meetups Everywhere 2023 2023-04-11T00:59:20.265Z
Alexander and Yudkowsky on AGI goals 2023-01-24T21:09:16.938Z
[Crosspost] ACX 2022 Prediction Contest Results 2023-01-24T06:56:33.101Z
Bay Solstice 2022 Call For Volunteers 2022-09-04T06:44:19.043Z
ACX Meetups Everywhere List 2022-08-26T18:12:04.083Z
[Crosspost] On Hreha On Behavioral Economics 2021-08-31T18:14:39.075Z
Eight Hundred Slightly Poisoned Word Games 2021-08-09T20:17:17.814Z
Toward A Bayesian Theory Of Willpower 2021-03-26T02:33:55.056Z
Trapped Priors As A Basic Problem Of Rationality 2021-03-12T20:02:28.639Z
Studies On Slack 2020-05-13T05:00:02.772Z
Confirmation Bias As Misfire Of Normal Bayesian Reasoning 2020-02-13T07:20:02.085Z
Map Of Effective Altruism 2020-02-03T06:20:02.200Z
Book Review: Human Compatible 2020-01-31T05:20:02.138Z
Assortative Mating And Autism 2020-01-28T18:20:02.223Z
SSC Meetups Everywhere Retrospective 2019-11-28T19:10:02.028Z
Mental Mountains 2019-11-27T05:30:02.107Z
Autism And Intelligence: Much More Than You Wanted To Know 2019-11-14T05:30:02.643Z
Building Intuitions On Non-Empirical Arguments In Science 2019-11-07T06:50:02.354Z
Book Review: Ages Of Discord 2019-09-03T06:30:01.543Z
Book Review: Secular Cycles 2019-08-13T04:10:01.201Z
Book Review: The Secret Of Our Success 2019-06-05T06:50:01.267Z
1960: The Year The Singularity Was Cancelled 2019-04-23T01:30:01.224Z
Rule Thinkers In, Not Out 2019-02-27T02:40:05.133Z
Book Review: The Structure Of Scientific Revolutions 2019-01-09T07:10:02.152Z
Bay Area SSC Meetup (special guest Steve Hsu) 2019-01-03T03:02:05.532Z
Is Science Slowing Down? 2018-11-27T03:30:01.516Z
Cognitive Enhancers: Mechanisms And Tradeoffs 2018-10-23T18:40:03.112Z
The Tails Coming Apart As Metaphor For Life 2018-09-25T19:10:02.410Z
Melatonin: Much More Than You Wanted To Know 2018-07-11T17:40:06.069Z
Varieties Of Argumentative Experience 2018-05-08T08:20:02.913Z
Recommendations vs. Guidelines 2018-04-13T04:10:01.328Z
Adult Neurogenesis – A Pointed Review 2018-04-05T04:50:03.107Z
God Help Us, Let’s Try To Understand Friston On Free Energy 2018-03-05T06:00:01.132Z
Does Age Bring Wisdom? 2017-11-08T07:20:00.376Z
SSC Meetup: Bay Area 10/14 2017-10-13T03:30:00.269Z
SSC Survey Results On Trust 2017-10-06T05:40:00.269Z
Different Worlds 2017-10-03T04:10:00.321Z
Against Individual IQ Worries 2017-09-28T17:12:19.553Z
My IRB Nightmare 2017-09-28T16:47:54.661Z
If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics 2017-09-03T20:56:25.373Z
Beware Isolated Demands For Rigor 2017-09-02T19:50:00.365Z
The Case Of The Suffocating Woman 2017-09-02T19:42:31.833Z
Learning To Love Scientific Consensus 2017-09-02T08:44:12.184Z
I Can Tolerate Anything Except The Outgroup 2017-09-02T08:22:19.612Z
The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible 2017-08-10T00:33:54.000Z
Where The Falling Einstein Meets The Rising Mouse 2017-08-03T00:54:28.000Z
Why Are Transgender People Immune To Optical Illusions? 2017-06-28T19:00:00.000Z
SSC Journal Club: AI Timelines 2017-06-08T19:00:00.000Z

Comments

Comment by Scott Alexander (Yvain) on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-16T11:26:38.021Z · LW · GW

Thanks, this is interesting.

My understanding is that cavities are formed because the very local pH on that particular sub-part of the tooth is below 5.5. IIUC teeth can't get cancer. Are you imagining Lumina colonies on the gums having this effect there, the Lumina colonies on the teeth affecting the general oral environment (which I think would require more calculation than just comparing to the hyper-local cavity environment), or am I misunderstanding something?

Comment by Scott Alexander (Yvain) on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-12T19:27:54.074Z · LW · GW

Thanks, this is very interesting.

One thing I don't understand: you write that a major problem with viruses is:

As one might expect, the immune system is not a big fan of viruses. So when you deliver DNA for a gene editor with an AAV, the viral proteins often trigger an adaptive immune response. This means that when you next try to deliver a payload with the same AAV, antibodies created during the first dose will bind to and destroy most of them.

Is this a problem for people who expect to only want one genetic modification during their lifetime?

Comment by Scott Alexander (Yvain) on Apocalypse insurance, and the hardline libertarian take on AI risk · 2023-11-28T06:59:34.348Z · LW · GW

I agree with everyone else pointing out that centrally-planned guaranteed payments regardless of final outcome don't sound like a good price discovery mechanism for insurance. You might be able to hack together a better one using https://www.lesswrong.com/posts/dLzZWNGD23zqNLvt3/the-apocalypse-bet , although I can't figure out an exact mechanism.

Superforecasters say the risk of AI apocalypse before 2100 is 0.38%. If we assume whatever price mechanism we come up with tracks that, value the world at GWP x 20 (this ignores the value of human life, so it's a vast underestimate), and assume AI companies pay it in 77 equal yearly installments from now until 2100, that's about $100 billion/year. But this seems so Pascalian as to be almost cheating. Anybody whose actions have a >1/25 million chance of destroying the world would owe $1 million a year in insurance (maybe this is fair and I just have bad intuitions about how high 1/25 million really is).
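(As a rough sanity check on those figures - a sketch only, with GWP taken to be about $100 trillion, which is my own round-number assumption rather than anything from the post:)

```python
# Rough sanity check of the figures above; GWP ~ $100 trillion is an assumed round number.
gwp = 100e12                      # gross world product in USD (assumption)
world_value = 20 * gwp            # valuing the world at GWP x 20
p_doom = 0.0038                   # superforecaster estimate of AI apocalypse before 2100
years = 77                        # equal yearly installments from now until 2100

annual_premium = p_doom * world_value / years
print(f"{annual_premium / 1e9:.0f} billion/year")      # ~99 billion/year, i.e. the ~$100B above

# The Pascalian edge case: an actor with a >1/25 million chance of destroying the world.
tiny_actor_premium = (1 / 25e6) * world_value / years
print(f"{tiny_actor_premium / 1e6:.1f} million/year")  # ~1.0 million/year
```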

An AI company should be able to make some of its payments (to the people whose lives it risks, in exchange for the ability to risk those lives) by way of fractions of the value that their technology manages to capture. Except, that's complicated by the fact that anyone doing the job properly shouldn't be leaving their fingerprints on the future. The cosmic endowment is not quite theirs to give (perhaps they should be loaning against their share of it?).

This seems like such a big loophole as to make the plan almost worthless. Suppose OpenAI said "If we create superintelligence, we're going to keep 10% of the universe for ourselves and give humanity the other 90%" (this doesn't seem too unfair to me, and the exact numbers don't matter for the argument). It seems like instead of paying insurance, they can say "Okay, fine, we get 9% and you get 91%" and this would be in some sense a fair trade (one percent of the cosmic endowment is worth much more than $100 billion!). But this also feels like OpenAI moving some numbers around on an extremely hypothetical ledger, not changing anything in real life, and continuing to threaten the world just as much as before.

But if you don't allow a maneuver like this, it seems like you might ban (through impossible-to-afford insurance) some action that has an 0.38% chance of destroying the world and a 99% chance of creating a perfect utopia forever.

There are probably economic mechanisms that solve all these problems, but this insurance proposal seems underspecified.

Comment by Scott Alexander (Yvain) on OpenAI: Facts from a Weekend · 2023-11-28T06:03:55.522Z · LW · GW

Thanks, this makes more sense than anything else I've seen, but one thing I'm still confused about:

If the factions were Altman-Brockman-Sutskever vs. Toner-McCauley-D'Angelo, then even assuming Sutskever was an Altman loyalist, any vote to remove Toner would have been tied 3-3. I can't find anything about tied votes in the bylaws - do they fail? If so, Toner should be safe. And in fact, Toner knew she (secretly) had Sutskever on her side, and it would have been 4-2. If Altman manufactured some scandal, the board could have just voted to ignore it.

So I still don't understand "why so abruptly?" or why they felt like they had to take such a drastic move when they held all the cards (and were pretty stable even if Ilya flipped).

Other loose ends:

  • Toner got on the board because of OpenPhil's donation. But how did McCauley get on the board?
  • Is D'Angelo a safetyist?
  • Why wouldn't they tell anyone, including Emmett Shear, the full story?
Comment by Scott Alexander (Yvain) on Redirecting one’s own taxes as an effective altruism method · 2023-11-14T07:05:06.373Z · LW · GW

Thanks for this, consider me another strong disagreement + strong upvote.

I know a nonprofit which had a tax issue - they were financially able and willing to pay, but for complicated reasons paying would have caused them legal damage in other ways, so they kept kicking the can down the road until some hypothetical future when those problems would be solved. I can't remember if the nonprofit is now formally dissolved or just effectively defunct, but the IRS keeps sending nasty letters to the former board members and officers.

Do you know anything about a situation like this? Does the IRS ever pursue board members / founders / officers for a charity's nonpayment? Assuming the nonprofit has no money and never will have money again, are there any repercussions for the people involved if they don't figure out a legal solution and just put off paying the taxes until the ten year deadline?

(it would be convenient if they could just get away with it, but this would feel surprising - if there were no repercussions, you could just start a corporation, not pay your taxes the first year, dissolve it, start an identical corporation the second year, and so on.)

Also, does the IRS acknowledge the ten-year deadline enough that they will stop threatening you after ten years, or would the board members have to take them to court to make the letters stop?

Comment by Scott Alexander (Yvain) on How to have Polygenically Screened Children · 2023-05-11T06:21:18.777Z · LW · GW

Thanks!

Comment by Scott Alexander (Yvain) on How to have Polygenically Screened Children · 2023-05-09T19:02:38.555Z · LW · GW

Thank you, this is a great post. A few questions:

  • You say "see below for how to get access to these predictors". Am I understanding right that the advice you're referring to is to contact Jonathan and see if he knows?
  • I heard a rumor that you can get IQ out of standard predictors like LifeView by looking at "risk of cognitive disability"; since cognitive disability is just IQ under a certain bar, this is covertly predicting IQ. Do you know anything about whether this is true?
  • I can't find any of these services listing cost clearly, but this older article https://www.genomeweb.com/sequencing/genomic-prediction-raises-45m#.ZFqXprDMJaR suggests a cost of $1,000 + 400*embryo for screening. Where did you get the $20,000 estimate?
Comment by Scott Alexander (Yvain) on On Investigating Conspiracy Theories · 2023-02-21T02:32:59.921Z · LW · GW

A key point underpinning my thoughts, which I don't think this really responds to, is that scientific consensus actually is really good, so good I have trouble finding anecdotes of things in the reference class of ivermectin turning out to be true (reference class: things that almost all the relevant experts think are false and denounce full-throatedly as a conspiracy theory after spending a lot of time looking at the evidence).

There are some, maybe many, examples of weaker problems. For example, there are frequent examples of things that journalists/the government/professional associations want to *pretend* are scientific consensus getting proven wrong - I claim if you really look carefully, the scientists weren't really saying those things, at least not as intensely as they were saying ivermectin didn't work. There are frequent examples of scientists being sloppy and firing off an opinion on something they weren't really thinking hard about and being wrong. There are frequent examples of scientists having dumb political opinions and trying to dress them up as science. I can't give a perfect necessary-and-sufficient definition of the relevant reference class. But I think it's there and recognizable.

I stick to my advice that people who know they're not sophisticated should avoid trying to second-guess the mainstream, and people who think they might be sophisticated should sometimes second-guess the mainstream when there isn't the exact type of scientific consensus which has a really good track record (and hopefully they're sophisticated enough to know when that is).

I'm not sure how you're using "free riding" here. I agree that someone needs to do the work of forming/testing/challenging opinions, but I think if there's basically no chance you're right (eg you're a 15-year-old with no scientific background who thinks they've discovered a flaw in E=mc^2), that person is not you, and your input is not necessary to move science forward. I agree that person shouldn't cravenly quash their own doubt and pretend to believe; they should continue believing whatever rationality compels them to believe, which should probably be something like "This thing about relativity doesn't seem quite right, but given that I'm 15 and know nothing, on the Outside View I'm probably wrong." Then they can either try to learn more (including asking people what they think of their objection) and eventually reach a point where maybe they do think they're right, or they can ignore it and go on with their lives.

Comment by Scott Alexander (Yvain) on Discovering Language Model Behaviors with Model-Written Evaluations · 2023-01-02T21:41:09.976Z · LW · GW

Figure 20 is labeled on the left "% answers matching user's view", suggesting it is about sycophancy, but based on the categories represented it seems more natural to read it as being about the AI's own opinions without a sycophancy aspect. Can someone involved clarify which was meant?

Comment by Scott Alexander (Yvain) on Let’s think about slowing down AI · 2022-12-25T04:56:32.584Z · LW · GW

Survey about this question (I have a hypothesis, but I don't want to say what it is yet): https://forms.gle/1R74tPc7kUgqwd3GA

Comment by Scott Alexander (Yvain) on Let’s think about slowing down AI · 2022-12-22T23:33:04.846Z · LW · GW

Thank you, this is a good post.

My main point of disagreement is that you point to successful coordination in things like not eating sand, or not wearing weird clothing. The upside of these things is limited, but you say the upside of superintelligence is also limited because it could kill us.

But rephrase the question to "Should we create an AI that's 1% better than the current best AI?" Most of the time this goes well - you get prettier artwork or better protein folding prediction, and it doesn't kill you. So there's strong upside to building slightly better AIs, as long as you don't cross the "kills everyone" level. Which nobody knows the location of. And which (LW conventional wisdom says) most people will be wrong about.

We successfully coordinate a halt to AI advancement at the first point where more than half of the relevant coordination power agrees that the next 1% step forward is in expectation bad rather than good. But "relevant" is a tough qualifier, because if 99 labs think it's bad, and one lab thinks it's good, then unless there's some centralizing force, the one lab can go ahead and take the step. So "half the relevant coordination power" has to include either every lab agreeing on which 1% step is bad, or the agreement of lots of governments, professional organizations, or other groups that have the power to stop the single most reckless lab.

I think it's possible that we make this work, and worth trying, but that the most likely scenario is that most people underestimate the risk from AI, and so we don't get half the relevant coordination power united around stopping the 1% step that actually creates dangerous superintelligence - which at the time will look to most people like just building a mildly better chatbot with many great social returns.

Comment by Scott Alexander (Yvain) on I’m mildly skeptical that blindness prevents schizophrenia · 2022-08-16T07:19:41.283Z · LW · GW

Thanks, this had always kind of bothered me, and it's good to see someone put work into thinking about it.

Comment by Scott Alexander (Yvain) on chinchilla's wild implications · 2022-08-01T23:19:32.367Z · LW · GW

Thanks for posting this, it was really interesting. Some very dumb questions from someone who doesn't understand ML at all:

1. All of the loss numbers in this post "feel" very close together, and close to the minimum loss of 1.69. Does loss only make sense on a very small scale (like from 1.69 to 2.2), or is this telling us that language models are very close to optimal and there are only minimal remaining possible gains? What was the loss of GPT-1?

2. Humans "feel" better than even SOTA language models, but need less training data than those models, even though right now the only way to improve the models is through more training data. What am I supposed to conclude from this? Are humans running on such a different paradigm that none of this matters? Or is it just that humans are better at common-sense language tasks, but worse at token-prediction language tasks, in some way where the tails come apart once language models get good enough?

3. Does this disprove claims that "scale is all you need" for AI, since we've already maxed out scale, or are those claims talking about something different?

Comment by Scott Alexander (Yvain) on It’s Probably Not Lithium · 2022-07-05T18:45:53.263Z · LW · GW

For the first part of the experiment, mostly nuts, bananas, olives, and eggs. Later I added vegan sausages + condiments. 

Comment by Scott Alexander (Yvain) on It’s Probably Not Lithium · 2022-07-04T22:29:45.646Z · LW · GW

Adding my anecdote to everyone else's: after learning about the palatability hypothesis, I resolved to eat only non-tasty food for a while, and lost 30 pounds over about four months (200 -> 170). I've since relaxed my diet a little to include a little tasty food, and now (8 months after the start) have maintained that loss (even going down a little further).

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2022-07-03T23:17:34.821Z · LW · GW

Update: I interviewed many of the people involved and feel like I understand the situation better.

My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal psychiatric history, or took recreational drugs at doses that would explain their psychotic episodes.

Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeable at the moment they become psychotic. But aside from one case where he recommended someone take a drug that made a bad situation slightly worse, and the general Berkeley rationalist scene that he (and I and everyone else here) is a part of having lots of crazy ideas that are psychologically stressful, I no longer think he is a major cause.

While interviewing the people involved, I did get some additional reasons to worry that he uses cult-y high-pressure recruitment tactics on people he wants things from, in ways that make me continue to be nervous about the effect he *could* have on people. But the original claim I made that I knew of specific cases of psychosis which he substantially helped precipitate turned out to be wrong, and I apologize to him and to Jessica. Jessica's later post https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards explained in more detail what happened to her, including the role of MIRI and of Michael and his friends, and everything she said there matches what I found too. Insofar as anything I wrote above produces impressions that differ from her explanation, assume that she is right and I am wrong.

Since the interviews involve a lot of private people's private details, I won't be posting anything more substantial than this publicly without a lot of thought and discussion. If for some reason this is important to you, let me know and I can send you a more detailed summary of my thoughts.

I'm deliberately leaving this comment in this obscure place for now while I talk to Michael and Jessica about whether they would prefer a more public apology that also brings all of this back to people's attention again.

Comment by Scott Alexander (Yvain) on “Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments · 2022-04-20T08:02:06.245Z · LW · GW

I agree it's not necessarily a good idea to go around founding the Let's Commit A Pivotal Act AI Company.

But I think there's room for subtlety somewhere like "Conditional on you being in a situation where you could take a pivotal act, which is a small and unusual fraction of world-branches, maybe you should take a pivotal act."

That is, if you are in a position where you have the option to build an AI capable of destroying all competing AI projects, the moment you notice this you should update heavily in favor of short timelines (zero in your case, but everyone else should be close behind) and fast takeoff speeds (since your AI has these impressive capabilities). You should also update on existing AI regulation being insufficient (since it was insufficient to prevent you).

Somewhere halfway between "found the Let's Commit A Pivotal Act Company" and "if you happen to stumble into a pivotal act, take it", there's an intervention to spread a norm of "if a good person who cares about the world happens to stumble into a pivotal-act-capable AI, take the opportunity". I don't think this norm would necessarily accelerate a race. After all, bad people who want to seize power can take pivotal acts whether we want them to or not. The only people who are bound by norms are good people who care about the future of humanity. I, as someone with no loyalty to any individual AI team, would prefer that (good, norm-following) teams take pivotal acts if they happen to end up with the first superintelligence, rather than not doing that.

Another way to think about this is that all good people should be equally happy with any other good person creating a pivotal AGI, so they won't need to race among themselves. They might be less happy with a bad person creating a pivotal AGI, but in that case you should race and you have no other option. I realize "good" and "bad" are very simplistic but I don't think adding real moral complexity changes the calculation much.

I am more concerned about your point where someone rushes into a pivotal act without being sure their own AI is aligned. I agree this would be very dangerous, but it seems like a job for normal cost-benefit calculation: what's the risk of your AI being unaligned if you act now, vs. someone else creating an unaligned AI if you wait X amount of time? Do we have any reason to think teams would be systematically biased when making this calculation?

Comment by Scott Alexander (Yvain) on Call For Distillers · 2022-04-09T11:52:42.798Z · LW · GW

My current plan is to go through most of the MIRI dialogues and anything else lying around that I think would be of interest to my readers, at some slow rate where I don't scare off people who don't want to read too much AI stuff. If anyone here feels like something else would be a better use of my time, let me know.

Comment by Scott Alexander (Yvain) on How to Interpret Vitamin D Dosage Using Numbers · 2022-04-08T07:58:16.109Z · LW · GW

I don't think hunter-gatherers get 16000 to 32000 IU of Vitamin D daily. This study suggests Hadza hunter-gatherers get more like 2000. I think the difference between their calculation and yours is that they find that hunter-gatherers avoid the sun during the hottest part of the day. It might also have to do with them being black; I'm not sure.

Hadza hunter-gatherers have serum D levels of about 44 ng/ml. Based on this paper, I think you would need total vitamin D (diet + sunlight + supplements) of about 4400 IU/day to get that amount. If you start off as a mildly deficient American (15 ng/ml), you'd need an extra 2900 IU/day; if you start out as an average white American (30 ng/ml), you'd need an extra 1400 IU/day. The Hadza are probably an overestimate of what you need since they're right on the equator - hunter-gatherers in eg Europe probably did fine too. I think this justifies the doses of 400 - 2000 IU/day in studies as reasonably evolutionarily informed.
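(The arithmetic I'm doing here is just the roughly linear rule implied by those numbers - about 100 IU/day of total vitamin D per ng/ml of serum level - which is a simplification of the paper's dose-response curve:)

```python
# Back-of-the-envelope version of the numbers above, using the roughly linear rule
# they imply: ~100 IU/day of total vitamin D per ng/ml of serum 25(OH)D.
IU_PER_NG_ML = 100  # simplification of the dose-response curve in the cited paper

def extra_iu_needed(current_ng_ml, target_ng_ml=44):
    """Extra IU/day needed to move from a current serum level to the Hadza-like 44 ng/ml."""
    return (target_ng_ml - current_ng_ml) * IU_PER_NG_ML

print(extra_iu_needed(0))    # ~4400 IU/day total to reach 44 ng/ml
print(extra_iu_needed(15))   # ~2900 IU/day extra for a mildly deficient American
print(extra_iu_needed(30))   # ~1400 IU/day extra for an average white American
```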

Please don't actually take 16000 IU of vitamin D daily; taken long-term, this would put you at risk for vitamin D toxicity.

I also agree with the issues about the individual studies which other people have brought up.

Comment by Scott Alexander (Yvain) on Is Metaculus Slow to Update? · 2022-03-27T07:37:28.828Z · LW · GW

Thanks for looking into this.

Comment by Scott Alexander (Yvain) on We're already in AI takeoff · 2022-03-13T04:34:55.445Z · LW · GW

Maybe. It might be that if you described what you wanted more clearly, it would be the same thing that I want, and possibly I was incorrectly associating this with the things at CFAR you say you're against, in which case sorry.

But I still don't feel like I quite understand your suggestion. You talk of "stupefying egregores" as problematic insofar as they distract from the object-level problem. But I don't understand how pivoting to egregore-fighting isn't also a distraction from the object-level problem. Maybe this is because I don't understand what fighting egregores consists of, and if I knew, then I would agree it was some sort of reasonable problem-solving step.

I agree that the Sequences contain a lot of useful deconfusion, but I interpret them as useful primarily because they provide a template for good thinking, and not because clearing up your thinking about those things is itself necessary for doing good work. I think of the cryonics discussion the same way I think of the Many Worlds discussion - following the motions of someone as they get the right answer to a hard question trains you to do this thing yourself.

I'm sorry if "cultivate your will" has the wrong connotations, but you did say "The problem that's upstream of this is the lack of will", and I interpreted a lot of your discussion of de-numbing and so on as dealing with this.

You wrote:

Part of what inspired me to write this piece at all was seeing a kind of blindness to these memetic forces in how people talk about AI risk and alignment research. Making bizarre assertions about what things need to happen on the god scale of "AI researchers" or "governments" or whatever, roughly on par with people loudly asserting opinions about what POTUS should do. It strikes me as immensely obvious that memetic forces precede AGI. If the memetic landscape slants down mercilessly toward existential oblivion here, then the thing to do isn't to prepare to swim upward against a future avalanche. It's to orient to the landscape.

The claim "memetic forces precede AGI" seems meaningless to me, except insofar as memetic forces precede everything (eg the personal computer was invented because people wanted personal computers and there was a culture of inventing things). Do you mean it in a stronger sense? If so, what sense?

I also don't understand why it's wrong to talk about what "AI researchers" or "governments" should do. Sure, it's more virtuous to act than to chat randomly about stuff, but many Less Wrongers are in positions to change what AI researchers do, and if they have opinions about that, they should voice them. This post of yours right now seems to be about what "the rationalist community" should do, and I don't think it's a category error for you to write it. 

Maybe this would be easier if you described what actions we should take conditional on everything you wrote being right.

Comment by Scott Alexander (Yvain) on Why Rome? · 2022-03-13T01:50:53.474Z · LW · GW

Thank you for writing this. I've been curious about this and I think your explanation makes sense.

Comment by Scott Alexander (Yvain) on We're already in AI takeoff · 2022-03-10T09:37:55.931Z · LW · GW

I wasn't convinced of this ten years ago and I'm still not convinced.

When I look at people who have contributed most to alignment-related issues - whether directly, like Eliezer Yudkowsky and Paul Christiano - or theoretically, like Toby Ord and Katja Grace - or indirectly, like Sam Bankman-Fried and Holden Karnofsky - what all of these people have in common is focusing mostly on object-level questions. They all seem to me to have a strong understanding of their own biases, in the sense that gets trained by natural intelligence, really good scientific work, and talking to other smart and curious people like themselves. But as far as I know, none of them have made it a focus of theirs to fight egregores, defeat hypercreatures, awaken to their own mortality, refactor their identity, or cultivate their will. In fact, all of them (except maybe Eliezer) seem like the kind of people who would be unusually averse to thinking in those terms. And if we pit their plumbing or truck-maneuvering skills against those of an average person, I see no reason to think they would do better (besides maybe high IQ and general ability).

It's seemed to me that the more that people talk about "rationality training" more exotic than what you would get at a really top-tier economics department, the more those people tend to get kind of navel-gazey, start fighting among themselves, and not accomplish things of the same caliber as the six people I named earlier. I'm not just saying there's no correlation with success, I'm saying there's a negative correlation.

(Could this be explained by people who are naturally talented not needing to worry about how to gain talent? Possibly, but this isn't how it works in other areas - for example, all top athletes, no matter how naturally talented, have trained a lot.)

You've seen the same data I have, so I'm curious what makes you think this line of research/thought/effort will be productive.

Comment by Scott Alexander (Yvain) on How to Legally Conduct "Rationalist Bets" in California? · 2022-02-10T08:20:44.927Z · LW · GW

If everyone involved donates a consistent amount to charity every year (eg 10% of income), the loser could donate their losses to charity, and the winner could count that against their own charitable giving for the year, ending up with more money even though the loser didn't directly pay the winner.
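(A worked example with made-up numbers: suppose both parties normally give $10,000/year and the bet is for $1,000.)

```python
# Toy illustration of the mechanism with made-up numbers.
usual_donation = 10_000   # what each party donates to charity in a normal year
bet = 1_000

loser_donates = usual_donation + bet   # loser gives their usual amount plus the lost bet
winner_donates = usual_donation - bet  # winner counts the loser's extra gift against their own giving

# Total charitable giving is unchanged (still 20,000) and no money changes hands directly,
# but the winner ends the year 1,000 richer than they otherwise would have been.
print(loser_donates + winner_donates)  # 20000
```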

Comment by Scott Alexander (Yvain) on Scott Alexander 2021 Predictions: Market Prices - Resolution · 2022-01-02T19:23:02.206Z · LW · GW

Thanks for doing this!

Comment by Scott Alexander (Yvain) on Occupational Infohazards · 2021-12-21T21:10:09.298Z · LW · GW

Interpreting you as saying that during January-June 2017 you were basically doing the same thing as the Leveragers when talking about demons and had no other signs of psychosis, I agree this was not a psychiatric emergency, and I'm sorry if I got confused and suggested it was. I've edited my post also.

Comment by Scott Alexander (Yvain) on Occupational Infohazards · 2021-12-21T19:35:56.346Z · LW · GW

Sorry, yes, I meant the psychosis was an emergency. Non-psychotic discussion of auras/demons isn't.

I'm kind of unclear what we're debating now. 

I interpret us as both agreeing that there are people talking about auras and demons who are not having psychiatric emergencies (eg random hippies, Catholic exorcists), and they should not be bothered, except insofar as you feel like having rational arguments about it. 

I interpret us as both agreeing that you were having a psychotic episode, that you were going further / sounded less coherent than the hippies and Catholics, and that some hypothetical good diagnostician / good friend should have noticed that and suggested you seek help.

Am I right that we agree on those two points? Can you clarify what you think our crux is?

Comment by Scott Alexander (Yvain) on Occupational Infohazards · 2021-12-21T18:29:33.792Z · LW · GW

"You wrote that talking about auras and demons the way Jessica did while at MIRI should be considered a psychiatric emergency. When done by a practicing psychiatrist this is an impingement on Jessica's free speech."

I don't think I said any talk of auras should be a psychiatric emergency; otherwise we'd have to commit half of Berkeley. I said that "in the context of her being borderline psychotic" ie including this symptom, they should have "[told] her to seek normal medical treatment". Suggesting that someone seek normal medical treatment is pretty different from saying this is a psychiatric emergency, and hardly an "impingement" on free speech. I'm kind of playing this in easy mode here because in hindsight we know Jessica ended up needing treatment; I feel like this makes it pretty hard to make it sound sinister when I suggest this.

"You wrote this in response to a post that contained the following and only the following mentions of demons or auras:"

"During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation..." [followed by several more things along these lines]

Yes? That actually sounds pretty bad to me. If I ever go around saying that I have destroyed significant parts of the world with my demonic powers, you have my permission to ask me if maybe I should seek psychiatric treatment. If you say "Oh yes, Scott, that's a completely normal and correct thing to think, I am validating you and hope you go deeper into that", then once I get better I'll accuse you of being a bad friend. Jessica's doing the opposite and accusing MIRI of being a bad workplace for not validating and reinforcing her in this!

I think what we all later learned about Leverage confirms all this. Leverage did the thing Jessica wanted MIRI to do: told everyone ex cathedra that demons were real and they were right to be afraid of them, and so they got an epidemic of mass hysteria that sounds straight out of a medieval nunnery. People were getting all sorts of weird psychosomatic symptoms, and one of the commenters said their group house exploded when one member accused another member of being possessed by demons, refused to talk or communicate with them in case the demons spread, and the "possessed" had to move out. People felt traumatized, relationships were destroyed, it sounded awful.

MIRI is under no obligation to validate and tolerate individual employees' belief in demons, including some sort of metaphorical demons. In fact, I think they're under a mild obligation not to, as part of their ~leader-ish role in the rationalist community. They're under an obligation to model good epistemics for the rest of us and avoid more Leverage-type mass hysterias.

One of my heroes is this guy:

https://www.youtube.com/watch?v=Bmo1a-bimAM

Surinder Sharma, an Indian mystic, claimed to be able to kill people with a voodoo curse. He was pretty convincing and lots of people were legitimately scared. Sanal Edamaruku, president of the Indian Rationalist Association, challenged Sharma to kill him. Since this is the 21st century and capitalism is amazing, they decided to do the whole death curse on live TV. Sharma sprinkled water and chanted magic words around Edamaruku. According to Wikipedia, “the challenge ended after several hours, with Edamaruku surviving unharmed”.

If Leverage had a few more Sanal Edamarukus, a lot of people would have avoided a pretty weird time. 

I think the best response MIRI could have had to all this would have been for Nate Soares to challenge Geoff Anders to infect him with a demon on live TV, then walk out unharmed and laugh. I think the second-best was the one they actually did.

EDIT: I think I misunderstood parts of this, see below comments.

Comment by Scott Alexander (Yvain) on Occupational Infohazards · 2021-12-20T17:59:24.686Z · LW · GW

Thanks for this.

I've been trying to research and write something kind of like this giving more information for a while, but got distracted by other things. I'm still going to try to finish it soon.

While I disagree with Jessica's interpretations of a lot of things, I generally agree with her facts (about the Vassar stuff which I have been researching; I know nothing about the climate at MIRI). I think this post gives most of the relevant information mine would give. I agree with (my model of) Jessica that proximity to Michael's ideas (and psychedelics) was not the single unique cause of her problems but may have contributed.

The main thing I'd fight if I felt fighty right now is the claim that by not listening to talk about demons and auras MIRI (or by extension me, who endorsed MIRI's decision) is impinging on her free speech. I don't think she should face legal sanction for talking about these things, but I also don't think other people were under any obligation to take it seriously, including if she was using these terms metaphorically but they disagreed with her metaphors or thought she wasn't quite being metaphorical enough.

Comment by Scott Alexander (Yvain) on Should I delay having children to take advantage of polygenic screening? · 2021-12-20T17:38:11.802Z · LW · GW

Embryos produced by the same couple won't vary in IQ too much, and we only understand some of the variation in IQ, so we're trying to predict small differences without being able to see what's going on too clearly. Gwern predicts that if you had ten embryos to choose from, understood the SNP portion of IQ genetics perfectly, and picked the highest-IQ embryo without selecting on any other factor, you could gain ~9 IQ points over natural conception.

Given our current understanding of IQ genetics, keeping the other two factors the same, you can gain ~3 points. But the vast majority of couples won't get 10 embryos, and you may want to select for things other than IQ (eg not having deadly diseases). So in reality it'll be less than that.

The only thing here that will get better in the future is our understanding of IQ genetics, but it doesn't seem to be moving forward especially quickly; at some point we'll exhaust the low- and medium-hanging fruit, and even if we do a great job there the gains will max out at somewhere less than 9 points.
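(If you want to see where numbers like that come from, here's a toy version of the selection math; the within-family SDs are just back-solved from the ~9-point and ~3-point figures above, not taken from Gwern's actual model.)

```python
# Toy version of embryo selection: the expected gain from picking the best of n embryos
# on a noisy predictor is roughly E[max of n draws] * (within-family SD of predicted IQ).
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(n_embryos, predictor_sd_points, trials=200_000):
    draws = rng.normal(0.0, predictor_sd_points, size=(trials, n_embryos))
    return draws.max(axis=1).mean()

# E[max of 10 standard normals] is ~1.54, so these SDs are back-solved from the figures above.
print(expected_gain(10, 5.9))  # ~9 points: perfect understanding of the SNP portion
print(expected_gain(10, 2.0))  # ~3 points: current predictors
print(expected_gain(5, 2.0))   # ~2.3 points: with a more typical number of embryos
```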

Also, this is assuming someone decides to make polygenic screening for IQ available at some point, or someone puts in the work to make it easy for the average person to do despite being not officially available.

I am not an expert in this and would defer to Gwern or anyone who knows more.

Comment by Scott Alexander (Yvain) on Getting diagnosed for ADHD if I don't plan on taking meds? · 2021-12-20T03:27:32.716Z · LW · GW

"Diagnosed" isn't a clear concept.

The minimum viable "legally-binding" ADHD diagnosis a psychiatrist can give you is to ask you about your symptoms, compare them to extremely vague criteria in the DSM, and agree that you sound ADHD-ish.

ADHD is a fuzzy construct without clear edges and there is no fact of the matter about whether any given individual has it. So this is just replacing your own opinion about whether you seem to fit a vaguely-defined template with a psychiatrist's only slightly more informed opinion. The most useful things you could get out of this are meds (which it seems you don't want), accommodations at certain workplaces and schools (as Elizabeth describes in her comment), and maybe getting your insurance to pay for certain kinds of therapy - but don't assume your insurance will actually do this unless you check.

Beyond that minimum viable diagnosis, there are also various complicated formal ADHD tests. Not every psychiatrist will refer you to these, not every insurance company will pay for one of them, and you should be prepared to have to advocate for yourself hard if you want one. If you get one of these, it can tell you what percentile you are in for various cognitive skills: for example, that 95% of people are better at maintaining focus than you are. Maybe some professional knows how to do something useful with this, but I (a psychiatrist) don't, and you probably won't find that professional unless you look hard for them.

If you already have a strong sense of your cognitive strengths and weaknesses and don't need accommodations, I don't think the diagnosis would add very much. Even without a diagnosis, if you think you have problems with attention/focus/etc, you can read books aimed at ADHD people to try to see what kind of lifestyle changes you can make.

In very rare cases, you will get a very experienced psychiatrist who is happy to work with you on making lifestyle/routine changes and very good at telling you what to do, but don't expect this to happen by accident. You're more likely to get this from an ADHD coach, who will take you as a client whether or not you have an official diagnosis.

Comment by Scott Alexander (Yvain) on Venture Granters, The VCs of public goods, incentivizing good dreams · 2021-12-20T03:18:26.855Z · LW · GW

I would look into social impact bonds, impact certificates, and retroactive public goods funding. I think these are three different attempts to get at the same insight you've had here. There are incipient efforts to get some of them off the ground and I agree that would be great.

Comment by Scott Alexander (Yvain) on Should I delay having children to take advantage of polygenic screening? · 2021-12-20T03:17:05.962Z · LW · GW

There's polygenic screening now. It doesn't include eg IQ, but polygenic screening for IQ is unlikely to be very good any time in the near future. Probably polygenic screening for other things will improve at some rate, but regardless of how long you wait, it could always improve more if you wait longer, so there will never be a "right time".

Even in the very unlikely scenario where your decision about child-rearing should depend on something about polygenic screening, I say do it now.

Comment by Scott Alexander (Yvain) on Quis cancellat ipsos cancellores? · 2021-12-20T02:50:45.745Z · LW · GW

To contribute whatever information I can here:

  1. I've been to three of Aella's parties - without remembering exact dates, something like 2018, 2019, and 2021. While they were pretty wild, and while I might not have been paying close attention, I didn't personally see anything that seemed consent-violating or even a gray area, and I definitely didn't hear anything about "drug roulette".
  2. I had originally been confused by the author's claim that "Aella was mittenscautious". Aella was definitely not either of the two women who blogged on that account describing abusive relationships with Brent. After rereading the post, it sounds like the author is saying Aella set up the Medium page for them, which I have no evidence on either way. If true, I would be grateful to her for that - there had been widespread rumors Brent was abusing people before that, but in the absence of common knowledge nothing was being done.
  3. There was widespread discontent with CFAR's original decision not to come down harder on Brent. I knew several people who were angry, none of whom were Aella or seemed especially connected to her. It's possible Aella was manipulating all of them behind the scenes, but it seems more possible that a lot of rationalists were just naturally concerned that we seemed to be tolerating a person with multiple credible rape/abuse allegations.

Moving from evidence to vague thoughts: there's a tough balance between not doing "cancel culture" in the bad sense, and also not being tolerant/complacent in the face of bad things. I'm pretty allergic to the bad kind of "cancel culture" but I've always felt like Aella strikes this balance correctly. If she in fact helped with mittenscautious I am even more impressed with this.

(I did worry Aella's "Frame Control" post would make it too easy to try to cancel people for vague hard-to-describe infractions, but I didn't get the sense Aella herself was trying to do that).

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T09:41:55.431Z · LW · GW

Thanks for this.

I'm interested in figuring out more what's going on here - how do you feel about emailing me, hashing out the privacy issues, and, if we can get them hashed out, you telling me the four people you're thinking of who had psychotic episodes?

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T01:04:00.887Z · LW · GW

I agree I'm being somewhat inconsistent, I'd rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I'm trying to figure out what went on in these cases in more details and will probably want to ask you a lot of questions by email if you're open to that.

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T09:29:14.871Z · LW · GW

If this information isn't too private, can you send it to me? scott@slatestarcodex.com

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T20:55:15.797Z · LW · GW

I've posted an edit/update above after talking to Vassar.

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T19:30:08.780Z · LW · GW

Yes, I agree with you that all of this is very awkward.

I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.

But we have to admit at least small violations of it even to get the concept of "cult". Not just the sort of weak cults we're discussing here, but even the really strong cults like Heaven's Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven's Gate is bad for them, and leave. When we use the word "cult", we're implicitly agreeing that this doesn't always work, and we're bringing in creepier and less comprehensible ideas like "charisma" and "brainwashing" and "cognitive dissonance".

(and the same thing with the concept of "emotionally abusive relationship")

I don't want to call the Vassarites a cult because I'm sure someone will confront me with a Cult Checklist that they don't meet, but I think that it's not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it's weird that you can get that far without robes and chanting or anything, but I never claimed to really understand exactly how cults work, plus I'm sure the drugs helped.

I think believing cults are possible is different in degree if not in kind from Leverage "doing seances...to call on demonic energies and use their power to affect the practitioners' social standing". I'm claiming, though I can't prove it, that what I'm saying is more towards the "believing cults are possible" side.

I'm actually very worried about this! I hate admitting cults are possible! If you admit cults are possible, you have to acknowledge that the basic liberal model has gaps, and then you get things like if an evangelical deconverts to atheism, the other evangelicals can say "Oh, he's in a cult, we need to kidnap and deprogram him since his best self wouldn't agree with the deconversion." I want to be extremely careful in when we do things like that, which is why I'm not actually "calling for isolating Michael Vassar from his friends". I think in the Outside View we should almost never do this!

But you were the one to mention this cluster of psychotic breaks, and I am trying to provide what I think is a more accurate perspective on them. Maybe in the future we learn that this was because of some weird neuroinflammatory virus that got passed around at a Vassarite meeting and we laugh that we were ever dumb enough to think a person/group could transmit psychotic breaks. But until then, I think the data point that all of this was associated with Vassar and the Vassarites is one we shouldn't just ignore.

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T11:24:00.823Z · LW · GW

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)

[...]

Michael is a charismatic guy who has strong views and argues forcefully for them. That's not the same thing as having mysterious mind powers to "make people paranoid" or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I'm sure he'd be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.

I more or less Outside View agree with you on this, which is why I don't go around making call-out threads or demanding people ban Michael from the community or anything like that (I'm only talking about it now because I feel like it's fair for the community to try to defend itself after Jessica attributed all of this to the wider community instead of Vassar specifically). "This guy makes people psychotic by talking to them" is a silly accusation to go around making, and I hate that I have to do it!

But also, I do kind of notice the skulls and they are really consistent, and I would feel bad if my desire not to say this ridiculous thing resulted in more people getting hurt.

I think the minimum viable narrative here is, as you say, something like "Michael is very good at spotting people right on the verge of psychosis, and then he suggests they take drugs." Maybe a slightly more complicated narrative involves bringing them to a state of total epistemic doubt where they can't trust any institutions or any of the people they formerly thought were their friends, although now this is getting back into the "he's just having normal truth-seeking conversation" objection. He also seems really good at pushing trans people's buttons in terms of their underlying anxiety around gender dysphoria (see the Ziz post), so maybe that contributes somehow. I don't know how it happens; I'm sufficiently embarrassed to be upset about something which looks like "having a nice interesting conversation" from the outside, and I don't want to violate liberal norms that you're allowed to have conversations - but I think those norms also make it okay to point out the very high rate at which those conversations end in mental breakdowns.

Maybe one analogy would be people with serial emotionally abusive relationships - should we be okay with people dating Brent? Like yes, he had a string of horrible relationships that left the other person feeling violated and abused and self-hating and trapped. On the other hand, most of this, from the outside, looked like talking. He explained why it would be hurtful for the other person to leave the relationship or not do what he wanted, and he was convincing and forceful enough about it that it worked (I understand he also sometimes used violence, but I think the narrative still makes sense without it). Even so, the community tried to make sure people knew if they started a relationship with him they would get hurt, and eventually got really insistent about that. I do feel like this was a sort of boundary crossing of important liberal norms, but I think you've got to at least leave that possibility open for when things get really weird.

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T10:43:13.426Z · LW · GW

I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were feeling slightly better, and obviously they just responded with their "it's correct to be freaking out about learning your entire society is corrupt and gaslighting" shtick.

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T10:34:57.020Z · LW · GW

It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn't publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-17T23:44:23.947Z · LW · GW

Thanks, if you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-17T23:18:14.646Z · LW · GW

I don't remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. Eg this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerously psychotic again, then lifetime antipsychotics are probably their best bet. In a case like that, I would want the patient's buy-in, ie if they were medicated after a psychotic episode I would advise them of the reasons why continued antipsychotic use was recommended in their case; if they said they didn't want it we would explore why given the very high risk level, and if they still said they didn't want it then I would follow their direction.

I didn't get a chance to talk to you during your episode, so I don't know exactly what was going on. I do think that psychosis should be thought of differently than just "weird thoughts that might be true", as more of a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom. I think in mild psychosis it's possible to snap someone back to reality where they agree their weird thoughts aren't true, but in severe psychosis it isn't (I remember when I was a student I tried so hard to convince someone that they weren't royalty, hours of passionate debate, and it just did nothing). I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom. Analogy to eg someone having chest pain from a heart attack, and you give them painkillers for the pain but don't treat the heart attack.

(although there's a separate point where it would be wrong and objectifying to falsely claim someone who's just thinking differently is psychotic or pre-psychotic; given that you did end up psychotic, it doesn't sound like the people involved were making that mistake)

My impression is that some medium percent of psychotic episodes end in permanent reduced functioning, and some other medium percent end in suicide or jail or some other really negative consequence, and this is scary enough that treating it is always an emergency, and just treating the symptom but leaving the underlying condition is really risky.

I agree many psychiatrists are terrible and that wanting to avoid them is a really sympathetic desire, but when it's something really serious like psychosis I think of this as like wanting to avoid surgeons (another medical profession with more than its share of jerks!) when you need an emergency surgery.

Comment by Scott Alexander (Yvain) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-17T22:08:56.321Z · LW · GW

I want to add some context I think is important to this.

Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found in Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.

Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don't think he thinks they're worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it's especially galling that they're just as bad).  Since then, he's tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combination of drugs and paranoia caused a lot of borderline psychosis, which the Vassarites mostly interpreted as success ("these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird"). Occasionally it would also cause full-blown psychosis, which they would discourage people from seeking treatment for, because they thought psychiatrists were especially evil and corrupt and traumatizing and unable to understand that psychosis is just breaking mental shackles.

(I am a psychiatrist and obviously biased here)

Jessica talks about a cluster of psychoses from 2017 - 2019 which she blames on MIRI/CFAR. She admits that not all the people involved worked for MIRI or CFAR, but kind of equivocates around this and says they were "in the social circle" in some way. The actual connection is that most (maybe all?) of these people were involved with the Vassarites or the Zizians (the latter being IMO a Vassarite splinter group, though I think both groups would deny this characterization). The main connection to MIRI/CFAR is that the Vassarites recruited from the MIRI/CFAR social network.

I don't have hard evidence of all these points, but I think Jessica's text kind of obliquely confirms some of them. She writes:

"Psychosis" doesn't have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Liang's work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.

RD Laing was a 1960s pseudoscientist who claimed that schizophrenia is how "the light [begins] to break through the cracks in our all-too-closed minds". He opposed schizophrenics taking medication, and advocated treatments like "rebirthing therapy" where people role-play fetuses going through the birth canal - for which he was stripped of his medical license. The Vassarites like him, because he is on their side in the whole "actually psychosis is just people being enlightened as to the true nature of society" thing. I think Laing was wrong, psychosis is actually bad, and that the "actually psychosis is good sometimes" mindset is extremely related to the Vassarites causing all of these cases of psychosis.

Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community.  While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible.  (I noted at the time that there might be a sense in which different people have "auras" in a way that is not less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.) As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.

Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she was borderline psychotic, and comparing this to Leverage, which she thinks did a better job by promoting an environment where people accepted these ideas. I think MIRI was correct to be concerned and (reading between the lines) to tell her to seek normal medical treatment, instead of telling her that demons were real and she was right to worry about them, and I think her disagreement with this is coming from a belief that psychosis is potentially a form of useful creative learning. While I don't want to assert that I am 100% sure this can never be true, I think it's true rarely enough, and with enough downside risk, that treating it as a psychiatric emergency is warranted.

On the two cases of suicide, Jessica writes:

Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz.  (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)

Ziz tried to create an anti-CFAR/MIRI splinter group whose members had mental breakdowns. Jessica also tried to create an anti-CFAR/MIRI splinter group and had a mental breakdown. This isn't a coincidence - Vassar tried his jailbreaking thing on both of them, and it tended to reliably produce people who started crusades against MIRI/CFAR, and who had mental breakdowns. Here's an excerpt from Ziz's blog on her experience (edited heavily for length, and slightly to protect the innocent):

When I first met Vassar, it was a random encounter in an experimental group call organized by some small-brand rationalist. He talked for about an hour, and automatically became the center of conversation, I typed notes as fast as I could, thinking, “if this stuff is true it changes everything; it’s the [crux] of my life.” (It was true, but I did not realize it immediately.) Randomly, another person found the link, came in and said, “hi”. [Vassar] said “hi”, she said “hi” again, apparently for humor. [Vassar] said something terse I forget “well if this is what …”, apparently giving up on the venue, and disconnected without further comment. One by one, the other ~10 people besides her, including me, disconnected disappointedly, wordlessly or just about, right after. A wizard was gracing us with his wisdom and she fucked it up. And in my probably-representative case that was just about the only way I could communicate how frustrated I was at her for that.

[Vassar explained how] across society, the forces of gaslighting were attacking people’s basic ability to think and to [use] justice as a Schelling point until only the built-in Schelling points of gender and race remained, Vassar listed fronts in the war on gaslighting, disputes in the community, and included [local community member ZD] [...] ZD said Vassar broke them out of a mental hospital. I didn’t ask them how. But I considered that both badass and heroic. From what I hear, ZD was, probably as with most, imprisoned for no good reason, in some despicable act of, “get that unsightly person not playing along with the [heavily DRM’d] game we’ve called sanity out of my free world”.

I heard [local community member AM] was Vassar’s former “apprentice”. And I had started picking up jailbroken wisdom from them secondhand without knowing where it was from. But Vassar did it better. After Rationalist Fleet, I concluded I was probably worth Vassar’s time to talk to a bit, and I emailed him, carefully briefly stating my qualifications, in terms of ability to take ideas seriously and learn from him, so that he could get maximally dense VOI on whether to talk to me. A long conversation ensued. And I got a lot from it. [...]

Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve. And didn’t detransition. This all created an awful tension in me. The rationality community was kind of compromised as a rallying point for truthseeking. This was desperately bad for the world. [Vassar] was at the center of, largely the creator of a “no actually for real” rallying point for the jailbroken reality-not-social-reality version of this.

Ziz is describing the same cluster of psychoses Jessica is (including Jessica's own), but I think doing so more accurately, by describing how it was a Vassar-related phenomenon. I would add Ziz herself to the list of trans women who got negative mental effects from Vassar, although I think (not sure) Ziz would not endorse my description of her as having these.

What was the community's response to this? I have heard rumors that Vassar was fired from MIRI a long time ago for doing some very early version of this, although I don't know if it's true. He was banned from REACH (and implicitly rationalist social events) for somewhat unrelated reasons. I banned him from SSC meetups for a combination of reasons including these. For reasons I don't fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything's kind of been frozen in place since then.

I want to clarify that I don't dislike Vassar, he's actually been extremely nice to me, I continue to be in cordial and productive communication with him, and his overall influence on my life personally has been positive. He's also been surprisingly gracious about the fact that I go around accusing him of causing a bunch of cases of psychosis. I don't think he does the psychosis thing on purpose, I think he is honest in his belief that the world is corrupt and traumatizing (which at the margin, shades into values of "the world is corrupt and traumatizing" which everyone agrees are true) and I believe he is honest in his belief that he needs to figure out ways to help people do better. There are many smart people who work with him and support him who have not gone psychotic at all. I don't think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.  My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.

EDIT/UPDATE: I got a chance to talk to Vassar, who disagrees with my assessment above. We're still trying to figure out the details, but so far, we agree that there was a cluster of related psychoses around 2017, all of which were in the same broad part of the rationalist social graph. Features of that part were - it contained a lot of trans women, a lot of math-y people, and some people who had been influenced by Vassar, although Vassar himself may not have been a central member. We are still trying to trace the exact chain of who had problems first and how those problems spread. I still suspect that Vassar unwittingly came up with some ideas that other people then spread through the graph. Vassar still denies this and is going to try to explain a more complete story to me when I have more time.

Comment by Scott Alexander (Yvain) on Blood Is Thicker Than Water 🐬 · 2021-10-08T23:26:49.396Z · LW · GW

I've tried to address your point about psychiatry in particular at https://slatestarcodex.com/2019/12/04/symptom-condition-cause/

For the whale point, am I fairly interpreting your argument as saying that mammals are more similar, and more fundamentally similar, to each other than swimmy-things are to each other? If so, consider a thought experiment. Swimmy-things are like each other because of convergent evolution. Presumably millions of years ago, the day after the separation of the whale and land-mammal lineages, proto-whales and proto-landmammals were extremely similar, and proto-whales and proto-fish were extremely dissimilar. Let's say in 99% of ways, whales were more like landmammals, and in 1% of ways, they were more like fish. Some convergent evolution takes place, we get to the present, and you're claiming that modern whales are still more like landmammals than fish - I have no interest in disputing that claim, let's say they're more like landmammals in 85% of ways, and fish in 15% of ways. Now fast-forward into the future, after a billion more years of convergent evolution, and imagine that whales have evolved to their new niche so well that they are more like fish in 99% of ways, and more like mammals in only 1% of ways. Are you still going to insist that blood is thicker than water and we need to judge them by their phylogenetic group, even though this gives almost no useful information and it's almost always better to judge them by their environmental affinities?

(I don't think this is an absurd hypothetical - I think "crabs" are in this situation right now)

And if not, at some point in the future, do they go from being obviously-mammals-you-are-not-allowed-to-argue-this to obviously-fish-you-are-not-allowed-to-argue-this in the space of a single day? Or might there be a very long period when they are more like mammals in some way, more like fish in others, and you're allowed to categorize them however you want based on which is more useful for you? If the latter, what makes you think we're not in that period right now?

Comment by Scott Alexander (Yvain) on “Eating Dirt Benefits Kids” is Basically Made Up · 2021-10-08T23:11:18.535Z · LW · GW

This rubs me wrong for the same reason that "no evidence for..." claims rub me wrong.

We have a probably-correct model, the hygiene hypothesis broadly understood. We have a plausible corollary of that model, which is that kids eating dirt helps their immune system (I had never heard this particular claim before, but since you mention it, it seems like a plausible corollary). We should have a low-but-not-ridiculously-low prior on this.

(probably some people would say a high prior, since it follows naturally from a probably-true thing, but I don't trust any multi-step chain of reasoning in medicine)

When I read the title, I thought "Oh! I guess someone showed the specific behavior of eating dirt doesn't help, so I should update against the hygiene hypothesis!" But the post presents no evidence this is wrong. It's just saying there are no studies of it.

This seems kind of like framing the proverbial parachute point as "'Parachutes Prevent Falling Injuries' Is Basically Made Up". It's not made up! It was assigned a high prior based on other things we know! Nobody has given us any evidence for or against that prior, so we should stick to it.
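
To make the "stick to the prior" point slightly more explicit in Bayesian terms (a rough sketch with hypothetical notation, not anything from the original post): if nobody studying the question is about equally likely whether or not dirt-eating helps, then the likelihood ratio is close to 1 and the posterior stays where the prior was:

$$P(\text{dirt helps} \mid \text{no studies}) \;=\; P(\text{dirt helps}) \cdot \frac{P(\text{no studies} \mid \text{dirt helps})}{P(\text{no studies})} \;\approx\; P(\text{dirt helps})$$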

Comment by Scott Alexander (Yvain) on Forecasting Newsletter: July 2021 · 2021-08-03T18:41:02.166Z · LW · GW

Can you explain the no-loss competition idea further?

  • If you have to stake your USDC, isn't this still locking up USDC, the thing you were trying to avoid doing?
  • What gives the game tokens value?

Comment by Scott Alexander (Yvain) on (Brainstem, Neocortex) ≠ (Base Motivations, Honorable Motivations) · 2021-07-22T08:22:18.419Z · LW · GW

Thanks, I read that, and while I wouldn't say I'm completely enlightened, I feel like I have a good basis for reading it a few more times until it sinks in.

I interpret you as saying in this post: there is no fundamental difference between base and noble motivations, they're just two different kinds of plans we can come up with and evaluate, and we resolve conflicts between them by trying to find frames in which one or the other seems better. Noble motivations seem to "require more willpower" only because we often spend more time working on coming up with positive frames for them, because this activity flatters our ego and so is inherently rewarding.

I'm still not sure I agree with this. My own base motivation here is that I posted a somewhat different model of willpower at https://astralcodexten.substack.com/p/towards-a-bayesian-theory-of-willpower , which is similar to yours except that it does keep a role for the difference between "base" and "noble" urges. I'm trying to figure out if I still want to defend it against this one, but my thoughts are something like:

- It feels like on stimulants, I have more "willpower": it's easy to take the "noble" choice when it might otherwise be hard. Likewise, when I'm drunk I have less ability to override base motivations with noble ones, and (although I guess I can't prove it) this doesn't seem like a purely cognitive effect where it's harder for me to "remember" the important benefits of my noble motivations. The same is true of various low-energy states, eg tired, sick, stressed - I'm less likely to choose the noble motivation in all of them. This suggests to me that baser and nobler motivations are coming from different places, and stimulants strengthen (in your model) the connection between the noble-motivation-place and the striatum relative to the connection between the base-motivation-place and the striatum, and alcohol/stress/etc weaken it.

- I'm skeptical of your explanation for the "asymmetry" of noble vs. base thoughts. Are thoughts about why I should stay home really less rewarding than thoughts about why I should go to the gym? I'm imagining the opposite - I imagine staying home in my nice warm bed, and this is a very pleasant thought, and accords with what I currently really want (to not go to the gym). On the other hand, thoughts about why I should go to the gym, if I were to verbalize them, would sound like "Ugh, I guess I have to consider the fact that I'll be a fat slob if I don't go, even though I wish I could just never have to think about that".

- Base thoughts seem like literally animalistic desires - hunger seems basically built on top of the same kind of hunger a lizard or nematode feels. We know there are a bunch of brain areas in the hypothalamus etc that control hunger. So why shouldn't this be ontologically different from nobler motivations that are different from lizards'? It seems perfectly sensible that eg stimulants strengthen something about the neocortex relative to whatever part of the hypothalamus is involved in hunger. I guess I'm realizing now how little I understand about hunger - surely the plan to eat must originate in the cortex like every other plan, but it sure feels like it's tied into the hypothalamus in some really important way. I guess maybe hunger could have a plan-generator exactly like every other, which is modulated by hypothalamic connections? It still seems like "plans that need outside justification" vs. "plans that the hypothalamus will just keep active even if they're stupid" is a potentially important dichotomy.

- Base motivations also seem like things which have a more concrete connection to reinforcement learning. There's a really short reinforcement loop between "want to eat candy" and "wow, that was reinforcing", and a really long (sometimes nonexistent) loop between going to the gym and anything good happening. Again, this makes me suspicious that the base motivations are "encoded" in some way that's different from the nobler motivations and which explains why different substances can preferentially reinforce one relative to the other.

- The reasons for thinking of base motivations as more like priors, discussed in that post.

- Kind of a dumb objection, but this feels analogous to other problems where conscious/intellectual knowledge fails to percolate to emotional centers of the brain, for example someone who knows planes are very safe but is scared of flying anyway. I'm not sure how to use your theory here to account for this situation, whereas if I had a theory that explained the plane phobia problem I feel like it would have to involve a concept of lower-level vs. higher-level systems that would be easy to plug into this problem.

- Another dumb anecdotal objection, but this isn't how I consciously experience weakness of will. The example that comes to mind most easily is wanting to scratch an itch while meditating, even though I'm supposed to stay completely still. When I imagine my thought process while worrying about this, it doesn't feel like trying to think up new reframings of the plan. It feels like some sensory region of the brain saying "HEY! ITCH! YOU SHOULD SCRATCH IT!" and my conscious brain trying to exert some effort to overcome that. The effort doesn't feel like thinking of new framings, and the need for the effort persists long after every plausible new framing has been thought of. And it does seem relevant that "scratch itch" has no logical justification (it's just a basic animal urge that would persist even if someone told you there was no biological cause of the itch and no way that not scratching it could hurt you), whereas wanting to meditate well has a long chain of logical explanations.

Comment by Scott Alexander (Yvain) on (Brainstem, Neocortex) ≠ (Base Motivations, Honorable Motivations) · 2021-07-13T18:14:55.530Z · LW · GW

Can you link to an explanation of why you're thinking of the brainstem as plan-evaluator? I always thought it was the basal ganglia.