I don't necessarily agree with every line in this post—I'd say I'm better off and still personally kinda like Olivia, though it's of course been rocky at times—but it does all basically look accurate to me. She stayed at my apartment for maybe a total of 1-2 months earlier this year, and I've talked to her a lot. I don't think she presented the JD Pressman thing as about "lying" to me, but she did generally mention him convincing people to keep her out of things.
There is a lot more I could say, and I am as always happy to answer dms and such, but I am somewhat tired of all this and I don't right at this moment really want to either figure out exactly what things I feel are necessary to disclose about a friend of mine or try to figure out what would be a helpful contribution to years old drama, given that it's 1:30am. But I do want to say that I basically think Melody's statements are all more-or-less reasonable.
Yeah, I don't think it's correct to call it baseless per se, and I continue to have a lot of questions about the history of the rationality community which haven't really been addressed publicly, but I would very much not say that there's good reason to like, directly blame Michael for anything recent!
I'd already been incredibly paranoid about how closely they follow my online activities for years and years. I dunno if that counts as "conspiratorial", but to the extent it does it definitely made me less conspiratorial.
I think when I was at my most psychotic some completely deranged explanations for the "rationalists tend to be first borns" thing crossed my mind, which I guess maybe counts, but that was quickly rejected.
I have conspiratorial interpretations of things at times, which I sorta attribute to the fact that rationalists talk about conspiracies quite a lot and such?
Nope. I've never directly interacted with Vassar at all, and I haven't made any particular decisions at all due to his ideas. Like, I've become more familiar with his work as of the past several months, but it was one thing of many.
I spent a lot of time thinking about ontology and anthropics and religion and stuff... mostly I think the reason weird stuff happened to me at the same time as I learned more about Vassar is just that I started rethinking rather a lot of things at once, where "are Vassar's ideas worth considering?" was just one of many specific questions that came up. (Plausibly the expectation that Vassar's ideas might be dangerous turned slightly into a self-fulfilling prophecy by making it more likely for me to expand on them in weirder directions or something.)
I want to say I have to an extent (for all three), though I guess there's been second-hand in person interactions which maybe counts. I dunno if there's any sort of central thesis I could summarize, but if you pointed me at like any more specific topics I could take a shot at translating. (Though I'd maybe prefer to avoid the topic for a little while.)
In general, I think an actual analysis of the ideas involved and their merits/drawbacks would've been a lot more helpful to me than just... people having a spooky reputation.
...Yeah, I'm well aware, but it's probably useful context.
It was historically a direct relationship, but afaik hasn't been very close in years.
Edit: Also, if the "Vassarites" are the type of group with "official stances", this is the first I've heard of it.
Not on LSD, I've done some emotional processing with others on MDMA but I don't know if I'd describe it as "targeted work to change beliefs", it was more stuff like "talk about my relationship with my family more openly than I'm usually able to."
I was introduced to belief reporting, but I didn't do very much of it and wasn't on drugs at the time.
I agree I am "more schizophrenic", that's obvious. (Edit: Though I'd argue I'm less paranoid, and beforehand was somewhat in denial about how much paranoia I did have.) I very clearly do not fit the diagnosis criteria. Even if you set aside the six months requirement, the only symptom I even arguably have is delusions and you need multiple.
Yeah, I'm not meaning to actively suggest taking psychedelics with any of them.
Some discussion of coverups can be found at https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards.
I'd appreciate a rain check to think about the best way to approach things. I agree it's probably better for more details here to be common knowledge but I'm worried about it turning into just like, another unnuanced accusation? Vague worries about Vassarites being culty and bad did not help me, a grounded analysis of the precise details might have.
That's plausible. It was like a week and a half.
Edit: I do think the LSD was a contributing factor, but it's hard to separate the direct effects of the drug from the effects of it making it easier for me to question ontological assumptions.
I don't love ranking people in terms of harmfulness but if you are going to do that instead of forming some more specific model then yeah I think there are very good reasons to hold this view. (Mostly because I think there's little reason to worry at all unusually much about anyone else Vassar-associated, though there could possibly be things I'm not aware of.)
No, I did not.
I have had LSD. I've taken like, 100μg maybe once, 50-75 a couple times, 25ish once or twice. No lasting consequences that I would personally consider severe, though other people would disagree I think? Like, from my perspective I have a couple weird long-shot hypotheses bouncing around my head that I haven't firmly disproven but which mostly have no impact on my behavior other than making me act slightly superstitious at times.
I had a serious psychotic episode, like, once, which didn't involve any actual attempts to induce it but did involve a moment where I was like "okay trying to hold myself fully to normality here isn't really important, let's just actually think about the crazy hypotheses." I think I had 10mg cannabis a few days before that, and it'd been around a week and a half since I'd had any LSD. That was in late August.
Edit: Actually, for the sake of being frank here, I should make it clear that I'm not particularly anti-psychosis in all cases? Like, personally I think I've been sorta paranoid for my entire life and like... attempting to actually explicitly model things instead of just having vague uncomfortable feelings might've been good, even if they were crazy... I dunno how accurate this is but it's possible to tell a story where I had some crazy things compartmentalized which I needed to process. How much that generalizes to other people is very much arguable, but I don't personally feel "stay as far away as you possibly can from any mental states that might be considered sorta psychotic-adjacent" would be universally good advice.
But like, no, I was not at any point trying to induce psychosis, that's just my perspective on it in retrospect.
(I am happy to answer questions I just don't want to get into an argument.)
I don't actually want to litigate the details here, but I think describing me as "literally schizophrenic" is taking things a bit far.
In case it's a helpful data point: lines of reasoning sorta similar to the ones around the infohazard warning seemed to have interesting and intense psychological effects on me one time. It's hard to separate out from other factors, though, and I think it had something to do with the fact that lately I've been spending a lot of time learning to take ideas seriously on an emotional level instead of only an abstract one.
I mostly think it's too loose a heuristic and that you should dig into more details
Some of the probability questions (many worlds, simulation) are like... ontologically weird enough that I'm not entirely certain it makes sense to assign probabilities to them? It doesn't really feel like they pay rent in anticipated experience?
I'm not sure "speaking the truth even when it's uncomfortable" is the kind of skill it makes sense to describe yourself as "comfortable" with.
I think it's pretty good to keep in mind that heliocentrism is, literally speaking, just a change in what coordinate system you use, but it is legitimately a much more convenient coordinate system.
Switch to neuroscience. I think we have an innate “sense of sociality” in our brainstem (or maybe hypothalamus), analogous to how (I claim) fear-of-heights is triggered by an innate brainstem “sense” that we’re standing over a precipice.
I think lately I've noticed that how much written text triggers this for me varies a bit over time?
...Does that hold together as a potential explanation for why our universe is so young? Huh.
I think my ideal is to lean into weirdness in a way that doesn't rely on ignorance of normal conventions
For a while I ended up spending a lot of time thinking about specifically the versions of the idea where I couldn't easily tell how true they were... which I suppose I do think is the correct place to be paying attention to?
I think there is rather a lot of soap to be found... but it's very much not something you can find by taking official doctrine as an actual authority.
That does seem likely.
There's a complication where sometimes it's very difficult to get people not to interpret things as an instruction. "Confuse them" seems to work, I guess, but it does have drawbacks too.
I don't really have a good idea of the principles, here. Personally, whenever I've made a big difference in a person's life (and it's been obvious to me that I've done so), I try to take care of them as much as I can and make sure they're okay.
...However, I have run into a couple of issues with this. Sometimes someone or something takes too much energy, and some distance is healthier. I don't know how to judge this other than by intuition, but I think I've gone too far before?
And I have no idea how much this can scale. I think I've had far bigger impacts than I've intended, in some cases. One time I had a friend who was really in trouble and I had to go to pretty substantial lengths to get them to a better place, and I'm not sure all versions of them would've endorsed that, even if they do now.
...But, broadly, "do what you can to empower other people to make their own decisions, when you can, instead of trying to tell them what to do" does seem like a good principle, especially for the people who have more power in a given situation? I definitely haven't treated this as an absolute rule, but in most cases I'm pretty careful not to stray from it.
I don't really think money is the only plausible explanation, here?
I think the game is sufficiently difficult.
I read this post several years ago, but I was... basically just trapped in a "finishing high school and then college" narrative at the time, it didn't really seem like I could use this idea to actually make any changes in my life... And then a few months ago, as I was finishing up my last semester of college, I sort of fell head first into Mythic Mode without understanding what I was doing very much at all.
And I'd say it made a lot of things better, definitely—the old narrative was a terrible one for me—but it was rocky in some ways, and... like, obviously thoughts like "confirmation bias" etc were occurring to me, but "there are biases involved here" doesn't, really, in and of itself tell you what to do?
It would make sense if there's some extent to which everyone who spent the first part of their life following along with a simple "go to school and then get a job i guess" script is going to have a substantial adjustment period once they start having some more interesting life experiences, but... also seems plausible that if I'd read a lot more about this sort of thing I'd've been better equipped.
To have a go at it:
Some people try to implement a decision-making strategy that's like, "I should focus mostly on System 1" or "I should focus mostly on System 2." But this isn't really the point. The goal is to develop an ability to judge which scenarios call for which types of mental activities, and to be able to combine System 1 and System 2 together fluidly as needed.
I, similarly, am pretty sure I had a lot of conformist-ish biases that prevented me from seriously considering lines of argument like this one.
Like, I'm certainly not entirely sure how strong this (and related) reasoning is, but it's definitely something one ought to seriously think about.
This post definitely resolved some confusions for me. There are still a whole lot of philosophical issues, but it's very nice to have a clearer model of what's going on with the initial naïve conception of value.
I do actually think my practice of rationality was benefited by spending some time seriously grappling with the possibility that everything I knew was wrong. Like, yeah, I did quickly reaccept many things, but it was still a helpful exercise.
This feels more like an argument that Wentworth's model is low-resolution than that he's actually misidentified where the disagreement is?
Huh. I... think I kind of do care terminally? Or maybe I'm just having a really hard time imagining what it would be like to be terrible at predicting sensory input without this having a bunch of negative consequences.
you totally care about predicting sensory inputs accurately! maybe mostly instrumentally, but you definitely do? like, what, would it just not bother you at all if you started hallucinating all the time?
Probably many people who are into Eastern spiritual woo would make that claim. Mostly, I expect such woo-folk would be confused about what “pointing to a concept” normally is and how it’s supposed to work: the fact that the internal concept of a dog consists of mostly nonlinguistic stuff does not mean that the word “dog” fails to point at it.
On my model, koans and the like are trying to encourage a particular type of realization or insight. I'm not sure whether the act of grokking an insight counts as a "concept", but it can be hard to clearly describe an insight in a way that actually causes it? But that's mostly deficiency in vocab plus the fact that you're trying to explain a (particular instance of a) thing to someone who has never witnessed it.
Robin Hanson has written about organizational rot: the breakdown of modularity within an organization, in a way which makes it increasingly dysfunctional. But this is exactly what coalitional agency induces, by getting many different subagents to weigh in on each decision.
I speculate (loosely based on introspective techniques and models of human subagents) that the issue isn't exactly the lack of modularity: when modularity breaks down over time, this leads to subagents competing to find better ways to work around the modularity, and creates more zero-sum-ish dynamics. (Or maybe it's more that techniques for working around modularity can produce an inaction bias?) But if you intentionally allow subagents to weigh in, they may be more able to negotiate and come up with productive compromises.
I think I have a much easier time imagining a 3D volume if I'm imagining, like, a structure I can walk through? Like I'm still not getting the inside of any objects per se, but... like, a complicated structure made out of thin surfaces that have holes in them or something is doable?
Basically, I can handle 3D, but I won't by default have all the 3Dish details correctly unless I meaningfully interact with the full volume of the object.
This does necessitate that the experts actually have the ability to tell when an argument is bad.
All the smart trans girls I know were also smart prior to HRT.
I feel like Project Lawful, as well as many of Lintamande's other glowfic since then, has given me a much deeper understanding of... a collection of virtues including honor, honesty, trustworthiness, etc., which I now mostly think of collectively as "Law".
I think this has been pretty valuable for me on an intellectual level—I think, if you show me some sort of deontological rule, I'm going to give a better account of why/whether it's a good idea to follow it than I would have before I read any glowfic.
It's difficult for me to separate how much of that is due to Project Lawful in particular, because ultimately I've just read a large body of work which all had some amount of training data showing a particular sort of thought pattern which I've since learned. But I think this particular fragment of the rationalist community has given me some valuable new ideas, and it'd be great to figure out a good way of acknowledging that.
i think they presented a pretty good argument that it is actually rather minor
While the concept that looking at the truth even when it hurts is important isn't revolutionary in the community, I think this post gave me a much more concrete model of the benefits. Sure, I knew about the abstract arguments that facing the truth is valuable, but I don't know if I'd have identified it as an essential skill for starting a company, or as being a critical component of staying in a bad relationship. (I think my model of bad relationships was that people knew leaving was a good idea, but were unable to act on that information—but in retrospect inability to even consider it totally might be what's going on some of the time.)
So if a UFO lands in your backyard and aliens ask you if you want to go on a magical (but not particularly instrumental) space adventure with them, I think it's reasonable to very politely decline, and get back to work solving alignment.
I think I'd probably go for that, actually, if there isn't some specific reason to very strongly doubt it could possibly help? It seems somewhat more likely that I'll end up decisive via space adventure than by mundane means, even if there's no obvious way the space adventure will contribute.
This is different if you're already in a position where you're making substantial progress though.