Posts

Distilling and approaches to the determinant 2022-04-06T06:34:09.553Z
How can a layman contribute to AI Alignment efforts, given shorter timeline/doomier scenarios? 2022-04-02T04:34:47.154Z
[Link] sona ike lili 2022-04-01T18:57:06.373Z
Blatant Plot Hole in HPMoR [Spoilers] 2022-04-01T16:01:45.594Z
AprilSR's Shortform 2022-03-23T18:36:27.608Z
Arguments are good for helping people reason about things 2022-03-11T23:02:33.158Z
What (feasible) augmented senses would be useful or interesting? 2021-03-06T04:28:13.320Z

Comments

Comment by AprilSR on Sapphire Shorts · 2024-12-09T09:32:17.554Z · LW · GW

I don't necessarily agree with every line in this post—I'd say I'm better off and still personally kinda like Olivia, though it's of course been rocky at times—but it does all basically look accurate to me. She stayed at my apartment for maybe a total of 1-2 months earlier this year, and I've talked to her a lot. I don't think she presented the JD Pressman thing to me as being about "lying", but she did generally mention him convincing people to keep her out of things.

There is a lot more I could say, and I am as always happy to answer DMs and such, but I am somewhat tired of all this, and right at this moment I don't really want to either figure out exactly what things I feel are necessary to disclose about a friend of mine or try to figure out what would be a helpful contribution to years-old drama, given that it's 1:30am. But I do want to say that I basically think Melody's statements are all more-or-less reasonable.

Comment by AprilSR on Sapphire Shorts · 2024-12-09T05:04:10.320Z · LW · GW

Yeah, I don't think it's correct to call it baseless per se, and I continue to have a lot of questions about the history of the rationality community which haven't really been addressed publicly, but I would very much not say that there's good reason to like, directly blame Michael for anything recent!

Comment by AprilSR on Sapphire Shorts · 2024-12-09T04:19:07.330Z · LW · GW

I'd already been incredibly paranoid about how closely they follow my online activities for years and years. I dunno if that counts as "conspiratorial", but to the extent it does it definitely made me less conspiratorial.

I think when I was at my most psychotic, some completely deranged explanations for the "rationalists tend to be first-borns" thing crossed my mind, which I guess maybe counts, but those were quickly rejected.

I have conspiratorial interpretations of things at times, which I sorta attribute to the fact that rationalists talk about conspiracies quite a lot and such?

Comment by AprilSR on Sapphire Shorts · 2024-12-08T03:52:15.794Z · LW · GW

Nope. I've never directly interacted with Vassar at all, and I haven't made any particular decisions due to his ideas. Like, I've become more familiar with his work over the past several months, but it was one thing of many.

I spent a lot of time thinking about ontology and anthropics and religion and stuff... mostly I think the reason weird stuff happened to me at the same time as I learned more about Vassar is just that I started rethinking rather a lot of things at the same time, where "are Vassar's ideas worth considering?" was just one of many specific questions that came up. (Plausibly the expectation that Vassar's ideas might be dangerous turned slightly into a self-fulfilling prophecy, by making it more likely for me to expand on them in weirder directions or something.)

Comment by AprilSR on Hazard's Shortform Feed · 2024-12-08T02:11:45.059Z · LW · GW

I want to say I have to an extent (for all three), though I guess there have been second-hand in-person interactions, which maybe count. I dunno if there's any sort of central thesis I could summarize, but if you pointed me at, like, any more specific topics, I could take a shot at translating. (Though I'd maybe prefer to avoid the topic for a little while.)

In general, I think an actual analysis of the ideas involved and their merits/drawbacks would've been a lot more helpful for me than just... people having a spooky reputation was.

Comment by AprilSR on Sapphire Shorts · 2024-12-07T23:50:09.310Z · LW · GW

...Yeah, I'm well aware, but it's probably useful context.

Comment by AprilSR on Sapphire Shorts · 2024-12-07T20:17:21.707Z · LW · GW

It was historically a direct relationship, but afaik hasn't been very close in years.

Edit: Also, if the "Vassarites" are the type of group with "official stances", this is the first I've heard of it.

Comment by AprilSR on Sapphire Shorts · 2024-12-07T20:11:01.040Z · LW · GW

Not on LSD. I've done some emotional processing with others on MDMA, but I don't know if I'd describe it as "targeted work to change beliefs"; it was more stuff like "talk about my relationship with my family more openly than I'm usually able to."

I was introduced to belief reporting, but I didn't do very much of it and wasn't on drugs at the time.

Comment by AprilSR on Sapphire Shorts · 2024-12-07T20:07:25.834Z · LW · GW

I agree I am "more schizophrenic"; that's obvious. (Edit: Though I'd argue I'm less paranoid, and beforehand was somewhat in denial about how much paranoia I did have.) I very clearly do not fit the diagnostic criteria. Even if you set aside the six-month requirement, the only symptom I even arguably have is delusions, and you need multiple.

Comment by AprilSR on Sapphire Shorts · 2024-12-07T19:56:34.567Z · LW · GW

Yeah, I'm not meaning to actively suggest taking psychedelics with any of them.

Comment by AprilSR on Sapphire Shorts · 2024-12-07T09:59:41.643Z · LW · GW

Some discussion of coverups can be found at https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards.

Comment by AprilSR on Sapphire Shorts · 2024-12-07T09:56:27.032Z · LW · GW

I'd appreciate a rain check to think about the best way to approach things. I agree it's probably better for more details here to be common knowledge, but I'm worried about it turning into just, like, another unnuanced accusation? Vague worries about Vassarites being culty and bad did not help me; a grounded analysis of the precise details might have.

Comment by AprilSR on Sapphire Shorts · 2024-12-07T09:17:42.445Z · LW · GW

That's plausible. It was like a week and a half.

Edit: I do think the LSD was a contributing factor, but it's hard to separate the direct effects of the drug from the effects of it making it easier for me to question ontological assumptions.

Comment by AprilSR on Sapphire Shorts · 2024-12-07T09:16:25.975Z · LW · GW

I don't love ranking people in terms of harmfulness, but if you are going to do that instead of forming some more specific model, then yeah, I think there are very good reasons to hold this view. (Mostly because I think there's little reason to worry unusually much about anyone else Vassar-associated, though there could possibly be things I'm not aware of.)

Comment by AprilSR on Sapphire Shorts · 2024-12-07T08:59:01.864Z · LW · GW

No, I did not.

I have had LSD. I've taken like, 100μg maybe once, 50-75μg a couple times, 25ish once or twice. No lasting consequences that I would personally consider severe, though I think other people would disagree? Like, from my perspective, I have a couple weird long-shot hypotheses bouncing around my head that I haven't firmly disproven, but which mostly have no impact on my behavior other than making me act slightly superstitious at times.

I had a serious psychotic episode, like, once, which didn't involve any actual attempts to induce it, but did involve a moment where I was like "okay, trying to hold myself fully to normality here isn't really important, let's just actually think about the crazy hypotheses." I think I had 10mg cannabis a few days before that, and it'd been around a week and a half since I'd had any LSD. That was in late August.

Edit: Actually, for the sake of being frank here, I should make it clear that I'm not particularly anti-psychosis in all cases? Like, personally I think I've been sorta paranoid for my entire life, and... attempting to actually explicitly model things instead of just having vague uncomfortable feelings might've been good, even if the models were crazy... I dunno how accurate this is, but it's possible to tell a story where I had some crazy things compartmentalized which I needed to process. How much that generalizes to other people is very much arguable, but I don't personally feel "stay as far away as you possibly can from any mental states that might be considered sorta psychotic-adjacent" would be universally good advice.

But like, no, I was not at any point trying to induce psychosis, that's just my perspective on it in retrospect.

Comment by AprilSR on Sapphire Shorts · 2024-12-07T08:25:14.894Z · LW · GW

(I am happy to answer questions I just don't want to get into an argument.)

Comment by AprilSR on Sapphire Shorts · 2024-12-07T08:21:42.258Z · LW · GW

I don't actually want to litigate the details here, but I think describing me as "literally schizophrenic" is taking things a bit far.

Comment by AprilSR on "The Solomonoff Prior is Malign" is a special case of a simpler argument · 2024-11-18T16:37:53.579Z · LW · GW

In case it's a helpful data point: lines of reasoning sorta similar to the ones around the infohazard warning seemed to have interesting and intense psychological effects on me one time. It's hard to separate out from other factors, though, and I think it had something to do with the fact that lately I've been spending a lot of time learning to take ideas seriously on an emotional level instead of only an abstract one.

Comment by AprilSR on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-08T02:45:43.201Z · LW · GW

I mostly think it's too loose a heuristic, and that you should dig into more details.

Comment by AprilSR on 2024 Unofficial LW Community Census, Request for Comments · 2024-11-01T22:38:56.610Z · LW · GW

Some of the probability questions (many worlds, simulation) are like... ontologically weird enough that I'm not entirely certain it makes sense to assign probabilities to them? It doesn't really feel like they pay rent in anticipated experience?

I'm not sure "speaking the truth even when it's uncomfortable" is the kind of skill it makes sense to describe yourself as "comfortable" with.

Comment by AprilSR on Alexander Gietelink Oldenziel's Shortform · 2024-11-01T04:17:09.641Z · LW · GW

I think it's pretty good to keep in mind that heliocentrism is, literally speaking, just a change in what coordinate system you use, but that it is legitimately a much more convenient coordinate system.
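
A rough sketch of what the coordinate change looks like (my notation): writing $\vec{r}_P(t)$ for a planet's position relative to the Sun and $\vec{r}_E(t)$ for Earth's, the planet's geocentric position is just

$$\vec{r}_{P,\mathrm{geo}}(t) = \vec{r}_P(t) - \vec{r}_E(t),$$

so a near-circular heliocentric orbit becomes a sum of two circular motions (a deferent plus an epicycle) in geocentric coordinates. Same physics, messier bookkeeping.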

Comment by AprilSR on [Intuitive self-models] 7. Hearing Voices, and Other Hallucinations · 2024-10-29T13:54:31.424Z · LW · GW

> Switch to neuroscience. I think we have an innate “sense of sociality” in our brainstem (or maybe hypothalamus), analogous to how (I claim) fear-of-heights is triggered by an innate brainstem “sense” that we’re standing over a precipice.

I think I've noticed lately that how much written text triggers this for me varies a bit over time?

Comment by AprilSR on Why is there Nothing rather than Something? · 2024-10-27T19:13:39.134Z · LW · GW

...Does that hold together as a potential explanation for why our universe is so young? Huh.

Comment by AprilSR on leogao's Shortform · 2024-10-18T18:50:10.973Z · LW · GW

I think my ideal is to lean into weirdness in a way that doesn't rely on ignorance of normal conventions.

Comment by AprilSR on sarahconstantin's Shortform · 2024-10-07T21:39:43.877Z · LW · GW

For a while I ended up spending a lot of time thinking about specifically the versions of the idea where I couldn't easily tell how true they were... which I suppose I do think is the correct place to be paying attention?

Comment by AprilSR on Extended Interview with Zhukeepa on Religion · 2024-10-05T00:42:29.839Z · LW · GW

I think there is rather a lot of soap to be found... but it's very much not something you can find by taking official doctrine as an actual authority.

Comment by AprilSR on A Psychoanalytic Explanation of Sam Altman's Irrational Actions · 2024-09-30T00:16:52.505Z · LW · GW

That does seem likely.

Comment by AprilSR on Any real toeholds for making practical decisions regarding AI safety? · 2024-09-30T00:15:38.881Z · LW · GW

There's a complication where sometimes it's very difficult to get people not to interpret things as an instruction. "Confuse them" seems to work, I guess, but it does have drawbacks too.

Comment by AprilSR on Any real toeholds for making practical decisions regarding AI safety? · 2024-09-30T00:12:48.454Z · LW · GW

I don't really have a good idea of the principles here. Personally, whenever I've made a big difference in a person's life (and it's been obvious to me that I've done so), I try to take care of them as much as I can and make sure they're okay.

...However, I have run into a couple issues with this. Sometimes someone or something takes too much energy, and some distance is healthier. I don't know how to judge this other than by intuition, but I think I've gone too far before?

And I have no idea how much this can scale. I think I've had far bigger impacts than I've intended, in some cases. One time I had a friend who was really in trouble and I had to go to pretty substantial lengths to get them to a better place, and I'm not sure all versions of them would've endorsed that, even if they do now.

...But, broadly, "do what you can to empower other people to make their own decisions, when you can, instead of trying to tell them what to do" does seem like a good principle, especially for the people who have more power in a given situation? I definitely haven't treated this as an absolute rule, but in most cases I'm pretty careful not to stray from it.

Comment by AprilSR on A Psychoanalytic Explanation of Sam Altman's Irrational Actions · 2024-09-29T23:50:33.967Z · LW · GW

I don't really think money is the only plausible explanation, here?

Comment by AprilSR on 2024 Petrov Day Retrospective · 2024-09-29T23:39:45.010Z · LW · GW

I think the game is sufficiently difficult.

Comment by AprilSR on Mythic Mode · 2024-09-28T23:45:15.656Z · LW · GW

I read this post several years ago, but I was... basically just trapped in a "finishing high school and then college" narrative at the time; it didn't really seem like I could use this idea to actually make any changes in my life... And then a few months ago, as I was finishing up my last semester of college, I sort of fell headfirst into Mythic Mode without understanding what I was doing very much at all.

And I'd say it made a lot of things better, definitely—the old narrative was a terrible one for me—but it was rocky in some ways, and... like, obviously thoughts like "confirmation bias" etc. were occurring to me, but "there are biases involved here" doesn't really, in and of itself, tell you what to do?

It would make sense if there's some extent to which everyone who spent the first part of their life following along with a simple "go to school and then get a job i guess" script is going to have a substantial adjustment period once they start having some more interesting life experiences, but... it also seems plausible that if I'd read a lot more about this sort of thing, I'd've been better equipped.

Comment by AprilSR on A Path out of Insufficient Views · 2024-09-26T01:17:51.860Z · LW · GW

To have a go at it:

Some people try to implement a decision-making strategy that's like, "I should focus mostly on System 1" or "I should focus mostly on System 2." But this isn't really the point. The goal is to develop an ability to judge which scenarios call for which types of mental activities, and to be able to combine System 1 and System 2 together fluidly as needed.

Comment by AprilSR on Struggling like a Shadowmoth · 2024-09-24T19:02:52.543Z · LW · GW

Thank you.

Comment by AprilSR on Another argument against maximizer-centric alignment paradigms · 2024-09-22T17:53:42.314Z · LW · GW

I, similarly, am pretty sure I had a lot of conformist-ish biases that prevented me from seriously considering lines of argument like this one.

Like, I'm certainly not entirely sure how strong this (and related) reasoning is, but it's definitely something one ought to seriously think about.

Comment by AprilSR on We Don't Know Our Own Values, but Reward Bridges The Is-Ought Gap · 2024-09-19T23:44:55.092Z · LW · GW

This post definitely resolved some confusions for me. There are still a whole lot of philosophical issues, but it's very nice to have a clearer model of what's going on with the initial naïve conception of value.

Comment by AprilSR on I finally got ChatGPT to sound like me · 2024-09-17T20:11:10.695Z · LW · GW

I do actually think my practice of rationality benefited from spending some time seriously grappling with the possibility that everything I knew was wrong. Like, yeah, I did quickly reaccept many things, but it was still a helpful exercise.

Comment by AprilSR on My AI Model Delta Compared To Christiano · 2024-09-14T21:39:06.500Z · LW · GW

This feels more like an argument that Wentworth's model is low-resolution than that he's actually misidentified where the disagreement is?

Comment by AprilSR on The Standard Analogy · 2024-09-02T22:15:58.184Z · LW · GW

Huh. I... think I kind of do care terminally? Or maybe I'm just having a really hard time imagining what it would be like to be terrible at predicting sensory input without this having a bunch of negative consequences.

Comment by AprilSR on The Standard Analogy · 2024-09-02T19:59:29.648Z · LW · GW

you totally care about predicting sensory inputs accurately! maybe mostly instrumentally, but you definitely do? like, what, would it just not bother you at all if you started hallucinating all the time?

Comment by AprilSR on ... Wait, our models of semantics should inform fluid mechanics?!? · 2024-08-26T16:58:45.873Z · LW · GW

> Probably many people who are into Eastern spiritual woo would make that claim. Mostly, I expect such woo-folk would be confused about what “pointing to a concept” normally is and how it’s supposed to work: the fact that the internal concept of a dog consists of mostly nonlinguistic stuff does not mean that the word “dog” fails to point at it.

On my model, koans and the like are trying to encourage a particular type of realization or insight. I'm not sure whether the act of grokking an insight counts as a "concept", but it can be hard to clearly describe an insight in a way that actually causes it? But that's mostly a deficiency in vocabulary, plus the fact that you're trying to explain a (particular instance of a) thing to someone who has never witnessed it.

Comment by AprilSR on Coalitional agency · 2024-08-12T21:33:05.497Z · LW · GW

> Robin Hanson has written about organizational rot: the breakdown of modularity within an organization, in a way which makes it increasingly dysfunctional. But this is exactly what coalitional agency induces, by getting many different subagents to weigh in on each decision.

I speculate (loosely based on introspective techniques and models of human subagents) that the issue isn't exactly the lack of modularity: when modularity breaks down over time, subagents compete to find better ways to work around it, which creates more zero-sum-ish dynamics. (Or maybe it's more that techniques for working around modularity can produce an inaction bias?) But if you intentionally allow subagents to weigh in, they may be more able to negotiate and come up with productive compromises.

Comment by AprilSR on We’re not as 3-Dimensional as We Think · 2024-08-04T22:12:31.291Z · LW · GW

I think I have a much easier time imagining a 3D volume if I'm imagining, like, a structure I can walk through? Like I'm still not getting the inside of any objects per se, but... like, a complicated structure made out of thin surfaces that have holes in them or something is doable?

Basically, I can handle 3D, but I won't by default get all the 3D-ish details right unless I meaningfully interact with the full volume of the object.

Comment by AprilSR on On passing Complete and Honest Ideological Turing Tests (CHITTs) · 2024-07-10T06:46:40.983Z · LW · GW

This does necessitate that the experts actually have the ability to tell when an argument is bad.

Comment by AprilSR on Is being a trans woman (or just low-T) +20 IQ? · 2024-04-25T07:47:55.861Z · LW · GW

All the smart trans girls I know were also smart prior to HRT.

Comment by AprilSR on ProjectLawful.com: Eliezer's latest story, past 1M words · 2024-01-18T23:22:10.914Z · LW · GW

I feel like Project Lawful, as well as many of Lintamande's other glowfic since then, has given me a much deeper understanding of... a collection of virtues including honor, honesty, trustworthiness, etc., which I now mostly think of collectively as "Law".

I think this has been pretty valuable for me on an intellectual level—I think, if you show me some sort of deontological rule, I'm going to give a better account of why/whether it's a good idea to follow it than I would have before I read any glowfic.

It's difficult for me to separate how much of that is due to Project Lawful in particular, because ultimately I've just read a large body of work, all of which provided some amount of training data showing a particular sort of thought pattern which I've since learned. But I think this particular fragment of the rationalist community has given me some valuable new ideas, and it'd be great to figure out a good way of acknowledging that.

Comment by AprilSR on Gender Exploration · 2024-01-14T23:17:48.605Z · LW · GW

i think they presented a pretty good argument that it is actually rather minor

Comment by AprilSR on Staring into the abyss as a core life skill · 2023-12-21T19:10:37.159Z · LW · GW

While the idea that it's important to look at the truth even when it hurts isn't revolutionary in the community, I think this post gave me a much more concrete model of the benefits. Sure, I knew the abstract arguments that facing the truth is valuable, but I don't know if I'd have identified it as an essential skill for starting a company, or a lack of it as a critical component of staying in a bad relationship. (I think my model of bad relationships was that people knew leaving was a good idea, but were unable to act on that information—but in retrospect, inability to even consider it totally might be what's going on some of the time.)

Comment by AprilSR on [deleted post] 2023-12-17T08:09:25.806Z

> So if a UFO lands in your backyard and aliens ask you if you want to go on a magical (but not particularly instrumental) space adventure with them, I think it's reasonable to very politely decline, and get back to work solving alignment.

I think I'd probably go for that, actually, if there isn't some specific reason to very strongly doubt it could possibly help? It seems somewhat more likely that I'll end up decisive via space adventure than by mundane means, even if there's no obvious way the space adventure will contribute.

This is different if you're already in a position where you're making substantial progress, though.

Comment by AprilSR on AI Views Snapshots · 2023-12-13T00:50:03.177Z · LW · GW

Here's mine