Comment by rsaarelm on Why didn't we find katas for rationality? · 2021-09-14T19:56:04.888Z · LW · GW

I think it's an issue of "inside the box" vs "open-ended" fields, which we don't really have good vocabulary for. 'Katas' work great for sports that are very much inside the box. You can innovate new strategies, but the rules of the sport set up an unchanging microworld that you must stay inside. Coincidentally, these are also areas where even current-day AIs often dominate. Established scientific disciplines with research programs are sort of half and half. You can train people in them, but they can also benefit from serious paradigm shifts, and there aren't any a priori hard and fast rules about things that absolutely can't be done, like the rules of chess in the chess-playing domain.

Then there's proto-science when things haven't coalesced into a discipline yet, philosophy when it hasn't been professionalized to death, Kegan's stage 5. This is raw pattern matching, flashes of insight, original seeing, very open ended exploration of an unknown landscape. I don't think anyone has had much of an idea for how to systematically train people for this. This place is also where a lot of the actually efficient rationality practice lives.

Comment by rsaarelm on Review of A Map that Reflects the Territory · 2021-09-13T04:40:59.170Z · LW · GW

So, somewhat inconsequential stylistic thing. I open a PDF link, see it's written in LaTeX, I start expecting something written more or less like an academic paper. This is written in very much a chatty, free-flowing blog post style, with jokes like calling neologisms "newords", so the whole thing feels a bit more off-kilter than was intended. This style of writing would probably work better as an HTML blog post (which could then be posted directly as a Lesswrong post here instead of hosted elsewhere and linked).

Comment by rsaarelm on Hope and False Hope · 2021-09-05T10:56:27.628Z · LW · GW

One thing I've started thinking more after first hearing of cryonics is that keeping an organization around and alive in the long term, order-of-magnitude centuries, is really hard. One of the first ways to fail in the O-ring chain of cryonics leading to successful revivification is the cryonics organization storing the vitrified bodies dissolving or becoming terminally incompetent and the bodies melting and rotting.

Concerns about health care system dysfunction notwithstanding, there is still very thick social proof that seeing an accredited doctor is a net positive when you're ill, and also that the medical system will continue to be reasonably reliable and socially supported, so that a medicine you rely on in the long term suddenly becoming unavailable is an alarming and unexpected event, rather than a common occurrence. The social proof of cryonics orgs is mostly that they're sort of there, about as notable as they were ten or twenty years ago and they have absolutely no buy-in from wider society or legislature. The buy-in would create expectations that random emergency responders and medical personnel will help fulfill your cryonics contract when you're incapable of action or that there would be some reaction other than "good riddance to the charlatans" if the orgs look like they're about to go under.

As it stands, I can apply an abstraction "if I get sick, I can go to the hospital", because "hospital" is a robust category in the wider society. I do not feel like I can currently similarly abstractly state "I make a contract with a cryonics facility to have myself cryopreserved when I'm clinically dead", because there currently isn't a social category of "cryonics facility" like there is one of "hospital". There is a small handful of particular cryonics organizations of varying appearances of competency, founded and run by people operating from a particular late 20th century techno-optimistic subculture (the one that had things like Extropianism come out of it), which seems to be both in decline and actively shunned by many ideologues of a more recent cultural zeitgeist. As it stands, I'm entirely indifferent about a hospital CEO retiring because I'm quite confident the wider society has the will and ability to perpetuate the hospital organization, but I'm quite a bit concerned about what will happen with the present-day cryonics orgs when their CEOs retire, because the orgs have no similar societal support network and it also looks like we might be moving on from the cultural period that inspired competent people to found or join cryonics orgs.

Comment by rsaarelm on Petition To Make Inarticulate Downvoting More Difficult · 2021-09-02T04:54:12.728Z · LW · GW

Some things are a question of common sense or common forum etiquette, not of following a specific style guide. You're expected to have enough other-modeling ability to see what it looks like from the outside when you show up with an account less than a week old, get a negative reaction to your stuff, and then move on to propose changes to site rules.

Comment by rsaarelm on Petition To Make Inarticulate Downvoting More Difficult · 2021-09-01T17:30:39.056Z · LW · GW

How much have you interacted with strangers on anything intellectual in your life so far? You come off as not really realizing yet that communities have different communication styles and expectations and that you need to understand and learn the local customs before you'll get a good reception.

For example, if you are getting downvoted a lot and don't know why, you might make a comment on an open thread saying something like "Hey guys, looks like my stuff is getting downvoted a lot and I'm not sure why, can you tell me what I'm doing wrong?" You should probably not start by proposing changes to the fundamental workings of the forum.

Comment by rsaarelm on Prisoner's Dilemmas : Altruism :: Battles of the Sexes : Convention · 2021-02-02T11:49:32.726Z · LW · GW

Relevant SSC: Setting the Default

Comment by rsaarelm on 100 Tips for a Better Life · 2020-12-26T06:40:19.590Z · LW · GW

Can second the not-driving-a-car commute thing. A long commute by bus I used to have amounted to 5 km of walking going to and from the bus stops, with optional podcast listening, and an hour of focused book-reading time every day. It made a big extra dent in my schedule, but walking and book-reading are both things I'd want to be doing regularly in any case.

Comment by rsaarelm on What are the unwritten rules of academia? · 2020-12-26T06:25:28.630Z · LW · GW

Given that the rules partially exist to keep outsiders with guidebooks from barging in and ruining the party, probably not very good ones. I guess someone might write a somewhat tongue-in-cheek anthropology book like Kate Fox's Watching the English, but that would require a sort of relaxed attitude to absorb it, and reading it with a rigid "I must obey the precepts to succeed" mindset probably wouldn't end well. Productively learning stuff of this sort from books instead of social immersion is its own kind of extra hard mode, whose nature is very rarely explicated because book-learning unwritten rules is taboo.

What's your general career plan here? If you just want to learn academic results and apply them by eg. becoming a data scientist (not an actual scientist, you can tell because there's "scientist" in the name), you should be fine. Basically anything up to a master's degree and going off to work in industry and you can be completely oblivious. Are you planning on going into something like math where you can basically be a crazy hermit and still do groundbreaking stuff? Again, you can just go do you. The point where you really need to know the local culture is if you're trying to build a regular academic career where you are employed as a researcher in an academic institution, are publishing frequently in peer-reviewed journals and are trying to get on a tenure track for professorship. So, is this specifically what you're after?

Comment by rsaarelm on How to practice rationality? · 2020-12-25T08:21:55.137Z · LW · GW

There might not really be good answers to this. Most rationality stuff is meta-level practice to apply to object-level activities, and "daily/routine practice" is very much an object-level thing. The idea that there's a practice regimen for rationality that looks like the existing school curriculums we've all been trained to expect a practice regimen to look like feels related to the failed idea (see also) that we could use the existing school curriculum model to teach critical thinking.

So the boring advice might be: have an object-level craft of the sort you might study for a university degree (medicine, law, engineering, science, pie-making) that you are learning. Try to get very good at it. Study rationality techniques as tools to help you get very good at the object-level craft. Skipping the object-level craft is like trying to go from Kegan stage 3 to Kegan stage 5, which doesn't work if you skip stage 4.

Comment by rsaarelm on The tech left behind · 2020-11-18T17:31:30.519Z · LW · GW

Still worse than a computer, since they can't take feedback on words that you've learned better. It only works if your learning rates for different words are what the tape maker expected.

Also this won't work for the end run of spaced repetition where a well-practiced card might pop up a year after it was last reviewed. The long-lived cards are going to be a very eclectic mix. Then again, school courses usually don't expect you to retain the stuff from each course past the duration of the course, so this isn't that much of a shortcoming for education.
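A rough sketch of why per-item adaptation matters here: in SM-2-style schedulers the review interval is per-card state that grows multiplicatively on success and resets on failure, which is exactly what a fixed tape ordering can't emulate. (The numbers below are illustrative, not any real scheduler's exact parameters.)

```rust
/// Per-card review state, roughly in the style of the SM-2 family
/// of spaced repetition algorithms.
struct Card {
    interval_days: f64,
    ease: f64,
}

impl Card {
    fn review(&mut self, remembered: bool) {
        if remembered {
            // Success: the next gap stretches multiplicatively.
            self.interval_days *= self.ease;
        } else {
            // Forgotten: start over, with a slightly lower growth rate.
            self.interval_days = 1.0;
            self.ease = (self.ease - 0.2).max(1.3);
        }
    }
}

fn main() {
    let mut card = Card { interval_days: 1.0, ease: 2.5 };
    for _ in 0..6 {
        card.review(true);
    }
    // Six successful reviews push the next one most of a year out.
    println!("next review in {:.0} days", card.interval_days);
}
```

After a few years the set of due cards on any given day is that very eclectic mix, with each card's schedule having diverged according to its own success history.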

Comment by rsaarelm on Has anyone written stories happening in Hanson's em world? · 2020-09-22T04:25:20.397Z · LW · GW

Black Mirror episode White Christmas isn't explicitly based on Hanson's stuff but has a very similar premise.

Comment by rsaarelm on If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach? · 2020-09-03T09:13:59.362Z · LW · GW

We're already drowning in inert content; I don't see how adding more would help. We've had a way to get something like the martial art of rationality since ancient Athens: structured interaction with an actual human mentor who knows how to engage with the surrounding world and can teach and train other people face to face. This thing isn't mechanizable the way arithmetic or algebra is, so simple interactive programs are not going to be much better than a regular book. Nor is it a non-mechanizable but still clearly delimited topic like wood-carving or playing tennis, where you can at least say you're unquestionably doing the thing when going it alone, even though you might do better with some professional training. What you're trying to teach is the human ability to observe an unexpected situation, make sense of it, and respond sensibly to it at a level above baseline adult competency, and the one way we know how to teach that is to have someone competent in it whom you can interact with.

Like, yeah, maybe this will help, but I can't help but feel that people are compulsively eating ice and this is planning an ice shavings machine for your kitchen instead of getting an appointment to have your blood work done.

Comment by rsaarelm on Is there any scientific evidence for benefits of meditation? · 2020-05-10T14:28:31.172Z · LW · GW

"What can we know about what happens to other people when they practice meditation" is a different (and important) question from "what is the best mindset for personally making progress with the practice of meditation" though.

Comment by rsaarelm on Hello, is it you I'm looking for? · 2020-02-05T17:34:11.824Z · LW · GW

The problem is that we think statements have a somewhat straightforward relation to reality because we can generally make sense of them quite easily. In reality it turns out that that ease comes from a lot of hidden work our brain does being smart on the spot every time it needs to fit a given sentence to the given state of reality, and nobody really appreciated this until people started trying to build AIs that do anything similar and repeatedly ended up with things with no ability to distinguish between things that are realistically plausible and incoherent nonsense.

I'm not really sure how to communicate this effectively beyond gesturing at the sorry history of the artificial intelligence research program from the 1950s onwards despite thousands of extremely clever people putting their minds to it. The sequences ESrogs suggests in the sibling reply also deal with stuff like this.

Comment by rsaarelm on Hello, is it you I'm looking for? · 2020-01-30T06:30:02.492Z · LW · GW

Your first problem is that you need a theory for just how do statements relate to the state of the world. Have you read Wittgenstein's Philosophical Investigations?

Overall, this basically sounds like analytical philosophy plus 1970s-style AI. Lots of people have probably figured this would be a nice thing to have, but once you drop out of the everyday understanding of language and try to get to the bottom of what's really going on, you end up in the same morass that AI research and modern philosophy are stuck in.

Comment by rsaarelm on What plausible beliefs do you think could likely get someone diagnosed with a mental illness by a psychiatrist? · 2020-01-16T17:16:46.419Z · LW · GW

"The Soviet Union is politically dysfunctional"

Comment by rsaarelm on [deleted post] 2019-12-31T09:31:16.757Z

Let's say you're afflicted by a severe illness and have, say, 5 % odds of surviving. If you end up dying of it, all of your organs will be damaged beyond repair. However, as of now they're still fine and safe for organ donation. How do you feel about cutting to the chase and committing suicide right here and now so you can produce a fresh dead body with superior utilitarian value?

Comment by rsaarelm on How time tracking can help you prioritize · 2019-12-18T16:25:35.729Z · LW · GW

Stochastic time tracking is an interesting approach where you don't need to start and stop timers on your own. The system pings you at random intervals and you answer with what you're currently doing. Then each sample counts as one average sampling interval spent on the task that was sampled.
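A minimal sketch of the bookkeeping half of this (assuming a known mean ping interval; the names and numbers here are made up for illustration, not taken from any particular tracking tool):

```rust
use std::collections::HashMap;

/// Minimal sketch of the stochastic estimator: every ping's answer
/// counts as one mean sampling interval spent on that task.
fn estimate_minutes<'a>(
    samples: &[&'a str],
    mean_interval_min: f64,
) -> HashMap<&'a str, f64> {
    let mut totals = HashMap::new();
    for &task in samples {
        *totals.entry(task).or_insert(0.0) += mean_interval_min;
    }
    totals
}

fn main() {
    // Hypothetical day where pings fired on average every 45 minutes.
    let answers = ["email", "coding", "coding", "meeting", "coding"];
    let estimate = estimate_minutes(&answers, 45.0);
    // 'coding' was sampled three times, so its estimate is 3 * 45 = 135 min.
    println!("{:?}", estimate);
}
```

The estimate is noisy for any single day, but since the pings are random the errors are unbiased and wash out over longer periods.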

Comment by rsaarelm on Do you get value out of contentless comments? · 2019-11-22T10:16:32.528Z · LW · GW

I like comments that don't look like they could have been generated by a chatbot. I feel like whenever I'm being fine with the "Good post!" comments, I'm setting up an environment where after a while a portion of the comments will actually be chatbot spam.

Comment by rsaarelm on [deleted post] 2019-11-15T06:24:26.268Z

No mention of the anthropic principle? Lots of existing thinking in these lines under that term.

Comment by rsaarelm on Literature on memetics? · 2019-11-09T07:21:30.458Z · LW · GW

I remember a sort of consensus from the 00s that memetics had failed as a research program and the big-name people like Dawkins and Blackmore moving on to other stuff. Here's one summary I found. People still find the metaphor compelling, so it might just be that right now nobody has a good idea how to study the thing rigorously.

Comment by rsaarelm on bgaesop's Shortform · 2019-10-29T17:42:20.407Z · LW · GW

The description sounded like both parties were assuming chakras involved some actual mystical energy and were doing the invisible garage dragon dance. The parapsychology angle to this one is simply that even without knowing about a specific rebuttal, chakras are a well-known mystical concept, parapsychology research has been poking at most of the obvious mystical claims, and if parapsychology had verified that some supernatural phenomenon is actually real, we'd have heard of it.

If they were talking about the non-mystical model, the first person could've just said that it's a possibly helpful visualization shorthand for doing relaxation and biofeedback exercises and there's no actual supernatural energies involved.

Comment by rsaarelm on bgaesop's Shortform · 2019-10-29T07:54:37.964Z · LW · GW

Hmm, no, let’s not do that. It makes me uncomfortable. I can’t tell why, but I don’t want to do it, so let’s not

After 100 years of parapsychology research, it's pretty obvious to anyone with a halfway functioning outside view that any quick experiment will either be flawed or say chakras are not real, so I'm not sure whether to take this at face value, as the person thinking chakras are real-real and genuinely not being able to say why they don't want to do the experiment, or just as a polite-speak version of "we both know doing the experiment will show chakras aren't real and will make me lose face, you're making a status grab against me for putting me on the spot by demanding the experiment so fuck you and fuck your experiment."

Comment by rsaarelm on Vaniver's Shortform · 2019-10-29T06:10:02.068Z · LW · GW

John McCarthy's The Doctor's Dilemma

Comment by rsaarelm on bgaesop's Shortform · 2019-10-28T13:30:56.509Z · LW · GW

If you take the paper at face value, wouldn't you expect a lot of the chronically depressed rats to be jumping at the chance to trade off the ability to remember appointments with no longer subjectively suffering from depression?

Comment by rsaarelm on Sets and Functions · 2019-10-11T05:58:43.798Z · LW · GW

The analogy to geographic maps might confuse someone who knows geographic maps but not mathematical maps since "a map is a thing connecting cities in one country with cities in another country" has nothing to do with how you use a geographic map.

Comment by rsaarelm on Sets and Functions · 2019-10-11T05:31:41.644Z · LW · GW

Throws me off a bit early on, you go directly from dog being an actual dog to dog being an arbitrary variable name. At this point I think of dog as "some x such that x is a dog", so {dog} and {cat} are different because one has an x that is a dog and another has a y that is a cat which is not a dog.

Comment by rsaarelm on The sentence structure of mathematics · 2019-10-08T06:12:32.887Z · LW · GW

Do you know the monads are like burritos problem? Do you have a plan for how this sequence isn't going to end up being "mathematics is like burritos"?

Comment by rsaarelm on Introduction to Introduction to Category Theory · 2019-10-07T10:04:53.274Z · LW · GW

I'm still buying the CT hype, so very interested to see more of this. However, I've been buying the hype for some 10+ years now and trying to learn CT on and off, and still can't point to a single instance of being able to use it either to approach a problem or understand something better, so I'm pretty skeptical about this being teachable to a mathematically naive audience in a way that they can internalize much anything about it that's both correct and usable in some practice that isn't advanced math study.

Comment by rsaarelm on Why I Am Not a Technocrat · 2019-08-21T05:04:22.673Z · LW · GW

Couldn't come up with a way to view the article. Downvoted without reading.

Comment by rsaarelm on What woo to read? · 2019-07-30T09:03:08.286Z · LW · GW

Aro meditation course

Seconding this one. For people who don't want to wait for the weekly drip of e-mails, the contents can also be found here.

Comment by rsaarelm on What are good resources for learning functional programming? · 2019-07-05T09:06:17.032Z · LW · GW

Very non-comprehensive, but from things I've read and liked:

How: What I Wish I Knew When Learning Haskell

What: Chris Okasaki, Purely Functional Data Structures


Comment by rsaarelm on Ethics as Warfare: Metaphysics and Morality of the Era of Transhumanism · 2019-05-22T08:49:34.981Z · LW · GW

There's a bit of a subtext here of trying to figure out whether you're coming from a different tradition or are an internet crazy person. This forum doesn't have much of a culture that can tell Christian intellectual tradition apart from schizophrenia, so terse comments that assume shared idiom won't go over very well.

FWIW, I'm finding the book quite interesting and non-crazy so far. Thanks for the link.

For constructive examples of the culture gap, I'm not sure I've seen the way the book uses 'spiritual' as describing various real-world processes (sex is not spiritual but fertilization is spiritual, using antidepressants is not spiritual but recovering from depression via long-term natural cognition is spiritual) before, and that looks like some role-playing game magic system worldbuilding to me. The only scholarly use for the word I'd expect would be calling worship and prayer spiritual activities. I guess the book's way of use comes from something like Aristotle's teleology?

Comment by rsaarelm on Subagents, akrasia, and coherence in humans · 2019-03-27T12:07:06.144Z · LW · GW

So, just to check, we are still talking about the Kegan stage 4 that, according to Kegan, 35 % of the adult population has attained? Are you saying that getting to stage 4 is actually the same as attaining stream entry, or just that the work to get to stream entry involves similar insights?

Comment by rsaarelm on The tech left behind · 2019-03-16T06:30:54.403Z · LW · GW

It really needs a personal computer to schedule the repetitions, and we're only now getting to the point where every schoolchild having their own handheld computer is a somewhat practical proposition.

Comment by rsaarelm on What math do i need for data analysis? · 2019-01-23T08:07:33.216Z · LW · GW

You want basic undergraduate probability and linear algebra and some calculus on the side, but you should get along with those. Some practice with reading academic texts also helps, so that you can extract useful meaning from them without understanding every part. You also need some general familiarity with how academic math papers are written; the concepts in 2.1 aren't complex (high-dimensional spaces make random points clump together less), but the way the book writes them up is going to be unfamiliar if you haven't been exposed to academic math writing much before.

Not sure what's a good place to get that other than "go to university, minor in math". Khan Academy?

Comment by rsaarelm on What math do i need for data analysis? · 2019-01-20T15:58:17.655Z · LW · GW

Check out John Hopcroft's Foundations of Data Science

Comment by rsaarelm on State Machines and the Strange Case of Mutating API · 2018-12-25T10:51:59.812Z · LW · GW

You can do the linear typing thing in Rust. Have a hidden internal handle and API wrapper objects on top of it that get consumed on method calls and can return different wrappers holding the same handle. I took a shot at doing a toy implementation for the TCP case:

type InternalTcpHandle = usize; // Hidden internal implementation handle.

/// Initial closed state.
#[derive(Debug)]
pub struct Tcp(InternalTcpHandle);

impl Tcp {
    pub fn connect_unauthenticated(self) -> Result<AuthTcp, Tcp> {
        // Consume the current API wrapper,
        // return the next state's API wrapper with the same handle.
        Ok(AuthTcp(self.0))
    }

    pub fn connect_password(self, _user: &str, pass: &str) -> Result<AuthTcp, Tcp> {
        // Can fail back to the current state if the password is empty.
        if pass.is_empty() { Err(self) } else { Ok(AuthTcp(self.0)) }
    }
}

/// Authenticated state.
#[derive(Debug)]
pub struct AuthTcp(InternalTcpHandle);

impl AuthTcp {
    pub fn connect_tcp(self, addr: &str) -> Result<TcpConnection, AuthTcp> {
        if addr.is_empty() { Err(self) } else { Ok(TcpConnection(self.0)) }
    }

    pub fn connect_udp(self, addr: &str) -> Result<UdpConnection, AuthTcp> {
        if addr.is_empty() { Err(self) } else { Ok(UdpConnection(self.0)) }
    }
}

/// Connected states.
#[derive(Debug)]
pub struct TcpConnection(InternalTcpHandle);

#[derive(Debug)]
pub struct UdpConnection(InternalTcpHandle);

fn main() {
    // Create an unauthenticated TCP object.
    let tcp = Tcp(123);
    println!("Connection state: {:?}", tcp);

    // This would be a compiler error:
    // let tcp = tcp.connect_tcp("example.com:80").unwrap();
    // 'tcp' is bound to an API that doesn't support connect operations yet.

    // Rebind the quick way; unwrap turns failure into a runtime panic.
    let tcp = tcp.connect_unauthenticated().unwrap();
    // Now 'tcp' is bound to the authenticated API, so we can open connections.
    println!("Connection state: {:?}", tcp);

    // The panicky way is ugly, let's handle failure properly...
    if let Ok(tcp) = tcp.connect_tcp("example.com:80") {
        println!("Connection state: {:?}", tcp);
        // TODO Now that connected TCP methods are usable on 'tcp',
        // implement those and write some actual network code...
    } else {
        println!("Failed to connect to address!");
    }
}

Comment by rsaarelm on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-18T08:11:10.985Z · LW · GW

That's the way where you try to make another adult human recognize the thing based on their own experiences, which is how we've gone about this since the Axial Age. Since the 1970s, the second approach, asking how you would program an artificial intelligence to do this, has been on the table. If we could manage this, it would in theory be a much more robust statement of the case, but it would also probably be much, much harder for humans to actually follow by going through the source code. I'm guessing this is what Chapman is thinking of when he specifies "can be printed in a book of less than 10 kg and followed consciously" for a system intended for human consumption.

Of course there's also a landscape between the everyday language based simple but potentially confusion engendering descriptions and the full formal specification of a human-equivalent AGI. We do know that either humans work by magic or a formal specification of a human-equivalent AGI exists even when we can't write down the book of probably more than 10 kg containing it yet. So either Chapman's stuff hits somewhere in the landscape between the present-day reasoning writing that piggybacks on existing human cognition capabilities and the Illustrated Complete AGI Specification or it does not, but it seems like the landscape should be there anyway and getting some maps of it could be very useful.

Comment by rsaarelm on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-18T07:11:11.769Z · LW · GW

That's a lot of reiteration of the problem with Chapman's writing which was a reason I pointed to the reading list to begin with. Not trying to pull a "you must read all this before judging Chapman" Gish gallop, but trying to figure out if there's some common strain of what Nietzsche, Heidegger, Wittgenstein, Dreyfus, Hofstadter and Kegan are going on about that looks like what Chapman is trying to go for. Maybe the idea is just really hard, harder than the Sequences stuff, but at least you got several people doing different approaches to it so you have a lot more to work with.

And it might be there isn't and this is all just Chapman flailing about. When someone builds a working AGI with just good and basic common-sense rationality ideas, I'll concede that he probably was. In the meantime, it seems kind of missing the point to criticize an example whose point is that it's obvious to humans as being obvious to humans. I took the whole point of the example that we're still mostly at the level of "dormitive principle" explanations for how humans figure this stuff out, and now we have the AI programming problem that gives us some roadmap for what an actual understanding of this stuff would look like, and suddenly figuring out the eggplant-water thing from first principles isn't that easy anymore. (Of course now we also have the Google trick of having a statistical corpus of a million cases of humans asking for water from the fridge where we can observe them not being handed eggplants, but taking that as the final answer doesn't seem quite satisfactory either.)

The other thing is the Kegan levels and the transition from a rule-following human, who's already doing pretty AI-complete tasks but very much thinking inside the box, to the system-shifting human. A normal human is just going to say "there are alarm bells ringing, smoke coming down the hallway and lots of people running towards the emergency exits, maybe we should switch from the weekly business review meeting frame to the evacuating the building frame about now", while the business review meeting robot will continue presenting sales charts until it burns to a crisp. The AI engineer is going to ask, "how do you figure out which inputs should cause a frame shift like that and how do you figure out which frame to shift to?" The AI scientist is going to ask, "what's the overall formal meta-framework of designing an intelligent system that can learn to dynamically recognize when its current behavioral frame has been invalidated and to determine the most useful new behavioral frame in this situation?" We don't seem to really have AI architectures like this yet, so maybe we need something more heavy-duty than SEP pages to figure them out.

So that's a part of what I understand Chapman is trying to do. Hofstadter-like stuff, except actually trying to tackle it somehow instead of just going "hey I guess this stuff is a thing and it actually looks kinda hard" like Hofstadter went in GEB. And then the background reading has the fun extra feature that before about the 1970s nobody was framing this stuff in terms of how you're supposed to build an AI, so they'll be coming at it from quite different viewpoints.

Comment by rsaarelm on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-15T23:54:26.971Z · LW · GW

I like to link to his recommended reading list instead of the main site for gesturing towards what Chapman seems to be circling around while never quite landing on. It's still not a clear explanation of the thing, but at least that's more than one person's viewpoint on the landscape.

Comment by rsaarelm on An Invitation to Measure Meditation · 2018-10-01T06:07:03.915Z · LW · GW

Maybe set up something on your phone that pings you a few times each day at random times to track your mood across the day. Whenever you get a ping, write down the time, and then for example what you were doing, your subjective mood, subjective energy level and how spaced out or focused you're feeling.

Comment by rsaarelm on [deleted post] 2018-05-06T05:28:32.592Z

Reminded me of a blog post from a while back, Thoughts on the STEM "class"

It’s interesting to think about the (many) ways in which the modern “bay-area rationalist techno-libertarian” culture (i.e. Scott Alexander’s Grey Tribe, and to a lesser extent all of STEM academia) is effectively an outgrowth not of the bourgeoisie “entrepreneurial” class identified with the American upper-middle, but rather of the historical-and-present military officer class.

(More commentary from down the tumblr chain here)

Comment by rsaarelm on Some Simple Observations Five Years After Starting Mindfulness Meditation · 2018-04-21T06:17:15.462Z · LW · GW

You know, I’ve gotten everything I can get out of this, and it’s not very valuable any more.

Were you aware of the progress models like the eight jhanas or the sixteen stages of insight? Both of those promise some very interesting sounding effects beyond your basic increased concentration skill and reflective awareness in exchange for more serious effort put in.

Comment by rsaarelm on Categories of Sacredness · 2018-03-01T06:00:41.642Z · LW · GW

Link was region-blocked for me, I guess this is the same thing.