Salvage Epistemology

post by jimrandomh · 2022-04-30T02:10:41.996Z · LW · GW · 119 comments

A funny thing happens with woo sometimes, in the rationality community. There's a frame that says: this is a mix of figurative stuff and dumb stuff, let's try to figure out what the figurative stuff is pointing at and salvage it. Let's call this "salvage epistemology". Unambiguous examples include the rationality community's engagement with religions, cold-reading professions like psychics, bodywork, and chaos magic. Ambiguous examples include intensive meditation, Circling, and many uses of psychedelics.

The salvage epistemology frame got locally popular in parts of the rationality community for a while. And this is a basically fine thing to do, in a context where you have hyper-analytical programmers who are not at risk of buying into the crazy, but who do need a lens that will weaken their perceptual filters around social dynamics, body language, and muscle tension.

But a bad thing happens when you have a group of people who are culturally adjacent to the hyper-analytical programmers, but who aren't that sort of person themselves. They can't, or shouldn't, take for granted that they're not at risk of falling into the crazy. For them, salvage epistemology disarms an important piece of their immune system.

I think salvage epistemology is infohazardous to a subset of people, and we should use it less, disclaim it more, and be careful to notice when it's leading people in over their heads.


comment by Valentine · 2022-04-30T14:15:12.369Z · LW(p) · GW(p)

Well, there's also the fact that "true"[1] ontological updates can look like woo prior to the update. Since you can't reliably tell ahead of time whether your ontology is too sparse for what you're trying to understand, truth-seeking requires you find some way of dealing with frames that are "obviously wrong" without just rejecting them. That's not simply a matter of salvaging truth from such frames.

Separate from that:

I think salvage epistemology is infohazardous to a subset of people, and we should use it less, disclaim it more, and be careful to notice when it's leading people in over their heads.

I for one am totally against this kind of policy. People taking responsibility for one another's epistemic states is saviorist fuckery that makes it hard to talk or think. It wraps people's anxiety around what everyone else is doing and how to convince or compel them to follow rules that keep one from feeling ill at ease.

I like the raising of awareness here. That this is a dynamic that seems worth noticing. I like being more aware of the impact I have on the world around me.

I don't buy the disrespectful frame, though. It treats people who "aren't hyper-analytical programmers" as though they're on par with children who need to be epistemically protected, and as though LWers are enlightened sages who just know better what these poor stupid people should or shouldn't believe.

Like, maybe beliefs aren't for true things for everyone, and that's actually correct for their lives. That view isn't the LW practice, sure. But I don't think humanity's CEV would have everyone being LW-style rationalists.

  1. ^

    Scare quotes go around "true" here because it's not obvious what that means when talking about ontological updates. You'd have to have some larger meta-ontology you're using to evaluate the update. I'm not going to try to rigorously fix this here. I'm guessing the intuition I'm gesturing toward is clear enough.

Replies from: DanielFilan, SaidAchmiz, Kaj_Sotala
comment by DanielFilan · 2022-05-04T04:55:11.744Z · LW(p) · GW(p)

Well, there's also the fact that "true" ontological updates can look like woo prior to the update.

Do you think they often do, and/or have salient non-controversial examples? My guess prior to thinking about it is that it's rare (but maybe the feeling of woo differs between us).

Past true ontological updates that I guess didn't look like woo:

  • reductionism
  • atomism
  • special relativity
  • many worlds interpretation (guy who first wrote it up was quite dispositionally conservative)
  • belief that you could gain knowledge thru experiment and apply that to the world (IDK if this should count)
  • germ theory of disease
  • evolution by natural selection as the origin of humans

Past true ontological updates that seem like they could have looked like woo, details welcome:

  • 'force fields' like gravity
  • studying arguments and logic as things to analyse
  • the basics of the immune system
  • calculus
Replies from: Kaj_Sotala, AnnaSalamon, AnnaSalamon, Valentine, TAG
comment by Kaj_Sotala · 2022-05-04T19:21:26.297Z · LW(p) · GW(p)
  • 'force fields' like gravity

AFAIK gravity was indeed considered at least woo-ish back in the day, e.g.:

Newton’s theory of gravity (developed in his Principia), for example, seemed to his contemporaries to assume that bodies could act upon one another across empty space, without touching one another, or without any material connection between them. This so-called action-at-a-distance was held to be impossible in the mechanical philosophy. Similarly, in the Opticks he developed the idea that bodies interacted with one another by means of their attractive and repulsive forces—again an idea which was dismissed by mechanical philosophers as non-mechanical and even occult.

Replies from: adele-lopez-1, TAG
comment by Adele Lopez (adele-lopez-1) · 2022-05-04T20:39:27.880Z · LW(p) · GW(p)

And they were probably right about "action-at-a-distance" being impossible (i.e. locality), but it took General Relativity to get a functioning theory of gravity that satisfied locality.

(Incidentally, one of the main reasons I believe the many worlds interpretation is that you need something like that for quantum mechanics to satisfy locality.)

Replies from: TAG
comment by TAG · 2022-05-06T18:01:20.694Z · LW(p) · GW(p)

All interpretations of QM make the same predictions, so if "satisfying locality" is an empirically meaningful requirement, they are all equivalent.

But locality is more than one thing, because everything is more than one thing. Many interpretations allow nonlocal X where X might be a correlation, but not an action or a signal.

Replies from: adele-lopez-1
comment by Adele Lopez (adele-lopez-1) · 2022-05-07T00:31:16.042Z · LW(p) · GW(p)

Yeah, it's not empirically meaningful over interpretations of QM (at least the ones which don't make weird observable-in-principle predictions). Still meaningful as part of a simplicity prior, the same way that e.g. rejecting a simulation hypothesis is meaningful.

comment by TAG · 2022-05-04T20:27:40.185Z · LW(p) · GW(p)

Zero was considered weird and occult for a while.

comment by AnnaSalamon · 2022-05-04T15:59:25.531Z · LW(p) · GW(p)

One example, maybe: I think the early 20th century behaviorists mistakenly (to my mind) discarded the idea that e.g. mice are usefully modeled as having something like (beliefs, memories, desires, internal states), because they lumped this in with something like "woo."  (They applied this also to humans, at least sometimes.)

The article Cognition all the way down argues that a similar transition may be useful in biology, where e.g. embryogenesis may be more rapidly modeled if biologists become willing to discuss the "intent" of a given cellular signal or similar.  I found it worth reading.  (HT: Adam Scholl, for showing me the article.)

comment by AnnaSalamon · 2022-05-04T20:04:34.878Z · LW(p) · GW(p)

I think "you should one-box on Newcomb's problem" is probably an example.  By the time it was as formalized as TDT it was probably not all that woo-y looking, but prior to that I think a lot of people had an intuition along the lines of "yes it would be tempting to one-box but that's woo thinking that has me thinking that."

comment by Valentine · 2022-05-06T17:31:22.727Z · LW(p) · GW(p)

I like this inquiry. Upvoted.

Do you think they often do [look like woo]…?

Well… yes, but not for deep reasons. Just an impression. The cases where I've made shifts from "that's woo" to "that's true" are super salient, as are cases where I try to invite others to make the same update and am accused of fuzzy thinking in response. Or where I've been the "This is woo" accuser and later made the update and slapped my forehead.

Also, "woo" as a term is pretty strongly coded to a particular aesthetic. I don't think you'd ever hear concern about "woo" in, say, Catholicism except to the extent the scientist/atheist/skeptic/etc. cluster is also present. But Catholics still slam into ontology updates that look obviously wrong beforehand and are obviously correct afterwards. Deconversion being an individual-scale example.

(Please don't read me as saying "Deconversion is correct." I could just as well have given the inverse example: Rationalists converting to Catholicism is also an ontological update that's obviously wrong beforehand and obviously correct afterwards. But that update does look like "woo" beforehand, so it's not an example of what I'm trying to name.)

 

Do you… have salient non-controversial examples?

I like the examples others have been bringing. I like them better than mine. But I'll try to give a few anyway.

Speaking to one of your "maybe never woo" examples: if I remember right, the germ theory of disease was incredibly bizarre and largely laughed at when first proposed. "How could living creatures possibly be that small? And if they're so small, how could they possibly create that much illness?" Prevailing theories for illness were things like bad air and demons. I totally expect lots of people thought the microbes theory was basically woo. So that's maybe an example.

Another example is quantum mechanics. The whole issue Einstein took with it was how absurd it made reality. And it did in fact send people like Bohm into spiritual frenzy. This is actually an incomplete ontology update in that we have the mathematical models but people still don't know what it means — and in physics at least they seem to deal with it by refusing to think about it. "If you do the math, you get the right results." Things like the Copenhagen Interpretation or Many Worlds are mostly ways of talking about how to set up experiments. The LW-rationalist thing of taking Many Worlds deeply morally seriously is, as far as I can tell, pretty fringe and arguably woo.

You might recall that Bishop Berkeley had some very colorful things to say about Newton's infinitesimals. "Are they the ghosts of departed quantities?" If he'd had the word "woo" I'm sure he would have used it. (Although this is an odd example because now mathematicians do a forgivable motte-and-bailey where they say infinitesimal thinking is shorthand for limits when asked. Meaning many of them are using an ontology that includes infinitesimals but quickly hide it when challenged. It's okay because they can still do their formal proofs with limits, but I think most of them are unaware of the various ways to formalize infinitesimals as mathematical objects. So this is a case where many mathematicians are intentionally using an arguably woo fake framework [LW · GW] and translating their conclusions afterwards instead of making the full available ontology update.)

Given that I'm basically naming the Semmelweis reflex, I think Semmelweis's example is a pretty good one. "What?! You're accusing me, an educated professional gentleman, of carrying filth on my hands?! Preposterous! How dare you?!" Obviously absurd and wrong at the time, but later vindicated as obviously correct.

Replies from: DanielFilan
comment by DanielFilan · 2022-05-06T18:59:44.001Z · LW(p) · GW(p)

Your examples seem plausible, altho I'd still be interested in more details on each one. Further notes:

  • "And it did in fact send people like Bohm into spiritual frenzy." - do you mean Bohr, or is this a story/take I don't know about?
  • Re: Semmelweis reflex, I think there's a pretty big distinction between the "woo" taste and the "absurd" taste. For example, "all plants are conscious and radiate love all the time" sounds like woo to me. "The only reason anybody gets higher education is to find people to have kids with" and "there's a small organ in the centre of the brain that regulates the temperature of the blood that nobody has found yet" sound absurd to me, but not like woo.
comment by TAG · 2022-05-04T20:06:38.423Z · LW(p) · GW(p)

many worlds interpretation

Received such a bad reception that Everett left academic physics.

atomism

Didn't seem crazy to the Greeks, but was controversial when reintroduced by Boltzmann.

Replies from: DanielFilan, AnnaSalamon
comment by DanielFilan · 2022-05-06T18:51:54.194Z · LW(p) · GW(p)

A lot of things can be pretty controversial but not woo-ish.

comment by AnnaSalamon · 2022-05-04T20:26:56.797Z · LW(p) · GW(p)

Can you say more about these for the benefit of folks like me who don't know about them?  What kind of "bad reception" or "controversial" was it?  Was it woo-flavored, or something else?

Replies from: TAG
comment by TAG · 2022-05-05T18:45:18.363Z · LW(p) · GW(p)

https://www.scientificamerican.com/article/hugh-everett-biography/

Everett tried to express his ideas as drily as possible, and it didn't entirely work--he was still accused of "theology" by Bohr.

But there were and are technical issues as well, notably the basis problem. It can be argued that if you reify the whole formalism, then you have to reify the basis, and that squares the complexity of the multiverse -- to every state in every basis. The argument was actually made by J. S. Bell.

Modern approaches tend to assume the multiverse has a single "preferred" basis, which has its own problems -- which tells us that it hasn't always been one exact theory.

comment by Said Achmiz (SaidAchmiz) · 2022-04-30T16:51:56.053Z · LW(p) · GW(p)

I would amend the OP by saying that “salvage epistemology” is a bad idea for everyone, including “us” (for any value of “us”). I don’t much like labeling things as “infohazards” (folks around here are much too quick to do that, it seems to me), which obfuscates and imbues with an almost mystical air something that is fairly simple: epistemically, this is a bad idea, and reliably doesn’t work and makes our thinking worse.

As I’ve said before: avoiding toxic, sanity-destroying epistemologies and practices is not something you do when you’re insufficiently rational, it is how you stay sufficiently rational.

comment by Kaj_Sotala · 2022-04-30T15:36:21.039Z · LW(p) · GW(p)

If you think that some kinds of ideas are probably harmful for some people to hear, is acting on that belief always saviorist fuckery or does there exist a healthy form of it?

It seems to me that, just as one can be mindful of one's words and avoid being intentionally hurtful but also not take responsibility for other people's feelings... one could also be mindful of the kinds of concepts one is spreading and acknowledge that there are likely prerequisites for being able to handle exposure to those concepts well, without taking responsibility for anyone's epistemic state.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-04-30T16:55:05.579Z · LW(p) · GW(p)

If you think that some kinds of ideas are probably harmful for some people to hear, is acting on that belief always saviorist fuckery or does there exist a healthy form of it?

I am not Valentine, but I would say: it is “saviorist fuckery” if your view is “these ideas are harmful to hear for people who aren’t me/us (because we’re enlightened/rational/hyper-analytic/educated/etc. and they’re not)”. If instead you’re saying “this is harmful for everyone to hear (indeed I wish I hadn’t heard it!), so I will not disseminate this to anyone”, well, that’s different. (To be clear, I disapprove of both scenarios, but it does seem plausible that the motivations differ between these two cases.)

Replies from: Dustin
comment by Dustin · 2022-04-30T19:15:44.202Z · LW(p) · GW(p)

Is part of your claim that such ideas do not exist?  By "such ideas" I mean ideas that only some people can hear or learn about for some definition of "safely".

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-04-30T21:35:27.090Z · LW(p) · GW(p)

Hard to answer that question given how much work the clause ‘for some definition of “safely”’ is doing in that sentence.

EDIT: This sort of thing always comes down to examples and reference classes, doesn’t it? So let’s consider some hypothetical examples:

  1. Instructions for building a megaton thermonuclear bomb in your basement out of parts you can get from a mail-order catalog.

  2. Langford’s basilisk.

  3. Langford’s basilisk, but anyone who can write a working FizzBuzz is immune.

  4. The truth about how the Illuminati are secretly controlling society by impurifying our precious bodily fluids.

Learning idea #1 is perfectly safe for anyone. That is, it’s safe for the hearer; it will do you no harm to learn this, whoever you are. That does not, however, mean that it’s safe for the general public to have this idea widely disseminated! Some ne’er-do-well might actually build the damn thing, and then—bad times ahead!

If we try to stop the dissemination of idea #1, nobody can accuse us of “saviorist fuckery”, paternalism, etc.; to such charges we can reply “never mind your safety—that’s your own business; but I don’t quite trust you enough to be sure of my safety, if you learn of this (nor the safety of others)!” (Of course, if it turns out that we are in possession of idea #1 ourselves, the subject might ask how comes it that we are so trustworthy as to be permitted this knowledge, but nobody else is!)

Ok, what about #2? This one’s totally unsafe. Anyone who learns it dies. There’s no question of us keeping this idea for ourselves; we’re as ignorant as anyone else (or we’d be dead). If we likewise keep others from learning this, it can only be purely altruistic. (On the other hand, what if we’re wrong about the danger? Who appointed us guardians against this threat, anyway? What gives us the right to deny people the chance to be exposed to the basilisk, if they choose it, and have been apprised of the [alleged] danger?)

#3 is pretty close to #2, except that the people we’re allegedly protecting now have reason to be skeptical of our motives. (Awfully convenient, isn’t it? This trait that already gave us cause to feel superior to others, happens also to make us the only people who can learn the terrible secret thing without going mad! Yeah, right…)

As for #4, it is not even a case of "saviorist fuckery", but ordinary self-serving treachery. If we're Illuminati members ourselves, then clearly we've got an interest in preventing the truth about our evil schemes from coming to light. If we're not members, then we're just cowardly collaborators. (What could we claim—that we, alone, have the good sense to meekly accept the domination of the Illuminati, and that we keep the secret because we fear that less enlightened folks might object so strenuously that they'd riot, revolt, etc.—and that we do this out of selfless altruism? The noble lie, in other words, except that we're not the architects of the lie, nor have we cause to think that it's particularly noble—we simply hold stability so dear that we'll accept slavery to buy it, and ensure that our oblivious fellows remain unwitting slaves?)

So which of these examples is the closest to the sort of thing the OP talks about, in your view?

Replies from: Dustin
comment by Dustin · 2022-05-01T02:36:56.771Z · LW(p) · GW(p)

Hard to answer that question given how much work the clause ‘for some definition of “safely”’ is doing in that sentence.

 

I think the amount of work that clause does is part of what makes the question worth answering...or at least makes the question worth asking.

Awfully convenient, isn’t it? This trait that already gave us cause to feel superior to others, happens also to make us the only people who can learn the terrible secret thing without going mad! Yeah, right…

I'm not a fan of inserting this type of phrasing into an argument.  I think it'd be better to either argue that the claim is true or not true. To me, this type of claim feels like an applause light.  Of course, it's also possibly literally accurate...maybe most claims of the type we're talking about are erroneous and clung to because of the makes-us-feel-superior issue, but I don't think that literally accurate aspect of the argument makes the argument more useful or less of an applause light.

In other words, I don't have access to an argument that says both of these cannot exist:

  1. Cases that just make Group A feel superior because Group A erroneously thinks they are the only ones who can know it safely.
  2. Cases that make Group A feel superior because Group A accurately thinks they are the only ones who can know it safely.

In either case Group A comes across badly, but in case 2, Group A is right.  

If we cannot gather any more information or make any more arguments, it seems likely that case #1 is going to usually be the reality we're looking at.  However, we can gather more information and make more arguments.  Since that is so, I don't think it's useful to assume bad motives or errors on the part of Group A.

So which of these examples is the closest to the sort of thing the OP talks about, in your view?

I don't really know.  The reason for my root question was to suss out whether you had more information and arguments or were just going by the heuristics that make you default to my case #1.  Maybe you have a good argument that case #2 cannot exist.  (I've never heard of a good argument for that.)

 

eta: I'm not completely satisfied with this comment at this time as I don't think it completely gets across the point I'm trying to make. That being said, I assign < 50% chance that I'll finish rewriting it in some manner so I'm going to leave it as is and hope I'm being overly negative in my assessment of it or at least that someone will be able to extract some meaning from it.

Replies from: Pattern, SaidAchmiz
comment by Pattern · 2022-05-04T05:27:12.909Z · LW(p) · GW(p)
To me, this type of claim feels like an applause light.  

Do you have the same reaction to:

"This claim is suscpicious."

Replies from: Dustin
comment by Dustin · 2022-05-05T16:58:46.670Z · LW(p) · GW(p)

Less so, but it just leads to the question of "why do you think it's suspicious?".  If at all possible I'd prefer just engaging with whether the root claim is true or false.

Replies from: Pattern
comment by Pattern · 2022-05-05T22:14:45.227Z · LW(p) · GW(p)

That's fair. I initially looked at (the root claim) as a very different move, which could use critique on different grounds.

'Yet another group of people thinks they are immune to common bias. At 11, we will return to see if they, shockingly, walked right into it. When are people (who clearly aren't immune) going to stop doing this?'

comment by Said Achmiz (SaidAchmiz) · 2022-05-01T04:33:45.718Z · LW(p) · GW(p)

I’m not a fan of inserting this type of phrasing into an argument. I think it’d be better to either argue that the claim is true or not true. To me, this type of claim feels like an applause light.

Er… I think there’s been some confusion. I was presenting a hypothetical scenario, with hypothetical examples, and suggesting that some unspecified (but also hypothetical) people would likely react to a hypothetical claim in a certain way. All of this was for the purpose of illustrating and explaining the examples, nothing more. No mapping to any real examples was intended.

The reason for my root question was to suss out whether you had more information and arguments or were just going by the heuristics that make you default to my case #1. Maybe you have a good argument that case #2 cannot exist. (I’ve never heard of a good argument for that.)

My point is that before we can even get to the stage where we’re talking about which of your cases apply, we need to figure out what sort of scenario (from among my four cases, or perhaps others I didn’t list?) we’re dealing with. (For instance, the question of whether Group A is right or wrong that they’re the only ones who can know a given idea safely, is pretty obviously ridiculous in my scenario #4, either quite confused or extremely suspect in my case #1, etc. At any rate, scenario #1 and scenario #2—just to take one obviously contrasting pair—are clearly so different that aggregating them and discussing them as though they’re one thing, is absurd!)

So it’s hard to know how to take your question, in that light. Are you asking whether I think that things like Langford’s basilisk exist (i.e., my scenario #2), or can exist? (Almost certainly not, and probably not but who knows what’s possible, respectively…) Are you asking whether I think that my scenario #3 exists, or can exist? Even less likely…

Do you think that such things exist?

Replies from: Dustin
comment by Dustin · 2022-05-01T17:28:17.958Z · LW(p) · GW(p)

Er… I think there’s been some confusion.

I was referring to this part of your text:

(Awfully convenient, isn’t it? This trait that already gave us cause to feel superior to others, happens also to make us the only people who can learn the terrible secret thing without going mad! Yeah, right…)

It seemed to me like your parentheticals were you stepping out of the hypothetical and making commentary about the standpoint in your hypotheticals.  I apologize if I interpreted that wrong. 

My point is that before we can even get to the stage where we’re talking about which of your cases apply, we need to figure out what sort of scenario (from among my four cases, or perhaps others I didn’t list?) we’re dealing with.

Yeah, I think I understood that that's what you're saying; I'm saying I don't think your point is accurate. I do not think you have to figure out which of your scenarios we're dealing with. The scenario type is orthogonal to the question I'm asking.

I'm asking if you think it's possible for these sort of ideas to exist in the real world:

“these ideas are harmful to hear for people who aren’t me/us (because we’re enlightened/rational/hyper-analytic/educated/etc. and they’re not)”

I'm confused about how what you've said has a bearing on the answerability of my root question.

Do you think that such things exist?

I...don't know.

My prior is that they can exist. It doesn't break any laws of physics. I don't think it breaks any laws of logic. I think there are things that some people are better able to understand than others. It's not insane to think that some people are less prone to manipulation than others. Just because believing something makes someone feel superior does not logically mean that the thing they believe is wrong.

As for whether they do exist: there are things that have happened on LW, like Roko's basilisk, that raise my prior that there are things some people can hold in their heads safely and others can't. Of course, that could be down to quirks of individual minds instead of general features of some group. I'd be interested in someone exploring that idea further. When do we go from saying "that's just a quirk" to "that's a general feature"? I dunno.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-05-01T18:41:35.404Z · LW(p) · GW(p)

It seemed to me like your parentheticals were you stepping out of the hypothetical and making commentary about the standpoint in your hypotheticals. I apologize if I interpreted that wrong.

That was indeed not my intention.

Yeah, I think I understood that is what you’re saying, I’m saying I don’t think your point is accurate. I do not think you have to figure out which of your scenarios we’re dealing with. The scenario type is orthogonal to the question I’m asking.

I don’t see how that can be. Surely, if you ask me whether some category of thing exists, it is not an orthogonal question, to break that category down into subcategories, and make the same inquiry of each subcategory individually? Indeed, it may be that the original question was intended to refer only to some of the listed subcategories—which we cannot get clear on, until we perform the decomposition!

I’m asking if you think it’s possible for these sort of ideas to exist in the real world:

“these ideas are harmful to hear for people who aren’t me/us (because we’re enlightened/rational/hyper-analytic/educated/etc. and they’re not)”

I’m confused about how what you’ve said has a bearing on the answerability of my root question.

The bearing is simple. Do you think my enumeration of scenarios exhausts the category you describe? If so, then we can investigate, individually, the existence or nonexistence of each scenario. Do you think that there are other sorts of scenarios that I did not list, but that fall into your described category? If so, then I invite you to comment on what those might be.

Just because believing something makes someone feel superior does not logically mean that the thing they believe is wrong.

True enough.

I agree that what you describe breaks no (known) laws of physics or logic. But as I understood it, we were discussing existence, not possibility per se. In that regard, I think that getting down to specifics (at least to the extent of examining the scenarios I listed, or others like them) is really the only fruitful way of resolving this question one way or the other.

Replies from: Dustin
comment by Dustin · 2022-05-01T21:57:10.412Z · LW(p) · GW(p)

I think I see a way towards mutual intelligibility on this, but unfortunately I don't think I have the bandwidth to get to that point. I will just point out this:

But as I understood it, we were discussing existence, not possibility per se.

Hmm, I was more interested in the possibility.

comment by Kaj_Sotala · 2022-04-30T16:01:30.517Z · LW(p) · GW(p)

This post seems to be implying that "salvage epistemology" is somehow a special mode of doing epistemology, and that one either approaches woo from a frame of uncritically accepting it (clearly bad) or from a frame of salvage epistemology (still possibly bad but not as clearly so).

But what's the distinction between salvage epistemology and just ordinary rationalist epistemology?

When I approach woo concepts to see what I might get out of them, I don't feel like I'm doing anything different from what I do when I'm looking at a scientific field and seeing what I might get out of it.

In either case, it's important to remember that hypotheses point to observations [LW · GW] and that hypotheses are burdensome details [? · GW]. If a researcher publishes a paper saying they have a certain experimental result, then that's data towards something being true, but it would be dangerous to take their interpretation of the results - or for that matter the assumption that the experimental results are what they seem - as the literal truth. In the same way, if a practitioner of woo reports a certain result, that is informative of something, but that doesn't mean the hypothesis they are offering to explain it is true.

In either case, one needs to separate "what the commonly offered narratives are" from "what would actually explain these results". And I feel like exactly the same epistemology applies, even if the content is somewhat different.

Replies from: gworley, Viliam
comment by Gordon Seidoh Worley (gworley) · 2022-05-01T17:34:09.337Z · LW(p) · GW(p)

Indeed. I left a comment on the Facebook version of this basically saying "it's all hermeneutics unless you're just directly experiencing the world without conceptions, so worrying about woo specifically is worrying about the wrong frame".

comment by Viliam · 2022-04-30T21:40:11.607Z · LW(p) · GW(p)

I think the incentives in science and woo are different. Scientists are rewarded for discovering new things, or finding an error in existing beliefs, so if 100 scientists agree on something, that probably means more than if 100 astrologers agree on something. You probably won't make a career in science by merely saying "all my fellow scientists are right", but I don't see how agreeing with fellow astrologers would harm your career in astrology.

But what's the distinction between salvage epistemology and just ordinary rationalist epistemology?

An ordinary rationalist will consider some sources more reliable, and some other sources less reliable. For example, knowing that 50% of findings in some field don't replicate is considered bad news.

Someone who wants to "salvage" e.g. Buddhism is privileging a source that has a replication rate way below 50%.

Replies from: Kaj_Sotala, Pattern, ChristianKl, pktechgirl
comment by Kaj_Sotala · 2022-04-30T22:34:38.691Z · LW(p) · GW(p)

I think the incentives in science and woo are different.

I agree, though I'm not sure how that observation relates to my comment. But yes, certainly evaluating the incentives and causal history of a claim is an important part of epistemology.

Someone who wants to "salvage" e.g. Buddhism is privileging a source that has a replication rate way below 50%.

I'm not sure if it really makes sense to think in terms of salvaging "Buddhism", or saying that it has a particular replication rate (it seems pretty dubious whether the concept of replication rate is well-defined outside a particular narrow context in the first place). There are various claims associated with Buddhism, some of which are better supported and potentially more valuable than others. 

E.g. my experience is that much of meditation seems to work the way some Buddhists say it works, and some of their claims seem to be supported by compatible models and lines of evidence [? · GW] from personal experience, neuroscience, and cognitive science. Other claims, very much less so. Talking about the "replication rate of Buddhism" seems to suggest taking a claim and believing it merely on the basis of Buddhism having made such a claim, but that would be a weird thing to do. We evaluate any claim on the basis of several factors, such as what we know about the process that generated it, how compatible it is with our other beliefs, how useful it would be for explaining experiences we've had, what success similar claims have shown in helping us get better at something we care about, etc. And then some other claim that's superficially associated with the same source (e.g. "Buddhism") might end up scoring so differently on those factors that it doesn't really make sense to think of them as even being related. 

Even if you were looking at a scientific field that was known to have a very low replication rate, there might be some paper that seemed more likely to be true and also relevant for things that you cared about. Then it would make sense to take the claims in that paper and use them to draw conclusions that were as strong or weak as warranted, based on that specific paper and everything else you knew.

Replies from: Viliam
comment by Viliam · 2022-05-01T00:41:39.040Z · LW(p) · GW(p)

Imagine two parallel universes, each of them containing a slightly different version of Buddhism. Both versions tell you to meditate, but one of them, for example, concludes that there is "no-self", and the other concludes that there is "all-self", or some other similarly nebulous claim.

How certain do you feel that in the other universe you would evaluate the claim and say: "wrong"? As opposed to finding a different interpretation of why the other conclusion is also true.

(Assuming the same peer pressure, etc.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2022-05-01T18:17:19.923Z · LW(p) · GW(p)

(Upvoted.)

Well, that is kind of already the case, in that there are also Buddhist-influenced people talking about "all-self" rather than "no-self". AFAICT, the framings sound a little different but are actually equivalent: e.g. there's not much difference between saying "there is no unique seat of self in your brain that one could point at and say that it's the you" and "you are all of your brain".

There's more to this than just that, given that talking in terms of the brain etc. isn't what a lot of Buddhists would do, but that points at the rough gist of it and I guess you're not actually after a detailed explanation. Another way of framing that is what Eliezer once pointed out [LW · GW], that there is a trivial mapping between a graph and its complement. A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all. Similarly, a graph where each vertex is marked as self is kind of equivalent to one where none are.
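
(To make the graph/complement point above concrete, here is a minimal illustrative sketch in Python, not from the original comment and assuming nothing beyond the standard library: complementing a graph's edge set is an invertible operation, so a graph and its complement carry exactly the same information, even though one can look "full" and the other "empty".)

```python
from itertools import combinations

def complement(vertices, edges):
    """Return the complement graph's edge set: every possible edge not in `edges`."""
    all_edges = {frozenset(pair) for pair in combinations(sorted(vertices), 2)}
    return all_edges - {frozenset(e) for e in edges}

vertices = {1, 2, 3, 4}
edges = {(1, 2), (2, 3)}

comp = complement(vertices, edges)

# Complementing twice recovers the original graph exactly, so no information is lost.
assert complement(vertices, comp) == {frozenset(e) for e in edges}
```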

More broadly, a lot of my interpretation of "no-self" isn't actually that directly derived from any Buddhist theory. When I was first exposed to such theories, much of their talk about self/no-self sounded to me like the kind of misguided folk speculation of a prescientific culture that didn't really understand the mind very well yet. It was only when I actually tried some meditative practices and got to observe my mind behaving in ways that my previous understanding of it couldn't explain, that I started thinking that maybe there's actually something there.

So when I talk about "no-self", it's not so much that "I read about this Buddhist thing and then started talking about their ideas about no-self"; it's more like "I first heard about no-self but it was still a bit vague what it exactly meant and if it even made any sense, but then I had experiences which felt like 'no-self' would be a reasonable cluster label for, so I assumed that these kinds of things are probably what the Buddhists meant by no-self, and also noticed that some of their theories now started feeling like they made more sense and could help explain my experiences while also being compatible with what I knew about neuroscience and cognitive science".

Also you say that there being no-self is a "nebulous" claim, but I don't think I have belief in a nebulous and ill-defined claim. I have belief in a set of specific concrete claims, such as "there's no central supreme leader agent running things in the brain, the brain's decision-making works by a distributed process that a number of subsystems contribute to and where very different subsystems can be causally responsible for a person's actions at different times". "No-self" is then just a label for that cluster of claims. But the important thing is the claims themselves, not whether there's some truth of "no-self" in the abstract.

So let me slightly rephrase your question as something like "how certain am I that in an alternate universe where Buddhism made importantly wrong claims, I would evaluate them as wrong?". Then: reasonably certain, given that I currently only put high probability on those Buddhist claims for which I have direct evidence, put a more moderate probability on claims I don't have direct evidence for but have heard from meditators who have seemed sane and reliable so far, and disbelieve quite a few claims that I don't think I have good evidence for and which contradict what I know about reality otherwise. (Literal karma or reincarnation, for instance.) Of course, I don't claim to be infallible and do expect to make errors (in both directions), but again that's the case with any field. 

comment by Pattern · 2022-05-04T05:30:24.660Z · LW(p) · GW(p)
Someone who wants to "salvage" e.g. Buddhism is privileging a source that has a replication rate way below 50%.

Who has tried to replicate it?

comment by ChristianKl · 2022-05-01T18:52:39.973Z · LW(p) · GW(p)

Scientists are rewarded for discovering new things, or finding an error in existing beliefs, so if 100 scientists agree on something, that probably means more than if 100 astrologers agree on something. 

When aspiring rationalists interact with science, it's not just believing whatever 100 scientists agree on. If you take COVID-19 for example, we read a bunch of science, built models in our heads about what was happening, and then took action based on those models.

comment by Elizabeth (pktechgirl) · 2022-05-01T20:26:39.223Z · LW(p) · GW(p)

Scientists are rewarded for discovering new things, or finding an error in existing beliefs, so if 100 scientists agree on something, that probably means more than if 100 astrologers agree on something.

It's not obvious to me this effect dominates over political punishments for challenging powerful people's ideas. I definitely think science is more self-correcting than astrology over decades, but don't trust the process on a year-to-year basis.

comment by Adele Lopez (adele-lopez-1) · 2022-04-30T04:47:28.295Z · LW(p) · GW(p)

I would guess that a lot (perhaps most) of time, "salvage epistemology" is a rationalization to give to rationalists to justify their interest in woo, as opposed to being the actual reason they are interested in the woo. (I still agree that the concept is likely hazardous to some people.)

Replies from: SaidAchmiz, gworley
comment by Said Achmiz (SaidAchmiz) · 2022-04-30T10:52:00.139Z · LW(p) · GW(p)

I agree with this.

There is also a related phenomenon: when a community that otherwise/previously accepted only people who bought into that community’s basic principles (aspiration to rationality, belief in the need for clear reasoning, etc.) adopts “salvage epistemology”, that community now opens itself up to all manner of people who are, shall we say, less committed to those basic principles, or perhaps not committed at all. This is catastrophic for the community’s health, sanity, integrity, ability to accomplish anything, and finally its likelihood of maintaining those very basic principles.

In other words, there is a difference between a community of aspiring rationalists of whom some have decided to investigate various forms of woo (to see what might be salvaged therefrom)—and the same community which has a large contingent of woo-peddlers and woo-consumers, of whom none believe in rationalist principles in the first place, but are only there to (at best) hang out with fellow peddlers and consumers of woo. The former community might be able to maintain some semblance of sanity even while they make their salvage attempts; the latter community is doomed.

Replies from: Viliam
comment by Viliam · 2022-04-30T13:43:08.772Z · LW(p) · GW(p)

It is difficult to distinguish between (1) people who think that there may be some value in a piece of woo, and that it is worth exploring it and separating the wheat from the chaff, and (2) people who believe that the woo is useful, and whose only question is how to make it more palatable for the rationalist community. Both these groups are together opposed to people who would refuse to touch the woo in principle.

The subtle difference between those two groups is the absence or presence of motivated reasoning. If you are willing to follow evidence wherever it may lead you, you are open to the possibility that the horoscopes may actually correlate with something useful, but you are also open to the possibility that they might not. The "salvage at all costs" group, by contrast, already knows that the horoscopes are useful, and useful in more or less the traditional way; the only question is how to convince the others, who are immune to the traditional astrological arguments. That seems mostly like a question of using the right lingo [LW · GW]: perhaps if we renamed Pisces to "cognitive ichthys", the usual objections would stop and rationalists would finally accept that Pisces might actually be cognitively different from other people; especially if a high-status community member supported it publicly with an anecdote or two.

(The opposite kind of mistake would be refusing to accept in principle that being a Virgo might correlate with your success at school... simply because it makes you one of the oldest kids in the classroom.)

Maybe it's a question of timing. First prove that some part of the woo makes sense; then use it. Do not simply start with the assumption that surely most of the woo will be salvageable somehow; it may not.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-04-30T17:04:04.625Z · LW(p) · GW(p)

I think we’re talking about the same distinction? Or did you mean to specifically disagree with me / offer a competing view / etc.?

Maybe it’s a question of timing. First prove that some part of the woo makes sense; then use it. Do not simply start with the assumption that surely most of the woo will be salvageable somehow; it may not.

I’d go further and say: first give us a good reason why we should think that it’s even plausible or remotely likely that there’s anything useful in the woo in question. (Otherwise, what motivates the decision to attempt to “salvage” this particular woo? Why, for example, are you trying to “salvage” Buddhism, and not Old Believer-ism?) How, in other words, did you locate the hypothesis that this woo, out of all the nonsense that’s been purveyed by all the woo-peddlers over the whole history of humanity, is worth our time and attention to examine for salvageability?

Replies from: Viliam, TAG
comment by Viliam · 2022-04-30T21:19:42.271Z · LW(p) · GW(p)

I mostly agree. I believe it is possible -- and desirable -- in theory to do the "salvage epistemology" correctly, but sadly I suspect that in practice 90% of wannabe rationalists will do it incorrectly.

Not sure what the correct strategy is here, because telling people "be careful" will probably just result in them saying "yes, we already are" when in fact they are not.

Why, for example, are you trying to “salvage” Buddhism, and not Old Believer-ism?

That actually makes sense. I would assume that each of them contains maybe 5% of useful stuff, but almost all useful stuff of Old Believer-ism is probably shared with the rest of Christianity, and maybe 1/3 of it is already "in the water supply" if you grew up in a Christian culture.

Also, the "Buddhism" popular in the West is probably quite different from the original Buddhism; it is filtered for modern audience. Big focus on meditation and equanimity, and mostly silence about Buddha doing literal miracles or how having the tiniest sexual thought will fuck up your afterlife. (So it's kinda like Jordan B. Peterson's idea of Christianity, compared to the actual Christianity.) So I wouldn't be surprised if the Western "Buddhism" actually contained 10% of useful stuff.

But of course, 10% correct still means 90% incorrect. And when I hear some people in rationalist community talk about Buddhism, they do not sound like someone who is 90% skeptical.

comment by TAG · 2022-04-30T19:20:52.901Z · LW(p) · GW(p)

Broadly speaking, it's useful to have a wide range of ideas, because you can't guarantee that the ideas that are "local" to you are the best ones. It's gradient descent stuff.

comment by Gordon Seidoh Worley (gworley) · 2022-05-01T17:28:03.687Z · LW(p) · GW(p)

You do say "a lot"/"most", but at least for me this is totally backwards. I only looked at woo type stuff because it was the only place attempting to explain some aspects of my experience. Rationalists were leaving bits of reality on the floor so I had to look elsewhere and then perform hermeneutics to pick out the useful bits (and then try to bring it back to rationalists, with varying degrees of success).

comment by Elizabeth (pktechgirl) · 2022-04-30T23:42:16.689Z · LW(p) · GW(p)

"Salvage" seems like a very strong frame/asserting what you're trying to prove. Something that needs salvaging has already failed, and the implication is you're putting a bunch of work into fixing it.

An alternate frame would be "mining", where it's accepted that most of the rock in a mine is worthless but you dig through it in the hopes of finding something small but extremely valuable. It might need polishing or processing, but the value is already in it in a way it isn't for something that needs salvaging. 

My guess is that you (Jim) would agree with the implications of "salvage", but I wanted to make them explicit.

Replies from: jimrandomh
comment by jimrandomh · 2022-05-01T01:17:43.943Z · LW(p) · GW(p)

"Salvage" seems like a very strong frame/asserting what you're trying to prove. Something that needs salvaging has already failed, and the implication is you're putting a bunch of work into fixing it.

It only counts as salvage epistemology if the person doing the mining explicitly believes that the thing they're mining from has failed; that is, they've made a strong negative judgment and decided to push past it. I don't mean for the term to include cases where the thing has failed but the person mistakenly believes it's good.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2022-05-01T01:48:26.527Z · LW(p) · GW(p)

Suggestion withdrawn.

comment by AnnaSalamon · 2022-04-30T03:32:54.636Z · LW(p) · GW(p)

Do you have a principled model of what an "epistemic immune system" is and why/whether we should have one?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2022-04-30T03:43:02.702Z · LW(p) · GW(p)

To elaborate a bit where I'm coming from here: I think the original idea with LessWrong was basically to bypass the usual immune system against reasoning, to expect this to lead to some problems, and to look for principles such as "notice your confusion," "if you have a gut feeling against something, look into it and don't just override it," "expect things to usually add up to normality" that can help us survive losing that immune system. (Advantage of losing it: you can reason!)

My guess is that that (having principles in place of a reflexive or socially mimicked immune system) was and is basically still the right idea. I didn't used to think this but I do now.

An LW post from 2009 that seems relevant (haven't reread it or its comment thread; may contradict my notions of what the original idea was for all I know): Reason as Memetic Immune Disorder [LW · GW]

Replies from: jimrandomh
comment by jimrandomh · 2022-04-30T06:33:32.059Z · LW(p) · GW(p)

I don't have a complete or principled model of what an epistemic immune system is or ought to be, in the area of woo, but I have some fragments.

One way of looking at it is that we look at a cluster of ideas, form an outside view of how much value and how much crazymaking there is inside it, and decide whether to engage. Part of the epistemic immune system is tracking the cost side of the corresponding cost/benefit. But this cost/benefit analysis doesn't generalize well between people; there's a big difference between a well-grounded well-studied practitioner looking at their tenth fake framework, and a newcomer who's still talking about how they vaguely intend to read the Sequences.

Much of the value, in diving into a woo area, is in the possibility that knowledge can be extracted and re-cast into a more solid form. But the people who are still doing social-mimicking instead of cost/benefit are not going to be capable of doing that, and shouldn't copy strategies from people who are.

(I am trying not to make this post a vagueblog about On Intention Research, because I only skimmed it and I don't know the people involved well, so I can't be sure it fits the pattern, but the parts of it I looked at match what I would expect from a group trying to ingest fake frameworks that they weren't skilled enough to handle.)

I think there's something important in the distinction between declarative knowledge and metis, and something particularly odd about the process of extracting metis from an area where you believe all of the declarative knowledge in the vicinity is false. I think when a group does that together, they wind up generating a new thing attached to a dialect full of false rhymes, where they sound like they're talking about the false things but all the jargon is slightly figurative and slightly askew. When I think of what it's like to be next to that group but not in it, I think of Mage: The Ascension.

Engaging with woo can be a rung on the countersignaling hierarchy: a way to say, look, my mental integrity is so strong that I can have crystal conversations and still somehow stay sane. This is orthogonal to cost/benefit, but I expect anyone doing it for that reason to tell themself a different story. I'm not sure how much of a thing this is, but would be surprised if it wasn't a thing at all.

Replies from: ChristianKl
comment by ChristianKl · 2022-05-10T18:03:45.892Z · LW(p) · GW(p)

I'm skeptical that Leverage's intention research is well described as them trying to extract wisdom out of an existing framework that someone outside of Leverage created. They were interested in doing original research on related phenomena. 

It's unclear to me how to do a cost-benefit analysis when doing original research in any domain. 

If I look at credence calibration as a phenomenon to investigate and do original research on it, that research involves playing around with estimating probabilities, and it's hard to know beforehand which exercises will create benefits. Original research involves pursuing a lot of strands that won't pan out. 

Credence calibration is similar to the phenomenon of vibes that Leverage studied in the sense that it's a topic where it's plausible that some value is gained by understanding the underlying phenomena better. It's unclear to me how you would do the related cost-benefit analysis because it's in the nature of doing original research that you don't really know the fruits of your work beforehand.

comment by AnnaSalamon · 2022-05-04T18:08:01.019Z · LW(p) · GW(p)

I want to state more explicitly where I’m coming from, about LW and woo.

One might think: “LW is one of few places on the internet that specializes in having only scientific materialist thoughts, without the woo.”

My own take is more like: “LW is one of few places on the internet that specializes in trying to have principled, truth-tracking models and practices about epistemics, and on e.g. trying to track that our maps are not the territory, trying to ask what we’d expect to see differently if particular claims are true/not-true, trying to be a “lens that sees its own flaws [LW · GW].””

Something I don’t want to see on LW, that I think at least sometimes happens under both the headings of “fake frameworks” and the headings of “woo” (and some other places on LW too), is something like “let’s not worry about the ultimate nature of the cosmos, or what really cleaves nature at the joints right now.  Let’s say some sentences because saying these sentences seems locally useful.”

I worry about this sort of thing being on LW because, insofar as those sentences make truth-claims about the cosmos, deciding to “take in” those sentences “because they’re useful,” without worrying about the nature of the cosmos means deciding to acquire intentionally-unreflective views on the nature of the cosmos, which is not the thing I hope we’re here for.  And it risks muddying the rationality project thereby.

(Alternate version I’m fine with, that is a bit close to this: “observe that people seem to get use out of taking in sentences X and Y.  Ask what this means about the cosmos.”)

(Additional alternate version I’m fine with: notice that a hypothesis seems to be paying at least some rent, despite being false.  Stay interested in both facts.  Play around and see how much more rent you can extract from the hypothesis, while still tracking that it is false or probably-false, and while still asking what the unified ultimate nature of the cosmos might be, that yields this whole thing.  I think this is another thing people sometimes do under the heading of "fake frameworks," and I like this one.)

Something else I don’t want to see on LW (well, really the same thing again, but stated differently because I think it might be perceived differently) is: “let’s not read author X, or engage with body of material Y, or hypothesis Z, because it’s woo.”  (Or: “… because people who engaged with that seem to have come to bad ends” or “because saying those sentences seems to cause instrumental harm.”) I don’t want this because this aversion, at least stated in this way, doesn’t seem principled, and LW is one of the few places on the internet where many folks aspire to avoiding unprincipled social mimicry of “let’s think this way and not that way,” and toward instead asking how our minds work and how epistemics work and what this means about whatever works for forming accurate maps.

(I really like having meta-level conversations about whether we should talk in these ways, though!  And I like people who think we should talk in the ways I’m objecting to stating their disagreements with me/whoever, and the reasons for their disagreements, and then folks trying together to figure out what’s true.  That’s part of how we can do the actually principled thing.  By not shaming/punishing particular perspectives, but instead arguing with them.)

Replies from: Xandre Maxwell
comment by Jes (Xandre Maxwell) · 2022-05-04T20:26:35.054Z · LW(p) · GW(p)

Are people here mostly materialists? I'm not. In a Cartesian sense, the most authentic experience possible is that of consciousness itself, with matter being something our mind imagines to explain phenomena that we think might be real outside of our imagination (but we can never really know).

In other words, we know that idealism is true, because we experience pure ideas constantly, and we suspect that the images our minds serve up might actually correspond to some reality out there (Kant's things-in-themselves).

The map might really be the territory. Like, if you read a book by Tolkien and find that the map doesn't match the text, which is right? And if Tolkien clarified, would he be right, considering the thing he's talking about doesn't even exist? Except it kinda does, in that we're debating real things, and they impact us, etc?

I don't think we're anywhere near approaching a meaningful metaphysics, so the confidence of the materialists seems misplaced. I mean, yeah, I've seen matter, so I know it's real. But I've also seen matter in my dreams (including under a microscope, where it continued to be "real").

Sorry to rant on this single word!

Replies from: rsaarelm, Kaj_Sotala
comment by rsaarelm · 2022-05-06T06:33:33.600Z · LW(p) · GW(p)

Are people here mostly materialists?

Okay, since you seem interested in knowing why people are materialists. I think it's the history of science up until now. The history of science has basically been a constant build-up of materialism.

We started out at prehistoric animism, where everything that happened, except that rock you just threw at another rock, was driven by an intangible spirit. The rock wasn't, since that was just you throwing it. And then people started figuring out successive compelling narratives about how more complex stuff is just rocks being thrown about. Planets being driven by angels? Nope, just gravitation and inertia. Okay, so comets don't have comet spirits, but surely living things have spirits [LW · GW]. Turns out no, molecular biology is a bit tricky, but it seems to still paint a (very small) rocks-thrown-about picture that convincingly gets you a living tree or a cat. Human minds looked unique until people started building computers. The same story is repeating again: people point to human activities as proofs of the indomitable human spirit, then someone builds an AI to do them. Douglas Hofstadter was still predicting in 1979 that mastering chess would have to involve encompassing the whole of human cognition, and had to eat crow in the introduction of the 20th anniversary edition of his book.

So to sum up, simple physics went from spiritual (Aristotle's "rocks want to go down, smoke wants to go up") to materialist, the outer space went from spiritual to materialist, biological life went from spiritual to materialist and mental acts like winning a chess game went from spiritual to materialist.

We're now down to the hard problem of consciousness, and we're also still missing a really comprehensive scientific picture for how you go from neurons to high-level human thought. So which way do you think this is going to go? A discovery that the spiritual world exists after all, and was hiding in the microtubules of the human brain all along, or people looking at the finished blueprint for how the brain works that explains things up to conscious thought and going "oh, so that's how it works" and it's all just rocks thrown about once again. So far we've got a perfect record of everybody clamoring for the first option and then things turning out to be the second one.

Replies from: Xandre Maxwell
comment by Jes (Xandre Maxwell) · 2022-05-06T15:57:24.903Z · LW(p) · GW(p)

Thank you, this makes a lot of sense. I do see how the history of science kind of narrows its way down towards materialism, and if we assume that path will continue in the same direction, pure materialism is the logical outcome.

But...

I disagree with the narrative that science is narrowing in on materialism. Popular culture certainly interprets the message of Science with a capital S that way, but reading actual scientific work doesn't leave that impression at all.

The message I got from my middle school science classes was that science is profoundly uncertain of what matter is, but that it appears to manifest probabilistically under the governance of forces, which are really just measurable tendencies of the behavior of matter, whose origin we also have no guess at.

The spiritualists were wrong in their specific guesses, but so were the scientists, as you yourself note when citing Aristotle.

I have no doubt you will be on the right side of history. The priesthood will change the definitions of matter to accommodate whatever spiritual magic we discover next. Past scriptures will be reinterpreted to show how science was always progressing here: the present is the logical endpoint of the past, or at least of our team in the past.

So far we've got a perfect record of everybody clamoring for the first option and then things turning out to be the second one.

That's because materialists write the record. It's easy to construct History to serve Ideology, so history, at least epic narrative history like this, is a bad teacher when received from power. Primitive pagan mythology stumbled ignorantly towards the True Religion; or even the inverse of your claim: history is full of self-sure clockwork Newtonians eating crow when the bizarre, uncertain nature of modern physics slowly unraveled before their arrogant, annoyed eyes.

---

Thanks again for taking the time to discuss this btw, your response answered my question very well. After all, I'm arguing about whether people should be materialists, but you only explained why they are, so feel free to ignore my ramblings and accept my gratitude :)

Replies from: rsaarelm
comment by rsaarelm · 2022-05-07T11:56:37.418Z · LW(p) · GW(p)

You seem to be claiming that whatever does get discovered, which might be interpreted as proof of the spiritual in another climate, will get distorted to support the materialist paradigm. I'm not really sure how this would work in practice. We already have something of a precommitment to what we expect something "supernatural" to look like, ontologically basic mental entities [LW · GW]. So far the discoveries of science have been nothing like that, and if new scientific discoveries suddenly were, I find it very hard to imagine that a great many people outside of the "priesthood" would not sit up and pay very close attention.

I don't really follow your arguments about what matter is and past scientists being wrong. Science improved and proved past scientists mistaken; that's the whole idea of science. Spiritualists have not improved much so far. And the question with matter isn't so much what it is (what would an answer to this look like, anyway?), but how matter acts, and science has done a remarkably good job at that part.

comment by Kaj_Sotala · 2022-05-04T20:36:32.482Z · LW(p) · GW(p)

Are people here mostly materialists?

Yes.

Replies from: Xandre Maxwell
comment by Jes (Xandre Maxwell) · 2022-05-04T20:51:26.204Z · LW(p) · GW(p)

Okay. I'm curious to understand why! Are you yourself materialist? Any recommended reading or viewing on the topic, specifically within the context of the rationalist movement?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2022-05-05T18:23:43.148Z · LW(p) · GW(p)

I'd say that something-like-materialism feels like the most consistent and likely explanation. Sure we could assume that maybe, despite all appearances, there isn't a real world of matter out there after all... but given that we do assume such a world for pretty much everything else we do, it would seem like an unjustifiably privileged hypothesis [? · GW] to assume anything else.

There's a pretty strong materialist viewpoint in the original LW sequences [? · GW], though it's kinda scattered across a number of posts so I'm not sure which ones in particular I'd recommend (besides the one about privileged hypotheses).

comment by romeostevensit · 2022-04-30T16:59:13.545Z · LW(p) · GW(p)

This sounds a bit harsher than I really intend, but... self-described rationalists and post-rationalists could mostly use a solid course in something like Jonathan Baron's Thinking and Deciding, i.e., obtaining a broad and basic grounding in practical epistemology in the first place.

Replies from: Pattern
comment by Pattern · 2022-05-04T17:03:09.073Z · LW(p) · GW(p)

Is that a book?

Replies from: Vaniver, ChristianKl
comment by Vaniver · 2022-05-04T20:28:41.788Z · LW(p) · GW(p)

I reviewed it on LW [LW · GW], almost 10 years ago now.

comment by ChristianKl · 2022-05-04T17:29:25.523Z · LW(p) · GW(p)

It's an academic textbook on rationality. It defines rationality as:

The best kind of thinking, which we shall call rational thinking, is whatever kind of thinking best helps people achieve their goals. If it should turn out that following the rules of formal logic leads to eternal happiness, then it is “rational thinking” to follow the laws of logic (assuming that we all want eternal happiness). If it should turn out, on the other hand, that carefully violating the laws of logic at every turn leads to eternal happiness, then it is these violations that we shall call “rational.”

When I argue that certain kinds of thinking are “most rational,” I mean that these help people achieve their goals. Such arguments could be wrong. If so, some other sort of thinking is most rational.

comment by gallabytes · 2022-04-30T05:25:27.834Z · LW(p) · GW(p)

This seems roughly on point, but is missing a crucial aspect - whether or not you're currently a hyper-analytical programmer is actually a state of mind which can change. Thinking you're on one side when actually you've flipped can lead to some bad times, for you and others.

Replies from: jimrandomh
comment by jimrandomh · 2022-04-30T06:02:53.985Z · LW(p) · GW(p)

I'm genuinely uncertain whether this is true. The alternate hypothesis is that it's more of a skillset than a frame of mind, which means that it can atrophy, but only partially and only slowly.

comment by AnnaSalamon · 2022-04-30T19:07:41.302Z · LW(p) · GW(p)

This is a bit off-topic with respect to the OP, but I really wish we’d more often say “aspiring rationalist” rather than “rationalist.” (Thanks to Said for doing this here.) The use of “rationalist” in parts of this comment thread and elsewhere grates on me. I expect most uses of either term are just people using the phrase other people use (which I have no real objection to), but it seems to me that when we say “aspiring rationalist” we at least sometimes remember that to form a map that is a better predictor of the territory requires aspiration, effort, forming one’s beliefs via mental motions that’ll give different results in different worlds. While when we say “rationalist”, it sounds like it’s just a subculture.

TBC, I don’t object to people describing other people as “self-described rationalists” or similar, just to using “rationalist” as a term to identify with on purpose, or as the term for what LW’s goal is. I’m worried that if we intentionally describe ourselves as “rationalists,” we’ll aim to be a subculture (“we hang with the rationalists”; “we do things the way this subculture does them”) instead of actually asking the question of how we can form accurate beliefs.

I non-confidently think “aspiring rationalist” used to be somewhat more common as a term, and its gradual disappearance (if it has been gradually disappearing; I’m not sure) is linked with some LWers and similar having less of an aspirational identity, and more of a “we’re the set of people who tread water while affiliating with certain mental habits or keywords or something” identity.

Replies from: Raemon, jimrandomh, steven0461, Valentine
comment by Raemon · 2022-05-01T08:05:28.240Z · LW(p) · GW(p)

I dutifully tried to say "aspiring rationalist" for awhile, but in addition to the syllable count thing just being too much of a pain, it... feels like it's solving the wrong problem.

An argument that persuaded me to stop caring about it as much: communities of guitarists don't call themselves "aspiring guitarists". You're either doing guitaring, or you're not. (in some sense similar for being a scientist or researcher).

Meanwhile, I know at least some people definitely meet any reasonable bar for "actually a goddamn rationalist". If you intentionally reflect on and direct your cognitive patterns in ways that are more likely to find true beliefs and accomplish your goals, and you've gone off into the world and solved some difficult problems that depended on you being able to do that... I think you're just plain a rationalist.

I think I myself am right around the threshold where I think it might reasonably make sense to call myself a rationalist. Reasonable people might disagree. I think 10 years ago I was definitely more like "a subculture supporting character." I think Logan Strohl and Jim Babcock and Eliezer Yudkowsky and Elizabeth van Nostrand and Oliver Habryka each have some clear "actually the sort of rationalist you might want to pay money to do rationality at professional rates" thing going on. It'd feel dumb to me for them to go out of their way to tack-on "aspiring" (even if, of course, there are a ton more skills they could learn or improve at)

I guess you do sometimes have "students" vs "grad students" vs "doctors" of various stripes. You probably don't call yourself a scientist while you're still getting your biology degree. But even a 2nd year undergrad biology major is doing something that someone who retweets "I fucking love science" memes is not. A guy who knows 5 chords on guitar and can play a few songs is in some sense straightforwardly a "guitarist", in a way that a guy who hangs out in the guitar club but doesn't play is not. Could he be better at guitar? Sure, and so could the professional guitarist who can improvise an entire song.

I agree there's a problem where rationalism feels prone to "being a subculture", and there is a need to guard against that somehow. But I don't think the "aspiring" thing is the way to go about it.

Replies from: Valentine
comment by Valentine · 2022-05-02T15:29:27.776Z · LW(p) · GW(p)

I love your observations here, and the quality of grounding in a clear intuition.

I don't think you can avoid the subculture thing. The discipline doesn't exist in a void the way math kind of does. Unless & until you can actually define the practice of rationality, there's no clear dividing line between the social scene and the set of people who practice the discipline. No clear analogue to "actually playing a guitar".

Like, I think I follow your intuition, but consider:

Meanwhile, I know at least some people definitely meet any reasonable bar for "actually a goddamn rationalist". If you intentionally reflect on and direct your cognitive patterns in ways that are more likely to find true beliefs and accomplish your goals, and you've gone off into the world and solved some difficult problems that depended on you being able to do that... I think you're just plain a rationalist.

I'm reasonably sure a lot of people here would consider me a great example of a non-rationalist. Lots of folk told me that to my face while I worked at CFAR. But the above describes me to an utter T. I'm just doing it in a way that the culture here doesn't approve of and thinks is pretty nutty. Which is fine. I think the culture here is doing its "truth-seeking" in a pretty nutty way too. Y'all are getting great results predicting Covid case numbers, and I'm getting great results guiding people to cure their social anxiety and depression. To each their own.

I think what you're talking about is way, way more of an aesthetic [LW · GW] than you might realize. Like, what are you really using to detect who is and isn't "actually a goddamn rationalist"? My guess is it's more of a gut sense that you then try to examine upon reflection.

Is Elon Musk "actually a goddamn rationalist"? He sure seems to care about what's true and about being effective in the world. But I'm guessing he somehow lands as less of a central example than Oli or Eliezer do. If so, why?

If Elon doesn't do it for you, insert some other successful smart person who mysteriously doesn't gut-ping as "actually a goddamn rationalist".

If I'm way off here, I'd actually be pretty interested in knowing that. Because I'd find that illuminating as to what you mean by rationalism.

But if I'm basically right, then you're not going to separate the discipline from the social scene with a term. You'll keep seeing social status and perception of skill conflated. Not exactly overlapping, but muddled nonetheless.

comment by jimrandomh · 2022-04-30T19:31:50.476Z · LW(p) · GW(p)

I agree that "aspiring rationalist" captures the desired meaning better than "rationalist", in most cases, but... I think language has some properties, studied and documented by linguists, which define a set of legal moves, and rationalist->aspiring rationalist is an invalid move. That is: everyone using "aspiring rationalist" is an unstable state from which people will spontaneously drop the word aspiring, and people in a mixed linguistic environemnt will consistently adopt the shorter one. Aspiring Rationalist just doesn't fit within the syllable-count budget, and if we want to displace the unmodified term Rationalist, we need a different solution.

Replies from: AnnaSalamon, Benito
comment by AnnaSalamon · 2022-04-30T19:41:25.736Z · LW(p) · GW(p)

I don't know; finding a better solution sounds great, but there aren't that many people who talk here, and many of us are fairly reflective and ornery, so if a small group keeps repeatedly requesting this and doing it, it'd probably be sufficient to keep "aspiring rationalist" as at least a substantial minority of what's said.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2022-05-01T07:26:25.666Z · LW(p) · GW(p)

FWIW, I would genuinely use the term 'aspiring rationalist' more if it struck me as more technically correct — in my head 'person aspiring to be rational' ≈ 'rationalist'. So I parse aspiring rationalist as 'person aspiring to be a person aspiring to be rational'.

'Aspiring rationalist' makes sense if I equate 'rationalist' with 'rational', but that's exactly the thing I don't want to do.

Maybe we just need a new word here. E.g., -esce is a root meaning "to become" (as in coalesce, acquiesce, evanesce, convalescent, iridescent, effervescent, quiescent). We could coin a new verb "rationalesce" and declare it means "to try to become more rational" or "to pursue rationality", then refer to ourselves as the rationalescents.

Like adolescents, except for becoming rational rather than for becoming adult. :P

Replies from: elityre
comment by Eli Tyre (elityre) · 2022-05-03T04:24:10.438Z · LW(p) · GW(p)

I'm in for coining a new word to refer to exactly what we mean.

I find it kind of annoying that if I talk about "rationality" on say, twitter, I have to wade through a bunch of prior assumptions that people have about what the term means (eg "trying to reason through everything is misguided. Most actual effective deciding is intuitive.")

I would rather refer to the path of self-honesty and aspirational epistemic perfection by some other name that doesn't have prior associations, in the same way that if a person says "I'm a circler / I'm into Circling", someone will reply "what's Circling?".

comment by Ben Pace (Benito) · 2022-05-01T09:36:09.332Z · LW(p) · GW(p)

“Effective Altruist” has six syllables, “Aspiring Rationalist” has seven. Not that different.

I will try using it in my writing more for a while.

Replies from: Raemon, Jayson_Virissimo
comment by Raemon · 2022-05-01T17:11:36.106Z · LW(p) · GW(p)

Note what people actually say in conversation is "EA" (suggests "AR" as a replacement)

Replies from: Benito
comment by Ben Pace (Benito) · 2022-05-01T17:25:41.715Z · LW(p) · GW(p)

Hm, the "AR scene" already refers to something, but maybe we could fight out our edge in the culture.

Replies from: Raemon, Dustin
comment by Raemon · 2022-05-01T17:28:51.754Z · LW(p) · GW(p)

There's also the good ol' Asp Rat abbreviation.

Replies from: jimrandomh, Valentine, maia
comment by jimrandomh · 2022-05-02T18:23:28.234Z · LW(p) · GW(p)

Autocompletes to asperger-rationalist for me, and I see Valentine reports the same. But maybe this frees up enough syllable-budget to spend one on bypassing that. How about: endevrat, someone who endeavours to be rational.

(This one is much better on the linguistic properties, but note that there's a subtle meaning shift: it's no longer inclusive of people who aspire but do not endeavour, ie people who identify-with rationality but can't quite bring themselves to read or practice. This seems important but I don't know whether it's better or worse.)

Replies from: Raemon
comment by Raemon · 2022-05-02T18:24:28.169Z · LW(p) · GW(p)

Autocompletes to asperger-rationalist for me

(this was the intended joke)

Replies from: Valentine
comment by Valentine · 2022-05-02T20:23:36.681Z · LW(p) · GW(p)

OOoooooohhhhhhhhhh.

comment by Valentine · 2022-05-02T14:58:03.015Z · LW(p) · GW(p)

Alas, my brain autocompletes "Asp Rat" to "Asperger's-like rationalist".

comment by maia · 2022-05-01T19:36:50.892Z · LW(p) · GW(p)

That one's also a little hard to pronounce, so I think we'd have to collapse it to "assrat".

Replies from: illicitlearning
comment by illicitlearning · 2022-05-02T03:09:45.149Z · LW(p) · GW(p)

Could go "aspirat". (Pronounced /ˈæs.pɪ̯.ɹæt/, not /ˈæsˈpaɪ̯.ɹɪʔ/.)

comment by Dustin · 2022-05-01T17:30:08.140Z · LW(p) · GW(p)

I find "AR" more difficult to actually say out loud than "EA". 

Replies from: Valentine
comment by Valentine · 2022-05-02T14:58:54.686Z · LW(p) · GW(p)

Just think like a pirate.

comment by Jayson_Virissimo · 2022-05-01T17:11:27.191Z · LW(p) · GW(p)

If "rationalist" is a taken as a success term, then why wouldn't "effective altruist" be as well? That is to say: if you aren't really being effective, then in a strong sense, you aren't really an "effective altruist". A term that doesn't presuppose you have already achieved what you are seeking would be "aspiring effective altruist", which is quite long IMO.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-05-01T18:44:33.927Z · LW(p) · GW(p)

One man’s modus tollens is another’s modus ponens—I happen to think that the term “effective altruist” is problematic for exactly this reason.

comment by steven0461 · 2022-04-30T22:12:02.001Z · LW(p) · GW(p)

As I see it, "rationalist" already refers to a person who thinks rationality is particularly important, not necessarily a person who is rational, like how "libertarian" refers to a person who thinks freedom is particularly important, not necessarily a person who is free. Then literally speaking "aspiring rationalist" refers to a person who aspires to think rationality is particularly important, not to a person who aspires to be rational. Using "aspiring rationalist" to refer to people who aspire to attain rationality encourages people to misinterpret self-identified rationalists as claiming to have attained rationality. Saying something like "person who aspires to rationality" instead of "aspiring rationalist" is a little more awkward, but it respects the literal meaning of words, and I think that's important.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-04-30T22:23:49.520Z · LW(p) · GW(p)

This was not the usage in the Sequences, however, and otherwise at the time the Sequences were written.

Replies from: gjm
comment by gjm · 2022-05-01T16:32:34.916Z · LW(p) · GW(p)

I agree that it was not the usage in the Sequences, and that it was therefore not (or at least not always) the usage within the community that coalesced around EY's blogging. But if "otherwise at the time the Sequences were written" is meant to say more than that -- if you're saying that there was a tendency for "rationalist" to mean something like "person skilled in the art of reason" apart from EY's preference for using it that way -- then I would like to see some evidence. I don't think I have ever seen the word used in that way in a way that wasn't clearly causally descended from EY's usage.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-05-01T16:35:53.747Z · LW(p) · GW(p)

I was referring to usage here on Less Wrong (and in adjacent/related communities). In other words—

But if “otherwise at the time the Sequences were written” is meant to say more than that

—nope, it is not meant to say more than that.

comment by Valentine · 2022-05-02T15:31:01.913Z · LW(p) · GW(p)

Maybe what you're actually looking for is something like "aspiring beisutsuka". Like there's an ideal you're aiming for but can maybe approach only asymptotically. Just don't equate "rationalist" with "beisutsuka" and you're good.

Replies from: jimrandomh, Raemon
comment by jimrandomh · 2022-05-02T18:17:22.395Z · LW(p) · GW(p)

The same model that says aspiring rationalist will self-replace with rationalist says aspiring beisutsuka will self-replace with beisutsuka. But beisutsuka is a bit better than rationalist on its own terms; it emphasizes being a practitioner more, and presupposes the skill less. And it avoids punning with a dozen past historical movements that each have their own weird connotations and misconceptions. Unfortunately the phonology and spelling of beisutsuka is 99.9th-percentile tricky, and that might mean it's also a linguistically invalid move.

Replies from: Rana Dexsin
comment by Rana Dexsin · 2022-05-02T18:57:14.763Z · LW(p) · GW(p)

Unfortunately the phonology and spelling of beisutsuka is 99.9th-percentile tricky and that might mean it's also a linguistic invalid move.

Some rabbit-hole expansion on this:

First of all, you're missing an “i” at the end (as attested in “Final Words [LW · GW]”), so that's some direct evidence right there.

The second half is presumably a loan from Japanese 使い “tsukai”, “one who uses/applies”, usable as a suffix. In fiction and pop culture, it shows up prominently in 魔法使い “mahoutsukai”, “magic user” thus “wizard” or “sorcerer”; I infer this may have been a flavor source given Eliezer's other fandom attachments.

The first half is presumably a transliteration of “Bayes” as ベイス “beisu”, which devoices the last mora for reasons which are not clear to me. Compare to Japanese Wikipedia's article on Thomas Bayes which retains the ズ (zu) at the end, including in compounds related to Bayesian probability and inference.

comment by Raemon · 2022-05-02T16:10:45.197Z · LW(p) · GW(p)

I kinda like this

comment by Richard_Kennaway · 2022-04-30T10:16:03.438Z · LW(p) · GW(p)

Go to such people not for their epistemology, which is junk, but for whatever useful ground-level observations can be separated from the fog.

comment by Davis_Kingsley · 2022-05-01T06:15:55.546Z · LW(p) · GW(p)

Crossposted from Facebook:

The term used in the past for a concept close to this was "Fake frameworks" -- see for instance Val's post in favor of it from 2017: https://www.lesswrong.com/.../in-praise-of-fake-frameworks [LW · GW]

Unfortunately I think this proved to be a quite misguided idea in practice, and one that was made more dangerous by the fact that it seems really appealing in principle. As you imply, the people most interested in pursuing these frameworks are often not I think the ones who have the most sober and evenhanded evaluations of such, which can lead to unfortunate results.

(Also, uh, note that I myself converted to Catholicism, but not because of this sort of thing, so give or subtract points from my reply as you will.)

comment by shminux · 2022-04-30T04:27:06.127Z · LW(p) · GW(p)

But there's a bad thing happens when you have a group that are culturally adjacent to the hyper-analytical programmers, but who aren't that sort of person themselves.

I... don't think "hyper-analytical programmers" are a thing. We are all susceptible to the risk of "falling into crazy" to a larger degree than we think we are. There is something in the brain where openness, which is necessary for Bayesian updating, also means suspending your critical faculties to consider a hypothetical model seriously, and so one runs the risk that the hypothetical takes hold and the confirming evidence has an outsize influence on the inner "model accuracy evaluation engine".
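
(A minimal toy sketch of that last clause, purely my own illustration with made-up likelihoods rather than anything from the comment: an agent who entertains a hypothesis but only partially registers disconfirming evidence will drift toward believing it even when the data comes from a world where it's false.)

```python
import random

# Toy sketch (illustrative assumptions only): hypothesis H is actually false,
# but a "captured" agent discounts every downward update, so confirming
# evidence has an outsize influence on its inner model-accuracy evaluation.

random.seed(0)

P_E_GIVEN_H = 0.6      # assumed likelihood of evidence E if H is true
P_E_GIVEN_NOT_H = 0.5  # assumed likelihood of E if H is false (the real world)
DISCOUNT = 0.3         # fraction of each downward update the captured agent applies

def update(prior, e, discount=1.0):
    """One Bayes update on binary evidence e; downward moves are scaled by `discount`."""
    like_h = P_E_GIVEN_H if e else 1 - P_E_GIVEN_H
    like_not_h = P_E_GIVEN_NOT_H if e else 1 - P_E_GIVEN_NOT_H
    posterior = prior * like_h / (prior * like_h + (1 - prior) * like_not_h)
    if posterior < prior:  # disconfirming evidence only partially registers
        posterior = prior + discount * (posterior - prior)
    return posterior

calibrated = captured = 0.5
for _ in range(500):
    e = 1 if random.random() < P_E_GIVEN_NOT_H else 0  # data generated from the not-H world
    calibrated = update(calibrated, e)            # symmetric updates
    captured = update(captured, e, DISCOUNT)      # asymmetric updates

print(f"calibrated agent's P(H): {calibrated:.3f}")  # drifts toward 0, as it should
print(f"captured agent's P(H):   {captured:.3f}")    # drifts toward 1 despite H being false
```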

Replies from: TAG
comment by TAG · 2022-04-30T15:29:53.937Z · LW(p) · GW(p)

Yep. If you were crazy, what would that feel like from the inside?

Replies from: gallabytes
comment by gallabytes · 2022-04-30T17:16:33.084Z · LW(p) · GW(p)

For me it mostly felt like I and my group of closest friends were at the center of the world, with the last hope for the future depending on our ability to hold to principle. There was a lot of prophecy of varying quality, and a lot of importance placed suddenly on people we barely knew, then rapidly withdrawn when those people weren't up for being as crazy as we were.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2022-05-03T16:55:37.266Z · LW(p) · GW(p)

Thanks.  Are you up for saying more about what algorithm (you in hindsight notice/surmise) you were following internally during that time, and how it did/didn't differ from the algorithm you were following during your "hyper-analytical programmer" times?

comment by Said Achmiz (SaidAchmiz) · 2022-04-30T10:43:57.428Z · LW(p) · GW(p)

This sort of “salvage epistemology” can also turn “hyper-analytical programmers”[1] into crazy people. This can happen even with pure ideas, but it’s especially egregious when you apply this “salvage epistemology” approach to, say, taking drugs (which, when I put it like that, sounds completely insane, and yet is apparently rather common among “rationalists”…).


  1. To the extent that such people even exist; actually, I mostly agree with shminux [LW(p) · GW(p)] that they basically do not. ↩︎

Replies from: benjamincosman
comment by benjamincosman · 2022-04-30T13:03:53.152Z · LW(p) · GW(p)

it’s especially egregious when you apply this “salvage epistemology” approach to, say, taking drugs

I'm not so certain of that? Of the two extreme strategies "Just Say No" and "do whatever you want man, it feels goooooood", Just Say No is the clear winner. But when I've interacted with reasonable-seeming people who've also done some drugs, it looks like "here's the specific drug we chose and here's our safety protocol and here's everything that's known to science about effects and as you can see the dangers are non-zero but low and we think the benefits are also non-zero and are outweighing those dangers". And (anecdotally of course) they and all their friends who act similarly appear to be leading very functional lives; no one they know has gotten into any trouble worse than the occasional bad trip or exposure to societal disapproval (neither of which was ultimately a big deal, and both of which were clearly listed in their dangers column).

Now it is quite possible they're still ultimately definitively wrong - maybe there are serious long-term effects that no one has figured out yet; maybe it turns out that the "everyone they know turns out ok" heuristic is masking the fact that they're all getting really lucky and/or the availability bias since the ones who don't turn out ok disappear from view; etc. And you can certainly optimize for safety by responding to all this with "Just Say No". But humans quite reasonably don't optimize purely for safety, and it is not at all clear to me that what these ones have chosen is crazy.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-04-30T13:25:21.501Z · LW(p) · GW(p)

What you say sounds like it could easily be very reasonable, and yet it has almost nothing in common, results-wise, with what we actually observe among rationalists who take psychedelics.

Replies from: Kaj_Sotala, benjamincosman
comment by Kaj_Sotala · 2022-04-30T15:41:24.344Z · LW(p) · GW(p)

I know several rationalists who have taken psychedelics, and the description does seem to match them reasonably well.

There's a selection bias in that the people who use psychedelics the least responsibly and go the most crazy are also the ones most likely to be noticed. Whereas the people who are appropriately cautious - caution which commonly also involves not talking about drug use in public - and avoid any adverse effects go unnoticed, even if they form a substantial majority.

Replies from: Viliam, SaidAchmiz
comment by Viliam · 2022-04-30T21:51:41.965Z · LW(p) · GW(p)

Unless it is a survivor bias, where among people who use drugs with approximately the same level of caution some get lucky and some get unlucky, and then we say "eh, those unlucky ones probably did something wrong, that would never happen to me".

Or maybe the causality is the other way round, and some people become irresponsible as a consequence of becoming addicted.

comment by Said Achmiz (SaidAchmiz) · 2022-04-30T17:09:25.849Z · LW(p) · GW(p)

The selection effect exists, I don’t doubt that. The question is how strong it is.

The phenomenon of people in the rationalist community taking psychedelics and becoming manifestly crazier as a result is common enough that in order for the ranks of such victims to be outnumbered substantially by “functional” psychedelic users, it would have to be the case that use of such drugs is, among rationalists, extremely common.

Do you claim that this is the case?
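
(To spell out the arithmetic behind this: a toy back-of-the-envelope with purely hypothetical numbers, not data about any actual community.)

```python
# Selection-effect arithmetic with made-up numbers: if some fraction of users
# has a visibly bad outcome, how large must the total user base be for quiet
# "functional" users to substantially outnumber the visible casualties?

visible_casualties = 10   # hypothetical count of publicly noticed bad outcomes
bad_outcome_rate = 0.05   # hypothetical chance that any given user goes visibly off the rails
outnumber_factor = 10     # "substantially outnumbered" taken to mean 10x as many functional users

# Assuming essentially all bad outcomes get noticed:
total_users = visible_casualties / bad_outcome_rate
functional_users = total_users - visible_casualties

print(f"implied total users:      {total_users:.0f}")      # 200
print(f"implied functional users: {functional_users:.0f}")  # 190
print("substantially outnumbered?", functional_users >= outnumber_factor * visible_casualties)
# With these illustrative numbers, the "mostly fine" picture requires roughly 200
# users at a 5% casualty rate -- i.e. use would have to be very common.
```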

Replies from: jimrandomh, Kaj_Sotala
comment by jimrandomh · 2022-04-30T18:25:13.739Z · LW(p) · GW(p)

it would have to be the case that use of such drugs is, among rationalists, extremely common.

It is, in fact, extremely common, including among sane stable people who don't talk about it.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-04-30T21:37:56.305Z · LW(p) · GW(p)

For avoidance of doubt, could you clarify whether you mean this comment to refer to “rationalist” communities specifically (or some particular such community?), or more broadly?

Replies from: jimrandomh
comment by jimrandomh · 2022-04-30T22:43:51.203Z · LW(p) · GW(p)

Both.

(One additional clarification: the common version of psychedelic use is infrequent, low dose and with a trusted sober friend present. Among people I know to use psychedelics often, as in >10x/year, the outcomes are dismal.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-04-30T23:52:21.774Z · LW(p) · GW(p)

Understood, thanks.

comment by Kaj_Sotala · 2022-04-30T17:37:14.275Z · LW(p) · GW(p)

I don't want to make a claim either way, since I don't know exactly how common the public thing you're referring to is. I know there's been some talk about this kind of thing happening, but I know neither exactly how many people we're talking about, nor with what reliability the cause can be specifically identified as being the psychedelics.

comment by benjamincosman · 2022-04-30T15:54:48.194Z · LW(p) · GW(p)

Who is “we”? This is more or less what I’ve observed from 100% of my (admittedly small sample size of) rationalist acquaintances who have taken psychedelics.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-04-30T17:05:47.781Z · LW(p) · GW(p)

“We” is “we on Less Wrong, talking about things we observe”. Of course I can’t speak for your private experience.

comment by Dustin · 2022-04-30T02:44:52.609Z · LW(p) · GW(p)

It's been my experience that many more people think they're immune to woo than actually are. I'm not sure the risk is worth the reward.

comment by tcheasdfjkl · 2022-05-01T19:50:41.661Z · LW(p) · GW(p)

Hmm, I agree that the thing you describe is a problem, and I agree with some of your diagnosis, but I think your diagnosis focuses too much on a divide between different Kinds Of People. You don't name the Kinds Of People explicitly, but you kind of sound (especially in the comments) like a lot of what you're talking about is a difference in how much Rationality Skill people have, which I think is not the right distinction? Like, I think I am neither a hyper-analytic programmer (certainly not a programmer) nor any kind of particularly Advanced rationalist, and I think I am not particularly susceptible to this particular problem (I'm certainly susceptible to other problems, just not this one, I think). I think it's more that people doing the salvage epistemology thing can kind of provide cover for people doing a different thing, where they actually respect and believe the woo traditions they're investigating, and a lack of clear signposting of beliefs especially makes this hard to navigate.

comment by A1987dM (army1987) · 2023-08-21T15:11:28.023Z · LW(p) · GW(p)

I dunno...  IME, when someone not capable of steelmanning him reads e.g. David Icke, what usually happens is that they just think he must be crazy or something and dismiss him out of hand, not that they start believing in literal reptilian humanoids.

comment by Tapatakt · 2022-05-04T20:34:05.043Z · LW(p) · GW(p)

That post increased the probability that I will overcome my laziness and finally write a post about the concept of "bright doublethink" in English. Thanks.

comment by Mateusz Bagiński (mateusz-baginski) · 2022-04-30T11:36:43.672Z · LW(p) · GW(p)

I'm not sure what the right decision process on whether to do salvage epistemology on any given subject should look like. Also, if you see or suspect that this woo-ish thingy X "is a mix of figurative stuff and dumb stuff" but decide that it's not worth salvaging because of infohazard, how do you communicate it? "There's a 10% probability that the ancient master Changacthulhuthustra discovered something instrumentally useful about the human condition, but reading his philosophy may mess you up, so you shouldn't." How many novices do you expect to follow a general consensus on that? My hunch is that if one is likely to fall into the crazy, they are also unlikely to let their outside view override the inside view; they'll assert "I calculated the expected value and it's positive" and rush into it. Also2, how does one know whether they are "experienced enough" to try salvaging anything for themselves? Also3, I don't think protecting new rationalists in this way would be helpful for their development.

To reduce the risks pointed out by the OP, I would rather aim at being more explicit when we're using salvage epistemology (here, just having this label can be helpful) and poke around people's belief systems more when they start displaying tentative signs of going crazy.

comment by Jes (Xandre Maxwell) · 2022-05-04T03:07:01.572Z · LW(p) · GW(p)

Or from the other angle - reason is a self-defeating joke meant to point to its own ridiculousness. But what can we as Christians salvage from this worldly conceit? One might even say, this is how modern European philosophy began.