How I started believing religion might actually matter for rationality and moral philosophy
post by zhukeepa · 2024-08-23T17:40:47.341Z · LW · GW · 41 comments
After the release of Ben Pace's extended interview with me about my views on religion [LW · GW], I felt inspired to publish more of my thinking about religion in a format that's more detailed, compact, and organized. This post is the first in my intended series of posts about religion.
Thanks to Ben Pace, Chris Lakin, Richard Ngo, Damon Pourtahmaseb-Sasi, Marcello Herreshoff, Renshin Lauren Lee, Mark Miller, and Imam Ammar Amonette for their feedback on this post, and thanks to Kaj Sotala, Tomáš Gavenčiak, Paul Colognese, and David Spivak for reviewing earlier versions of this post. Thanks especially to Renshin Lauren Lee and Imam Ammar Amonette for their input on my claims about religion and inner work, and Mark Miller for vetting my claims about predictive processing.
In Waking Up, Sam Harris wrote:[1]
But I now knew that Jesus, the Buddha, Lao Tzu, and the other saints and sages of history had not all been epileptics, schizophrenics, or frauds. I still considered the world’s religions to be mere intellectual ruins, maintained at enormous economic and social cost, but I now understood that important psychological truths could be found in the rubble.
Like Sam, I’ve also come to believe that there are psychological truths that show up across religious traditions. I furthermore think these psychological truths are actually very related to both rationality and moral philosophy. This post will describe how I personally came to start entertaining this belief seriously.
“Trapped Priors As A Basic Problem Of Rationality”
“Trapped Priors As A Basic Problem Of Rationality” was the title of an Astral Codex Ten blog post by Scott Alexander. Scott opens the post with the following:
Last month I talked about van der Bergh et al’s work on the precision of sensory evidence, which introduced the idea of a trapped prior. I think this concept has far-reaching implications for the rationalist project as a whole. I want to re-derive it, explain it more intuitively, then talk about why it might be relevant for things like intellectual, political and religious biases.
The post describes Scott's take on a predictive processing account of a certain kind of cognitive flinch that prevents certain types of sensory input from being perceived accurately, leading to beliefs that are resistant to updating.[2] Some illustrative central examples of trapped priors:
- Karl Friston has written about how a traumatized veteran might not hear a loud car as a car, but as a gunshot instead.
- Scott mentions phobias and sticky political beliefs as central examples of trapped priors.
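To make the mechanism concrete, here's a minimal toy sketch in code. This is my own illustration, not code from Scott's post or the predictive processing literature, and the update rule and numbers are entirely illustrative assumptions. It shows how an overweighted prior can stay stuck under a steady stream of contrary evidence:

```python
# Toy model of a trapped prior (illustrative assumptions throughout):
# perception is modeled as a precision-weighted blend of the prior and
# the raw evidence, and the belief then updates toward the blended
# *experience* rather than toward the raw evidence itself.

def perceive(prior: float, evidence: float, prior_weight: float) -> float:
    """Blend prior and raw evidence; a high prior_weight drowns out evidence."""
    return prior_weight * prior + (1 - prior_weight) * evidence

def run(prior_weight: float, steps: int = 20) -> float:
    belief = 0.95  # prior: "loud noises are gunshots" (0 = car, 1 = gunshot)
    for _ in range(steps):
        raw_evidence = 0.05           # the world keeps supplying car-like sounds
        experience = perceive(belief, raw_evidence, prior_weight)
        belief = experience           # the update consumes the contaminated
                                      # experience, not the raw evidence
    return belief

print(run(prior_weight=0.5))   # ~0.05: evidence gets through; the belief updates
print(run(prior_weight=0.99))  # ~0.79: the belief barely moves; the prior is trapped
```

The point of the sketch is the feedback loop: because each update consumes an experience already contaminated by the prior, an extreme weighting on the prior doesn't merely slow updating; it lets the prior keep approximately confirming itself no matter what the world supplies.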
I think trapped priors are very related to the concept that “trauma” tries to point at, but I think “trauma” tends to connote a subset of trapped priors that are the result of some much more intense kind of injury. “Wounding” is a more inclusive term than trauma, but tends to refer to trapped priors learned within an organism’s lifetime, whereas trapped priors in general also include genetically pre-specified priors, like a fear of snakes.
My forays into religion and spirituality actually began via the investigation of my own trapped priors, which I had previously articulated to myself as “psychological blocks”, and explored in contexts that were adjacent to therapy (for example, getting my psychology dissected at Leverage Research, and experimenting with Circling). It was only after I went deep in my investigation of my trapped priors that I learned of the existence of traditions emphasizing the systematic and thorough exploration of trapped priors. These tended to be spiritual traditions, which is where my interest in spirituality actually began.[3] I will elaborate more on this later.
Active blind spots as second-order trapped priors
One of the hardest things about working with trapped priors is recognizing when we’ve got them in the first place. When we have a trapped prior, we’re either consciously aware of it (for example, in the case of a patient seeking treatment for a phobia of dogs), or we have a second-order (meta-level) trapped prior that keeps us attached to the idea that the problem is entirely external. Consider the difference between “I feel bad around dogs, but that’s because I have a phobia of dogs” and “I feel bad around [people of X political party], and that’s because [people of X political party] are BAD”.
I think second-order trapped priors are related to the phenomenon where people sometimes seem to actively resist getting something that you try to point out to them. Think of a religious fundamentalist, or a family member who resists acknowledging their contributions to relational conflicts. I call this an active blind spot.
One thing that distinguishes active blind spots from blind spots in general is that there’s an element of fear and active resistance around “getting it”. In contrast, someone could have a “passive blind spot” in which they’re totally open to “getting it”, but simply haven’t yet been informed about what they’ve been missing.[4]
I think active blind spots and second-order trapped priors actually correspond pretty directly. This element of fear around “getting it” is captured in the first-order trapped prior, and the second-order trapped prior functions as a mechanism to obfuscate that you’re trying to “not get it”.
There are many parallels between active blind spots and lies – they both spread and grow [LW · GW]; their spreading and growing can both lead to outgrowths that have “lives of their own” disconnected from the larger whole from which they originated; and they’re both predicated on second-order falsehoods that “double down” on first-order falsehoods (a lie involves both a false assertion X and the second-order false assertion “the assertion X is true”, the latter of which distinguishes a lie from something false said by mistake). In some sense, an active blind spot is a lie, with the first-order falsehood being a perceptual misrepresentation (like the veteran “mishearing” the loud car as a gunshot) rather than a verbal misrepresentation.
I think it can get arbitrarily difficult to recognize when you’ve got active blind spots, especially when your meta-epistemology (i.e., how you discern where your epistemology is limited) might have had active blind spots baked into it since before you developed episodic memory, which I’ll describe later in this post.
Inner work ≈ the systematic addressing of trapped priors
For me, the concept of “inner work” largely refers to the systematic addressing of trapped priors, with the help of tools like therapy, psychedelics, and meditation – all of which Scott Alexander explicitly mentioned as potential tools for addressing trapped priors (see the highlighted section here). I’ve found inner work particularly valuable for noticing and addressing my own active blind spots, which has led to vastly improved relationships with family, romantic partners, colleagues, and friends, by virtue of me drastically improving at taking responsibility for my contributions to relational conflicts.
I think a lot of modern-day cults (e.g. Scientology, NXIVM) were so persuasive because their leaders were able to guide people through certain forms of inner work, leading to large positive effects in their psychology that they hadn’t previously conceived of as even being possible.
There are major risks involved in going deep into inner work. If one goes deep enough, it can amount to a massive refactor of “the codebase of one’s mind”, all while one tries to continue living one’s life. Just as massively refactoring a product’s codebase risks breaking the product (e.g. because spaghetti code that was previously sufficient to get you by can no longer function without getting totally rewritten), refactoring the codebase of your mind can “break” your ability to perform a bunch of functions that had previously come easily.
A commonly reported example is people switching away from coercion as a source of motivation, and then being less capable of producing output, at least for a while (like publishing on the internet, in my case 😅). In more extreme cases, people may lose the ability to hold down jobs, or may get stuck in depressive episodes.
Because of the risks involved, I think going deep into inner work is best done with the support of trustworthy peers and mentors. Cults often purport to offer these, but frequently end up guiding people’s inner work in ways that exploit and abuse them.
This naturally invites the question of how to find ethical and trustworthy traditions of inner work. I will now describe a formative experience I had that led me to seriously entertain the hypothesis that religious mystical traditions fit the bill.
Religious mystical traditions as time-tested traditions of inner work?
My entire worldview got turned upside-down the first time I experienced the healing of a trauma from infancy. It was late 2018, and I was in San Francisco, having my third or fourth session with a sexological bodyworker[5] recommended to me by someone in the rationalist community.[6] The experience started with me saying that I’d felt very small and lonely and that I’d wanted to curl up into a little ball. To my shock, my bodyworker suggested that I do exactly that. She proceeded to sit next to me, wrap her arms around me like I was a baby, rock me, and tell me that everything would be okay. I suddenly had a distinct somatic memory of being a baby (when I recall memories of kindergarten, there’s a corresponding somatic sense of being short and having tiny limbs; with the activation of this memory, I had a body-sense of being extremely tiny and having very tiny limbs).[7] I found myself wailing into her arms as she rocked me back and forth, and feeling the release of a weight I’d been carrying on my shoulders my whole life, one that I’d never had any conscious awareness or recollection of having carried.
When I sat up, my moment-to-moment experience of reality was radically different. I could suddenly feel my body more fully, and immediately thereafter understood what people meant when they told me that I was constantly “up in my head”. My very conception of what conscious experience could be was expanded, since all my prior conceptions of conscious experience had involved this weight on my shoulders, for as long as I’d had episodic memory.
I was hungry for ways to account for this experience. I felt like I had just been graced with a profound and bizarre experience, with enormous philosophical implications, that very few people even recognize exist. It seemed obviously relevant for our attempts to understand personal identity and human values that our senses of who we are and what we value might be distorted by active blind spots rooted in experiences from before we’d developed episodic memory. I had also been pondering the difficulty of metaphilosophy in the context of AI alignment [LW · GW], and it seemed obviously relevant for metaphilosophy that people’s philosophical intuitions could get distorted by preverbal trapped priors, and therefore that humanity’s understanding of metaphilosophy might be bottlenecked by a lack of awareness of preverbal trapped priors.
For the first time, it seemed plausible to me that the millennia-old questions about moral philosophy[8] might only have seemed intractable because most of the people thinking about them didn’t know about the existence of preverbal trapped priors. This led me to become very curious about the worldviews held by people who were familiar with preverbal trapped priors. Every person I’d trusted who’d recognized this experience when I described it to them (including the bodyworker who facilitated this experience, some Circling facilitators, and a Buddhist meditation coach[9]) had done lots of inner work themselves, had received significant guidance from religious and spiritual traditions, and had broad convergences among their worldviews that also seemed consistent with the commonalities between the major world religions.
I was pretty sure all these people I'd trusted were on to something, which was what led me to start seriously considering the hypothesis that the major world religions implicitly claim to have solutions to the big problems of moral philosophy because they actually once did.[10] (WTF, RIGHT???) To be more precise, I’d started to seriously consider the hypothesis that:
- people who go deep enough exploring inner work “without going off the rails” tend to notice subtle psychological truths that hold the keys to solving the big problems of moral philosophy
- humanity has implicitly stumbled upon the solutions to the big problems of moral philosophy many times over, and whenever this happens, the solutions typically get packaged in some sort of religious tradition
- the reason this is not obvious is that religious memes tend to mutate in ways that select for persuasiveness to the masses, rather than faithfulness to the original psychological truths, which is why they suck so much in all the ways LessWrongers know they suck
The more deeply I explored religions, and the deeper I went down my inner work journey, the more probable my hypothesis came to seem. I’ve come to believe that the mystical traditions of the major world religions are still relatively faithful to these core psychological truths, and that this is why there are broad convergences in their understandings of the human psyche, the nature of reality,[11] their prescriptions for living life well, and their approaches toward inner work.[12] I think these traditions, whose areas of convergence could together be referred to as the perennial philosophy, are trustworthy insofar as they constitute humanity’s most time-tested traditions of inner work.
The next post [LW · GW] will go into further detail about my interpretations of some central claims of the perennial philosophy.
- ^
I have a number of substantial disagreements with Sam Harris about how to think about religion, and in general think he interprets religious claims in overly uncharitable ways (that nevertheless seem understandable and defensible to me). I do appreciate the clarity and no-bullshit attitude he brings toward his interpretations of spirituality, though, and wish more people adopted an analogous stance when sifting through spiritual claims.
- ^
Scott says the more official predictive processing term for this is “canalization”. I think this is mostly correct, with one caveat – canalization doesn’t necessarily imply maladaptiveness, whereas I think “trapped priors” imply a form of canalization that prevents the consideration of more appropriate alternative beliefs. In other words, I think someone’s belief can only be judged as trapped relative to an alternative belief that’s more truthful and more adaptive.
By analogy, there’s a trope that trauma healing is a first-world concern, because “trauma responses” for those in the first world may just be effective adaptations for those in the third world. It might make perfect sense for someone growing up hungry in the third world to hoard food and money, because starvation is always a real risk. It’s only if they move to a first-world country where they will clearly never again be at risk of starvation, yet continue to hoard food and money as though starvation remains a constant risk, that it would make sense to consider this implicit anticipation of starvation a trapped prior.
Often, it’s clear from the context what the superior alternative belief is – for example, a veteran hearing the sound of a loud car as a gunshot would obviously do better hearing it as a car than as a gunshot. But I think the concept of “trapped prior” can get slippery or confusing sometimes if this contextuality isn’t made explicit, so I’m making an explicit note of it here.
- ^
Renshin Lauren Lee notes that Buddhism could be thought of as a religion based in letting go of all trapped priors, and in fact of all priors, period. Renshin also notes that this doesn't capture all of Buddhism, since it's also about compassion and ethics, but that Buddhism does make the radical claim that releasing all priors is critical for ethics / compassion / happiness / living a good life.
- ^
I will mention that it’s not obvious to me that the distinction between active and passive blind spots is always as clear and clean-cut as I’m presenting it to be, and that I might be oversimplifying things a bit.
- ^
Her name is Kai Wu.
- ^
Thanks for changing my life, Tilia!
- ^
People often express skepticism that I can actually access such a memory, and I think this is partly because the thing I mean by “memory” here is different from what most people imagine by “memory”. In particular, it’s more like an emotional memory than it is an episodic memory, and the experience is more somatic and phenomenological than it is visual or verbal. To further illustrate – if a dog bit me when I was a toddler, I might have no explicit recollection of the event, but my fight-or-flight response might still activate in the presence of dogs. If I were to do exposure therapy with dogs, I would consider the somatic experiences of fear I feel in the presence of these dogs to be a form of "memory access". As I continue titrating into this fear, I might even feel activation around the flesh where I’d gotten bitten, without necessarily any episodic recollection of the event. These are the kinds of “memory access” that I’d experienced in the bodywork session.
- ^
The linked excerpt does not explicitly mention moral philosophy per se, but I consider the subjects of the excerpt to be substantially about moral philosophy.
- ^
When I described my experience to Michael Taft, he said something like “Infant traumas? That’s old news, Alex. Buddhists have known about this for thousands of years. They didn’t have a concept of trauma, so they called it ‘evil spirits leaving the body’, but this is really what they were referring to.”
- ^
As a concrete illustration for how this might not be totally crazy, I think metaethics is largely bottlenecked on the question “where do we draw the boundaries around the selves that are alleged to be moral patients?” and Buddhism has a lot of insight into personal identity and the nature of self – including that our conceptions of ourselves are distorted by preverbal trapped priors.
- ^
Truths about psychology can bleed into truths about the nature of reality. This might be counterintuitive, because truths about psychology ostensibly concern our maps of reality, whereas truths about reality concern reality itself. But some of these psychological truths take the form “most of our maps of reality are biased in some particular way, leading our conceptions of reality to also be biased in that particular way; if we correct these biases in our best guesses of what reality is actually like, we find that reality might actually be very different from what we’d initially thought”.
- ^
I often employ an analogy with geometry, which a bunch of civilizations figured out (semi-)independently. The civilizations didn’t prove the exact same theorems, some civilizations figured out way more than others, and some civilizations got some important details wrong (e.g. the Babylonians thought π = 3.125), but there was nevertheless still a shared thing they were all trying to get at.
41 comments
Comments sorted by top scores.
comment by quiet_NaN · 2024-08-24T23:01:56.736Z · LW(p) · GW(p)
What I don't understand is why there should be a link between trapped priors and moral philosophy.
I mean, if moral realism was correct, i.e. if moral tenets such as "don't eat pork", "don't have sex with your sister", or "avoid killing sentient beings" had a universal truth value for all beings capable of moral behavior, then one might argue that the reason why people's ethics differ is that they have trapped priors which prevent them from recognizing these universal truths.
This might be my trapped priors talking, but I am a non-cognitivist. I simply believe that assigning truth values to moral sentences such as "killing is wrong" is pointless, and they are better parsed as prescriptive sentences such as "don't kill" or "boo on killing".
In my view, moral codes are intrinsically subjective. There is no factual disagreement between Harry and Professor Quirrell which they could hope to overcome through empiricism, they simply have different utility functions.
--
My second point is that if moral realism was true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions, we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths. I would argue that either our commonly accepted humanitarian moral values are all wrong, or this mutation process happened almost instantly:
- Whatever Jesus thought about gender equality when he achieved moral enlightenment, Paul had his own ideas a few decades later.
- Mohammed was clearly not opposed to offensive warfare.
- Martin Luther evidently believed that serfs should not rebel against their lords.
On the other hand, instances where religions did advocate for tenets compatible with humanitarianism, such as Christian abolitionism, do not seem to correspond to strong spiritualism. Was Pope Benedict XIV condemning the slave trade because he was more spiritual (and thus in touch with the universal moral truth) than his predecessors who had endorsed it?
--
My last point is that especially with regard to relational conflicts, our map not corresponding to the territory might often not be a bug, but a feature. Per Hanson, we deceive ourselves so that we can better deceive others. Evolution has not shaped our brains to be objective cognitive engines. In some cases, objective cognition is advantageous -- if you are alone hunting a rabbit, no amount of self-deception will fill your stomach -- but in any social situation, expect evolution to put its hand on the scales of your impartial judgement. Arguing that your son should become the new chieftain because he is the best hunter and strongest warrior is much more effective than arguing for that simply because he is your son -- and the best way to argue that is to believe it, no matter if it is objectively true.
The adulterer, the slave owner and the wartime rapist all have solid evolutionary reasons to engage in behaviors most of us might find immoral. I think their moral blind spots are likely not caused by trapped priors, like an exaggerated fear of dogs is. Also, I have no reason to believe that I don't have similar moral blind spots hard-wired into my brain by evolution.
I would bet that most of the serious roadblocks to a true moral theory (if such a thing existed) are of that kind, instead of being maladaptive trapped priors. Thus, even if religion and spirituality are effective at overcoming maladaptive trapped priors, I don't see how they would bring us closer to moral cognition.
↑ comment by ABlue · 2024-08-25T18:11:22.226Z · LW(p) · GW(p)
The adulterer, the slave owner and the wartime rapist all have solid evolutionary reasons to engage in behaviors most of us might find immoral. I think their moral blind spots are likely not caused by trapped priors, like an exaggerated fear of dogs is.
I don't think the evopsych and trapped-prior views are incompatible. A selection pressure towards immoral behavior could select for genes/memes that tend to result in certain kinds of trapped prior.
↑ comment by Unreal · 2024-09-25T14:56:58.950Z · LW(p) · GW(p)
My second point is that if moral realism was true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions, we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths. I would argue that either our commonly accepted humanitarian moral values are all wrong, or this mutation process happened almost instantly:
This is easy to research.
I will name a few ways the Buddha was ahead of his time in terms of 'humanitarian moral values' (which I do not personally buy into, and I don't claim the Buddha did either, but if it helps shed light on some things):
- He cared about environmentalism and not polluting shared natural resources, such as forests and rivers. I don't have specific examples in mind about how he advocated for this, but I believe the evidence is out there.
- Soon after getting enlightened, his vow included a flourishing women's monastic sangha. For the time, this was completely unheard of. People did not believe women could get enlightened or become arhats or do spiritual practice. With the Buddha's blessing and support, his mother and former wife started a for-women, women-led monastic sangha. It was important that men did not lead this group, and he wisely made that clear to people. The nuns in this sangha had their lives threatened continuously, as what they were doing was so against the times.
- Someone who digs into this might find places where things were not 'equal' for women and men and bring those up as a reason to doubt. But from my own investigation into this, I think a lot of reasonable compromises had to be made. A delicate balancing between fitting into the current social structures while ensuring the ability of women to do spiritual practice in community.
- I do not personally buy into 'equality' in the way progressive Westerners do, and I think our current takes on women/men are "off" and I don't advocate comparing all of our social norms and memes with the Buddha's implementation of a complex, context-dependent system. I do not think we have "got it right"; we are still in the process of working this out.
- The Buddha's followers were extremely ethical people, and there are notes of people being surprised and flabbergasted about this from his time, including various kings and such. Ethical here means non-violent, non-lying, non-stealing, well-behaved, calm, heedful, caring, sober, etc.
- Also extremely, extremely taboo for his time, the Buddha ordained from the slave caste. Ven. Upali is the main example. He became one of the Buddha's main disciples. The Buddha firmly stood on grounds that people are not to be judged by their births. Race, class, gender, etc. There are some inspiring stories around this.
- I think it can be reasonably argued that Buddhists continue to be fairly ethical, relatively speaking. The Buddha did a good job setting things up.
- Unfortunately, Jesus died soon after he started teaching. The Buddha had decades to set things up for his followers. But I would also claim Jesus just wouldn't have done as good a job as the Buddha, even with more time. Not throwing shade at Jesus though. Setting things up well is just extremely difficult and requires unimaginable spiritual power and wisdom.
There are also amazing stories about Christians.
↑ comment by Unreal · 2024-09-25T14:59:50.578Z · LW(p) · GW(p)
I would also argue against the claim religious institutions are "devoid of moral truths". I think this is mostly coming from secularist propaganda. In fact these institutions are still some of the most charitable, generous institutions that provide humanitarian aid all over the world. Their centuries-old systems are relatively effective at rooting out evil-doing in their ranks.
Compared to modern corporations, they're acting out of a much clearer sense of morality than capitalist institutions. Compared to modern secular governments, such as that of the US, they're doing less violence and harm to the planet. They did not invent nuclear weapons. They are not striving to build AGI. Furthermore, I doubt they would.
When spiritual teachers were asked about creating AI versions of themselves, they were not interested, and one company had to change their whole business model to creating sales bots instead. (Real story. I won't reveal which company.)
I'm sad about all the corruption in religious institutions, still. It's there. Hatred against gay people and controlling women's bodies. The crusades. The jihads. Using coercive shame to keep people down. OK, well, I can tell a story about why corruption seeped into the Church, and it doesn't sound crazy to me. (Black Death happened, is what.)
But our modern world has become nihilistic, amoral, and vastly more okay with killing large numbers of living beings, ecosystems, habitats, the atmosphere, etc. Pat ourselves on the back for civil rights, yes. Celebrate this. But who's really devoid of moral truths here? When we are the ones casually destroying the planet and even openly willing to take 10%+ chances at total extinction to build an AGI? The Christians and the Buddhists and even the jihadists aren't behind this.
↑ comment by Eli Tyre (elityre) · 2024-09-28T03:17:44.013Z · LW(p) · GW(p)
OK, well, I can tell a story about why corruption seeped into the Church, and it doesn't sound crazy to me. (Black Death happened, is what.)
The medieval Christian church's power-seeking and hypocrisy precede the Black Death.
- Charlemagne led campaigns against the Saxon pagans in the late 8th century, to convert them by force, with the blessing of the papacy.
- Medieval popes very regularly got into power-conflicts with Medieval kings.[1]
- Church leaders got into conflicts with each other, often declaring each other illegitimate. [2]
- The papacy ordered the first Crusade in 1095.
Admittedly it does seem like there might have been an uptick in Church hypocrisy around the 1300s (I'm thinking of the schism (which resulted in a period where there were three people claiming to be Pope for decades), Pope Alexander VI's many illegitimate kids, and the whole indulgences thing.)
But overall, the church does not look like a beacon of ethics to me. It looks like just about what I would expect: an institution of considerable power, led by people defending and expanding their power, with various spiritual narratives pasted on top. (Indeed, it doesn't look like most popes were particularly selected for their spiritual aptitude or insight at all, compared to their political savvy.)
Compared to modern corporations, they're acting out of a much clearer sense of morality than capitalist institutions. Compared to modern secular governments, such as that of the US, they're doing less violence and harm to the planet. They did not invent nuclear weapons. They are not striving to build AGI. Furthermore, I doubt they would.
I think this is an unfair comparison. Religions, while still non-trivial entities, have been waning in power since the 1500s. They do less harm in these particular ways 1) because they're less powerful and so have smaller effects overall, and 2) because they now select primarily for leadership that's motivated by signaling moral superiority rather than by desire for power or wealth.
(In the same way that the US government is less capable than it was in the 1950s, because many of the competent people who would have gone into public service can now get an exponentially better deal in tech and finance.)
I agree that most religions are not making war the way powerful nations do. But I don't buy that that's because they're more ethical. They don't have the wealth and resources of the superpowers any more. They can't make war, even if they wanted to.
But that wasn't always the case. When religions were more powerful, they did in fact instigate wars to defend their interests.
Islam, uniquely of western religions, still holds this kind of sway, and...notably, religious leaders do order guerrilla war and terrorist acts.
- ^
Claude lists:
- Pope Gregory VII vs. Henry IV (Investiture Controversy, 1075-1122):
- This was one of the most famous conflicts between papal and royal authority.
- Gregory VII asserted the pope's right to appoint church officials, challenging Henry IV's traditional role.
- Henry attempted to depose Gregory, who in turn excommunicated Henry.
- Henry famously performed penance at Canossa in 1077 to lift his excommunication.
- Pope Innocent III vs. King John of England (1205-1213):
- Dispute arose over the appointment of the Archbishop of Canterbury.
- Innocent placed England under an interdict and excommunicated John.
- John eventually submitted, accepting England as a papal fief and paying tribute.
- Pope Boniface VIII vs. Philip IV of France (1296-1303):
- Conflict over taxation of clergy and papal authority.
- Boniface issued the bull "Unam Sanctam" asserting papal supremacy.
- Philip's agents attacked Boniface (the "Outrage of Anagni"), leading to the pope's death shortly after.
- Pope Alexander III vs. Frederick Barbarossa (1159-1177):
- Long-standing dispute over papal authority and control of northern Italy.
- Frederick supported a series of antipopes against Alexander.
- The conflict ended with the Peace of Venice, where Frederick acknowledged Alexander as pope.
- Pope Urban II vs. William II of England (1088-1099):
- Conflict over church appointments and authority in England.
- William refused to acknowledge Urban as pope for several years.
[There was at least one more. This is a sample.]
- ^
Again, Claude lists:
- Hippolytus of Rome (217-235 AD):
- Considered one of the earliest antipopes.
- He opposed Pope Callixtus I and subsequent popes, setting himself up as a rival bishop of Rome.
- Novatian (251-258 AD):
- Declared himself pope in opposition to Pope Cornelius.
- This schism was based on disagreements over how to treat Christians who had lapsed during persecution.
- Arian Schism (4th century):
- While not strictly a papal schism, this theological dispute led to rival bishops in many cities, each declaring the other illegitimate.
- Laurentian Schism (498-506 AD):
- After the death of Pope Anastasius II, both Symmachus and Laurentius were elected pope by rival factions.
- This led to a period of conflict until Symmachus was finally recognized as the legitimate pope.
- Cadaver Synod (897 AD):
- Pope Stephen VI had the corpse of his predecessor, Pope Formosus, exhumed and put on trial.
- He declared Formosus's papacy illegitimate and his acts invalid.
- Investiture Controversy (late 11th - early 12th century):
- While not a direct antipapacy, this conflict between the papacy and secular rulers led to periods where rival popes were appointed.
- For instance, Henry IV of Germany appointed Clement III as antipope against Pope Gregory VII.
- Anacletan Schism (1130-1138):
- Both Innocent II and Anacletus II claimed to be the rightful pope, dividing Europe's loyalty.
↑ comment by Unreal · 2024-10-01T18:53:21.662Z · LW(p) · GW(p)
Catholicism never would have collected the intelligence necessary to invent a nuke. Their worldview was not compatible with science. It was an inferior organizing principle. ("inferior" meaning less capable of coordinating a collective intelligence needed to build nukes.)
You believe intelligence is such a high good, a high virtue, that it would be hard for you to see how intelligence is deeply and intricately causal with the destruction of life on this planet, and therefore the less intelligent, less destructive religions actually have more ethical ground to stand on, even though they were still fairly corrupt.
But it's a straightforward comparison.
Medieval "dark" ages = almost no technological progress, very little risk of blowing up the planet in any way; relatively, not inspiring, but still - kudos for keeping us from hurtling toward extinction, and at this point, we're fine with rewarding this even though it's such a "low bar"
Today = massive, exponential technological progress, nuclear war could already take us all out, but we have a number of other x-risks to worry about. And we're so identified with science and tech that we aren't willing to stop, even as we admit OUT LOUD that it could cause extinction-level catastrophe. This is worse than the Crusades by a long shot. We're not talking about sending children to war. We're talking about the end of children. Just no more children. This is worse than suicide cults that claim we go to heaven as long as we commit suicide. We don't even think what we're doing will necessarily result in heaven, and we do it anyway. We have no evidence we can upload consciousnesses at all. Or end aging and death. Or build a friendly AI. At least the Catholics were convinced a very good thing would happen by sending kids to war. We're not even convinced, and we are willing to risk the lives of all children. Do you see how this is worse than the Catholics?
↑ comment by Eli Tyre (elityre) · 2024-10-01T20:59:17.547Z · LW(p) · GW(p)
I agree that religions mostly don't cause x-risk, because (for the most part) they're not sufficiently good at organizing intellectual endeavor. (There might be exceptions to that generalization, and they can coopt the technological products of other organizational systems.)
I agree that the x-risk is an overriding concern, in terms of practical consequences. If any given person does tons of good things, and also contributes to x-risk, it's easy for the x-risk contribution to swamp everything else in their overall impact.
But yeah, I object to calling a person or an institution more ethical because they are / it is too weak to do (comparatively) much harm.
I care about identifying which people and institutions are more ethical so that 1) I can learn ethics from them and 2) I can defer to them.
If a person or institution avoids causing harm because they're weak, they're mostly not very helpful to learn from (they can't help me figure out how to wield power ethically, at least) and deferring to them or otherwise empowering them is actively harmful because doing so removes the feature that was keeping them (relatively) harmless.
A person who is dispositionally a bully, but who is physically weak, but who would immediately start acting like a bully if he were bigger, or if he had more social power, is not ethical on account of his not bullying people. An AGI that is "aligned", until it is much more powerful than the rest of the world, is not aligned. A church that does (relatively) less harm unless and until it is powerful enough to command armies or nukes, is likewise not very trustworthy.
To reason well in these domains, I need a concept of ethics that can be discussed independently of power. And therefore I need to be able to evaluate ethics independently of actual harm caused.
Not just "how much harm does this institution do?" but "how much harm would it do, in other circumstances?". I might want to ask "how does this person or institution behave, if given different levels or different kinds of power over the world?"
Given that criterion:
- The Catholic Church causes less overall harm than OpenAI. (I think, as always it's hard to evaluate.)
- It causes less overall harm than the US government.
- It's unclear to me if it causes more or less harm than Coca-Cola.
Harm-caused is certainly relevant evidence about the ethics of an institution, but not most of the question.
Considering the comparison with the US government:
- The US government seems to me to be overall more robust to the stresses imposed by power, than the Catholic Church.
- I think the organizations are probably about equally trustworthy in terms of how much you can rely on them to follow their agreements when you don't have particular power to enforce those agreements?
- I think they're about equally likely to cover up the illegal or immoral actions of their members?
- I would prefer that the US government and the Catholic hierarchy have their current relative distributions of power rather than have them reversed. I don't think that the world would get better if the Catholic hierarchy was the leading world superpower, instead of the US.
As a shorthand for that, I might say that the US government, while not ethical by any means, is more ethical than the Catholic Church.
There is a bit of an out here where people or institutions that do less harm because they are less powerful, and which are less powerful by their own choice, might indeed be ethically superior. They might be safe to give more power to, because they would not accept the power granted, and they might be worth learning from.
I would be interested in examples of religious institutions declining power granted to them.
From my read of history, the Catholic Hierarchy has never done this?
We're not even convinced, and we are willing to risk the lives of all children. Do you see how this is worse than the Catholics?
Absolutely. I definitely think there's something awful about being willing to risk the future, and even more awful about being willing to risk the future for no particular ideal.
I'd probably agree that that's worse than Catholicism. Catholicism seems unlikely to metastasize into an actively omnicidal worldview to me. Though I think if it were more powerful and relevant, and its incentives were somewhat different, it would totally risk omnicide in a holy war against heresy (extrapolating from the long history of Christian holy wars causing great destruction, short of omnicide, because omnicide wasn't technologically on the table yet.)
But, I don't know who you're referring to when you say "we". It sounds like something like "moderns" or "post-enlightenment societies" or maybe "cultures based on 'scientific materialism'"?
I mostly reject those charges. Mostly it looks to me like there are a small number (~10,000 to 100,000) of people who are willing to risk all the children, unilaterally, while most people broadly oppose that, to the extent that they're informed about it.
Almost everyone does oppose the destruction of all life (though by their revealed preferences, almost everyone is fine with subsidising factory farming).
You believe intelligence is such a high good, a high virtue, that it would be hard for you to see how intelligence is deeply and intricately causal with the destruction of life on this planet, and therefore the less intelligent, less destructive religions actually have more ethical ground to stand on, even though they were still fairly corrupt.
I mean, it's obviously hard for me to say definitively if I have a cultural blindspot.
But, FYI, while I would say that intelligence is "a good", I am unlikely to call it a "virtue" or a "high good" (which connotes a moral good, as opposed to eg an economic good).
Intelligence is a force multiplier. More intelligent agents are more capable. They do a better job of doing whatever it is that they do.
And yeah, it's pretty obvious to me that "intelligence is deeply and intricately causal with the destruction of life on this planet". Humans might destroy the biosphere, specifically by dint of their collective intelligence. No other species is even remotely in the running to do that, except for the AIs we're rushing forward to create. If you remove the intelligence, you don't get the omnicide.
I think you mean something more specific here. Not just that destroying all life is a big action, and so is only possible with a big force multiplier, but that intelligence is the motivating factor, or actively obscures moral truth, or something.
What do you mean here?
and therefore the less intelligent, less destructive religions actually have more ethical ground to stand on, even though they were still fairly corrupt.
Yeah, I don't buy this, for the reasons outlined above.
If you're less destructive because you're weak, you don't get "moral points". You get "moral points" based on how you behave, relative to the options and incentives presented to you.
↑ comment by JenniferRM · 2024-09-25T07:42:09.096Z · LW(p) · GW(p)
I'm not sure about the rest of it, but this caught my eye:
if moral realism was true, and one of the key roles of religion was to free people from trapped priors so they could recognize these universal moral truths, then at least during the founding of religions, we should see some evidence of higher moral standards before they invariably mutate into institutions devoid of moral truths.
I had a similar thought, and was trying to figure out if I could find a single good person to formally and efficiently coordinate with in a non-trivial pre-existing institution full of "safely good and sane people".
I'm still searching. If anyone has a solid lead on this, please DM me, maybe?
Something you might expect is that many such "hypothetically existing hypothetically good people" would be willing to die slightly earlier for a good enough cause (especially late in life when their life expectancy is low, and especially for very high stakes issues where a lot of leverage is possible) but they wouldn't waste lives, because waste is ceteris paribus bad, and so... so... what about martyrs who are also leaders?
This line of thinking is how I learned about Martin The Confessor, the last Pope to ever die for his beliefs.
Since 655 AD is much much earlier than 2024 AD, it would seem that Catholicism no longer "has the sauce" so to speak?
Also, slightly relatedly, I'm more glad than I otherwise might be that in this timeline the bullet missed Trump. In other very nearby timelines I'm pretty sure the whole idea of using physical courage to detect morally good leadership in a morally good group would be much more controversial than the principle is here, now, in this timeline, where no one has trapped priors about it that are being actively pumped full of energy by the media, with the creation of new social traumas, and so on...
...not that elected secular leaders of mere nation states would have any obvious formal duties to specifically be the person to benevolently serve literally all good beings as a focal point.
To get that formula to basically work, in a way that it kinda seems to work with US elections, since many US Presidents are assassinated in ways they could probably predict were possible (modulo this currently only working within the intrinsically "partial" nature of US elections, since these are merely elections for the leader of a single nation state that faces many other hostile nation states in a hobbesian world of eternal war (at least eternal war... so far! [? · GW]) ) I think one might need to hold global elections?
And... But... And this... this seems sorta do-able?!? Weirdly so!
We have the internet now. We have translation software to translate all the political statements into all the languages. We have internet money that could be used to donate to something that was worth donating to.
Why not create a "United Persons Alliance" (to play the "House of Representatives" to the UN's "Senate"?) and find out what the UPA's "Donation Weighted Condorcet Prime Minister" has to say?
I kinda can't figure out why no one has tried it yet.
Maybe it is because, logically speaking, moral realism MIGHT be true and also maybe all humans are objectively bad?
If a lot of people knew for sure that "moral realism is true but humans are universally fallen" then it might explain why we almost never "produce and maintain legibly just institutions".
Under the premises entertained here so far, IF such institutions were attempted anyway, and the attempt had security holes [? · GW], THEN those security holes would be predictably abused and it would be predictably regretted by anyone who spent money setting it up, or trusted such a thing.
So maybe it is just that "moral realism is true, humans are bad, and designing secure systems is hard and humans are also smart enough to never try to summon a real justice system"?
Maybe.
↑ comment by zhukeepa · 2024-08-26T06:36:40.060Z · LW(p) · GW(p)
Regarding your second point, I'm leaving this comment as a placeholder to indicate my intention to give a proper response at some point. My views here have some subtlety that I want to make sure I unpack correctly, and it's getting late here!
↑ comment by Unreal · 2024-09-25T14:16:03.158Z · LW(p) · GW(p)
I mean, if moral realism was correct, i.e. if moral tenets such as "don't eat pork", "don't have sex with your sister", or "avoid killing sentient beings" had a universal truth value for all beings capable of moral behavior, then one might argue that the reason why people's ethics differ is that they have trapped priors which prevent them from recognizing these universal truths.
This might be my trapped priors talking, but I am a non-cognitivist. I simply believe that assigning truth values to moral sentences such as "killing is wrong" is pointless, and they are better parsed as prescriptive sentences such as "don't kill" or "boo on killing".
In my view, moral codes are intrinsically subjective. There is no factual disagreement between Harry and Professor Quirrell which they could hope to overcome through empiricism, they simply have different utility functions.
I don't claim to be a moral realist or any other -ist that we currently have words for. I do follow the Buddha's teachings on morals and ethics. So I will share from that perspective, which I have reason to believe to be true and beneficial to take on, for anyone interested in becoming more ethical, wise, and kind.
"Don't eat pork" is something I'd call an ethical rule, set for a specific time and place, which is a valid manifestation of morality.
"Avoiding killing" and "Avoid stealing" (etc) are held, in Buddhism, as "ethical precepts." They aren't rules, but they're like...
a) Each precept is a game in and of itself with many levels
b) It is generally considered good to use this life and future lives to deepen one's practice of each of the precepts (to take on the huge mission of perfecting our choices to be more in alignment with the real thing these statements are pointing at). It's also friendly to help others do the same.
c) It's not about being a stickler to the letter of the law. The deeper you investigate each precept, the more you actually have to let go of your ideas of what it means to "be doing it right." It's not about getting fixated on rules, heuristics, or norms. There's something more real and true being pointed to that cannot be predicted, pre-determined, etc.
Moral codes are not intrinsically subjective. But I would also not make claims about them being objective. We are caught in a sinkhole dichotomy between subjectivity and objectivity. Western thinking needs to find a way out of this. Too many philosophical discussions get stuck on these concepts. They're useful to a degree, but we need to be able to discard them when they become useless.
"Killing is wrong" is a true statement. It's not subjectively true; it's not objectively true. It's true in a sense that doesn't neatly fit into either of those categories.
↑ comment by mako yass (MakoYass) · 2024-09-21T20:57:54.590Z · LW(p) · GW(p)
The connection to moral systems could be due to the fact that curing people of trapped priors or other narcissism-like self-defending pathologies is hard and punishing and you won't do it for them unless you have a lot of love and faith in you.
I wonder if it also has something to do with certain kinds of information being locally nonexcludable goods: they have a cost to spread, but the value of the information is never obvious to a potential buyer until after the transfer has taken place. A person only pays their teacher back if the teacher can convey a sense of moral responsibility to do so.
Finally, Harari's definition of religion is just a system of ideas that brings order between people. This is usually a much more useful definition than definitions like "claims about the supernatural" or whatever. In this frame, many truths ("trade allows mutual benefit", [the English language], [how to not be cripplingly insane]) are religious in that it benefits all of us a little bit if more people have these ideas installed.
↑ comment by zhukeepa · 2024-08-26T06:25:57.120Z · LW(p) · GW(p)
In response to your third point, I want to echo ABlue's comment [LW(p) · GW(p)] about the compatibility of the trapped prior view and the evopsych view. I also want to emphasize that my usage of "trapped prior" includes genetically pre-specified priors, like a fear of snakes, which I think can be overridden.
In any case, I don't see why priors that predispose us to e.g. adultery couldn't be similarly overridden. I wonder if our main source of disagreement has to do with the feasibility of overriding "hard-wired" evolutionary priors?
↑ comment by zhukeepa · 2024-08-26T05:45:21.441Z · LW(p) · GW(p)
In response to your first point, I think of moral codes as being contextual more than I think of them as being subjective, but I do think of them as fundamentally being about pragmatism ("let's all agree to coordinate in ABC way to solve PQR problem in XYZ environment, and socially punish people who aren't willing to do so"). I also think religions often make the mistake of generalizing moral codes beyond the contexts in which they arose as helpful adaptations.
I think of decision theory as being the basis for morality -- see e.g. Critch's take here [LW · GW] and Richard Ngo's take here [LW · GW]. I evaluate how ethical people are based on how good they are at paying causal costs for larger acausal gains.
↑ comment by Christian Z R · 2024-10-03T10:00:16.287Z · LW(p) · GW(p)
'I simply believe that assigning truth values to moral sentences such as "killing is wrong" is pointless, and they are better parsed as prescriptive sentences such as "don't kill" or "boo on killing". '
Going to bring in a point I stole from David Friedman: If I see that an apple is red, and almost everybody else agrees that the apple is red, and the only person who disagrees also tends to disagree with most people about all colors and so is probably color blind, then it makes sense to say that it is true that the apple is red.
-Jesus, Muhammed and Luther:
Muhammed did support offensive warfare, but apart from that his religious rules might have been a step up from earlier Arab society. I have noticed that modern Islamic countries actually don't have a lot of peacetime violence or crime, compared to equally rich or developed countries. And Martin Luther was opposed to rebellions exactly because he thought anarchy and violent religious movements were worse than the status quo. He did support peaceful movements for peasant rights.
_________________
Finally, why would spirituality only help you overcome 'maladaptive' trapped priors? Might it not just as well cure adaptive, but unwanted ones?
comment by IrenicTruth · 2024-09-21T13:27:29.687Z · LW(p) · GW(p)
The next post is Secular interpretations of core perennialist claims [LW · GW]. Zhukeepa should edit the main text to explicitly link to it rather than just mentioning that it exists. (Or people could upvote this comment so it's at the top. I don't object to more good karma.)
↑ comment by Ben Pace (Benito) · 2024-09-21T18:33:44.944Z · LW(p) · GW(p)
Good point. I have edited it into the last line of the post.
comment by Ben Pace (Benito) · 2024-09-19T18:26:04.825Z · LW(p) · GW(p)
Curated.
I think this really enriched my notion of a trapped prior — an idea that someone can be fully locked into a perspective on the world that they cannot see outside of, for various biochemical and psychological reasons, but that certain particular biochemical or psychological experiences could move them out of. I think it's something of a challenge to any life-philosophy based on argument and empiricism as sources of truth, that the thinker can be so trapped in certain perspectives as to falsely reinterpret evidence within their present framework.
I think this also helped me see the ways in which trapped priors are a very human problem. The story of Alex re-enacting social roles from childhood, then being able to live life very differently and access parts of his mind he'd shut off ("[I] understood what people meant when they told me that I was constantly “up in my head”"), and being led down a path of wanting to understand religion, was a helpful pointer, especially given that the experience was not in the context of, nor related to the doctrines of, any organized religion.
(I am not close to curating the follow-up post, which I didn't really understand and which on my first few reads seemed to say some false things.)
↑ comment by Elessar2 · 2024-09-19T21:06:15.284Z · LW(p) · GW(p)
I'd go farther than zhukeepa goes, and declare that activating "unrealized afters" (higher perspectives and modes beyond mere conventional ways of existing) is potentially MUCH more transformative and powerful than releasing any childhood issues of the sort he describes. As in: okay, I've got all the crap cleaned out of me, now what? There's a limit to what that kind of therapy can do, IOW, as compared to the potentially limitless realms beyond the ego. In those cases, it is society itself which tries to keep them unrealized, not the ego so much. Since the perennial philosophy goes into quite a bit of detail about that, I'll leave it there for his next entry on said subject.
comment by ABlue · 2024-08-23T20:50:19.079Z · LW(p) · GW(p)
I also suspect something along the lines of "Many (most?) great spiritual leaders were making a good-faith effort to understand the same ground truth with the same psychological equipment and got significantly farther than most normal people do." But in order for that to be plausible, you would need a reason why the almost-truths they found are so goddamn antimemetic that the most studied and followed people in history weren't able to make them stick. Some of the selection pressure surely comes down to social dynamics. I'd like to think that people who have grazed some great Truth are less likely to torture and kill infidels than someone who thinks they know a great truth. Cognitive blind spots could definitely explain things, though.
The problem is, the same thing that would make blind spots good at curbing the spread of enlightenment also makes them tricky to debate as a mechanism for it. They're so slippery that until you've gotten past one yourself it's hard to believe they exist (especially when the phenomenal experience of knowing-something-that-was-once-utterly-unknowable can also seemingly be explained by developing a delusion). They're also hard to falsify. What you call active blind spots are a bit easier to work with; I think most people can accept the idea of something like "a truth you're afraid to confront" even if they haven't experienced such a thing themselves (or are afraid to confront the fact that they have).
I look forward to reading your next post(s), as well as this site's reaction to them.
↑ comment by zhukeepa · 2024-08-23T21:59:43.821Z · LW(p) · GW(p)
But in order for that to be plausible, you would need a reason why the almost-truths they found are so goddamn antimemetic that the most studied and followed people in history weren't able to make them stick.
A few thoughts:
- I think many of the truths do stick (like "it's never too late to repent for your misdeeds"), but end up getting wrapped up in a bunch of garbage.
- The geeks, mops, and sociopaths model feels very relevant, with the great spiritual leaders / people who were serious about doing inner work being the geeks.
- In some sense, these truths are fundamentally about beating Moloch, and so long as Moloch is in power, Moloch will naturally find ways to subvert them.
They're so slippery that until you've gotten past one yourself it's hard to believe they exist (especially when the phenomenal experience of knowing-something-that-was-once-utterly-unknowable can also seemingly be explained by developing a delusion).
YES. I think this is extraordinarily well-articulated.
↑ comment by Christian Z R · 2024-10-03T09:48:15.244Z · LW(p) · GW(p)
I think you accidentally pointed the link about geeks, mops, and sociopaths to this article. I googled the term instead.
It does a really good job of explaining what happened in most religions in late antiquity. For evidence that Christianity actually was a better subculture than paganism back then, you just have to look at how envious the last pagan emperor, Julian the Apostate, was of the Christians' spontaneous altruism.
comment by Danylo Zhyrko · 2024-09-20T16:25:31.873Z · LW(p) · GW(p)
I am unsure whether I've grasped the mechanism behind this inner-work stuff. You feel your limbs go weird, and then what? What implication are you trying to point to? How should your experience or insight contribute to rationality and moral philosophy? Yes, we have inherent biases, but as far as I've managed to get acquainted with the rationality discourse, they are not resolved merely by being aware of them or by expanding consciousness. I cannot clearly see the added value of this inner work. Why is experimental psychology insufficient? Thanks for your answer.
↑ comment by senguidev (niark) · 2024-09-21T14:44:04.990Z · LW(p) · GW(p)
You feel your limbs weird, and what? What implication are you trying to point to? How should your experience or insight contribute to rationality and moral philosophy?
The reasoning is quite basic actually.
- You believe, for decades, that X happening would make absolutely no rational sense.
- X happens.
- You are shocked. You realize your rationality was lacking.
- You didn't think your rationality could be lacking in this way.
- This meta-fact is important for rationality.
Yes, we have inherent biases but [...] they are not resolved by merely being aware of them
Except if the biases are "fixable"*. Suppose they are. Then you need to work on them. But to do so, you logically need to be aware of them first. Hence the emphasis on awareness.
*Somehow, partially, and with a lot of effort.
I hope that's somewhat clearer!
comment by Gordon Seidoh Worley (gworley) · 2024-08-23T17:58:27.330Z · LW(p) · GW(p)
My own story is a little different, but maybe not too different.
I wrote some of it a while ago in this post [LW · GW]. I don't know if I totally endorse the way I framed it there, so let me try again.
For basically as long as I can remember, my moment-to-moment experience of the world sucked. Of course, when your every experience feels net negative, you adapt and learn to live with it. But I also have the kind of mind that likes to understand things and won't rest until it understands the mechanism by which something works, so I regularly turned this drive on myself. I was constantly dissatisfied with everything, and just when I'd think I'd nailed down why, it would turn out I had missed something huge and had to start over again.
Eventually this led to some moments of insight when I realized just how trapped by my own ontology I had become, and then found a way through to a new way of seeing the world. These happened almost instantly, like a dam breaking and releasing all the realizations that had been held back.
This led me to positive psychology, because I noticed that sometimes I could make my life better, and it eventually led me to realize that, despite my having been a life-long atheist, religions weren't totally full of bunk. I'm not saying they're right about the supernatural—as best I can tell, those claims are just false if interpreted straightforwardly—but I am saying I discovered that one of the things religions try to do is tell you how to live a happy life, and some do a better job of teaching this than others.
To skip ahead, that's what led me to Buddhism, Zen, and eventually to practicing enough that my moment-to-moment experience flipped. Now everything is always deeply okay, even if in a relative sense it's not okay and needs to change, and it was thanks to taking all my skills as a rationalist and using them with teachings from religion that I found my way through.
↑ comment by zhukeepa · 2024-08-23T22:00:42.451Z · LW(p) · GW(p)
Thanks a lot for sharing your experience! I would be very curious for you to further elaborate on this part:
Eventually this led to some moments of insight when I realized just how trapped by my own ontology I had become, and then found a way through to a new way of seeing the world. These happened almost instantly, like a dam breaking and releasing all the realizations that had been held back.
↑ comment by Gordon Seidoh Worley (gworley) · 2024-08-23T23:56:49.369Z · LW(p) · GW(p)
Sure. This happened several times to me, each of which I interpret as a transition from one developmental level to the next, e.g. Kegan 3 -> 4 -> 5 -> Cook-Greuter 5/6 -> 6. Might help to talk about just one of these transitions.
In the Summer of 2015 I was thinking a lot about philosophy and trying to make sense of the world and kept noticing that, no matter what I did, I'd always run into some kind of hidden assumption that acted as a free variable in my thinking that was not constrained by anything and thus couldn't be justified. I had been going in circles around this for a couple years at this point. I was also, coincidentally, trying to figure out how to manage the work of a growing engineering team and struggling because, to me, other people looked like black boxes that I only kind of understood.
In the midst of this I read The e-Myth on the recommendation of a coworker, and in the middle of it there was this line about how effective managers are neither always high nor always low status, but change how they act based on the situation; combined with a lot of other reading I was doing, this caused a lot of things to click into place.
The phenomenology of it was the same as every time I've had one of these big insights. It felt like my mind stopped for several seconds while I hung out in an empty state, and then I came back online with a deeper understanding of the world. In this case, it was something like "I can believe anything I want", in the sense that there really were some unjustified assumptions being made in my thinking, this was unavoidable, and it was okay because there was no other choice. All I could do was pick the assumptions most likely to give me a good map of the world.
It then took a couple years to really integrate this insight, and it wasn't until 2017 that I really started to grapple with the problems of the next one I would have.
↑ comment by Raemon · 2024-08-24T18:08:31.255Z · LW(p) · GW(p)
In the midst of this I read The e-Myth on the recommendation of a coworker, and in the middle of it there was this line about how effective managers are neither always high nor always low status, but change how they act based on the situation; combined with a lot of other reading I was doing, this caused a lot of things to click into place.
I'm interested in the object level of "what are some nuts and bolts of how the high/low status manager thing worked, and how it applied", and maybe a bit more meta-but-still-object-ish level of how that insight integrated with the rest of your worldview. (or, if that second part seems wrongly phrased... idk substitute the better question you think I should have asked? lol)
↑ comment by Gordon Seidoh Worley (gworley) · 2024-08-24T20:39:47.129Z · LW(p) · GW(p)
Sure. I'll do my best to give some more details. This is all from memory, and it's been a while, so I may end up giving ahistorical answers that mix up the timeline. Apologies in advance for any confusion this causes. If you have more questions or I'm not really getting at what you want to know, please follow up and I'll try again.
First, let me give a little extra context on the status thing. I had also not long before read Impro, which has a big section on status games, and that definitely informed how The e-Myth hit me.
So, there's this way in which managers play high and low. When managers play high they project high confidence. Sometimes this is needed, like when you need to motivate an employee to work on something. Sometimes it's counterproductive, like when you need to learn from an employee. Playing too high status can make it hard for you to listen, and hard for the person you need to listen to to feel like you are listening to them, which discourages them from telling you what you need to know. Think of the know-it-all manager who can do your job better than you, or the aloof manager uninterested in the details.
Playing low status is often a problem for managers, and not being able to play high is one thing that keeps some people out of management. No one wants to follow a low status leader. A manager doesn't necessarily need to be high status in the wider world, but they at least need to be able to claim higher status than their employees if those employees are going to want to do what they say.
The trouble is, sometimes managers need to play high playing low, like when a manager listens to their employee to understand the problems they are facing in their work, and actually listen rather than immediately dismiss the concerns or round them off to something they've dealt with before. A key technique can be literally lowering oneself, like crouching down to be at eye level of someone sitting at a desk, as this non-verbally makes it clear that the employee is now in the driver seat and the manager is along for the ride.
Effective managers know how to adjust their status when needed. The best are naturals who never had to be taught. Second best are those who figure out the mechanics and can deploy intentional status-play changes to get desired outcomes. I'm definitely not in the first camp. To whatever extent I'm successful as a manager, it's because I'm in the second.
Ineffective managers, by contrast, just don't understand any of this. They typically play high all the time, even at inappropriate times. That will keep a manager employed, but they'll likely be in the bottom quartile of manager quality, and will only succeed in organizations where little understanding and adaptation is needed. The worst is low playing high status (think Michael Scott in The Office). If you are low playing high, you only stay a manager due to organizational dysfunction.
Okay, so with all that out of the way, the way this worked for me was mostly in figuring out how to play high straight. I grew up with the idea that I was a smart person (because I was in fact more intelligent than lots of people around me, even if I had less experience and made mistakes due to lack of knowledge and wisdom). The archetypal smart person that most closely matched who I seemed to be was the awkward professor type who is a genius but also struggles to function. So I leaned into being that type of person and eschewed feedback that I should be different, because it wasn't in line with the type of person I was trying to be.
This meant my default status mode was high playing low playing high, by which I mean I saw myself as a high status person who played low, not because he wanted to, but because the world didn't recognize his genius, but who was going to press ahead and precociously aim for high status anyway. Getting into leadership, this kind of worked. Like I had good ideas, and I could convince people to follow them because they'd go "well, I don't like the vibe, but he's smart and been right before so let's try it", but it didn't always work and I found that frustrating.
At the time I didn't really understand what I was doing, though. What I realized, in part, after this particular insight, was that I could just play the status I wanted straightforwardly. Playing multilayer status games is a defense mechanism, because if any one layer of the status play is challenged, you can fall back one more layer and defend from there. If you play straight, you're immediately up against a challenge to prove you really are what you say you are. So integration looked like peeling back the layers and untangling my behaviors to be more straightforward.
I can't say I totally figured it out from just this one insight. There was more going on that later insights would help me untangle. And I still struggle with it despite having a thorough theory and lots of experience putting it into play. My model of myself is that my brain literally runs slow, in that messages seem to propagate across it less quickly than they do for other people, as suggested by my relatively poor reaction times (+2 sd), and this makes it difficult for me to do the high-bandwidth real-time processing of information required in social settings like work. All this is to say that I've had to dramatically over-solve almost every problem in my life to achieve normalcy, but I expect most people wouldn't need as much as I have. Make of this what you will when thinking about what it means for me to have integrated insights: I can't rely on S2 thinking to help me in the moment; I have to do things with S1 or not at all (or rather with a significant async time delay).
↑ comment by Raemon · 2024-08-24T20:45:47.829Z · LW(p) · GW(p)
Thanks!
I don't have a very substantive response, but wanted to say:
A key technique can be literally lowering oneself, like crouching down to be at eye level of someone sitting at a desk, as this non-verbally makes it clear that the employee is now in the driver seat and the manager is along for the ride.
This is something I've intentionally done more of lately (not in a management capacity, but in other contexts), inspired by making yourself small [LW · GW]. It's seemed to work reasonably well but it's hard to get a clear feedback signal on how it's coming across.
comment by tylerlikes · 2024-09-19T23:13:06.285Z · LW(p) · GW(p)
Very interesting thoughts. The idea of a “trapped prior” (though not the term, of course) is something of a commonplace in Christian theology, where it might be considered a cognitive aspect of the fallen human condition, especially in the epistemology of the Augustinian school. Or consider a biblical text like Matthew 13: “And in them is fulfilled the prophecy of Esaias, which saith, By hearing ye shall hear, and shall not understand; and seeing ye shall see, and shall not perceive: For this people's heart is waxed gross, and their ears are dull of hearing, and their eyes they have closed; lest at any time they should see with their eyes and hear with their ears, and should understand with their heart, and should be converted, and I should heal them.”
comment by Mike Johnson (mike-johnson) · 2024-09-20T21:57:40.111Z · LW(p) · GW(p)
I really enjoyed this piece and think it’s an important topic.
How the brain implements priors, and how they can become maladaptively ‘trapped’, is an open question. I suggested last year that we could combine the “hemo-neural hypothesis”, under which blood flow regulates the dynamic range of nearby neurons/nerves, with the “latch-bridge mechanism”, whereby smooth muscle (inclusive of vascular muscle) can lock itself in a closed position. I.e. vascular tension is a prediction (Bayesian prior) about the world, and such patterns of microtension can be stored in a very durable form (“smooth muscle latches”) that can persist for days, weeks, months, years, decades.
This paints psychological release, releasing a trapped prior, and vasomuscular release as the same thing.
https://opentheory.net/2023/07/principles-of-vasocomputation-a-unification-of-buddhist-phenomenology-active-inference-and-physical-reflex-part-i/
comment by João Ribeiro Medeiros (joao-ribeiro-medeiros) · 2024-09-24T18:36:21.567Z · LW(p) · GW(p)
Thank you zhukeepa [LW · GW] for your thoughtful post. I've been also recently interested in the contribution that religions potentially can bring to the rationality enterprise as a whole.
My perspective is that even though trapped priors consistently pose epistemological barriers to effective reasoning, there is a superficial layer of those trapped priors which is accessible via concentration, breath control, and ritual; in other words, via meditation.
I mention meditation as a generalized concept covering many different variations of those three elements: concentration, breath control, and ritual. This should definitely include all Yoga traditions, as well as Christian and Jewish prayer, and many modern forms of therapeutic practice.
The experience you describe, of connecting with a distant memory via a reenactment of it under the vocal guidance of a therapist, is a good example of meditation in that sense.
Meditation can produce outstanding results when it comes to creativity and productivity, as well as mental and physical health. And fundamentally, meditation is a technique: something shaped through many iterations to become more and more effective, that is, less and less wrong, pardon the pun.
One of the strongest arguments you make in favor of the religious experience concerns persistence over time, or, as you put it, the fact that those traditions have been substantially time-tested, and thus probably have something relevant to say.
This says something relevant about moral philosophy, about the period that produced whatever religion we want to analyze, and also, on the meditation front, about the technical management of cognitive faculties via concentration, breath, and ritual.
Technique only survives over time by working, and that's how we get such impressive results from meditation techniques more than a thousand years old. We can argue about the premises used to enact the specific ritual that a given meditation practice relies on, but we can hardly argue with the results it produces.
comment by Optisemist · 2024-09-24T07:51:15.719Z · LW(p) · GW(p)
I suppose that to discuss religions in any meaningful way, you need to look at their effects on the brain. The book below looks like a good starting point (so you don't need to delve into primary research publications). I also found a useful quick reference: Sapolsky's lecture on the biology of religiosity.
https://www.cambridge.org/core/books/neurology-and-religion/FE245A58770B5986B10B86F6B39EB746
comment by 4gate · 2024-08-27T01:04:22.618Z · LW(p) · GW(p)
I'm curious about your thoughts on this notion of perennial philosophy and the convergence of beliefs. One interpretation I have of perennial philosophy is purely empirical: imagine that we have two "belief systems". We could define a belief system as a set of statements about the way the world works plus a valuation of world states (i.e. statements like "if X then Y could happen" and "Z is good to have"). You can probably formalize it some other way, but I think this is a reasonable starter pack to keep it simple. (You can also imagine further formalizing it by using numbers or lattices for values and probabilities, and some well-defined FSM to model parts of the world.)

We could say that two belief systems have converged if they share a lot of features, by which I mean that, for some definition of a feature, the feature is present in both systems. We can define a feature in many ways, but for our simple thought experiment it can effectively be a state or a relation between states in the two worldviews. For example, we could imagine that a feature is a function of states and their values/causal relations such that under the mapping it remains unchanged (i.e. there is some notion of this mapping being like an isomorphism on the projection of the set via the function).

For example, in one belief system you might have some sort of "god" character that is somehow the cause of many things. The function here could be "(int(god is cause of x1) + int(god is cause of x2) + ...) / num_objects". If we map common objects (this spoon) to themselves in the other system (still the spoon) and god to god, we will see that in both systems the function representing how causal god is remains close to 1, and so we may say that both systems have a notion of a "god", and therefore there has been some degree of convergence on the "having a god" front.
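Here's a minimal sketch of the above in code, just to pin the idea down. Everything in it is an invented toy: the `BeliefSystem` container, the `god_causality` feature, the object mapping, and the tolerance are stand-ins for the thought experiment, not a proposal for how convergence should really be formalized.

```python
# Toy formalization (illustrative only): a belief system as causal claims plus
# value judgments, and a "feature" as a function whose value should survive a
# mapping between two systems if they have converged on that feature.

from dataclasses import dataclass, field

@dataclass
class BeliefSystem:
    causes: set = field(default_factory=set)    # {(cause, effect), ...}
    values: dict = field(default_factory=dict)  # {state: goodness, ...}

def god_causality(bs, god):
    """Fraction of effects this system attributes to its 'god' entity."""
    effects = {e for (_, e) in bs.causes}
    if not effects:
        return 0.0
    return len({e for (c, e) in bs.causes if c == god}) / len(effects)

def feature_converged(value_a, value_b, tol=0.25):
    """Two systems 'share a feature' if its value survives the mapping."""
    return abs(value_a - value_b) <= tol

sys_a = BeliefSystem(causes={("God", "rain"), ("God", "harvest"), ("sun", "warmth")})
sys_b = BeliefSystem(causes={("Brahman", "rain"), ("Brahman", "warmth")})

# Map spoon->spoon, god->god, then compare the feature's value in each system.
fa = god_causality(sys_a, "God")      # 2/3: god causes most effects
fb = god_causality(sys_b, "Brahman")  # 1.0: god causes everything
print(fa, fb, feature_converged(fa, fb))  # 0.666... 1.0 False (at tol=0.25)
```

Even this crude version makes the crux visible: whether two religions "converge" depends entirely on which objects you map to which, which feature functions you admit, and what tolerance you accept, which is exactly the under-determination the questions below are about.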
So now, with all this formal BS out of the way (which I had to do, because it highlights what is missing), the question is clear: under some reasonable such definition of what convergence means, how do you decide whether two religions have converged? The vibe I get from the perennial-philosophy believers I have spoken to so far is that "you have to go through the journey to understand", and generally it appears to be a sort of dispositional convergence, at least on face value, though I do not observe people of very different religions, who claim convergence, living alongside each other for long periods (meaning it is not verifiable whether the dispositions have truly converged). Of course, it may be possible to find mappings claiming that two belief systems have converged, or not, when the opposite is the more honest appraisal.
Obviously no one is going to come out here, create a mathematical definition, and just "be right" (I don't think that's even a fair thing to consider possible), but I do not particularly like making such assertions totally "on vibes". Often people will say that they are "spiritual" and that "spirituality" helped them overcome some psychological challenge or who knows what, but what does "spiritual" mean here? Often it's associated with some belief system that we would, as laymen, call religious or spiritual (i.e. in the enumerable list of Christianity and its sub-branches, Buddhism and its sub-branches, etc.), but who is to say the truer cause of the change of psyche wasn't just some part of the person's experience that happened to be delivered by the spiritual system present at that time and place? It seems compelling to me to want to decouple these "core truths" from the religions that hold them, so as to present them in a more neutral way: in the alternative world where you must "go through the journey" of spirituality via some specific religion, you cannot know beforehand that you won't be effectively brainwashed; you cannot even know afterwards... you can only get faint hints at it during the process.
So this is not to say that anyone is getting brainwashed, or that anything is good or bad, or that anything should or shouldn't be embraced. I'm just saying that from an outside perspective, it is not verifiable whether religions actually converge without diving into this stuff. However, it is also not verifiable whether diving in is actually good, and it's not verifiable whether it will even be verifiable afterwards. Maybe I'm stumbling into some core metaphysical whirlwind of "you cannot know anything", but I do truly believe that a more systematic exposition of how we should interpret spirituality, trapped priors, convergence, and the like is possible and would enable more productive discussion.
PS: I think you've touched on something tangential in the statement that you should do this with trusted people. That, however, is trying to bootstrap a resistance to manipulative misappropriation of spirituality, whereas what I'm saying is that I would also like more of a logical bootstrapping of the whole notion of spirituality and ideas like "convergence", so that one can leave the conversation with solid conclusions, knowing their limitations, and with a higher level of actionability.
PPS: I feel like treating a belief system like "rationality" as a machine/tool (something with a certain reach, certain limitations, that usually behaves as expected in most situations but might have some bugs) is a good way to go. This makes it easier to decouple rationality from, say, spiritual traditions. At each point in time and space you can basically decide, on common sense, which of these tools is best to apply. Each tool can hopefully be shown to be good for some cases, and so most decision-making happens at the routing level: which tool to use. If you understand a tool from a third-person point of view, there is less of a tendency to rely on it in the wrong cases purely out of dogma.
comment by romeostevensit · 2024-08-23T18:35:27.004Z · LW(p) · GW(p)
And not mutually exclusive with convergence due to exploiting the same flaws.
↑ comment by zhukeepa · 2024-08-23T21:33:25.090Z · LW(p) · GW(p)
I'm not sure what you mean by that, but the claim "many interpretations of religious mystical traditions converge because they exploit the same human cognitive flaws" seems plausible to me. I mostly don't find such interpretations interesting, and don't think I'm interpreting religious mystical traditions in such a way.
↑ comment by romeostevensit · 2024-08-25T02:40:53.245Z · LW(p) · GW(p)
I'm saying it's difficult to distinguish causation.
comment by Review Bot · 2024-09-21T19:09:50.050Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
comment by M Ls (m-ls) · 2024-09-19T22:15:44.374Z · LW(p) · GW(p)
The first realisation here, moving forward, is that religion is a subset of something else, and not a thing-in-itself that needs to be explained or selected for. This something else is the inchoate urge "to should", "to world the self with a self in the world among others". I realised this ten years ago: https://www.academia.edu/40978261/Why_we_should_an_introduction_by_memoir_into_the_implications_of_the_Egalitarian_Revolution_of_the_Paleolithic_or_Anyone_for_cake
and I write about it at my Substack: https://whyweshould.substack.com/
Any commonalities are the result of worlding in the world, in a framework of big history, in which the thickets of metaphysics are dense, grand, and commodious, ready to support any world we should feel it good to espouse.
Convergence is a thing.
Evolution doesn't care about the outcomes (art/religion/polity/morality), merely that we should, and thus make mistakes and learn.
comment by Hudjefa (Agent Smith) · 2024-08-26T10:15:32.584Z · LW(p) · GW(p)
This is curious. The usual pattern is atheism using psychology to discredit theism. Roles are being reversed here with trapped priors, the suggestion being that some veritas are being obscured by kicking religion out of our system. I half-agree, since I consider this demonstration non finito.
As for philosophia perennis, I'd say it rests on a correlation-implies-causation fallacy. It looks as though the evident convergence of religions on moral issues is due not to the mystical and unprovable elements therein, but follows from common rational aspects present in most/all religions. To the extent this is true, religion may not claim moral territory.
That said, revelatory moral knowledge is a fascinating subject.