Comments

Comment by fitw on My idea of sacredness, divinity, and religion · 2023-10-30T04:42:11.676Z · LW · GW

Would love more commentary on how you seem to go from cooperation to altered state of consciousness, specifically, from (i) below to (ii) below:

(i) Your comment: "I live in a local pocket of the universe where even my interactions with complete strangers – such as the salespeople at stores – tend to be friendly and warm in tone, or at least politely neutral." This, I can and do believe, is a natural denouement of cooperation.

(ii) That bit about altered states of consciousness ("pure love"?). It is not clear to me that this has anything to do with the cooperation you mentioned.

To begin with, the latter doesn't seem like something one grows into with increased "social intelligence", but rather something reached through "quantum jumps" that occur at unpredictable moments one cannot engineer. Secondly, it is not at all clear to me that people who integrate a greater understanding of, and dexterity with, cooperation into their personal lives have any higher chance of reaching these altered states of consciousness.

Comment by fitw on [deleted post] 2023-02-19T05:39:18.203Z

I don't have any answer, but I have similar-seeming problems (or so I believe). One thing I would like to understand better is whether "attention span" is really an umbrella term for many different kinds of problems, and to understand it within a Kaj Sotala-style framework of internal subsystems. For instance, one can have an attention-span problem while reading for any of the following reasons (sorry for the long list; please let me know if I am spamming):

1(a) Reading involves effort, which an internal subsystem doesn't want to expend.

1(b) Reading exposes one's insecurities regarding one's clarity of understanding, which some internal subsystem hates confronting.

2(a) Reading triggers various memories and reveries, which some internal subsystem wants to indulge.

2(b) A narcissistic internal subsystem wants to explain the newly gathered insight to others and gain brownie points, and starts designing explanation schemes.

3(a) Integrating what one reads into one's existing knowledge-base requires effort, which an internal subsystem doesn't want to expend.

3(b) Integrating what one reads into one's existing knowledge-base requires effort, and an internal subsystem wants to spend extra time on that (i.e., it craves a more thorough integration than "normal") before proceeding to read the next item.

3(c) An internal subsystem that wants to develop more theories or get more insights on the basis of some newly gathered insight overwhelms the knowledge-acquisition subsystem.

Perhaps some of these can be separated into "low-serotonin" vs "dopamine-craving"?

Comment by fitw on [deleted post] 2022-11-17T04:04:43.379Z

While I understand that many would find this post irrelevant to LessWrong and a distraction, I can't make sense of a comment like "I did not even want to know the information contained in the title of this post (about which candidate is entering the upcoming national election in the US), and am irritated that I learned it on LessWrong. So I have downvoted."

Comment by fitw on Syncretism · 2022-10-28T17:04:54.166Z · LW · GW

There is a view that "syncretism" used to be the "default" religious response and a natural human tendency (well, why not diversify your portfolio and hedge your bets?), but that a minority of religions that explicitly defined themselves in opposition to other religions had huge evolutionary success, and some of them have shaped modernity.

Comment by fitw on My cognitive inertia cycle · 2022-06-26T17:11:32.057Z · LW · GW

Sorry I don't have anything useful to add, but I just wanted to thank you profusely for this post. I relate pretty strongly to these very problems, especially the uncertainty of usefulness (often the certainty of uselessness) and the opportunity-cost considerations. I don't know of any good article where someone discusses handling this issue.