Dishonorable Gossip and Going Crazy
post by Ben Pace (Benito), Unreal · 2023-10-14T04:00:35.591Z · LW · GW · 31 comments
This is a fairly high-context conversation between me (Ben) and my friend (Ren).
Ren used to work at CFAR, and then ~4.5 years ago moved away to join the Monastic Academy in Vermont. Ren had recently visited me and others in the rationalist scene in the Bay, and had written a Facebook post saying she'd heard word that people had been gossiping behind her back about whether she'd "gone nuts". She said this was unwholesome, and further said she was angry that people thought it was acceptable behavior. As someone who had done some amount of that gossiping, I wanted to talk with Ren about it.
We found a time to have a Zoom call, then I proposed we instead try talking via written dialogue, a format which we both liked quite a bit.
This is a fairly high-context conversation, and I'm sorry if the topic doesn't make sense to a bunch of readers.
Thursday, October 12th
Friday, October 13th
31 comments
Comments sorted by top scores.
comment by AnnaSalamon · 2023-10-14T15:19:10.343Z · LW(p) · GW(p)
Ben Pace, honorably quoting aloud a thing he'd previously said about Ren:
the other day i said [ren] seemed to be doing well to me
to clarify, i am not sure she has not gone crazy
she might've, i'm not close enough to be confident
i'd give it 25%
I really don't like this usage of the word "crazy", which IME is fairly common in the bay area rationality community. This is for several reasons. The simple to express one is that I really read through like 40% of this dialog thinking (from its title plus early conversational volleys) that people were afraid Ren had gone, like, the kind of "crazy" that acute mania or psychosis or something often is, where a person might lose their ability to do normal tasks that almost everyone can do, like knowing what year it is or how to get to the store and back safely. Which was a set of worries I didn't need to have, in this case. I.e., my simple complaint is that it caused me confusion here.
The harder to express but more heartfelt one is something like: the word "crazy" is a license to write people off. When people in wider society use it about those having acute psychiatric crises, they give themselves a license to write off the sense behind the perceptions of like 2% or something of the population. When the word is instead used about people who are not practicing LW!rationality, including ordinary religious people, it gives a license to write off a much larger chunk of people (~95% of the population?), so one is less apt to seek sense behind their perceptions and actions.
This sort of writing-off is a thing people can try doing, if they want, but it's a nonstandard move and I want it to be visible as such. That is, I want people to spell it out more, like: "I think Ren might've stopped being double-plus-sane like all the rest of us are" or "I think Ren might've stopped following the principles of LW!rationality" or something. (The word "crazy" hides this as though it's the normal-person "dismiss ~2% of the population" move; these other sentences make visible that it's an unusual and more widely dismissive move.) The reason I want this move to be made visible in this way is partly that I think the outside view on (groups of people who dismiss those who aren't members of the group) is that this practice often leads to various bad things (e.g. increased conformity as group members fear being dubbed out-group; increased blindness to outside perspectives; difficulty collaborating with skilled outsiders), and I want those risks more visible.
(FWIW, I'd have the same response to a group of democrats discussing republicans or Trump-voters as "crazy", and sometimes have. But IMO bay area rationalists say this sort of thing much more than other groups I've been part of.)
↑ comment by habryka (habryka4) · 2023-10-14T23:08:14.640Z · LW(p) · GW(p)
Hmm, for what it's worth, my model of Ben here was not talking about the "95% of the population" type of crazy, and I also don't mean that when I use the word crazy.
I mean the type of crazy that I've seen quite a lot in the extended EA and rationality community, where like, people start believing in demons and try to quarantine their friends from memetic viruses, or start having anxiety attacks about Roko's basilisk all the time, or claim strongly that all life is better off being exterminated because all existence is suffering, or threaten to kill people on the internet, or plan terrorist attacks.
I currently assign around 25% probability to Ren ending up like that, or already being like that. Maybe "crazy" is the wrong word here, but Maple really seems to me like the kind of environment that would produce that kind of craziness, and I am seeing a lot of flags. This is vastly above population baseline for me.
Maybe Ben disagrees with me on this, in which case that confirms your point, but I guess I am getting a bit of a feeling that Ben didn't want to make as harsh a comparison in this conversation as my inner model of him thinks is warranted, because it would have produced a bunch of conflict, and I would like to avoid a false model of consensus forming in the comments.
↑ comment by localdeity · 2023-10-15T02:37:59.224Z · LW(p) · GW(p)
Another cluster I'd throw in: believing in "woo" stuff (e.g. crystal healing, astrology, acupuncture) as anything other than a placebo. Now, if someone was raised to believe in some of those things, I wouldn't count it heavily against them. But if they at one point were a hard-nosed skeptic and later got seriously into that stuff, I'd take this as strong evidence that something had gone wrong with their mind.
Not quite sure it's a cluster; I can name just one major case of "prominent rationalist -> woo promoter", though I feel like there might be more. That person I would say went somewhat generally crazy.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-10-15T04:22:23.239Z · LW(p) · GW(p)
though I feel like there might be more
I can definitely think of two obvious examples just offhand, and I know that I’ve noticed more, but haven’t exactly kept track.
↑ comment by Ben Pace (Benito) · 2023-10-18T00:40:05.825Z · LW(p) · GW(p)
Sadly I meant a more complicated thing. Within my claim of 25% on 'crazy', here's a rough distribution of world-states I think I was referring to:
- 50%: Has joined a new religion and is organizing their ~entire life around it being true and good even though it's IMO pretty obviously false and a waste of time, in a way where I'm like "I guess we're basically going to increasingly drift apart and end ties".
- 30%: Has joined a new cult that, while seeming positive for many people, does have some unhinged beliefs and unethical norms that will cause a lot of damage. This is more like joining Scientology than like joining Christianity, and could involve things like suicide pacts, becoming a sex cult, or engaging in organized crime / other coordinated unethical action.
- 20%: Has personally lost the plot (closer to the way habryka describes) and will start taking unpredictable and obviously harmful actions, not in a way especially coordinated with others in her religious group. Giving concrete examples here feels hard and like it will be overly hypothesis-promoting. But basically things that will hurt herself or other people.
I think these are more extreme than what the majority of the population (incl. most ordinary religious people) does. But my guess is that the first bullet is probably unfair to call 'crazy', and instead I should've given 12.5% to the claim. I think it's reasonable to think that it is mean and unfair of me to refer to the first thing as 'crazy', and I regret it a bit.
↑ comment by lc · 2023-10-14T23:45:23.705Z · LW(p) · GW(p)
This is part of the problem though: I don't think all of those things are crazy, and some of them seem to follow from standard LW and EA axioms.
start believing in demons and try to quarantine their friends from memetic viruses...threaten people... plan terrorist attacks
Sure, those are really bad.
start having anxiety attacks about Roko's basilisk all the time
Having anxiety attacks about things is pretty universally unhelpful. But if you're using Roko's basilisk as a shorthand for all of the problems of "AIs carrying out threats", including near term AIs in a multipolar setting, then it seems perfectly reasonable to be anxious about that. Labeling people crazy for being scared is inaccurate if the thing they're fearing is actually real+scary.
claim strongly that all life is better off being exterminated because all existence is suffering
Again, this depends on what you mean. I think if you take the EA worldview seriously then the obvious conclusion is that Earth life up until and including today has been net-negative because of animal suffering. My guess is also that most current paths through AI run an unacceptable risk of humans being completely subjugated by either some tech-government coalition or hitting some awful near miss section of mind space. Do either of those beliefs make me crazy?
↑ comment by habryka (habryka4) · 2023-10-15T00:02:59.920Z · LW(p) · GW(p)
Having anxiety attacks about things is pretty universally unhelpful. But if you're using Roko's basilisk as a shorthand for all of the problems of "AIs carrying out threats", then it seems perfectly reasonable to be anxious about that. Labeling people crazy seems misguided if the thing they're fearing is actually scary.
Agree that being anxious is totally fine, but the LW team has to deal with an ongoing stream of people (like 3-4 a year) who really seem to freak out a lot about either Roko's basilisk or quantum immortality/suicide. To be clear, these are interesting ideas, but usually when we deal with these people they are clearly not in a good spot.
Again, this depends on what you mean. I think if you take the EA worldview seriously then the obvious conclusion is that Earth life up until and including today has been net-negative because of animal suffering.
Net-negative I think is quite different from "all existence is suffering". But also, yeah, I do think that the reason why I've encountered a lot of this kind of craziness in the EA/Rationality space is because we discuss a lot of ideas with really big implications, and have a lot of people who take ideas really seriously, which increases the degree to which people do go crazy.
My guess is you are dealing fine with these ideas, though some people are not, which is sad, but also does mean I just encounter a pretty high density of people I feel justified in calling crazy.
↑ comment by philh · 2023-10-20T10:42:25.121Z · LW(p) · GW(p)
I think if you take the EA worldview seriously then the obvious conclusion is that Earth life up until and including today has been net-negative because of animal suffering.
Nit: I don't consider "the EA worldview" to have any opinion on animal suffering. But (roughly speaking) I agree you can get this conclusion from the EA worldview plus some other stuff which is also common among EAs.
↑ comment by Unreal · 2023-10-15T01:32:45.384Z · LW(p) · GW(p)
so okay i'm actually annoyed by a thing... lemme see if i can articulate it.
- I clearly have orders of magnitude more of the relevant evidence to ascertain a claim about MAPLE's chances of producing 'crazy' ppl as you've defined it—and much more even than most MAPLE people (both current and former).
- Plus I have much of the relevant evidence about my own ability to discern the truth (which includes all the feedback I've received, the way people generally treat me, who takes me seriously, how often people seem to want to back away from me or tune me out when I start talking, etc etc).
- A bunch of speculators, with relatively very little evidence about either, come out with very strong takes on both of the above, and don't seem to want to take into account EITHER of the above facts, but instead find it super easy to dismiss any of the evidence that comes from people with the relevant data. Because of said 'possibility they are crazy'.
And so there is almost no way out of this stupid box; this does not incline me to try to share any evidence I have, and in general, reasonable people advise me against it. And I'm of the same opinion. It's a trap to try.
It is both easy and an attractor for ppl to take anything I say and twist it into more evidence for THEIR biased or speculative ideas, and to take things I say as somehow further evidence that I've just been brainwashed. And then they take me less seriously. Which then further disinclines me to share any of my evidence. And so forth.
This is not a sane, productive, and working epistemic process? As far as I can tell?
Literally I was like "I have strong evidence" and Ben's inclination was to say "strong evidence is easy to come by / is everywhere" and links to a relevant LW article, somehow dismissing everything I said previously and might say in the future with one swoop. It effectively shut me down.
And I'm like....
what is this "epistemic process" ya'll are engaged in
[Edit: I misinterpreted Ben's meaning. He was saying the opposite of what I thought he meant. Sorry, Ben. Another case of 'better wrong than vague' for me. 😅]
To me, it looks like [ya'll] are using a potentially endless list of back-pocket heuristics and 'points' to justify what is convenient for you to continue believing. And it really seems like it has a strong emotional / feeling component that is not being owned.
[edit: you -> ya'll to make it clearer this isn't about Oliver]
I sense a kind of self-protection or self-preservation thing. Like there's zero chance of getting access to the true Alief in there. That's why this is pointless for me.
Also, a lot of online talk about MAPLE is sooo far from realistic that it would, in fact, make me sound crazy to try to refute it. A totally nonsensical view is actually weirdly hard to counter, esp if the people aren't being very intellectually honest AND the people don't care enough to make an effort or stick through it all the way to the end.
↑ comment by habryka (habryka4) · 2023-10-15T01:46:16.021Z · LW(p) · GW(p)
I mean, I am not sure what you want me to do. If I had taken people at their word when I was concerned about them or the organizations they were part of, and just believed them on their answer on whether they will do reckless or dangerous or crazy things in the future, I would have gotten every single one of the cases I know about wrong.
Like, it's not impossible but seems very rare that when I am concerned about the kind of thing I am concerned about here and say "hey I am worried that you will do a crazy thing" that my interlocutor goes "yeah, I totally might do a crazy thing". So the odds ratio on someone saying "trust me, I am fine" is just totally flat, and doesn't help me distinguish between the different worlds.
So would a sane, productive, and working epistemic process just take you at your word? I don't think so; that seems pretty naive to me. But I have trouble reading your comment as asking for anything else. As you yourself say, you haven't given me any additional evidence, and you don't want to go into the details.
I also don't know what things you are claiming about my psychology here. I haven't made many comments on Maple or you, and the ones I have made seem reasonably grounded to me, so I don't know on what basis you are accusing me of "endless list of back-pocket heuristics and points". I don't know how it's convenient for me to think that my friends and allies and various institutions around me tend to do reckless and dangerous things at alarming rates. Indeed, it makes me very sad and I really wish it wasn't so.
To be clear, I wouldn't particularly dismiss concrete evidence you give about MAPLE being a fine environment to be in. I would be surprised if you e.g. lied about verifiable facts, and would update if you told me about the everyday life there (of course I don't know in which direction I would update, since I would have already updated if I could predict that, but I don't feel like evidence you are giving me is screened off by me being concerned about you going 'crazy' in the relevant ways, though of course I am expecting various forms of filtered evidence which will make the updates a bit messier).
↑ comment by Thoth Hermes (thoth-hermes) · 2023-10-15T18:14:04.430Z · LW(p) · GW(p)
I think if you ask people a question like, "Are you planning on going off and doing something / believing in something crazy?", they will, generally speaking, say "no" to that, and that is roughly more likely the more isomorphic your question is to that, even if you didn't exactly word it that way. My guess is that it was at least heavily implied that you meant "crazy" by the way you worded it.
To be clear, they might have said "yes" (that they will go and do the thing you think is crazy), but I doubt they will internally represent that thing, or wanting to do it, as "crazy". Thus the answer is probably going to be either "no" (as a partial lie, where the "no" indirectly points to the crazy assertion) or "yes" (also as a partial lie, pointing to taking the action).
In practice, people have a very hard time instantiating the status identifier "crazy" on themselves, and I don't think that can be easily dismissed.
I think the utility of the word "crazy" is heavily overestimated by you, given that there are many situations where the word cannot be used the same way by the people relevant to the conversation in which it is used. Words should have the same meaning to the people in the conversation, and since some people using this word are guaranteed to perceive it as hostile and some are not, that causes it to have asymmetrical meaning inherently.
I also think you've brought in too much risk of "throwing stones in a glass house" here. The LW memespace is, in my estimation, full of ideas besides Roko's Basilisk that I would also consider "crazy" in the same sense that I believe you mean it: Wrong ideas which are also harmful and cause a lot of distress.
Pessimism, submitting to failure and defeat, high "p(doom)", both MIRI and CFAR giving up (by considering the problems they wish to solve too inherently difficult, rather than concluding they must be wrong about something), and people being worried that they are "net negative" despite their best intentions, are all (IMO) pretty much the same type of "crazy" that you're worried about.
Our major difference, I believe, is in why we think these wrong ideas persist, and what causes them to be generated in the first place. The ones I've mentioned don't seem to be caused by individuals suddenly going nuts against the grain of their egregore.
I know this is a problem you've mentioned before and consider both important and unsolved, but I think it would be odd to notice that the problem seems notably worse in the LW community and yet conclude that it's only the result of individuals going crazy on their own (and thus that the community's overall sanity can be reliably increased by ejecting those people).
By the way, I think "sanity" is a certain type of feature which is considerably "smooth under expectation" which means roughly that if p(person = insane) = 25%, that person should appear to be roughly 25% insane in most interactions. In other words, it's not the kind of probability where they appear to be sane most of the time, but you suspect that they might have gone nuts in some way that's hard to see or they might be hiding it.
The flip side of that is that if they only appear to be, say, 10% crazy in most interactions, then I would lower your assessment of their insanity to basically that much.
I still find this feature not altogether that useful, but using it this way is still preferable to treating sanity as a binary feature.
↑ comment by Viliam · 2023-10-16T11:02:42.507Z · LW(p) · GW(p)
I also think you've brought in too much risk of "throwing stones in a glass house" here. The LW memespace is, in my estimation, full of ideas (...) that I would also consider "crazy"
That seems to me like an extra reason to keep "throwing stones". To make clear the line between the kind of "crazy" that rationalists enjoy, and the kind of "crazy" that is the opposite.
As an insurance, just in the (hopefully unlikely) case that tomorrow Unreal goes on a shooting spree, I would like to have it in writing - before it happened - that it happened because of ideas that the rationalist community disapproves of.
Otherwise, the first thing everyone will do is: "see, another rationalist gone crazy". And whatever objection we make afterwards, it will be like "yeah, now that the person is a bad PR, everyone says 'comrades, this is not true rationalism, the true rationalism has never been tried', but previously no one saw a problem with them".
(I am exaggerating a lot, of course. Also, this is not a comment on Unreal specifically, just on the value of calling out "crazy" memes, despite being perceived as "crazy" ourselves.)
↑ comment by Unreal · 2023-10-15T02:59:10.887Z · LW(p) · GW(p)
The 'endless list' comment wasn't about you, it was a more 'general you'. Sorry that wasn't clear. I edited stuff out and then that became unclear.
I mostly wanted to point at something frustrating for me, in the hopes that you or others would, like, get something about my experience here. To show how trapped this process is, on my end.
I don't need you to fix it for me. I don't need you to change.
I don't need you to take me for my word. You are welcome to write me off, it's your choice.
I just wanted to show how I am and why.
↑ comment by Unreal · 2023-10-15T03:13:51.394Z · LW(p) · GW(p)
I had written a longer comment, illustrating how Oliver was basically committing the thing that I was complaining about and why this is frustrating.
The shorter version:
His first paragraph is a strawman. I never said 'take me at my word' or anything close. And all my previous statements, plus knowing anything about my stances, would point to this being something I would never say, so this seems weirdly disingenuous.
His second paragraph is weirdly flimsy, implying that ppl are mostly using the literal words out of people's mouths to determine whether they're lying (either to others or to themselves). I would be surprised if Oliver would actually find Alice and Bob both saying "trust me i'm fine" would be 'totally flat' data, given he probably has to discern deception on a regular basis.
Also I'm not exactly the 'trust me i'm fine' type, and anyone who knows me would know that about me, if they bothered trying to remember. I have both the skill of introspection and the character trait of frankness. I would reveal plenty about my motives, aliefs, the crazier parts of me, etc. So paragraph 2 sounds like a flimsy excuse to be avoidant?
But the IMPORTANT thing is... I don't want to argue. I wasn't interested in that. I was hoping for something closer to perspective-taking, reconciliation, or reaching more clarity about our relational status. But I get that I was sounding argumentative. I was being openly frustrated and directing that in your general direction. Apologies for creating that tension.
↑ comment by Unreal · 2023-10-15T02:05:37.990Z · LW(p) · GW(p)
FTR, the reason I am engaging with LW at all, like right now...
I'm not that interested in preserving or saving MAPLE's shoddy reputation with you guys.
But I remain deeply devoted to the rationalists, in my heart. And I'm impacted by what you guys do. A bunch of my close friends are among you. And... you're engaging in this world situation, which impacts all of us. And I care about this group of people in general. I really feel a kinship here I haven't felt anywhere else. I can relax around this group in a way I can't elsewhere.
I concern myself with your norms, your ethical conduct, etc. I wish well for you, and wish you to do right by yourselves, each other, and the world. The way you conduct yourselves has big implications. Big implications for impacts to me, my friends, the world, the future of the world.
You've chosen a certain level of global-scale responsibility, and so I'm going to treat you like you're AT THAT LEVEL. The highest possible levels with a very high set of expectations. I hold myself AT LEAST to that high of a standard, to be honest, so it's not hypocritical.
And you can write me off, totally. No problem.
But in my culture, friends concern themselves with their friends' conduct. And I see you as friends. More or less.
If you write me off (and you know me personally), please do me the honor of letting me know. Ideally to my face. If you don't feel you are gonna do that / don't owe me that, then it would help me to know that also.
↑ comment by Ben Pace (Benito) · 2023-10-15T03:36:13.338Z · LW(p) · GW(p)
Literally I was like "I have strong evidence" and Ben's inclination was to say "strong evidence is easy to come by / is everywhere" and links to a relevant LW article, somehow dismissing everything I said previously and might say in the future with one swoop. It effectively shut me down.
Oh, this is a miscommunication. The thing I was intending to communicate when I linked to that post [LW · GW] was that it is indeed plausible that you have observed strong evidence and that your confidence that you are in a healthy environment is accurate. I am saying that I think it is not in-principle odd or questionable to have very confident beliefs. I did not mean this to dismiss your belief, but to say the opposite, that your belief is totally plausible!
↑ comment by Unreal · 2023-10-15T03:48:08.421Z · LW(p) · GW(p)
Oh, okay, I found that a confusing way to communicate that? But thanks for clarifying. I will update my comment so that it doesn't make you sound like you did something very dismissive.
I feel embarrassed by this misinterpretation, and the implied state of mind I was in. But I believe it is an honest reflection about something in my state of mind, around this subject. Sigh.
↑ comment by Unreal · 2023-10-14T15:56:26.573Z · LW(p) · GW(p)
Anonymized paraphrase of a question someone asked about me (reported to me later, by the person who was being asked the question):
I have a prior about people who go off to monasteries sometimes going nuts; is Renshin nuts?
The person being asked responded "nah" and the question-asker was like "cool"
I think this sort of exchange might be somewhat commonplace or normal in the sphere.
I personally didn't feel angry, offended, or sad to hear about this exchange, but I don't feel the person asking the question was asking out of concern or care for me as a person, but rather to get a quick update for their world model or something. And my "taste" about this is a "bad" taste. I don't currently have time to elaborate but may later.
↑ comment by Elizabeth (pktechgirl) · 2023-10-15T01:56:32.174Z · LW(p) · GW(p)
Thanks for adding this. I felt really hamstrung by not knowing exactly what kind of conversation we were talking about, and this helps a lot.
I think it's legit that this type of conversation feels shitty to the person it is about. Having people talk about you like you're not a person feels awful. If it included someone with whom you had a personal relationship, I think it's legit that this hurts those relationships. Relationships are based on viewing each other as people. And I can see how a lot of generators of this kind of conversation would be bad.
But I think it's pretty important that people be able to do these kind of checks, for the purpose of updating their world model, without needing to fully boot up personal caring modules as if you were a friend they had an obligation to take care of. There are wholesome generators that would lead to this kind of conversation, and having this kind of conversation is useful to a bunch of wholesome goals.
Which doesn't make it feel any less painful. You're absolutely entitled to feel hurt, and have this affect your relationship with the people who do it. But this isn't (yet) a sufficient argument for "...and therefore people shouldn't have these kinds of conversations".
↑ comment by Unreal · 2023-10-15T03:32:54.140Z · LW(p) · GW(p)
But I think it's pretty important that people be able to do these kind of checks, for the purpose of updating their world model, without needing to fully boot up personal caring modules as if you were a friend they had an obligation to take care of. There are wholesome generators that would lead to this kind of conversation, and having this kind of conversation is useful to a bunch of wholesome goals.
There is a chance we don't have a disagreement, and there is a chance we do.
In brief, to see if there's a crux anywhere in here:
- Don't need ppl to boot up 'care as a friend' module.
- Do believe compassion should be the motivation behind these conversations, even if not friends, where compassion = treats people as real and relationships as real.
- So it matters if the convo is like (A) "I care about the world, and doing good in the world, and knowing about Renshin's sanity is about that, at the base. I will use this information for good, not for evil." Ideally the info is relevant to something they're responsible for, so that it's somewhat plausible the info would be useful and beneficial.
- Versus (B) "I'm just idly curious about it, but I don't need to know and if it required real effort to know, I wouldn't bother. It doesn't help me or anyone to know it. I just want to eat it like I crave a potato chip. I want satisfaction, stimulation, or to feel 'I'm being productive' even if it's not truly so, and I am entitled to feel that just b/c I want to. I might use the info in a harmful way later, but I don't care. I am not really responsible for info I take in or how I use info."
- And I personally think the whole endeavor of modeling the world should be for the (A) motive and not the (B) motive, and that taking in any-and-all information isn't, like, neutral or net-positive by default. People should endeavor to use their intelligence, their models, and their knowledge for good, not for evil or selfish gain or to feed an addiction to feeling a certain way.
- I used a lot of 'should' but that doesn't mean I think people should be punished for going against a 'should'. It's more like healthy cultures, imo, reinforce such norms, and unhealthy cultures fail to see or acknowledge the difference between the two sets of actions.
↑ comment by Elizabeth (pktechgirl) · 2023-10-15T23:25:13.403Z · LW(p) · GW(p)
This was a great reply, very crunchy, I appreciate you spelling out your beliefs so legibly.
- Do believe compassion should be the motivation behind these conversations, even if not friends, where compassion = treats people as real and relationships as real.
I'm confused here because that's not my definition of compassion and the sentence doesn't quite make sense to me if you plug that definition in.
But I agree those questions should be asked while treating everyone involved as real and human. I don't believe they need to be asked out of concern for the person. I also don't think the question needs to be motivated by any specific concern; desire for good models is enough. It's good if people ultimately use their models to help themselves and others, but I think it's bad to make specific questions or models justify their usefulness before they can be asked.
↑ comment by Unreal · 2023-10-16T04:50:41.772Z · LW(p) · GW(p)
Hm, neither of the motives I named include any specific concern for the person. Or any specific concern at all. Although I do think having a specific concern is a good bonus? Somehow you interpreted what I said as though there needs to be specific concerns.
RE: The bullet point on compassion... maybe just strike that bullet point. It doesn't really affect the rest of the points.
It's good if people ultimately use their models to help themselves and others, but I think it's bad to make specific questions or models justify their usefulness before they can be asked.
I think I get what you're getting at. And I feel in agreement with this sentiment. I don't want well-intentioned people to hamstring themselves.
I certainly am not claiming ppl should make a model justify its usefulness in a specific way.
I'm more saying ppl should be responsible for their info-gathering and treat that with a certain weight. Like a moral responsibility comes with information. So they shouldn't be cavalier about it.... but especially they should not delude themselves into believing they have good intentions for info when they do not.
And so to casually ask about Alice's sanity, without taking responsibility for the impact of speech actions and without acknowledging the potential damage to relationships (Alice's or others'), is irresponsible. Even if Alice never hears about this exchange, it can nonetheless cause a bunch of damage, and a person should speak about these things with eyes open to that.
↑ comment by Elizabeth (pktechgirl) · 2023-10-18T19:22:48.738Z · LW(p) · GW(p)
Could you say more on what you mean by "with compassion" and "taking responsibility for the impact of speech actions"?
↑ comment by Unreal · 2023-10-18T23:08:50.908Z · LW(p) · GW(p)
I'm fine with drilling deeper but I currently don't know where your confusion is.
I assume we exist in different frames, but it's hard for me to locate your assumptions.
I don't like meandering in a disagreement without very specific examples to work with. So maybe this is as far as it is reasonable to go for now.
↑ comment by Elizabeth (pktechgirl) · 2023-10-19T06:43:21.343Z · LW(p) · GW(p)
That makes sense. Let me take a stab at clarifying, but if that doesn't work seems good to stop.
You said
to casually ask about Alice's sanity, without taking responsibility for the impact of speech actions and without acknowledging the potential damage to relationships (Alice's or others'), is irresponsible. Even if Alice never hears about this exchange, it can nonetheless cause a bunch of damage, and a person should speak about these things with eyes open to that
When I read that, my first thought is that before (most?) every question, you want people to think hard and calculate the specific consequences asking that question might have, and ask only if the math comes out strongly positive. They bear personal responsibility for anything in which their question played any causal role. I think that such a policy would be deeply harmful.
But another thing you could mean is that people who have a policy of asking questions like this should be aware and open about the consequences of their general policies on questions they ask, and have feedback loops that steer themselves towards policies that produce good results on average. That seems good to me. I'm generally in favor of openly acknowledging costs even when they're outweighed by benefits, and I care more that people have good feedback loops than that any one action is optimal.
↑ comment by Unreal · 2023-10-19T17:39:48.933Z · LW(p) · GW(p)
I would never have put it as either of these, but the second one is closer.
For me personally, I try to always have an internal sense of my inner motivation before/during doing things. I don't expect most people do, but I've developed this as a practice, and I am guessing most people can, with some effort or practice.
I can pretty much generally tell whether my motivation has these qualities: wanting to avoid, wanting to get away with something, craving a sensation, intention to deceive or hide, etc. And when it comes to speech actions, this includes things like "I'm just saying something to say something" or "I just said something off/false/inauthentic" or "I didn't quite mean what I just said or am saying".
Although, the motivations to really look out for are like "I want someone else to hurt" or "I want to hurt myself" or "I hate" or "I'm doing this out of fear" or "I covet" or "I feel entitled to this / they don't deserve this" or a whole host of things that tend to hide from our conscious minds. Or in IFS terms, we can get 'blended' with these without realizing we're blended, and then act out of them.
Sometimes, I could be in the middle of asking a question and notice that the initial motivation for asking it wasn't noble or clean, and then by the end of asking the question, I change my inner resolve or motive to be something more noble and clean. This is NOT some kind of verbal sentence like going from "I wanted to just gossip" to "Now I want to do what I can to help." It does not work like that. It's more like changing a martial arts stance. And then I am more properly balanced and landed on my feet, ready to engage more appropriately in the conversation.
What does it mean to take personal responsibility?
I mean, for one example, if I later find out something I did caused harm, I would try to 'take responsibility' for that thing in some way. That can include a whole host of possible actions, including just resolving not to do that in the future. Or apologizing. Or fixing a broken thing.
And for another thing, I try to realize that my actions have consequences and that it's my responsibility to improve my actions. Including getting more clear on the true motives behind my actions. And learning how to do more wholesome actions and fewer unwholesome actions, over time.
I almost never use a calculating frame to try to think about this. I think that's inadvisable and can drive people onto a dark or deluded path 😅
↑ comment by Elizabeth (pktechgirl) · 2023-10-24T05:14:17.906Z · LW(p) · GW(p)
I 100% agree it's good to cultivate an internal sense of motivation, and move to act from motives more like curiosity and care, and less like prurient gossip and cruelty. I don't necessarily think we can transition by fiat, but I share the goal.
But I strongly reject "I am responsible for mitigating all negative consequences of my actions". If I truthfully accuse someone of a crime and it correctly gets them fired, am I responsible for feeding and housing them? If I truthfully accuse someone of a crime but people overreact, am I responsible for harm caused by overreaction? Given that the benefits of my statement accrue mostly to other people, having me bear the costs seems like a great way to reduce the supply of truthful, useful negative facts being shared in public.
I agree it's good to acknowledge the consequences, and that this might lead to different actions on the margin. But that's very different than making it a mandate.
comment by localdeity · 2023-10-15T02:56:09.807Z · LW(p) · GW(p)
I'll list some benefits of gossiping about people who appear to have gone crazy. "Knowing to beware of those people" was mentioned, but here are others:
- If you know what they had been doing prior to going crazy, which seems potentially causally related (e.g. taking certain drugs, being already in a mentally vulnerable state for other reasons, hanging out with certain crazy-ish people, and/or obsessing about certain books or blogs), then you can update your beliefs about what's dangerous to do. Which can inform your own behavior and possibly that of your friends.
- If you know how that person behaves currently or in the past, you can update your model of how to estimate a given person's current or future sanity based on their behavior.
I'll note it seems common for people, when they hear that someone died, to want to know how they died, especially if they died young. This seems obviously evolutionarily useful—learning about the dangers in your environment—and it seems plausibly an evolved desire. You can replace "went crazy" with "died" above (or with any significantly negative outcome), and most of it applies directly.
comment by Thoth Hermes (thoth-hermes) · 2023-10-14T17:33:48.941Z · LW(p) · GW(p)
Sometimes people want to go off and explore things that seem far away from their in-group, and perhaps are actively disfavored by their in-group. These people don't necessarily know what's going to happen when they do this, and they are very likely completely open to discovering that their in-group was right to distance itself from that thing, but also, maybe not.
People don't usually go off exploring strange things because they stop caring about what's true.
But if their in-group sees this as the person "no longer caring about truth-seeking," that is a pretty glaring red-flag on that in-group.
Also, the gossip / ousting wouldn't be necessary if someone was already inclined to distance themselves from the group.
Like, to give an overly concrete example that is probably rude (and not intended to be very accurate to be clear), if at some point you start saying "Well I've realized that beauty is truth and the one way and we all need to follow that path and I'm not going to change my mind about this Ben and also it's affecting all of my behavior and I know that it seems like I'm doing things that are wrong but one day you'll understand why actually this is good" then I'll be like "Oh no, Ren's gone crazy".
"I'm worried that if we let someone go off and try something different, they will suddenly become way less open to changing their mind, and be dead set on thinking they've found the One True Way" seems like something weird to be worried about. (It also seems like something someone who actually was better characterized by this fear would be more likely to say about someone else!) I can see though, if you're someone who tends not to trust themselves, and would rather put most of their trust in some society, institution or in-group, that you would naturally be somewhat worried about someone who wants to swap their authority (the one you've chosen) for another one.
I sometimes feel a bit awkward when I write these types of criticisms, because they simultaneously seem:
- Directed at fairly respected, high-level people.
- Rather straightforwardly simple, intuitively obvious things (from my perspective, but I also know there are others who would see things similarly).
- Directed at someone who by assumption would disagree, and yet, I feel like the previous point might make these criticisms feel condescending.
The only time people are actually incentivized to stop caring about the truth is when their in-group actively disfavors it by discouraging exploration. People don't usually unilaterally stop caring about the truth via purely individual motivations.
(In-groups becoming culty is also a fairly natural process too, no matter what the original intent of the in-group was, so the default should be to assume that it has culty-aspects, accept that as normal, and then work towards installing mitigations to the harmful aspects of that.)
↑ comment by RobertM (T3t) · 2023-10-15T05:34:16.533Z · LW(p) · GW(p)
"I'm worried that if we let someone go off and try something different, they will suddenly become way less open to changing their mind, and be dead set on thinking they've found the One True Way" seems like something weird to be worried about.
This both seems like a totally reasonable concern to have, and also missing many of the concerning elements of the thing it's purportedly summarizing, like, you know, suddenly having totally nonsensical beliefs about the world.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-10-15T04:17:12.870Z · LW(p) · GW(p)
People don’t usually go off exploring strange things because they stop caring about what’s true.
On the contrary, there are certain things which people do, in fact, only “explore” seriously if they’ve… “stopped” is a strong term, but, at least, stopped caring about the truth as much. (Or maybe reveal that they never cared as much as they said?) And then, reliably, after “exploring” those things, their level of caring about the truth drops even more. Precipitously, in fact.
(The stuff being discussed in the OP is definitely, definitely an example of this. Like, very obviously so, to the point that it seems bizarre to me to say this sort of stuff and then go “I wonder why anyone would think I’m crazy”.)