Yes, words can cause harm

post by kithpendragon · 2021-02-24T00:18:56.790Z · LW · GW · 54 comments


[ETA] I've made a few clarifying edits in response to early feedback, discussing some of my writing mistakes. All are clearly marked, and all of the original text is still present. Thanks to those who told me they had read something other than what I intended to write.

[ETA] I've been convinced to make a list of abstracted examples. See Appendix I [LW(p) · GW(p)] in the comments section.

⚠️ Content Warning: The territory pointed to in this post can be extremely challenging to navigate skillfully.

Quoted below in full is the About This Document section of a short essay I wrote some time ago about the ways words can cause harm to people.

On more than one occasion, I've been asked to explain why I suggested that my interlocutor should be more careful with their words. "Words are just wiggly air or patterns of light and dark; I can't produce any string of words that can hurt you without your consent," goes the argument.

I've given short answers that stuck very closely to the context of the conversations. On one recent occasion, I began compiling a "So, you still don't buy it" list. This would consist of examples of words and phrases that might cause harm, since examples are often the fastest and easiest way to show things to people.

Two minutes later, I noticed the obvious information hazard and deleted the list in horror.

After about an hour's frantic thought, I tried again. This time, I planned to produce a lightly structured document with categories and explanations. There would be only a single example for each category, carefully selected to exemplify a certain kind of harm, but applicable only to a very narrow range of circumstances.

I deleted that list before it even finished taking shape. Likewise the list that followed with examples selected from popular fiction.

Nevertheless, I still felt that if the question of how words could cause harm was being asked, it was important to be able to discuss the answer. And the short statement, "words and symbols are causal mental objects capable of producing or promoting harm to any number of entity classes in a wide variety of contexts," seemed unlikely to convey the full scope and gravity of the problem, or even to be particularly convincing on its own. So I wrestled with how to present the whole thing ethically. I wrote a draft of this document and held off on posting it for a week to see if I still felt OK publishing it after that time.

I didn't. And even after many hours spent trying over several weeks, I wasn't able to write that essay in a way that made me comfortable publishing.

Instead, I'll propose a mechanism for how harm -- or benefit -- can causally flow from the words we choose to express. I've deliberately offered no [concrete] examples whatsoever (see Appendix I [LW(p) · GW(p)]), only a brief examination of how change (including harmful change) might follow from a chain of causes containing at least one linguistic link.

[ETA] To clarify, my goal here is only to refute the claim that "words can't cause harm". And, as I think that this discussion can be very easy to mishandle, I've written this post to a much higher standard of non-harming than I usually aim for. It is absolutely not my intention to imply that the community as a whole is failing at appropriate speech.


Words are one medium humans use to share symbols representing thoughts between one mind and another. They have low fidelity, to be sure, but under the right conditions they are able to cause part of one mind to become different in response to part of another mind.

Thoughts, including those induced by words, can be causal in at least two modes. They can trigger an urge to act a certain way, and they can help set the stage for those urges to be more or less likely to arise or to be acted on. Urges that are acted on can result in harm in all the obvious ways, and also in many other ways that are far more subtle. Stage-setting thoughts can collectively adjust a mind to be more likely to produce or choose to act on harmful urges, or less likely to produce or choose to act on beneficial urges; establish or contribute to the conditions for trauma; and trigger or exacerbate existing traumatic conditions.

Thoughts induced by words can be seen by other parts of the mind as evidence for or against certain beliefs, regardless of the veracity of the beliefs themselves. This might even result in a positive feedback loop where strong beliefs (that can result in strong urges) spin up out of basically nothing by being repeated back and forth endlessly between different parts of the mind. Such beliefs might become extremely difficult to dislodge due to repetition.

Fortunately, we can also accomplish the opposite by careful application of the same means. Through our words, actions, and other symbols, we can transmit thoughts that promote more beneficial urges, and that help set the stage for a mind that is less likely to produce harmful actions or ideas. We can encourage minds that take the time to respond carefully instead of reacting violently to stressful or dangerous situations. We can help create conditions that make minds more resilient against trauma.

It's also important to make a quick note about causality. From what I've seen, most minds have protections in place to make them robust against perturbation. This also seems to be evident in the literature. Just look at any list of biases to see that human minds, at least, seem to be structured to resist change. That said, sometimes unexpectedly small causes can produce large cascades of massive change. As far as I can tell, this can really only occur if the system happens to be unstable in just the right way for that particular cause. Unfortunately, our minds sometimes get themselves into just such a state. When they do, what seems like a tiny thing can result in a downward spiral of self destruction with truly unfortunate consequences for everybody with the misfortune of being involved. That's why I deleted every version of this post before I committed to using zero explicit examples: you can never tell what tiny thing might set somebody down that path.

[ETA] But I don't think that should lead to paralysis, or excessive caution. It's important to be aware of these things while still getting on with the business of actually being a causal agent in the world. Most humans are pretty robust, and can tolerate a few mistakes. I should have made that clearer in the original text. Even so, the essay doesn't stay as close to its point as it could have, leading to misunderstandings as to its intent. Again, my goal was to refute the claim that "words can't cause harm", nothing more.


⚠️ Fair warning, if any [concrete] examples of words that might cause harm show up in the comments, they run an extremely high risk of moderation on the basis that such an example is made of words that were chosen because they might cause harm. If you are unsure, at least hide the questionable phrases behind spoiler protection [? · GW].

That said, it is important that we are able to discuss this topic. If I didn't believe that, I wouldn't have bothered figuring out how to write this post as ethically as possible for publication.

If you choose to comment, please be especially careful with your words.

54 comments

Comments sorted by top scores.

comment by Viliam · 2021-02-24T16:18:14.120Z · LW(p) · GW(p)

Lots of warnings, and then it seems you don't write the actual content, suggest that other people do it, and threaten to remove their examples if they go too far. Hm.

Okay, here are some ideas...

  1. There are crazy people out there, reacting out of proportion. Triggering them on purpose is evil. But even without targeting any specific one -- if you speak to a sufficiently large crowd, one of them is likely to hear you. "Only one person in a thousand could understand it that way" sounds weak from the outside if your audience is tens of thousands. (Problem is, you usually don't grow an audience to tens of thousands by speaking moderately.)
  2. Even relatively sane people may have obsessions or phobias, which can be triggered.
  3. Lying, especially when it is difficult to get evidence.
  4. Providing technical knowledge to evil people who lack technical competence. ("Here is the blueprint for a doomsday device. This expensive component can actually be replaced by a Raspberry Pi.")
  5. Providing coordination to evil people who until now have been stopped or slowed down by coordination problems.

Now of course, many of these examples are problematic and can be abused. For example, 4 and 5 depend on your definition of "evil" -- some people will be happy to use this as a label for any disagreement. Example 3 depends on what the truth is. Examples 1 and 2 can lead to a "heckler's veto". Which means, any of them can... and sooner or later will... be used to suppress harmless but inconvenient speech.

Replies from: Ikaxas, kithpendragon, kithpendragon
comment by Vaughn Papenhausen (Ikaxas) · 2021-02-24T16:35:09.717Z · LW(p) · GW(p)

Upvoted for giving "defused examples" so to speak (examples that are described rather than directly used). I think this is a good strategy for avoiding the infohazard.

Replies from: kithpendragon
comment by kithpendragon · 2021-02-24T16:54:04.123Z · LW(p) · GW(p)

Agreed. Thanks, Viliam, for pointing at conditions instead of giving direct examples.

comment by kithpendragon · 2021-02-24T18:13:39.843Z · LW(p) · GW(p)

you don't write the actual content, suggest that other people do it, and threaten to remove their examples if they go too far

Two things. First, the goal was to refute the claim that words can't cause harm. I did not (intend to) suggest that others write examples in part because I feel concrete examples are a likely hazard, and in part because I was able to construct an argument without them.

Second, I'm confused about this "examples = content" thing. What's that about?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-02-25T03:26:18.103Z · LW(p) · GW(p)

But you didn’t refute the claim that words can’t cause harm. To do that, you’d have to provide examples, which you did not do. Given this, what is the substance of this post? I see none.

The idea that concrete examples of harmful words would be a “likely hazard” also strikes me as absurd. I could be convinced otherwise, but for that to happen, I’d have to see some… examples.

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2021-02-25T03:45:52.457Z · LW(p) · GW(p)

I disagree, I think Kithpendragon did successfully refute the argument without providing examples. Their argument is quite simple, as I understand it: words can cause thoughts, thoughts can cause urges to perform actions which are harmful to oneself, such urges can cause actions which are harmful to oneself. There's no claim that any of these things is particularly likely, just that they're possible, and if they're all possible, then it's possible for words to cause harm (again, perhaps not at all likely, for all Kithpendragon has said, but possible). It borders on a technicality, and elsethread I disputed its practical importance, but for all that it is successful at what it's trying to do.

I agree that the idea that concrete examples are a "likely hazard" seems a bit excessive, but I can see the reasoning here even if I don't agree with it: if you think that words have the potential to cause substantial harm, then it makes sense to think that if you put out a long list of words/statements chosen for their potential to be harmful, the likelihood that at least one person will be substantially harmed by at least one entry on the list seems, if not high, then still high enough to warrant caution. Viliam has managed to get around this, because the reasoning only applies if you're directly mentioning the harmful words/statements, whereas Viliam has described some examples indirectly.

Replies from: philh, SaidAchmiz
comment by philh · 2021-02-27T14:23:56.365Z · LW(p) · GW(p)

words can cause thoughts, thoughts can cause urges to perform actions which are harmful to oneself, such urges can cause actions which are harmful to oneself. There’s no claim that any of these things is particularly likely, just that they’re possible, and if they’re all possible, then it’s possible for words to cause harm

Without having an object level opinion here (I didn't read the post and I only skimmed the comments), I note that this argument is incomplete. It may be that the set of "urges that cause harmful actions" is disjoint from the set of "urges which can be caused by thoughts which can be caused by words".

Replies from: kithpendragon
comment by kithpendragon · 2021-02-27T17:38:08.253Z · LW(p) · GW(p)

That's a fair objection. I encourage you to look at the Appendix [LW(p) · GW(p)] for some abstracted examples if you happen to find yourself interested.

comment by Said Achmiz (SaidAchmiz) · 2021-02-25T06:36:17.313Z · LW(p) · GW(p)

Their argument is quite simple, as I understand it: […] It borders on a technicality, and elsethread I disputed its practical importance, but for all that it is successful at what it’s trying to do.

It’s worse than a technicality—it’s an equivocation between meanings of “cause”. In ordinary speech we do not speak of one thing “causing” another if, say, the purported “cause” is merely one of several possible (collectively) necessary-but-not-sufficient conditions for the purported “effect” to occur—even though, in a certain sense, such a relationship is “causal”—because if we did, then we would have to reply to “what caused that car accident” with “the laws of physics, plus the initial conditions of the universe”.

So kithpendragon proves that words can “cause harm” in the latter technical sense, but the force of the argument comes from the claim that words can “cause harm” in the former colloquial sense—and that has assuredly not been proven.

(And this is without even getting into the part about “large cascades of massive change” and “downward spiral of self destruction with truly unfortunate consequences” and such things, that can allegedly be caused by “what seems like a tiny thing”—a claim that is presented without any support at all, but without which the injunction that motivates the post simply fails to follow from any of the rest of it!)

Replies from: kithpendragon
comment by kithpendragon · 2021-02-25T10:16:35.895Z · LW(p) · GW(p)

How about a concrete example of a benign interaction? Suppose you're sitting at a table having a meal with someone in your household. You look up and ask "Please pass the salt", and they do so. I think most people would agree that your asking caused the salt to be passed because the conditions were right for it to do so.

The same can (and does) happen with less benign interactions. To stay in the abstract: threats, coercion, and bullying come readily to mind. How many people have been threatened into doing terrible things? Or ordered to by a superior? This kind of thing happens all the time.

Replies from: SaidAchmiz, Pattern, TheSimplestExplanation
comment by Said Achmiz (SaidAchmiz) · 2021-02-25T13:10:49.339Z · LW(p) · GW(p)

I think most people would agree that your asking caused the salt to be passed because the conditions were right for it to do so.

I do not agree with that.

In the everyday sense of ‘cause’, you didn’t cause the person in question to pass the salt. They chose to pass the salt, after you asked them to do so. They could’ve chosen otherwise. (Indeed, it’s easy to imagine reasons why they would.)

How many people have been threatened into doing terrible things? Or ordered by a superior?

These are examples of causation by actions, not by words. The words in question communicate information about intentions and actions; but the actions (and/or threat/promise thereof) are what cause things to happen.

Replies from: kithpendragon
comment by kithpendragon · 2021-02-25T13:29:50.769Z · LW(p) · GW(p)

They might not pass you the salt. But they most probably will not unless you ask. The asking causes your dinner partner to at least consider passing you the salt, where the thought might not have ever come up otherwise. More simply, your words caused the thought in someone else that resulted in your being handed the salt. That's as much cause as your releasing a catch causing the spring tension to be released and resulting in a lid opening.

You don't need to actually follow through on a threat for it to be effective, someone need only believe that the threat is genuine. That constitutes a necessary condition and a linguistic trigger.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-02-25T17:55:06.242Z · LW(p) · GW(p)

That’s as much cause as your releasing a catch causing the spring tension to be released and resulting in a lid opening.

A cause of the thought? Sure. A cause of the action? Certainly not.

You don’t need to actually follow through on a threat for it to be effective, someone need only believe that the threat is genuine. That constitutes a necessary condition and a linguistic trigger.

Actions are what cause people to believe in threats. The words communicate intent, they do not themselves cause ‘harm’ or anything else.

Replies from: kithpendragon
comment by kithpendragon · 2021-02-25T19:08:09.038Z · LW(p) · GW(p)

Maybe the issue is definition and we should taboo "cause". I'll also taboo "words" and "harm" while I'm at it. My core claim is that:

The purpose of symbolic language is to transmit ideas from one mind to another. In the new mind, the idea can prompt or set the stage for (sometimes strong) urges to arise. Humans acting on strong urges can be destructive in lots of ways.

Do you disagree that this happens? If so, why?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-02-26T08:06:28.178Z · LW(p) · GW(p)

The “core claim” you quote is actually several claims appended to each other, with the connections between them left as implicit additional claims. So when you ask if I disagree that “this” happens, I can’t answer in a binary fashion because you’re asking multiple questions, some of which are unstated (and which I have to infer, with the possibility of inferring incorrectly).

(This is very similar to one of the several ways in which your post itself is flawed.)

I will comment as I am able, however. So:

The purpose of symbolic language is to transmit ideas from one mind to another.

This can reasonably be said to be a purpose of symbolic language (certainly it’s not the only purpose, but it’s a big one).

In the new mind, the idea can prompt or set the stage for (sometimes strong) urges to arise.

I am not quite sure what you mean by "urges" (I can think of multiple possible senses of this word which might be relevant), but clearly you're referring to some variety of mental states.

Mental states in general arise for various reasons and in various ways. Any given mental state may be capable of arising in any of a variety of ways, "triggered" or immediately preceded by any of a number of precursor mental states or external stimuli, etc. Certainly, ideas introduced through communication by another person are one possible type of such precursor or trigger.

Does this mean that the communicative act “caused” (“prompted” and “set the stage for” seem to be synonyms for “caused”, with, perhaps, variation in emphasis) the mental state in question? Maybe, maybe not. In the case of overdetermination, the answer would be “not”. (How common is that case? I don’t know. Fairly common, it seems to me.) In other cases, the answer is more ambiguous. It depends on the type of mental state, certainly. Whatever you mean by “urges”, it seems particularly unlikely that mental states that could be described thus (especially those which can reasonably be described as “strong urges”), would be “caused” (in a non-overdetermined way) by communicative acts of others.

[… here we have an apparent gap in the chain of reasoning …]

You jump from “urges” to “humans acting on” those “urges”. This seems like it needs more examination, to say the least. Most people don’t just act on whatever thoughts pop into their heads. Reflexive actions exist, of course, but I’m not sure you have those in mind. If I understand correctly the sort of mental states you refer to, they are such that acting on them, or not doing so, is a choice. People are quite capable of suppressing even very strong emotions, taking no actions whatsoever on their basis.

Humans acting on strong urges can be destructive in lots of ways.

This is literally true, but quite misleading in its implication that “strong urges” are somehow unusual or unique in being mental states which, when humans act on them, result in destructive behavior.

In fact people can be destructive in lots of ways regardless of the nature of the mental state or states which resulted in those destructive actions. A man may strangle a rival in a fit of rage, or strike down a pedestrian with his car while being distracted by a funny picture on his smartphone, or carefully orchestrate a murder while seething with bile and cold hatred, or calmly sign an order that sends thousands to their deaths while taking his morning tea. There is nothing particularly special about any of these mental states. It is simply easy to inflict pain and death and horror, if one is placed in circumstances that permit such actions.

comment by Pattern · 2021-03-09T17:15:46.389Z · LW(p) · GW(p)

While the commenter (above and below) did not appear satisfied with your answer, the idea of inverting is interesting: how words can cause the opposite of harm seems like something that can safely be established with examples.

comment by TheSimplestExplanation · 2021-02-25T14:04:28.173Z · LW(p) · GW(p)

The article is rather vague but seems to imply/say that words alone can cause harm. Which is not the case in these examples.

comment by kithpendragon · 2021-02-24T17:09:18.801Z · LW(p) · GW(p)

I'm sorry for coming off as particularly harsh in my warnings. I noticed early on just how easy it would be to accidentally fall into some really horrible speech on this topic. I just don't want to end up with someone falling into the same trap I almost did, so I put up the orange cones.

Your examples are well calibrated; thank you.

comment by Said Achmiz (SaidAchmiz) · 2021-02-24T07:48:12.715Z · LW(p) · GW(p)

Without examples, this post is almost entirely devoid of content.

comment by ChristianKl · 2021-02-24T13:37:09.936Z · LW(p) · GW(p)

The general way to get around the infohazard is to use either historic examples or examples of other cultures and societal contexts. 

You can say that saying "Democracy is good" can cause harm because it can motivate people to act politically in totalitarian societies in which they will get punished for it.

Butterflies famously can cause a lot of harm by flapping their wings. That seems to be a way words can cause harm as well.

I think most people can agree that in both of those examples the actions can lead to causal chains that result in harm. The problem is that you play motte and bailey and equate "words can lead to causal chains that produce harm" with "interlocutors should be more careful with their words". That leads to avoiding any of the cruxes that might come up with "interlocutors should be more careful with their words".

Replies from: kithpendragon
comment by kithpendragon · 2021-02-24T14:18:01.390Z · LW(p) · GW(p)

I generally think of "harm should be avoided whenever possible" as morally foundational. (Although it certainly isn't the only possible basis for a moral system, it seems really common). If "words can lead to causal chains that produce harm", then it follows directly that "interlocutors should be careful with their words so as to avoid accidental harm", does it not? I'll own that I didn't make that link explicitly, though. Thanks for pointing out the gap (and the blind spot).

As for the motte and bailey, I'm not sure where you're getting that. In the introduction, I lay out the argument I'm defending against clearly, and you can see it repeated elsewhere [LW(p) · GW(p)] in the comments. When I state that we should be more careful with our words, it is met with "words can't cause harm, that would be magic".

Replies from: Ikaxas, ChristianKl
comment by Vaughn Papenhausen (Ikaxas) · 2021-02-24T15:45:20.350Z · LW(p) · GW(p)

I originally had a longer comment, but I'm afraid of getting embroiled in this, so here's a short-ish comment instead. Also, I recognize that there's more interpretive labor I could do here, but I figure it's better to say something non-optimal than to say nothing.

I'm guessing you don't mean "harm should be avoided whenever possible" literally. Here's why: if we take it literally, then it seems to imply that you should never say anything, since anything you say has some possibility of leading to a causal chain that produces harm. And I'm guessing you don't want to say that. (Related is the discussion of the "paralysis argument" in this interview: https://80000hours.org/podcast/episodes/will-macaskill-paralysis-and-hinge-of-history/#the-paralysis-argument-01542)

I think this is part of what's behind Christian's comment. If we don't want to be completely mute, then we are going to take some non-zero risk of harming someone sometime to some degree. So then the argument becomes about how much risk we should take. And if we're already at roughly the optimal level of risk, then it's not right to say that interlocutors should be more careful (to be clear, I am not claiming that we are at the optimal level of risk). So arguing that there's always some risk isn't enough to argue that interlocutors should be more careful -- you also have to argue that the current norms don't prescribe the optimal level of risk already, they permit us to take more risk than we should. There is no way to avoid the tradeoff here, the question is where the tradeoff should be made.

[EDIT: So while Stuart Anderson does indeed simply repeat the argument you (successfully) refute in the post, Christian, if I'm reading him right, is making a different argument, and saying that your original argument doesn't get us all the way from "words can cause harm" to "interlocutors should be more careful with their words."

You want to argue that interlocutors should be more careful with their words [EDIT: kithpendragon clarifies below that they aren't aiming to do that, at least in this post]. You see some people (e.g. Stuart Anderson, and the people you allude to at the beginning), making the following sort of argument:

  1. Words can't cause harm
  2. Therefore, people don't need to be careful with their words.

You successfully refute (1) in the post. But this doesn't get us to "people do need to be careful with their words" since the following sort of argument is also available:

A. Words don't have a high enough probability of causing enough harm to enough people that people need to be any more careful with them than they're already being.

B. Therefore, people don't need to be careful with their words (at least, not any more than they already are). [EDIT: list formatting]]

Replies from: Pattern, kithpendragon
comment by Pattern · 2021-03-09T17:36:02.364Z · LW(p) · GW(p)

I think this is part of what's behind Christian's comment. If we don't want to be completely mute, then we are going to take some non-zero risk of harming someone sometime to some degree.

One way of dealing with this is talking to people in person: with a small group of people the harm seems bounded, which allows for more iteration, as well as perhaps specializing - "what will harm this group? What will not harm this group?" - in ways that might be harder with a larger group. Notably, this may require back and forth, rather than one-way communication. For example:

I might say "I'm okay with abstract examples involving nukes - for example "spreading schematics for nukes enables their creation, and thus may cause harm, thus words can cause harm". (Spreading related knowledge may also enable nuclear reactors which may be useful 'environmentally' and on, say, missions to mars - high (usable) energy density per unit of weight may be an important metric when there's a high cost associated with weight.)"

Also, no one else seems to have used the spoilers in the comments at all. I think this is suboptimal given that moderation is not a magic process although it seems to have turned out fine so far.

comment by kithpendragon · 2021-02-24T15:52:37.782Z · LW(p) · GW(p)

Yes, I'd agree with all that. My goal was to counter the argument that words can't cause harm. I keep seeing that argument in the wild.

Thanks for helping to clarify!

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2021-02-24T15:57:20.971Z · LW(p) · GW(p)

Sorry for the long edit to my comment, I was editing while you posted your comment. Anyway, if your goal wasn't to go all the way to "people need to be more careful with their words" in this post, then fair enough.

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2021-02-24T16:02:03.162Z · LW(p) · GW(p)

I was thinking a bit more about why Christian might have posted his comment, and why the post (cards on the table) got my hackles up the way it did, and I think it might have to do with the lengths you go to to avoid using any examples. Even though you aren't trying to argue for the thesis that we should be more careful, because of the way the post was written, you seem to believe that we should be much more careful about this sort of thing than we usually are. (Perhaps you don't think this; perhaps you think that the level of caution you went to in this post is normal, given that giving examples would be basically optimizing for producing a list of "words that cause harm." But I think it's easy to interpret this strategy as implicitly claiming that people should be much more careful than they are, and miss the fact that you aren't explicitly trying to give a full defense of that thesis in this post.)

Replies from: kithpendragon
comment by kithpendragon · 2021-02-24T16:15:51.156Z · LW(p) · GW(p)

That's a really helpful (and, I think, quite correct) observation. I'm not usually quite so careful as all that. This seemed like something it would be really easy to get wrong.

comment by ChristianKl · 2021-02-24T17:36:11.796Z · LW(p) · GW(p)

By your logic I should be careful when interacting with butterflies because of the hurricanes they cause through causal chains.

Replies from: kithpendragon
comment by kithpendragon · 2021-02-24T18:04:09.274Z · LW(p) · GW(p)

I would say that the weather is probably next-to-never unstable enough for that to actually happen, despite its fame. If I thought otherwise, I would never have even tried to write and post comments, much less essays.

Replies from: ChristianKl
comment by ChristianKl · 2021-02-25T00:19:00.980Z · LW(p) · GW(p)

Weather inherently isn't stable. Wikipedia writes about the butterfly effect: "The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example."

If I thought otherwise, I would never have even tried to write and post comments, much less essays.

It seems that, given how the physics of our world works, you might have to refine your ethical system or be okay with constantly violating it.

Replies from: kithpendragon
comment by kithpendragon · 2021-02-25T00:31:08.572Z · LW(p) · GW(p)

Yes, some prediction models are extremely sensitive to initial conditions. But I doubt very much if a flap or even a sneeze can actually be the key thing that determines Hurricane or Not Hurricane in real life. The weather system would have to not only be extremely unstable, but in just the right way for that input to be relevant at such a scale.
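For readers unfamiliar with how sensitive dependence on initial conditions works, here is a minimal sketch. It uses the logistic map, a standard toy chaotic system, not an actual weather model; the specific starting values and perturbation size are arbitrary choices for illustration:

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0).
# Two trajectories starting a "butterfly flap" apart (1e-10) remain
# indistinguishable at first, but the gap roughly doubles each step,
# so within a few dozen iterations the states are effectively unrelated.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

print(abs(a[5] - b[5]))    # early on: still a tiny difference
print(abs(a[50] - b[50]))  # later: the difference is of order 1
```

Note that this only demonstrates that tiny perturbations can be amplified in a chaotic system; whether a particular real-world perturbation ends up mattering at hurricane scale is a separate empirical question.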

You should still be careful with butterflies, though. They're a bit fragile.

Replies from: ChristianKl, Ikaxas
comment by ChristianKl · 2021-02-25T03:00:03.327Z · LW(p) · GW(p)

If everything else is held constant, then a flap is what determines whether a particular hurricane (one that exists far enough in the future) happens or not. There's a causal chain between the flap and the hurricane.

If you care about something besides the causal chain, something that defines some notion of "key thing", you actually have to say what you mean by "key thing".

comment by Vaughn Papenhausen (Ikaxas) · 2021-02-25T00:57:03.230Z · LW(p) · GW(p)

A sneeze can determine much more than hurricane/no hurricane. It can determine the identities of everyone who exists, say, a few hundred years into the future and onwards.

If you're not already familiar, this argument gets made all the time in debates about "consequentialist cluelessness". This gets discussed, among other places, in this interview with Hilary Greaves: https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/. It's also related to the paralysis argument I mentioned in my other comment.

comment by kithpendragon · 2021-02-25T17:19:27.342Z · LW(p) · GW(p)

Appendix I

First, a little context. When I began the process of writing this essay, I nearly fell into the trap of using a set of words that were calibrated to be harmful as an example of exactly that. Seeking to avoid this pitfall, I put up a set of proverbial traffic cones in a wide area around the issue; I decided that no examples would be safer than actually writing a list designed that way. That was true, but I got stuck in a binary mode of thinking that left me blind to the abstractions available in between those extremes.

After I published the essay, the comments showed me a number of ways in which my writing is... a bit rough in places. In particular, several comments made excellent suggestions about places where I had set the cones too widely, where my over-avoidance of examples was limiting my ability to communicate effectively and making it difficult for others to discuss my work. Huge thanks to the people who wrote those comments! They reminded me that these things have names, after all, and that with a little care we can easily discuss them without accidentally engaging in them or encouraging others to do so.

So for those who feel like examples make a much stronger argument than theory, let's connect some of the dots. Below is a short list of ways that words, and often only words, can be used to manipulate the circumstances in a way that can result in harm. Most of these examples work by convincing the victim that a destructive action is in their best interest. This is typically accomplished either by controlling the conditions or the perceived conditions for the victim, then offering a specific instruction with a harmful outcome, but that's not the only possible strategy. While props and other physical components can be helpful to those attempting these acts, they are not really necessary, and speech alone is sufficient as a trigger once the stage is set.

##  Short list of processes by which words can cause deliberate harm

  • Lying is a pretty central way of causing harm, which is probably why there are so many rules made against it. By convincing someone that the world is other than the way it really is, people can get others to act in some pretty destructive ways.  I'm aware that there has been some discussion in the past as to what constitutes Lying, so I'll define it here as "willfully misrepresenting the facts". That includes cherry-picking as well as alteration, but excludes being accidentally incorrect, and also excludes the listener taking the wrong impression in other ways that were not intended. Those can be harmful too; I'm just setting them out of scope for the word "lie" here.
  • Sales tactics can get people to buy and consume things that are (physically, ecologically, economically, socially...) harmful to them and/or others, or things that they do not need or want, causing direct economic harm.
  • Threatening need not have any non-verbal component, only perceived credibility, to establish conditions in which someone might perform harmful acts.
  • A subset of threatening, Blackmail, can easily ruin lives. This works by manipulating the circumstances such that someone becomes willing to do things that are otherwise not in their best interests, usually by threatening to make some information public that the victim believes would cause them harm unless the victim behaves in a certain way.
  • Intimidation overlaps with threatening, but the threat can be either explicit or implied by the setting, some aspect of the delivery of the words, or some perceived power held by the threatener. These constitute necessary conditions for words to do their work. Police interrogations designed to be intimidating produce more false confessions than other methods, resulting in innocent people going to jail.
  • Bullying contains necessary verbal components including threats and words designed to make the victim question their own worth. Often physical violence accompanies the words, but it may not. This allows the bully to make demands of the victim under threat of more bullying. Long-term, this process has resulted in suicides.
  • Gaslighting is a big pack of lies that are orchestrated to convince a person that they are actively and systematically misperceiving reality. The end result can be that the victim loses touch with reality so badly that they begin making actively destructive or self-destructive decisions.
  • Peer pressure takes advantage of the mind's willingness to compare its situation with that of others, and leverages the fact that rejection causes the brain to react as though it were experiencing physical discomfort.
  • Horrors have been committed by people "just following orders". Organizations that expect that kind of behavior tend to mix and match these and other methods very efficiently when indoctrinating new members. Once indoctrination is accomplished, it renders "orders" (mere words) high priorities for the subordinate.
comment by waveman · 2021-02-24T02:57:31.363Z · LW(p) · GW(p)

To be of LW standard, this essay should also steelman the opposing argument and assess it fairly,

i.e., that censorship of words can cause harm.

Replies from: kithpendragon, kithpendragon
comment by kithpendragon · 2021-02-24T10:45:23.351Z · LW(p) · GW(p)

Upon further reflection: consider the messages that censorship sends between one group and another. Then, I think you'll have your answer.

comment by kithpendragon · 2021-02-24T10:01:39.435Z · LW(p) · GW(p)

This isn't journalism. That would be another topic.

comment by Stuart Anderson (stuart-anderson) · 2021-02-24T07:22:26.334Z · LW(p) · GW(p)

-

Replies from: kithpendragon
comment by kithpendragon · 2021-02-24T10:03:07.843Z · LW(p) · GW(p)

Exactly which part of the argument are you disagreeing with? Where does the causal chain fall down for you?

Replies from: stuart-anderson
comment by Stuart Anderson (stuart-anderson) · 2021-02-25T11:32:21.510Z · LW(p) · GW(p)

-

Replies from: Pattern, sophia_xu
comment by Pattern · 2021-03-09T18:09:11.790Z · LW(p) · GW(p)
That is in a different lane from your words are magic spells one. 

How? The words have to be tuned to the audience to cause harm? I could plausibly cause harm by

convincing people to drink poison

and yes, it would be easier to convince children to do this. Maybe education takes forever because there is a lot of information to be transferred. Maybe education isn't about education. Maybe it's about locking kids up for long enough** that when they do something stupid, no one else is to blame (i.e., legally liable).

This (trying to convince people to do things that will kill them*) is not quite controlling someone's internal state, but strongly pushing towards bad outcomes (like death).

Do I think this happens often? No - but it does exist. Moderation, voting, etc. exist largely for a different reason, and arguably that is because people are irrational. Maybe, a few words on the internet won't bring about the end of the world.*** 'Choosing words with care' as a practice may be more relevant to communicating efficiently/effectively/at all than to preventing harm.


this post sounds like dogma

I agree with this - the structure is also similar. (I believe in related corners of the internet, the OP's concerns are more often addressed by sticking a warning label at the top, and then writing the piece anyway. I haven't seen lots of spreading automated tools for custom content classification on the user end, though, and that would be cool. Maybe someday I'll be able to open up an article and run things like 'remove links to dogma/tic articles', 'summarize', 'cutout the fluff' or 'count/skip to the examples'. For now, the user controlled evaluation/recommendation system remains on my to do list (to create, find, or assemble) , and I won't be getting to it soon.)


*

Don't drink poison, eat spicy food.

**

Whether that's age, or 'we tried. You can see we tried, because we took a long time.'

***"If everyone sells all their stocks at once, will the stock market crash?"

Replies from: stuart-anderson
comment by Stuart Anderson (stuart-anderson) · 2021-03-10T00:24:04.918Z · LW(p) · GW(p)

-

Replies from: kithpendragon
comment by kithpendragon · 2021-03-11T16:32:13.424Z · LW(p) · GW(p)

The effects that follow any cause necessarily depend on the conditions in which the cause occurs. Nobody in Jonestown had poison funneled down their throats. They acted on what they were told, each according to the conditions in their own mind and (largely shared but not identical) environment.

Replies from: stuart-anderson
comment by Stuart Anderson (stuart-anderson) · 2021-03-11T18:49:34.443Z · LW(p) · GW(p)

-

Replies from: kithpendragon
comment by kithpendragon · 2021-03-12T17:41:26.635Z · LW(p) · GW(p)

Interesting; I noticed that you're using words like "responsibility" and "consent" and "choice" a lot. Do you take a non-materialist view of the mind? That is, do you think a mind is something more than the physical systems it's made of?

Replies from: stuart-anderson
comment by Stuart Anderson (stuart-anderson) · 2021-03-14T09:38:45.767Z · LW(p) · GW(p)

-

Replies from: kithpendragon
comment by kithpendragon · 2021-03-14T18:08:27.747Z · LW(p) · GW(p)

It's just that if the mind is limited to the physical systems that compose it, then free-will-cluster concepts (consent, responsibility, &c.) are map-stuff and don't really signify in a discussion of cause and effect. The state of the mind-system must necessarily evolve according to the laws of physics when it is provided a particular input. That doesn't mean that there's nothing it's like to be the mind (as it's commonly understood), or that the mind doesn't partially operate by generating and comparing counterfactual realities; only that from a global view it's all physics. I agree that while we're "being in the world" it's usually not useful to take that angle on things, but it's important not to just forget it either.

You've appealed to free-will-cluster concepts heavily in your argument, and I'm just trying to get a feel for how you think they're relevant.

You also say you don't believe in "magic spells" (where just saying a thing has a predictable effect, if I'm reading you correctly), but you claim to be able to predictably make certain changes by incanting "I do not consent". That doesn't feel consistent to me.

Replies from: TAG, stuart-anderson
comment by TAG · 2021-03-14T19:05:32.364Z · LW(p) · GW(p)

It’s just that if the mind is limited to the physical systems that compose it, then free-will-cluster concepts (consent, responsibility, &c.) are map-stuff and don’t really signify in a discussion of cause and effect

So physics excludes libertarian freewill and compatibilist freewill?

Replies from: kithpendragon
comment by kithpendragon · 2021-03-15T18:03:52.714Z · LW(p) · GW(p)

I don't think it's always useful to think of free will as a "capacity to make choices in which the outcome has not been determined by past events" (wikipedia). I'm not even sure that definition makes any sense, actually. To me, at least, it doesn't feel like I make decisions without referring to my memories, which were laid into my mind by my past experiences. It certainly feels like different memories could easily result in my making different decisions in the same situation. And the fact that we can get more skilled at handling certain situations as we get older and experience those situations more times supports that notion.

Rather, I think it's (at least sometimes) more useful to reframe free will as how it feels to be inside [LW · GW] a system that operates at least partially by constructing counterfactual futures and conditioning its outputs on how it provisionally responds to those simulated futures. Start such a system in a specific state, give it a particular input set, and you can expect a specific output; but from within the system it feels like freely making a choice. We can see exactly this happen in patients experiencing Transient Global Amnesia, and with other conditions that prevent the encoding of new memories (though those with permanent conditions do still show some neuroplasticity, and this leads to some changes in the long term).
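The "counterfactual-simulation" picture described above can be made concrete with a deliberately crude sketch. All of the names and values here are hypothetical, invented purely for illustration; the point is only that such a system is deterministic given its state and inputs, even though its procedure looks like deliberation:

```python
# A toy "agent" that chooses among options by simulating the outcome
# of each (a counterfactual future) and picking the best one.
# Given the same memory and the same options, it always produces the
# same choice -- yet its procedure is "consider the alternatives".

def choose(memory, options):
    # Score each option's simulated outcome using values laid down by
    # past experience; options never seen before default to 0.
    simulated = {opt: memory.get(opt, 0) for opt in options}
    return max(simulated, key=simulated.get)

memory = {"umbrella": 5, "sunglasses": -2}  # encoded by past experience
print(choose(memory, ["umbrella", "sunglasses"]))  # -> umbrella

# Different memories yield a different "free choice" in the same situation:
memory["sunglasses"] = 10
print(choose(memory, ["umbrella", "sunglasses"]))  # -> sunglasses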

But I also don't think I would say that "physics excludes... freewill". Rather, I would call trying to reconcile free will with cause and effect a category error. Free will is a way to model how a mind can feel like it works from the inside, while causality is a way to model how information propagates through the universe. They're just not really related, is all.

Replies from: TAG
comment by TAG · 2021-03-15T18:45:18.460Z · LW(p) · GW(p)

I don’t think it’s always useful to think of free will as a “capacity to make choices in which the outcome has not been determined by past events” (wikipedia).

That's not the only definition. That's the definition of libertarian free will.

I’m not even sure that definition makes any sense, actually. To me, at least, it doesn’t feel like I make decisions without referring to my memories, which were laid into my mind by my past experiences.

You are reading "...has not been determined by past events" as though it means "entirely unconnected to previous events". It doesn't mean that.

Rather, I think it’s (at least sometimes) more useful to reframe free will as how it feels to be inside a system that ...

Free will is a way to model how a mind can feel like it works from the inside

You start off by saying that this approach is "sometimes" "useful" and then switch to treating it as stone cold fact.

Taking a step back ,

1. We are basically always at map level, because, even in physics, we have to use simplifications. We can't model things at the quark level.

2. We can't regard map-level features as false just because they are map-level features. So claims like "free will is a map-level feature" don't disprove free will.

3. Defining free will as an illusory feeling doesn't prove or disprove it either, since other people use other definitions.

comment by Stuart Anderson (stuart-anderson) · 2021-03-16T15:31:32.909Z · LW(p) · GW(p)

-

Replies from: TAG
comment by TAG · 2021-03-16T18:24:47.587Z · LW(p) · GW(p)

The best argument against a "you are nothing more than a clockwork zombie governed by physics set in motion at the beginning of time" assertion is that we don't have a theory of everything

There's also compatibilism.

comment by sophia_xu · 2021-03-20T14:38:00.852Z · LW(p) · GW(p)

Interesting to see this discussed in a framework about attribution.

If you're willing to engage in a little thought experiment, what levels of responsibility would you consider in this scenario:

Alice was invited to Bob's birthday party. Bob's parents prepared the party and a birthday cake, but they didn't know Alice has a severe peanut allergy. During the party Alice ate the birthday cake, which contained peanuts, and she was hospitalized for a couple of months.

In this scenario I don't think Bob's parents are responsible, because, as you said in a previous post, one person cannot be expected to be responsible for what's going on in another's body.

But what about this alternative scenario:

Bob's parents bought a birthday cake from a bakery - which (if we're living in a developing country and things like FDA don't exist) didn't label its nutrition and allergy-related facts; everything else is still the same.

In this case I'd consider the bakery to be legally and morally responsible: since they're serving potentially unlimited customers, failure to consider such important facts should not be excused by pleading ignorance.

Like allergies, depression can cause otherwise insignificant remarks or criticisms to be more harmful to a patient than to otherwise healthy people, since depressed people engage in more negative thinking about themselves than healthy people do. I'm not a medical professional, so please correct me if I'm wrong; I'm only generalizing from my personal experience and the evidence I've seen.

My case is that, since internet comments are directed at a potentially unlimited audience, we should use some caution in our words when speaking publicly, even if they are only potentially harmful to other people, intentionally or not.

(Also, I downvoted the parent comment since it uses unnecessary politics and tribalism as a way to avoid conversation, which isn't something we should encourage as a community.)

Replies from: stuart-anderson