Information Hazards and Community Hazards

post by Gleb_Tsipursky · 2016-05-14T20:54:36.238Z · LW · GW · Legacy · 15 comments


As aspiring rationalists, we generally seek to figure out the truth and hold relinquishment as a virtue: whatever can be destroyed by the truth should be.

The only case where this does not apply is information hazards, defined as “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” For instance, if you tell me you committed a murder, thereby making me an accessory after the fact, you have exposed me to an information hazard. In talking about information hazards, we focus on information that is harmful to the individual who receives it.

Yet a recent conversation at my local LessWrong meetup in Columbus brought up an issue I would like to call community hazards: topics that are dangerous to discuss in a community setting because they are emotionally challenging and, if discussed, risk tearing apart the fabric of LW community groups.

Now, being a community hazard doesn't mean a topic is off-limits, especially in the context of a smaller, private LW meetup of fellow aspiring rationalists. What we decided is that if anyone in our LW meetup deems a topic a community hazard, we go meta and discuss whether we should discuss the topic at all. We examine how emotionally challenging the discussion would be, whether it risks taking down Chesterton's Fences that we don't want taken down, whether certain aspects of the topic could be discussed with minimal negative consequences, or whether only some members of the group want to discuss it, in which case they can meet separately.

This would work differently in the context of a public rationality event, of course, such as the ones we run for a local secular humanist group as part of our rationality outreach work. There, we decided to use moderation strategies to head off community hazards at the pass, since the audience includes non-rationalists who may not be capable of discussing a community-hazard-related topic well.

I wanted to share this concept and these tactics in the hope that they might be helpful to other LW meetups.

15 comments


comment by HungryHobo · 2016-05-17T11:29:22.030Z · LW(p) · GW(p)

One meta-hazard would be that "community hazards" could end up defined far too broadly, encompassing anything that might make some people feel uncomfortable, and simply becoming a defense for the sacred values of whoever gets to assess what constitutes a "community hazard".

Or worse, that the arguments for one set of positions could get classified as "community hazards" such that, to use a mind-killing example, all the pro-life arguments get classified as "community hazards" while the pro-choice ones do not.

So it's probably best to be exceptionally conservative with what you're willing to classify as a "community hazard".

comment by Gleb_Tsipursky · 2016-05-17T17:34:16.499Z · LW(p) · GW(p)

Good point about that. I think it's a matter of trade-offs - my take is that anything that an aspiring rationalist I trust classifies as a community hazard is a community hazard. For instance, one rationalist I know had a traumatic experience with immigration into the US, and as a result has PTSD around immigration discussions. This makes immigration discussions a community hazard issue in our local LW meetup, due to her particular background. It wouldn't be in another setting. So we hold immigration discussions when she's not there.

However, the broad point is taken, especially the issue of arguments for one set of positions being classified as community hazards - it's important to keep this in mind to prevent groupthink and avoid becoming an echo chamber.

comment by HungryHobo · 2016-05-18T12:57:58.366Z · LW(p) · GW(p)

If something that is tough for even a single member to handle counts as a "community hazard", then this is starting to sound more like safe spaces under a different name, rather than what I thought you meant with the "accessory after the fact" murder example.

comment by Gleb_Tsipursky · 2016-05-18T17:06:26.524Z · LW(p) · GW(p)

Can you elaborate? As I said, it doesn't mean we don't talk about immigration, just not around her. Similarly, if someone had an eating disorder, we wouldn't talk about triggering stuff in front of them.

comment by HungryHobo · 2016-05-19T11:05:49.128Z · LW(p) · GW(p)

That somewhat necessitates either the group remaining very small or discussions only happening in small subsets, since in any non-tiny group there will be one or more people with issues around pretty much anything.

It also wouldn't seem to work terribly well in long-term, written discussions such as the ones on LW, which can run for years with random members of the public joining and leaving partway through.

So the "accessory after the fact" murder example is a very clear and explicit case where major penalties can be inflicted on pretty much anyone by providing them with particular information, which forces them either into certain actions or into danger. 50%+ of the community present are going to be subject to those hazards whether or not they even understand them.

Safe-space avoidance of triggers, on the other hand, is extremely personal: one person out of thousands can suddenly be used as a reason why the community shouldn't talk about, say, rabies. And since most LW communication is long-term and permanent, there is no such thing as "while they're not in the room" - the discussion remains there when they are present, even if it took place while they were not.

Of course, you could limit your safe spaces to verbal communication in small, personal community events, where you only talk about rabies on the days when, say, Jessica isn't there - but then you have the situation where the main LW community could have a recurring and popular Rabies Symptoms Explained megathread.

At which point you don't so much have a "community hazard" as a polite avoidance of one topic with a few of your drinking buddies, including one who isn't really part of the central community (because they can't handle the discussion there) but is part of your local hangout.

comment by Gleb_Tsipursky · 2016-05-22T20:49:58.340Z · LW(p) · GW(p)

I hear you about the difference between verbal and online communication.

The specific point I made above was regarding in-person communication. I hope I made clear that community hazards can be talked about in those settings, but carefully, depending on the participants' skill at rational communication.

Regarding online communication, I generally see it as quite fine to talk about a potentially triggering topic on LW, as long as the article is clearly labeled as such and people can choose not to click on it. There are exceptions where talking about a topic takes down Chesterton's Fences, such as PUA, but the suggestions I made above don't apply to those so much.

comment by Dagon · 2016-05-15T15:47:09.032Z · LW(p) · GW(p)

I'd like to acknowledge that community hazards are covered in the typology you link, and when you "go meta" and discuss them, it may be useful to identify which specific form you expect the hazard to take.

Most of the LW mind-killing seems to be around psychological reactions, but there are a fair number of topics I just don't bring up for fear of altering valuable mindstates.

comment by Gleb_Tsipursky · 2016-05-15T16:43:44.315Z · LW(p) · GW(p)

I'm curious - what kind of topics do you avoid bringing up?

comment by Dagon · 2016-05-17T23:03:13.200Z · LW(p) · GW(p)

One example: I tend not to disparage EA or point out the weaknesses behind utilitarianism very much. It doesn't interfere with other discussions, and I'm perfectly happy having other people overweight imaginary values.

comment by woodchopper · 2016-05-15T12:49:19.025Z · LW(p) · GW(p)

I think a very interesting trait of humans is that we can, for the most part, collaboratively truth-seek on most issues, except those defined as 'politics', where a large proportion of the population - of varying IQs, some extremely intelligent - believe things that are quite obviously wrong to anyone who has spent any amount of time seeking the truth on those issues without prior bias.

The ability for humans to totally turn off their rationality, to organise the 'facts' as they see them to confirm their biases, is nothing short of incredible. If humans treated everything like politics, we would certainly get nowhere.

I think a community hazard would, unfortunately, be trying to collaboratively truth-seek about political issues on a forum like LessWrong. People would not be able to get over their biases, despite being very open to changing their mind on all other issues.

comment by DanArmak · 2016-05-15T15:10:47.015Z · LW(p) · GW(p)

we can for the most part collaboratively truth-seek on most issues, except those defined as 'politics',

This is true not only connotationally (political topics cause humans to behave this way), but also denotationally: those topics which cause humans to behave this way, we call political (or 'tribal').

comment by Lumifer · 2016-05-16T00:47:09.785Z · LW(p) · GW(p)

Nope. Consider the whole wide world of incentives. If a discussion leads to significant real-world results, and not just of the political kind, participants have incentives to attempt to turn the discussion to their advantage, and they regularly do. Truth-seeking is a very common casualty.

For simple examples think about money, sex, etc.

comment by Gleb_Tsipursky · 2016-05-15T16:46:42.436Z · LW(p) · GW(p)

I think a community hazard would, unfortunately, be trying to collaboratively truth-seek about political issues on a forum like LessWrong

Yeah, good point there. That's why it might work in the small, private setting of an LW meetup, but not so much on the open forum of LW.

comment by Houshalter · 2016-05-17T07:56:02.906Z · LW(p) · GW(p)

Is this really true? It seems that humans have the capacity to endlessly debate many issues without changing their minds - including philosophy, religion, scientific debates, conspiracy theories, and even math, on occasion. Almost any subject can create deeply nested comment threads of people going back and forth debating. Hell, I might even be starting one of those right now, with this comment.

I don't think there's anything particularly special about politics. LessWrong has gotten away with horribly controversial things before, e.g. torture vs. dust specks, AI risk, etc. There have even been political subjects on occasion.

I'd just say it's off topic. I don't come to LessWrong to read about politics. I get that from almost everywhere else. LessWrong doesn't really have anything to add.

But maybe if there were a political issue that either isn't too controversial or isn't too mainstream, I wouldn't mind it being discussed here. E.g., there are sometimes discussions about genetically engineered babies, and those even fit well with other LessWrong subjects.

comment by Lumifer · 2016-05-16T00:42:14.854Z · LW(p) · GW(p)

a very interesting trait of humans is that we can for the most part collaboratively truth-seek on most issues, except those defined as 'politics'

That looks to me to be just false. A trivial counterexample: "It is difficult to get a man to understand something, when his salary depends on his not understanding it." -- Upton Sinclair.