Infohazards vs Fork Hazards
post by jimrandomh · 2023-01-05T09:45:28.065Z
I think actual infohazardous information is fairly rare. Far more common is a fork: you have some idea or statement, you don't know whether it's true or false (typically leaning false), and you know that either it's false or it's infohazardous. Examples include unvalidated insights about how to build dangerous technologies, and most acausal trade/acausal blackmail scenarios. Phrased slightly differently: "infohazardous if true".
If something is wrong/false, it's at least mildly bad to spread/talk about it. (With some exceptions; wrong ideas can sometimes inspire better ones, maybe you want fake nuclear weapon designs to trip up would-be designers, etc.) And if something is infohazardous, it's bad to spread/talk about it, for an entirely different reason. Taken together, these form a disjunctive argument for not spreading the information.
I think this trips people up when they see how others relate to things that are infohazardous-if-true. When something is infohazardous-if-true (but probably false), people bias towards treating it as actually-infohazardous; after all, if it's false, there's not much upside in spreading bullshit. Other people seeing this get confused, and think it's actually infohazardous, or think it isn't but that the first person thinks it is (and therefore think the first person is foolish).
I think this is pretty easily fixed with a slight terminology tweak: simply call things "infohazardous if true" rather than "infohazardous" (adjective form), and call them "fork hazards" rather than "infohazards" (noun form). This clarifies that you only believe the conditional, and not the underlying statement.
16 comments
Comments sorted by top scores.
comment by Rana Dexsin · 2023-01-05T10:29:05.272Z
“fork hazard” is very easy to confuse with other types of hazards and would occupy a lot of potential-word space for its relative utility. May I suggest something like “conditional infohazard”, elliptical form “condinfohazard”?
↑ comment by Raemon · 2023-01-05T19:19:01.767Z
I like conditional infohazard. I also think "Infohazard if true (but probably false)" is actually just not that long and it may often be best to just say the whole thing.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-01-05T20:55:09.825Z
How about a “Schrödinger’s Infohazard”?
↑ comment by Rana Dexsin · 2023-01-06T01:00:31.524Z
I agree that it's not that long phonetically, but it's longer in tokens, and I think anything with a word boundary that leads into a phrase may get cut at that boundary in practice and leave us back where we started—put from the other side, the point of having a new wording is to create and encourage the bundling effect. More specifically:
- It seems like the most salient reader-perspective difference between “infohazard” and the concept being proposed is “don't make the passive inference that we are in the world where this is an active hazard”, and blocking a passive inference wants to be early in word order (in English).
- Many statements can be reasonably discussed as their negations. Further, many infohazardous ideas center around a concept or construction more so than a statement: an idea that could be applied to a process to make it more dangerous, say, or a previously hidden phenomenon that will destabilize harmfully if people take actions based on knowledge of it. “if true” wants a more precise referent as worded, whereas “conditional” or equivalent is robust to all this by both allowing and forcing reconstruction of the polarity, which I think is usually going to be unambiguous given the specific information. (Though I now notice many cases are separately fixable by replacing “true” with the slightly more general “correct”.)
- If you want to be more specific, you can add words to the beginning: “truth-conditional” or “falsity-conditional”. Those seem likely to erode to the nonspecific form before eroding back to “infohazard” bare.
This is independent of whether it's worth smithing the concept boundary more first. It's possible, for instance, that it's better to treat “infohazard” as referring to the broader set and leave “true infohazard” or “verified infohazard” for the confirmed subset, especially since when discussing infohazards specifically, having the easiest means of reference reveal less about the thing itself is good by default. However, that may not be feasible if people are indeed already inferring truth-value from hazard-description—which is a good question for another comment, come to think of it.
comment by NicholasKees (nick_kees) · 2023-01-05T16:39:22.483Z
Sometimes something can be infohazardous even if it's not completely true. Even though the northwest passage didn't really exist, it inspired many European expeditions to find it. There's a lot of hype about AI right now, and I think a cool new capabilities idea (even if it turns out not to work well) can also do harm by inspiring people to try similar things.
↑ comment by M. Y. Zuo · 2023-01-07T00:41:06.137Z
But even the failed attempts at discovering the northwest passage did lead to better mapping of the area, and other benefits, so it's not clear that it was a net negative for society at all.
↑ comment by Geoffrey Wood (geoffrey-wood) · 2023-01-11T02:35:24.287Z
It certainly was infohazardous to the people who funded the expeditions and got a poor return on their investment.
I would consider the hazard to be to the agent, not to society, though I can certainly imagine information that hurts an individual but benefits somebody else.
↑ comment by M. Y. Zuo · 2023-01-12T09:34:25.441Z
How do you know what their evaluation of their investment was?
↑ comment by Geoffrey Wood (geoffrey-wood) · 2023-01-12T22:12:11.037Z
Thinking about it more, I suppose I don't know; perhaps they were perfectly happy.
However, in my experience, when you set out to find a thing and fail to find it, that often leads to dissatisfaction. My expectation / rule of thumb for this is "People don't often hunt for things they don't want for some reason".
comment by Dagon · 2023-01-05T19:05:28.981Z
I think there are subtypes of infohazard, and this has been known for quite a long time. Bostrom's paper (https://nickbostrom.com/information-hazards.pdf) is only 12 years old, I guess, but that seems like forever.
There are a LOT of infohazards that are not only hazardous if true. There's a ton of harm in deliberate misinformation, and some pain caused by possibilities that are unpleasant to consider, even if it's acknowledged they may not occur. Roko's Basilisk (https://www.lesswrong.com/tag/rokos-basilisk) is an example from our own group.
edit: I further think that un-anchored requests on LW for unstated targets to change their word choices are unlikely to have much impact. It may be that you're putting this here so you can reference it when you call out uses that seem confusing, in which case I look forward to seeing the reaction.
↑ comment by Rana Dexsin · 2023-01-06T01:08:27.829Z
I read this as an experimental proposal for improvement, not an actively confirmed request for change, FWIW.
comment by tailcalled · 2023-01-05T16:14:18.479Z
"Potential recipe for destruction"?
comment by Rana Dexsin · 2023-01-06T01:06:44.915Z
Do I understand correctly from your third paragraph that this is based on existing concrete observations of people getting confused by making an inference from the description of something as an infohazard to a connected truth value not intended by the producer of the description? Would it be reasonable to ask in what contexts you've seen this, how common it seems to be, or what follow-on consequences were observed?
↑ comment by jimrandomh · 2023-01-06T01:33:31.603Z
I've seen it happen with Roko's Basilisk (in both directions: falsely inferring that the basilisk works as-described, and falsely inferring that the person is dumb for thinking that it works as-described). I've seen it happen with AGI architecture ideas (falsely inferring that someone is too credulous about AGI architecture ideas, which nearly always turn out to not work).