The Incomprehensibility Bluff

post by SocratesDissatisfied · 2020-12-06T18:26:16.168Z · LW · GW · 22 comments

Contents

  I. Outlining the Bluff
  II. Characterising the Bluff
  III. Identifying the Bluff
  IV. Calling the Bluff
  V. In Conclusion

I. Outlining the Bluff

If you are in a conversation with another person, and you cannot understand what they are saying, various possibilities present themselves, two of which are as follows:

  1. They are much more intelligent or knowledgeable than you. So much more intelligent or knowledgeable than you, in fact, that you are unable to understand them.
  2. They are talking nonsense.

In any given case, it may be far from obvious which of these possibilities pertains. Everyone who is even the least bit self-reflective knows that – as a statistical matter – there must be a great many people far more intelligent than them[1]. Moreover, everyone knows that there are still more people with knowledge of specialist fields about which they know little.

To resolve this problem, people look to social cues. These may include their interlocutor’s education, class, qualifications, and social position. This also extends to their behaviour: their confidence, the rapidity and ease of their responses to questions, and so on. Where arguments are textual, still other factors apply, chief amongst them the sophistication of the language and the complexity of the arguments.

This presents an extraordinary opportunity for incomprehensibility bluffing: being deliberately unintelligible in order to convince your audience of the intellectual superiority of yourself or your arguments.

The perversity of this phenomenon is startling. It is already a truism that confidence and social capital make people more convincing. It is far less remarked upon – if remarked upon at all – that such factors’ efficacy may increase as the sense their possessor is making declines.

II. Characterising the Bluff

And there are still more fascinating characteristics of the incomprehensibility bluff to remark upon:

  1. Of least interest, but potentially greatest relevance, is its ease. There are a great many more nonsensical than sensical statements in the possibility space of ideas[2], so it is much easier to confidently reel off the former than to laboriously construct the latter. This puts the incomprehensibility bluff within the reach of most, lowering the bar to its rhetorical deployment.
  2. Another characteristic is the unfalsifiability of incomprehensibility bluff arguments and positions. A theory with a clear logical structure and conclusions can be directly analysed, argued against, and (if flawed) refuted. It is a great deal harder to grapple with a theory that deliberately evades understanding! And if a theory invites misconstrual, arguments against it will tend to target misconstructions, giving its author an easy way to dismiss objections and frustrating attempts at good faith engagement.
  3. Further, the incomprehensibility bluff leverages audience insecurity to induce agreement. If you are talking to a highly knowledgeable and intelligent individual, saying “I don’t understand what you’re saying” only admits your comparative ignorance or mental incapacity. This is something people are not keen to do. Moreover, as people are risk averse, they may fail to highlight incomprehensibility even when they think it probable that the speaker is talking nonsense. But if everyone makes this calculation, no one will call out the speaker’s incomprehensibility. And if no one calls out a speaker’s incomprehensibility, this acts as a powerful social cue that the speaker is not in fact incomprehensible: no one else seems to think they are, so the problem must lie with you.[3]
  4. Finally, and most elegantly, there is the reflexivity of the incomprehensibility bluff: the capacity of the bluffer to bluff themselves. Put differently, if you cannot understand what you yourself are saying, two possible explanations are as follows:
    1. You are more intelligent and insightful than even you can fully grasp.
    2. You are talking nonsense.

One of these conclusions is substantially more flattering than the other. And given people’s susceptibility to flattery of all kinds, it would be little wonder if many ostensibly deceptive incomprehensibility bluffers were in fact in genuine awe of their own genius.

My sense is that the incomprehensibility bluff is widespread and effective. I have personally experienced it (the reality, the temptation, and the fight against it) during my study of academic philosophy. I detect it in theology[4], and in modern artistic theory and criticism[5]. I worry that it increasingly characterises certain segments of social activism. To the extent that the bluff enables worse arguments[6], and helps worse-motivated and less meritorious individuals to gain prominence, it is a profoundly suboptimal influence on public discourse.

The question then presents itself: if incomprehensibility bluffing is an effective and socially damaging rhetorical strategy, how do we identify and counter it?

III. Identifying the Bluff

As regards identification, there are several prominent tells:

  1. First, complexity for its own sake. This includes using special terminology and vocabulary to express concepts that could be as easily explained with normal language.
  2. Second, an unwillingness to explain or (attempt to) simplify. Part of the difficulty of diagnosing the incomprehensibility bluff is that the world is an extremely complex place, many aspects of which require detailed explanation to understand. However, there are many ways to explain even complex theories – including via analogy, overview, or decomposition into simpler parts. In a good faith dialogue, people should be willing to try to make themselves intelligible. If they deliberately resist doing so, one must ask why. Sometimes it may be for legitimate reasons (if two doctors are arguing over the merits of a complex medical procedure, a layperson’s demands for explanation may be distracting and unhelpful). Oftentimes it will be because simplifying their positions would reveal their vapidity – or because their positions are so convoluted that meaningful simplification is impossible. In either of the latter cases, the incomprehensibility bluff may well be at play.
  3. Third, and related to the above, is a dynamic of stigmatising “ignorance”[7]. Remember that part of the power of the bluff derives from the audience’s fear that, if they speak up, they will be perceived as ignorant or unintelligent. If a speaker responds to questions about his position by attacking the questioner, this may be an attempt to stigmatise expressions of incomprehension, and so to empower the social dynamics behind the bluff.
  4. Fourth is whether there are any independent and reliable indicators of your interlocutor’s knowledge and intelligence. A physicist who has been practising for twenty years will likely sound incomprehensible to the layman – but if they’re working for CERN, there’s almost certainly a reasonable explanation[8].
  5. Finally, the confidence with which a theory is expressed can be an important cue, especially where the theory relates to generally low-confidence fields of knowledge (philosophy, psychology, economics, and the social sciences chief amongst them). A theory which is measured, qualified, and expressed with uncertainty invites questions and forthright expressions of disagreement or incomprehension – and such expressions undermine the social dynamics buttressing the incomprehensibility bluff. By contrast, a confident statement of views cues the audience that the author knows precisely what they are talking about, and so discourages challenge – which is exactly what the bluffer needs.

IV. Calling the Bluff

Once an incomprehensibility bluff is identified, the best way to address it is probably to call it. First, by asking for a clear (and if necessary, simplified) explanation of the suspect theory. Then, if such an explanation is not forthcoming, by outright stating that you believe the theory does not make sense. 

Asking questions gives the suspected bluffer the opportunity to demonstrate that they are not bluffing, and gives you more evidence as to whether they are (cf. tells 2 and 3 above). This is in line with principles of charity and good faith, which we should all seek to cultivate.

Outright calling the bluff serves several functions. Firstly, and most crucially, it gives other people an important social cue that it is OK to question the comprehensibility of the theory, potentially triggering a cascade of opinion unfalsification[9]. Secondly, it sanctions the incomprehensibility bluffer and his incomprehensible theory. By contesting the latter, you blunt its persuasive force. By contesting the former, you dissuade him (and any would-be imitators) from employing this tactic in future.

V. In Conclusion

I wish now that this piece were less comprehensible, in the hope that it would be more convincing.


[1] Indeed, even if one is an uncharacteristically intelligent individual, one is likely to socially sort oneself into environments with other uncharacteristically intelligent individuals. 

[2] I concede that, of the set of nonsensical statements, a great many are so nonsensical as to be immediately recognisable as such. But one need only cast a superficial glance over history to see that humans have believed many things that we would today dismiss out of hand (from the utility of ritual sacrifice to the flatness of the Earth), but which clearly made sense to them. And they would of course say the same about us.

[3] Related to this is the way that the incomprehensibility bluff leverages audience politeness. It is one thing to say that you disagree with someone; that at least dignifies your interlocutor’s position as a serious theoretical construct, to be engaged with on its own terms. It is quite another, more serious and insulting thing to allege that their theory literally does not make sense. People will therefore tend to have a strong aversion to expressing such opinions in polite conversation and correspondence. Perversely, this is especially true of good faith actors, who possess a special desire not to “poison the well” with what could be construed as insults.

[4] Cf. Catholic theories of transubstantiation and trinitarianism.

[5] N.B. that I in fact deeply appreciate modern art – both aesthetically and conceptually. I do not intend this remark as a criticism – confronting the incomprehensible can be a pleasant and thought-provoking experience, and if there is any place for it in society, that place is surely in the aesthetic realm. 

[6] Incomprehensible theories are bad theories because they are unfalsifiable (as noted in section II, point 2). Unfalsifiable theories are bad because, if a theory cannot be proven false, we have no way of knowing whether it is true (if the theory were wrong, there would be no way to find evidence to this effect).

[7] Note that whilst refusing to explain something may amount to, or be accompanied by, an effort to stigmatise the demand for explanation, the two are conceptually distinct. One can say “I’m sorry but I don’t have time to explain x right now” without also saying “What, you want me to explain x? But it’s so obvious!”

[8] It is certainly possible for institutional indicators to be skewed, and for entire organisations to be captured by incomprehensibility bluffs. However, whether this is the case can itself be assessed using the tells set out above.

[9] Following your example, some people may feel comfortable coming forward and expressing their belief that the theory is incomprehensible. This in turn may cause other people to come forward and agree with them; and so on. This type of dynamic is historically common (notably in relation to support for repressive/authoritarian regimes) and intuitive (cf. Hans Christian Andersen’s best fairy tale, “The Emperor’s New Clothes”).
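
The mechanics of this cascade can be sketched with a simple threshold model. The following is a minimal illustration in Python, not drawn from the post itself; the function name, the threshold distribution, and all parameters are my own assumptions:

# A minimal sketch of the cascade above, modelled as a Granovetter-style
# threshold process. All names and parameters are illustrative assumptions,
# not claims about real audiences.
import random

def simulate_cascade(n_audience=20, someone_calls_the_bluff=True, seed=0):
    """Each listener privately doubts the theory, but will only voice
    incomprehension once enough others have already done so."""
    rng = random.Random(seed)
    # Each listener's threshold: how many prior objectors they must hear
    # before they are willing to speak up themselves.
    thresholds = sorted(rng.randint(1, n_audience) for _ in range(n_audience))
    # Calling the bluff is modelled as one extra voice with threshold zero.
    if someone_calls_the_bluff:
        thresholds = [0] + thresholds
    objectors = 0
    for t in thresholds:  # ascending order, so one pass reaches the fixpoint
        if t <= objectors:
            objectors += 1
    return objectors

print(simulate_cascade(someone_calls_the_bluff=False))  # nobody ever objects
print(simulate_cascade(someone_calls_the_bluff=True))   # one voice may flip the room

The point of the sketch is only that a single threshold-zero objector can be the difference between total silence and widespread dissent – which is why outright calling the bluff matters.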

22 comments

Comments sorted by top scores.

comment by knite · 2020-12-06T20:01:27.918Z · LW(p) · GW(p)

Option 3:  You have insufficient shared understanding/facts/terminology to comprehend each other's opinions and even begin having a real conversation.

Replies from: Vanilla_cabs
comment by Vanilla_cabs · 2020-12-06T21:54:20.967Z · LW(p) · GW(p)

Yes, while not central to the point of the article, opening on a false dichotomy is not a pleasant sight.

Replies from: SocratesDissatisfied
comment by SocratesDissatisfied · 2020-12-06T23:07:01.182Z · LW(p) · GW(p)

Point taken and edited accordingly.

I knew there were more than two possibilities, and didn't say the two I highlighted were the only possibilities for that reason. But I concede that the original wording unnecessarily suggested this. 

comment by romeostevensit · 2020-12-06T23:24:27.438Z · LW(p) · GW(p)

You can actively spot check. Familiarize yourself with the 30 minute level of understanding of the field and ask questions about it. This sniffs out a huge proportion of charlatans.

comment by Raemon · 2020-12-08T22:58:57.893Z · LW(p) · GW(p)

This post actually updated me significantly on the "jargon: good or bad?" debate.

Previously, I knew a bunch of good reasons why Jargon was Bad (hard to understand, unfriendly to newcomers). I nonetheless ended up believing "jargon is basically good", because I think there's a benefit to excluding casual outsiders and maintaining an ingroup that's harder to hijack and can maintain higher degrees of trust (as well as the jargon just actually often pointing to specific, nuanced concepts that don't have direct analogues).

The new lens I'm looking at this through is "how should truthseekers be trying to contribute to the commons?". I'm not sure how I weigh all the constraints against each other. But, the new considerations I'm thinking about include:

Something that's been on my mind this year is "easy verifiability". It matters not only that someone is not deceiving you, but that it's legibly obvious that they're not deceiving you.

I do think "basically every advanced field descends into jargon", and that they are doing it for mostly good reasons. Nonetheless, it's pretty bad that academia often isn't even trying to be comprehensible, and in some cases I think people are actively bluffing, and the collective willingness to use jargon creates a fog-of-war where it's easier to bluff.

Replies from: jimrandomh, Raemon
comment by jimrandomh · 2020-12-08T23:58:44.865Z · LW(p) · GW(p)

A naive take on this is that having a higher average level of jargon usage makes the incomprehensibility bluff easier to pull off against people who don't know the jargon, so you might think it reduces the legibility of peoples' knowledge and skill levels overall. But I don't think it works out this way in practice. My experience is that on subjects where I have medium knowledge (not an expert, but more informed than most laypeople), when I come across laypeople pretending to be experts, they often give themselves away by using a jargon term incorrectly. I also find that glossaries are a good entry point into a subject, and avoiding jargon too much would make the glossaries less useful for this purpose.

I am a bit worried about people invisibly bouncing off our community because of the jargon, but I think the jargon is important enough that I'd rather solve it by making the jargon better (and making its intellectual infrastructure better) rather than reduce the amount of it.

Replies from: Raemon
comment by Raemon · 2020-12-09T00:10:45.435Z · LW(p) · GW(p)

My experience is that on subjects where I have medium knowledge (not an expert, but more informed than most laypeople), when I come across laypeople pretending to be experts, they often give themselves away by using a jargon term incorrectly.

Hmm. That is an interesting term to be in the equation.

FYI I see two related things here: one is excessive jargon, the other is unnecessarily wide inferential gaps (i.e. often you need some jargon for the point you're making, but not all of it). In the original sequences, I think it was necessary for Eliezer to bridge a lot of inferential distance, which initially required an excessively long braindump. But, I suspect it's possible to collapse a lot of that down in order to make individual points.

comment by Raemon · 2020-12-08T23:04:44.237Z · LW(p) · GW(p)

Ultimately I think it's still possible for groups of intellectual insiders to communicate clearly with each other and make rapid progress, and also for a lot of that to be Babble that isn't necessarily meant to hold up rigorously. I think this is more of an additional argument in favor of reducing research debt, and making sure to distill your ideas.

And, I think maybe people should feel more of an obligation to practice communicating their ideas to laymen, not just because it's object-level-good to be able to explain ideas to a wider audience, but because there's a tragedy-of-the-commons where if you don't do that, the wider audience is not only ignorant, but vulnerable.

comment by Maxwell Peterson (maxwell-peterson) · 2020-12-07T19:27:54.579Z · LW(p) · GW(p)

When I talk to non-technical people at work or about my work, I am frantically translating all the technical words I usually use into words that fit into something that I hope they can understand. This is very difficult! And I have to do it on the fly. I mess up sometimes. This must happen to a lot of people, and it's all very innocent - because communicating across large inferential distances [LW · GW] is hard. I've gotten better at it but it is definitely a skill. I appreciate the edit to call out that there are various options about what's going on besides the Smarter and Bluffing options described in the post, but I still want to stress that bluffing is a small minority of such cases. Certainly no one should be defaulting to thinking that a person is bluffing, just because they're using difficult language or failing to explain well.

Anecdotally, from working with data analysts and data scientists, watching them present to people outside their level of technical expertise, and looking back at certain presentations I've made: I feel embarrassed for a technical person who is failing to bridge the inferential gap. Like another commenter said, failing to model the audience looks bad. If other technical people are listening and notice, they'll think you're just messing up. So even if we imagine that this Bluffing is common, those doing it would probably only want to do it when there are very few people around to notice how bad their explanation is. 

comment by Vladimir_Nesov · 2020-12-06T21:06:45.219Z · LW(p) · GW(p)

Saying incomprehensible things in a personal conversation is evidence of failing to model the interlocutor, so it's a dubious strategy for signaling intelligence. Writing an incomprehensible paper should work better.

comment by Viliam · 2020-12-15T21:10:32.054Z · LW(p) · GW(p)

Catholic theories of transubstantiation and trinitarianism

(Tangential, but the discussion is already 9 days old...)

Trinitarianism has been a "mysterious answer" since its beginning, but AFAIK the problem with transubstantiation is that its official explanation is based on obsolete science: Aristotelian chemistry. After a few centuries, with lots of theology built on top of that, Aristotelian chemistry was replaced by atomic theory... but the theologians are not ready to throw away centuries of spiritual writings about one of the central points of their faith. The explanations sound so confusing because they are built on a foundation that no longer works.

I guess the lesson for everyone who wants to start their own religion is that scientific proofs for religious dogma may seem impressive in the short term, but often don't age well. Better to keep the magisteria separate.

comment by jimrandomh · 2020-12-08T05:55:23.594Z · LW(p) · GW(p)

I think this bluff is much harder to pull off against programmers, because in software, the priors are very heavily weighted towards incomprehensible things just being wrong, rather than being incomprehensibly advanced.

It's also more difficult to be deceived this way the more intelligent you are; as your intelligence and breadth of knowledge grow, a progressively smaller fraction of legitimate intellectual work is incomprehensible to you, and a larger portion of what remains incomprehensible is nonsense. Since you can mostly understand arguments that are moderately more advanced than those you can produce, you don't need to be top tier to recognize all the bullshit, only second tier.

comment by Donald Hobson (donald-hobson) · 2020-12-07T15:20:17.539Z · LW(p) · GW(p)

Related to asking for a simplification: ask what type of claim is being made. Is it a mathematical theorem, a physical theory, a moral position, etc.?

For a theorem, start by trying to understand the statement, not its proof. If someone says "for all Wasabi spaces, there exists a semiprime covering" (not a real theorem), ask for a simple example of a Wasabi space, and of a semiprime covering on it.

For physical theories, you can ask for a prediction. Eg if you put a single electron in a box with as little energy as possible, the probability of finding it in different locations forms a sine wave. 

You can even say the sort of thing being predicted, without specifying any actual prediction. I.e. "QCD predicts how quickly radioactive stuff decays" instead of "QCD predicts the half-life of uranium-238 to be 4.5 billion years".

For moral positions, ask for one moral dilemma that the position would apply to, and what you would do. E.g. for transhumanism: "You're 80 and getting frail. You have an anti-ageing drug that would make you as healthy and fit as if you were 30; do you use it? (Using the drug doesn't deprive anyone else of it.) Yes."

comment by Donald Hobson (donald-hobson) · 2020-12-07T14:58:59.255Z · LW(p) · GW(p)

Option 4: The person is just really bad at explaining the concept. 

Some people are just really bad at explaining things simply. This also gives a more charitable thing to accuse people of.

comment by Emiya (andrea-mulazzani) · 2020-12-07T09:33:47.529Z · LW(p) · GW(p)

I'm really surprised by the fields where you perceive this to be more frequent, especially philosophy and social activism. One would expect that in the humanistic fields this kind of bluff would be much harder to pull off, since you have fewer excuses to be obscure and not make sense.

Though I've seen bad reasoning in these fields, and also bad reasoning that nobody called out because it was hidden by some amount of complexity. And in technical fields the bluff would have no chance at all of working on anyone save uninformed laypersons.

Could you perhaps provide links to examples of this? I think it would make the post clearer.

 

You seem to imply, with the part where the speaker can also fall for his own bluff, that this doesn't apply only to cases where the perpetrator is wilfully trying to deceive the audience.

If so, I feel that "bluff" might be a misnomer. Perhaps "The Incomprehensibility Obfuscation" could be a bit more accurate?

When I was forced to waste my time and actually study post-Freudian psychoanalysis, I think I met a lot of this: theories I'd stare at for several minutes that just looked... empty, as if the whole thing were a complex renaming system talking about nothing (not that I think Freudian psychoanalysis is in any way reliable or useful either). But I think most people working in the field would derive some illusion of knowledge and understanding from it. So I'm puzzled whether the feeling of "I don't get what you're saying but don't want to look stupid" is the focal part of this process, or whether the bluff works better when you manage to give people arguments they don't notice they haven't understood, or that are at least easy to remember.

Replies from: frontier64
comment by frontier64 · 2020-12-07T16:18:30.400Z · LW(p) · GW(p)

One would expect that in the humanistic fields this kind of bluff would be much harder to pull off, since you have fewer excuses to be obscure and not make sense

You would think so, but all of modern sociology is a competition between authors to coin as many new words and crazy theories as they can. So much so that experts in a particular field of sociology can't tell intentional gibberish from a valid article, so long as the paper sticks to the standard form. See also https://www.youtube.com/watch?v=97FuO-hEhQo

comment by Pattern · 2020-12-07T00:23:54.978Z · LW(p) · GW(p)

First, complexity for its own sake. This includes using special terminology and vocabulary to express concepts that could be as easily explained with normal language.

Every field does this.


5. Finally, the confidence with which a theory is expressed can be an important cue, especially where the theory relates to generally low-confidence fields of knowledge (philosophy, psychology, economics and the social sciences being chief amongst them). A theory which is measured, qualified, and expressed with uncertainty invites questions, and forthright expressions of disagreement or lack of understanding. But such statements undermine the social dynamics buttressing the incomprehensibility bluff.  Contrastively, a confident statement of views is a cue that the author knows precisely what they are talking about.

The phrasing on this one is a little weird.

Replies from: SocratesDissatisfied
comment by SocratesDissatisfied · 2020-12-07T00:59:32.392Z · LW(p) · GW(p)

RE 1: yes, but it’s a matter of degree. Technically every scientific theory is somewhat unfalsifiable (you can always invent saving hypotheses). But some are more falsifiable than others (some lend themselves to saving hypotheses, don’t make clear predictions in the first place, etc.) so falsifiability is still a useful criterion of theory choice. Likewise here with IB and needless jargon.

RE 2: This may just be my current writing style! I appreciate any constructive comments on how it might have been phrased better.

Replies from: Ericf
comment by Ericf · 2020-12-07T04:33:42.247Z · LW(p) · GW(p)

I read it as:

Hedging invites attacks
Confidence implies expertise

And then the concluding sentence is missing: "Therefore, seemingly confident speakers are actually more likely to be bluffing" (this is widely, but not universally, known – cf. "it is known", by Zvi).

comment by Idan Arye · 2020-12-07T17:35:50.794Z · LW(p) · GW(p)

One thing to keep in mind is that even if it does seem likely that the suspected bluffer is smarter and more knowledgeable than you, the bar for actually working on the subject is higher than the bar for understanding a discussion about it. So even if you are not qualified enough to be an X researcher or an X lecturer, you should still be able to understand a lecture about X.

Even if the gap between you two is so great that they can publish papers on the subject and you can't even understand a simple lecture, you should still be able to understand some of that lecture. Maybe you can't follow the entire derivation of an equation but you can understand the intuition behind it. Maybe you get lost in some explanation but can understand an alternative example.

Yes - it is possible that you are so stupid and so ignorant, and the other person such a brilliant expert, that even with your sincere effort to understand and their sincere effort to explain as simply as possible, you still can't understand a single bit of it, because the subject really is that complicated. But at this point the likelihood of this scenario, with all these conditions, is low enough that you should seriously consider the option that they are just bluffing.

comment by Chris_Leong · 2020-12-08T05:16:43.770Z · LW(p) · GW(p)

There's another possibility, which is that they have some low-level insights that have been dressed up to appear as far more.