Blatant lies are the best kind!

post by Benquo · 2019-07-03T20:45:56.948Z · LW · GW · 17 comments

This is a link post for http://benjaminrosshoffman.com/blatant-lies-best-kind/


Mala: But then why do people get so indignant about blatant lies?

Noa: You mean, indignant when others call out blatant lies? I see more of that, though they often accuse the person calling out the lie of being unduly harsh.

Mala: Sure, but you can't deny - you've seen yourself - that people actually do get more indignant when they say that, than when they're pointing out a subtle pattern of motivated reasoning. How do you explain that, if "blatant lie" isn't a stronger accusation? 

Noa: I think I see the problem. A stronger accusation can mean an accusation of greater wrongdoing, or it can mean a better-founded accusation. Blatant lying is ... well ... blatant! If someone pretends not to see that, that's terrible news about their ability or willingness to help detect deception.

Mala: But then the indignation is misplaced. Suppose Jer is talking with Horaha, trying to persuade him that their mutual acquaintance Narmer is behaving deceptively. Jer indignantly points out a blatant lie Narmer told. The proper target of the extra indignation due to blatancy is Horaha, not Narmer.

Noa: Who said otherwise?

Mala: Come on, you know that people get extra-indignant at the liar about blatant lies, despite your so far unsubstantiated claim that they are the best kind.

Noa: Sure, people make that mistake. People also yell at their friends because a stranger was mean to them earlier in the day or because they stubbed their toe. I never claimed - OK, it's helpful of you to point out that people make this mistake, but still, there is a good reason for indignation here, and understanding its proper target might help us avoid this kind of slippage.

Mala: So, why is blatant lying the best kind? Is it because it's purer somehow?

Olga: Hold on, you two, you've skipped over something.

Noa: What's that?

Olga: Sometimes Horaha and Narmer are the same person - sometimes the liar is the person we're trying to point out the lie to. Let's call the liar in the composite situation Menes.

Jer might be uncertain about Menes's true intentions. Menes might have an unexamined tendency to lie about something, but might - if this is drawn to his attention - reflect on what's going on, and what he's trying to accomplish.

When Menes says something to Jer - for instance, that everybody knows we don't have true freedom of speech - Jer might want to point Menes's attention to the fact that he just said something that he already knows not to be true.

Mala: So, if it's meant to be helpful, why the anger?

Olga: Part of how we ask for more attention is by expressing excitement - and excitement about wrongness can easily turn out like indignation. If Menes treats this as an important problem, halts, melts, and catches fire - then Jer can treat him as a basically friendly collaborator with intent to inform on a level he can reach, even if some surface-level behavior lied. But if Menes digs in and fights a rearguard action against the truth, then Jer has to process the new information: that Menes is behaving like an adversary even after his attention's been drawn to that fact. At this point, Jer really should be more angry at Menes (and expressing this anger makes some sense if Jer still has some hope that Menes can be reached).

Mala: I think I get it. But it still seems like you're saying that blatant lying reflects unusually bad intent, not unusually good intent, as far as deception goes.

Noa: Olga's been assuming agreement on something I think you're still missing.

Mala: And what's that? It's rude and condescending to keep talking around this instead of just telling me what you think I'm missing. Do you want to feel special because I'm clueless, or do you want to clue me in?

Noa: I was going to - never mind, I'll just tell you now. You know about simulacrum levels?

Mala: Vaguely, though I keep losing track of the distinctions between the higher levels, and I wish there were better names for them.

Noa: Me too, but maybe this will help. Telling the truth as best you can is simulacrum level 1 - speech is nothing but sharing information. Telling an outright lie is simulacrum level 2 behavior - there's a clean distinction within the mind of the liar between the process of understanding what's going on, and the process of trying to control others' beliefs about what's going on. The more blatant the lie, the less it raises the intersubjective simulacrum level - the simulacrum level conversations happen at. If you can say, "oops, that was a lie," and then step back and try to repair things from the root, then you're at least not confusing other people about what good thinking looks like.

Mala: What do the higher simulacrum levels look like here?

Noa: Motivated reasoning is level 3. You're trying to persuade yourself that the desired state is true, in the usually unconscious hope that this will make it true. Nietzsche said something about this, I think, comparing it to bowling.

Olga: Some people throw a bit of their personality after their bad arguments, as if that might straighten their paths and turn them into right and good arguments - just as a man in a bowling alley, after he has let go of the ball, still tries to direct it with gestures.

Noa: Yes, that's the aphorism I was thinking of.

Mala: And what about lying, and then consciously offering the most persuasive arguments you can for it?

Noa: Then the inside of the liar's mind is still an uncorrupted simulacrum level 2.

Mala: Funny calling that uncorrupted.

Noa: Well, simulacrum level 3 is worse! If they're successful in defending the lie, they're also confusing other people about what reasoning looks like. If people's idea of argument is based on the kind of point-scoring that happens in a competitive debate or a courtroom, and they model their thinking and discourse after this, then they're learning simulacrum level 3 thought patterns, trying to make a thing true by arguing for it, instead of using arguments to try to find out what's true.

Olga: What about breaking on the floor?

Noa: Debate societies that really laud and encourage changing your mind are beautiful but rare exceptions.

Mala: And level four?

Noa: That's when arguments are just acts: people don't bother trying to make their arguments actually persuasive to an honest evaluator; they're just performing the behavior "having arguments for your point of view" as the kind of thing that makes their side seem more respectable and impressive to someone who isn't paying attention. Harry G. Frankfurt's monograph On Bullshit is the classic treatment of this.

Mala: But isn't blatant lying still especially bad because it makes people think lies are okay, while subtle lying might not discourage honest truth-tellers as much? At least the clueless ones.

Noa: There's not actually a norm against lying. There's a norm against allowing people to notice that someone is lying. Depending on the power dynamics of the situation, the blame can fall on the liar, or on the person calling them out.

Honest truth-telling is being protected in the short run by this kind of behavior - but only by exploiting honest truth-tellers for the benefit of people at higher simulacrum levels. It's a conspiracy of silence. Hypocrisy may, as La Rochefoucauld said, be the tribute vice pays to virtue, but it's paid in a currency that's only valuable to vice - lip service and empty statements of affiliation.

We all agree that it's upsetting when it seems as though someone's lying, but sometimes for exactly opposite reasons. Some people object to the lie, but others are participating in blame games. Simulacrum level 3 players are unhappy that their fantasy that we're cooperating is being disrupted. Simulacrum level 4 players are just piling on the current target of the mob, in order not to stick out - or directing the mob at one of their enemies.

Mala: I can see why this was hard for me to understand - it sounds hopeless and awful.

Noa: I don't have any systematic solution, but the first step is always to discuss the problem.

Olga: Actually, this is immediately usable for self-defense. If you want to understand whether someone's trying to deceive you, one thing to look for is how indignant they get when their honor is questioned. If they get angry instead of curious, that's a bad sign. If they get angrier the more someone tries to explain - at least, if it's an honest explanation - that's a very bad sign.

On the other hand, if they try hard to be pinned down, to expose potential flaws in their position - if they actually change their behavior when called out, and try to reward their critics - that's golden. Though of course there are fake versions of all of these.

Mala: How do I tell the difference?

Olga: There's no special trick to it; any special trick could be faked. You just have to do it the old-fashioned way. Pay attention to what they're actually saying and doing and see if you can make sense of it. See if their stated beliefs are the best explanation for their behavior. Pay attention to your anticipations - not whether you can defend their behavior, but whether it actually seems to make you less confused about what's going on.

Noa: Or less confused about whether you're confused.

Olga: No need to go all Socrates on us.

17 comments


comment by Raemon · 2019-07-04T05:27:41.118Z · LW(p) · GW(p)

I'm planning to write up a summary of what I think this post's points are, before trying to sort out whether I agree or what followup points seem relevant.

One thing that would be somewhat helpful is clarity over whether I should expect the post to have been optimized pedagogically, or whether it's mostly just a chat transcript with mild cleanup and/or anonymization (I'm assuming the latter).

I'm not sure whether that's actually relevant or not, but there's a lot going on here and a little extra handle on the post's frame felt potentially helpful.

Replies from: Benquo
comment by Benquo · 2019-07-04T11:55:38.039Z · LW(p) · GW(p)

It wasn't an actual conversation between multiple people; this is just how it felt intuitive to try to explain the issue. When it's an actual chat transcript, I say so :)

comment by Raemon · 2019-07-09T19:53:00.396Z · LW(p) · GW(p)

One fairly strong disagreement I have with the simulacrum frame is the implication that the stages come in order. Using words like "corrupted", and saying that performing certain actions moves you from level 1 to 2 or from 2 to 3, implies (to my mind) a misleading model.

I think it's most likely that the various levels co-evolved. I could imagine level 4 coming much later than level 3, since 4 requires a bit of sophistication, but level 3 seems like it probably existed for thousands of years, at least.

I think if you're in a reasonably object-level environment where you have to do-things-on-purpose with your brain to survive and flourish, you're probably living in a mixed level 2-3 world.

(I also think blatant lying about nontrivial things, conscious or unconscious, usually just isn't effective, so it's more like you're living in a mixed level 1 and 3 world. [I'm less confident about that though]. Or at least, that's how the concepts in this particular post seem – in some other posts where Benquo illustrated level 2 in somewhat different contexts, it had a different feel to me)

I think there are certain environments and domains where level 1 wins, because it's actually just the dominating strategy.

So the framework that I look at this all through is:

  • How can we construct environments where level 1 dominates (and then you don't have to really enforce anything because the environment just causes you to do level 1 automatically)? Such an environment probably needs to have strong barriers to entry.
  • How do we negotiate a safe transition from level 4 to level 3, or level 3 to level 1, in situations where you can't construct such an environment? Such a transition actually needs to take into account that you still need to defend yourself against level 2 threats.

The metaphor that feels comparable to me is: the status quo is a zombie apocalypse or wild west town, where everyone's got guns, "everyone knows" that mostly the strong and ruthless survive, so you can't easily trust people you meet on the road, and somehow in this hostile world you need to bootstrap cooperation and civilization.

Cooperation has the benefit of typically being win/win, so it's a stable equilibrium if you can get to it, but that doesn't mean you can unilaterally switch to a new set of norms. It requires building common knowledge of a new set of norms that will actually be locally advantageous to switch to.

In the metaphor, this would involve:

  • Starting with clear protocols to costly-signal trust to each other (when meeting people on the road)
  • Having strong barriers to entry to particular towns that allow people to significantly relax and not carry weapons around all the time, and focus on things other than self defense.

I'm still working out my conceptions of what this non-metaphorically means.

comment by Raemon · 2019-07-06T23:35:13.801Z · LW(p) · GW(p)

My attempted summary:

1. Blatant Lies damage the intellectual social fabric less

Blatant Lies are, in some ways at least, less damaging than more confusing/confused lies (esp. lies that involve some mixture of motivated cognition, or which use bad reasoning to argue their point)

The reason for this is that blatant lies preserve the ability of the nearby social network to distinguish good and bad reasoning. If someone says something that is untrue, or makes a bad argument, but the untruth or bad argument isn't readily apparent, this can shift the overall discussion framework in a direction that normalizes bad reasoning.

This problem builds on itself over time. Once certain kinds of bad reasoning are normalized, they provide a framework that enables some kinds of "even worse" and eventually "not even wrong" reasoning.

2. Certain patterns of indignation are evidence about how trustworthy and cooperative a person is.

I think this quote mostly stands alone:

If you want to understand whether someone's trying to deceive you, one thing to look for is how indignant they get when their honor is questioned. If they get angry instead of curious, that's a bad sign. If they get angrier the more someone tries to explain - at least, if it's an honest explanation - that's a very bad sign.
On the other hand, if they try hard to be pinned down, to expose potential flaws in their position - if they actually change their behavior when called out, and try to reward their critics - that's golden. Though of course there are fake versions of all of these.

There are some nuances that this summary glosses over, which depend a bit on both the simulacrum framework and some other background assumptions that I wasn't sure I understood. But it looked like these are the two main points. Benquo, does that all sound right?

(I think I agree with these two points, although I also think there are important facets of the territory that the characters don't really address which changes some of the subtext).

Replies from: Raemon, Benquo
comment by Raemon · 2019-07-06T23:41:49.403Z · LW(p) · GW(p)

Notes from my initial attempt at distillation:

An interesting thing about this post is that you have three characters, each of whom has a subtly different frame and/or epistemic position, and each of whose epistemic positions seems (probably?) different from mine. The fact that there are three subtly different frames is simultaneously pretty confusing but also helps illuminate some of the underlying frame differences.

(I think "how to communicate across very different frames" is a key group rationality question. So I think this was at least a good exercise, although probably not the best pedagogical technique to use all the time)

I started by attempting to just summarize the key points of the article, then found that I had to track what each character knew or believed at every given sentence. I then switched to just separating the dialog into block quotes with a blow-by-blow account of what I inferred each person believed.

I got around 1/3 of the way through before the branching tree of beliefs got too convoluted to track, but by that point I also felt roughly oriented around the overall point that was being made. I'm including the notes that I did take for posterity.

Starting in medias res makes it a bit more confusing (I think the post would be better if it gave some context for what Mala, Noa and Olga were talking about at the beginning). But here is my attempt to fill in some of the gaps.

...

Mala: But then why do people get so indignant about blatant lies?
Noa: You mean, indignant when others call out blatant lies? I see more of that, though they often accuse the person calling out the lie of being unduly harsh.
Mala: Sure, but you can't deny - you've seen yourself - that people actually do get more indignant when they say that, than when they're pointing out a subtle pattern of motivated reasoning. How do you explain that, if "blatant lie" isn't a stronger accusation? 

[From the bolded part, I assume Noa does not believe "blatant lie is a stronger accusation", at least in some sense?]

At this moment I am unsure who is supposed to be getting indignant (the liar, the person accusing someone of lying, the person hearing the accusation, or all three – each seems plausible to me, for different reasons)

Noa: I think I see the problem. A stronger accusation can mean an accusation of greater wrongdoing, or it can mean a better-founded accusation. Blatant lying is ... well ... blatant! If someone pretends not to see that, that's terrible news about their ability or willingness to help detect deception.

From this, I infer:

  • Noa's assumption is that if someone does not see something blatant, a likely explanation (most likely?) is that they are pretending not to see it. [or is possibly defining 'pretending' differently than I would].
    • (which is, in turn, evidence that they are either unable or unwilling to help detect deception)
  • I'm still slightly confused here, but my best guess is that Noa considers an accusation of blatant lying to not (necessarily?) be an accusation of greater wrongdoing, but to be a better-founded accusation (since it should be clearer to more people that it was a lie)
Mala: But then the indignation is misplaced. Suppose Jer is talking with Horaha, trying to persuade him that their mutual acquaintance Narmer is behaving deceptively. Jer indignantly points out a blatant lie Narmer told. The proper target of the extra indignation due to blatancy is Horaha, not Narmer.
Noa: Who said otherwise?
Mala: Come on, you know that people get extra-indignant at the liar about blatant lies, despite your so far unsubstantiated claim that they are the best kind.

Ah. Now it looks like the title, "Blatant lies are the best kind!" was a statement uttered by Noa, presumably just before Mala's opening line.

Replies from: Benquo
comment by Benquo · 2019-07-07T02:21:50.913Z · LW(p) · GW(p)

Ah, sorry, I thought that inference would be obvious by the time the reader started the second line of dialogue. Thanks for letting me know it wasn't! I feel stuck between repeating the line with Noa's name attached (which feels clunky to me), using a worse title, and the current situation.

Replies from: Pattern, Raemon, Raemon
comment by Pattern · 2019-07-11T02:51:51.100Z · LW(p) · GW(p)

You could have someone respond to the statement, and include the name of the person they're addressing.

comment by Raemon · 2019-07-07T02:34:39.675Z · LW(p) · GW(p)

Nod. A possible solution (slightly clunky but I think the sacrifice of poetry is well worth the clarity) is to begin with 1-2 sentences of scene-setting:

"A fictional dialog: Noa, Olga and Mala are discussion [social games and lying], when Noa makes the claim: 'Blatant lies are the best kind.'"

An issue I run into with dialogs is keeping track of which character is saying what, especially when I don't have a strong sense of who they are.

I ran into this when *I* was recently constructing an anonymized (nonfictional) dialog. Someone suggested naming the characters after Game of Thrones characters who represented the sort of viewpoints they were expressing. That still felt too confusing. I later tried naming them "Frustratio" (whose main characteristic was that he was frustrated) and "Mistakio" (whose main characteristic was that Frustratio thought Mistakio had made a mistake).

This wouldn't work here, since the characters don't especially have different main characteristics, just slightly different beliefs. But the status quo was a bit hard to follow.

comment by Raemon · 2019-07-07T02:36:09.295Z · LW(p) · GW(p)

I'm curious if it's meant to be ambiguous who is indignant about what? I had to read it several times to figure that out (and then I didn't write it down, and forgot it)

Replies from: Benquo
comment by Benquo · 2019-07-07T03:55:02.229Z · LW(p) · GW(p)

No, I meant it to be straightforward. Oops!

comment by Benquo · 2019-07-07T02:29:59.791Z · LW(p) · GW(p)

Those feel like important surface-level points, though I'd phrase the second one a bit differently. But the underlying models used to generate those claims are more of what I wanted to get across. Here are a couple pointers to the kinds of things I think are core content I was trying to work through an example of:

  • A clearer idea of how the different kinds of simulacrum relate to each other, and how some bring others into existence.
  • The interaction between speech's denotative meaning, direct effects as an act, and indirect effects as a way of negotiating norms. (E.g. The way we argue for a point isn't just about whether a claim is true or false, but also about how reasoning works. Expressing anger that someone's violated a norm isn't just a statement about the act, but about the norm, and about the knowability of the relation between the two.)
  • There are kinds of motivated distortions of thinking that are bad, not because there is or might be a direct victim of harm, but because they change what we're doing when we're talking, in a way that makes some kinds of important coordination much harder.
Replies from: Raemon
comment by Raemon · 2019-07-07T07:47:13.589Z · LW(p) · GW(p)

Nod. I think I had (mostly) successfully heard those points, although not so cleanly that I could have described them easily.

Communicating underlying models is hard, and I appreciate the techniques employed here to aim at getting that across.

comment by Dagon · 2019-07-04T21:45:39.118Z · LW(p) · GW(p)

This varies so much by topic and group that it's hard to follow exactly what situations the discussion applies to. "There is a norm for..." is a very difficult proposition to evaluate - norms are complicated and situational, and don't exist outside of relationships among humans.

As a summary of a situation, "there is a norm against pointing out lies" is probably useful in some contexts, but it's not specific enough to really predict anything or argue against. Which lies are punished, by whom, and in what contexts is necessary for the statement to have any meaning.

comment by Dagon · 2019-07-07T04:03:56.872Z · LW(p) · GW(p)

Do any of the participants make the claim that lying of any sort is preferable to truth-telling? The conditions that cause this preference are likely also very important for which kind of lies are best.

comment by Raemon · 2019-07-07T02:41:22.037Z · LW(p) · GW(p)
Mala: But isn't blatant lying still especially bad because it makes people think lies are okay, while subtle lying might not discourage honest truth-tellers as much? At least the clueless ones.
Noa: There's not actually a norm against lying. There's a norm against allowing people to notice that someone is lying. Depending on the power dynamics of the situation, the blame can fall on the liar, or on the person calling them out.
Honest truth-telling is being protected in the short run by this kind of behavior - but only by exploiting honest truth-tellers for the benefit of people at higher simulacrum levels. It's a conspiracy of silence. Hypocrisy may, as La Rochefoucauld said, be the tribute vice pays to virtue, but it's paid in a currency that's only valuable to vice - lip service and empty statements of affiliation.
We all agree that it's upsetting when it seems as though someone's lying, but sometimes for exactly opposite reasons. Some people object to the lie, but others are participating in blame games. Simulacrum level 3 players are unhappy that their fantasy that we're cooperating is being disrupted. Simulacrum level 4 players are just piling on the current target of the mob, in order not to stick out - or directing the mob at one of their enemies.

I think the last two paragraphs here were the ones I had the hardest time parsing (including the prior two paragraphs to keep it easier to orient around their context)

What is "this kind of behavior?" Blatant lying? Disproportionate punishing of blatant lying over subtle lying?

Replies from: Benquo
comment by Benquo · 2019-07-07T03:57:19.398Z · LW(p) · GW(p)

"this kind of behavior" = the blame machinery that gets activated when lying is mentioned, i.e. "Depending on the power dynamics of the situation, the blame can fall on the liar, or on the person calling them out."

comment by Itsnotme · 2019-10-04T09:30:17.458Z · LW(p) · GW(p)

Regarding the punishment of different kinds of lying: I imagine that punishment is a useful tool if the threat of punishment can prevent people from doing the punishable thing. Since level 3 lying is usually not consciously done (after all, the liar has convinced himself of the lie), it is not easily preventable; it doesn't respond well to punishment. Blatant lying can be prevented by the liar if he wants to, so it responds readily to punishment, and that's why we use indignation and punishment against it as a preventive tool - since it actually works.