Maybe Lying Doesn't Exist
post by Zack_M_Davis · 2019-10-14T07:04:10.032Z · LW · GW · 58 comments
In "Against Lie Inflation", the immortal Scott Alexander argues that the word "lie" should be reserved for knowingly-made false statements, and not used in an expanded sense that includes unconscious motivated reasoning. Alexander argues that the expanded sense draws the category boundaries of "lying" too widely in a way that would make the word less useful. The hypothesis that predicts everything predicts nothing: in order for "Kevin lied" to mean something, some possible states-of-affairs need to be identified as not lying, so that the statement "Kevin lied" can correspond to redistributing conserved probability mass away from "not lying" states-of-affairs onto "lying" states-of-affairs.
All of this is entirely correct. But Jessica Taylor (whose post "The AI Timelines Scam" inspired "Against Lie Inflation") wasn't arguing that everything is lying; she was just using a more permissive conception of lying than the one Alexander prefers, such that Alexander didn't think that Taylor's definition could stably and consistently identify non-lies.
Concerning Alexander's arguments against the expanded definition, I find I have one strong objection (that appeal-to-consequences is an invalid form of reasoning for optimal-categorization questions for essentially the same reason as it is for questions of simple fact), and one more speculative objection (that our intuitive "folk theory" of lying may actually be empirically mistaken). Let me explain.
(A small clarification: for myself, I notice that I also tend to frown on the expanded sense of "lying". But the reasons for frowning matter! People who superficially agree on a conclusion but for different reasons, are not really on the same page [LW · GW]!)
Appeals to Consequences Are Invalid
There is no method of reasoning more common, and yet none more blamable, than, in philosophical disputes, to endeavor the refutation of any hypothesis, by a pretense of its dangerous consequences[.]
—David Hume
Alexander contrasts the imagined consequences of the expanded definition of "lying" becoming more widely accepted, to a world that uses the restricted definition:
[E]veryone is much angrier. In the restricted-definition world, a few people write posts suggesting that there may be biases affecting the situation. In the expanded-definition world, those same people write posts accusing the other side of being liars perpetrating a fraud. I am willing to listen to people suggesting I might be biased, but if someone calls me a liar I'm going to be pretty angry and go into defensive mode. I'll be less likely to hear them out and adjust my beliefs, and more likely to try to attack them.
But this is an appeal to consequences. Appeals to consequences [LW · GW] are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).
(Again, the appeal is still invalid even if the conclusion—in this case, that unconscious rationalization shouldn't count as "lying"—might be true for other reasons.)
Some aspiring epistemic rationalists like to call this the "Litany of Tarski". If Elijah is lying (with respect to whatever the optimal category boundary [LW · GW] for "lying" turns out to be according to our standard Bayesian philosophy of language [LW · GW]), then I desire to believe that Elijah is lying (with respect to the optimal category boundary according to ... &c.). If Elijah is not lying (with respect to ... &c.), then I desire to believe that Elijah is not lying.
If the one comes to me and says, "Elijah is not lying; to support this claim, I offer this-and-such evidence of his sincerity," then this is right and proper, and I am eager to examine the evidence presented.
If the one comes to me and says, "You should choose to define lying such that Elijah is not lying, because if you said that he was lying, then he might feel angry and defensive," this is insane. The map is not the territory! If Elijah's behavior is, in fact, deceptive—if he says things that cause people who trust him to be worse at anticipating their experiences [LW · GW] when he reasonably could [LW · GW] have avoided this—I can't make his behavior not-deceptive by changing the meanings of words.
Now, I agree that it might very well empirically be the case that if I say that Elijah is lying (where Elijah can hear me), he might get angry and defensive, which could have a variety of negative social consequences. But that's not an argument for changing the definition of lying; that's an argument that I have an incentive to lie about whether I think Elijah is lying! (Though Glomarizing [LW · GW] about whether I think he's lying might be an even better play.)
Alexander is concerned that people might strategically equivocate between different definitions of "lying" as an unjust social attack against the innocent, using the classic motte-and-bailey maneuver: first, argue that someone is "lying (expanded definition)" (the motte), then switch to treating them as if they were guilty of "lying (restricted definition)" (the bailey) and hope no one notices.
So, I agree that this is a very real problem [LW · GW]. But it's worth noting that the problem of equivocation between different category boundaries associated with the same word [LW · GW] applies symmetrically: if it's possible to use an expanded definition of a socially-disapproved category as the motte and a restricted definition as the bailey in an unjust attack against the innocent, then it's also possible to use an expanded definition as the bailey and a restricted definition as the motte in an unjust defense of the guilty. Alexander writes:
The whole reason that rebranding lesser sins as "lying" is tempting is because everyone knows "lying" refers to something very bad.
Right—and conversely, because everyone knows that "lying" refers to something very bad, it's tempting to rebrand lies as lesser sins. Ruby Bloom explains what this looks like in the wild [LW(p) · GW(p)]:
I worked in a workplace where lying was commonplace, conscious, and system 2. Clients asking if we could do something were told "yes, we've already got that feature (we hadn't) and we already have several clients successfully using that (we hadn't)." Others were invited to be part of an "existing beta program" alongside others just like them (in fact, they would have been the very first). When I objected, I was told "no one wants to be the first, so you have to say that."
[...] I think they lie to themselves that they're not lying (so that if you search their thoughts, they never think "I'm lying")[.]
If your interest in the philosophy of language is primarily to avoid being blamed for things—perhaps because you perceive that you live in a Hobbesian dystopia [LW(p) · GW(p)] where the primary function of words is to elicit actions, where the denotative structure [LW · GW] of language was eroded by political processes [LW · GW] long ago, and all that's left is a standardized list of approved attacks [LW · GW]—in that case, it makes perfect sense to worry about "lie inflation" but not about "lie deflation." If describing something as "lying" is primarily a weapon, then applying extra scrutiny to uses of that weapon is a wise arms-restriction treaty.
But if your interest in the philosophy of language is to improve and refine the uniquely human power of vibratory telepathy [LW · GW]—to construct shared maps that reflect the territory—if you're interested in revealing what kinds of deception are actually happening, and why—
(in short, if you are an aspiring epistemic rationalist)
—then the asymmetrical fear of false-positive identifications of "lying" but not false-negatives—along with the focus on "bad actors", "stigmatization", "attacks", &c.—just looks weird. What does that have to do with maximizing the probability you assign to the right answer??
The Optimal Categorization Depends on the Actual Psychology of Deception
Deception
My life seems like it's nothing but
Deception
A big charade
I never meant to lie to you
I swear it
I never meant to play those games
—"Deception" by Jem and the Holograms
Even if the fear of rhetorical warfare isn't a legitimate reason to avoid calling things lies (at least privately), we're still left with the main objection that "lying" is a different thing from "rationalizing" or "being biased". Everyone is biased in some way or another, but to lie is "[t]o give false information intentionally with intent to deceive." Sometimes it might make sense to use the word "lie" in a noncentral [LW · GW] sense, as when we speak of "lying to oneself" or say "Oops, I lied" in reaction to being corrected. But it's important that these senses be explicitly acknowledged as noncentral and not conflated with the central case of knowingly speaking falsehood with intent to deceive—as Alexander says, conflating the two can only be to the benefit of actual liars.
Why would anyone disagree with this obvious ordinary view, if they weren't trying to get away with the sneaky motte-and-bailey social attack that Alexander is so worried about?
Perhaps because the ordinary view relies on an implied theory of human psychology that we have reason to believe is false? What if conscious intent to deceive is typically absent in the most common cases of people saying things that (they would be capable of realizing upon being pressed) they know not to be true? Alexander writes—
So how will people decide where to draw the line [if egregious motivated reasoning can count as "lying"]? My guess is: in a place drawn by bias and motivated reasoning, same way they decide everything else. The outgroup will be lying liars, and the ingroup will be decent people with ordinary human failings.
But if the word "lying" is to actually mean something rather than just being a weapon, then the ingroup and the outgroup can't both be right. If symmetry considerations [LW · GW] make us doubt that one group is really that much more honest than the other, that would seem to imply that either both groups are composed of decent people with ordinary human failings, or that both groups are composed of lying liars. The first description certainly sounds nicer, but as aspiring epistemic rationalists, we're not allowed to care about which descriptions sound nice; we're only allowed to care about which descriptions match reality.
And if all of the concepts available to us in our native language fail to match reality in different ways, then we have a tough problem that may require us to innovate.
The philosopher Roderick T. Long writes—
Suppose I were to invent a new word, "zaxlebax," and define it as "a metallic sphere, like the Washington Monument." That's the definition—"a metallic sphere, like the Washington Monument." In short, I build my ill-chosen example into the definition. Now some linguistic subgroup might start using the term "zaxlebax" as though it just meant "metallic sphere," or as though it just meant "something of the same kind as the Washington Monument." And that's fine. But my definition incorporates both, and thus conceals the false assumption that the Washington Monument is a metallic sphere; any attempt to use the term "zaxlebax," meaning what I mean by it, involves the user in this false assumption.
If self-deception is as ubiquitous in human life as authors such as Robin Hanson argue (and if you're reading this blog, this should not be a new idea to you!), then the ordinary concept of "lying" may actually be analogous to Long's "zaxlebax": the standard intensional definition [LW · GW] ("speaking falsehood with conscious intent to deceive"/"a metallic sphere") fails to match the most common extensional examples that we want to use the word for ("people motivatedly saying convenient things without bothering to check whether they're true"/"the Washington Monument").
Arguing for this empirical thesis about human psychology is beyond the scope of this post. But if we live in a sufficiently Hansonian world where the ordinary meaning of "lying" fails to carve reality at the joints, then authors are faced with a tough choice: either be involved in the false assumptions of the standard believed-to-be-central intensional definition, or be deprived of the use of common expressive vocabulary [LW · GW]. As Ben Hoffman points out in the comments to "Against Lie Inflation", an earlier Scott Alexander didn't seem shy about calling people liars in his classic 2014 post "In Favor of Niceness, Community, and Civilization"—
Politicians lie, but not too much. Take the top story on Politifact Fact Check today. Some Republican claimed his supposedly-maverick Democratic opponent actually voted with Obama's economic policies 97 percent of the time. Fact Check explains that the statistic used was actually for all votes, not just economic votes, and that members of Congress typically have to have >90% agreement with their president because of the way partisan politics work. So it's a lie, and is properly listed as one. [bolding mine —ZMD] But it's a lie based on slightly misinterpreting a real statistic. He didn't just totally make up a number. He didn't even just make up something else, like "My opponent personally helped design most of Obama's legislation".
Was the politician consciously lying? Or did he (or his staffer) arrive at the misinterpretation via unconscious motivated reasoning and then just not bother to scrupulously check whether the interpretation was true? And how could Alexander know?
Given my current beliefs about the psychology of deception, I find myself inclined to reach for words like "motivated", "misleading", "distorted", &c., and am more likely to frown at uses of "lie", "fraud", "scam", &c. where intent is hard to establish. But even while frowning internally, I want to avoid tone-policing [LW · GW] people whose word-choice procedures are calibrated differently from mine when I think I understand the structure-in-the-world they're trying to point to. Insisting on replacing the six instances of the phrase "malicious lies" in "Niceness, Community, and Civilization" with "maliciously-motivated false belief" would just be worse writing.
And I definitely don't want to excuse motivated reasoning as a mere ordinary human failing for which someone can't be blamed! One of the key features that distinguishes motivated reasoning from simple mistakes is the way that the former responds to incentives (such as being blamed). If the elephant in your brain thinks it can get away with lying just by keeping conscious-you in the dark, it should think again!
58 comments
comment by Connor_Flexman · 2019-10-14T17:26:46.741Z · LW(p) · GW(p)
The folk theory of lying is a tiny bit wrong and I agree it should be patched. I definitely do not agree we should throw it out, or be uncertain whether lying exists.
Lying clearly exists.
1. Oftentimes people consider how best to lie about e.g. them being late. When they settle on the lie of telling their boss they were talking to their other boss when they weren't, and they know this is a lie, that's a central case of a lie—definitely not motivated cognition.
To expand our extensional definition to noncentral cases, you can consider some other ways people might tell maybe-lies when they are late. Among others, I have had the experiences [edit: grammar] of
2. telling someone I would be there in 10 minutes when it was going to take 20, and if you asked me on the side with no consequences I would immediately have been able to tell you that it was 20 even though in the moment I certainly hadn't conceived myself as lying, and I think people would agree with me this is a lie (albeit white)
3. telling someone I would be there in 10 minutes when it was going to take 20, and if you asked me on the side with no consequences I would have still said 10, because my model definitely said 10, and once I started looking into my model I would notice that probably I was missing some contingencies, and that maybe I had been motivated at certain spots when forming my model, and I would start calculating... and I think most people would agree with me this is not a lie
4. telling someone I would be there in 10 minutes when it was going to take 20, and my model was formed epistemically virtuously despite there obviously being good reasons for expecting shorter timescales, and who knows how long it would take me to find enough nuances to fix it and say 20. This is not a lie.
Ruby's example of the workplace fits somewhere between numbers 1 and 2. Jessica's example of short AI timelines I think is intended to fit 3 (although I think the situation is actually 4 for most people). The example of the political fact-checking doesn't fit cleanly because politically we're typically allowed to call anything wrong a "lie" regardless of intent, but I think it's somewhere between 2 and 3, and I think nonpartisan people would agree that, unless the perpetrators actually could have said they were wrong about the stat, the case was not actually a lie (just a different type of bad falsehood reflecting on the character of those involved). There are certainly many gradations here, but I just wanted to show that there is actually a relatively commonly accepted implicit theory about when things are lies that fits with the territory and isn't some sort of politicking map distortion as it seemed you were implying.
The intensional definition you found that included "conscious intent to deceive" is not actually the implicit folk theory most people operate under: they include number 2's "unconscious intent to deceive" or "in-the-moment should-have-been-very-easy-to-tell-you-were-wrong obvious-motivated-cognition-cover-up". I agree the explicit folk theory should be modified, though.
I also want to point out that this pattern of explicit vs implicit folk theories applies well to lots of other things. Consider "identity"—the explicit folk theory probably says something about souls or a real cohesive "I", but the implicit version often uses distancing or phrases like "that wasn't me" [edit: in the context of it being unlike their normal self, not that someone else literally did it] and things such that people clearly sort of know what's going on. Other examples include theory of action, "I can't do it", various things around relationships, what is real as opposed to postmodernism, etc etc. To not cherry-pick, there are some difficult cases to consider like "speak your truth" or the problem of evil, but under nuanced consideration these fit with the dynamic of the others. I just mention this generalization because LW types (incl me) learned to tear apart all the folk theories because their explicit versions were horribly contradictory, and while this has been very powerful for us I feel like an equally powerful skill is figuring out how to put Humpty-Dumpty back together again.
comment by Scott Alexander (Yvain) · 2019-12-22T01:38:26.408Z · LW(p) · GW(p)
Sorry it's taken this long for me to reply to this.
"Appeal to consequences" is only a fallacy in reasoning about factual states of the world. In most cases, appealing to consequences is the right action.
For example, if you want to build a house on a cliff, and I say "you shouldn't do that, it might fall down", that's an appeal to consequences, but it's completely valid.
Or to give another example, suppose we are designing a programming language. You recommend, for whatever excellent logical reason, that all lines must end with a semicolon. I argue that many people will forget semicolons, and then their program will crash. Again, appeal to consequences, but again it's completely valid.
I think of language, following Eliezer's definitions sequence, as being a human-made project intended to help people understand each other. It draws on the structure of reality, but has many free variables, so that the structure of reality doesn't constrain it completely. This forces us to make decisions, and since these are not about factual states of the world (eg what the definition of "lie" REALLY is, in God's dictionary) we have nothing to make those decisions on except consequences. If a certain definition will result in lots of people misunderstanding each other, bad people having an easier time confusing others, good communication failing to occur, or other bad things, then it's fine to decide against it based on those grounds, just as you can decide against a programming language decision on the grounds that it will make programs written in it more likely to crash, or require more memory, etc.
I am not sure I get your point about the symmetry of strategic equivocation. I feel like this equivocation relies on using a definition contrary to its common connotations. So if I were allowed to redefine "murderer" to mean "someone who drinks Coke", then I could equivocate between "Alice is a murderer (based on the definition where she drinks Coke)" and "Murderers should be punished (based on the definition where they kill people)" and combine them to get "Alice should be punished". The problem isn't that you can equivocate between any two definitions; the problem is very specifically when we use a definition counter to the way most people traditionally use it. I think (do you disagree?) that most people interpret "liar" to mean an intentional liar. As such, I'm not sure I understand the relevance of the example of Ruby's coworkers.
I think you're making too hard a divide between the "Hobbesian dystopia" where people misuse language, versus a hypothetical utopia of good actors. I think of misusing language as a difficult thing to avoid, something all of us (including rationalists, and even including me) will probably do by accident pretty often. As you point out regarding deception, many people who equivocate aren't doing so deliberately. Even in a great community of people who try to use language well, these problems are going to come up. And so just as in the programming language example, I would like to have a language that fails gracefully and doesn't cause a disaster when a mistake gets made, one that works with my fallibility rather than naturally leading to disaster when anyone gets something wrong.
And I think I object-level disagree with you about the psychology of deception. I'm interpreting you (maybe unfairly, but then I can't figure out what the fair interpretation is) as saying that people very rarely lie intentionally, or that this rarely matters. This seems wrong to me - for example, guilty criminals who say they're innocent seem to be lying, and there seem to be lots of these, and it's a pretty socially important thing. I try pretty hard not to intentionally lie, but I can think of one time I failed (I'm not claiming I've only ever lied once in my life, just that this time comes to mind as something I remember and am particularly ashamed about). And even if lying never happened, I still think it would be worth having the word for it, the same way we have a word for "God" that atheists don't just repurpose to mean "whoever the most powerful actor in their local environment is."
Stepping back, we have two short words ("lie" and "not a lie") to describe three states of the world (intentional deception, unintentional deception, complete honesty). I'm proposing to group these (1)(2,3) mostly on the grounds that this is how the average person uses the terms, and if we depart from how the average person uses the terms, we're inviting a lot of confusion, both in terms of honest misunderstandings and malicious deliberate equivocation. I understand Jessica wants to group them (1,2)(3), but I still don't feel like I really understand her reasoning except that she thinks unintentional deception is very bad. I agree it is very bad, but we already have the word "bias" and are so in agreement about its badness that we have a whole blog and community about overcoming it.
↑ comment by Zack_M_Davis · 2019-12-25T06:14:43.198Z · LW(p) · GW(p)
Like, I know my sibling comment is hugely inappropriately socially aggressive of me, and I don't want to hurt your feelings any more than is necessary to incentivize you to process information, but we've been at this for a year! "This definition will make people angry" is not one of the 37 Ways Words Can Be Wrong [LW · GW].
↑ comment by Zack_M_Davis · 2019-12-25T06:28:42.350Z · LW(p) · GW(p)
Like, sibling comments are very not-nice, but I argue that they meet the Slate Star commenting policy guidelines on account of being both true and necessary.
↑ comment by jessicata (jessica.liu.taylor) · 2019-12-22T02:06:35.623Z · LW(p) · GW(p)
For the record, my opinion is essentially the same as the one expressed in "Bad intent is a disposition, not a feeling" [LW · GW], which gives more detail on the difference between consciousness of deception and intentionality of deception. (Subconscious intentions exist, so intentional lies include subconsciously intended ones; I don't believe things that have no intentionality/optimization can lie)
"Normal people think you can't lie unawarely" seems inconsistent with, among other things, this article.
Note also, you yourself are reaching for the language of strategic equivocation, which implies intent; but, how could you know the conscious intents of those you believe are equivocating? If you don't, then you probably already have a sense that intent can be subconscious, which if applied uniformly, implies lies can be subconscious.
↑ comment by Scott Alexander (Yvain) · 2019-12-25T19:11:48.645Z · LW(p) · GW(p)
I say "strategic" because it is serving that strategic purpose in a debate, not as a statement of intent. This use is similar to discussion of, eg, an evolutionary strategy of short life histories, which doesn't imply the short-life history creature understands or intends anything it's doing.
It sounds like normal usage might be our crux. Would you agree with this? IE that if most people in most situations would interpret my definition as normal usage and yours as a redefinition project, we should use mine, and vice versa for yours?
↑ comment by jessicata (jessica.liu.taylor) · 2019-12-25T20:18:31.095Z · LW(p) · GW(p)
I don't think it's the crux, no. I don't accept ordinary language philosophy, which canonizes popular confusions. There are some contexts where using ordinary language is important, such as when writing popular news articles, but that isn't all of the contexts.
↑ comment by Zack_M_Davis · 2019-12-25T05:41:39.273Z · LW(p) · GW(p)
but has many free variables, so that the structure of reality doesn't constrain it completely. This forces us to make decisions, and since these are not about factual states of the world (eg what the definition of "lie" REALLY is, in God's dictionary) we have nothing to make those decisions on except consequences
Scott, I appreciate the appearance of effort, but I'm afraid I just can't muster the willpower to engage if you're going to motivatedly play dumb like this. (I have a memoir that I need to be writing instead.) You know goddamned well I'm not appealing to God's dictionary. I addressed this shit in "Where to Draw the Boundaries?" [LW · GW]. I worked really really hard on that post. My prereaders got it. Said got it. [LW(p) · GW(p)] 82 karma points says the audience got it. If the elephant in your brain thinks it can get away with stringing me along like this when I have the math and you don't, it should think again.
In the incredibly unlikely event that you're actually this dumb, I'll try to include some more explanations in my forthcoming memoir (working title: "'I Tell Myself to Let the Story End'; Or, A Hill of Validity in Defense of Meaning; Or, The Story About That Time Everyone I Used to Trust [LW · GW] Insisted on Playing Dumb About the Philosophy of Language in a Way That Was Optimized for Confusing Me Into Cutting My Dick Off (Independently of the Empirical Facts Determining Whether or Not Cutting My Dick Off Is a Good Idea) and Wouldn't Even Cut It Out Even After I Spent Five Months and Thousands of Words Trying to Explain the Mistake in Exhaustive Detail Including Dozens of Links to Their Own Writing; Or, We Had an Entire Sequence About This [LW · GW], You Lying Motherfuckers").
↑ comment by Scott Alexander (Yvain) · 2019-12-25T19:19:02.651Z · LW(p) · GW(p)
EDIT: Want to talk to you further before I try to explain my understanding of your previous work on this, will rewrite this later.
The short version is I understand we disagree, I understand you have a sophisticated position, but I can't figure out where we start differing and so I don't know what to do other than vomit out my entire philosophy of language and hope that you're able to point to the part you don't like. I understand that may be condescending to you and I'm sorry.
I absolutely deny I am "motivatedly playing dumb" and I enter this into the record as further evidence that we shouldn't redefine language to encode a claim that we are good at ferreting out other people's secret motivations.
↑ comment by Zack_M_Davis · 2019-12-26T05:50:01.140Z · LW(p) · GW(p)
(Scott and I had a good conversation today. I think I need to write a followup post (working title: "Instrumental Categories, Wireheading, and War") explaining in more detail exactly what distinction I'm making when I say I want to consider some kinds of appeals-to-consequences invalid while still allowing, e.g. "Requiring semicolons in your programming language will have the consequence of being less convenient for users who forget them." The paragraphs in "Where to Draw the Boundaries?" [LW · GW] starting with "There is an important difference [...]" are gesturing at the distinction, but perhaps not elaborating enough for readers who don't already consider it "obvious.")
comment by jessicata (jessica.liu.taylor) · 2019-10-15T05:26:28.147Z · LW(p) · GW(p)
When conscious intent is selectively discouraged more than unconscious intent, the result is rule by unconscious intent. Those who can conveniently forget, who can maintain narcissistic fantasies, who can avoid introspection, who can be ruled by emotions with hidden causes, will be the only ones able to deceive (or otherwise to violate norms) blamelessly.
Only a subset of lies may be detected by any given justice process, but "conscious/unconscious" does not correspond to the boundary of such a subset. In fact, due to the flexibility and mystery of mental architecture, such a split is incredibly hard to pin down by any precise theory.
"Your honor, I know I told the customer that the chemical I sold to them would cure their disease, and it didn't, and I had enough information to know that, but you see, I wasn't conscious that it wouldn't cure their disease, as I was selling it to them, so it isn't really fraud" would not fly in any court that is even seriously pretending to be executing justice.
↑ comment by Wei Dai (Wei_Dai) · 2019-10-16T05:40:49.646Z · LW(p) · GW(p)
When conscious intent is selectively discouraged more than unconscious intent, the result is rule by unconscious intent. Those who can conveniently forget, who can maintain narcissistic fantasies, who can avoid introspection, who can be ruled by emotions with hidden causes, will be the only ones able to deceive (or otherwise to violate norms) blamelessly.
Conscious intent being selectively discouraged more than unconscious intent does not logically imply that unconscious intent to deceive will be blameless or "free from or not deserving blame", only that it will be blamed less.
(I think you may have an unconscious motivation to commit this logical error in order to further your side of the argument. Normally I wouldn't say this out loud, or in public, but you seem to be proposing a norm where people do state such beliefs freely. Is that right? And do you think this instance also falls under "lying"?)
I think conscious intent being selectively discouraged more than unconscious intent can make sense for several reasons:
1. Someone deceiving with conscious intent can apply more compute / intelligence and other resources for optimizing and maintaining the lie, which means the deception can be much bigger and more consequential, thereby causing greater damage to others.
2. Deceiving with conscious intent implies that the person endorses lying in that situation, which means you probably need to do something substantially different to dissuade that person from lying in a similar situation in the future, compared to someone deceiving with unconscious intent. In the latter case, it might suffice to diplomatically (e.g., privately) bring up the issue to that person's conscious awareness, so they can consciously override their unconscious motivations.
3. Conscious lies tend to be harder to detect (due to more optimizing power applied towards creating the appearance of truth). Economics research into optimal punishment suggests that (all else equal) crimes that are harder to detect should be punished more.
4. Unconscious deception is hard to distinguish from innocent mistakes. If you try to punish what you think are cases of unconscious deception, you'll end up making a lot of people feel like they were punished unfairly, either because they're truly innocent, or because they're not consciously aware of any deceptive intent and therefore think they're innocent. You inevitably make a lot of enemies, either for yourself personally or for the norm you're proposing.
(There are some issues in the way I stated points 1-4 above that I can see but don't feel like spending more time to fix. I would rather spend my time on other topics but nobody is bringing up these points so I feel like I have to, given how much the parent comment has been upvoted.)
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-16T06:59:02.091Z · LW(p) · GW(p)
Conscious intent being selectively discouraged more than unconscious intent does not logically imply that unconscious intent to deceive will be blameless or “free from or not deserving blame”, only that it will be blamed less.
Yes, I was speaking imprecisely. A better phrasing is "when only conscious intent is blamed, ..."
you seem to be proposing a norm where people do state such beliefs freely. Is that right?
Yes. (I think your opinion is correct in this case)
And do you think this instance also falls under “lying”?
It would fall under hyperbole. I think some but not all hyperboles are lies, and I weakly think this one was.
Regarding the 4 points:
- I think 1 is true
- 2 is generally false (people dissuaded from unconsciously lying once will almost always keep unconsciously lying; not lying to yourself is hard and takes work; and someone who's consciously lying can also stop lying when called out privately if that's more convenient)
- 3 is generally false: people who are consciously lying will often subconsciously give signals that they are lying that others can pick up on (e.g. seeming nervous, taking longer to answer questions), compared to people who subconsciously lie, who usually feel safer, as there is an internal blameless narrative being written constantly.
- 4 is irrelevant due to the point about conscious/unconscious not being a boundary that can be pinned down by a justice process; if you're considering this you should mainly think about what the justice process is able to pin down rather than the conscious/unconscious split.
In general I worry more about irrational adversariality than rational adversariality, and I especially worry about pressures towards making people have lower integrity of mind (e.g. pressures to destroy one's own world-representation). I think someone who worries more about rational adversariality could more reasonably worry more about conscious lying than unconscious lying. (Still, that doesn't tell them what to do about it; telling people "don't consciously lie" doesn't work, since some people will choose not to follow that advice; so a justice procedure is still necessary, and will have issues with pinning down the conscious/unconscious split)
↑ comment by Eli Tyre (elityre) · 2019-11-28T02:51:43.985Z · LW(p) · GW(p)
> I think you may have an unconscious motivation to commit this logical error in order to further your side of the argument. Normally I wouldn't say this out loud, or in public, but you seem to be proposing a norm where people do state such beliefs freely. Is that right?
Yes. (I think your opinion is correct in this case)
Wow. Thanks for saying so explicitly, I wouldn't have guessed that, and am surprised. How do you imagine that it plays out, or how it properly ought to play out when someone makes an accusation / insinuation of another person like this?
↑ comment by jessicata (jessica.liu.taylor) · 2019-11-28T03:11:21.712Z · LW(p) · GW(p)
Treat it as a thing that might or might not be true, like other things? Sometimes it's hard to tell whether it's true, and in those cases it's useful to be able to say something like "well, maybe, can't know for sure".
↑ comment by Eli Tyre (elityre) · 2019-11-28T04:03:21.745Z · LW(p) · GW(p)
I'm trying to understand why this norm seems so crazy to me...
I definitely do something very much like this with people that I'm close with, in private. I have once been in a heated multi-person conversation, and politely excused myself and a friend, to step into another room. In that context, I then looked the friend in the eye, and said "it seems to me that you're rationalizing [based on x evidence]. Are you sure you really believe what you're saying here?"
And friends have sometimes helped me in similar ways, "the things that you're saying don't quite add up..."
(Things like this happen more often these days, now that rationalists have imported more Circling norms of sharing feelings and stories. Notably these norms include a big helping of NVC norms: owning your experience as your own, and keeping interpretation separate from observation.)
All things considered, I think this is a pretty radical move. But it seems like it depends a lot on the personal trust between me and the other person. I would feel much less comfortable with that kind of interaction with a random stranger, or in a public space.
Why?
- Well for one thing, if I'm having a fight with someone, having someone else question my motivations can cause me to lose ground in the fight. It can be an aggressive move, used to undercut the arguments that one is trying to make.
- For another, engaging with a person's psychological guts like that is intimate, and vulnerable. I am much less likely to be defensive if I trust that the other person is sincerely looking out for my best interests.
- I guess I feel like it's basically not any of your business what's happening in my mind. If you have an issue with my arguments, you can attack those, those are public. And you are, of course, free to have your own private opinion about my biases, but only the actual mistakes in reasoning that I make are in the common domain for you to correct.
- In general, it seems like a bad norm to have "psychological" evidence be admissible in discourse, because it biases the disagreements towards whoever is more charismatic / has more rhetorical skill in pointing out biases, as opposed to the person who is more correct.
- The Arbital page on Psychoanalyzing is very relevant.
- Also, it just doesn't seem like it helps very much. "I have a hypothesis that you're rationalizing." The other party is like, "Ok. Well, I think my position is correct." and then they go back to the object level (maybe with one of them more defensive). I can't know what's happening in your head, so I can't really call you out on what's happening there, or enforce norms there. [I would want to think about it more, but I think that might be a crux for me.]
. . .
Now I'm putting those feelings next to my sense of what we should do when one has someone like Gleb Tsipursky in the mix.
I think all of the above still stands. It is inappropriate for me to attack him at the level of his psychology, as opposed to pointing to specific bad-actions (including borderline actions), and telling him to stop, and if that fails, telling him that he is no longer welcome here.
This was mostly for my own thinking, but I'd be glad to hear what you think, Jessica.
↑ comment by jessicata (jessica.liu.taylor) · 2019-11-28T04:57:28.873Z · LW(p) · GW(p)
The concept of "not an argument" seems useful; "you're rationalizing" isn't an argument (unless it has evidence accompanying it). (This handles point 1)
I don't really believe in tabooing discussion of mental states on the basis that they're private; that seems like being intentionally stupid and blind, and puts a (low) ceiling on how much sense can be made of the world. (Truth is entangled! [LW · GW]) Of course it can derail discussions but again, "not an argument". (Eliezer's post says it's "dangerous" without elaborating; that's basically giving a command rather than a model, which I'm suspicious of)
There's a legitimate concern about blame/scapegoating but things can be worded to avoid that. (I think Wei did a good job here, noting that the intention is probably subconscious)
With someone like Gleb it's useful to be able to point out to at least some people (possibly including him) that he's doing stupid/harmful actions repeatedly in a pattern that suggests optimization. So people can build a model of what's going on (which HAS to include mental states, since they're a causally very important part of the universe!) and take appropriate action. If you can't talk about adversarial optimization pressures you're probably owned by them (and being owned by them would lead to not feeling safe talking about them).
↑ comment by Benquo · 2019-12-25T13:50:58.198Z · LW(p) · GW(p)
Someone deceiving with conscious intent can apply more compute / intelligence and other resources for optimizing and maintaining the lie, which means the deception can be much bigger and more consequential, thereby causing greater damage to others.
Unconscious deception is hard to distinguish from innocent mistakes.
Surely someone consciously intending to deceive can apply some of that extra compute to making it harder to distinguish their behavior from an innocent mistake.
↑ comment by Kaj_Sotala · 2019-10-15T08:49:21.629Z · LW(p) · GW(p)
"Your honor, I know I told the customer that the chemical I sold to them would cure their disease, and it didn't, and I had enough information to know that, but you see, I wasn't conscious that it wouldn't cure their disease, as I was selling it to them, so it isn't really fraud" would not fly in any court that is even seriously pretending to be executing justice.
(just to provide the keyword: the relevant legal doctrine here is that the seller "knew or should have known" that the drug wouldn't cure the disease)
↑ comment by abramdemski · 2019-10-16T23:57:06.461Z · LW(p) · GW(p)
"Your honor, I know I told the customer that the chemical I sold to them would cure their disease, and it didn't, and I had enough information to know that, but you see, I wasn't conscious that it wouldn't cure their disease, as I was selling it to them, so it isn't really fraud" would not fly in any court that is even seriously pretending to be executing justice.
Yet, oddly, something called 'criminal intent' is indeed required in addition to the crime itself.
It seems that 'criminal intent' is not interpreted as conscious intent. Rather, the actions of the accused must be incompatible with those of a reasonable person trying to avoid the crime.
↑ comment by Howie Lempel (howie-lempel) · 2019-12-25T15:26:49.951Z · LW(p) · GW(p)
Note that criminal intent is *not* required for a civil fraud suit which could be brought simultaneously with or after a criminal proceeding.
↑ comment by habryka (habryka4) · 2019-12-25T19:15:28.283Z · LW(p) · GW(p)
Can you say more about this? I've been searching for a while about the differences between civil and criminal fraud, and my best guess (though I am really not sure) is that both have an intentional component. Here for example is an article on intent in the Texas Civil Law code:
https://www.dwlawtx.com/issue-intent-civil-litigation/
↑ comment by Howie Lempel (howie-lempel) · 2019-12-26T04:05:23.822Z · LW(p) · GW(p)
[I'm not a lawyer and it's been a long time since law school. Also apologies for length]
Sorry - I was unclear. All I meant was that civil cases don't require *criminal intent.* You're right that they'll both usually have some intent component, which will vary by the claim and the jurisdiction (which makes it hard to give a simple answer).
---
tl;dr: It's complicated. Often reckless disregard for the truth or deliberate ignorance is enough to make a fraud case. Sometimes a "negligent misrepresentation" is enough for a civil suit. But overall both criminal and civil cases usually have some kind of intent/reckless indifference/deliberate ignorance requirement. Securities fraud in NY is an important exception.
Also I can't emphasize enough that there are 50 versions in 50 states and also securities fraud, mail fraud, wire fraud, etc can all be defined differently in each state.
----
After a quick Google, it looks to me like the criminal and civil standards are usually pretty similar.
It looks like criminal fraud typically (but not always) requires "fraudulent intent" or "knowledge that the fraudulent claim was false." However, it seems "reckless indifference to the truth" is enough to satisfy this in many jurisdictions.[1]
New York is famous for the Martin Act, which outlaws both criminal and civil securities fraud without having any intent requirement at all.[2] (This is actually quite important because a high percentage of all securities transactions go through New York at some point, so NY gets to use this law to prosecute transactions that occur basically anywhere).
The action most equivalent to civil fraud is misrepresentation of material facts/fraudulent misrepresentation. This seems a bit more likely than criminal law to accept "reckless indifference" as a substitute for actually knowing that the relevant claim was false.[3] For example, the Federal False Claims Act makes you liable if you display "deliberate ignorance" or "reckless disregard of the truth" even if you don't knowingly make a false claim.[4]
However, in at least some jurisdictions you can bring a civil claim for negligent misrepresentation of material facts, which seems to basically amount to fraud but with a negligence standard, not an intent standard.[5]
P.S. Note that we seem to be discussing the aspect of "intent" pertaining to whether the defendant knew the relevant statement was false. There's also often a required intent to deceive or harm in both the criminal and civil context (I'd guess the requirement is a bit weaker in civil law).
------
[1] "Fraudulent intent is shown if a representation is made with reckless indifference to its truth or falsity." https://www.justice.gov/jm/criminal-resource-manual-949-proof-fraudulent-intent
[2] "In some instances, particularly those involving civil actions for fraud and securities cases, the intent requirement is met if the prosecution or plaintiff is able to show that the false statements were made recklessly—that is, with complete disregard for truth or falsity."
[3] https://en.wikipedia.org/wiki/False_Claims_Act#1986_changes
[4] "Notably, in order to secure a conviction, the state is not required to prove scienter (except in connection with felonies) or an actual purchase or sale or damages resulting from the fraud.[2]
***
In 1926, the New York Court of Appeals held in People v. Federated Radio Corp. that proof of fraudulent intent was unnecessary for prosecution under the Act.[8] In 1930, the court elaborated that the Act should "be liberally and sympathetically construed in order that its beneficial purpose may, so far as possible, be attained."[9]
https://en.wikipedia.org/wiki/Martin_Act#Investigative_Powers
[5] "Although a misrepresentation fraud case may not be based on negligent or accidental misrepresentations, in some instances a civil action may be filed for negligent misrepresentation. This tort action is appropriate if a defendant suffered a loss because of the carelessness or negligence of another party upon which the defendant was entitled to rely. Examples would be negligent false statements to a prospective purchaser regarding the value of a closely held company’s stock or the accuracy of its financial statements." https://www.acfe.com/uploadedFiles/Shared_Content/Products/Self-Study_CPE/Fraud-Trial-2011-Chapter-Excerpt.pdf
↑ comment by habryka (habryka4) · 2019-12-26T04:55:43.236Z · LW(p) · GW(p)
Thank you, this was a good clarification and really helpful!
↑ comment by Connor_Flexman · 2019-10-20T17:28:23.305Z · LW(p) · GW(p)
I feel torn because I agree that unconscious intent is incredibly important to straighten out, but also think
1. everyone else is relatively decent at blaming them for their poor intent in the meantime (though there are some cases I'd like to see people catch onto faster), and
2. this is mostly between the person and themselves.
It seems like you're advocating for people to be publicly shamed more for their unconscious bad intentions, and this seems super bad for social fabric (and witch-hunt-permitting) while imo not adding very much capacity to change, due to point (2); it would be much better accomplished by a culture of forgiveness such that the elephant lets people look at it. Are there parts of this you strongly disagree with?
↑ comment by jessicata (jessica.liu.taylor) · 2019-10-20T17:57:42.272Z · LW(p) · GW(p)
I'm not in favor of shaming people. I'm strongly in favor of forgiveness. Justice in the current context requires forgiveness because of how thoroughly the forces of deception have prevailed, and how motivated people are to extend coverups to avoid punishment. Law fought fraud, and fraud won.
It's important to be very clear on what actually happened (incl. about violations), AND to avoid punishing people. Truth and reconciliation.
↑ comment by Eli Tyre (elityre) · 2019-11-28T04:07:29.979Z · LW(p) · GW(p)
Justice in the current context requires forgiveness because of how thoroughly the forces of deception have prevailed, and how motivated people are to extend coverups to avoid punishment. Law fought fraud, and fraud won.
This seems really important for understanding where you're at, and I don't get it yet.
I would love a concrete example of people being motivated to extend coverups to avoid punishment.
Do you have writings I should read?
↑ comment by jessicata (jessica.liu.taylor) · 2019-11-28T04:45:07.719Z · LW(p) · GW(p)
Jeffrey Epstein
↑ comment by Kenny · 2019-11-08T01:18:20.698Z · LW(p) · GW(p)
It's important to be very clear on what actually happened (incl. about violations), AND to avoid punishing people. Truth and reconciliation.
I think this a very much underrated avenue to improve lots of things. I'm a little sad at the thought that neither are likely without the looming threat of possible punishment.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-10-15T08:43:30.375Z · LW(p) · GW(p)
This is an excellent point. The more relevant boundary seems like the one we usually refer to with the phrase “should have known”—and indeed this is more or less the notion that the courts use.
The question, then, is: do we have a satisfying account of “should have known”? If so: can we describe it sensibly and concisely? If not: can we formulate one?
↑ comment by Raemon · 2019-10-15T22:22:32.767Z · LW(p) · GW(p)
I roughly agree with this being the most promising direction. In my mind the problem isn't "did so-and-so lie, or rationalize?"; the question is "was so-and-so demonstrably epistemically negligent?". If so, and if you can fairly apply disincentives (or, positive incentives on how to be epistemically non-negligent), then the first question just doesn't matter.
In actual law, we have particular rules about what people are expected to know. It is possible we could construct such rules for LessWrong and/or the surrounding ecosystems, but I think doing so is legitimately challenging.
↑ comment by Matt Goldenberg (mr-hire) · 2019-10-16T00:53:29.682Z · LW(p) · GW(p)
I disagree that answering the first question doesn't matter - that's a very extreme "mistake theory" lens.
If someone is actively adversarial vs. biased but open to learning, that changes quite a bit about how leaders and others in the community should approach the situation.
↑ comment by Raemon · 2019-10-19T21:25:39.723Z · LW(p) · GW(p)
I do agree that it's important to have the "are they actively adversarial" hypothesis and corresponding language. (This is why I've generally argued against the conflation of lying and rationalization).
But I also think, at least in most of the disagreements and conflicts I've seen so far, much of the problem has had more to do with rationalization (or, in some cases, different expectations of how much effort to put into intellectual integrity)
I think there is also an undercurrent of genuine conflict (as people jockey for money/status) that manifests primarily through rationalization, and in some cases duplicity.*
*where the issue is less about people lying but is about them semi-consciously presenting different faces to different people.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-10-16T00:43:19.028Z · LW(p) · GW(p)
Indeed, I agree that it would be more challenging for us, and I have some thoughts about why that would be and how to mitigate it. That said, I think the most productive and actionable way to make progress on this is to look into the relevant legal standards: what standards are applied in criminal proceedings (in the U.S.? elsewhere?) to “should have known”? to cases of civil liability? contract law? corporate law? etc. By looking at what constraints these sorts of situations place on people, and what epistemic obligations are assumed, we can get some insight into how our needs might be similar and/or different, compared to those contexts, which should give us ideas on how to formulate the relevant norms.
↑ comment by Kenny · 2019-11-08T01:15:55.376Z · LW(p) · GW(p)
I think we, and others too, are already constructing rules, tho not as a single grand taxonomy, completed as a single grand project, but piecemeal, e.g. like common law.
There have been recent shifts in ideas about what counts as 'epistemically negligent' [and that's a great phrase by the way!], at least among some groups of people with which I'm familiar. I think the people of this site, and the greater diaspora, have much more stringent standards today in this area.
comment by Wei Dai (Wei_Dai) · 2019-10-14T16:42:48.244Z · LW(p) · GW(p)
But even while frowning internally, I want to avoid tone-policing people whose word-choice procedures are calibrated differently from mine when I think I understand the structure-in-the-world they’re trying to point to.
What if you think that you know what the author is trying to convey, but due to different calibration on word choices, a large fraction of the audience will be misled? Worse, what if you suspect that the author is deliberately or subconsciously using a more extreme word choice (compared to what most of the audience would choose if they were fully informed) for non-epistemic reasons (e.g., to get more attention, or to mislead the audience into thinking the situation is worse than it actually is)? It seems to me that tone-policing is useful for getting everyone to have (or to be closer to having) the same calibration on word choice, and to help prevent and defend against non-epistemic motivations for word choice. (Although maybe "tone-policing" is a bad word for this, and "word-choice-policing" makes more sense here.)
Like anything, it can be misused, but I guess the solution to that is to police the word-choice-policing rather than to avoid it altogether?
↑ comment by Zack_M_Davis · 2019-10-15T01:49:19.569Z · LW(p) · GW(p)
Oh, that's a good point! Maybe read that paragraph as a vote for "relatively less word-choice-policing on the current margin in my memetic vicinity"? (The slip into the first person ("I want to avoid tone-policing," not "tone-policing is objectively irrational") was intentional.)
↑ comment by Benquo · 2019-12-25T13:56:44.297Z · LW(p) · GW(p)
The tone and implications of comments along the lines of "this wording is going to cause a lot of people to believe specific proposition X, when it seems to me like what you would actually be willing to defend is narrower proposition Y" are very different from those of "this wording is inappropriate because it is likely to upset people."
An important feature of the first comment is that it actually does some work trying to clarify, and is an implicit ITT (ideological Turing test).
comment by Kaj_Sotala · 2019-10-14T14:44:20.385Z · LW(p) · GW(p)
But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).
This sounds like you are saying that the purpose of language is only to describe reality, so we should not appeal to consequences when discussing word boundaries. If so, that seems wrong to me - language serves several different purposes, of which prediction is only one.
As an example, consider the word "crime", and more specifically the question of defining which things should be crimes. When discussing whether something might be a crime, people often bring in considerations like "lying is bad, but it shouldn't be a crime, because that would have worse consequences than it being legal"; and it would seem clearly wrong to me not to take those considerations into account.
One might object that legal terms are a special case, since they are part of a formal system with wide-ranging impact. But is that so different from other words, other than quantitatively? A legal term is primarily a tool for coordination, but so are rubes and bleggs [LW · GW]: on average, bleggs contain vanadium and rubes contain palladium, and the reason the factory draws those boundaries is to be able to instruct its workers on how to sort the things. If it turned out that the standard definitions were too confusing for the workers and made it harder to extract vanadium and palladium efficiently, then the factory would want to redefine the terms so as to make the sorting more efficient.
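To make the factory example concrete, here is a minimal sketch (all numbers, names, and the worker-error framing are hypothetical illustrations, not anything from the comment): two official wordings of the "blegg" definition carve out essentially the same statistical regularity, but one is harder to apply on the factory floor, and the factory picks its category boundary by looking at the sorting outcomes, i.e., by an appeal to consequences.

```python
# A toy sketch of the factory's problem (hypothetical numbers and names):
# the category boundary is a coordination tool, evaluated by how well
# the resulting sorting recovers vanadium and palladium.

import random
random.seed(0)

def make_object():
    """Most blue objects contain vanadium, most red ones palladium."""
    if random.random() < 0.5:
        return {"color": "blue", "metal": "vanadium" if random.random() < 0.95 else "palladium"}
    return {"color": "red", "metal": "palladium" if random.random() < 0.95 else "vanadium"}

def recovered_metal(objects, rule, worker_error_rate):
    """Count objects routed to the bin that can extract their metal.
    A confusing definition gets misapplied more often on the factory floor."""
    recovered = 0
    for obj in objects:
        label = rule(obj)
        if random.random() < worker_error_rate:
            label = "rube" if label == "blegg" else "blegg"
        correct_bin = "blegg" if obj["metal"] == "vanadium" else "rube"
        recovered += (label == correct_bin)
    return recovered

objects = [make_object() for _ in range(10000)]
color_rule = lambda o: "blegg" if o["color"] == "blue" else "rube"

# Same underlying boundary, but imagine one official wording is phrased so
# confusingly that workers misapply it far more often.
print("easy-to-apply definition:", recovered_metal(objects, color_rule, worker_error_rate=0.01))
print("confusing definition:    ", recovered_metal(objects, color_rule, worker_error_rate=0.20))
```

Whichever wording recovers more metal is the one the factory will adopt, which is the sense in which the boundary is chosen for its consequences and not for prediction alone.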
Or if I am a speaker of American English and want to ask my friend to bring me what are called chips in US English, but I know him to be a Brit, I might ask him to bring me crisps... because that word choice will have better consequences.
This is still compatible with all the standard words-as-disguised-queries stuff, because the language-as-prediction and language-as-coordination models can be viewed as special cases of each other:
- From the language-as-prediction model, the ultimate disguised query is "what are the consequences of defining the word in this way and do those consequences align with my goals"; that is still capturing statistical regularities, those regularities just happen to also be defined in terms of one's values.
- From the language-as-coordination model, sometimes we want to coordinate around a purpose such as describing reality in a relatively value-neutral way, in which case it's good to also have terms whose queries make less reference to our values (even if the meta-algorithm producing them still uses our values as the criteria for choosing the object-level query; e.g. different occupations develop specialized vocabulary that allows them to do their jobs better, even though the queries implicit in their vocabulary don't directly reference this).
More succinctly: both "Language is about coordination, and sometimes we want to coordinate the best way to make predictions" and "Language is about prediction, and sometimes we want to predict the best ways to coordinate" seem equally valid, and compatible with the standard Sequences.
Replies from: SaidAchmiz, Zack_M_Davis
↑ comment by Said Achmiz (SaidAchmiz) · 2019-10-14T16:32:45.742Z · LW(p) · GW(p)
As an example, consider the word “crime”, and more specifically the question of defining which things should be crimes. When discussing whether something might be a crime, people often bring in considerations like “lying is bad, but it shouldn’t be a crime, because that would have worse consequences than it being legal”; and it would seem clearly wrong to me not to.
This is a bad example, because whether something is a crime is, in fact, fully determined by whether “we” (in the sense of “we, as a society, expressing our will through legislation, etc.”) decide to label it a ‘crime’. There is no “fact of the matter” about whether something “is a crime”, beyond that.
Therefore “lying is bad, but it shouldn’t be a crime, because that would have worse consequences than it being legal” is a statement of an entirely different kind from “we shouldn’t call this ‘lying’, because that would have bad consequences”. In the former case, if we decide that lying isn’t a crime, then it is not, in fact, a crime—we actually cause reality to change by that decision, such that the facts of the matter now fully align with the new usage. In the latter case, however, it’s very different; there is a fact of the matter, regardless of how we talk about it.
Replies from: Kaj_Sotala, Wei_Dai, Kenny
↑ comment by Kaj_Sotala · 2019-10-14T16:53:08.221Z · LW(p) · GW(p)
For demonstrating anything which involves a matter of degree, the point is communicated most effectively by highlighting examples which are at an extreme end of the spectrum. It is true that something being a "crime" is arguably 100% socially determined and 0% "objectively" determined, but that doesn't make it a bad example. It just demonstrates the extreme end of the spectrum, in the same way that a concept from, say, physics demonstrates the opposite end of the spectrum, where it's arguably close to 100% objective whether something really has a mass of 23 kilograms or not.
The relevant question is where "lying" falls on that spectrum. To me it feels like it's somewhere in between - neither entirely socially determined, nor entirely a fact of the matter.
↑ comment by Wei Dai (Wei_Dai) · 2019-10-18T20:40:31.829Z · LW(p) · GW(p)
This is a bad example, because whether something is a crime is, in fact, fully determined by whether “we” (in the sense of “we, as a society, expressing our will through legislation, etc.”) decide to label it a ‘crime’. There is no “fact of the matter” about whether something “is a crime”, beyond that.
Maybe a better example is "danger"? Everything is somewhat dangerous; there are no "concentrations of unusually high probability density in Thingspace [LW · GW]" that we can draw boundaries around, where one concentration is more dangerous than the other with a clear gap in between, so whether we decide to call something a "danger" seemingly must depend entirely or mostly on the consequences of doing so. Yet there is clearly a fact of the matter about how dangerous something really is.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-10-19T02:13:44.180Z · LW(p) · GW(p)
so whether we decide to call something a "danger" seemingly must depend entirely or mostly on the consequences of doing so
I'm not claiming that the theory can tell us exactly how dangerous something has to be before we call it a "danger." (Nor how many grains of sand make a "heap".) This, indeed, seems necessarily subjective.
I'm claiming that whether we call something a "danger" should not take into account considerations like, "We shouldn't consider this a 'danger', because if we did, then people would feel afraid, and their fear is suffering to be minimized according to the global utilitarian calculus."
That kind of utilitarianism might (or might not) be a good reason to not tell people about the danger, but it's not a good reason to change the definition of "danger" itself. Why? Because from the perspective of "language as AI design", that would be wireheading. You can't actually make people safer in reality by destroying the language we would use to represent danger.
Is that clear, or should I write a full post about this?
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2019-10-19T03:47:00.076Z · LW(p) · GW(p)
I’m claiming that whether we call something a “danger” should not take into account considerations like, “We shouldn’t consider this a ‘danger’, because if we did, then people would feel afraid, and their fear is suffering to be minimized according to the global utilitarian calculus.”
Is the reason you think we should not take this kind of consideration into account that, if we did decide not to consider the object under discussion a "danger", that would have worse consequences in the long run? If so, why not argue for taking both of these considerations into account, and argue that the second consideration is stronger? Kind of a "fight speech with more speech instead of censorship" approach? (That would allow for the possibility that we override considerations for people's feelings in most cases, but avoid calling something a "danger" in extreme cases where the emotional or other harm of doing so is exceptionally great.)
It seems like the only reason you'd be against this is if you think that most people are too irrational to correctly weigh these kinds of considerations against each other on a case-by-case basis, and there's no way to train them to be more rational about this. Is that true, and if so why do you think that?
That kind of utilitarianism might (or might not) be a good reason to not tell people about the danger, but it’s not a good reason to change the definition of “danger” itself.
I'm questioning whether there is any definition of "danger" itself (in the sense of things that are considered dangerous, not the abstract concept of danger), apart from the collection of things we decide to call "danger".
Replies from: Vladimir_Nesov, Zack_M_Davis
↑ comment by Vladimir_Nesov · 2019-10-19T14:24:16.744Z · LW(p) · GW(p)
correctly weigh these kinds of considerations against each other on a case-by-case basis
The very possibility of intervention based on weighing map-making and planning against each other destroys their design, if they are to have a design. It's similar to patching a procedure in a way that violates its specification in order to improve overall performance of the program or to fix an externally observable bug. In theory this can be beneficial, but in practice the ability to reason about what's going on deteriorates.
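A minimal illustration of the programming analogy (hypothetical code, not from the thread): a procedure whose documented contract says it leaves its input untouched gets patched to work in place for speed. The patch may be locally beneficial, but callers that reasoned from the contract are now reasoning from a false premise.

```python
# Hypothetical example of patching a procedure in a way that violates its
# stated specification.

def normalize(scores):
    """Contract: return a NEW list scaled to sum to 1.0; never modify the input."""
    total = sum(scores)
    return [s / total for s in scores]

def normalize_patched(scores):
    """'Optimization': rescale in place to skip an allocation. Faster, and it
    might even mask a bug elsewhere, but it silently breaks the contract."""
    total = sum(scores)
    for i in range(len(scores)):
        scores[i] /= total
    return scores

raw = [3.0, 1.0, 1.0]
probs = normalize_patched(raw)
print(raw)  # [0.6, 0.2, 0.2]: code that relied on `raw` being unchanged now misbehaves
```

Each such patch can be defended by its consequences in isolation; what deteriorates is the ability to reason about the program from its stated specifications.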
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2019-10-19T17:56:02.227Z · LW(p) · GW(p)
In theory this can be beneficial, but in practice the ability to reason about what’s going on deteriorates.
I think (speaking from my experience) specifications are often compromises in the first place between elegance / ease of reasoning and other considerations like performance. So I don't think it's taboo to "patch a procedure in a way that violates its specification in order to improve overall performance of the program or to fix an externally observable bug." (Of course you'd have to also patch the specification to reflect the change and make sure it doesn't break the rest of the program, but that's just part of the cost that you have to take into account when making this decision.)
Assuming you still disagree, can you explain why in these cases, we can't trust people to use learning and decision theory (i.e., human approximations to EU maximization or cost-benefit analysis) to make decisions, and we instead have to make them follow a rule (i.e., "don't ever do this")? What is so special about these cases? (Aren't there tradeoffs between ease of reasoning and other considerations everywhere?) Or is this part of a bigger philosophical disagreement between rule consequentialism and act consequentialism, or something like that?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2019-10-20T04:02:23.535Z · LW(p) · GW(p)
The problem with unrestrained consequentialism is that it accepts no principles in its designs. An agent that only serves a purpose has no knowledge of the world or mathematics; it makes no plans and maintains no goals. It is what it needs to be, and no more. All these things are only expressed as aspects of its behavior, godshatter of the singular purpose, but there is no part that seeks excellence in any of the aspects.
For an agent designed around multiple aspects, its parts rely on each other in dissimilar ways, not as subagents with different goals. Access to knowledge is useful for planning and can represent goals. Exploration and reflection refine knowledge and formulate goals. Planning optimizes exploration and reflection, and leads to achievement of goals.
If the part of the design that should hold knowledge accepts a claim for reasons other than arguments about its truth, the rest of the agent can no longer rely on its claims as reflecting knowledge.
Of course you'd have to also patch the specification
In my comment, I meant the situation where the specification is not patched (and by specification in the programming example I meant the informal description on the level of procedures or datatypes that establishes some principles of what it should be doing).
In the case of appeal to consequences, the specification is a general principle that a map reflects the territory to the best of its ability, so it's not a small thing to patch. Optimizing a particular belief according to the consequences of holding it violates this general specification. If the general specification is patched to allow this, you no longer have access to straightforwardly expressed knowledge (there is no part of cognition that satisfies the original specification).
Alternatively, specific beliefs could be marked as motivated, so the specification is to have two kinds of beliefs, with some of them surviving to serve the original purpose. This might work, but then actual knowledge that corresponds to the motivated beliefs won't be natively available, and it's unclear what the motivated beliefs should be doing. Will curiosity act on the motivated beliefs, should they be used for planning, can they represent goals? A more developed architecture for reliable hypocrisy might actually do something sensible, but it's not a matter of merely patching particular beliefs.
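One way to picture the "mark specific beliefs as motivated" alternative is the hypothetical sketch below (illustrative structure only, not anyone's actual proposal): the belief store flags which entries are held for their consequences, and planning must then either skip them, leaving a hole where knowledge should be, or use them and inherit the distortion.

```python
# Hypothetical sketch of a belief store with motivated entries flagged.

beliefs = {
    # claim:                      (credence, held_for_consequences)
    "the coolant line is leaking": (0.7, False),
    "the project is on schedule":  (0.9, True),   # kept because it sustains morale
}

def credence_for_planning(claim):
    """If the planner skips motivated beliefs, the knowledge they displaced is
    simply unavailable; if it uses them, plans inherit the distortion."""
    credence, motivated = beliefs[claim]
    return None if motivated else credence

print(credence_for_planning("the coolant line is leaking"))  # 0.7
print(credence_for_planning("the project is on schedule"))   # None: no trustworthy native estimate
```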
↑ comment by Zack_M_Davis · 2019-10-28T04:27:14.962Z · LW(p) · GW(p)
(Thanks for the questioning!—and your patience.)
In order to compute what actions will have the best consequences, you need to have accurate beliefs—otherwise, how do you know what the best consequences are?
There's a sense in which the theory of "Use our methods of epistemic rationality to build predictively accurate models, then use the models to decide what actions will have the best consequences" is going to be meaningfully simpler than the theory of "Just do whatever has the best consequences, including the consequences of the thinking that you do in order to compute this."
The original timeless decision theory manuscript distinguishes a class of "decision-determined problems", where the payoff depends on the agent's decision, but not the algorithm that the agent uses to arrive at that decision: Omega isn't allowed to punish you for not making decisions according to the algorithm "Choose the option that comes first alphabetically." This seems like a useful class of problems to be able to focus on? Having to take into account the side-effects of using a particular categorization seems like a form of being punished for using a particular algorithm.
I concede that, ultimately, the simple "Cartesian" theory that disregards the consequences of thinking can't be the true, complete theory of intelligence, because ultimately, the map is part of the territory. I think the embedded agency [LW · GW] people are working on this?—I'm afraid I'm not up-to-date on the details. But when I object to people making appeals to consequences, the thing I'm objecting to is never people trying to do a sophisticated embedded-agency thing; I'm objecting to people trying to get away with choosing to be biased [LW · GW].
you think that most people are too irrational to correctly weigh these kinds of considerations against each other on a case-by-case basis, and there's no way to train them to be more rational about this. Is that true
Actually, yes.
and if so why do you think that?
Long story. How about some game theory instead?
Consider some agents cooperating in a shared epistemic project—drawing a map, or defining a language, or programming an AI—some system that will perform better if it does a better job of corresponding with (some relevant aspects of) reality. Every agent has the opportunity to make the shared map less accurate in exchange for some selfish consequence. But if all of the agents do that, then the shared map will be full of lies. Appeals to consequences tend to diverge (because everyone has her own idiosyncratic favored consequence); "just make the map be accurate" is a natural focal point (because the truth is generally useful to everyone).
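Here is a toy numerical version of that game-theoretic point (all numbers hypothetical): each contributor can nudge the shared estimate toward her own favored conclusion, and because the favored conclusions are idiosyncratic, universal nudging mostly just destroys the accuracy that everyone was relying on, while "report what you actually observed" remains the verifiable focal point.

```python
# Toy model (hypothetical numbers) of a shared map built from many reports.

import random
random.seed(0)

TRUTH = 10.0
N_AGENTS = 20

def shared_estimate(biases):
    """Each agent reports a noisy observation plus her chosen distortion;
    the shared map is the average report."""
    reports = [TRUTH + random.gauss(0, 1) + b for b in biases]
    return sum(reports) / len(reports)

def avg_map_error(bias_fn, trials=2000):
    errors = [abs(shared_estimate([bias_fn() for _ in range(N_AGENTS)]) - TRUTH)
              for _ in range(trials)]
    return sum(errors) / len(errors)

# Focal point: everyone just reports accurately.
print("honest reporting, average map error: ", round(avg_map_error(lambda: 0.0), 2))

# Everyone appeals to her own idiosyncratic favored consequence.
print("everyone distorts, average map error:", round(avg_map_error(lambda: random.uniform(-10, 10)), 2))
```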
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2019-10-30T06:22:11.408Z · LW(p) · GW(p)
I think the embedded agency people are working on this?—I’m afraid I’m not up-to-date on the details. But when I object to people making appeals to consequences, the thing I’m objecting to is never people trying to do a sophisticated embedded-agency thing; I’m objecting to people trying to get away with choosing to be biased.
In that case, maybe you can clarify (in this or future posts) that you're not against doing sophisticated embedded-agency things? Also, can you give some examples of what you're objecting to, so I can judge for myself whether they're actually doing sophisticated embedded-agency things?
Appeals to consequences tend to diverge (because everyone has her own idiosyncratic favored consequence); “just make the map be accurate” is a natural focal point (because the truth is generally useful to everyone).
This just means that in most cases, appeals to consequences won't move others much, even if they took such consequences into consideration. It doesn't seem to be a reason for people to refuse to consider such appeals at all. If appeals to consequences only tend to diverge, it seems a good idea to actually consider such appeals, so that in the rare cases where people's interests converge, they can be moved by such appeals.
So, I have to say that I still don't understand why you're taking the position that you are. If you have a longer version of the "story" that you can tell, please consider doing that.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-10-30T15:04:26.400Z · LW(p) · GW(p)
I will endeavor to make my intuitions more rigorous and write up the results in a future post.
↑ comment by Kenny · 2019-11-08T01:23:59.743Z · LW(p) · GW(p)
This is a bad example, because whether something is a crime is, in fact, fully determined by whether “we” (in the sense of “we, as a society, expressing our will through legislation, etc.”) decide to label it a ‘crime’.
I think it's still a good example, perhaps because of what you pointed out. It seems pretty clear to me that there's a sometimes significant difference between the legal and colloquial meanings of 'crime' and even bigger differences for 'criminal'.
There are many legal 'crimes' that most people would not describe as such and vice versa. "It's a crime!" is inevitably ambiguous.
↑ comment by Zack_M_Davis · 2019-10-15T01:53:08.046Z · LW(p) · GW(p)
I agree that the complete theory needs to take coordination problems into account, but I think it's a much smaller effect than you seem to think? See "Schelling Categories, and Simple Membership Tests" [LW · GW] for what I think this looks like. (I also analyzed a topical example on my secret ("secret") blog.)
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2019-10-15T08:54:39.630Z · LW(p) · GW(p)
I didn't mean "coordination" just in the sense of "coordination problems" (in the technical economic sense), but as language existing to enable any coordination at all. In the sense where, if I ask you to bring me a glass of water, we have coordinated on an action to bring me a glass of water. I don't think that this is just an effect which needs to be taken into account, but rather one of the primary reasons why language exists in the first place. Its usefulness for making improved (non-coordination-related) predictions could very well be a later addition that just happened to get tacked onto the existing mechanism.
comment by Kenny · 2019-11-08T01:36:04.969Z · LW(p) · GW(p)
This is a great post! A lot of these points have been addressed, but this is what I wrote while reading this post:
It's not immediately clear that an 'appeal to consequences' is wrong or inappropriate in this case. Scott was explicitly considering the policy of expanding the definition of a word, not just which definition is better.
If the (chief) purpose of 'categories' (i.e. words) is to describe reality, then we should only ever invent new words, not modify existing ones. Changing words seems like a strict loss of information.
It also seems pretty evident that there are ulterior motives (e.g. political ones) behind overt or covert attempts to change the common shared meaning of a word. It's certainly appropriate to object to those motives, and to object to the consequences of the desired changes with respect to those motives. One common reason to make such changes seems to be to exploit the current valence or 'mood' of that word and use it against people who would otherwise be immune under the current meaning.
Some category boundaries should reflect our psychology and the history of our ideas in the local 'category space', and not be constantly revised to be better Bayesian categories. For one, it doesn't seem likely that Bayesian rationalists will be deciding the optimal category boundaries of words anytime soon.
But if the word "lying" is to actually mean something rather than just being a weapon, then the ingroup and the outgroup can't both be right.
This is confusing in the sense that it's obviously wrong, but I suspect it's intended in a much narrower sense. It's a demonstrated fact that people assign different meanings to the 'same words'. Besides otherwise unrelated homonyms, there's no single unique global community of language users where every word means the same thing for all users. That doesn't imply that words with multiple meanings don't "mean something".
Given my current beliefs about the psychology of deception, I find myself inclined to reach for words like "motivated", "misleading", "distorted", &c., and am more likely to frown at uses of "lie", "fraud", "scam", &c. where intent is hard to establish. But even while frowning internally, I want to avoid tone-policing people whose word-choice procedures are calibrated differently from mine when I think I understand the structure-in-the-world they're trying to point to.
You're a filthy fucking liar and you've twisted Scott Alexander's words while knowingly ignoring his larger point; and under cover of valuing 'epistemic rationality' while leveraging your privileged command of your cult's cant.
[The above is my satire against being against tone policing. It's not possible to maintain valuable communication among a group of people without policing tone. In particular, LessWrong is great in part because of its tone.]
comment by Dagon · 2019-10-14T17:05:06.997Z · LW(p) · GW(p)
I suspect we're very far apart on your first point. "Lie" is not in the territory; it's all about which map you are using. Which definition(s) you prefer are context- and audience-dependent, and if you suspect something is unclear, you should use more words rather than expending a lot of energy on over-generalizing this use of one word.
I'm more sympathetic to your second point, but note that you are arguing from consequences - you want to use the word in such a way that it apportions blame and influences behavior in the ways you (and I, TBH) prefer.
comment by Hazard · 2019-10-15T21:58:11.821Z · LW(p) · GW(p)
Thanks for redirecting to the question of "What is the Psychology of Deception like?"
Upon inspection, one way my mind treats "lying" is directly as the decision to punish. I also might use "intentional deceit" as a definition when prompted. This was the first time that I checked whether my gut sense of when people should be punished maps onto "intentional deceit". No conclusions on whether they're mismatched, but it's no longer obvious to me that they are the same.
comment by Shmi (shminux) · 2019-10-14T19:52:49.539Z · LW(p) · GW(p)
Maybe it is worth starting with a core definition of a lie that most people would agree with, something like "an utterance that is consciously considered by the person uttering it to misrepresent reality as they know it at that moment, with the intention to force the recipient to adjust their map of reality to be less accurate". Well, that's unwieldy. Maybe "an attempt to deceive through conscious misrepresentation of one's models"? Still not great.
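Spelling the proposed definition out as an explicit predicate shows why it feels unwieldy: each conjunct is a separate, contestable judgment call. The sketch below just restates the wording above in code form (the argument names are hypothetical).

```python
# Restating the proposed core definition as a predicate (hypothetical names).

def is_lie_restricted(misrepresents_speakers_own_model: bool,
                      misrepresentation_is_conscious: bool,
                      intended_to_degrade_listeners_map: bool) -> bool:
    """A conscious misrepresentation of one's own model of reality, made with
    the intention of making the recipient's map less accurate."""
    return (misrepresents_speakers_own_model
            and misrepresentation_is_conscious
            and intended_to_degrade_listeners_map)

print(is_lie_restricted(True, True, True))    # True: the central case everyone calls a lie
print(is_lie_restricted(True, False, False))  # False here; the "expanded" sense would still call it lying
```

The dispute in the post is essentially over whether the middle conjunct (conscious awareness) belongs in the definition at all.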
comment by quwgri · 2024-12-13T19:33:42.819Z · LW(p) · GW(p)
The Hume quote (or rather the way you use it) has nothing to do with models of reality. Your post is not about the things Scott was talking about from the very beginning.
Suppose I say "Sirius is a quasar." I am relying on the generally accepted meaning of the word "quasar." My words suggest that the interlocutor change the model of reality. My words are a hypothesis. You can accept this hypothesis or reject it.
Suppose the interlocutor says "Sirius cannot be considered a quasar because it would have very bad social consequences." Perhaps he is making a mistake. For the reasons you described in your text. (To be honest, I am not sure that this is a mistake. But I realize that I am writing this text on a resource for noble crazy idealists, so I will not delve deeply into this issue. Let's assume that this is indeed a mistake.)
Suppose I say "Let's consider stars like Sirius to be quasars." Is this sentence similar to the previous one? No. I am not suggesting that the other person change their model of reality. My words are not a hypothesis. They are just a project. They are just a proposal for certain actions.
Suppose the other person says "If we use the word 'quasar' in this way, it will have very bad social consequences." Is his logic sound? In my opinion, yes. My proposal does not suggest that anyone change their model of reality. It is a proposal for a specific practical action. It is as if I suggested: "Let's sing the National Anthem while walking." If the other person says: "If you sing the National Anthem while walking, it will lead to terrible consequences" (and if he can prove it), is he wrong?