A Dissent on Honesty
post by eva_ · 2025-04-15T02:43:44.163Z · LW · GW · 51 comments
Context
Disney's Tangled (2010) is a great movie. Spoilers if you haven't seen it.
The heroine, having been kidnapped at birth and raised in a tower, has never set foot outside. It follows, naturally, that she does not own a pair of shoes, and she is barefoot for the entire adventure. The movie contains multiple shots that focus at length on her toes. Things like that can have an outsized influence on a young mind, but that's Disney for you.
Anyway.
The male romantic lead goes by the name of "Flynn Rider." He is a dashingly handsome, swashbuckling rogue who was carefully crafted to be maximally appealing to women. He is the ideal male role model. If you want women to fall in love with you, it should be clear that the optimal strategy is to pretend to be Flynn Rider. Shortly into the movie is the twist: Flynn Rider (real name Eugene Fitzherbert) is also just pretending to be Flynn Rider. He was once a nerdy, loner orphan with no friends (I hope this is cutting close to your heart) till he read a series of books called "The Tales of Flynnigan Rider" and decided to steal the protagonist's personality wholesale to use as his own. Presumably what followed was an unbroken series of victories and successes all the way until he stumbled into the arms of a beautiful, naive teenager with implausible hair, who he seduced on his first try in about a day and a half.
Flynn admits his real name (and parts of his real personality) to his love interest and only his love interest, and only after he's already successfully seduced her. She accepts this, finding his (selective) openness endearing.
The lesson here is likewise clear: If your actual personality isn't good enough, pretend to be Flynn Rider to everyone at all times, with the sole carve-out being people who love you, like your mother or a princess. This works because people who love you will find your openness endearing, whereas everyone else will think you pathetic and use it against you.
Actually, even if your personality is good enough, you should probably still pretend to be Flynn Rider, because his personality is better. It was, after all, carefully designed by a crack team of imagineers. Was yours? Didn't think so.
Reminder About Winning
Once upon a time, two armies went to war. The first army desired honour and glory, to prove their bravery against their foe, to stand their ground whatever their odds, to bring no shame to their ancestors, and to be worthy of great ballads that would be sung across the lands for generations to come. The second army wanted to kill the people in the first army, without dying themselves in the process. It should be of little surprise to you that, since none of their goals contradicted, in the end everybody got what they wanted.
- Sun Tzu maybe, IDK I made it up.
Philosophers get to pick one axiom, one highest value, to declare superior to all others. If you have no highest value, you have no way to judge all your other values. If you have two highest values, you will not make it three paces from your own front door before they start contradicting each other.
Rationality can be about Winning [LW · GW], or it can be about The Truth, but it can't be about both. Sooner or later, your The Truth will demand you shoot yourself in the foot, while Winning will offer you a pretty girl with a country-sized dowry. The only price will be presenting various facts about yourself in the most seductive order instead of the most informative one.
If your highest value isn't Winning, you do not get to be surprised when you lose. You do not even get to be disappointed. By revealed preference, you have to have a mad grin across your face, that you were able to hold fast to your highest-value-that-isn't-winning all the way to the bitter end.
And yet, the rationalist movement has some kind of weird fetish for honesty, without much formal proof or empirical evidence that it's a good idea. Why? Did you watch the wrong Disney movies growing up?
Technical Truth is as Bad as Lying
There is a philosophy I want to call "Technical Truthism". Technical Truthists believe that, so long as what they said was technically true, you have no right to be mad at them, including when they tricked you into giving them all your money, cleverly danced around important issues, lied to themselves so they could truthfully report their own delusional beliefs as if they were valuable, laundered their opinions through a series of shell companies to create the illusion of an independent source that they were merely quoting, refused to give a straight answer on the important "have I just been scammed" question, and publicly lauded their own commitment to Absolute Honesty while they did it.
The gospel of Technical Truthism includes such sacred wisdom as:
- If the sentence has the word "allegedly" in it, and anyone has ever said it, it's true.
- If the sentence has the word "may" or "might" in it, it's true.
- If the sentence is about your belief state, and you can by any means reach that belief state, it's true.
- If the sentence is about what you've observed, and it is true about a subset of what you've observed, it's true.
I'm not sure which Disney movies get you into this, because every pop culture example I can think of is either the devil or a lawyer who looks like a stand-in for the devil. I think this philosophy is ridiculous and self-defeating. It defeats the entire point of telling the truth in the first place.
If you are an honest person, and others can by some mechanism know this, then they will believe you when you say things, and this can be used to share valuable information. If you are a liar and everyone knows it, there's nothing you can do to get the village to save you from the wolf, because when you yell wolf they just ignore you.
The purpose of a word is to carve reality at a joint useful for the discussion taking place, and we should pause here to note that the joint in question isn't "emits true statements", it's "emits statements that the other party is better off for listening to". Nobody should care if your statement "A wolf has been observed near our sheep!" is technically true if, when they come running, they find it was a drawing of a wolf and you're laughing at them. That is no better for their interests than an outright lie. The technical truth is useless as a defense, except against courts that are obligated to follow explicit laws and exact wordings. Nobody made out of meat should care.
Being Mistaken is Also as Bad as Lying
Just as we don't have reason to care if they technically saw a wolf, we also don't have much reason to care if they thought they saw a wolf and were merely sincerely mistaken. Sure, malice can be disincentivised with punishments in a way that mere errors are less susceptible to, but when it comes to whether we should listen to them next time, being frequently wrong because they're not very bright is just as bad as being just as wrong for any other reason.
The honest might say "By never lying, I get a reputation for never lying, so people will always believe me". This isn't true. They'd also have to never be mistaken, never be misled by the lies of another, never misread another's report, never stumble over their words and say something they didn't mean, never accidentally imply something they didn't mean and be mistaken for saying it outright, etc. Basically they'd have to be omniscient, but they're not omniscient. They're made out of meat too, remember.
Fortunately, you don't need a perfect reputation, just a good enough reputation that other people think it passes their Expected Utility Calculation to act on what you say. If you are an aspiring rationalist, you may well be so far above the median in accuracy of belief that you can get away with far above median dishonesty if you want, and still be an authority figure.
This is Partially a Response
In "Meta-Honesty: Firming Up Honesty Around Its Edge-Cases" [LW · GW], Eliezer writes (and the community seems to agree [? · GW]) in the direction of a general premise that truth-speaking is admirable, and something rationalists should aspire to have more of.
As to whether the honest have a better ability to discern lies than the liars, Eliezer writes:
This is probably not true in normal human practice for detecting other people's lies. I'd expect a lot of con artists are better than a lot of honest people at that.
I think this is probably correct. You can tell because Eliezer says so, and it's an admission against interest, so he wouldn't say it if he didn't believe it and he wouldn't believe it (because of self-interest biases) unless it was probably correct, but you might still check over your memories of your own life or try looking for an independent study anyway.
He goes on to write:
I once heard somebody claim that rationalists ought to practice lying, so that they could separate their internal honesty from any fears of needing to say what they believed. That is, if they became good at lying, they'd feel freer to consider geocentrism without worrying what the Church would think about it. I do not in fact think this would be good for the soul, or for a cooperative spirit between people. This is the sort of proposed solution of which I say, "That is a terrible solution and there has to be a better way."
Here (I think) he is straightforwardly wrong, and you can tell because he's only able to defend it by resorting to non-rational frames. Who cares if it is "good for the soul", souls aren't real and we're supposed to be trying to Win here. There does not in fact have to be a better way. Sometimes the best option isn't also the maximally honest one. Tradeoffs exist, and you aren't going to make those tradeoffs at anywhere near an optimal rate if you're refusing to ever think of the possibility for fear of spiritual damage.
Whether it is bad for a "cooperative spirit" I promise I will get back to.
The purpose of this essay is not to disagree with Eliezer's Meta-Honesty as a principle for how to be unusually honest despite the dangers (I think it's mostly correct given its premise), but rather to disagree that being unusually honest is a good idea in the first place.
Examples
It is very easy to come up with moral hypotheticals where You Must Lie or Else, but let's ceremonially do it anyway.
A Paragon of Morality is out travelling, when he is beset by bandits. They demand he hand over his gold or they will kill him and take it from his corpse. This is not a decision-theoretic threat [? · GW] because the bandits value getting his gold more than they disprefer committing murder, but would otherwise avoid the murder if possible. If he hands over all his gold he will lose all his gold. If he hands over all the gold in his pockets, neglects the extra he has hidden in his sock, and says "I have given you all my gold" in a sufficiently convincing tone of voice, then he will lose less than all his gold.
This isn't Omega we're dealing with here; they're totally trickable by a moderately convincing performance. If he keeps some of the gold he can donate it to GiveWell-approved charities and save however many QALYs or whatever.
Does he have a moral obligation to lie?
Yeah, obviously. Just do the Expected Value Calculation. Why care about Honour here, they're bloody bandits. I think even the particularly devout agree here.
A Normally Honest Man is applying for a job as a Widget Designer. He has many years of industry experience in Widget Engineering. He has memorised the Widget Manufacturing Process. He's actually kind of obsessed with Widgets. Typically whenever a conversation becomes about Widgets he gushes openly and makes a bad impression with his in-laws. Since that incident he has developed the self control to pretend otherwise, and the rest of his personality is okay.
The interviewer works for a Widget Manufacturing company but seems to only care about Widgets a normal amount. He asks "How interested are you in Widgets?" He has learnt from previous job interviews that, if he answers honestly, the interviewer will think he is any of lying, insane, or too weird to deal with, and not hire him, even though this is not in the best financial interests of the company, were they fully informed.
Should he pretend to like widgets the amount most likely to get him hired, or does he have a moral obligation to keep answering honestly until he runs out of rent money and becomes homeless?
The thing I'm trying to point at here is that Not Lying is not a good decision principle in general. It might still be a valuable social norm e.g. for "Cooperative Epistemics" (It isn't, but I'll get to that later), but you definitely shouldn't think of it as any kind of bright line or guiding light.
These aren't the important examples to me though, the important example is this:
A Self-Improvement and Epistemics Nerd has an online community for Self-Improvement and Epistemics Nerds. Half the people reading it are autists with bad social skills, who weren't at exactly the right age to be saved by Disney's Tangled. They struggle with navigating ordinary social situations and obtaining true beliefs because they're bad at lying, and insufficiently aggressive at anticipating it in others.
Would they be doing anyone a favour in encouraging a social norm of truthfulness and the expectation of truthfulness in others, when all those people will inevitably have to leave the computer one day and end up like the subjects of the previous two examples? Would they be making the world a better place?
I don't think they would be doing a favour, and I expect they would be making the world a worse place. In fact, if they pretended it was a good idea anyway, I think they would be lying.
Biting the Bullet
Other people are not all your friends, do not all love you, and have not sworn an unbreakable oath not to use the information you provide them against your interests. Sometimes you are made better off by them having less accurate information.
Saying words is just an action, like any other action. Whether the words are literally true or not is just a fact about the action, like any other fact about an action. It's not the morally important fact. You judge actions by their consequences, whether you expect it to lead to more good or bad. Then you take the action with the best consequences overall.
Far more important for an aspiring rationalist however is the inverse: You should expect other people to do similar, and (sometimes) emit sentences that it does not benefit you to listen to. Even if they're good people, even if they're paragons of virtue who donate to all the right charities. You have an obligation to think things through yourself and carefully check. You cannot sell this obligation to the other party.
How to actually do this will involve some details.
My Proposed Policy
Lie by default whenever you think it passes an Expected Value Calculation to do so, just as for any other action. Include the reputation effects of your statement, the loss of valuable credibility if you are known to be a liar, and the risk of revenge by angry victims if your lies hurt others in your calculation. If your lie would be breaking a valuable and delicate social institution that actually exists and is being stably participated in by your counterparties (you have to check for this, do not just assume one exists), consider the value of that institution also in your decision. If you could go to prison about it, remember that prison is a really incredibly terrible place and that even tiny probabilities of ending up there can quickly dominate Expected Values.
Practice lying until you are good at it, so you can use it when you need to. Practice quickly assessing whether it is a good idea to say something, separately from whether that something is true. Practice discerning the lies of others, and better tracking and modelling reputation and reliability in different circumstances.
Once you have these tools, reassess all your beliefs for whether you really believe them or were just tricking yourself because you felt you needed to believe it to maintain your social relationships in the absence of your new ability to lie. For any such beliefs you find, secretly abandon it in favour of believing the truth. If necessary, lie to all your friends and pretend you still believe it to maintain your social position (If you wouldn't let yourself do this, you won't be able to really reassess the belief in the first place).
Treat Cooperate-Cooperate dynamics, where they locally exist, as extremely valuable things that you would not want to cheaply destroy, but do not assume they exist where they do not. Require proof and err towards caution. If you think your friend is mistaken or overly naive, try to help them reach truth if and only if you aren't shooting yourself in the foot even more so by doing that.
When your friends ask you about how trustworthy you are, make no implications that you are abnormally honest. Tell them truthfully (if it is safe to do so) about all the various bad incentives, broken social systems, and ordinary praxis that compel dishonesty from you and any other person, even among friends, and give them sincere advice about how to navigate these issues.
Build a mental model of how and when other people are trustworthy based on past behaviour, demographic and selection effects, random mannerisms, and anything else you find that is useful. As you know someone better, you will update away over time from the general distribution to a more accurate subpopulation and eventually a precise internal model of how that individual thinks. If a specific claim greatly affects your choices and would be cheap to check, check anyway, as your Expected Value Calculations direct you.
I know you can't actually do an Expected Value Calculation. I just mean pretend to do one to the best of your ability, make up a number, and then act on that. The universe won't be able to tell a made up number from a real one anyway, nobody else can check your internal thoughts. It's still a better policy than just trusting people.
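For concreteness, here is a minimal sketch of what "pretend to do an Expected Value Calculation" could look like for something like the widget interview. It's written in Python purely for illustration; every probability and payoff below is a number invented on the spot, exactly in the spirit of the paragraph above, not a claim about real odds.

```python
# A sketch of the "made-up numbers" EV comparison described above.
# All probabilities and payoffs are invented for illustration only.

def expected_value(outcomes):
    """Sum of probability-weighted payoffs for one candidate statement."""
    return sum(p * payoff for p, payoff in outcomes)

# Candidate statement 1: the flattering, not-quite-true answer.
ev_lie = expected_value([
    (0.70, +10.0),   # believed; you get the job
    (0.25,  -1.0),   # half-believed; mild awkwardness
    (0.05, -30.0),   # caught outright; credibility and reputation loss
])

# Candidate statement 2: the unvarnished truth.
ev_truth = expected_value([
    (0.30, +10.0),   # they appreciate it and hire you anyway
    (0.70,  -5.0),   # they decide you're too weird and pass
])

print(f"EV(lie)   = {ev_lie:+.2f}")    # +5.25
print(f"EV(truth) = {ev_truth:+.2f}")  # -0.50
print("say the convenient thing" if ev_lie > ev_truth else "say the true thing")
```

The arithmetic is trivial; the habit being sketched is listing the outcomes (including getting caught, losing credibility, and prison where relevant) before deciding which sentence to emit.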
Appearing Trustworthy Anyway
People often compress reputation into a single scalar, or worse a single boolean, when it really isn't.
If you mostly emit (verifiably) bad sentences that hurt the listener, they will eventually notice and stop listening. If you lie always and obviously about a specific topic they will disbelieve you about that topic but might still believe you about other things. If you lie only in ways where you can't get caught, like about your private preferences or beliefs ("yes, I am feeling fine", "yes, I do like your hat", "I don't know, I didn't see") then you're not going to be seen as dishonest even if you did somehow get caught.
Reputation is an over-simplification. The other person has in their mind a model of you and how you behave, that they use to make decisions. Your actions impact that model, and not in the Functional Decision Theory [? · GW] sense where they quickly end up with a perfect clone of you [? · GW]. They are not going to be able to accurately model the real you, because the real you is too big to fit in their tiny mental model.
Most of the time, the amount of brainpower you're putting into your facade is both wider and deeper than what they're putting into trying to get a read on you. They're distracted by other people and other tasks. They are going to apply limited heuristics because they have no other choice. If you want to be trusted, you do not even need to trick the other person, just the heuristic they're using to assess credibility.
Typically people will trust you more if you more accurately match their existing model of how a trustworthy person behaves (wearing a suit, sitting straight, answering politely, etc.) even when those things aren't causally related, and even when you are doing those things deceptively to gain their trust. If you show up to the job interview with your real personality and really are a person who would never mislead anyone, but that personality has features that correlate with dishonour in people they've met before, sucks to be you.
If you want a reputation and appearance of Trustworthiness, you have to roleplay the Flynn Rider of Appearing Trustworthy, not your real self who obsesses over the Actual Truth. Most people who obsess over the truth are crazy, and have so many mistaken beliefs that they're worse than liars. You do not want to look like them. The details and techniques of how to do this fill many books, so I have sprinkled some examples through this essay as a scavenger hunt. Or if you prefer, pick on any other persuasive writer you like and dig out all the ways they try to make you hallucinate credibility via text alone.
Cooperative Epistemics
I promised to get back to this earlier (a promise I made solely so I can point it out now, so you think me a promise-keeper and therefore more trustworthy (and I am now pointing that out too in the hope of another chance for you to see how this works)).
The motto of science is not "If we all tell the Truth to each other, we can form a little bubble where we collectively figure out all the big issues and come to fully understand the world". The motto of science is nullius in verba, "take nobody's word for it".
You cannot actually make a community of good epistemics on the expectation of trust and cooperation. It is like trying to start a communist utopia on the expectation that everybody just does their part and takes only what they need. People will not just.
Even if they would just, that wouldn't even be good enough. A person who is trying to be truthful can still be mistaken. They can still be selectively silenced. They can still trick themselves out of fear or regret, or because it's necessary to protect their ego. They can have a psychotic break. Their account could be hacked.
People who have heard other people claim the title of honest person, and for some reason believed them, are creating the worst possible set of incentives: the greatest possible force by which to bring sociopaths into your community, or to make otherwise good people decide just maybe this one time to fudge an answer. Nobody would notice. Everybody is just trusting them. Do them the charity of not pretending they wouldn't be making a terrible mistake by imagining they can take you or anyone else at their word. Build your Cooperative Epistemics on distrust instead.
Conclusion
I believe that we are all friends here: I am not an honest person. You can tell that's true because if it wasn't I wouldn't say it, would I? You can tell I think of you as friends, because if I didn't I'd lie and say I was honest. It is only because I believe this that I am admitting to the incentives around dishonesty, and trying to help you all navigate towards truth better in the choppy waters where you find yourselves.
Do not go through life as a pedantic idiot with no social model just because people on a forum for truthseekers who hate social models think it's admirable and virtuous. Especially do not do it if you are trying to accomplish some other task like saving lives or getting a job or getting someone to love you.
I imagine you are going to feel bad about it, like you are doing something wrong. That sense of wrongness will show on your face and make you look like a crazy person who panics over normal small talk, so you're going to have to get over it. To your benefit in getting over it, it isn't actually wrong.
Saying words is just an action, like any other action. You judge actions by their consequences. Are people made worse off or not? Most of the time, you're not poisoning a shared epistemic well. The well was already poisoned when you got here. It's more of a communal dumping ground at this point. Mostly you'd just be doing the sensible thing like everybody else does, except that you lack the instinct and intuition and need to learn to do it by rote.
When it makes sense to do so, when the consequences are beneficial, when society is such that you have to, when nobody wants the truth, when nobody is expecting the truth, when nobody is incentivising the truth: just lie to people.
51 comments
Comments sorted by top scores.
comment by Sonata Green · 2025-04-15T14:43:46.130Z · LW(p) · GW(p)
I feel like this post contains a valuable insight (it's virtuous to distrust and verify, to protect the community against liars), sandwiched inside a terrible framing (honor is for suckers).
comment by Julian Bradshaw · 2025-04-15T21:27:39.592Z · LW(p) · GW(p)
Have we forgotten Sam Bankman-Fried already? Let’s not renounce virtues in the name of expected value so lightly.
Rationalism was founded partly to disseminate the truth about AI risk. It is hard to spread the truth when you are a known liar, especially when the truth is already difficult to believe.
Replies from: D0TheMath, Jiro
↑ comment by Garrett Baker (D0TheMath) · 2025-04-19T15:14:37.615Z · LW(p) · GW(p)
It seems pretty likely SBF happened because everyone in EA was implicitly trusting everyone else in EA. If people were more suspicious of each other, that seems less likely to have been allowed to happen.
↑ comment by Jiro · 2025-04-17T01:48:34.144Z · LW(p) · GW(p)
Scott once had a post about how it's hard to get advice only to the people who need it.
Sam Bankman-Fried may have lied too much (although the real problem was probably goals that conflict with ours) but the essay here is aimed at the typical LW geek, and LW geeks tend not to lie enough.
Replies from: Julian Bradshaw
↑ comment by Julian Bradshaw · 2025-04-17T05:45:58.174Z · LW(p) · GW(p)
I'm not convinced SBF had conflicting goals, although it's hard to know. But more importantly, I don't agree rationalists "tend not to lie enough". I'm no Kantian, to be clear, but I believe rationalists ought to aspire to a higher standard of truthtelling than the average person, even if there are some downsides to that.
Replies from: eva_
↑ comment by eva_ · 2025-04-17T06:25:08.159Z · LW(p) · GW(p)
What would you say to the suggestion that rationalists ought to aspire to have the "optimal" standard of truthtelling, and that standard might well be higher or lower than what the average person is doing already (since there's no obvious reason why they'd be biased in a particular direction), and that we'd need empirical observation and seriously looking at the payoffs that exist to figure out approximately how readily to lie is the correct readiness to lie?
Replies from: Julian Bradshaw
↑ comment by Julian Bradshaw · 2025-04-17T21:58:24.557Z · LW(p) · GW(p)
since there's no obvious reason why they'd be biased in a particular direction
No I'm saying there are obvious reasons why we'd be biased towards truthtelling. I mentioned "spread truth about AI risk" earlier, but also more generally one of our main goals is to get our map to match the territory as a collaborative community project. Lying makes that harder.
Besides sabotaging the community's map, lying is dangerous to your own map too. As OP notes, to really lie effectively, you have to believe the lie. Well is it said, "If you once tell a lie, the truth is ever after your enemy." [LW · GW]
But to answer your question, it's not wrong to do consequentialist analysis of lying. Again, I'm not Kantian; tell the guy who's here to randomly murder you whatever lie you want in order to survive. But I think there are a lot of long-term consequences in less thought-experimenty cases that'd be tough to measure.
comment by jenn (pixx) · 2025-04-15T19:46:06.786Z · LW(p) · GW(p)
Thanks for writing this post! I think it's insightful, and agree about technical truthtelling being annoying. After I thought about it though, I come down on the side of disagreeing with your post, largely on practical grounds.
A few thoughts:
- You propose: Lie by default whenever you think it passes an Expected Value Calculation to do so, just as for any other action. This is fine, but the rest of the section doesn't make it clear that by default there are very few circumstances where it seems theoretically positive EV to lie (I think this situation happens once or twice a year for me at most, certainly not enough for there to be good feedback loops.) Lies are annoying to keep track of, they bite you in the ass often, and even if you're fine with lying, most people are bad at it. This means that the average liar will develop a reputation for dishonesty over time, which people generally won't tell you about, but will tell other people in your social network so they know to watch out. More explicitly, I disagree with the idea that since each person is on average not paying attention, lying is easy. This is because people love to gossip about other people in their social circle who are acting weird, and being noticed by any person means that the information will propagate across the group.
- You propose: Practice lying. Same as Tangled, this only works if you start very young. If you do this after high school, you will permanently burn social capital! In the case of you doing so with non-consensual subjects, you will be caught because you are bad at it, and people will think that you are deceptive or weird. In the case where you find parties who can actively help you become a more dishonest person, those people will reasonably trust you less, and also it seems generally unwise to trust such parties.
- Re: developing the skill of detecting the relative honesty of other people: I agree that this is a good skill to have, and that "people will lie to you" is a good hypothesis to entertain on a regular basis. However this is a separate skill tree, and also one where facts and logic™ can thankfully save you. I'm not terrible at assessing vibes, decent at thinking about if stories check out, and I also can tap into the network of mutual acquaintances if something seems subtly off or weird about a person. This has not made me any less terrible at lying.
- Advocating for more lying seems like especially bad advice to give to people with poor social skills, because they lack the skills to detect if they're succeeding at learning how to lie or if they're just burning what little social capital they have for no gain. For people with poor social skills, I recommend, like, reading books about improving your social skills or discussing their confusions with friends who are more clued in, and for autistic people I recommend developing a better model of how neurotypicals think. I have disagreements with some of the proposed models in the book, but I think A Field Guide to Earthlings by Ian Ford is a good place to start.
- The flip side to the average person not being totally honest, is that if you can credibly signal that you are unusually honest using expensive signals, there actually are many niches for you in the world, and people pay attention to that too. I touch on this in a previous post of mine on unusually scrupulous non-EA charities [LW · GW]. While it's true that a few folks on the website can stand to become a little savvier socially[1], I think in general it would be better if they chose to play to their advantage. This seems like the higher EV route to me. And this is actually one of the reasons that I'm annoyed about technical truth telling - people who practice it are technically honest but they're not even getting any good reputation for it because they're functionally deceiving people, badly.
- All of the best things in my life came from moments where it felt very scary to tell the truth, and then I was brave and did so anyways.
- ^ i think this case is generally overstated, btw. it's true that some lw people are bad at social skills but i think the median user is probably fine.
↑ comment by Jiro · 2025-04-17T22:39:23.017Z · LW(p) · GW(p)
Advocating for more lying seems like especially bad advice to give to people with poor social skills, because they lack the skills to detect if they’re succeeding at learning how to lie or if they’re just burning what little social capital they have for no gain.
I think the advice works better as "if it's a social situation, and the situation calls for what you consider to be a lie, don't let that stop you." You do not have to tell someone that you're not feeling fine when they ask how you're doing. You do not need to tell them that actually the color they painted their house in is really ugly. And you certainly shouldn't go to a job interview, get asked for your biggest weakness, and actually state your biggest weakness.
If someone reads the advice and thinks "Lying, that's an idea! I'll use it every time I can" they've overcorrected by far too much.
comment by Cole Wyeth (Amyr) · 2025-04-19T03:32:36.307Z · LW(p) · GW(p)
I think about this topic a lot, and I appreciate your dissent, particularly since it helped me organize my thoughts a little. That said, I think you're almost completely wrong. The best way to get at the problem is probably to start with your examples. Not exactly in order, sorry.
The lesson here is likewise clear: If your actual personality isn't good enough, pretend to be Flynn Rider to everyone at all times, with the sole carve-out being people who love you, like your mother or a princess. This works because people who love you will find your openness endearing, whereas everyone else will think you pathetic and use it against you.
Here's a true story. I once met a lovely and intelligent woman who didn't like that I'm a bit blunt and ruthlessly truthseeking. I didn't stop being that way, and mainly for that reason we didn't become romantically involved. A few months later I met a lovely, intelligent, reasonable, sometimes blunt, and open-minded woman who did like that I'm a bit blunt and ruthlessly truthseeking. We've been dating for 2.5 years now and I'm on balance very happy with how everything worked out.
A Paragon of Morality is out travelling, when he is beset by bandits. They demand he hand over his gold or they will kill him and take it from his corpse. This is not a decision-theoretic threat [? · GW] because the bandits value getting his gold more than they disprefer committing murder, but would otherwise avoid the murder if possible. If he hands over all his gold he will lose all his gold. If he hands over all the gold in his pockets, neglects the extra he has hidden in his sock, and says "I have given you all my gold" in a sufficiently convincing tone of voice, then he will lose less than all his gold.
This isn't Omega we're dealing with here; they're totally trickable by a moderately convincing performance. If he keeps some of the gold he can donate it to GiveWell-approved charities and save however many QALYs or whatever.
Does he have a moral obligation to lie?
He certainly doesn't have a moral obligation to tell the truth. But a lot of moral obligations change when someone points a gun at you. For instance, it becomes morally permissible (though not necessarily feasible) to shoot at them, or to give up what money you must and later steal it back at the first available opportunity. To me, the truth is something precious, and lying is like stealing the truth; it's permissible in some extreme and usually adversarial situations. With that said, I'm a bit of a rationalist dedicate/monk and I'd prefer to fight than lie - however I don't think everyone is rationally or otherwise compelled to follow suit, for reasons that will be further explained.
A Normally Honest Man is applying for a job as a Widget Designer. He has many years of industry experience in Widget Engineering. He has memorised the Widget Manufacturing Process. He's actually kind of obsessed with Widgets. Typically whenever a conversation becomes about Widgets he gushes openly and makes a bad impression with his in-laws. Since that incident he has developed the self control to pretend otherwise, and the rest of his personality is okay.
The interviewer works for a Widget Manufacturing company but seems to only care about Widgets a normal amount. He asks "How interested are you in Widgets?" He has learnt from previous job interviews that, if he answers honestly, the interviewer will think he is any of lying, insane, or too weird to deal with, and not hire him, even though this is not in the best financial interests of the company, were they fully informed.
Should he pretend to like widgets the amount most likely to get him hired, or does he have a moral obligation to keep answering honestly until he runs out of rent money and becomes homeless?
I don't know, he could say "Honestly, I enjoy designing widgets so much that others sometimes find it strange!" That would probably work fine. I think you can actually get away with a bit more if you say "honestly" first and then are actually sincere. This would also signal social awareness.
I realize that I am in some sense dodging your hypothetical but I think your hypothetical is the problem. You haven't thought hard enough about how this guy can succeed without lying.
A Self-Improvement and Epistemics Nerd has an online community for Self-Improvement and Epistemics Nerds. Half the people reading it are autists with bad social skills, who weren't at exactly the right age to be saved by Disney's Tangled. They struggle with navigating ordinary social situations and obtaining true beliefs because they're bad at lying, and insufficiently aggressive at anticipating it in others.
Would they be doing anyone a favour in encouraging a social norm of truthfulness and the expectation of truthfulness in others, when all those people will inevitably have to leave the computer one day and end up like the subjects of the previous two examples? Would they be making the world a better place?
Yes and yes.
Contrary to common belief, lesswrong is not an autism support group.
And you know what? I think it made the world much better. Now we have places online and in the real world (lighthaven, meetups, Berkeley) to gather and form a community around truthseeking and rationality. I like it. I'm glad it exists. I even think some important and powerful ideas have come out of it, and I think we've learned a lot together.
Saying words is just an action, like any other action. Whether the words are literally true or not is just a fact about the action, like any other fact about an action. It's not the morally important fact. You judge actions by their consequences, whether you expect it to lead to more good or bad. Then you take the action with the best consequences overall.
Saying words is an action, but it's not like any other action, because it can guide others towards or away from the truth. Similarly, torture is an action, but it's not like any other action, because it is when one person causes another immense pain intentionally.
Sure, we judge actions by their consequences, but we do not judge all actions in the same way. Some of them are morally repugnant, and we try very, very hard to never take them unless our hands are forced, and then only take them with immense regret and sorrow. There are various distinguishing factors. For instance, the consequences of torture seem likely to be almost always bad, so I never seriously consider it. Also, I don't want to be the sort of person who tortures people (both for instrumental reasons and to some extent for intrinsic reasons). It's actually pretty hard to fully disentangle my dispreference for torture from its consequences, because torture is inherently about causing suffering and I don't want either suffering to exist or to cause it (though the former is far more important to me).
My feelings about lying are the same. I love the truth, I love the truthseeking process, I love seeing curiosity in the eyes of children and adults and kittens. I hate lies, confusion, and deceiving others. This is partially because the truth is really useful for agents (and I like agents to be able to exercise their potential, typically), it's partially because telling the truth seems to be best for me in most cases, and it's partially because I just value truth.
Rationality can be about Winning [LW · GW], or it can be about The Truth, but it can't be about both. Sooner or later, your The Truth will demand you shoot yourself in the foot, while Winning will offer you a pretty girl with a country-sized dowry. The only price will be presenting various facts about yourself in the most seductive order instead of the most informative one.
It can totally be about both if truth is part of winning. Yes, there are sometimes tradeoffs, and truth is not the singular source of value. But I think most of us value it very strongly, so presenting these two axes as orthogonal is highly misleading. And I want to share the truth with other people in case they decide to value it too - if not, they can always choose not to face it.
Also, there's a missing mood in your example. When you value the truth, being honest tends to get you a lot of other things that you value; you tend to end up surrounded by the right people for you, being the kind of person you can respect, in the kind of place where you belong, even if you have to create it.
Now, you're probably going to say that I can't convince you by pure reason to intrinsically value the truth. That's right. However, I also can't convince you by pure reason to intrinsically value literally anything, and if you had written an essay about how we should consider killing or torturing people because it's just an action like any other, I would have objected on similar grounds. You're totally missing the fact that it's wrong, and also (separately!) the consequences of following your advice would probably be bad for you, and certainly for most of us, over the long run.
Replies from: eva_
↑ comment by eva_ · 2025-04-19T06:40:12.519Z · LW(p) · GW(p)
I enjoyed reading this reply, since it's exactly the position I'm dissenting against phrased perfectly to make the disagreements salient.
I don't know, he could say "Honestly, I enjoy designing widgets so much that others sometimes find it strange!" That would probably work fine. I think you can actually get away with a bit more if you say "honestly" first and then are actually sincere. This would also signal social awareness.
I think this is what Eliezer describes as "The code of literal truth only lets people navigate anything like ordinary social reality to the extent that they are very fast on their verbal feet". This reply works if you can come up with it, or notice this problem in advance and plan it out, but in a face to face interview it takes quite a lot of skill (more than most people have) to phrase something like that so that it comes off smoothly on a first try and without pausing to think for ten minutes. People who do not have the option of doing this because they didn't think of it quickly enough get to choose between telling the truth as it sits in their head or else the first lie they come up with in the time it took the interviewer to ask the question.
I'm a bit of a rationalist dedicate/monk and I'd prefer to fight than lie - however I don't think everyone is rationally or otherwise compelled to follow suit, for reasons that will be further explained.
Now, you're probably going to say that I can't convince you by pure reason to intrinsically value the truth. That's right. However, I also can't convince you by pure reason to intrinsically value literally anything
This is exactly the heart of the disagreement! Truthtelling is a value, and you can if you want assign it so high a utility score that you wouldn't tell one lie to stop a genocide, but that's a fact about the values you've assigned things, not about what behaviours are rational in the general case or whether other people would be well-served by adopting the behavioural norms you'd encourage of them. It shouldn't be treated as intrinsically tied to rationalism, for the same reason that Effective Altruism is a different website. In the general case, do the actions that get you the things you value, and lying is just an action, an action that harms some things and benefits others that you may or may not value.
I could try to attack the behaviour of people claiming this value if I wanted, since it doesn't seem to make a huge amount of sense: If you value The Truth for its own sake while still being a Utilitarian, how much disutility is one lie in human lives? If it is more than 1/5000, then since the average person tells more than 5000 lies in their life, it'd be a public good to kill newborns before they can learn language and get started; and if it is less than 1/5000, GiveWell sells lives for ~$5k each, so you should be happy lying for a dollar. This is clearly absurd, and what you value is your own truthtelling or maybe the honesty of specifically your immediate surroundings, but again why? What is it you're actually valuing, and have you thought about how to buy more of it?
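(To spell out the arithmetic in that reductio, here is a minimal sketch; the ~5,000 lifetime lies and the ~$5k-per-life GiveWell figure are the same rough numbers used above, not precise estimates.)

```python
# The reductio arithmetic above, using its own rough, illustrative numbers.
lies_per_lifetime = 5_000    # assumed average lies told over a lifetime
dollars_per_life = 5_000     # rough GiveWell cost to save one life

# Break-even disutility of a single lie, measured in lives:
lives_per_lie = 1 / lies_per_lifetime        # 0.0002 lives

# Above this threshold, one person's lifetime of lying outweighs a whole life;
# below it, priced via GiveWell, a single lie costs less than a dollar:
dollars_per_lie = lives_per_lie * dollars_per_life

print(f"break-even cost per lie: {lives_per_lie} lives ≈ ${dollars_per_lie:.2f}")
```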
The meaning of the foot fetish tangent at the start is, I don't understand this value that gets espoused as so important or how it works internally. It'd be incredibly surprising to learn evolution baked something like that into the human genome. I don't think Disney gave it to you. If it is culture it is not the sort of culture that happens because your ancestors practiced it and obtained success, but instead your parents told you not to lie because they wanted the truth from you whether it served you well to give it to them or not, and then when you grow up you internalise that commandment even as everyone else is visibly breaking it in front of you. I have a hard time learning the internals of this value that many others claim to hold, because they don't phrase it like a value, they phrase it like an iron moral law that they must obey up to the highest limits of their ability without really bothering to do consequentialism about it, even those here who seem like devout consequentialists about other moral things like human lives.
Replies from: Amyr
↑ comment by Cole Wyeth (Amyr) · 2025-04-19T13:53:38.777Z · LW(p) · GW(p)
I don’t think you got it.
What did you think about my objection to the Flynn example, or the value of the rationalist community as something other than an autism support group? I feel like you sort of ignored my stronger points and then singled out the widget job interview response because it seems to miss the point, but without engaging with my explanation of how it doesn’t miss the point. The way that you constructed the hypothetical, there was plenty of time to come up with an honest way to talk about how much he enjoyed widgets.
One of the things I value is people knowing and understanding the truth, which I find to be a beautiful thing. It’s not because someone told me to be honest at some point, it’s because I’ve done a lot of mathematics and read a lot of books and observed that the truth is beautiful.
I also wouldn’t shoot someone so I could tell someone else the truth. I don’t know where you got these numbers.
I suppose I’m not completely longtermist about my pursuit of truth, but I’m not completely longtermist about my other values either - sometimes the short term is easier to predict and get feedback from etc.
↑ comment by eva_ · 2025-04-19T14:31:55.309Z · LW(p) · GW(p)
If that was me not getting it then probably I am not going to get it and continuing to talk has diminishing returns, but I'll try to answer your other questions too and am happy to continue replying in what I hope comes across as mutual good faith.
What did you think about my objection to the Flynn example
It was incredibly cute but the kind of thing where people's individual results tend to vary wildly. I am glad you are happy even if it was achieved by a different policy, but I don't think any of my main claims are strongly undermined by it.
or the value of the rationalist community as something other than an autism support group
I agree the rationalist community is not actually an autism support group, and in particular that it has value as a way for people who want to believe true things to collaborate around getting more accurate beliefs, as well as for people who want to improve the ways they think, make better decisions, optimise their lives etc. I think my thesis that truthtelling does not have the same essential character as truthseeking or truthbelieving is if not correct at least coherent and justifiable, and can be argued on its merits. I can want to believe true things so I can make better decisions without having an ideological commitment to honest speech, and people can collaborate around reaching true conclusions based on interrogating positions and seeking evidence rather than expecting and assuming honesty. For example I do not think at any point in interrogating my claims in this post you have had to assume I am honest, because I am trying to methodically attach my reasoning and justifications to everything I say and am not really expecting to be believed about things where I don't.
The way that you constructed the hypothetical, there was plenty of time to come up with an honest way to talk about how much he enjoyed widgets.
This seems like a non-central objection. If it is your only objection, note that I could with more careful thought have constructed a hypothetical where there was even more time pressure and an honest way to achieve their goal was even less within reach, and then we'd be back at the position my first hypothetical was intended to provoke. Unless I suppose you think there is no possible plausible social situation ever where refusing to lie predictably backfires, but I somehow really doubt that.
I also wouldn’t shoot someone so I could tell someone else the truth. I don’t know where you got these numbers.
The only number in my "how much bad is a lie if you think a lie is bad" hypothetical is taken from https://www.givewell.org/charities/top-charities under "effectiveness", rounded up. The assumption that you have to assign a number is a reference to coherent decisions imply consistent utilities [LW · GW], and the other numbers are made up to explore the consequences of doing so.
One of the things I value is people knowing and understanding the truth, which I find to be a beautiful thing. It’s not because someone told me to be honest at some point, it’s because I’ve done a lot of mathematics and read a lot of books and observed that the truth is beautiful.
This is a more interesting reason than what I had (pessimistically) imagined, and I would count it a valid response to the side point I was making that intrinsic concern for personal truthtelling is prima facie weird. I think I agree with you that the truth is beautiful, I also read mathematics for fun and have observed it and felt the same way. I just don't attach the same feeling to honest speech. I would want to retort that people knowing the truth is not always best served by you saying the truth, and you could still justify making terribly cutthroat utilitarian trade-offs around e.g. committing fraud to get money to fund teaching mathematics to millions of people in the third world, since it increases the total number of people knowing and understanding the truth overall. I also acknowledge regular utilitarians don't behave like that for obvious second order reasons, but my position is only that you have to think through the actual decision and not just assume the conclusion.
I feel like you sort of ignored my stronger points ... without engaging with my explanation of how it doesn’t miss the point
If I ignored your strongest argument it was probably because I didn't think it was central, didn't think it was your strongest, or otherwise misunderstood it. I'm actually unsure, looking back, which of the parts I didn't focus on you meant for me to focus on. The "Sure, we judge actions by their consequences, but we do not judge all actions in the same way. Some of them are morally repugnant, and we try very, very hard to never take them unless our hands are forced" part maybe? The example you give is torture, which 1) always causes immediate severe pain by the definition of torture and 2) has been basically proven to be never useful for any goal other than causing pain in any situation you might reasonably end up in. Saying Torture is always morally repugnant is much more supported by evidence, and is very different from saying the same of an action that frequently hurts nobody and happens a hundred times a day in normal small talk.
Replies from: Amyr
↑ comment by Cole Wyeth (Amyr) · 2025-04-19T16:43:08.748Z · LW(p) · GW(p)
I agree that if I could produce a wonderful truthseeking society by telling a few lies it would be worth it, I just think that extreme sincere honesty is a better path for predictable first and second order reasons.
comment by Jiro · 2025-04-15T05:41:23.351Z · LW(p) · GW(p)
He asks “How interested are you in Widgets?” He has learnt from previous job interviews that, if he answers honestly, the interviewer will think he is any of lying, insane, or too weird to deal with, and not hire him, even though this is not in the best financial interests of the company, were they fully informed.
By the standard "intentionally or knowingly cause the other person to have false beliefs", answering 'honestly' would be lying, and answering in a toned down way would not (because it maximizes the truth of the belief that the interviewer gets).
comment by Said Achmiz (SaidAchmiz) · 2025-04-20T01:49:42.788Z · LW(p) · GW(p)
Detailed commentary, as promised:
Rationality can be about Winning [LW · GW], or it can be about The Truth, but it can’t be about both. Sooner or later, your The Truth will demand you shoot yourself in the foot, while Winning will offer you a pretty girl with a country-sized dowry. The only price will be presenting various facts about yourself in the most seductive order instead of the most informative one.
If your highest value isn’t Winning, you do not get to be surprised when you lose. You do not even get to be disappointed. By revealed preference, you have to have a mad grin across your face, that you were able to hold fast to your highest-value-that-isn’t-winning all the way to the bitter end.
What if my highest value is getting a pretty girl with a country-sized dowry, while having not betrayed the Truth?
Then, if I only get one of those things, it’s worse than getting both of those things (and possibly so much worse that I don’t even consider them worthwhile to pursue individually; but this part is optional). Is there a law against having values like this? There is not. Do you get to insist that nevertheless I have to choose, and can’t get both? Nope, because the utility function is not up for grabs[1]. Now, maybe I will get both and maybe I won’t, but there’s no reason I can’t have “actually, both, please” as my highest value.
The fallacy here is simply that you want to force me to accept your definition of Winning, which you construct so as not to include The Truth. But why should I do that? The only person who gets to define what counts as Winning for me is me.
In short, no, Rationality absolutely can be about both Winning and about The Truth. This is no more paradoxical than the fact that Rationality can be about saving Alice’s life and also about saving Bob’s life. You may at some point end up having to choose between saving Alice and saving Bob, and that would be sad; and you would end up making some choice, in some manner, as fits the circumstance. The existence of this possibility is not particularly interesting, and has no deep implications. The goal is “both”.
(That said, the bit about the two armies was A++, and I strong-upvoted the post just for that.)
Technical Truth is as Bad as Lying
I wholly agree with this section…
… except for the last paragraph—specifically, this:
The purpose of a word is to carve reality at a joint useful for the discussion taking place, and we should pause here to note that the joint in question isn’t “emits true statements”, it’s “emits statements that the other party is better off for listening to”.
No, it’s not.
I’m not sure where this meme comes from[2], but it’s just wrong. Unless you are, like, literally my mother, “is the other party, specifically, better off for listening to this thing that I am saying” constitutes part of my motivation for saying things approximately zero percent of the time. It’s just not a relevant consideration at all—and I don’t think I’m even slightly unusual in this.
I say and write things[3] because I consider those things to be true, relevant, and at least somewhat important. That by itself is very often (possibly usually) sufficient for a thing to be useful in a general sense (i.e., I think that the world is better for me having said it, which necessarily involves the world being better for the people in it). Whether the specific person to whom the thing is nominally or factually addressed will be better off as a result of what I said or wrote is not my concern in any way other than that.
Sometimes I am additionally motivated by some specific usefulness of some specific utterance, but even in the edge case where the expected usefulness is exclusive to the single person to whom the utterance is addressed, I don’t consider whether that person will be better off for having listened to the thing in question. Maybe they won’t be! Maybe they will somehow be harmed. That’s not my business; they have the relevant information, which is true (and which I violated no moral precepts by conveying to them)—the rest is up to them.
Therefore if someone says “but if you lie, they’ll be better off”, my response is “weird thing to bring up; what’s the relevance?”.
Biting the Bullet
Basically correct. I will add that not only is it not always morally obligatory to tell the truth, but in fact it is sometimes morally obligatory to lie. Sometimes, telling the truth is wrong, and doing so makes you a bad person. Therefore the one who resolves to always tell the truth, no matter what, can in fact end up predictably doing evil as a direct result of that resolution.
There is no royal road to moral perfection. There is no way to get around the fact that you will always need to apply all of your faculties, the entirety of your reason and your conscience and everything else that is part of you, in order to be maximally sure (but never perfectly sure!) that you are doing the right thing. The moment you replace your brain with an algorithm, you’ve gone wrong. This fact does not become any less true even if the algorithm is “always tell the truth”. You can and should make rules, and you can and should follow them (rule consequentialism is superior to act consequentialism for all finite agents), and yet even this offers no escape from that final responsibility, which is always yours and cannot be offloaded to anyone or anything, ever.
Lie by default whenever you think it passes an Expected Value Calculation to do so, just as for any other action.
No, this is a terrible idea. Do not do this. Act consequentialism does not work. It doesn’t work no matter how much we say “yeah but just make up numbers” or “yeah you can’t actually do the calculation, but let’s pretend we can”. The numbers are fake and meaningless and we can’t do the calculation.
It’s still a better policy than just trusting people.
Definitely don’t just trust people. Trust, but verify. (See also. [LW(p) · GW(p)])
When your friends ask you about how trustworthy you are, make no implications that you are abnormally honest. Tell them truthfully (if it is safe to do so) about all the various bad incentives, broken social systems, and ordinary praxis that compel dishonesty from you and any other person, even among friends, and give them sincere advice about how to navigate these issues.
This, I agree with.
Cooperative Epistemics
I agree with all of this section. What I’ll note here is that there are people who will campaign very hard against the sort of thing you are advocating for here (“Do them the charity of not pretending they wouldn’t be making a terrible mistake by imagining they can take you or anyone else at their word. Build your Cooperative Epistemics on distrust instead.”), and for the whole “trying to start a communist utopia on the expectation that everybody just” thing. I agree that this actually has the opposite result to what anyone sensible would want.
Saying words is just an action, like any other action. You judge actions by their consequences. Are people made worse off or not? Most of the time, you’re not poisoning a shared epistemic well. The well was already poisoned when you got here. It’s more of a communal dumping ground at this point. Mostly you’d just be doing the sensible thing like everybody else does, except that you lack the instinct and intuition and need to learn to do it by rote.
When it makes sense to do so, when the consequences are beneficial, when society is such that you have to, when nobody wants the truth, when nobody is expecting the truth, when nobody is incentivising the truth: just lie to people.
This, on the other hand, is once again a terrible idea.
Look, this is going to sound fatuous, but there really isn’t any better general rule than this: you should only lie when doing so is the right thing to do.
“When the consequences are beneficial”—no, you can’t tell when the consequences will be beneficial, and anyhow act consequentialism does not and cannot work, so instead you should be a rule consequentialist and adopt rules about when lying is right, and when lying is wrong, and only lie in the first case and not the second case. (And you should have meta-rules about when you make your rules known to other people—hint, the answer is “almost always”—because much of the value of rules like this comes from them being public knowledge. And so on, applying all the usual ethical and meta-ethical considerations.)
“When society is such that you have to”—too general; furthermore, people, and by “people” I mean “dirty stinking liars who lie all the time, the bastards”, use this sort of excuse habitually, so you should be extremely wary of it. However, sometimes it actually is true. Once again you cannot avoid having to actually think about this sort of thing in much more detail than the OP.
“When nobody wants the truth”—situations like this are often the ones where telling the truth is exceptionally important and the right thing to do. But sometimes, the opposite of that.
“When nobody expects the truth”—ditto.
“When nobody is incentivizing the truth”—ditto.
The well was already poisoned when you got here. It’s more of a communal dumping ground at this point.
Wells can be cleaned, and new wells can be dug. (The latter is often a prerequisite for the former.)
The metaphorical utility function, that is. ↩︎
Although I have certain suspicions. ↩︎
That is, things which should be construed as in some sense possibly being true or false. In other words, I do not include here things like jokes, roleplaying, congratulatory remarks, flirting, requests, exclamations, etc. ↩︎
↑ comment by eva_ · 2025-04-20T05:53:58.156Z · LW(p) · GW(p)
I consider you to be basically agreeing with me on 90% of what I intended, and your disagreements on the other 10% to be the best written of any so far, and basically valid in all the places I'm not replying to. I still have a few objections:
What if my highest value is getting a pretty girl with a country-sized dowry, while having not betrayed the Truth? ... In short, no, Rationality absolutely can be about both Winning and about The Truth.
I agree the utility function isn't up for grabs and that that is a coherent set of values to have, but I have this criticism that I want to make that I feel I don't have the right language to make. Maybe you can help me. I want to call that utility function perverse. The kind of utility function that an entity is probably mistaken to imagine itself as having.
For any particular situation you might find yourself in, for any particular sequence of actions you might do in that situation, there is a possible utility function you could be said to have such that the sequence of actions is the rational behaviour of a perfect omniscient utility maximiser. If nothing else, pick the exact sequence of events that will result, declare that your utility function is +100 for that sequence of events and 0 for anything else, and then declare yourself a supremely efficient rationalist.
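(A minimal formal sketch of this construction, assuming $\omega^{*}$ stands for the exact sequence of events that will in fact occur and the number 100 is arbitrary:

$$U(\omega) = \begin{cases} 100 & \text{if } \omega = \omega^{*} \\ 0 & \text{otherwise} \end{cases}$$

Under this function, whatever was going to happen anyway comes out as trivially "optimal".)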
Actually doing that would be a mistake. It wouldn't be making you better. This is not a way to succeed at your goals, this is a way to observe what you're inclined to do anyway and paint the target around it. Your utility function (fake or otherwise) is supposed to describe stuff you actually want. Why would you want specifically that in particular?
I think the stronger version of Rationality is the version that phrases it as being about getting the things you want, whatever those things might be. In that sense, if The Truth is merely a value, you should carefully segment it in your brain out from your practice of rationality: your rationality is about mirroring the mathematical structure best suited for obtaining goals, and then whatever you value The Truth at above its normal instrumental value is something you buy where it's cheapest, like all your other values. Mixing the two makes both worse: you pollute your concept of rational behaviour with a love of the truth (and therefore, for example, are biased towards imagining that other people who display rationality are probably honest, or that other people who display honesty are probably rational), and you damage your ability to pursue the truth by not putting it in the values category, where it belongs and where it will lead you to try to cheaply buy more of it.
Of course, maybe you're just the kind of guy who really loves mixing his value for The Truth in with his rationality into a weird soup. That'd explain your actions without making you a walking violation of any kind of mathematical law; it'd just be a really weird thing for you to innately want.
I am still trying to find a better way to phrase this argument such that someone might find it persuasive of something, because I don't expect this phrasing to work.
I say and write things[3] [LW · GW] because I consider those things to be true, relevant, and at least somewhat important. That by itself is very often (possibly usually) sufficient for a thing to be useful in a general sense (i.e., I think that the world is better for me having said it, which necessarily involves the world being better for the people in it). Whether the specific person to whom the thing is nominally or factually addressed will be better off as a result of what I said or wrote is not my concern in any way other than that.
I think I meant something subtly different from what you've taken that part to mean. I think you understand that, if other people noticed a pattern that everything you said was false, irrelevant, or unimportant, they would eventually stop bothering to listen when you talk, and this would mean you'd lose the ability to get other people to know things, which is a useful ability to have. This is basically my position! Whether the specific person you address is better off in each specific case isn't material, because you aren't trying to always make them better off; you're just trying to avoid being seen as someone who predictably doesn't make them better off. I agree that calculating the full expected consequences to every person of every thing you say isn't necessary for this purpose.
No, this is a terrible idea. Do not do this. Act consequentialism does not work. ... Look, this is going to sound fatuous, but there really isn’t any better general rule than this: you should only lie when doing so is the right thing to do.
I agree that Act Consequentialism doesn't really work. I was trying to be a Rule Consequentialist instead when I wrote the above rule. I agree that that sounds fatuous, but I think the immediate feeling is pointing at a valid retort: you haven't operationalized this position into a decision process that a person can actually do (or even pretend to do).
I took great effort to try to write down my policy as something explicit, in terms a person could try to do (even though I am willing to admit it is not really correct, mostly because of finite agent problems), because a person can't be a real Rule Consequentialist without actually having a Rule. What is the rule for "Only lie when doing so is the right thing to do"? It sounds like an instruction to pass the act to my rightness calculator, but if I program that rule into my rightness calculator, and then give it any input, it gets into an infinite loop. I have an Act Consequentialist rightness calculator as a backup, but if I pass the rule "only lie when doing so is the right thing to do" into that as a backup, I'm just right back at doing act consequentialism.
If you can write down a better rule for when to lie than what I've put above (that is also better than "never", or "only by coming up with galaxy-brained ways it technically isn't lying", or Eliezer's meta-honesty idea that I've read before), I'd consider you to have (possibly) won this issue, but that's the real price of entry. It's not enough to point out the flaws where all my rules don't work; you have to produce rules that work better.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2025-04-20T07:29:53.326Z · LW(p) · GW(p)
… stuff about perverse utility functions …
Well, there’s a couple of things to say in response to this… one is that wanting to get the girl / dowry / happiness / love / whatever tangible or intangible goals as such, and also wanting to be virtuous, doesn’t seem to me to be a weird or perverse set of values. In a sense, isn’t this sort of thing the core of the project of living a human life, when you put it like this? “I want to embody all the true virtues, and also I want to have all the good things.” Seems pretty natural to me! Of course, it’s also a rather tall order (uh, to put it mildly…), but that just means that it provides a challenge worthy of one who does not fear setting high goals for himself.
Somewhat orthogonally to this, there is also the fact that—well, I wrote the footnote about the utility function being metaphorical for a reason. I don’t actually think that humans (with perhaps very rare exceptions) have utility functions; that is, I don’t think that our preferences satisfy the VNM axioms—and nor should they. (And indeed I am aware of so-called “coherence theorems” and I don’t believe in them [LW · GW].)
With that constraint (which I consider an artificial and misguided one) out of the way, I think that we can reason about things like this in ways that make more sense. For instance, trying to fit truth and honesty into a utility framework makes for some rather unnatural formulations and approaches, like talking about buying more of it, or buying it more cheaply, etc. I just don’t think that this makes sense. If the question is “is this person honest, trustworthy, does he have integrity, is he committed to truth”, then the answer can be “yes”, and it can be “no”, and it could perhaps be some version of “ehhh”, but if it’s already “yes” then you basically can’t buy any more of it than that. And if it’s not “yes” and you’re talking about how cheaply you can buy more of it, then it’s still not “yes” even after you complete your purchase.
(This is related to the notion that while consequentialism may be the proper philosophical grounding for morality, and deontology the proper way to formulate and implement your morality so that it’s tractable for a finite mind, nevertheless virtue ethics is “descriptively correct as an account of how human minds implement morality, and (as a result) prescriptively valid as a recommendation of how to implement your morality in your own mind, once you’ve decided on your object-level moral views”. Thus you can embody the virtue of honesty, or fail to do so. You can’t buy more of embodying some virtue by trading away some other virtue; that’s just not how it works.)
I think you understand that, if other people noticed a pattern that everything you said was false, irrelevant, or unimportant, they would eventually stop bothering to listen when you talk, and this would mean you’d lose the ability to get other people to know things, which is a useful ability to have.
Yes, of course; but…
Whether the specific person you address is better off in each specific case isn’t material, because you aren’t trying to always make them better off; you’re just trying to avoid being seen as someone who predictably doesn’t make them better off.
… but the preceding fact just doesn’t really have much to do with this business of “do you make people better off by what you say”.
My claim is that people (other than “rationalists”, and not even all or maybe even most “rationalists” but only some) just do not think of things in this way. They don’t think of whether their words will make their audience better off when they speak, and they don’t think of whether the words of other people are making them better off when they listen. This entire framing is just alien to how most people do, and should, think about communication in most circumstances. Yeah, if you lie all the time, people will stop believing you. That’s just directly the causation here, it doesn’t go through another node where people compute the expected value of your words and find it to be negative.
(Maybe this point isn’t particularly important to the main discussion. I can’t tell, honestly!)
I took great effort to try to write down my policy as something explicit, in terms a person could try to do (even though I am willing to admit it is not really correct, mostly because of finite agent problems), because a person can’t be a real Rule Consequentialist without actually having a Rule. What is the rule for “Only lie when doing so is the right thing to do”? It sounds like an instruction to pass the act to my rightness calculator, but if I program that rule into my rightness calculator, and then give it any input, it gets into an infinite loop. I have an Act Consequentialist rightness calculator as a backup, but if I pass the rule “only lie when doing so is the right thing to do” into that as a backup, I’m just right back at doing act consequentialism.
If you can write down a better rule for when to lie than what I’ve put above (that is also better than “never”, or “only by coming up with galaxy-brained ways it technically isn’t lying”, or Eliezer’s meta-honesty idea that I’ve read before), I’d consider you to have (possibly) won this issue, but that’s the real price of entry. It’s not enough to point out the flaws where all my rules don’t work; you have to produce rules that work better.
Well… let’s start with the last bit, actually. No, it totally is enough to point out the flaws. I mean, we should do better if we can, of course; if we can think of a working solution, great. But no, pointing out the flaws in a proffered solution is valuable and good all by itself. (“What should we do?” “Well, not that.” “How come?” “Because it fails to solve the problem we’re trying to solve.” “Ok, yeah, that’s a good reason.”) In other words: “any solution that solves the problem is acceptable; any solution that does not solve the problem is not acceptable”. Act consequentialism does not solve the problem.
But as far as my own actual solution goes… I consider Robin Hanson’s curve-fitting approach (outlined in sections II and III of his paper “Why Health is Not Special: Errors in Evolved Bioethics Intuitions”) to be the most obviously correct approach to (meta)ethics. In brief: sometimes we have very strong moral intuitions (when people speak of listening to their conscience, this is essentially what they are referring to), and as those intuitions are the ultimate grounding for any morality we might construct, if the intuitions are sufficiently strong and consistent, we can refer to them directly. Sometimes we are more uncertain. But we also value consistency in our moral judgments (for various good reasons). So we try to “fit a curve” to our moral intuitions—that is, we construct a moral system that tries to capture those intuitions. Sometimes the intuitions are quite strong, and we adjust the curve to fit them; sometimes we find weak intuitions which are “outliers”, and we judge them to be “errors”; sometimes we have no data points at all for some region of the graph, and we just take the output of the system we’ve constructed. This is necessarily an iterative process.
If the police arrest your best friend for murder, but you know that said friend spent the whole night of the alleged crime with you (i.e. you’re his only alibi and your testimony would completely clear him of suspicion), should you tell the truth to the police when they question you, or should you betray your friend and lie, for no reason at all other than that it would mildly inconvenience you to have to go down to the police station and give a statement? Pretty much nobody needs any kind of moral system to answer this question. It’s extremely obvious what you should do. What does act and/or rule consequentialism tell us about this? What about deontology, etc.? Doesn’t matter, who cares, anyone who isn’t a sociopath (and probably even most sociopaths who aren’t also very stupid) can see the answer here, it’s absurdly easy and requires no thought at all.
What if you’re in Germany in 1938 and the Gestapo show up at your door to ask whether you’re hiding any Jews in your attic (which you totally are)—what should you do? Once again the answer is easy, pretty much any normal person gets this one right without hesitation (in order to get it wrong, you need to be smart enough to confuse yourself with weird philosophy).
So here we’ve got two situations where you can ask “is it right to lie here, or to tell the truth?” and the answer is just obvious. Well, we start with cases like this, we think about other cases where the answer is obvious, and yet other cases where the answer is less obvious, and still other cases where the answer is not obvious at all, and we iteratively build a curve that fits them as well as possible. This curve should pass right through the obvious-answer points, and the other data points should be captured with an accuracy that befits their certainty (so to speak). The resulting curve will necessarily have at least a few terms, possibly many, definitely not just one or two. In other words, there will be many Rules.
(How to evaluate these rules? With great care and attention. We must be on the lookout for complexity, we must continually question whether we are in fact satisfying our values / embodying our chosen virtues, etc.)
Here’s an example rule, which concerns situations of a sort of which I have written before: if you voluntarily agree to keep a secret, then, when someone who isn’t in on the secret asks you about the secret, you should behave as you would if you didn’t know the secret. If this involves lying (that is, saying things which you know to be false, but which you would believe to be true if you were not in possession of this secret which you have agreed, of your own free will, to keep), then you should lie. Lying in this case is right. Telling the truth in this case is wrong. (And, yes, trying to tell some technical truth that technically doesn’t reveal anything is also wrong.)
Is that an obvious rule? Certainly not as obvious as the rules you’d formulate to cover the two previous example scenarios. Is it correct? Well, I’m certainly prepared to defend it (indeed, I have done so, though I can’t find the link right now; it’s somewhere in my comment history). Is a person who follows a rule like this an honest and trustworthy person, or a dishonest and untrustworthy liar? (Assuming, naturally, that they also follow all the other rules about when it is right to tell the truth.) I say it’s the former, and I am very confident about this.
I’m not going to even try to enumerate all the rules that apply to when lying is wrong and when it’s right. Frankly, I think that it’s not as hard as some people make it out to be, to tell when it is necessary to tell the truth and when one should instead lie. Mostly, the right answer is obvious to everyone, and the debates, such as they are, mostly boil down to people trying to justify things that they know perfectly well cannot be justified.
Indeed, there is a useful heuristic that comes out of that. In these discussions, I have often made this point (as I did in my top-level comment) that it is sometimes obligatory to lie, and wrong to tell the truth. The reason I keep emphasizing this is that there’s a pattern one sees: the arguments most often concern whether it’s permissible to lie. Note: not, “is it obligatory to tell the truth, or is it obligatory to lie”—but “is it obligatory to tell the truth, or do I have no obligation here and can I just lie”.
I think that this is very telling. And what it tells us (with imperfect but nevertheless non-trivial certainty) is that the person asking the question, or making the argument against the obligation, knows perfectly well what the real—which is to say, moral—answer is. Yes, the right thing to do is to tell the truth. Yes, you already know this. You have reasons for not wanting to tell the truth. Well, nobody promised you that doing the right thing will always be personally convenient! Nevertheless, very often, there is no actual moral uncertainty in anyone’s mind, it’s just “… ok, but do I really have to do the right thing, though”.
This heuristic is not infallible. For example, it does not apply to the case of “lying to someone who has no right to ask the question that they’re asking”: there, it is indeed permissible to lie[1], but no particular obligation either to lie or to tell the truth. (Although one can make the case for the obligation to lie even in some subset of such cases, having to do with the establishment and maintenance of certain communicative norms.) But it applies to all of these [LW(p) · GW(p)], for instance.
The bottom line is that if you want to be honest, to be trustworthy, to have integrity, you will end up constructing a bunch of rules to aid you in epitomizing these virtues. If you want to try to put together a complete list of such rules, that’s certainly a project, and I may even contribute to it, but there’s not much point in expecting this to be a definitively completable task. We’re fitting a curve to the data provided by our values, which cannot be losslessly compressed.
Assuming that certain conditions are met—but they usually are. ↩︎
↑ comment by eva_ · 2025-04-20T09:49:47.058Z · LW(p) · GW(p)
(Maybe this point isn’t particularly important to the main discussion. I can’t tell, honestly!)
Yeah I think it's an irrelevant tangent where we're describing the same underlying process a bit differently, not really disagreeing.
Frankly, I think that it’s not as hard as some people make it out to be, to tell when it is necessary to tell the truth and when one should instead lie. Mostly, the right answer is obvious to everyone, and the debates, such as they are, mostly boil down to people trying to justify things that they know perfectly well cannot be justified.
... the arguments most often concern whether it’s permissible to lie. Note: not, “is it obligatory to tell the truth, or is it obligatory to lie”—but “is it obligatory to tell the truth, or do I have no obligation here and can I just lie”. I think that this is very telling. And what it tells us (with imperfect but nevertheless non-trivial certainty) is that the person asking the question, or making the argument against the obligation, knows perfectly well what the real—which is to say, moral—answer is. Yes, the right thing to do is to tell the truth.
I think I disagree with this framing. In my model of the sort of person who asks that, they're sometimes selfish-but-honourable people who have noticed telling the truth ends badly for them and will do it if it is an obligation but would prefer to help themselves otherwise, but they are just as often altruistic-and-honourable people who have noticed telling the truth ends badly for everyone and are trying to convince themselves it's okay to do the thing that will actually help. There are also selfish-but-cowardly people who just care whether they'll be socially punished for lying, or selfish-and-cruel people champing at the bit to punish someone else for it, and similar, but moral arguments don't move them either way, so it doesn't matter.
More strongly, I disagree because I think a lot of people have harmed themselves or their altruistic causes by failing to correctly determine where the line is, either lying when they shouldn't or not lying when they should, and it is to the community's shame that we haven't been more help in illuminating how to tell those cases apart. If smart, hardworking people are getting it wrong so often, you can't just say the task is easy.
If you want to try to put together a complete list of such rules, that’s certainly a project, and I may even contribute to it, but there’s not much point in expecting this to be a definitively completable task. We’re fitting a curve to the data provided by our values, which cannot be losslessly compressed.
This is, in total, a fair response. I am not sure I can say that you have changed my mind without more detail, and I'm not going to take down my original post (as long as there isn't a better post to take its place) because I still think it's directionally correct, but thank you for your words.
comment by tailcalled · 2025-04-15T08:46:42.608Z · LW(p) · GW(p)
Actually, even if your personality is good enough, you should probably still pretend to be Flynn Rider, because his personality is better. It was, after all, carefully designed by a crack team of imagineers. Was yours? Didn't think so.
Personalities don't just fall into a linear ranking from worse to better.
Imagineers' job isn't to design a good personality for a friendless nerd, it's to come up with children's stories that inspire and entertain parents and which they proudly want their children to consume.
The parents think they should try to balance the demands of society with the needs of their children by teaching their children to scam the surrounding society while being honest about the situation with their loved ones. Disney is assisting the parents with producing propaganda/instructions for it.
https://benjaminrosshoffman.com/guilt-shame-and-depravity/
Basing your life on scamming society is a bad idea but you shouldn't solve it by also trying to scam your loved ones. If you are honest, you can more easily collaborate with others to figure out what is needed and how you can contribute and what you want.
comment by AnthonyC · 2025-04-15T10:50:37.428Z · LW(p) · GW(p)
Lie by default whenever you think it passes an Expected Value Calculation to do so, just as for any other action.
How do you propose to approximately carry out such a process, and how much effort do you put into pretending to do the calculation?
I'm not as much a stickler/purist/believer in honest-as-always-good as many around here; I think there are many times that deception of some sort is a valid, good, or even morally required choice. I definitely think e.g. Kant was wrong about honesty as a maxim, even within his own framework. But I think your proposed policy sets much too low a standard, and in practice the gap between what you proposed and "Lie by default whenever it passes an Expected Value Calculation to do so, just as for any other action" is enormous, both in theoretical defensibility and in the skillfulness (and internal levels of honesty and self-awareness) required to successfully execute it.
Replies from: eva_↑ comment by eva_ · 2025-04-15T11:25:47.459Z · LW(p) · GW(p)
How do you propose to approximately carry out such a process, and how much effort do you put into pretending to do the calculation?
The thing I am trying to gesture at might be better phrased as "do it if it seems like a good idea, by the same measures as you'd judge whether any other action was a good idea", but then I worry some overly conscientious people will just always judge lying to be a bad idea and stay in the same trap, so I kind of want to say "do it if it seems like a good idea, and don't just immediately dismiss it or assign some huge unjustifiable negative weight to all actions that involve lying", but then I worry they'll argue over how much of a negative weight can be justified, so I also want to say "assign lying a negative weight proportional to a sensible assessment of the risks involved and the actual harm to the commons of doing it, and not some other bigger weight", and at some point I gave up and wrote what I wrote above instead.
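(For concreteness, here is a minimal sketch of the kind of rough weighing being gestured at. The function name, parameters, and numbers are all hypothetical placeholders, not a claim that these quantities can really be estimated; the only point is that the penalty for lying is finite and proportionate rather than an unjustifiable blanket weight.)

```python
def lying_seems_like_a_good_idea(benefit_if_believed: float,
                                 chance_of_being_caught: float,
                                 cost_if_caught: float,
                                 harm_to_commons: float) -> bool:
    """Crude expected-value comparison: lie only if the rough total comes out positive."""
    expected_value_of_lying = (
        (1 - chance_of_being_caught) * benefit_if_believed
        - chance_of_being_caught * cost_if_caught
        - harm_to_commons  # a finite, proportionate penalty, not a huge blanket one
    )
    return expected_value_of_lying > 0


# Placeholder numbers only: a small benefit, a modest chance of being caught.
print(lying_seems_like_a_good_idea(
    benefit_if_believed=2.0,
    chance_of_being_caught=0.1,
    cost_if_caught=10.0,
    harm_to_commons=0.5,
))  # -> True under these particular made-up guesses
```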
Putting too much thought into making a decision is also not a useful behavioural pattern, but that's probably the topic of a different post; many others have written about it already, I think.
I think your proposed policy sets much too low a standard, and in practice the gap between what you proposed vs "Lie by default whenever it passes an Expected Value Calculation to do so, just as for any other action," is enormous
I would love to hear alternative proposed standards that are actually workable in real life and don't amount to tying a chain around your own leg, from other non-believers in 'honest-as-always-good'. If there were ten posts like this we could put them in a line and people could pick a point on that line that feels right.
Replies from: Seth Herd, AnthonyC↑ comment by Seth Herd · 2025-04-15T15:04:04.753Z · LW(p) · GW(p)
I think the difficulty of making each decision about lying as an independent decision is the main argument for treating it as a virtue ethics or deontological issue.
I think you make many good points in the essay arguing that one should not simply follow a rule of honesty. I think that in practice the difference can be split, and that is in fact what most rationalists and other wise human beings do. I also think it is highly useful to write this essay on the mini virtues of lying, so that that difference can be split well.
There are many subtle downsides to lying, so simply adding a bit of a fudge factor to the decision that weighs against it is one way to avoid taking forever to make that decision. You've talked about practicing making the decision quickly, and I suspect that is the result of that practice.
This is a separate issue, but your point about being technically correct is also a valuable one. It is clearly not being honest to say things you know will cause the listener to form false beliefs.
I have probably erred on the side of honesty, as have many rationalists, treating it not as an absolute deontological issue and being willing to fudge a little on the side of technically correct to maintain social graces in some situations. I enjoy a remarkable degree of trust from my true friends, because they know me to be reliably honest. However, I have probably suffered reputational damage from acquaintances and failed friends, for whom my exceptional honesty has proven hurtful. Those people don't have adequate experience with me to see that I am reliably honest and appreciate the advantages of having a friend who can be relied upon to tell the truth. That's because they've ceased being my friend when they've been either insulted or irritated by my unhelpful honesty.
There is much here I agree with and much I disagree with. But I think this topic is hugely valuable for the rationalist community, and you've written it up very well. Nice work!
↑ comment by AnthonyC · 2025-04-15T11:35:31.031Z · LW(p) · GW(p)
We apply different standards of behavior for different types of choices all the time (in terms of how much effort to put into the decision process), mostly successfully. So I read this reply as something like, "Which category of 'How high a standard should I use?' do you put 'Should I lie right now?' in?"
A good starting point might be: One rank higher than you would for not lying, see how it goes and adjust over time. If I tried to make an effort-ranking of all the kinds of tasks I regularly engage in, I expect there would be natural clusters I can roughly draw an axis through. E.g. I put more effort into client-facing or boss-facing tasks at work than I do into casual conversations with random strangers. I put more effort into setting the table and washing dishes and plating food for holidays than for a random Tuesday. Those are probably more than one rank apart, but for any given situation, I think the bar for lying should be somewhere in the vicinity of that size gap.
comment by Said Achmiz (SaidAchmiz) · 2025-04-15T09:30:49.747Z · LW(p) · GW(p)
I agree with a lot of things in this post and disagree with a lot of things in this post, but before I comment in more detail, I would like to clarify one thing, please:
Are you aware that there exist moral frameworks that aren’t act consequentialism? And if so, are you aware that some people adhere to these other moral frameworks? And if so, do you think that those people are all idiots, crazy, or crazy idiots?
(These questions are not rhetorical. Especially the last one, despite it obviously sounding like the most rhetorical of the set. But it’s not!)
Replies from: eva_↑ comment by eva_ · 2025-04-15T09:44:54.823Z · LW(p) · GW(p)
Yes, I am aware of other moral frameworks, and I freely confess to having ignored them entirely in this essay. In my defence, a lot of people are (or claim to be, or aspire to be) some variant of consequentialist or another. Against strict Kantian deontologists I admit no version of this argument could be persuasive, and they're free to bite the other bullet and ~~fail to achieve any good outcomes~~ sometimes produce avoidable bad outcomes. Against rule utilitarians (who I am counting as a primary target audience) this issue is much more thorny than to act utilitarians, but I am hoping to be persuasive that never lying is not actually a good rule to endorse and that they shouldn't endorse it.
I don't necessarily think they're crazy, but to various extents I think they'd be lowering their own effectiveness by not accepting some variation on this position, and they should at least do that knowingly.
Replies from: SaidAchmiz, Raemon↑ comment by Said Achmiz (SaidAchmiz) · 2025-04-15T10:08:45.351Z · LW(p) · GW(p)
Ok, thanks. (You omit from your enumeration rule consequentialists who are not utilitarians, but I infer that you have a similar attitude toward these as you do towards rule utilitarians.)
Well, as I am most partial to rule consequentialism, I have to agree that “this issue is much more thorny”. On the one hand, I agree with you that “never lie” is not a good rule to endorse (if even for the very straightforward reason that lying is sometimes not only permissible, but in fact is morally obligatory, so if you adopted a “never lie” rule then this would obligate you to predictably behave in an immoral way). On the other hand, I consider act consequentialism[1] to be obviously foolish and doomed (for boring, yet completely non-dismissable and un-avoidable, reasons of bounded rationality etc.), so your proposed solution where you simply “do the Expected Utility Calculation” is a non-starter. (You even admit that this calculation cannot be done, but then say to pretend to do it anyway; this looks to me like saying “the solution I propose can’t actually work, but do it anyway”. Well, no, if it can’t work, then obviously I shouldn’t do it, duh.)
(More commentary to come later.)
Utilitarianism, specifically (of any stripe whatsoever, and as distinct from non-utilitarian consequentialist frameworks) seems to me to be rejectable in a thoroughly overdetermined manner. ↩︎
↑ comment by Raemon · 2025-04-18T22:15:04.187Z · LW(p) · GW(p)
Against strict kantian deontologists I admit no version of this argument could be persuasive and they're free to bite the other bullet and fail to achieve any good outcomes.
Note that this is very different from what you said in your post, which is "sometimes you will lose." (And this one seems obviously false)
Replies from: eva_
comment by cousin_it · 2025-04-15T11:54:06.455Z · LW(p) · GW(p)
I mean, Flynn Rider was also really good-looking. For a lot of people, maybe most, this look is just unattainable. Even if you can get in as good physical shape (which is far from easy), what if you're older, shorter, balder, have a goofy face and so on.
comment by dirk (abandon) · 2025-04-15T19:43:55.267Z · LW(p) · GW(p)
This is directionally correct and most lesswrongers could probably benefit from taking the advice herein, but goes too far (possibly as deliberate humor? The section about Flynn especially was quite funny XD).
I do take issue with the technical-truths section; I think using technical truths to trick people, while indeed a form of lying, is quite distinct from qualifying claims which would be false if unqualified. It's true that some philistines skim texts in order to respond to vibes rather than content, but the typical reader understands qualifiers to be part of the sentences which contain them, and to affect their meaning. That is why qualifiers exist, to change the meanings of the things they qualify, and choosing to ignore their presence is a choice to ignore the actual meaning of the sentences you're ostensibly reading.
Replies from: eva_↑ comment by eva_ · 2025-04-15T23:08:47.155Z · LW(p) · GW(p)
I think a distinction can be made between the sort of news article that's putting a qualifier in a statement because they actually mean it, and are trying to make sure the typical reader notices the qualifier, and the sort putting "anonymous sources told us" in front of a claim that they're 99% sure is made up, and then doing whatever they can within the rules to sell it as true anyway, because they want their audience of rubes to believe it. The first guy isn't being technically truthist, they're being honest about a somewhat complicated claim. The second guy is no better than a journalist who'd outright lie to you in terms of whether it's useful to read what they write.
comment by Elizabeth (pktechgirl) · 2025-04-19T00:44:29.889Z · LW(p) · GW(p)
I liked this post a lot more than I expected to, but I'm disappointed the only examples of lying are a combination of people who have no right to the information and people who are better off for you lying (in a way that gives them truer beliefs than if you'd told the literal truth).
The hard cases are much more interesting. What about lying to my landlord about renting a room on airbnb? What about saying your class will make people millionaires for the low low price of $1,000 (hey, it could happen)? What about hiding the rats from the health inspector?
Replies from: eva_, SaidAchmiz↑ comment by eva_ · 2025-04-19T06:03:29.018Z · LW(p) · GW(p)
I'm not so much of a pragmatist as to say that you should run naked scams (for several reasons, including that your students will notice when they don't become millionaires later and possibly be vengeful about it, other smarter people will notice the obviously fraudulent offer and assume everything else you offer is some kind of fraud too, the greater prevalence of fraud in the economy will make everyone less willing to buy anything ever until the whole economy stops, etc.) but I am enough of a pragmatist to demand actual reasons about why it isn't wise or why it will have negative consequences.
As for the landlord airbnb case, well, I'd want to first ask questions about circumstance. You claimed a bandit doesn't have the right to the information; do you have a moral theory by which to say whether the landlord has a right to the information or not? Is the landlord already basically assuming you'll do this because everybody else does and they've factored it into the price of the rent, or would they spend resources trying to stop you? How much additional wear and tear would it cause, and would it be unfair to the landlord to impose those damages without additional compensation?
As for the health inspector rats case, I'd similarly think it depends on whether the rats are a real safety hazard likely to make customers sick, or just a politically imposed rule that doesn't really matter and that you're arbitrarily being forced to comply with anyway (in which case, sure, cover it up).
↑ comment by Said Achmiz (SaidAchmiz) · 2025-04-19T00:49:51.616Z · LW(p) · GW(p)
The hard cases are much more interesting. What about lying to my landlord about renting a room on airbnb? What about saying your class will make people millionaires for the low low price of $1,000 (hey, it could happen)? What about hiding the rats from the health inspector?
None of these seem like hard cases to me. Lying is wrong (and pretty obviously so) in all three of these cases.
comment by Douglas_Knight · 2025-04-17T03:49:42.479Z · LW(p) · GW(p)
This is pure first-principles reasoning without a single glance at how humans actually behave, eg, how they assign a reputation for honesty.
comment by Shankar Sivarajan (shankar-sivarajan) · 2025-04-15T16:58:03.781Z · LW(p) · GW(p)
If you believed this, why would you write this post?
Replies from: eva_↑ comment by eva_ · 2025-04-15T23:01:12.573Z · LW(p) · GW(p)
I like a lot of the people in this space, have seen several of them hurt themselves by doing not this, would prefer they stopped, and nobody else seems to have written this post for me somewhere I can link to.
Replies from: erioire, xpostah↑ comment by ErioirE (erioire) · 2025-04-16T20:57:58.397Z · LW(p) · GW(p)
I've had similar experiences.
For me personally, in cases where:
- The Technical Truth is not my business: go ahead and lie to me and/or omit sensitive details if possible.
- — is a much more complex thing that I likely don't have the foundational understanding to grasp: tell me a portion and then check for comprehension, if I fail that just say some vague 'it's complicated' and give me some ideas of what to study if I really want to know.
- — would probably be disturbing for me to know and I am not likely to be negatively affected by not knowing: You can lie to me or omit some details. Alternatively, ask me what reference classes of things I would want to not be informed about.
- — would be likely to cause significant harm in my hands or the hands of those I would likely tell it to: obviously lie or omit.
After reflection, the situations where I would mind being lied to are those where my future actions are contaminated by reliance on incorrect data. If the lie will not meaningfully affect my future actions, I probably don't care. Although it is obviously not feasible to accurately predict all possible future actions I might take and why, giving it your best guess is usually sufficient, since most conversations are trivial and irrelevant, particularly small talk.
As the topic of conversation becomes more consequential, the importance of accuracy also increases.
↑ comment by samuelshadrach (xpostah) · 2025-04-19T15:58:47.159Z · LW(p) · GW(p)
Do you have examples?
Replies from: eva_↑ comment by eva_ · 2025-04-20T00:15:50.821Z · LW(p) · GW(p)
I do have examples that motivated me to write this, but they're all examples where people are still strongly disagreeing about the object level of what happened, or possibly lying about how they disagree on the object level and pretending they're committed to honesty. I thought about putting them in the essay but decided it wouldn't be fair, and I didn't want to derail my actual thesis into a case analysis of whether maybe all my examples have a problem other than over-adherence to bad honesty norms. Should I put them in a comment? I'm genuinely unsure. I could probably DM you them if you really want?
EDIT: okay fine you win. The public examples with nice writeups that I am most willing to cite are: Eneasz Brodski, Zack M Davis [LW · GW], Scott Alexander. There are other posts related to some of those but I don't want to exhaustively link everything anyone's said about it in this comment. I claim there are other people making in my opinion similar mistakes but I'm either unable or unwilling to provide evidence so you shouldn't believe me. I would prefer to leave as an exercise for the reader what any of those things have to do with my position because this whole line of inquiry seems incredibly cursed.
Replies from: Mo Nastri, Amyr, xpostah, xpostah↑ comment by Mo Putera (Mo Nastri) · 2025-04-20T03:19:57.894Z · LW(p) · GW(p)
I do think it'd be useful for the rest of us if you put them in a comment. :)
(FWIW I resonated with your motivation [LW(p) · GW(p)], but also think your suggestions fail on the practical grounds [LW(p) · GW(p)] jenn mentioned, and would hence on net harm the people you intend to help.)
↑ comment by Cole Wyeth (Amyr) · 2025-04-20T04:46:32.997Z · LW(p) · GW(p)
This is the kind of comment that becomes harder to take at face value (from you) after reading your dissent on honesty.
↑ comment by samuelshadrach (xpostah) · 2025-04-20T19:34:16.562Z · LW(p) · GW(p)
Update: I read your examples and I honestly don’t see how any of these 3 people would be better off by their own idea of what better off means, if they were less open or less truthful.
P.S. discussing anonymously is easier if you’re not confident you can handle the social repercussions of discussing it under your real name. I agree that morality is social dark matter and it’s difficult to argue in favour of positions that are pro-violence pro-deception etc under your real name.
↑ comment by samuelshadrach (xpostah) · 2025-04-20T06:02:53.588Z · LW(p) · GW(p)
If you can’t provide a few unambiguous examples of the dilemma in the post that actually happened in the real world, I’m less likely to take your post seriously.
Might be worth thinking more and then coming up with examples.
comment by Afterimage · 2025-04-15T10:18:39.579Z · LW(p) · GW(p)
Great post! I really enjoy your writing style. I agree with everything up to your last sentence of the Cooperative Epistemics section. It looks like a false equivalence between a community of perfect trust and a community based on mistrust. I'm thinking a community of "trust but verify" with a vague assumption of goodwill will capture all the benefits of mistrust without the risks of half-rationalists or "half a forum of autists" going off the deep end and making a carrying error in their EV calculations that leads to overly negative results.
Corrupted Hardware [? · GW] leads me to think we need to aim high to end up at an optimum level of honesty.
Edit: Thanks Cole and Shankar.
Replies from: Amyr, noggin-scratcher↑ comment by Cole Wyeth (Amyr) · 2025-04-15T13:25:42.270Z · LW(p) · GW(p)
Highlight, right-click, the little diagonal line thing that usually symbolizes links.
Replies from: shankar-sivarajan↑ comment by Shankar Sivarajan (shankar-sivarajan) · 2025-04-15T17:01:00.981Z · LW(p) · GW(p)
Or Ctrl-K, the standard shortcut.
↑ comment by noggin-scratcher · 2025-04-15T11:05:06.652Z · LW(p) · GW(p)
comment by testingthewaters · 2025-04-17T10:48:36.219Z · LW(p) · GW(p)
I'm sorry that your life has come to a point where you might feel like adopting a fake personality and regularly deceiving those around you is how to be happy and get what you want. It's almost impossible to deliver good advice to a stranger over the internet, but I hope that one day you can find joy and fulfillment in being yourself.
Replies from: Amyr↑ comment by Cole Wyeth (Amyr) · 2025-04-19T03:37:23.084Z · LW(p) · GW(p)
This was heavily downvoted and the tone is in fact off but I think there is a little sliver of truth to it.
Replies from: testingthewaters↑ comment by testingthewaters · 2025-04-19T11:35:52.529Z · LW(p) · GW(p)
For my part, I didn't realise it became so heavily downvoted, but I did not mean it at all in an accusatory or moralizing manner. I also, upon reflection, don't regret posting it.