Rational lies
post by Alex Flint (alexflint) · 2009-11-23T03:32:08.789Z
If I were sitting opposite a psychopath who had a particular sensitivity about ants, and I knew that if I told him that ants have six legs then he would jump up and start killing the surrounding people, then it would be difficult to justify telling him my wonderful fact about ants, regardless of whether I believe that ants really have six legs or not.
Or suppose I knew my friend's wife was cheating on him, but I also knew that he was terminally ill and would die within the next few weeks. The question of whether or not to inform him of my knowledge is genuinely complex, and the truth or falsity of my knowledge about his wife is only one factor in the answer. Different people may disagree about the correct course of action, but no-one would claim that the only relevant fact is the truth of the statement that his wife is cheating on him.
This is all a standard result of expected utility maximization, of course. Vocalizing or otherwise communicating a belief is itself an action, and just like any other action it has a set of possible outcomes, to which we assign probabilities as well as some utility within our value coordinates. We then average out the utilities over the possible outcomes for each action, weighted by the probability that they will actually happen, and choose the action that maximizes this expected utility. Well, that's the gist of the situation, anyway. Much has been written on this site about the implications of expected utility maximization under more exotic conditions such as mind splitting and merging, but I'm going to be talking about more mundane situations, and the point I want to make is that beliefs are very different objects from the act of communicating those beliefs.
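To make that decision procedure concrete, here is a minimal sketch of expected utility maximization applied to the ant example above. The action names, probabilities, and utilities are invented purely for illustration; nothing here is taken from any particular decision-theory library.

```python
# A minimal sketch of expected utility maximization over candidate actions.
# All probabilities and utilities below are made-up illustrative values.

def expected_utility(outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(prob * utility for prob, utility in outcomes)

# Each action maps to a list of (probability, utility) pairs for its outcomes.
actions = {
    "state the fact about ants": [(0.9, -100.0),   # hypothetical: listener reacts violently
                                  (0.1, 1.0)],     # hypothetical: listener merely learns the fact
    "stay silent":               [(1.0, 0.0)],     # nothing changes
}

# Choose the action whose expected utility is highest.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action)  # -> "stay silent" with these illustrative numbers
```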
This distinction is particularly easy to miss as the line between belief and communication becomes subtler. Suppose that a friend of mine has built a wing suit and is about to jump off the Empire State Building with the belief that he will fly gracefully through the sky. Since I care about my friend's well-being I try to explain to him the concepts of gravity and aerodynamics, and the effect they will have on him if he launches himself from the building. Examining my decision in detail, I have placed a high probability on his death if he jumps off the building, and calculated that, since I value his well-being, my expected utility would not be maximized by him making the leap.
But now suppose that my friend is particularly dull and unable or unwilling to grasp the concept of aerodynamics, and is hence unswayed by my argument. Having reasonably explained my beliefs to him, am I absolved of the moral responsibility to save him? Not from a utilitarian standpoint, since there are other courses of action available to me. I could, for example, tell him that his wing suit has been sabotaged by aliens --- a line of reasoning that I happen to know he'll believe given his predisposition towards X-Files-esque conspiracy theories.
Would doing so be contrary to my committed rationalist stance? Not at all; I have rationally analysed the options available to me and rationally chosen a course of action. The conditions for the communication of a belief to be deemed rational are exactly the same decision-theoretic conditions that apply to any other action: namely, that it is the action that maximizes expected utility.
If this all sounds too close to "tell people what they need to hear" then let's ask under what specific conditions it might be rational to lie. Clearly this depends on your values. If your utility function places high value on people falling to their death then you will tend to lie about gravity and aerodynamics as much as possible. However, for the purpose of practical rationality I'm going to assume for the rest of this article that some of your basic values align with my own, such as the value of fulfilled human existence, and so on.
Convincing somebody of a falsehood will, on average, lead to them making poorer decisions according to their values. My soon-to-be-airborne friend may be convinced not to leap from the building immediately, but may shortly return with a wing suit covered in protective aluminium foil to ward off those nasty interfering aliens. Nobody is exempt from the laws of rationality. To the extent that their values align with mine, convincing another of a falsehood will have at least this one negative consequence with respect to my own values. The examples I gave above are specific situations in which other factors dominate my desire for another person to be better informed during the pursuit of their goals, but such situations are the exception rather than the rule. All other things equal, lying to an agent with similar values to mine is a bad decision.
Had I actually convinced my friend of the nature of gravity and aerodynamics rather than spinning a story about aliens, then next time he might return to the rooftop with a parachute rather than a tin foil wing suit. In the example I gave, this course of action was unlikely to succeed, but again this situation is the exception rather than the rule. In general, a true statement has the potential to improve the recipient's brain/universe entanglement and thereby improve his potential for achieving his goals, which, if his values align with my own, constitutes at least one factor in favour of truth-telling. All other things equal, telling the truth is a good decision.
This doesn't mean that telling the truth is valuable only in terms of its benefits to me. My own values include bettering the lives of others, so achieving "my goals" constitutes working towards the good of others, as well as my own.
Is there any other sense in which truth-telling may be considered a "good" in its own right? Naively one might argue that the act of uttering a truth could itself be a value in its own right, but such a utility function would be maximized by a universe tiled with tape players broadcasting mundane, true facts about the universe. It would be about as well-aligned with the values of a typical human being as a paper clip maximizer.
A more reasonable position is for rationality in others to be included among one's fundamental values. This, I feel, is more closely aligned with my own values. All other things equal, I would like those around me to be rational. Not just to live in a society of rationalists, though this is an orthogonal value. Not just to engage in interesting, stimulating discussion, though this is also an orthogonal value. And not just for others to succeed in achieving their goals, though this, again, is an orthogonal value. But to actually maximize the brain/universe entanglement of others, for its own sake.
Do you value rationality in others for its own sake?
Comments
comment by LauraABJ · 2009-11-23T23:05:30.546Z
"Do you value rationality in others for its own sake?"
This is purely a question of aesthetics. You might as well ask if I value shiny jewelry for its own sake and not just as a social signal (and I do, I do!). My answer is yes, but rationality is only one quality among many I value in a human, and in practice, probably not a major one.
comment by cousin_it · 2009-11-23T13:01:39.569Z
Everyone seems to think they value truth more than other people. This may be a rationalization for the evolutionarily useful behavior of not blabbing too much.
↑ comment by billswift · 2009-11-24T13:27:41.930Z
Very ambiguous - Do you mean (1) that everyone values truth more than they value other people, or (2) that everyone believes they value truth more than other people value it? Or do you believe both are true? Or even another possibility that I missed?
comment by dclayh · 2009-11-23T06:57:24.839Z
Eliezer previously on the utility of ethics and truth-telling particularly.
↑ comment by PlaidX · 2009-11-23T14:52:04.046Z
"This I knew from the beginning: That if I had no ethics I would hold to even with the world at stake, I had no ethics at all. And I could guess how that would turn out."
This line has always confused me. How DOES he think "that" would turn out, aside from him being less of a whiny goody-two-shoes?
comment by akshatrathi · 2009-11-23T03:47:47.979Z
"I'm going to be talking about more mundane situations, and the point I want to make is that beliefs are very different objects from the act of communicating those beliefs."
Isn't this what happens in a courtroom drama? The lawyers bend facts by the way they communicate them to maximize the utility of their argument. I haven't observed a real court case but can come up with scores of examples from Bollywood movies!
comment by MendelSchmiedekamp · 2009-11-24T19:23:33.355Z
Lies, truth, and radical honesty all get in the way of understanding what is going on here.
You are communicating with someone; several of the many constantly changing layers of this communication (in addition to status signaling, empathy broadcasting, and performatives) are the transfer of information from you to that someone. The effectiveness of this communication and the accuracy of the information when received are things we can talk about fairly easily in terms of both instrumental (effectiveness) and epistemic (accuracy) rationality.
To classify that communication as a lie or as truth or as honest (from your own perspective) involves unpacking social signals, conscious and unconscious intent, and is entirely irrelevant to any rational goal.
Considering that our societies place value on the signals shown by these terms, it may matter how our signals are received. This is an instrumental rationality question about increasing the likelihood of being seen as honest or as telling a lie.
It is essential not to confuse these two very different things. One of the first clues is to realize that when we talk about truth in rationality we mean something closely related to accuracy; in communication it may be the same word, but it means something entirely different. This means that we should ban ourselves from using the word until we are quite sure we know what we mean by it.
comment by djcb · 2009-11-23T21:57:55.395Z
hmmm.... not-lying is moral in the Kantian sense. It's often 'moral' in the utilitarian sense, though definitely not always -- as in your examples, or when the proverbial murderer is at the door, asking if his chosen victim is at home.
On top of that, I don't think rationality is necessarily moral. Many rationalists may also be very moral people, but that's not a requirement -- thus, I think there is plenty of room for lying in rationality, whether it's moral (by some definition) or not. Often it may be very rational to be moral - but that is not necessarily always the case.
comment by Pavitra · 2009-11-23T04:19:04.686Z
Very close to yes, but no. I value higher intelligence as such, and rationality seems to be characteristic of the highest intelligences currently known (i.e., humans), but I don't assume that vastly superhuman intelligences will necessarily be characteristically rational any more than they will necessarily have lots of money or lots of bananas.