Lie to me?

post by pwno · 2009-06-24T21:56:15.638Z · LW · GW · Legacy · 32 comments

I used to think that, given two equally capable individuals, the person with more true information can always do at least as well as the other person. And hence, one can only gain from having true information. There is one implicit assumption that makes this line of reasoning not true in all cases. We are not perfectly rational agents; our mind isn't stored in a vacuum, but in a Homo sapiens brain. There are certain false beliefs that benefit you by exploiting your primitive mental warehouse, e.g., self-fulfilling prophecies.

Despite the benefits, adopting false beliefs is an irrational practice. If people never acquire the maps that correspond best to the territory, they won't have the most accurate cost-benefit analysis for adopting false beliefs. Maybe, in some cases, false beliefs make you better off. The problem is that your cost-benefit analysis will be wrong or suboptimal unless you first adopt reason.

Also, it doesn’t make sense to say that the rational decision could be to “have a false belief” because in order to make that decision, you would have to compare that outcome against “having a true belief.” But in order for a false belief to work, you must truly believe in it — you cannot deceive yourself into believing the false belief after knowing the truth! It’s like figuring out that taking a placebo leads to the best outcome, yet knowing it’s a placebo no longer makes it the best outcome.

Clearly, it is not in your best interest to choose to believe in a falsity—but what if someone else did the choosing? Can't someone whose advice you rationally trust decide whether or not to give you false information (e.g., a doctor deciding whether you receive a placebo)? They could perform a cost-benefit analysis without diluting the effects of the false belief. We want to know only the truth, yet in some cases we would prefer to be unknowingly lied to.

Which brings me to my question: do we program an AI to only tell us the truth or to lie when the AI believes (with high certainty) the lie will lead us to a net benefit over our expected lifetime?

Added: Keep in mind that knowledge of the truth, even for a truth-seeker, is finite in value. The AI can believe that the benefit of a lie would outweigh a truth-seeker's cost of being lied to. So unless someone values the truth above everything else (which I highly doubt anyone does), would a truth-seeker ever choose to be told only the truth by the AI?

32 comments

Comments sorted by top scores.

comment by jimrandomh · 2009-06-24T22:32:12.480Z · LW(p) · GW(p)

This topic has already been done to death, and then some.

Replies from: pwno, Vladimir_Nesov
comment by pwno · 2009-06-24T22:34:01.403Z · LW(p) · GW(p)

Ah, my bad.

comment by Vladimir_Nesov · 2009-06-25T11:14:45.949Z · LW(p) · GW(p)

This fact should be fixed by some articles on the wiki, with references to the previous discussions (so that you can substantiate your claim with a link, explaining to any newcomer where to look). This point seems to fit under the concepts of Truth and Self-deception; the latter article is currently almost empty.

ETA: Costs of rationality is a closer match.

Replies from: ThoughtDancer
comment by ThoughtDancer · 2009-06-25T15:57:50.989Z · LW(p) · GW(p)

The wiki is a good starting tool, but it's not yet as fully developed as I would like. I'm still working to develop enough background knowledge of the discussions, assumptions, and definitions used on Less Wrong to be confident in commenting.

So I will forgive the occasions when someone who sincerely wants information and thoughtful reactions stumbles into spaces that have already been well-trodden.

Nevertheless, the wiki itself doesn't yet have full interconnections and links to definitions: until such internal tagging is complete, newer people will sometimes fail to find what they are searching for and will instead ask their questions directly.

I welcome these questions being asked, if only as a sign that Less Wrong does not encourage self-censorship (which, I gather from conversations elsewhere, may have been a concern on Overcoming Bias).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-25T16:07:58.740Z · LW(p) · GW(p)

Of course the wiki is still in its infancy. All the more reason to shape it by contributing your own synthesis of the discussed concepts, especially when these concepts leave the impression of having been discussed to death.

comment by A1987dM (army1987) · 2014-02-20T17:14:11.292Z · LW(p) · GW(p)

It’s like figuring out that taking a placebo leads to the best outcome, yet knowing it’s a placebo no longer makes it the best outcome.

Unless what counts for whether the placebo works is what your System 1 believes, what counts for your rational cost-benefit analysis is what your System 2 believes, and you somehow manage to keep them separate (which is hard but not always impossible).

comment by cousin_it · 2009-06-25T09:49:29.035Z · LW(p) · GW(p)

Yep, done to death. Eliezer's answer: The Third Alternative. I hereby disclose that I downvoted you.

comment by Vladimir_Nesov · 2009-06-25T11:28:18.667Z · LW(p) · GW(p)

Which brings me to my question: do we program an AI to only tell us the truth or to lie when the AI believes (with high certainty) the lie will lead us to a net benefit over our expected lifetime?

This question is too anthropomorphic. Since you clearly mean a strong AI, it's no longer a question-answering machine; it's an engine for a supervised universe. At which point you ought to talk about the organization of the future, for example in terms of fun theory.

comment by Furcas · 2009-06-24T22:35:40.544Z · LW(p) · GW(p)

Well, the AI wouldn't only have to predict how the lie will benefit the person who hears it, but also how the actions that result from holding a false belief might affect other individuals.

The above quibble aside, the answer to your question is pretty trivial. To a person who values the truth, knowledge is a benefit and will therefore be part of the AI's 'net benefit' calculation. Consequently, the kind of person who would want to program an AI to only tell him the truth would never be lied to by an AI that does a net benefit calculation.

The only question that remains is if we would want to program an AI to always tell the truth to people who want to be deceived at least some of the time. In other words, do we want other individuals to believe whatever they like, as long as it's a net benefit to them and doesn't affect the rest of us? My answer would be yes.

Of course, in the real world, popular false beliefs, such as religious ones, often do not lead to a net benefit for those who hold them, and even more often affect the rest of us negatively.

Replies from: pwno
comment by pwno · 2009-06-24T23:03:08.893Z · LW(p) · GW(p)

To a person who values the truth, knowledge is a benefit and will therefore be part of the AI's 'net benefit' calculation.

But knowledge of the truth has a finite value. What if the AI believed that the benefit of a lie would outweigh a truth-seeker's cost of being lied to?

So the question is, would any rational truth-seeker choose to only be told the truth by the AI?

Replies from: Furcas
comment by Furcas · 2009-06-25T00:20:43.206Z · LW(p) · GW(p)

A person doesn't have to 'infinitely value' truth to always prefer the truth to a lie. The importance put on truth merely has to be greater than the importance put on anything else.

That said, if the question is, is there a human, or has there ever been a human who values truth more than anything else, the answer is almost certainly no. For example, I care about the truth a lot, but if I were given the choice between learning a single, randomly chosen fact about the universe, and being given a million dollars, I'd pick the cash without too much hesitation.

However, as Eliezer has said many times, human minds only represent a tiny fraction of all possible minds. A mind that puts truth above anything else is certainly possible, even if it doesn't exist yet.

Replies from: pwno
comment by pwno · 2009-06-25T01:08:02.807Z · LW(p) · GW(p)

Now that we know we've programmed an AI that may lie to us, our rational expectations will make us skeptical of what the AI says, which is not ideal. It sounds like the AI programmer will have to cover up the fact that the AI does not always speak the truth.

comment by timtyler · 2009-06-25T17:09:59.615Z · LW(p) · GW(p)

The truth would exact a terrible price from us, were we to learn of it. There is too much of it. It does not fit into our tiny minds. For most of the truth in the universe, we are better off not knowing about it.

Replies from: tut, Alicorn
comment by tut · 2009-06-25T17:43:22.654Z · LW(p) · GW(p)

That is true, but it only means that we don't want to learn everything, not that we want to be lied to. The AI can dole out information as it becomes relevant to you without ever giving you false information.

comment by Alicorn · 2009-06-25T17:22:49.459Z · LW(p) · GW(p)

For most of the truth in the universe, we are better off not knowing about it.

What makes you say that? Do you regret knowing a majority of the things you know?

Replies from: timtyler
comment by timtyler · 2009-06-25T17:59:13.488Z · LW(p) · GW(p)

It was part of an experiment to see if subscribers would vote down truths they are uncomfortable with and would rather not hear about.

Seriously, this is obvious - finding and remembering the truth has costs, and most truth in the universe is irrelevant to us because it concerns things which are far away in spacetime.

Replies from: Vladimir_Nesov, orthonormal, JGWeissman
comment by Vladimir_Nesov · 2009-06-25T19:43:15.532Z · LW(p) · GW(p)

It's trivially right and connotationally wrong, and neither issue contributes to the quality of the remark.

Replies from: timtyler
comment by timtyler · 2009-06-25T22:15:36.176Z · LW(p) · GW(p)

This is a "truth, glorious truth post". And the comments were mostly just nodding. Less Wrong is full of this sort of truth-worshiping. The observation that truth has a price seems trivially-obvious to me, but it seems to me that a number of other people around here don't get it - and don't like it being pointed out to them.

Replies from: Alicorn
comment by Alicorn · 2009-06-25T22:21:41.321Z · LW(p) · GW(p)

I think talking about it as "a price" may be tripping speculative-fiction-quest-sacrifice-buttons or something. ("Yes, we have the MacGuffin, but at what cost?!") If all you're saying is that truths take up brain space and brain space is both finite and valuable, I guess I can't disagree with that, but I value my brain space mostly because it can hold truths.

Replies from: timtyler
comment by timtyler · 2009-06-25T22:36:54.065Z · LW(p) · GW(p)

Among truth-worshipers, saying anything bad about the truth is tantamount to blasphemy. Obviously that makes me one of those evil lie-worshipers, who must be sacrificed with Occam's razor before I spread the terrible truth-slander further.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-25T22:49:53.773Z · LW(p) · GW(p)

That additional beliefs take up space and time to learn is an obvious point; nobody disagrees with that. Because this point is obvious, restating it as an important truth seems silly, and suggesting that other people don't accept it seems offensive. See Costs of rationality for links to more interesting points.

Replies from: timtyler
comment by timtyler · 2009-06-26T06:26:44.932Z · LW(p) · GW(p)

Uh, the first paragraph of the post does explicitly disagree, saying "one can only gain from having true information" - claiming the only exception is due to irrationality.

Replies from: JGWeissman
comment by JGWeissman · 2009-06-26T06:41:11.729Z · LW(p) · GW(p)

That was part of a line of reasoning that was introduced by "I used to think that". Pay more attention to context.

Replies from: timtyler
comment by timtyler · 2009-06-26T17:35:05.069Z · LW(p) · GW(p)

You have to read all 5 sentences in that paragraph to see the new position as I just described it. The original position was much more wrong.

Replies from: JGWeissman
comment by JGWeissman · 2009-06-26T18:29:31.915Z · LW(p) · GW(p)

Note the difference between "There is one implicit assumption that makes this line of reasoning not true in all cases", an actual statement from the article, and the modified version "There is only one implicit assumption that makes this line of reasoning not true in all cases."

You are critiquing the modified version. It seems to me that you are searching for any unfavorable interpretation so that you can offer contrarian dissent.

Note that people have not been disagreeing with your point about the resource cost of representing true information; they simply don't think it is a problem for the article, and they object to the unclear way you have presented it.

comment by orthonormal · 2009-06-25T18:36:38.251Z · LW(p) · GW(p)

Your statement had two interpretations; one is mystical bullshit, and the other is trivially true but misleadingly written and irrelevant to the sort of truth-seeking we actually deliberate about. Thus the downvotes.

Replies from: timtyler, Vladimir_Nesov
comment by timtyler · 2009-06-25T22:02:43.352Z · LW(p) · GW(p)

That seems like an unsympathetic reading. Personally, I can't see the "mystical bullshit" interpretation after looking.

The original post ignored the costs of the truth. It seemed to me that someone needed to alert the nodding readership to the uncomfortable fact that the truth has a price.

comment by Vladimir_Nesov · 2009-06-25T19:46:36.551Z · LW(p) · GW(p)

Heh. I wrote my comment before reading yours, and I expressed exactly the same conclusion.

Replies from: orthonormal
comment by orthonormal · 2009-06-25T21:13:41.159Z · LW(p) · GW(p)

Proof of concept for a coherent extrapolated rejoinder?

comment by JGWeissman · 2009-06-25T22:45:40.195Z · LW(p) · GW(p)

When comparing believing a truth to believing a corresponding lie, the resource cost of representing that truth should balance with the resource cost of representing the lie (otherwise the comparison also involves believing a precise theory versus believing an approximation).

The article was not about learning every fact about the entire universe, it was about the utility of believing lies about things we care about.

Replies from: timtyler
comment by timtyler · 2009-06-26T06:44:15.620Z · LW(p) · GW(p)

Check the post's first paragraph: what it claims is that "one can only gain from having true information". In fact, most truths in the universe are bad for us, because of their costs - thus my reality check.

Replies from: JGWeissman
comment by JGWeissman · 2009-06-26T07:00:15.530Z · LW(p) · GW(p)

First, as I just explained elsewhere, you are misrepresenting the article by quoting out of context.

Second, there is a difference between claiming "one can only gain from having true information" (implicitly as opposed to having false information) and claiming that having true information is always a benefit.