Strategic ignorance and plausible deniability

post by Kaj_Sotala · 2011-08-10T09:30:26.205Z · LW · GW · Legacy · 59 comments

This is the third part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.

The press secretary of an organization is tasked with presenting outsiders with the best possible image of the organization. While they're not supposed to outright lie, they do use euphemisms and try to only mention the positive sides of things.

A plot point in the TV series The West Wing is that the President of the United States has a disease which he wants to hide from the public. The White House Press Secretary is careful to ask whether there's anything she needs to know about the President's health, instead of whether there's anything she should know. As the President's disease is technically something she should know but not something she needs to know, this allows the President to hide the disease from her without lying to her (and by extension, to the American public). As she then doesn't need to lie either, she can do her job better.

If our minds are modular, critical information can be kept away from the modules that are associated with consciousness and speech production. It can often be better if the parts of the system that exist to deal with others are blissfully ignorant, or even actively mistaken, about information that exists in other parts of the system.

In one experiment, people could choose between two options. Choosing option A meant they got $5, and someone else also got $5. Option B meant that they got $6 and the other person got $1. About two thirds were generous and chose option A.

A different group of people played a slightly different game. As before, they could choose between $5 or $6 for themselves, but they didn't know how their choice would affect the other person's payoff. They could find out, however – if they just clicked a button, they'd be told whether the choice was between $5/$5 and $6/$1, or $5/$1 and $6/$5. From a subject's point of view, clicking a button might tell them that picking the option they actually preferred meant they were costing the other person $4. Not clicking meant that they could honestly say that they didn't know what their choice cost the other person. It turned out that about half of the people refused to look at the other player's payoffs, and that many more subjects chose $6/? than $5/?.
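To make the incentive structure explicit, here is a minimal sketch in Python (the payoff numbers are from the description above; the variable names and code are only illustrative):

    # The chooser always knows their own payoffs...
    own_payoff = {"A": 5, "B": 6}

    # ...but the other player's payoffs are one of these two mappings,
    # revealed only if the chooser clicks to look.
    world_1 = {"A": 5, "B": 1}   # choosing B costs the other player $4
    world_2 = {"A": 1, "B": 5}   # choosing B is also better for the other player

    def selfish_choice():
        """A purely selfish chooser never needs to look: B pays more either way."""
        return max(own_payoff, key=own_payoff.get)

    def generous_choice(other_payoff):
        """A chooser who weighs the other player's payoff has to look first,
        because the generous option differs between the two worlds."""
        return max(own_payoff, key=lambda opt: own_payoff[opt] + other_payoff[opt])

    print(selfish_choice())          # 'B' in either world
    print(generous_choice(world_1))  # 'A' if looking reveals world_1
    print(generous_choice(world_2))  # 'B' if looking reveals world_2

The point of not looking is not that it changes the payoffs, only that it changes what the chooser can be said to have known.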

There are many situations where not knowing something means you can avoid a lose-lose situation. If you know your friend is guilty of a serious crime and you are called to testify in court, you may either betray your friend or commit perjury. If you see a building on fire, and a small boy comes to tell you that a cat is caught in the window, your options are to either risk yourself to save the cat, or take the reputational hit of neglecting a socially perceived duty to rescue the cat. (Footnote in the book: ”You could kill the boy, but then you've got other problems.”) In the trolley problem, many people will consider both options wrong. In one setup, 87% of the people who were asked thought that pushing a man onto the tracks to save five was wrong, and 62% said that not pushing him was wrong. Better to never see the people on the tracks. In addition to having your reputation besmirched by not trying to save someone, many nations have actual ”duty to rescue” laws which require you to act if you see someone in serious trouble.

In general, people (and societies) often believe that if you know about something bad, you have a duty to stop it. If you don't know about something, then obviously you can't be blamed for not stopping it. So we should expect that part of our behavior is designed to avoid finding out information that would impose an unpleasant duty on us.

I personally tend to notice this conflict if I see people in public places who look like they might be sleeping or passed out. Most likely, they're just sleeping and don't want to be bothered. If they're drunk or on drugs, they could even be aggressive. But then there's always the chance that they have some kind of condition and need medical assistance. Should I go poke them to make sure? You can't be blamed if you act like you didn't notice them, some part of me whispers. Remember the suggestion that you can fight the bystander effect by singling out a person and asking them directly for help? You can't pretend you haven't noticed a duty if the duty is pointed out to you directly. As for the bystander effect in general, there's less of a perceived duty to help if everyone else ignores the person, too. (But then this can't be the sole explanation, because people are most likely to act when they're alone and there's nobody else around to know about your duty. The bystander effect isn't actually discussed in the book; this paragraph is my own speculation.)

The police may also prefer not to know about some minor crime that is being committed. If it's known that they're ignoring drug use (say), they lose some of their authority and may end up punished by their superiors. If they don't ignore it, they may spend all of their time doing minor busts instead of concentrating on more serious crime. Parents may also pretend that they don't notice their kids engaging in some minor misbehavior, if they don't want to lose their authority but don't feel like interfering either.

In effect, the value of ignorance comes from the costs of others seeing you know something that puts you in a position in which you are perceived to have a duty and must choose to do one of two costly acts – punish, or ignore. In my own lab, we have found that people know this. When our subjects are given the opportunity to punish someone who has been unkind in an economic game, they do so much less when their punishment won't be known by anyone. That is, they decline to punish when the cloak of anonymity protects them.

The (soon-to-expire) ”don't ask, don't tell” policy of the United States military can be seen as an institutionalization of this rule. Soldiers are forbidden from revealing information about their sexuality, which would force their commanders to discharge them. On the other hand, commanders are also forbidden from inquiring into the matter and finding out.

A related factor is the desire for plausible deniability. A person who wants to have multiple sexual partners may resist getting himself tested for sexual disease. If he was tested, he might find out he had a disease, and then he'd be accused of knowingly endangering others if he didn't tell them about his disease. If he isn't tested, he'll only be accused of not finding out that information, which is often considered less serious.

These are examples of situations where it's advantageous to be ignorant of something. But there are also situations where it is good to be actively mistaken. More about them in the next post.

59 comments

Comments sorted by top scores.

comment by novalis · 2011-08-10T20:14:17.783Z · LW(p) · GW(p)

In the US, the law recognizes that people would sometimes benefit from plausible deniability, and thus sometimes has a standard of "knew or should have known."

comment by atucker · 2011-08-10T14:55:27.005Z · LW(p) · GW(p)

I have two standards of trust.

The first one is trusting that someone's goals are aligned with mine, and that they don't intentionally do anything bad.

The second one is trusting that someone's actions are going to be lined up with their intentions, and that they will take the initiative to ensure this.

The second is much harder to earn, but can be earned without the person necessarily sharing my goals.

Replies from: randallsquared
comment by randallsquared · 2011-08-11T16:57:44.134Z · LW(p) · GW(p)

From the outside, it's not clear that we can do any better than saying that someone's intentions are nearly always lined up with their actions. Or, to do better than that, we have to side with a part of them as their "true self". But the idea that someone else's goals are aligned with yours is nearly always going to be false except for some very limited set of goals.

Replies from: smk
comment by smk · 2011-08-13T16:22:43.715Z · LW(p) · GW(p)

I kind of thought it went without saying that atucker was talking about a limited set of goals being aligned?

As a shorthand I have always tended to look at trust on two levels, which seem similar to atucker's but perhaps not the same? That is: I trust your intentions (in the relevant area), and I trust your competence (in that area). There's also the issue of diligence, which is a kind of competence and also a kind of intention, so that can complicate things perhaps, but for simplicity I just look at it as two levels.

Replies from: atucker
comment by atucker · 2011-08-14T02:05:30.567Z · LW(p) · GW(p)

Well, that was what I meant at least. That it was a limited set of alignment.

Also, I'm pretty sure that most naive morality assumes that people have a true self that is being dealt with. Revealed preferences are a pretty new idea, if I understand correctly, and people tend to get annoyed and defensive when it's brought up.

comment by DSimon · 2011-08-10T21:54:25.443Z · LW(p) · GW(p)

If he was tested, he might find out he had a disease, and then he'd be accused of knowingly endangering others if he didn't tell them about his disease. If he isn't tested, he'll only be accused of not finding out that information, which is often considered less serious. [emphasis added]

This strikes me as a major societal bug. I agree that it's in the short-term interest of the person to avoid getting tested. But many people avoiding testing, and thereby creating the impression that doing so is normal and okay, is what causes the silly situation in the first place.

A lot of the other scenarios you write about seem similar: strategic ignorance can be helpful for the individual in the short term, but a general policy of deliberate non-ignorance would be better for everybody overall.

comment by Strange7 · 2011-08-11T02:08:22.066Z · LW(p) · GW(p)

But then this can't be the sole explanation, because people are most likely to act when they're alone and there's nobody else around to know about your duty.

Actually I think concerns about plausible deniability explain that just fine. Someone who observes trouble while alone is in a fairly safe position, since they can withdraw at any point during investigation or intervention without any witnesses to accuse them of cowardice. That initial sense of safety motivates a greater degree of risk-taking, in the form of willingness to render assistance.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-08-11T07:26:28.368Z · LW(p) · GW(p)

An excellent point.

comment by Oscar_Cunningham · 2011-08-10T18:41:09.448Z · LW(p) · GW(p)

I always get confused by experiments involving how generous people are with money, because if I took 5/5 instead of 6/1 I'd be taking $3 from the experimenters! Who am I to say that they are less deserving than my co-experimentee?

Replies from: Alicorn, Friendly-HI, Khaled, None
comment by Alicorn · 2011-08-10T18:43:39.074Z · LW(p) · GW(p)

But if they don't spend their budget, their funding will be cut.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-08-10T18:46:21.142Z · LW(p) · GW(p)

Clearly the scientists must precommit to burning all the money they don't use.

Replies from: JGWeissman, Alicorn
comment by JGWeissman · 2011-08-10T19:09:09.019Z · LW(p) · GW(p)

Burning money just transfers the value of that money to holders of other money through deflation. So your generosity is still assigning some amount of value to your co-experimentee that otherwise would be distributed throughout the economy.

But if the reward was something of intrinsic value, precommitting to destroying unused rewards could work.

comment by Alicorn · 2011-08-10T18:49:11.982Z · LW(p) · GW(p)

I doubt they can document that as an expense within the parameters of their grants.

Replies from: DSimon
comment by DSimon · 2011-08-10T21:46:34.767Z · LW(p) · GW(p)

Unless they also had another grant to research the fumes emitted by burning money.

Replies from: Strange7, MartinB
comment by Strange7 · 2011-08-11T02:02:32.278Z · LW(p) · GW(p)

No, for that they'd need a fixed amount of fumes.

Replies from: DSimon
comment by DSimon · 2011-08-11T18:17:39.309Z · LW(p) · GW(p)

But for a fixed amount of fumes, they need not a fixed amount of money but a fixed amount of bills. If their surplus is greater than expected, the experimenters can simply burn larger denominations.

Replies from: MartinB
comment by MartinB · 2011-11-18T08:40:00.712Z · LW(p) · GW(p)

Burning money in sufficient sums is privately paid deflation. If they hand out their money to subjects, it gets back into the economic cycle. If they burn it, it is gone.

comment by MartinB · 2011-11-18T08:38:06.828Z · LW(p) · GW(p)

On South Park, the smell of money cured AIDS.

comment by Friendly-HI · 2011-08-11T18:56:50.438Z · LW(p) · GW(p)

Clearly, lesswrongers are not suitable for a very wide range of psychological experiments.

Most people never realize that they actually "take" the money from the researchers, who would probably put it to better use than Mr. Anonymous. So usually that set-up is no problem at all, and if someone as clever as you "gets it" and consequently ruins the experiment, that person is still just a tiny data-blip skewing the results by an infinitesimal amount, given a proper sample size.

comment by Khaled · 2011-08-13T02:11:03.419Z · LW(p) · GW(p)

You'd be taking $3 from the experimenters, but in return giving them data that represents your decision in the situation they are trying to simulate (which is a situation where only the two experimentees exist), though your point shows they didn't manage to set it up very accurately.

I realize it will be difficult to ignore the fact you mentioned once you notice it; I'm just pointing out that not noticing it can be more advantageous for the experimenter and yourself (not the other experimentee) - maybe another case of strategic ignorance.

comment by [deleted] · 2011-08-11T15:09:52.106Z · LW(p) · GW(p)

I think in these experiments you're not supposed to care about the experimenters. Ideally the experiments would be done with something non-zero-sum, rather than with money, but that's much harder to arrange, so instead they just rely on the unspoken convention that the experimenters' costs should be disregarded.

Saving them the $3 isn't doing them any favours. They're willing to pay $10 to know how you'd distribute chunks of utility. You're taking $7 and not answering the question they want to answer.

comment by [deleted] · 2011-08-10T13:33:12.428Z · LW(p) · GW(p)

This stands directly in the way of the maxim, "Whatever can be destroyed by the truth, should be."

Replies from: None, MarkusRamikin, Davorak
comment by [deleted] · 2011-08-11T20:45:55.297Z · LW(p) · GW(p)

The maxim is incorrect (or at least overly general to sound deeply wise).

Cultivating ignorance in an adversary or competitor can give you a comparative advantage. A child taking the advice of trained and informed mental health professionals that they are not ready to learn about something, say human sexuality, might preserve their emotional development. A person living under a totalitarian regime might do well to avoid sources of classified information, if learning that information makes them a threat to the state. Not telling my friend that their religious literature contains various harmful prescriptions makes sense until I can convince them that the literature is not morally infallible. Not reading the theorems for certain mathematical results increases my future expectation of fun, since I can then derive the results on my own. Privacy is often considered intrinsically valuable. Double-blind experimental procedure is used to filter out cognitive bias. For many more examples of hazardous information and strategic ignorance, see Nick Bostrom's draft paper on the subject here (.pdf).

Replies from: Desrtopa, None
comment by Desrtopa · 2011-08-13T22:36:20.024Z · LW(p) · GW(p)

A child taking the advice of trained and informed mental health professionals that they are not ready to learn about something, say human sexuality, might preserve their emotional development.

Perhaps, but I'm skeptical that anyone's emotional development is really harmed by learning about human sexuality at an early age provided it's not done in a particularly shocking way. Sure, plenty of kids find it discomforting, and don't want to think about their parents "doing it," but does it cause lasting psychological harm? Without actual research backing up that conclusion, my initial guess would be "almost never."

Replies from: Swimmer963, lessdazed, None
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-08-14T06:05:39.417Z · LW(p) · GW(p)

Data point: I used to take books out of the adult section of the library as a fairly young child (8-9 years old) and though I was a little baffled by the sexual content, I don't remember finding it at all disturbing. I've been told that I now have an unusually open attitude to sex, though I'm still a little baffled by the whole phenomenon.

comment by lessdazed · 2011-08-13T22:52:34.486Z · LW(p) · GW(p)

At most, people learn from school which of the things they've heard about sexuality aren't true. Peers and the media tell one what there is to know at a young age, if with some untrue things besides.

comment by [deleted] · 2011-08-14T01:49:54.964Z · LW(p) · GW(p)

It was more a template of hazardous information than an object level factual claim. Feel free to insert anything you would keep away from a child to protect their innocence. A number of shock sites come to mind.

comment by [deleted] · 2011-08-11T21:20:22.615Z · LW(p) · GW(p)

Yes, the maxim is overly broad. It is the nature of maxims.

EDIT: I understand where I erred now. In quoting EY, I accidentally claimed more than I thought I was. It's clear to me now that the above factors into two values: minimizing personal delusion, and minimizing other people's delusions. I hold the former, and I'm not as picky about the latter. (E.g., I have no problem refraining from going around disabusing people of their theism.)

I'm concerned that having done this was an ad-hoc justification after reading your laundry list of counter-examples, but I can't unread them, so...

comment by MarkusRamikin · 2011-08-10T13:45:59.070Z · LW(p) · GW(p)

I think the article is describing some existing mechanisms rather than prescribing what a rationalist should be doing.

Replies from: None
comment by [deleted] · 2011-08-10T13:48:46.714Z · LW(p) · GW(p)

These are examples of situations where it's advantageous to be ignorant of something.

But rationalists should win.

Replies from: MarkusRamikin, None
comment by MarkusRamikin · 2011-08-10T14:37:47.231Z · LW(p) · GW(p)

Good point. How would you resolve this contradiction, then?

Replies from: atucker, nerzhin
comment by atucker · 2011-08-10T14:54:08.717Z · LW(p) · GW(p)

I personally strive to know as much as I can about myself, even if it ultimately means that I believe a lot of less than flattering things.

Then I try to either use this knowledge to fix the problems, or figure out workarounds in presenting myself to others.

Some people are pretty okay with you knowing bad things about yourself if you wish that they weren't true. A lot of my closer friends are like that, so I can continue being totally honest with them. If someone isn't okay with that, then I either preempt all complaints by saying I messed up (many people find that less offensive than evasiveness), or avoid the conversations entirely.

In extreme cases, I'd rather know something about myself and hide it (either by omission or by lying) or just let other people judge me for knowing it.

One convenient thing about allowing yourself to learn inconvenient truths is that it's easier to realize when you're wrong, and should apologize. Apologies tend to work really well when you mean them, and understand why the other person is mad at you.

comment by nerzhin · 2011-08-10T16:13:04.988Z · LW(p) · GW(p)

There are three things you could want:

  1. You could want the extra dollar. ($6 instead of $5)

  2. You could want to feel like someone who cares about others.

  3. You could genuinely care about others.

The point of the research in the post, if I understand it, is that (many) people want 1 and 2, and often the best way to get both those things is to be ignorant of the actual effects of your behavior. In my view a rationalist should decide either that they want 1 (throwing 2 and 3 out the window) or that they want 3 (forgetting 1). Either way you can know the truth and still win.

Replies from: atucker
comment by atucker · 2011-08-10T16:23:12.101Z · LW(p) · GW(p)

The problem with strategic ignorance arises if the situation is something like 6/1 vs. 5/1000.

Most people care more about themselves than others, but I think that at that level most people would just choose to lose a dollar and give 999 more.

If you choose to not learn something, then you don't know what you're causing to happen, even if it would entirely change what you would want to do.

Replies from: JackEmpty
comment by JackEmpty · 2011-08-10T17:09:01.708Z · LW(p) · GW(p)

So it's not only strategic ignorance, but selective ignorance too. By which I mean to say it only applies highly selectively.

If you have enough knowledge about the situation to know it's going to be 6/1 and 5/5, or 5/1 and 6/5, then that's a pretty clear distinction. You have quite a bit of knowledge, enough to narrow it to only two situations.

But as you raised, it could be 6/1 & 5/5, or 6/1 & 5/1000, or 6/(.0001% increase in global existential risk) & 5/(.0001% increase in the chance of a singularity within your lifetime).

The implications of your point being, if you don't know what's at stake, it's better to learn what's at stake.

Replies from: atucker
comment by atucker · 2011-08-10T17:11:10.392Z · LW(p) · GW(p)

Yeah, pretty much.

comment by [deleted] · 2011-08-10T14:08:50.923Z · LW(p) · GW(p)

Then I guess sometimes, ---ists (as I like to refer to them) should remain purposefully ignorant, in contradiction to the maxim—if, that is, they actually care about the advantages of ignorance.

comment by Davorak · 2011-08-10T21:20:10.146Z · LW(p) · GW(p)

I do not see an obvious and direct conflict; can you provide an example?

Replies from: DSimon
comment by DSimon · 2011-08-10T21:56:16.995Z · LW(p) · GW(p)

The conflict seems to be that, according to the advice, a rationalist ought to (a) try to find out which of their ideas are false, and (b) evict those ideas. A policy of strategic ignorance avoids having to do (b) by deliberately doing a crappy job of (a).

Replies from: Davorak
comment by Davorak · 2011-08-10T22:49:20.420Z · LW(p) · GW(p)

In the few specific situations that I drilled down on, I found that "deliberately doing a crappy job of (a)" never came up. Sometimes, however, the choice was between doing (a)+(b) with topic (d) or doing (a)+(b) with topic (e), where it is unproductive to know (d). The choice is clearly to do (a)+(b) with (e), because it is more productive.

Then there is no conflict with "Whatever can be destroyed by the truth, should be.", because what needs to be destroyed is prioritized.

Can you provide a specific example where conflict with "Whatever can be destroyed by the truth, should be." is ensured?

Replies from: DSimon
comment by DSimon · 2011-08-10T23:31:01.158Z · LW(p) · GW(p)

Okay, I think this example from the OP works:

A person who wants to have multiple sexual partners may resist getting himself tested for sexual disease. If he was tested, he might find out he had a disease, and then he'd be accused of knowingly endangering others if he didn't tell them about his disease. If he isn't tested, he'll only be accused of not finding out that information, which is often considered less serious.

Let's call the person Alex. Alex avoids getting tested in order to avoid possible blame; assuming Alex is selfish and doesn't care about their partners' sexual health (or the knock-on effects of people in general not caring about their partners' sexual health) at all, then this is the right choice instrumentally.

However, by acting this way Alex is deliberately protecting an invalid belief from being destroyed by the truth. Alex currently believes or should believe that they have a low probability (at the demographic average) of carrying a sexual disease. If Alex got tested, then this belief would be destroyed one way or the other; if the test was positive, then the posterior probability goes way upwards, and if the test is negative, then it goes downwards a smaller but still non-trivial amount.
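To put rough numbers on that asymmetry (the base rate and test accuracy here are made up purely for illustration):

    prior = 0.05          # assumed P(disease) before testing
    sensitivity = 0.95    # assumed P(positive | disease)
    specificity = 0.95    # assumed P(negative | no disease)

    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)

    posterior_if_positive = sensitivity * prior / p_positive
    posterior_if_negative = (1 - sensitivity) * prior / (1 - p_positive)

    print(round(posterior_if_positive, 3))  # 0.5   -- a large jump upward from 0.05
    print(round(posterior_if_negative, 3))  # 0.003 -- a smaller but non-trivial drop

Either way, the belief "I'm at the demographic average" doesn't survive contact with the result.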

Instead of doing this, Alex simply acts as though they already knew the results of the test to be negative in advance, and even goes on to spread the truth-destroyable-belief by encouraging others to take it on as well. By avoiding evidence, particularly useful evidence (where by useful I mean easy to gather and having a reasonably high magnitude of impact on your priors if gathered), Alex is being epistemically irrational (even though they might well be instrumentally rational).

This is what I meant by "deliberately doing a crappy job of [finding out which ideas are false]".

This does bring up an interesting idea, though, which is that it might not be (instrumentally) rational to be maximally (epistemically) rational.

Replies from: Davorak
comment by Davorak · 2011-08-11T00:13:56.431Z · LW(p) · GW(p)

An additional necessary assumption seems to be that Alex cares about "Whatever can be destroyed by the truth, should be." He is selfish, but does his best to act rationally.

Let's call the person Alex. Alex avoids getting tested in order to avoid possible blame; assuming Alex is selfish and doesn't care about their partners' sexual health (or the knock-on effects of people in general not caring about their partners' sexual health) at all, then this is the right choice instrumentally.

Therefore Alex does not value knowing whether or not he has an STD, and instead pursues other knowledge.

However, by acting this way Alex is deliberately protecting an invalid belief from being destroyed by the truth. Alex currently believes or should believe that they have a low probability (at the demographic average) of carrying a sexual disease. If Alex got tested, then this belief would be destroyed one way or the other; if the test was positive, then the posterior probability goes way upwards, and if the test is negative, then it goes downwards a smaller but still non-trivial amount.

Alex is faced with the choice of getting an STD test and improving his probability estimate of his state of infection, or spending his time doing something he considers more valuable. He chooses not to get an STD test because the information is not very valuable to him, and focuses on more important matters.

Instead of doing this, Alex simply acts as though they already knew the results of the test to be negative in advance, and even goes on to spread the truth-destroyable-belief by encouraging others to take it on as well.

Alex is selfish and does not care that he is misleading people.

By avoiding evidence, particularly useful evidence (where by useful I mean easy to gather and having a reasonably high magnitude of impact on your priors if gathered), Alex is being epistemically irrational (even though they might well be instrumentally rational).

Avoiding the evidence would be irrational. Focusing on more important evidence is not. Alex is not doing a "crappy" job of finding out what is false; he has just maximized finding out the truth he cares about.

I tried to present a rational, selfish, uncaring Alex who chooses not to get an STD test even though he cares deeply about "Whatever can be destroyed by the truth, should be", as far as his personal beliefs are concerned.

Replies from: rocurley, DSimon
comment by rocurley · 2011-08-11T15:32:17.819Z · LW(p) · GW(p)

Avoiding the evidence would be irrational. Focusing on more important evidence is not.

I disagree. In the least convenient world where the STD test imposes no costs on Alex, he would still be instrumentally rational to not take it. This is because Alex knows the plausibility of his claims that he does not have an STD will be sabotaged if the test comes out positive, because he is not a perfect liar.

(I don't think this situation is even particularly implausible. Some situation at a college where they'll give you a cookie if you take an STD test seems quite likely, along the same lines as free condoms.)

Replies from: Davorak
comment by Davorak · 2011-08-11T16:25:52.854Z · LW(p) · GW(p)

I disagree. In the least convenient world where the STD test imposes no costs on Alex, he would still be instrumentally rational to not take it. This is because Alex knows the plausibility of his claims that he does not have an STD will be sabotaged if the test comes out positive, because he is not a perfect liar.

In a world where STD tests cost absolutely nothing, including time, effort, and thought, there would be no excuse not to have taken a test, and I do not see a method for generating plausible deniability by not knowing.

Some situation at a college where they'll give you a cookie if you take an STD test seems quite likely

Your situation does not really count as no cost, though. In a world in which you must spend effort to avoid getting an STD test, it seems unlikely that plausible deniability can be generated in the first place.

You are correct: "Avoiding the evidence would be irrational." does seem to be incorrect in general, and I generalized too strongly from the example I was working on.

Though this does not seem to answer my original question: is there, by definition, a conflict between "Whatever can be destroyed by the truth, should be." and generating plausible deniability? The answer I still come up with is no conflict. Some truths should be destroyed before others, and this allows for some plausible deniability for untruths low in priority.

Replies from: Desrtopa, MarkusRamikin
comment by Desrtopa · 2011-08-13T22:44:08.544Z · LW(p) · GW(p)

In a world were STD tests cost absolutely nothing, including time, effort, thought, there would be no excuse to not have taken a test and I do not see a method for generating plausible deniability by not knowing.

That's not really the least convenient possible world though, is it? The least convenient possible world is one where STD tests impose no additional cost on him, but other people don't know this, so he still has plausible deniability. Let's say that he's taking a sexuality course where the students are assigned to take STD tests, or if they have some objection, are forced to do a make up assignment which imposes equivalent inconvenience. Nobody he wants to have sex with is aware that he's in this class or that it imposes this assignment.

comment by MarkusRamikin · 2011-08-11T18:36:39.183Z · LW(p) · GW(p)

Some situation at a college where they'll give you a cookie if you take an STD test seems quite likely

Your situation does not really count as no cost, though. In a world in which you must spend effort to avoid getting an STD test, it seems unlikely that plausible deniability can be generated in the first place.

I don't see how the situation is meaningfully different from no cost. "I couldn't be bothered to get it done" is hardly an acceptable excuse on the face of it, but despite that people will judge you more harshly when you harm knowingly rather than when you harm through avoidable ignorance, even though that ignorance is your own fault. I don't think they do so because they perceive a justifying cost.

I think the point that others have been trying to make is that gaining the evidence isn't merely of lower importance to the agent than some other pursuits, it's that gaining the evidence appears to be actually harmful to what the agent wants.

Replies from: Davorak
comment by Davorak · 2011-08-11T22:08:24.971Z · LW(p) · GW(p)

I think the point that others have been trying to make is that gaining the evidence isn't merely of lower importance to the agent than some other pursuits, it's that gaining the evidence appears to be actually harmful to what the agent wants.

Yes; I proposed the alternative situation, where the evidence is just considered to be of lower value, as an alternative that produces the same result.

I don't see how the situation is meaningfully different from no cost. "I couldn't be bothered to get it done" is hardly an acceptable excuse on the face of it

At zero cost (in the economic sense, not in the monetary sense) you cannot say it was a bother to get it done, because a bother would be a cost.

comment by DSimon · 2011-08-11T03:01:15.905Z · LW(p) · GW(p)

Avoiding the evidence would be irrational. Focusing on more important evidence is not.

This is a very good point. We cannot gather all possible evidence all the time, and trying to do so would certainly be instrumentally irrational.

Is the standard then that it's instrumentally rational to prioritize Bayesian experiments by how likely their outcomes are to affect one's decisions?

Replies from: Davorak
comment by Davorak · 2011-08-11T13:49:25.571Z · LW(p) · GW(p)

Is the standard then that it's instrumentally rational to prioritize Bayesian experiments by how likely their outcomes are to affect one's decisions?

It weighs into the decision, but it seems like it is insufficient by itself. An experiment can change my decision radically but be on unimportant topics, topics that do not affect goal-achieving ability. It is possible to imagine spending one's time on experiments that change one's decisions and never getting close to achieving any goals. The vague answer seems to be to prioritize by how much the experiments are likely to help achieve one's goals.

comment by SilasBarta · 2011-08-14T19:29:11.504Z · LW(p) · GW(p)

Negative knowledge values imply reflective or dynamic inconsistency in one's decision theory. So in each of these cases where you benefit from well-placed ignorance, something in the system has a major value inconsistency. It may very well be optimal for one person within the system to be strategically ignorant, but the system has some crucial flaw nonetheless.
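To sketch where that comes from in the single-agent case (a toy example with made-up numbers, not anything from the post): a free observation can never lower an expected-utility maximizer's expected utility, because the agent can always act as if it hadn't looked.

    # Toy decision problem: two states, two actions, one free noisy signal.
    p_state = {"s1": 0.3, "s2": 0.7}
    utility = {("a1", "s1"): 10, ("a1", "s2"): 0,
               ("a2", "s1"): 2,  ("a2", "s2"): 5}
    p_signal = {"s1": {"hi": 0.8, "lo": 0.2},
                "s2": {"hi": 0.1, "lo": 0.9}}

    def eu_ignorant():
        """Best expected utility when acting without seeing the signal."""
        return max(sum(p_state[s] * utility[(a, s)] for s in p_state)
                   for a in ("a1", "a2"))

    def eu_informed():
        """Expected utility when picking the best action after each signal."""
        total = 0.0
        for sig in ("hi", "lo"):
            p_sig = sum(p_state[s] * p_signal[s][sig] for s in p_state)
            posterior = {s: p_state[s] * p_signal[s][sig] / p_sig for s in p_state}
            total += p_sig * max(sum(posterior[s] * utility[(a, s)] for s in p_state)
                                 for a in ("a1", "a2"))
        return total

    print(eu_informed() - eu_ignorant())  # never negative for this kind of agent

So when ignorance looks valuable, the extra value has to be coming from how the rest of the system responds to what you're seen to know, which is the inconsistency being pointed at.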

For example, any time you have people deliberately becoming ignorant of an injured person needing assistance, the system is probably excessively punishing (or under-rewarding) those who happen to be nearby. Counterintuitively, then, it may be better to have blanket immunity for those nearby who fail to act, so as to increase the chance that people who have something to offer will be around at all.

A similar analysis applies to the trolley problem, much to my consternation: if people regard it as moral to "shuffle" risk around (such as by diverting a trolley from a dangerous to a safe area), then people have to overspend on risk avoidance, thereby reducing the chance that anyone will be around at all to help in such situations, and reducing the total societal resources available to manage risk in the first place.

Replies from: Pavitra
comment by Pavitra · 2011-08-16T21:02:58.829Z · LW(p) · GW(p)

The negative value isn't on actual knowledge, it's on perceived knowledge. Genuine ignorance is advantageous only in the special case where knowledge is difficult to hide.

Replies from: SilasBarta
comment by SilasBarta · 2011-08-17T00:21:43.678Z · LW(p) · GW(p)

The same argument applies just as well to perceived knowledge as evidence of a reflective inconsistency in the system. To the extent that someone benefits from spending resources misrepresenting their state of knowledge, the system is reflectively inconsistent.

Replies from: Pavitra
comment by Pavitra · 2011-08-17T01:06:30.991Z · LW(p) · GW(p)

I have about 85% confidence that what you're saying is correct, but I can't quite grasp it enough to verify it independently. Maybe I need to reread the TDT paper.

Replies from: SilasBarta
comment by SilasBarta · 2011-08-17T04:02:52.806Z · LW(p) · GW(p)

Yes, that's where I got my insight into the reason why reflectively consistent decision theories don't have negative knowledge values.

comment by mare-of-night · 2013-08-26T22:41:41.360Z · LW(p) · GW(p)

I've recently started to have real secrets to keep (which I'm not used to), and it's making me really appreciate when people help me keep plausible deniability.

I think I remember people on this site saying that the most important part of a secret is that a secret exists. How true that is seems to depend on whether the person who learns that a secret exists wants to cooperate with you on keeping the secret (including from themselves), and how good they are at doing that.

comment by Solvent · 2011-08-14T09:03:38.852Z · LW(p) · GW(p)

There should be a link to Nick Bostrom's taxonomy of information risks somewhere in your article. It's here.

comment by ChristianKl · 2015-03-24T18:39:51.872Z · LW(p) · GW(p)

In general, people (and societies) often believe that if you know about something bad, you have a duty to stop it. If you don't know about something, then obviously you can't be blamed for not stopping it.

I don't think that's very obvious. We often transfer responsibility to certain roles. The president of a country is supposed to be responsible for what the bureaucrats under him are doing whether or not he knows what they are doing.

In general, it's a useful social norm to hold people accountable for the promises they make, whether or not they can tell you a story of how they didn't have the proper information to fulfill the promise.

comment by Isaac King (KingSupernova) · 2021-10-15T00:06:43.617Z · LW(p) · GW(p)

I think you're confusing ignorance with other people's beliefs about the agent's ignorance. In your example of the police or the STD test, there is no benefit gained by that person being ignorant of the information. There is, however, a benefit to other people thinking the person was ignorant. If someone is able to find out whether they have an STD without anyone else knowing they've had that test, that's only a benefit for them. (Not including the internal cognitive burden of having to explicitly lie.)

comment by AshwinV · 2014-09-08T14:54:36.111Z · LW(p) · GW(p)

Any tips on how to overcome these effects? In other words, do you have a suggestion for a kind of behaviour/rationality technique by which one can be trained to overcome the instinct to prevent oneself from finding out certain information? Generally speaking, knowing something is (usually) better than not knowing, even if the situation is potentially high-cost on both of the options that you mentioned (punish or ignore). To illustrate with an example: if I find out that my boss is a child molester, but I am in desperate need of my current job, I may be less inclined to report it to an authority even if my society (correctly) imposes a strong taboo on child molestation. Nevertheless, knowing still benefits me, as I will ensure that my children never meet my employer.

The example that I provided may sound a little beside the point, but I am not disagreeing with the conclusion of the post. I am merely trying to point out that the "not knowing"-inducing behavior may be proving costly, and extremely so. If I were to try and construct a problem based on a topic a little closer to the heart of the LW zeitgeist, it would probably sound something like this:

Suppose I am a retired man of 85, truly a rags-to-riches story, who has overcome several setbacks in life to acquire a tremendous amount of wealth, which I now plan on using to support an altruistic but evidently jingoistic cause. I plan to give millions of dollars away to support only a certain ethnic group (at this point I'm thinking more along the lines of people of my sub-caste, as opposed to people born in country X or native speakers of language Y). This imaginary version of me would surely be hesitant to educate himself about the prospect of machine intelligence. He would surely be reluctant to part with his hard-earned money (which was earned by countless meetings with clients and sleepless nights, rather than by buying land and finding oil beneath it). This is particularly tragic because even the members of my pet ethnic group are more likely to benefit from even a "less than singularity" advance in technology than from using my vast (but honestly earned) wealth to set up more monasteries, academies and other institutions (which generally make use of buildings and marketing campaigns).

One obvious reply is to brute-force the problem and simply force yourself, and the members of society whom you have influence over, to learn about everything. This may not be an effective solution, as the behavioral pattern of strategic ignorance is instinctive.

But if the reasons as to why this is so can be enumerated, then perhaps a strategy can be formed regarding how to tackle this rather critical issue.