[SEQ RERUN] The Bad Guy Bias

post by MinibearRex · 2012-12-21T03:44:12.843Z · LW · GW · Legacy · 16 comments

Today's post, The Bad Guy Bias, was originally published on December 9, 2008. A summary:

 

Humans have a tendency to perceive tragedies caused by agents as worse than tragedies caused by other sources. This can cause us to, among other things, worry more about future catastrophes as a result of malevolent agents than as a result of unplanned events.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was True Sources of Disagreement, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

16 comments

Comments sorted by top scores.

comment by Eugine_Nier · 2012-12-21T05:15:39.220Z · LW(p) · GW(p)

This is actually a reasonable strategy. A precommitment to revenge is useful, but there's no point getting revenge on nature.

Replies from: almkglor
comment by almkglor · 2012-12-21T08:15:04.364Z · LW(p) · GW(p)

I suppose that works for pre-scientific, pre-rational thinking: back when you couldn't do a thing about nature, but you could do a thing about that schmuck looking at you funny.

However, now, as humanity's power grows, we can actually do something about nature: we can learn to predict earthquakes, build structures strong enough to withstand calamity, vaccinate against pestilence, and so on.

So the bias, I suppose, arises from evolution being too slow to keep up with human progress.

Replies from: Emile
comment by Emile · 2012-12-21T20:45:10.477Z · LW(p) · GW(p)

I think you're missing Eugine's point.

Consider someone who may or may not rape your daughter - the probability that he does so is a function of how likely it is that you'll spend your days and nights hunting him down in order to slowly torture him to death, with no concern for law or personal safety.

Consider an earthquake that may or may not destroy your house. The probability that it does so is independent of what you precommit to doing afterwards.

Sure, in both cases we can prevent the tragedy in other ways, but that's not the main issue.

(Edit: Oops, thanks Eugine)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-22T04:52:04.690Z · LW(p) · GW(p)

Correct.

BTW, I think you left out a word like "rape" or "kill" from the first example.

comment by buybuydandavis · 2012-12-21T05:44:50.475Z · LW(p) · GW(p)

worry more about future catastrophes as a result of malevolent agents than as a result of unplanned events.

Which makes sense to me, since the universe isn't out to get you, but malevolent agents are.

Replies from: Desrtopa
comment by Desrtopa · 2012-12-22T04:18:59.564Z · LW(p) · GW(p)

If the uncaring universe represents a greater level of preventable threat than malevolent agents, does it really matter?

Replies from: Alsadius, buybuydandavis
comment by Alsadius · 2012-12-22T08:25:38.637Z · LW(p) · GW(p)

Depends how easy the threat is to prevent. It's much easier to swear vendetta than it is to engineer flood barriers and quakeproof buildings.

comment by buybuydandavis · 2012-12-22T06:25:40.063Z · LW(p) · GW(p)

The uncaring universe may happen to be a greater threat, but the malevolent agent is trying to be a threat; it's targeting you.

Replies from: Desrtopa
comment by Desrtopa · 2012-12-22T15:58:18.907Z · LW(p) · GW(p)

Yes, but if the fact that it's trying to be a threat doesn't make it as great a one, why should it take priority? Trying to be a threat is what makes a malevolent agent a threat at all; it probably wouldn't be one otherwise.

Is there any additional utility in stopping malevolent agents from causing the exact same amount of harm as nonmalevolent ones? I don't see why there should be.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-12-22T20:38:14.454Z · LW(p) · GW(p)

As others have pointed out, malevolent agents can be deterred by signaling a commitment to revenge.

Malevolent agents have a preference for harming you. Malevolent agents probably have some form of intelligence, so that they can get better at harming you.

If you're doing a real calculation, it's marginal future harm reduction minus response cost with some time discount function. Obviously, there's no guarantee that you should choose to respond to the malevolent agent threat over the uncaring universe threat. The factors indicated are all of the "all other things being equal" sort.
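
A minimal sketch of the calculation described above ("marginal future harm reduction minus response cost, with some time discount function"); the function name and all the numbers are my own hypothetical illustration, not anything from the thread:

```python
def net_benefit(harm_reduction_per_year, response_cost, discount_rate, years):
    """Discounted sum of yearly harm reduction, minus the up-front response cost."""
    discounted = sum(
        harm_reduction_per_year / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )
    return discounted - response_cost

# Hypothetical comparison: deterring this year's malevolent agent (short-lived payoff)
# vs. quake-proofing against the uncaring universe (smaller yearly payoff, lasts longer).
deter_agent = net_benefit(harm_reduction_per_year=10, response_cost=30, discount_rate=0.05, years=5)
fight_nature = net_benefit(harm_reduction_per_year=4, response_cost=20, discount_rate=0.05, years=50)
print(deter_agent, fight_nature)  # whichever is larger wins, all other things being equal
```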

I'll give you factors in favor of fighting the uncaring universe - those threats won't be signaled away, and likely have more universal application in time and space. Fighting malevolent agents takes care of this agent today. There will be more tomorrow. Overcoming the inconveniences of gravity pays dividends forever. Hail Science!

The thought occurred to me while watching Sherlock (as kindly recommended by others here). If Sherlock and Moriarty are so "bored" with the challenges presented by their simian neighbors, why don't they fight Death or engage in some other science project to make themselves useful? If they're such smarty boys, why don't they take on the Universe instead of slightly evolved primates?

Replies from: Desrtopa
comment by Desrtopa · 2012-12-22T22:13:57.607Z · LW(p) · GW(p)

Malevolent agents have a preference for harming you. Malevolent agents probably have some form of intelligence, so that they can get better at harming you.

In practice, though, unless it's an actual war, they usually don't. Even if they're not responded to with swift action, gangs and murderers and so on will generally not evolve into supergangs and mass murderers.

The fact that malevolent entities can take countermeasures against being thwarted, though, will tend to decrease the marginal utility of investing in trying to stop them. Say you try to keep weapons out of the hands of criminals, but they change how they get their weapons and only become slightly less well armed on average. If you were faced with another, nonsentient threat, one which caused as much harm on average but wouldn't take countermeasures against your attempts to resist it, you'd be likely to get much better results by addressing that problem instead.

Of course, sometimes other thinking agents do pose a higher priority threat, and the fact that they respond to signalling and game theory incentives can tip the scales in favor of addressing them over other threats, but that doesn't mean that we evaluate those factors in anything close to a rational manner.

comment by Luke_A_Somers · 2012-12-21T15:43:38.497Z · LW(p) · GW(p)

The coincidence of this being rerun one week after a major school shooting is... so remarkable that I'm surprised no one had noted it yet.

Replies from: Desrtopa
comment by Desrtopa · 2012-12-22T04:21:25.122Z · LW(p) · GW(p)

I don't see how it's remarkable. Someone has to decide which sequence articles to rerun when; it's not as if the sequence reruns are random and independent of human intervention.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-22T04:54:40.736Z · LW(p) · GW(p)

Aren't the sequence reruns in chronological order?

Replies from: Desrtopa
comment by Desrtopa · 2012-12-22T05:18:14.973Z · LW(p) · GW(p)

It seems they are, it looks like I was mistaken.

comment by [deleted] · 2012-12-21T04:31:58.164Z · LW(p) · GW(p)

This bias seems a lot like Hanlon's razor. Perhaps it's just a consequence of growing human intelligence: as one becomes smarter, one expects the rest of the world to become smarter as well (lest one consider oneself a statistical outlier), and one begins to fear the misuse of that intelligence by others.