Punishing future crimes

post by Bongo · 2011-01-28T21:00:27.198Z · LW · GW · Legacy · 67 comments

Here's an edited version of a puzzle from the book "Chuck Klosterman IV" by Chuck Klosterman.

It is 1933. Somehow you find yourself in a position where you can effortlessly steal Adolf Hitler's wallet. The theft will not affect his rise to power, the nature of WW2, or the Holocaust. There is no important identification in the wallet, but the act will cost Hitler forty dollars and completely ruin his evening. You don't need the money. The odds that you will be caught committing the crime are negligible. Do you do it?

When should you punish someone for a crime they will commit in the future? Discuss.

comment by Quirinus_Quirrell · 2011-01-28T22:10:16.126Z · LW(p) · GW(p)

When should you punish someone for a crime they will commit in the future?

Easy. When they can predict you well enough and they think you can predict them well enough that if you would-counterfactually punish them for committing a crime in the future, it influences the probability that they will commit the crime by enough to outweigh the cost of administering the punishment times the probability that you will have to do so. Or when you want to punish them for an unrelated reason and need a pretext.

Not every philosophical question needs to be complicated.
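Quirrell's condition is just an expected-utility comparison. Here is a minimal sketch of it; every number and name below is purely hypothetical, chosen only to illustrate the inequality:

```python
# Sketch of the pre-punishment condition above (all quantities hypothetical
# units of utility; probabilities are the criminal's offense probabilities
# with and without a credible counterfactual punishment threat).

def should_punish(p_crime_without_threat, p_crime_with_threat,
                  harm_of_crime, cost_of_punishing):
    """Return True iff maintaining the punishment threat is worth it:
    the deterrence benefit outweighs the expected cost of administering
    the punishment times the probability you will have to do so."""
    deterrence_benefit = (p_crime_without_threat - p_crime_with_threat) * harm_of_crime
    expected_cost = p_crime_with_threat * cost_of_punishing  # you only pay if they still offend
    return deterrence_benefit > expected_cost

# Example: the threat cuts the offense probability from 0.5 to 0.1, the crime
# does 100 units of harm, and punishing costs 20 units.
print(should_punish(0.5, 0.1, 100, 20))  # True: benefit 40 vs expected cost 2
```

Note this only covers Quirrell's first clause; the "pretext" clause is outside the calculation by construction.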

comment by Bongo · 2011-01-28T22:58:31.336Z · LW(p) · GW(p)

When they can predict you well enough and they think you can predict them well enough that if you would-counterfactually punish them for committing a crime in the future, it influences the probability that they will commit the crime by enough to outweigh the cost of administering the punishment times the probability that you will have to do so.

I don't understand one part. How do you determine the probability that you will have to administer the punishment?

comment by Jonii · 2011-01-29T16:17:53.903Z · LW(p) · GW(p)

So you can avoid being punished by not predicting potential punishers well enough, or by deciding to do something regardless of punishments you're about to receive? I'm not sure that's good.

comment by TheOtherDave · 2011-01-29T16:35:01.227Z · LW(p) · GW(p)

Can you say more about why you don't think it's good? I can think of several different reasons, some more valid than others, and the context up to this point doesn't quite constrain them.

comment by knb · 2011-01-29T06:10:42.853Z · LW(p) · GW(p)

It is interesting that utilitarian ethics would allow you to kill Hitler, but not steal his wallet.

comment by Vladimir_Nesov · 2011-01-29T10:24:32.905Z · LW(p) · GW(p)

Proper tool for the job!

comment by Strange7 · 2011-02-02T02:52:22.737Z · LW(p) · GW(p)

Utilitarian ethics might allow you to steal his wallet and spend the money on feeding orphans, provided you also killed him in the process.

comment by ArisKatsaris · 2011-01-31T14:37:06.145Z · LW(p) · GW(p)

They can also allow you to preemptively invade countries, but not to molest kittens.

comment by NihilCredo · 2011-01-29T03:28:17.285Z · LW(p) · GW(p)

Punishment is pointless if you cannot expect anyone (who could potentially commit a certain undesirable act) to ever realise any connection between the undesirable act and its punishment. This seems to me to be the case in this scenario as described, so pre-punishing Hitler would be a waste of resources.

If we, however, imagine that we're living in the future, and the Time Travel Licensing Agency has declared it legal to go around vexing would-be serious criminals, then stealing Hitler's wallet and publishing the fact would help increase the impact of the TTLA's deterrent. But this is really a scenario where time travelling doesn't add anything interesting to the idea.

(Small aside: I'd replace stealing Hitler's wallet with, say, slashing his bike tires, to more effectively take the issue of guilt-free personal profit out of the equation without invoking the "you don't need the money" clause.)

comment by Eugine_Nier · 2011-01-29T04:06:03.071Z · LW(p) · GW(p)

Well, if having many time-travelers pre-punishing crimes is useful, then presumably having a single time-traveler pre-punish one crime is worth some fraction of that utility.

comment by NihilCredo · 2011-01-29T04:14:35.514Z · LW(p) · GW(p)

Only if (a) people know you did it, and why; and (b) you're not a one-shot time traveller, so that there is the potential for this kind of pre-punishment to happen again.

comment by Eugine_Nier · 2011-01-29T04:49:28.565Z · LW(p) · GW(p)

Only if (a) people know you did it, and why; and

It can still be effective if they don't, as I discuss here.

(b) you're not a one-shot time traveler, so that there is the potential for this kind of pre-punishment to happen again.

[Insert standard TDT argument about how by doing this, you're acausally increasing the number of other time traveling pre-punishers.]

That said, your main point, that the effectiveness of this scales non-linearly with the number of punishers, is correct. However, this appears to be more of an acausal coordination problem.

comment by NihilCredo · 2011-01-29T05:06:02.726Z · LW(p) · GW(p)

It can still be effective if they don't, as I discuss here.

Your argument seems sound - basically, if criminals get enough apparently "random" misfortunes, people will eventually associate criminal = unlucky loser and be somewhat discouraged from that path, am I getting this right?

I would just note that "having a single time-traveler pre-punish one crime is worth some fraction of that utility" doesn't really seem to fit this system, since a single pre-punishment falls well under the 'random noise' threshold so its deterrence effect is effectively zero. (This isn't really a factual disagreement, it just depends on how you interpret "fraction of utility" in a context where one act is useless but, say, a thousand are useful; is the single act's utility zero or k/1000? Personally, I straight-up refuse to treat utility as a scalar quantity.)

[Insert standard TDT argument about how by doing this, you're acausally increasing the number of other time traveling pre-punishers.]

I estimate acausal relationships between the behaviours of different individuals to be negligible.

comment by Eugine_Nier · 2011-01-29T05:37:10.580Z · LW(p) · GW(p)

I would just note that "having a single time-traveler pre-punish one crime is worth some fraction of that utility" doesn't really seem to fit this system, since a single pre-punishment falls well under the 'random noise' threshold so its deterrence effect is effectively zero.

There is no sharp "random noise threshold". A single act has some positive probability of increasing the amount of belief someone assigns to the proposition "crime doesn't pay"; thus the expected value of the change is positive.

(This isn't really a factual disagreement, it just depends on how you interpret "fraction of utility" in a context where one act is useless but, say, a thousand are useful; is the single act's utility zero or k/1000? Personally, I straight-up refuse to treat utility as a scalar quantity.)

That's why I called this an acausal coordination problem.

comment by Dorikka · 2011-01-28T21:24:46.937Z · LW(p) · GW(p)

So, to restate, you can inflict negative utility on someone who will later inflict negative utility on others in the future, but the former will not prevent the latter. Do you do it?

Uh, no. I usually try not to make a practice of being cruel when nothing positive comes from it.

You only punish people for crimes so future crimes don't happen. If no one can see the correlation between the crime and your punishment, it does no good unless you are actually preventing them from committing a future crime.

comment by Bongo · 2011-01-28T22:05:43.051Z · LW(p) · GW(p)

And for the case of punishing past crimes:

you can inflict negative utility on someone who earlier inflicted negative utility on others in the past, but the former will not prevent the latter.

I suppose you oppose that too?

comment by NihilCredo · 2011-01-29T03:38:56.095Z · LW(p) · GW(p)

There's no point in punishing a (past) undesirable act if nobody who could potentially commit that act is going to become aware of the punishment you inflicted.

Prisons are only a deterrent if people know that they exist.

comment by Dorikka · 2011-01-29T01:17:23.696Z · LW(p) · GW(p)

Not necessarily -- see the last part of my comment. Punishing past crimes often does serve to deter future crime because it provides the general population with evidence that people will get punished for their crimes. This much, I think, is obvious -- a nation in which modern law is enforced will beget less violence within its borders (all else being equal) than a pure anarchy would.

comment by benelliott · 2011-01-28T23:09:40.292Z · LW(p) · GW(p)

It will prevent infliction of future negative utility (or at least it is intended to).

comment by Bongo · 2011-01-28T23:30:51.677Z · LW(p) · GW(p)

So will punishing future crimes. If people see that criminals have a history of being punished in their past "for no reason", they won't want to become criminals as much.

comment by ata · 2011-01-28T23:54:22.734Z · LW(p) · GW(p)

If it appears to be happening "for no reason", most people will infer a much more plausible causal explanation than time-traveling punishment — for instance, that this type of hardship contributes to people becoming criminals.

comment by Dorikka · 2011-01-29T01:20:26.963Z · LW(p) · GW(p)

Upvoted. If my wallet is stolen, there has got to be an amazing amount of evidence before the hypothesis 'a time traveler is punishing me for future crimes' would even enter my consciousness. I think that this actually might have an Occam prior low enough that I should start seriously doubting my own sanity before I assign a significant probability to it.

comment by Broggly · 2011-01-31T18:49:35.230Z · LW(p) · GW(p)

Wasn't there some Twilight Zone episode about this, where a Jewish time traveller used a mind-control device to torment Hitler, which caused his anti-semitism?

comment by Bongo · 2011-01-29T01:20:41.013Z · LW(p) · GW(p)

Of course it wouldn't be time travel. People who were especially good at predicting other people's life paths would just do so and punish accordingly, or something.

Edit: I accept your point that future consequences don't suffice to justify time-travelling punishment.

comment by TheOtherDave · 2011-01-28T23:48:46.195Z · LW(p) · GW(p)

Of course, this arrangement doesn't even require the ability to predict the future (or travel into the past), as long as you pick people to punish who are deterred from crime solely by the threat of punishment. After all, once they've been punished for a future crime, they might as well commit it.

comment by ata · 2011-01-28T21:29:19.005Z · LW(p) · GW(p)

Agreed.

comment by Bongo · 2011-01-28T21:00:32.633Z · LW(p) · GW(p)

My idea is that you should punish them if they are the kind of person who, before committing a crime, considers whether they could have avoided any past punishments by being the kind of person who doesn't commit the crime.

comment by Jack · 2011-01-28T22:29:18.592Z · LW(p) · GW(p)

More people will think like this if there is a large body of stories involving great criminals with an unlikely frequency of minor harms and inconveniences in their past. Therefore, the set of ideal punishments for future crimes should be increased to include instances where the punishment is likely to be recorded for posterity (the exact set would be determined by the relative harm of the punishment and the contribution the punishment makes to the body of stories). If you would be the only person ever in the position to punish for future crimes, then the body of stories will never happen and so the punishment is not worth it. If, on the other hand, many people will be in your position but those people may or may not carry out the punishment, then it may make sense to choose as if you are making the decision for all those in that position.

This is a really complex calculation, though, and depends on things like the frequency with which future punishment will take place (given that you choose to carry it out in this instance), the direct harm of the punishment, the effect of punishment in spreading the belief in past punishments, and the acausal deterrence effect the consideration of past punishments will have for those considering whether or not to commit a crime. Without any of that information the answer could be anywhere between "never" and "always".

comment by Normal_Anomaly · 2011-01-29T01:02:08.014Z · LW(p) · GW(p)

More people will think like this if there is a large body of stories involving great criminals with an unlikely frequency of minor harms and inconveniences in their past.

In this hypothetical society, how would/should you react if you were the target of an unlikely frequency of minor harms?

comment by Bongo · 2011-01-29T01:09:21.500Z · LW(p) · GW(p)

Consider Newcomb's Problem with transparent boxes. Even if you see that box B is empty, you should still one-box. For the same reason, even if you're getting punished, you should still not become a criminal - and not out of moral concerns but for your own benefit.

comment by NihilCredo · 2011-01-29T03:42:51.467Z · LW(p) · GW(p)

Can you explain and/or link this analysis of transparent Newcomb? It looks very wrong to me.

comment by Bongo · 2011-01-29T13:57:30.384Z · LW(p) · GW(p)

Depends on what about it seems wrong. Do you disagree that you should one-box if you see that box B is empty? It's unintuitive, but it's the strategy that UDT straightforwardly yields and that you would want to precommit to. Here's one bit of intuition: by being the kind of person that one-boxes even with an empty box, you force Omega to never give you an empty box, on pain of being wrong. Maybe these are relevant too.

comment by NihilCredo · 2011-01-29T22:00:54.653Z · LW(p) · GW(p)

OK, it's like counterfactual mugging: right now (before Omega shows up) I want to be the kind of person who always one-boxes even if the box is empty, so that I'll never get an empty box. That is the rational and correct choice now.

This is not, however, the same thing as saying that the rational choice for someone staring at an empty B box is to one-box. It's a scenario that will never materialise if you don't screw up, but if you take it as the hypothesis that you do find yourself in that scenario (because, for example, you weren't rational before meeting Omega, but became perfectly rational afterwards), the rational answer for that scenario is to two-box. Yes, it does mean you screwed up by not wanting it sincerely enough, but it's the question that assumes you've already screwed up.

Translating this to the pre-punishment scenario, what this means is that - assuming a sufficient severity of average pre-punishment - a rational person will not want to ever become a criminal. So a rational person will never be pre-punished anyway. But if Normal_Anomaly asks: "Assume that you've been pre-punished; should you then commit crimes?" the answer is "Yes, but note that your hypothesis can only be true if I hadn't been perfectly rational in the past".

(Separate observation: Omega puts a million dollars in B iff you will one-box. Omega then reads my strategy: "if box B is opaque, or transparent and full, I will one-box; if it is transparent and empty, I two-box". If box B is opaque, this forces Omega to put the money there. But if B is transparent, Omega will be right no matter what. Are we authorised to assume that Omega will choose to flip a coin in this scenario, or should we just say that the problem isn't well-posed for a transparent box? I'm leaning towards the latter. If the box is transparent and your choice is conditional on its content, you've effectively turned Omega's predictive ability against itself: you'll one-box iff Omega puts a million dollars there iff you one-box, loop.)

comment by Bongo · 2011-01-30T01:25:36.551Z · LW(p) · GW(p)

Transparent Newcomb is well-posed but, I admit, underspecified. So add this rule:

• Omega fills box B if you would one-box no matter what, leaves box B empty if you would two-box no matter what, flips a coin if you would one-box given a full box B and two-box given an empty box B, and doesn't invite you to his games in the first place if you would two-box given a full box B and one-box given an empty box B.
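The added rule is a mapping from the agent's conditional strategy to a box state, so it can be written out exhaustively. A sketch, with hypothetical names (none of this encoding is canonical):

```python
# Sketch of the amended transparent-Newcomb rule. A strategy is a pair:
# what you do on seeing a FULL box B, and what you do on seeing an EMPTY box B.

ONE_BOX, TWO_BOX = "one-box", "two-box"

def omega_fills_b(if_full, if_empty):
    """Map the agent's conditional strategy to Omega's behaviour."""
    if if_full == ONE_BOX and if_empty == ONE_BOX:
        return "full"        # one-boxes no matter what
    if if_full == TWO_BOX and if_empty == TWO_BOX:
        return "empty"       # two-boxes no matter what
    if if_full == ONE_BOX and if_empty == TWO_BOX:
        return "coin-flip"   # strategy conditions on the contents
    return "not invited"     # the anti-correlated strategy: no game at all

print(omega_fills_b(ONE_BOX, ONE_BOX))  # full
```

This makes explicit why the unconditional one-boxer never faces an empty box: the only strategies that ever see one are those that would two-box on seeing it.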

comment by Vladimir_Nesov · 2011-02-04T23:01:42.776Z · LW(p) · GW(p)

Wrong rules, corrected here.

comment by Bongo · 2011-01-30T00:51:54.550Z · LW(p) · GW(p)

comment by NihilCredo · 2011-01-30T00:55:36.534Z · LW(p) · GW(p)

You can have your choice of a coin-flipping Omega, or an Omega that leaves box B empty unless you would one-box no matter what.

...or an Omega that fills box B unless you would two-box no matter what.

comment by Bongo · 2011-01-30T01:36:57.949Z · LW(p) · GW(p)

Indeed, you could have any mapping from pairs of (probability distributions over) actions to box-states, where the first element of the pair is what you would do if you saw a filled box B, and the second element is what you would do if you saw an empty box B. But I'm trying to preserve the spirit of the original Newcomb.

comment by Bongo · 2011-01-30T01:32:56.804Z · LW(p) · GW(p)

Sorry, decided that comment wasn't ready and deleted it, but you managed to see it. See my other comment.

comment by wedrifid · 2011-01-29T16:54:23.650Z · LW(p) · GW(p)

Can you explain and/or link this analysis of transparent Newcomb? It looks very wrong to me.

It's only wrong if you are the kind of person who doesn't like getting $1,000,000.

If only all our knowledge of our trading partners and environment was as reliable as 'fundamentally included in the very nature of the problem specification'. You have to think a lot harder when you are only kind of confident and know the limits of your own mind reading capabilities.

comment by NihilCredo · 2011-01-29T22:04:20.343Z · LW(p) · GW(p)

If only all our knowledge of our trading partners and environment was as reliable as 'fundamentally included in the very nature of the problem specification'.

If you're going to make that kind of argument, you're dismissing pretty much all LW-style thought experiments.

comment by wedrifid · 2011-01-30T03:20:28.319Z · LW(p) · GW(p)

If you're going to make that kind of argument, you're dismissing pretty much all LW-style thought experiments.

I think you're reading in an argument that isn't there. I was explaining the most common reason why human intuitions fail so blatantly when encountering transparent Newcomb. If anything that is more reason to formalise it as a thought experiment.

comment by XiXiDu · 2011-01-29T10:26:06.874Z · LW(p) · GW(p)

Doesn't this mean that everyone has to be punished, because at some point they might consider this? Also, who doesn't commit crimes? What constitutes a crime is completely subjective. Following through on this would cause a lot of punishment for opposing reasons. You might be punished and consider precommitting to not doing something, but to what? Maybe the communists punished you so that you don't support democracy... I really don't get it.

comment by XiXiDu · 2011-01-29T10:10:57.759Z · LW(p) · GW(p)

I don't get it.

comment by Tesseract · 2011-01-28T21:10:50.958Z · LW(p) · GW(p)

Do you do it?

No.

You would be harming another human being without expecting any benefit from doing so. Punishment is only justified when it prevents more harm than it causes, and this is specified not to be the case.

Our sense that people 'deserve' to be punished is often adaptive, in that it prevents further wrongdoing, but in this case it is purely negative.

comment by Tiiba · 2011-01-28T23:32:33.321Z · LW(p) · GW(p)

Of course, he might become even more psycho from it.

comment by endoself · 2011-01-29T15:20:22.642Z · LW(p) · GW(p)

That would count as an effect on his future actions.

comment by [deleted] · 2012-07-26T23:31:14.847Z · LW(p) · GW(p)

The Academy of Fine Arts Vienna shouldn't have rejected Hitler!

comment by Vladimir_Nesov · 2011-01-29T10:19:58.380Z · LW(p) · GW(p)

And again I link to Yvain's excellent "Diseased thinking: dissolving questions about disease" for discussion of consequentialist and "words can be wrong"-aware analysis of blame-assignment and punishment.

comment by Dr_Manhattan · 2011-01-30T15:03:44.990Z · LW(p) · GW(p)

My Grandpa used to have a watch from when his battalion (of the Red Army) looted one of Göring's mansions. He much enjoyed it, especially being Jewish (and the family enjoyed the story), and I don't much care how the f*g psychopath felt about it. I think I'd take the wallet.

comment by DanArmak · 2011-01-29T16:24:40.805Z · LW(p) · GW(p)

Suppose that I'm aware that future villains are punished in their pasts by wallet theft. I now wish to decide whether to commit a big crime; and I consider whether my wallet has often been stolen.

If it has, then I've already suffered the punishment, and have nothing to lose anymore - so I commit the crime.

If it hasn't, then I conclude committing the crime would lead to a time-traveling paradox, and don't commit it.

So the less you pre-punish people, the less you encourage them to be villains.

comment by Perplexed · 2011-01-29T01:32:15.255Z · LW(p) · GW(p)

When should you punish someone for a crime they will commit in the future?

If this is a question about justice, then the answer is "when you have jurisdiction". Otherwise, you risk double punishment.

On the other hand, if this is a question about cooperative game theory, then go ahead and punish if you know for sure they will transgress. But notice that punishment only serves its proper deterrent purpose when the criminal knows the transgression for which he is punished, and which player or coalition is taking credit for the punishment.

comment by Eugine_Nier · 2011-01-29T04:27:59.648Z · LW(p) · GW(p)

But notice that punishment only serves its proper deterrent purpose when the criminal knows the transgression for which he is punished, and which player or coalition is taking credit for the punishment.

Not necessarily. For evolution to reduce the number of crimes, it is only necessary that punishment causally correlate with crimes.

When dealing with other optimization processes, e.g., human brains, it is only necessary for the person to notice that crime pays less without realizing why. It's not even necessary for the person to be aware that he's noticed that; it's enough that the value the person assigns to how much crime pays is less than it would be if you hadn't acted.

comment by Perplexed · 2011-01-29T06:01:24.520Z · LW(p) · GW(p)

I think you are right that evolution is not fussy about whether the punished agent understands the causality just so long as there is causation both from genes to crimes and from genes to punishment. That second causation (genes to punishment) may be through the causal intermediary of the crime, though it doesn't have to be.

Evolution is a kind of learning, but it isn't the organism that learns - it is the species. And, of course, evolution can learn even if punishment falls on the offspring [Edit: was "can direct the punishment to offspring"], rather than the actual offender. Deuteronomy 5:9 is much closer to Darwin than is Genesis.

If you want to have organisms do the learning, though, you need to direct the punishment more carefully, and to make the causal link between crime and punishment more obvious to the organism. We can distinguish two kinds of learning - unconscious (for example, operant conditioning) and conscious (game theory and rational agents).

As you point out, you can get learning from punishment, even if the organism is not aware of the causality - but it does seem that the punishment must be close in time to the action which provokes the punishment. Unconscious learning cannot work otherwise.

But with conscious learning, the punishment need not be close in time to the 'crime' - consciousness and language permit the linkage to be signaled by other cues. But I'm pretty sure it is important that it be noticed by the punished agent that the punishment is flowing from another conscious agent, that the reason for receiving punishment has to do with failure to adhere to an implicit or explicit bargain which exists between punisher and punishee, and that to avoid additional punishment it is necessary to get into conformance with the bargain.

comment by Perplexed · 2011-01-29T20:09:54.650Z · LW(p) · GW(p)

I wrote:

I'm pretty sure it is important that it be noticed by the punished agent that the punishment is ...

On further thought, that was silly of me. It is not just the person being punished who needs to know that (and why) the punishment is happening. Everyone needs to know. Everyone in the coalition. Everyone who is considering joining the coalition. And, if not everyone, then as many of them as possible. In theory, the punishee is not in any special position here with respect to "need to know". (In practice, though, he probably does have a greater need to know that he is being punished because he may not have known that his 'crime' was a punishable offense. Also, if he doesn't realize that he is being punished, he might feel justified in retaliating.)

comment by byrnema · 2011-01-28T23:10:03.363Z · LW(p) · GW(p)

I decided to send this comment as a message to Bongo.

comment by [deleted] · 2013-01-15T19:41:22.628Z · LW(p) · GW(p)

It seems (to me anyway) that if you punish future crimes without making it known to them that you've punished them in particular, you prevent the crimes of anyone who one-boxes on newcomb, whereas if you let someone know you've punished them in particular, you only prevent the crimes of anyone who one-boxes on transparent newcomb. You should, therefore, make it known to the world that you will punish future crimes, and that you will do so in a way that will not become noticeable until after the crime has been committed.

comment by MinibearRex · 2011-02-04T02:13:26.570Z · LW(p) · GW(p)

It's difficult to craft a Utilitarian argument for stealing his wallet. The only easy way to do so would be if the money went to charity.

That being said, I would probably still do so. As a rationalist, I know it's not a positive action, but it would still give me (irrational) emotional enjoyment. Plus, you get a great story out of it. Imagine being able to tell your friends that you ruined Hitler's evening.

comment by see · 2011-02-01T03:32:35.875Z · LW(p) · GW(p)

I note that by 1933, the SA had already been committing violent crimes under Hitler's command for over a decade. So the edited puzzle presented is fundamentally unrelated to the question, unless you think that a $40 fine and a ruined evening is an excessive punishment for a decade of violent crimes.

comment by Bongo · 2011-01-29T20:42:32.848Z · LW(p) · GW(p)

Given that the cost of administering the punishment would be worth paying to prevent the crime...

You should punish them if they are the kind of person who, before committing a crime, accurately considers

• whether being the kind of person that doesn't do the crime implies they should have avoided some punishments in the past that they in fact did receive.
• whether being the kind of person that does the crime implies that they should have received some punishments in the past that they in fact did not receive.

And then weighs the gains from the crime against the losses of the punishments they should have received and the gains from the punishments they should have avoided.

OR

If at some point they considered whether to be the kind of person that does the above, and decided not to because that would make them susceptible to punishment of future crimes.

--

I think the above conditions are finally sufficient, but not necessary. Some other kinds of agents are worth punishing for their future crimes too.

Also, the question I was answering above was not really "when should you punish someone for their future crimes", but "what kinds of people are worth punishing for their future crimes". Maybe that's why the answer is so long.
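The weighing in the first condition amounts to a UDT-style comparison between agent-types. A sketch; all quantities below are hypothetical utility units invented for illustration, not anything from the thread:

```python
# Sketch of the agent-type weighing described above. The agent compares the
# total payoff of being the criminal-type agent against being the innocent
# type, counting the past punishments each type would (counterfactually)
# have received from accurate pre-punishers.

def chooses_crime(gain_from_crime,
                  past_punishment_if_criminal_type,
                  past_punishment_if_innocent_type):
    """Return True iff the agent described above commits the crime."""
    payoff_as_criminal = gain_from_crime - past_punishment_if_criminal_type
    payoff_as_innocent = -past_punishment_if_innocent_type
    return payoff_as_criminal > payoff_as_innocent

# If pre-punishers reliably hit criminal-types for 50 units and leave
# innocent-types alone, a 30-unit crime isn't worth it:
print(chooses_crime(30, 50, 0))  # False
```

The point of the comparison being between types, rather than between actions, is exactly why such an agent declines the crime even after the punishment has already landed.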

comment by wedrifid · 2011-01-29T09:07:20.975Z · LW(p) · GW(p)

Do you do it?

Yes.

The title here is misleading. "Punishing future crimes" isn't necessarily implied by the wallet theft. Not actively supporting the personal security and property rights of the villain-to-be is all that is required. If Hitler is considered morally irrelevant, it is just a matter of whether having the money is better than not having it.

comment by XiXiDu · 2011-01-29T09:47:04.254Z · LW(p) · GW(p)

Someone who knows about existential risks but doesn't do anything to mitigate them, even though they could, might be responsible for more deaths than Hitler. Do you think it is OK to steal their money and donate it to the SIAI?

ETA I realize that this is too personal as this isn't inquiring about something objectively correct/incorrect but a subjective utility function.

comment by wedrifid · 2011-01-29T10:19:56.879Z · LW(p) · GW(p)

Someone who knows about existential risks but doesn't do anything to mitigate them, even though they could, might be responsible for more deaths than Hitler. Do you think it is OK to steal their money and donate it to the SIAI?

I only steal money from dead counterfactual future megalomaniacal villains.

comment by Vladimir_Nesov · 2011-01-29T10:40:33.401Z · LW(p) · GW(p)

Do you do it?

Yes.

In a previous discussion you unconvincingly stated that killing a villain would be sad, but now the answer "yes" seems to require that the villain is morally irrelevant. Is there another reason for your "yes" answer that doesn't rely on Hitler's terminal moral irrelevance? (It's unclear how your second paragraph relates to the answer you've given.)

comment by wedrifid · 2011-01-29T11:15:33.433Z · LW(p) · GW(p)

In a previous discussion you unconvincingly

It is seldom wise to engage in discussion that opens with indications of aggression. All my replies to your various other accusations still apply in their various contexts and I do not want to extend them further. I will note, with respect to this context in particular, that you misread my response.

but now the answer "yes" seems to require that the villain is morally irrelevant. Is there other reason for your "yes" answer that doesn't rely on Hitler's terminal moral irrelevance?

You threw 'terminal' in there gratuitously because it sounds bad. That wasn't part of the question. If the question were actually about killing Hitler, I would probably leave him alive. That sort of drastic change is dangerous. In this world we do have a chance of surviving into the future, even if that task is difficult. I don't know what a world with Hitler killed would look like; Hitler is big, as far as butterflies go. But this is just a small side point.

If you read somewhat more closely you will notice that I make an abstract point about a mistake in reasoning: labeling the scenario punishment in particular. I haven't said whether I value Hitler's experience of life positively, negatively, or neutrally. I can say that I value Hitler having $40 less than me having $40. So if some messed-up Omega with a time machine ever offers me the chance to knock off Hitler's wallet, I'm totally going to do it. You can (try to) shame me when it happens.

comment by Vladimir_Nesov · 2011-01-29T11:28:36.285Z · LW(p) · GW(p)

It is seldom wise to engage in discussion that opens with indications of aggression. All my replies to your various other accusations still apply in their various contexts and I do not want to extend them further.

Just explaining the context for finding your current reply interesting (it is true that your statements didn't convince me, whatever their other qualities or however socially inappropriate this whole line of discussion is).

You threw 'terminal' in there gratuitously because it sounds bad.

No, I added this for specificity, because it seems to be the only source of reasons not to mug Hitler; I don't see how it would be instrumentally incorrect to do so given the problem statement. Hence, one salient hypothesis for why one decides to mug Hitler is that this source of reasons not to do so doesn't move them. (Obviously, this is just a hypothesis to consider, not strong enough to be believed outright: there could be other reasons I didn't consider, or a nontrivial implication of this reason that leads to the opposite conclusion, or the reason could turn out not to be strong enough.)

I can say that I value Hitler having $40 less than me having $40.

Oh, actually I didn't consider that. If the problem were stated so, I'd agree that it's the thing to do, and the decision would have no Hitler-specificity to it. It would even be an instrumentally good decision, since I could invest the money to cause more goodness than Hitler would (here, some Hitler-specificity is necessary).

But the problem isn't stated so, it's not symmetrical, it's about "ruining his evening", which a lost opportunity to add $40 to my net worth won't cause for me.

You can (try to) shame me when it happens.

Irrelevant to my intentions, I'm asking what's right, not presuming what's right.

comment by ArisKatsaris · 2011-01-31T14:34:45.580Z · LW(p) · GW(p)

But the problem isn't stated so, it's not symmetrical, it's about "ruining his evening", which a lost opportunity to add $40 to my net worth won't cause for me.

I disagree with you on how the problem was stated -- "ruining his evening" isn't the only effect. You also get 40 dollars.

But even with a rephrasing that used the words "burning Hitler's wallet" instead (so that there's no benefit of 40 dollars for me), I might value satisfying my sadistic desire to ruin Hitler's evening more than I valued Hitler keeping his 40 dollars. Or not -- it depends on how much I tolerated sadism against evil dictators in myself.

That doesn't mean I would kill Hitler for the emotional satisfaction (always assuming there's no measurable difference one way or another to future horrors): I value human life (even Hitler's life) more than I value my brief personal emotional satisfaction at having vengeance done.