Consequentialism and Accidents

post by Rodrigo Fernandes (rodrigo-fernandes) · 2020-06-06T18:29:11.511Z · LW · GW · 5 comments

This is a question post.


Can't seem to find criticism of consequentialism regarding mistakes/accidents. For example, an act of someone who unwillingly saves 100 lives he was trying to kill is, to consequentialists, just as moral as an act of someone who knowingly and voluntarily saved 100 lives. I intuitively regard those acts as not on the same moral pedestal, despite overall agreeing with the consequentialist/utilitarian approach to ethics. Would love to hear some thoughts on this.

Answers

answer by TAG · 2020-06-07T19:39:45.057Z · LW(p) · GW(p)

This is actually a pretty big topic.

https://plato.stanford.edu/entries/moral-luck/

answer by Dagon · 2020-06-07T05:10:04.072Z · LW(p) · GW(p)

Focus on the goodness of the action and the outcome, not of the person. Saving 100 lives is a good consequence, right? Whatever behavior led to it was a good action.

Trying to kill 100 people is a bad thought-action, as the most likely consequence is 100 killings, which would be a bad consequence.

Fantasizing about killing 100 and then not doing it is ... neutral. No consequences.

[ note: oversimplified and possibly at odds with some thinking about consequentialism, especially for the common semi-consequentialist-with-deontological-fallback-when-it-gets-confusing philosophy that a lot of people use. I'm probably not in the mainstream when I say "having been lucky is good"].

comment by TAG · 2020-06-07T19:45:23.036Z · LW(p) · GW(p)

Focus on the goodness of the action and the outcome, not of the person.

For what purpose?

Replies from: Dagon
comment by Dagon · 2020-06-08T16:24:04.465Z · LW(p) · GW(p)
For what purpose?

Thank you! I was mostly just reacting to a question, without really thinking about why, or acknowledging that there are distinct reasons to choose a framework for judging an action or person, which are themselves different from using the framework to choose your own future actions. It's very useful to be reminded of the complexity.

For purposes of evaluating whether an action is something you should encourage or discourage in the future, you should generally recognize that people are often mistaken about their motivation and reasoning, and heavily weight the actual outcome of those behaviors.

For purposes of punishment or signaling to others about whether a person should be part of your society, you should probably use BOTH outcome and intent.

5 comments

Comments sorted by top scores.

comment by FactorialCode · 2020-06-06T21:49:25.358Z · LW(p) · GW(p)

I don't know about criticism, but the problem disappears once you start taking into account counterfactuals and the expected impact/utility of actions. Assuming the killer is in any way competent, then in expectation the killer's actions are a net negative, because when you integrate over all possible worlds, his actions tend to get people killed, even if that's not how things turned out in this world. Likewise, the person who knowingly and voluntarily saves lives is generally going to succeed in expectation. Thus the person who willingly saves lives is acting more "moral" regardless of how things actually turn out.

This gets more murky when agents are anti-rational, and act in opposition to their preferences, even in expectation.
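To make the expected-value point above concrete, here is a minimal sketch in Python with entirely made-up probabilities and utilities (every number, and the +1/-1-per-life scoring, is an illustrative assumption rather than anything from the comment): a competent attempted killer comes out net negative in expectation even though one possible world has him accidentally saving 100 lives, while a competent deliberate rescuer comes out net positive.

```python
def expected_utility(outcomes):
    """Sum of probability-weighted utilities over possible outcome worlds."""
    return sum(p * u for p, u in outcomes)

# A competent attempted killer: most worlds end with ~100 deaths;
# a rare fluke world ends with 100 lives saved (the case in the question).
attempted_killer = [
    (0.95, -100),  # plan succeeds, 100 people die
    (0.04,    0),  # plan fails, nothing happens
    (0.01, +100),  # freak accident: the attempt ends up saving 100 lives
]

# A competent deliberate rescuer: most worlds end with ~100 lives saved.
deliberate_rescuer = [
    (0.90, +100),  # rescue succeeds
    (0.10,    0),  # rescue fails
]

print(expected_utility(attempted_killer))    # -94.0: net negative in expectation
print(expected_utility(deliberate_rescuer))  #  90.0: net positive in expectation
```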

Replies from: TAG
comment by TAG · 2020-06-07T19:44:03.383Z · LW(p) · GW(p)

because when you integrate over all possible worlds, his actions tend to get people killed,

I have never heard of a version of consequentialism that explicitly says that consequences include non-actual possibilities. The idea seems to coincide with virtue theory in a way that is a bit suspicious. Virtue ethics makes it very easy to make judgements about agents, since that is what it is all about. Consequentialism has difficulty, because of moral luck. But is judging an agent by their propensity-to-produce-desirable-consequences really different from judging them by their virtue ... or is it just a misleading re-naming of virtue?

Replies from: FactorialCode
comment by FactorialCode · 2020-06-07T20:07:41.901Z · LW(p) · GW(p)

I think virtue ethics and the "policy consequentialism" I'm gesturing at are different moral frameworks that will, under the right circumstances, make the same prescriptions. As I understand it, one assigns moral worth to outcomes, and the actions it prescribes are determined updatelessly, whereas the other assigns moral worth to specific policies/policy classes implemented by agents, without looking at the consequences of those policies.

comment by Pattern · 2020-06-07T01:13:11.628Z · LW(p) · GW(p)
an act of someone who unwillingly saves 100 lives he was trying to kill

Has anyone ever done this?

comment by Donald Hobson (donald-hobson) · 2020-06-06T22:58:43.225Z · LW(p) · GW(p)

The different situations give different predictions for how people will act next time. You want to lock attempted murderers in jail because otherwise they might succeed next time. (And knowing that you might get punished even if you don't succeed gives a stronger deterrent to potential murderers). Likewise, if someone makes good decisions trying to save lives, but is unlucky, you still have reason to trust them more in future, and to reward them to encourage this behaviour.