The Trolley Problem and Reversibility

post by casebash · 2015-09-30T04:06:00.343Z · LW · GW · Legacy · 27 comments


The most famous problem used when discussing consequentialism is the trolley problem. A trolley is hurtling towards five people on the track, but if you flick a switch it will change tracks and kill only one person instead. Utilitarians would say that you should flick the switch, as a single death is better than five. Some deontologists might agree with this; however, many more would object and argue that you don’t have the right to make that decision. This problem has different variations, such as one where you push someone in front of the trolley instead of them being on the track, but we’ll consider this version, since accepting it already moves you a large way towards utilitarianism.

Let’s suppose that someone flicks the switch, but then realises the other side was actually correct and that they shouldn’t have flicked it. Do they now have an obligation to flick the switch back? What is interesting is that if they had just walked into the room and the trolley was heading towards the one person, they would have had an obligation *not* to flick the switch, but, having flicked it, it seems that they have an obligation to flick it back the other way.

Where this gets more puzzling is when we imagine that Bob observed Aaron flicking the switch. Arguably, if Aaron had no right to flick the switch, then Bob would have an obligation to flick it back (or, if not an obligation, this would surely count as a moral good?). It is hard to argue against this conclusion, assuming that there is a strong moral obligation for Aaron not to flick the switch, along the lines of “Do not kill”. This logic seems consistent with how we act in other situations: if someone had tried to kill another person or steal something important from them, most people would reverse or prevent the action if they could.

But what if Aaron reveals that he was only flicking the switch because Cameron had flicked it first? Then Bob would be obligated to leave it alone, as Aaron would be doing what Bob was planning to do: preventing interference. We can also complicate it by imagining that a strong gust of wind was about to flick the switch, but Bob flicked it first. Is there now a duty to undo Bob’s flick of the switch, or does the fact that the switch was going to flick anyway abrogate that duty? This obligation to trace back the history seems very strange indeed. I can’t see any pathway to a logical contradiction, but I can’t imagine that many people would defend this state of affairs.

But perhaps the key principle here is non-interference. When Aaron flicks the switch, he has interfered, and so he arguably has the limited right to undo his interference. But when Bob decides to reverse this, perhaps that counts as interference as well. So while Bob receives credit for preventing Aaron’s interference, this is outweighed by committing interference himself - acts are generally considered more significant than omissions. This would leave Bob required to take no action, as there wouldn’t be any morally acceptable pathway by which to act.

I’m not sure I find this line of thought convincing. If we don’t want anyone interfering with the situation, couldn’t we glue the switch in place before anyone (including Aaron) gets the chance, or even the notion, to interfere? It would seem rather strange to argue that we have to leave the door open to interference even before we know that anyone is planning it. Next, suppose that we don’t have glue, but we can install a mechanism that will flick the switch back if anyone tries to flick it. In principle, this doesn’t seem any different from installing glue.

Next, suppose we don’t have a machine to flick it back, so instead we install Bob. It seems that installing Bob is just as moral as installing an actual mechanism. It would seem rather strange to argue that “installing” Bob is moral, but that any action he takes is immoral. There might be cases where “installing” someone is moral, but certain actions they take will be immoral. One example would be “installing” a policeman to enforce a law that is imperfect. We can expect the decision to hire the policeman to be moral if the law is generally good, but, in certain circumstances, flaws in this law might make enforcement immoral. Here, though, we are imagining that *any* action Bob takes is immoral interference. It therefore seems strange to suggest that installing him could somehow be moral, and so this line of thought seems to lead to a contradiction.

We consider one last situation: that we aren’t allowed to interfere, and that setting up a mechanism to stop interference also counts as interference. We first imagine that Obama has ordered a drone attack that is going to kill a (robot, just go with it) terrorist. He knows that the drone attack will cause collateral damage, but it will also prevent the terrorist from killing many more people on American soil. He wakes up the next morning and realises that he was wrong to violate the deontological principles, so he calls off the attack. Are there any deontologists who would argue that he doesn’t have the right to rescind his order? Rescinding the order does not seem to count as “further interference”; instead it seems to count as “preventing his interference from occurring”. Flicking the switch back seems functionally identical to rescinding the order. The trolley hasn’t hit the intersection, so there isn’t any causal entanglement, and flicking the switch back seems best characterised as preventing the interference from occurring. If we want to make the scenarios even more similar, we can imagine that flicking the switch doesn’t force the trolley to go down one track or another, but instead orders the driver to take one particular track. It doesn’t seem like changing this aspect of the problem should alter the morality at all.

This post has shown that deontological objections to the trolley problem tend to lead to non-obvious philosophical commitments. I didn’t write this post to show that deontology is wrong so much as to start a conversation and help deontologists understand and refine their commitments.

I also wanted to include one paragraph I wrote in the comments: Let’s assume that the trolley will arrive at the intersection in five minutes. If you pull the lever one way, then pull it back the other, you’ll save someone from losing their job. There is no chance that the lever will get stuck or that you won’t be able to complete the operation once you try. Clearly pulling the lever, then pulling it back, is superior to not touching it. This seems to indicate that the sin isn’t pulling the lever, but pulling it without the intent to pull it back. If the sin is pulling it without intent to pull it back, then it would seem very strange that gaining the intent to pull it back, then pulling it back, would be a sin.


27 comments


comment by Viliam · 2015-09-30T10:58:25.652Z · LW(p) · GW(p)

Let’s suppose that someone flicks the switch, but then realises the deontologists were actually correct and that they shouldn’t have flicked it. Do they now have an obligation to flick the switch back?

That's a great question!

My guess is that at the moment you touched the switch for the first time, you violated the deontologist rules, and they are not reversible, so further touching the switch can only increase your sins.

(The whole idea of reverting your action to revert the result belongs to the consequentialist area. Deontologists care about the way, not the destination.)

As an analogy, imagine crossing the street at a red traffic light, and then crossing it back to "undo" your traffic violation. Sorry, does not work this way. You were supposed to stay where you are. You cross the street, you broke the law. Now you are guilty of breaking the law, and again, you are supposed to stay where you are now. Crossing the street again only means breaking the law twice.

Similarly, when you redirect the trolley, you are guilty of murdering one person and you are supposed to stop there. Redirect it back, and now you are guilty of temporarily endangering one person's life and killing five people. On the other hand, if you don't touch the switch, you are not guilty of anything.

Replies from: casebash
comment by casebash · 2015-09-30T14:02:24.907Z · LW(p) · GW(p)

Let's assume that the trolley will arrive at the intersection in five minutes. If you pull the lever one way, then pull it back the other, you'll save someone from losing their job. There is no chance that the lever will get stuck or that you won't be able to complete the operation once you try. Clearly pulling the lever, then pulling it back, is superior to not touching it. This seems to indicate that the sin isn't pulling the lever, but pulling it without the intent to pull it back. If the sin is pulling it without intent to pull it back, then it would seem very strange that gaining the intent to pull it back, then pulling it back, would be a sin.

Instead of thinking about crossing the road, then trying to uncross it, imagine that you are with a group of friends and you have told them to cross the road. You then realise that telling them to break the law was wrong, so you stop them before they cross. This is a better analogy for the trolley problem, as pulling the lever didn't carry any inherent risk; the one person was only at risk on the condition that you didn't pull it back. In contrast, by crossing the street you've already created, unconditionally, the risk the law is designed to prevent. People may have already seen you cross the street, which creates disrespect for the law. While someone may have already overheard your suggestion to cross the street or seen you pull the lever (before you pulled it back), this harm is still less than the harm caused by carrying the act to completion. In the crossing-the-street example, the most significant harms have already occurred; in the trolley problem, you can still prevent them.

An analogy more similar to the crossing-the-street problem would be imagining that you can bring one group back to life in exchange for killing the other group. Perhaps this means that the people who should have survived, had you not interfered, now survive, but it also means that you have killed two groups of people.

comment by Dagon · 2015-09-30T15:32:09.243Z · LW(p) · GW(p)

I think you mean "consequentialism", not "utilitarianism". And only some flavors of deontology make that strong a distinction between action and inaction. There are plenty of deontologists who'll tell you that standing still and letting 5/6 of a group die when you could change it to 1/6 is a wrong action.

The trolley problem is useful in examining one's intuitions, but doesn't tell us much about reasoned philosophy.

Replies from: casebash
comment by casebash · 2015-10-01T03:23:30.505Z · LW(p) · GW(p)

Thanks. Those are good clarifications. I've integrated them into the post.

comment by OrphanWilde · 2015-10-01T20:09:14.952Z · LW(p) · GW(p)

Bluntly: You don't know what you're talking about.

You're reasoning about deontology as if it were consequentialism. Deontology is primarily concerned with means, not ends, and your reasoning is based on the ends of non-interventionism (which is not, in fact, even the ethics of deontology), rather than the means taken to achieve this. The state of the switch, or of past actions, doesn't matter to deontology; it's what you do that matters.

A deontologist may think the consequentialists' actions in this scenario are unethical. This is not the same as desiring to undo those actions. What you miss is that the deontologist, just as much as the consequentialist, desires the end state where fewer people die, NOT the end state that is unchanged from a "natural" state - the deontologist, however, doesn't see the means as justifying the ends, and indeed doesn't see ends as justifications whatsoever. The deontologist sees it as wrong to sacrifice a person, even to save several; likewise, it would also be wrong to sacrifice several people, regardless of the justification, including your extremely contrived justification of undoing somebody else's actions.

In short, your caricature of a deontologist is merely a bad consequentialist, and doesn't follow deontology at all.

Replies from: Bound_up, casebash
comment by Bound_up · 2015-10-11T04:04:11.664Z · LW(p) · GW(p)

Seconded. A deontologist is not worried about the results insofar as they determine their decision. They have a constraint against "making that call," "judging who should or shouldn't live," or "playing with people's lives/playing God."

If you violate one of those by flipping the lever, you've done wrong. If they flip it back, they'd be making that same call, and doing wrong.

Each successive purposeful flip of the lever would be an additional sin. You make it sound like the deontologist values the lever being left in its original position, and they don't. They just value not interfering with the lever, in whatever series of states it's been in the past.

Deontologists don't see these questions and consider the opposite answer to be the right choice; rather, they give no answer as their choice, like the Zen monk's MU.

They don't make different decisions because they value different ends, but because their means figure into the value equation, no matter what the ends are. Some means have enormous negative value for them even if locally they seem to work just fine.

Replies from: entirelyuseless
comment by entirelyuseless · 2015-10-11T13:11:49.762Z · LW(p) · GW(p)

Saying things like "some means have enormous negative value for them" misunderstands how deontology (and similar ethical systems) work.

Basically they work by considering the action as a particular thing which can be good or bad, completely distinct from the effects. The effects may be good, or they may be bad.

Given this analysis, if an action is bad, it is bad. That is a tautology. The effects of that action can be infinitely good, and it will not change the fact that the action is bad, just like if an object is red, that will not change just because everything else is green. This means that in a deontological system, the universe can end up better off after someone does something wrong. It is still wrong, in that system. It does not have anything to do with an enormous negative value; the total value of the universe after the action may have increased.

Replies from: Bound_up
comment by Bound_up · 2015-10-12T09:12:49.115Z · LW(p) · GW(p)

I wish I had been clearer.

This sounds like what I mean. They aren't just worried about the ends being good or bad; the means themselves (sometimes) have negative values, i.e., are wrong.

I said enormous negative value because I'm not positive whether a real deontologist could eventually be persuaded that a forbidden means would be permissible if the ends were sufficiently positive, e.g., stealing something to literally save the entire world.

comment by casebash · 2015-10-02T00:36:34.407Z · LW(p) · GW(p)

I don't imagine that there'd be many deontologists who'd accept all of my arguments, but I'm sure there are deontologists who'd accept some of them. For example, deontologists in favour of reversing the lever pull would probably accept my argument for why it is important to reverse the lever.

It appears that you're in the no-undoing-whatsoever category. What is your opinion on the Obama problem? Does he have a right to rescind his drone attack order? If so, what is the principled difference between this scenario and the trolley problem?

Replies from: OrphanWilde
comment by OrphanWilde · 2015-10-02T13:23:12.740Z · LW(p) · GW(p)

I'm a virtue ethicist, not a deontologist, and I don't find that the trolley problem is ethically difficult; leaving the trolley pointed at the five people is ethically acceptable, as I don't regard situations arising outside one's own decisions to have ethical importance, and switching it to the one person is also ethically acceptable, as you're changing reality for the better.

There isn't a single deontological answer to the "Obama problem". What is and is not acceptable or desirable depends on the rules that make up Obama's theoretical deontology. Deontology is a descriptive model for ethics systems, not an ethics system in and of itself.

Replies from: casebash
comment by casebash · 2015-10-03T03:16:41.209Z · LW(p) · GW(p)

Would you say that switching it to the one person instead of the five constitutes a "good", or is it just "ethically acceptable"?

Replies from: OrphanWilde
comment by OrphanWilde · 2015-10-05T13:58:19.398Z · LW(p) · GW(p)

Ethically acceptable. "Good" implies a relative "Bad", and again, I don't regard situations arising outside one's own decisions to have ethical importance.

comment by roystgnr · 2015-09-30T14:21:34.694Z · LW(p) · GW(p)

My trouble with the trolley problem is that it is generally stated with a lack of sufficient context to understand the long-term implications. We're saying there are five people on this track and one person on this other track, with no explanation of why? Unless the answer really is "quantum fluctuations", utilitarianism demands considering the long-term implications of that explanation. My utility function isn't "save as many lives as possible during the next five minutes", it's (still oversimplifying) "save as many lives as possible", and figuring out what causes five people to step in front of a moving trolley is critical to that! There will surely still be trolleys running tomorrow, and next month, and next year.

For example, if the reason five people feel free to step in front of a moving trolley is "because quasi-utilitarian suckers won't let the trolley hit us anyway", then we've got a Newcomb problem buried here too. In that case, the reason to keep the trolley on its scheduled track isn't because that involves fewer flicks of a switch, it's because "maintenance guy working on an unused track" is not a situation we want to discourage but "crowd of trespassers pressuring us into considering killing him" is.

Replies from: buybuydandavis, None, casebash
comment by buybuydandavis · 2015-09-30T21:22:16.269Z · LW(p) · GW(p)

"My trouble with the trolley problem is that it is generally stated with a lack of sufficient context to understand the long-term implications."

While limited knowledge is inconvenient, that's reality. We have limited knowledge. You place your bets and take your chances.

comment by [deleted] · 2015-10-01T01:34:33.861Z · LW(p) · GW(p)

In the least convenient possible world, you happen upon these people and don't know anything about them, their past, or their reasons.

Replies from: Jiro
comment by Jiro · 2015-10-02T16:06:20.487Z · LW(p) · GW(p)

If you don't know anything about them, there is some chance that deciding to pull the switch will change the incentives for people to feel free to step in front of trolleys.

Also, consider precommitting. You precommit to pull or not pull the switch based on whether pulling the switch overall saves more people, including the change in people's actions caused by the existence of your precommitment. (You could even model some deontological rules as a form of precommitting.) Whether it is good to precommit inherently depends on the long-term implications of your action, unless you want to have separate precommitments for quantum-fluctuation trolleys and normal trolleys that people choose to walk in front of.

And of course it may turn out that your precommitment ends up making people worse off in this situation (more people die if you don't switch the trolley), but that's how precommitments work--having to follow through on the precommitment could leave things worse off without making the precommitment a bad idea.

Replies from: None
comment by [deleted] · 2015-10-03T03:04:29.252Z · LW(p) · GW(p)

Don't know if this is "least convenient world" or "most convenient world" territory, but I think it fits in the spirit of the problem:

No one will know that a switch was pulled except you.

Replies from: Jiro
comment by Jiro · 2015-10-03T06:03:23.039Z · LW(p) · GW(p)

That doesn't work unless you can make separate precommitments for switches that nobody knows about and switches that people might know about. You probably are unable to do that, for the same reason that you are unable to have separate precommitments for quantum fluctuations and normal trolleys.

Also, that assumption is not enough. Similarly to the reasoning behind superrationality, people can figure out what your reasoning is whether or not you tell them. You'd have to assume that nobody knows what your ethical system is, plus one wide-scale assumption, such as that nobody knows about the existence of utilitarians (or of deontologists whose rules are modelled as utilitarian precommitments).

comment by casebash · 2015-09-30T23:49:33.118Z · LW(p) · GW(p)

The purpose of the trolley problem is to consider the clash between deontological principles and allowing harm to occur. So the best situation to consider is one which sets up the purest clash possible. Of course, you can always consider multiple variants of the trolley problem if you then want to explore other aspects.

comment by tim · 2015-09-30T06:03:11.211Z · LW(p) · GW(p)

I am not a deontologist, but it's clear you're painting the entire school of thought with a fairly broad brush.

However, deontologists would say that you don’t have the right to make that decision.

It is hard to argue against this conclusion, assuming that there is a strong moral obligation for Aaron not to flick the switch, along the lines of “Do not kill”.

I can’t see any pathway to find a logical contradiction, but I can’t imagine that many people would defend this state of affairs.

It is hoped that this post won’t be oversimplified into a, “this is why you are wrong” post, but to help deontologists understand and refine their commitments better.

The entire tone of the post reeks of strawmanning. There is no discussion of how different sets of deontological rules might come to separate conclusions. Each premise is assumed to be correct and there is zero effort made to explore why it might be wrong (see: steel manning). And finally, almost every paragraph ends with a statement along the lines of:

  • "...it seems..."
  • "...seems consistent with..."
  • "...I can’t imagine that..."
  • "...this doesn’t seem..."
  • "...seems strange to suggest..."
  • "...this seems like a very hard position to defend."

If you ignore the ethical prescriptivism, there's not a whole lot of substance left.

Replies from: casebash
comment by casebash · 2015-09-30T06:32:59.856Z · LW(p) · GW(p)

Deontology is a very broad philosophical position, so it is hard to avoid broad brush strokes while also keeping an article to a reasonable length. If there is a need, I will write a follow-up article, based upon feedback, that dives into how this problem relates to more specific deontological positions.

"And finally, almost every paragraph ends with a statement along the lines of: it seems..." - the purpose of this article was to demonstrate that someone who has accepted deontology will most likely find themselves accepting some kind of strange philosophical commitments. I wanted to acknowledge the fact that there were many points at which a deontologist might object to my chain of thought and to not overrepresent how strong the chains of logic were.

comment by Diadem · 2015-10-01T10:57:10.919Z · LW(p) · GW(p)

It seems to me that there is an important difference between 'flipping the switch' and 'flipping the switch back', which is the intent of the action.

In the first scenario, your intent is to sacrifice one person to save many.

In the second scenario, your intent is to undo a previous wrong.

Thus a deontologist may perfectly consistently argue that the first action is wrong (because you are never allowed to treat people only as means, or whatever deontological rule you want to invoke), while the second action is allowed.

comment by dumky · 2015-09-30T05:09:25.258Z · LW(p) · GW(p)

There is a third approach to the trolley problem, which I have not often seen discussed: whose property are the trolley tracks?

In other words, does this require a universal answer, or can this allow for diversity based on property and agreement? When you board a ship, you know that the captain has the last word when it comes to life-or-death situations, and different captains may have different judgement. The same goes for trolley tracks. The principles can be explained beforehand (such as "the mission comes first"). Another question that seems relevant to the person in charge (owner) is whether the 5 people are to blame for putting themselves at risk.

Replies from: casebash
comment by casebash · 2015-09-30T05:38:36.136Z · LW(p) · GW(p)

The trolley problem is a hypothetical situation designed to explore the clash between following deontological principles and allowing a great deal of harm to occur. So, in setting up this problem, we should be trying to limit extraneous factors as much as possible. We'll say that you are the person in charge, with absolute authority over what call to make, and that none of this is your fault or anyone's fault - these people just mysteriously appeared on the tracks as a result of quantum fluctuations.

Replies from: dumky
comment by dumky · 2015-09-30T21:47:12.932Z · LW(p) · GW(p)

Sure, but that means there possibly is no answer (the problem is under-specified). Maybe the answer depends on preferences, not universal ethical or rational principles (such as deontological or utilitarian principles).

comment by Slider · 2015-09-30T11:39:24.808Z · LW(p) · GW(p)

This might be a different thought experiment, but it could be plausible that what appears to one subject as a situation where 5 people are going to be crushed by a trolley might appear to another as a situation where 5 people are ready to jump away from the trolley's path. To phrase it another way: in order for the consequentialist position to be alluring, one needs to be reasonably sure that the difference in consequence is reasonably big.

One could argue that if you know that your perception of reality is distorted, it might be prudent to be as little agenty as possible. That is, a fool might be capable of redirecting a trolley towards an unsuspecting victim based on a delusion of non-real danger to other persons: while the harm is lesser, that estimation might be truer. Thus, if there are two people disagreeing on whether to pull or not, the one who is more unsure of his position should yield.

But I guess the intention is to assume that the danger is real. In a way, if you assume you are not fit to make decisions, discussing what decision you should make is somewhat self-defeating. But it might be relevant for reduced capacity.

I could also believe that one could phrase it as "What should the guideline for such situations be?", but I guess that would raise the question of whether the rules to follow are themselves subject to some kind of rules. It can also highlight that such rules are inadequate to provide a choice: if you choose one way and state a principle as the guideline you followed, it is an unsystematic choice, as you could have chosen another rule that would have favoured the other option. Or it can lead one to claim that doing it one way or another is not a moral choice, as the book doesn't cover it.

comment by Shmi (shminux) · 2015-09-30T04:51:48.014Z · LW(p) · GW(p)

Ask Alicorn, the resident deontologist.