The Trolley Problem: Dodging moral questions

post by Desrtopa · 2010-12-05T04:58:34.599Z · LW · GW · Legacy · 131 comments

The trolley problem is one of the more famous thought experiments in moral philosophy, and studies by psychologists and anthropologists suggest that the response distributions to its major permutations remain roughly the same across all human cultures. Most people will permit pulling the lever to redirect the trolley so that it kills one person rather than five, but will balk at pushing one fat person in front of the trolley to save the five, even if that is the only available means of stopping it.

However, in informal settings, where the dilemma is posed by a peer rather than a teacher or researcher, it has been my observation that another major category accounts for a significant proportion of respondents' answers. Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright. They will attack the improbability of the premise, attempt to invent third options, or appeal to their emotional state in the provided scenario ("I would be too panicked to do anything"), or some combination of the above, in order to opt out of answering the question on its own terms.

However, in most cases, these excuses are not their true rejection. Those who tried to find third options or appealed to their emotional state will continue to reject the dilemma even when it is posed in its most inconvenient possible forms, where they have the time to collect themselves and make a reasoned choice but no possibility of implementing alternative solutions.

Those who appealed to the unlikelihood of the scenario might appear to have the stronger objection; after all, the trolley dilemma is extremely improbable, and more inconvenient permutations of the problem might appear even less probable. However, trolley-like dilemmas are actually quite common in real life, if you take the scenario not as a case where only two options are available, but as a metaphor for any situation where all the available choices have negative repercussions, and attempting to optimize the outcome demands increased complicity in the dilemma. This framing of the problem also tends not to cause people to reverse their rejections.

Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms. They will insist that there must be superior alternatives, that external circumstances will absolve them from having to make a choice, or simply that they have no responsibility to address an artificial moral dilemma.

When respondents feel that they can opt out of answering the question, the implications of the trolley problem become even more unnerving than the results from past studies suggest. It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all. They have placed themselves in a reality too accommodating of their preferences to force them to have a system for dealing with situations with no ideal outcomes.

131 comments

Comments sorted by top scores.

comment by David_Gerard · 2010-12-05T12:43:56.299Z · LW(p) · GW(p)

It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all.

Er, you're attaching too much value to hypothetical philosophical questions.

I'd have thought it obvious that they're dodging the question so as to avoid the possibility of the answer being taken out of context and used against them. Lose-lose counterfactuals are usually used for entrapment. This is a common form of hazing among schoolchildren and of attack on politicians, after all, so it's a non-zero possibility in the real world. It's the one real-world purpose contrived questions are applied to.

tl;dr: you have not given them sufficient reason to care about contrived trolley problems.

Replies from: Bongo, orthonormal
comment by Bongo · 2010-12-05T17:24:53.931Z · LW(p) · GW(p)

Er, you're overestimating how much value the other person attaches to hypothetical philosophical questions.

FTFY

Replies from: David_Gerard
comment by David_Gerard · 2010-12-05T17:43:01.573Z · LW(p) · GW(p)

You are, of course, correct. Thank you.

comment by orthonormal · 2010-12-14T22:17:26.593Z · LW(p) · GW(p)

IWICUTT.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-22T09:25:50.075Z · LW(p) · GW(p)

Google shrugs at this. “I wish I could understand that too”?

Replies from: orthonormal
comment by orthonormal · 2012-04-22T16:14:03.291Z · LW(p) · GW(p)

I Wish I Could Upvote This Twice.

(Didn't quite catch on.)

comment by wedrifid · 2010-12-05T05:50:05.293Z · LW(p) · GW(p)

Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms. They will insist that there must be superior alternatives, that external circumstances will absolve them from having to make a choice, or simply that they have no responsibility to address an artificial moral dilemma.

Of course we do. It would be crazy to answer such a question in a social setting if there is any possibility of avoiding it. Social adversaries will take your answer out of context and spin it to make you look bad. Honesty is not the best policy and answering such questions is nearly universally an irrational decision. Even when the questions are answered the responses should not be considered to have a significant correlation to actual behaviour.

Replies from: diegocaleiro, Desrtopa, Dustin
comment by diegocaleiro · 2010-12-07T04:37:59.579Z · LW(p) · GW(p)

I think I have a more plausible suggestion than the "spin it to make you look bad" theory.

Think evolutionarily.

It absolutely sucks to come across as a psycho serial killer in public, if you are into making friends and acquaintances and are likely to someday be a grandpa.

It sucks less to show that you would kill someone, especially if you would be the one carrying out the killing.

It sucks less to show that you would only kill someone by omission, but not by action.

It sucks less if you show that your brain is so well tuned against killing people that you (truly) react with disgust even at conceiving of it.

This is the woman I want to have a child with: the one who is not willing to say she would kill under any circumstance.

Now, you may say that in every case I simply ignored what would happen to the five other people (the skinny ones). To which I say that your brain processes the two pieces of information separately, "me killing fat guy" and "people being saved by my action", and you only need the first half to trigger all the emotions of "no way I'd kill that fat guy".

Is this a nice evolutionary story that explains a fact with hindsight? Oh yes indeed.

But what really matters is that you compare this theory with the "distortion" theory that many comments suggested. Admit it: only people who enjoy chatting rationally on a blog think it so important that their arguments might be distorted. Common folks just feel bad about killing fat guys.

Replies from: handoflixue
comment by handoflixue · 2010-12-18T00:05:50.003Z · LW(p) · GW(p)

I'd actually argue that social signaling is probably more important to "common folk" than to a lot of the people here. Specifically, Paul Graham's old essay "Why Nerds Are Unpopular" (http://www.paulgraham.com/nerds.html) comes to mind. I'm entirely willing to say "I'm willing to kill", because I value truth above social signaling.

It also occurs to me that a big factor in my answer is that my social circle is full of people that I trust not to distort or misapply my answer. Put me in a sufficiently different social circle and eventually my "survival instincts" will get me to opt out of the problem as an excuse to avoid negative signaling.

If I just really didn't want to kill the fat guy, it'd be much easier to say "oh, goodness, I could never kill someone like that!" rather than opting out of answering by playing to the absurdity of the scenario.

Replies from: TAG
comment by TAG · 2018-05-08T12:19:40.211Z · LW(p) · GW(p)

value truth above social signaling

Are you sure you can't have both?

comment by Desrtopa · 2010-12-05T05:59:54.051Z · LW(p) · GW(p)

If attempting to avoid the question will also elicit a negative response, and the person really only wants to optimize their social standing, then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from. Mere signaling fails to account for many of these cases.

Replies from: Eliezer_Yudkowsky, JGWeissman, David_Gerard, wedrifid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-05T19:02:47.938Z · LW(p) · GW(p)

then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from

No they wouldn't. Ambiguity is their ally. Both answers elicit negative responses, and they can avoid that from most people by not saying anything, so why shouldn't they shut up?

EDIT: In case it's not clear, I consider this tactic borderline Dark Arts (please note who originally said that ambiguity-ally line in HPMOR!), a purely political weapon with no role in conversations trying to be rational. I wouldn't criticize its use as a defense against some political nitwit who's trying to hurt you in front of an inexperienced audience; I would be unhappy with first-use of it as a primary political strategy.

Replies from: Desrtopa
comment by Desrtopa · 2010-12-05T22:19:04.711Z · LW(p) · GW(p)

In that case, I would expect them to reverse their rejection under sufficient peer pressure, but this frequently does not happen.

Now I really do want to systematically test how people who reject the dilemma respond to peer pressure. I've spent a great deal of time watching others deal with this particular dilemma, but my experience isn't systematically gathered or well documented.

In retrospect, I should have held off on making this post until gathering that data; I wrote it up more in frustration at dealing with the same situation again than out of a desire to be informative, and I feel like I should probably have taken a karma hit for that.

Replies from: orthonormal, Strange7
comment by orthonormal · 2010-12-14T22:23:19.791Z · LW(p) · GW(p)

I'd be interested in a trolley version of the Asch conformity experiment: line up a bunch of confederates and have them each give an answer, one way or another, and act respectfully to each other. Then see how the dodge rate of the real participant changes.

Then you could set it up so that one confederate tries to dodge, but is talked out of it. Etc.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-15T22:58:31.913Z · LW(p) · GW(p)

I would too.

My prediction (~80% confidence) is, given one subject and six confederates and a typical Asch setup, if all confederates give the non-safe answer (e.g., they say "I'd throw one person under the train" or whatever), you'll see a 40-60% increase in the subject's likelihood of doing the same compared to the case where they all dodge.

If one confederate dodges and is chastised for it, I really don't know what to expect. If I had to guess, I'd guess that standard Asch rules apply and the effect of the local group's pressure goes out the window, and you get a 0-10% increase over the all-dodge case. But my confidence is low... call it 20%.

What I'd really be interested in is whether, after going through such a setup, subjects' answers to similar questions in confidential form change.

Replies from: orthonormal
comment by orthonormal · 2010-12-15T23:42:25.434Z · LW(p) · GW(p)

If one confederate dodges and is chastised for it, I really don't know what to expect. If I had to guess, I'd guess that standard Asch rules apply and the effect of the local group's pressure goes out the window

That's not the normal Asch setup: the dissenter isn't ridiculed for it; the subject feels free to dissent because they've seen someone else dissent and 'get away with it'. I would expect that the chastisement variation on any Asch test would produce even more, rather than less, conformity.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-16T02:06:51.616Z · LW(p) · GW(p)

Yeah, I can see why you say that, and you might be right, but I'm not entirely sure. I've never seen the results of an Asch study where the dissenter is chastised. And this particular example is even weirder, because the thing they're being chastised for -- dodging the question -- is itself something that we hypothesize is the result of group conformity effects. So... I dunno. As I say, my confidence in this case is low.

comment by Strange7 · 2010-12-06T00:21:19.505Z · LW(p) · GW(p)

I would expect them to reverse their rejection under sufficient peer pressure,

Unless, of course, they're willing to put up with some short-term hassling to avoid long-term problems. Given that either answer could be taken out of context and used against them by all the people currently applying that pressure, there's no point (short of, say, locking them in a room and depriving them of sleep for an extended period of time, which is really a whole different kettle of fish) where answering the question becomes preferable.

comment by JGWeissman · 2010-12-05T06:07:34.123Z · LW(p) · GW(p)

Giving either response can be harmful if you are trying to avoid the disapproval of someone who fails at conservation of expected evidence. (This failure could happen even to us rationalists who are aware of the possibility, by simply not thinking about how we would interpret the alternative response we did not observe, especially if our interpretation is influenced by a clever arguer who wants us to disapprove.)
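
To make that failure concrete, here is a toy numeric check in Python; the probabilities are made up purely for illustration:

    # Conservation of expected evidence: a coherent prior must equal the
    # probability-weighted average of the possible posteriors.
    p_says_pull = 0.5            # listener's P(respondent answers "pull the lever")
    disapproval_if_pull = 0.5    # listener's P(respondent is bad | "pull")
    disapproval_if_dont = 0.6    # listener's P(respondent is bad | "don't pull")

    expected_posterior = (p_says_pull * disapproval_if_pull
                          + (1 - p_says_pull) * disapproval_if_dont)

    prior = 0.3
    print(expected_posterior, prior)  # 0.55 vs 0.3: disapproval rises on either
                                      # answer, so this listener updates incoherently

A listener like this raises their disapproval no matter which answer they hear, which is exactly why staying silent is the only response that leaves their opinion at the prior.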

comment by David_Gerard · 2010-12-05T12:46:44.888Z · LW(p) · GW(p)

If attempting to avoid the question will also elicit a negative response, and the person really only wants to optimize their social standing, then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from.

You appear to be saying "but they could give a perfect zinger of an answer!" Yes, they could. But refusing the question - "Homey don't play that" - is quite a sensible answer in most practical circumstances, and may discourage people from continuing to try to entrap them, which may be better than answering with a perfect zinger.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-05T18:19:54.803Z · LW(p) · GW(p)

Well, it needn't be a zinger, per se.

They could, for example, give an answer that at the same time signaled their deep and profound compassion for people who are run over by trolleys and their willingness to... reluctantly... after exploring all available third alternatives to the extent that time allowed... and assuming as a personal favor to the questioner that they somehow were certain of all the facts that the problem asserts, even though that state isn't epistemically reachable... and with the understanding that they'd probably be in expensive therapy for years afterwards to repair the damage to their compliant-with-social-norms-really-honest-no-fooling psyches... throw one person under the train to save five people.

Wincing visibly while saying it would help, also.

This both signals their alliance with the "don't throw people under trains!" social norm and their moral sophistication. This is a general truth of political answers... the most useful answer is the one that lets everyone hear what they want to hear while eliciting disapproval from nobody. (Of course, in the long term that creates a community that disapproves of ambivalence. Politics is a semantic arms race, after all.)

In this vein, my usual answer to trolley questions and the like starts with "It depends: are you asking me what I think I would actually do in that setting? Or are you asking me what I think is the right thing to do in that setting? Because they're different."

But, yeah, I agree that refusing to answer the question can often be more practical, especially if you don't have an artful dodge ready to hand and aren't good at creating them on-the-fly.

Replies from: Strange7
comment by Strange7 · 2010-12-06T00:25:52.954Z · LW(p) · GW(p)

In this vein, my usual answer to trolley questions and the like starts with "It depends: are you asking me what I think I would actually do in that setting? Or are you asking me what I think is the right thing to do in that setting? Because they're different."

A non-answer is still safer. That parry, in and of itself, could be twisted into an admission that you routinely and knowingly violate your own moral code.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-06T00:46:31.887Z · LW(p) · GW(p)

Not even twisted, really; it is such an admission. But entirely agreed that a non-answer is safer than such an admission. (I suppose "In this vein" is a mis-statement, then.)

comment by wedrifid · 2010-12-05T06:07:54.092Z · LW(p) · GW(p)

If attempting to avoid the question will also elicit a negative response, and the person really only wants to optimize their social standing, then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from.

It may be easier in the short term but in the future it will come back to haunt you with sufficient probability for it to dominate your decision making. Never answer moral questions honestly, lie (to yourself first, of course). If there is no good answer to give the questioner then avoid the question. If possible, make up the question you wish they asked and answer that one instead. Don't get trapped in a hostile frame of interrogation.

Mere signaling fails to account for many of these cases.

When it comes to morality there is nothing 'mere' about signalling. Signalling accounts for all of these cases.

Replies from: Desrtopa
comment by Desrtopa · 2010-12-05T06:16:29.016Z · LW(p) · GW(p)

Do you predict, then, that if you put a person in a group where every other person disapproves more of attempts to dodge the question than to provide either answer, and makes this known, then they will never refuse to answer the question on its own terms?

Also, what makes you believe that providing an answer will lead to negative repercussions? I've participated in more discussions of this topic than I could reasonably hope to count, never refused to provide my own answer, and have never observed others to revise their behavior towards me as a result. I can imagine how it might have negative repercussions for a person to provide an answer, but I've never known it to happen to anyone to a significant enough degree that they'd notice. It's possible that signaling accounts for some of these cases, but I think you're generalizing your own attitude to the entire population in a situation where it really doesn't apply.

Replies from: wedrifid, Eliezer_Yudkowsky
comment by wedrifid · 2010-12-05T06:27:20.193Z · LW(p) · GW(p)

I think you're generalizing your own attitude to the entire population

Excuse me? The opposite is closer to the truth. I've realised that my own attitude to interpreting things primarily in the abstract isn't universal. Even a minority of people who use verbal symbols primarily politically is enough to warrant caution.

Replies from: Desrtopa
comment by Desrtopa · 2010-12-05T06:34:37.286Z · LW(p) · GW(p)

Then I'll ask again whether you predict that in a group where everyone else projected disapproval of attempts to dodge the question, nobody would refuse to answer the question on its own terms. This should not be that hard to test; with a few Less Wrong collaborators, we should at least be able to carry it out in online form.

Replies from: David_Gerard, TheOtherDave
comment by David_Gerard · 2010-12-05T12:50:13.807Z · LW(p) · GW(p)

You could certainly engineer a circumstance in which answering questions about hypothetical lose-lose scenarios is considered better than avoiding them, e.g. philosophical discussion of hypothetical lose-lose scenarios. However, your original post does not restrict itself to these scenarios, but generalises to everyone who doesn't want to play that game, with no apparent understanding of the practical reasons people here are trying to explain to you for why people might very sensibly not want to play that game.

comment by TheOtherDave · 2010-12-05T18:03:15.967Z · LW(p) · GW(p)

Not to speak for wedrifid, but I agree with their main point, and I would not predict this.

What I would predict is that fewer people in such a group would dodge the question (and those that did would dodge it less strenuously) than in a group where everyone projected disapproval of throwing people under trolleys.

I would further predict that the reduction in dodging (DR) would be proportional to how confident the subject was that the group really did disapprove more of dodging the question than of throwing people under trolleys... that is, that the group wasn't lying, and that he wasn't misinterpreting the group norm. Given that priors strongly suggest the opposite -- that is, given that most groups are more opposed to throwing people under trolleys than avoiding a question -- I would expect obtaining significant confidence to be nontrivial.

Relatedly, I predict that DR would be proportional to how certain the subject was that their answer would be kept confidential.

By the way, as long as we're doing this exercise, I'd also predict that people who don't dodge the question in normal settings, but rather claim they'd throw someone under the train, are more likely to be contrarian in general -- that is, I'd expect that to correlate well with making other controversial claims. This is even more true for people who often bring up trolley problems in ordinary conversation.

Replies from: scav
comment by scav · 2010-12-14T11:03:18.372Z · LW(p) · GW(p)

I would further predict that the reduction in dodging (DR) would be proportional to how confident the subject was that the group really did disapprove more of dodging the question than of throwing people under trolleys...

Seriously? Well, sure. I for one would not dodge the question then, in case they would throw me under a trolley for it. :)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-05T19:05:33.902Z · LW(p) · GW(p)

Do you predict, then, that if you put a person in a group where every other person disapproves more of attempts to dodge the question than to provide either answer, and makes this known, then they will never refuse to answer the question on its own terms?

I predict this will have a large effect on the number who refuse to answer the question, increasing with the closeness of the peer group and the level of disapproval. Enough to flip 75% nonresponse to 25% nonresponse or something like that.

comment by Dustin · 2010-12-05T22:12:25.078Z · LW(p) · GW(p)

Do you have any evidence that the largest negative reaction comes from actually answering the question?

My feeling is that the largest negative social repercussion comes from rejecting the question. However, I'm not positive that I'm not generalizing from my own initial reaction to those who reject the question.

My general feeling is that taking a stance on such questions would be respected by those who I deal with on a day-to-day basis, and dodging the question would be less respected.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2010-12-05T22:35:42.033Z · LW(p) · GW(p)

I respect answering these questions more than dodging them and answer them myself whenever I know the answer (I would pull the switch; I would also push the fat man). I don't have a problem with being candid because most people whose opinions I care about prefer candidness. In one previous discussion with a bunch of non-rationalists who probably don't consider the questions often, there wasn't much dodging.

comment by Jack · 2010-12-05T09:50:48.420Z · LW(p) · GW(p)

Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright.

Counterfactual resistance is pretty common with all thought experiments; indeed, it is the bane of undergraduate philosophy professors everywhere. We have no evidence that resistance is more common for ethical thought experiments, or the trolley problem in particular, than for thought experiments in other subfields: brain-in-vat hypotheticals, brain-transplant/hemisphere-transplant cases, teleportation, Frankfurt cases, etc. Which is to say, most of this post is in need of citations. Maybe people just don't like convoluted thought experiments! I'm not even sure it's the case that many people do refuse to answer the question- how many instances could you possibly be basing this judgment on?

Ultimately, when provided with optimally inconvenient and general forms of the dilemma, most of those who rejected the question will continue to make excuses to avoid answering the question on its own terms.

How do you know this? I'm not demanding p-values but you haven't given us a lot to go on.

Replies from: Desrtopa
comment by Desrtopa · 2010-12-05T22:41:21.737Z · LW(p) · GW(p)

how many instances could you possibly be basing this judgment on?

Pretty many; I certainly haven't kept close count, but going by the age at which I was first introduced to the dilemma and the approximate number of times per year it's come up, I would estimate somewhere between 100 and 150.

It would have been more accurate to say that many people will dismiss the dilemma on counterfactual grounds once, and a second prompt will separate the pedants and jokers from the true rejectors, who will persist in dismissing the question regardless of how it is framed, even in the face of peer pressure.

Anyway, on reflection, I feel like this post was probably not that well considered; I should have at least held off until I had reliable documentation of the phenomenon, with tests to winnow out alternative hypotheses. I still strongly suspect that a significant proportion of those rejecting the hypothetical are doing so based on a rejection of the idea that they should have any coherent moral system, but this impression rests too strongly on interpretations of what the rejectors have actually said to come across well in the post. I really didn't provide adequate data.

Replies from: Jack
comment by Jack · 2010-12-06T09:38:04.064Z · LW(p) · GW(p)

On the question itself, I'm not sure a coherent moral system is something it is important for people to have, though I'm hesitant to make the point since I'm not confident in my ability to make the claim convincing enough to avoid the downvotes that come from saying something that sounds so dumb at first.

Morality is the product of a chaotic, random and unguided process. There is no particular reason to expect human morality to be coherent. That isn't what evolution optimized it for. If the morality we evolved isn't coherent (a precise definition of coherent in this context I'll leave for later, or someone else) what should we do? A lot of people here seem to want to cull, shape or ignore our intuitions so that we act according to a coherent normative theory (preference utilitarianism for example). But to me this looks just like trying to shove a square peg into a round hole. You don't get more moral by sacrificing parochial deontological rules for abstract principles. If a hodge-podge is what we got then a hodge-podge is what we're stuck with (until we evolve a different hodge-podge). To demand that folk morality meet the demands of logic and coherence feels like a mistake to me. It also feels anti-human.

comment by rwallace · 2010-12-05T12:08:21.334Z · LW(p) · GW(p)

The purpose of thought experiments and other forms of simulation is to teach us to do better in real life. Obviously, no simulation can be perfectly faithful to real life. But if a given simulation is not merely imperfect but actively misleading, such that training in the simulation will make your real performance worse, then rejecting the simulation is a perfectly rational thing to do.

In real life, if you think the greater good requires you to do evil, you are probably wrong. Therefore, given a thought experiment in which the greater good really does require you to do evil, rejecting the thought experiment, on the grounds that it is worse than useless for training purposes, is a correct answer.

Replies from: Jack
comment by Jack · 2010-12-05T12:49:28.685Z · LW(p) · GW(p)

The purpose of thought experiments and other forms of simulation is to teach us to do better in real life.

Not at all. That's way too broad a claim and definitely not the case for the trolley problem. The purpose of the trolley problem is to isolate and identify people's moral intuitions.

Replies from: thomblake, Strange7
comment by thomblake · 2010-12-07T16:40:37.286Z · LW(p) · GW(p)

The purpose of the trolley problem is to isolate and identify people's moral intuitions.

Well, depending on what you're trying to nail down as "the purpose", that's not true. The purpose of the trolley problem was to serve as an example of the kinds of ridiculous thought experiments conceived of by moral philosophers (via Philippa Foot). But you know, Poe's Law.

Replies from: Jack
comment by Jack · 2010-12-07T17:16:53.783Z · LW(p) · GW(p)

I'm sure you've seen this at some point, but for others...

Consider the following case:

On Twin Earth, a brain in a vat is at the wheel of a runaway trolley. There are only two options that the brain can take: the right side of the fork in the track or the left side of the fork. There is no way in sight of derailing or stopping the trolley and the brain is aware of this, for the brain knows trolleys. The brain is causally hooked up to the trolley such that the brain can determine the course which the trolley will take.

On the right side of the track there is a single railroad worker, Jones, who will definitely be killed if the brain steers the trolley to the right. If the railman on the right lives, he will go on to kill five men for the sake of killing them, but in doing so will inadvertently save the lives of thirty orphans (one of the five men he will kill is planning to destroy a bridge that the orphans' bus will be crossing later that night). One of the orphans that will be killed would have grown up to become a tyrant who would make good utilitarian men do bad things. Another of the orphans would grow up to become G.E.M. Anscombe, while a third would invent the pop-top can.

If the brain in the vat chooses the left side of the track, the trolley will definitely hit and kill a railman on the left side of the track, "Leftie", and will hit and destroy ten beating hearts on the track that could (and would) have been transplanted into ten patients in the local hospital that will die without donor hearts. These are the only hearts available, and the brain is aware of this, for the brain knows hearts. If the railman on the left side of the track lives, he too will kill five men, in fact the same five that the railman on the right would kill. However, "Leftie" will kill the five as an unintended consequence of saving ten men: he will inadvertently kill the five men rushing the ten hearts to the local hospital for transplantation. A further result of "Leftie's" act would be that the busload of orphans will be spared. Among the five men killed by "Leftie" are both the man responsible for putting the brain at the controls of the trolley, and the author of this example. If the ten hearts and "Leftie" are killed by the trolley, the ten prospective heart-transplant patients will die and their kidneys will be used to save the lives of twenty kidney-transplant patients, one of whom will grow up to cure cancer, and one of whom will grow up to be Hitler. There are other kidneys and dialysis machines available, however the brain does not know kidneys, and this is not a factor.

Assume that the brain's choice, whatever it turns out to be, will serve as an example to other brains-in-vats and so the effects of his decision will be amplified. Also assume that if the brain chooses the right side of the fork, an unjust war free of war crimes will ensue, while if the brain chooses the left fork, a just war fraught with war crimes will result. Furthermore, there is an intermittently active Cartesian demon deceiving the brain in such a manner that the brain is never sure if it is being deceived.

QUESTION: What should the brain do?

[ALTERNATIVE EXAMPLE: Same as above, except the brain has had a commissurotomy, and the left half of the brain is a consequentialist and the right side is an absolutist.]

Replies from: FAWS
comment by FAWS · 2010-12-08T12:24:38.661Z · LW(p) · GW(p)

Choose the left track, because cancer kills more people than Hitler (assuming the cure would otherwise be delayed by at least 10 years, and that implementing it doesn't cost more than is currently spent on cancer and a few other things).

comment by Strange7 · 2010-12-06T00:12:56.441Z · LW(p) · GW(p)

And what is the purpose of identifying moral intuitions?

Replies from: DSimon
comment by DSimon · 2010-12-06T14:22:51.584Z · LW(p) · GW(p)

Figuring out how to manipulate those intuitions in order to increase sales of Frosted Flakes.

Replies from: Strange7
comment by Strange7 · 2010-12-06T16:11:14.710Z · LW(p) · GW(p)

In which case those who neither currently want Frosted Flakes nor want to want them are still best served by not participating.

Replies from: David_Gerard
comment by David_Gerard · 2010-12-06T16:18:28.672Z · LW(p) · GW(p)
  1. We just need to infiltrate the philosophy departments and get them to post to blogs to try to convince people that answering hypotheticals is what an honest thinking person should do.
  2. Lots of manipulable intuitions.
  3. Profit!
comment by wstrinz · 2010-12-08T04:14:08.025Z · LW(p) · GW(p)

I've used the trolley problem a lot, at first to show off my knowledge of moral philosophy, but later, when I realized anyone who knows any philosophy has already heard it, to shock friends who think they have a perfect and internally consistent moral system worked out. But I add a twist, which I stole from an episode of Radiolab (which got it from the last episode of M*A*S*H), that I think makes it a lot more effective: say you're the mother of a baby in a village in Vietnam, and you're hiding with the rest of the village from the Viet Cong. Your baby starts to cry, and you know that if it keeps crying they'll find you and kill the whole village. But you could smother the baby (your baby!) and save everyone else. The size of the village can be adjusted up or down to hammer in the point. Crucially, I lie at first and say this is an actual historical event that really happened.

I usually save this one for people who smugly answer both trolley questions with "they're the same, of course I'd kill one to save five in each case", but it's also remarkably effective at dispelling objections of implausibility and rejection of the experiment. I'm not sure why this works so well, but I think our bias toward narratives we can place ourselves in helps. Almost everyone at this point says they think they should kill the baby, but they just don't think they could, to which I respond, "Doesn't the world make more sense when you realize you value thousands of complex things in a fuzzy and inconsistent manner?". Unfortunately, I have yet to make friends with any true psychopaths. I'd be interested to hear their responses.

Replies from: Alicorn, None, wedrifid, fr00t, Desrtopa, WrongBot, wedrifid
comment by Alicorn · 2010-12-08T04:37:44.007Z · LW(p) · GW(p)

This is only equivalent to a trolley problem if you specify that the baby (but no one else) would be spared, should the Viet Cong find you. Otherwise, the baby is going to die anyway, unlike the lone person on the second trolley track who may live if you don't flip the switch.

Replies from: shokwave, wstrinz, wedrifid
comment by shokwave · 2010-12-08T05:57:50.074Z · LW(p) · GW(p)

You could hack that in easily; surely most soldiers have qualms about killing babies.

comment by wstrinz · 2010-12-08T15:54:26.115Z · LW(p) · GW(p)

Great point. I've never thought of that, and no one I've ever tried this one on has mentioned it either. This makes it more interesting to me that some people still wouldn't kill the baby, but that may be for reasons other than real moral calculation.

Replies from: TheOtherDave, CarlJ
comment by TheOtherDave · 2010-12-08T16:47:59.385Z · LW(p) · GW(p)

For my own part: I have no idea whether I would kill the baby or not.

And I have even less of an idea whether anyone else would... I certainly don't take giving answers like "I would kill the baby in this situation" as reliable evidence that the speaker would kill the baby in this situation.

But I generally understand trolley problems to be asking about what I think the right thing to do in situations like this is, not asking me to predict whether I will do the right thing in them.

Replies from: wstrinz
comment by wstrinz · 2010-12-08T21:08:56.999Z · LW(p) · GW(p)

I agree, I can't really reliably predict my actions. I think I know the morally correct thing to do, but I'm skeptical of my (or anyone's) ability to make reliable predictions about their actions under extreme stress. As I said, I usually use this when people seem overly confident of the consistency of their morality and their ability to follow it, as well as with people who question the plausibility of the original problem.

But I do recall the response distributions for this question mirroring the distribution for the second trolley problem; far fewer take the purely consequentialist view of morality than when they just have to flip a switch, even independently of their ability to act morally. I still don't find it incredibly illuminating, as all it shows is that our moral intuitions are fundamentally fuzzy, or at least that we value things other than just how many people live or die.

comment by CarlJ · 2015-08-26T09:09:36.485Z · LW(p) · GW(p)

Maybe this can work as an analogy:

Right before the massacre at My Lai, a squad of soldiers is pursuing a group of villagers. A scout sees them up ahead at a small river, splitting up and going in different directions. An elderly man goes to the left of the river and the five other villagers go to the right. The old man is trying to leave a large trail in the jungle, so as to fool the pursuers.

The scout waits for a few minutes, until the rest of his squad joins him. They are heading along the right side of the river and will probably continue that way, putting the five villagers at risk. The scout signals to the others that they should go left. The party follows, and they soon capture the elderly man and bring him back to the village center, where he is shot.

Should the scout instead have said nothing or kept running forward, so that his team should have killed the five villagers instead?

There are some problems with equating this to the trolley problem. First, the scout cannot know for certain beforehand that his team is heading in the direction of the larger group. Second, the best solution may be to try to stop the squad by faking a reason to go back to the village (saying the villagers must have run in a completely different direction).

comment by wedrifid · 2010-12-08T06:03:57.843Z · LW(p) · GW(p)

Even then it would be rather different from a trolley problem. After all, it involves asking a mother whether she would sacrifice her own child for the 'greater good'. The only reasonable response I can think of to that question is a solid slap in the face! How dare they ask someone that!

comment by [deleted] · 2010-12-08T04:35:47.090Z · LW(p) · GW(p)

I immediately thought, "Kill the baby." No hesitation.

I happen to agree with you on morality being fuzzy and inconsistent. I'm definitely not a utilitarian. I don't approve of policies of torture, for example. It's just that the village obviously matters more than a goddamn baby. The trolley problem, being more abstract, is more confusing to me.

comment by wedrifid · 2010-12-08T04:27:45.735Z · LW(p) · GW(p)

Unfortunately, I have yet to make friends with any true psychopaths. I'd be interested to hear their responses.

They would say the same thing only with more sincerity.

comment by fr00t · 2010-12-08T20:01:53.916Z · LW(p) · GW(p)

The answer that almost everyone gives seems to be very sensible. After all, the questions "What do I believe I would actually do?" and "What do I think I should do?" are different. Obviously, self-modifying until these answers are consistent in the largest possible subset of scenarios is probably a good thing, but that doesn't mean such self-modification is easy.

Most mothers would simply be incapable of doing such a thing. If they could press a button to kill their baby, more would probably do so, just as more people would flip a switch to kill than push someone in front of a train.

You obviously should kill the baby, but it is much more difficult to honestly say you would kill a baby than to say you would flip a switch: the distinction is not one of morality but of courage.

As a side note, I prefer the trolley-problem modification where you can have an innocent, healthy young traveler killed in order to save five people in need of organs. Saying "fat man", at least for me, obfuscates the moral dilemma and makes it somewhat easier.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-08T20:16:21.390Z · LW(p) · GW(p)

self-modifying until these answers are consistent in the largest possible subset of scenarios...

...weighted by the likelihood of those scenarios, and the severities of the likely consequences of behaving inconsistently in those scenarios.

Most problems of this sort are phrased in ways that render the situation epistemically unreachable, which makes their likelihood so low as to be worth ignoring.

Re: your side note... am I correct in understanding you to mean that you find imagining killing a fat man less uncomfortable than imagining killing a healthy young traveler?

comment by Desrtopa · 2010-12-08T04:34:28.001Z · LW(p) · GW(p)

but it's also remarkably effective at dispelling objections of implausibility and rejection of the experiment.

If this were a real situation rather than an artificial moral dilemma, I'd say that if you can't silence the baby just by covering its mouth, you should shake it. It gets them to stop making noise, and while it's definitely not good for them, it'll still give the baby better odds than being smothered to death.

comment by WrongBot · 2010-12-08T04:38:46.186Z · LW(p) · GW(p)

I would smother the baby and then feel incredibly, irrationally guilty for weeks or months.

I am not a psychopath, but I am a utilitarian. I value having a consistent set of values more than I value any other factor that has come into conflict with that principle so far.

Replies from: wstrinz, wedrifid
comment by wstrinz · 2010-12-08T15:52:31.575Z · LW(p) · GW(p)

I hope I'd do the same. I've never had to kill anyone before though, much less my own baby, so I can't be totally sure I'd be capable of it.

comment by wedrifid · 2010-12-08T04:58:34.045Z · LW(p) · GW(p)

I am not a psychopath, but I am a utilitarian. I value having a consistent set of values more than I value any other factor that has come into conflict with that principle so far.

Utilitarian specifically or consequentialist?

Replies from: WrongBot
comment by WrongBot · 2010-12-08T05:06:37.471Z · LW(p) · GW(p)

Consequentialist; I should know better than to be imprecise about that here, especially because there are sad things I find to have great value.

comment by wedrifid · 2010-12-08T04:30:40.565Z · LW(p) · GW(p)

I usually save this one for people who smugly answer both trolley questions with "they're the same, of course I'd kill one to save five in each case", but it's also remarkably effective at dispelling objections of implausibility and rejection of the experiment. I'm not sure why this works so well, but I think our bias toward narratives we can place ourselves in helps. Almost everyone at this point says they think they should kill the baby, but they just don't think they could

The "at this point" part is interesting. Have you ever tried asking the question without the abstract priming? I'd like to see the difference.

comment by CronoDAS · 2010-12-06T00:41:56.843Z · LW(p) · GW(p)

"Remember, you can't be wrong unless you take a position. Don't fall into that trap." - Scott Adams

comment by TheOtherDave · 2010-12-05T07:17:36.053Z · LW(p) · GW(p)

An implicit assertion underlying this post seems to be that the sorts of people who answer trolley problems rather than dodge them are more likely to take action effectively in situations that require doing harm in order to minimize harm.

Or am I misunderstanding you?

If you are implying that: why do you believe that?

Replies from: Desrtopa
comment by Desrtopa · 2010-12-05T07:24:43.207Z · LW(p) · GW(p)

I wouldn't say that; just because a person can answer the question doesn't mean they have an outcome-optimizing moral system, or even that they're not simply creating post-hoc rationalizations of their knee-jerk reactions, but it suggests that they believe in the value of having a comprehensive moral system. Whether anyone responding to the dilemma would take action effectively is another question entirely.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-05T07:31:35.175Z · LW(p) · GW(p)

OK.

I had inferred from statements like "They [question-evaders] have placed themselves in a reality too accommodating of their preferences to force them to have a system for dealing with situations with no ideal outcomes." that you were comparing them to question-answerers, who do develop such a system and consequently deal effectively with such situations.

If your position is instead that whether people answer trolley questions or not in no way predicts whether they deal effectively with such situations, then what's the problem?

That is: OK, they evade the question, or they answer it. Either way, why is this an "unnerving implication"?

Replies from: Desrtopa
comment by Desrtopa · 2010-12-05T07:41:47.173Z · LW(p) · GW(p)

There may be other explanations I haven't adequately considered, but the impression I get from the people with whom I've discussed the matter, and on whom I based the post, is that they haven't internalized the idea that the world is inconvenient enough to call for a systematic way of dealing with problems that lack ideal solutions.

In consequentialist terms, I don't suppose that this is actually worse than constructing an ethical system that simply justifies natural non-utilitarian inclinations post hoc, but it strikes me as significantly more naive.

Replies from: Perplexed, NancyLebovitz
comment by Perplexed · 2010-12-05T16:34:07.629Z · LW(p) · GW(p)

they haven't internalized the idea that the world is inconvenient enough to call for a systematic way of dealing with problems that lack ideal solutions.

Perhaps they have had bad experience with "a systematic way of dealing with problems that lack ideal solutions."

Hard cases make bad law is a well known legal adage. There is, I think, some wisdom exhibited in resisting systematizers armed with trolley problems.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-05T18:27:25.563Z · LW(p) · GW(p)

There is, I think, some wisdom exhibited in resisting systematizers armed with trolley problems.

Nicely put.

This seems to me a special case of the "bar bet" rule: if someone offers to bet me $20 that they can demonstrate something, I should confidently expect to lose the bet, no matter how low my priors are on expecting the thing itself. (That said, in many contexts I should take the bet anyway.)

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2010-12-06T00:31:55.496Z · LW(p) · GW(p)

(That said, in many contexts I should take the bet anyway.)

I realize that this is off topic, but why?

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-06T01:02:09.261Z · LW(p) · GW(p)

It has to do with the social exchange of "bar bets" (I don't actually hang out at bars, but that's the trope; similar things happen in a lot of contexts). If I'm among friends (that is, it's an iterated arrangement) and I flatly refuse to participate just on the grounds that there has to be a catch somewhere, without being able to articulate a good theory for what the catch is, I lose status that may well be worth more to me than the bet was.

Replies from: DSimon
comment by DSimon · 2010-12-06T14:28:40.057Z · LW(p) · GW(p)

Also, if someone says to you "I'll bet you $20 I can [do X]", what they're really saying is "I'm going to do [X], and it'll be super interesting and fun for all involved, especially if you put in $20 so as to add an element of risk to the proceedings".

The expectation is that some other night, you can bet them $20 about some interesting thing you can do.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-06T14:32:31.752Z · LW(p) · GW(p)

Agreed. I think we're kind of saying the same thing here, though your explanation is a lot more accessible. (I really should know better than to try to talk about social patterns when my head has been recently repatterned by software requirements specification.)

comment by NancyLebovitz · 2010-12-05T11:24:27.861Z · LW(p) · GW(p)

They haven't internalized the idea that the world is inconvenient enough to call for a systematic way of dealing with problems that lack ideal solutions.

I suspect that they have internalized the idea that the world allows for ideal solutions, or at least non-negative solutions, because so much current fiction is based on happy endings.

I wonder if people from cultures which include tragic fiction would tend to answer the trolley problem differently.

Replies from: Jack
comment by Jack · 2010-12-05T11:59:12.569Z · LW(p) · GW(p)

I wonder if people from cultures which include tragic fiction would tend to answer the trolley problem differently.

There hasn't been an extensive global survey that I'm aware of, but reasonably diverse samples have turned up approximately zero divergence between demographic groups.

Btw folks, Philippa Foot died last month. RIP.

comment by sketerpot · 2010-12-05T19:45:39.604Z · LW(p) · GW(p)

I get frustrated by this every time someone mentions the classic short story The Cold Equations (full text here). The premise of the story is a classic trolley problem (...In Space!), where a small spaceship carrying much-needed medical supplies gets a stowaway, which throws off its mass calculations. If the stowaway is not ejected into space, the ship will crash and the people on the planet will die of a plague. So the (innocent, lovable) stowaway is killed and ejected, and the day is saved. The end.

Whenever this comes up, somebody will attack the story as contrived, pointing out that it could have been prevented by some "Keep Out" signs and a few more door locks. This is usually treated as an excuse to dismiss the premise of the story entirely -- exactly what you describe as a common reaction to maximally inconvenient trolley problems.

(By the way, I searched on Less Wrong for previous discussions of The Cold Equations, and was pleasantly surprised that people around here seem much less inclined to use the story's plot holes as an excuse to dismiss the whole idea. The nits still get picked, but not to a facepalm-worthy extent.)

Replies from: Desrtopa, Jiro, thomblake
comment by Desrtopa · 2010-12-05T22:22:14.398Z · LW(p) · GW(p)

When you're writing an actual story, I feel like you have to maintain higher standards for plausibility than when you're writing a straight moral dilemma. I only know The Cold Equations by its reputation, but I can certainly understand how that sort of contrivance could hurt it on a literary level.

comment by Jiro · 2014-10-15T17:56:12.801Z · LW(p) · GW(p)

(Reply to old post)

The problem with "The Cold Equations" isn't just that it could have been prevented by signs and door locks. The problem is that the fact that it could have been prevented by signs and door locks turns it from "the laws of nature results in having to kill someone" to "human irresponsibility results in having to kill someone". Failing to take precautions to keep people out of a situation where they could die means the death is caused by negligence, not impersonal forces of nature.

comment by thomblake · 2010-12-07T16:43:34.891Z · LW(p) · GW(p)

What's frustrating about that? It doesn't make any sense: if the fuel/weight margin had to be optimized that much, then they'd better damn well weigh the thing before takeoff, or do whatever they need to do as a second-best option to detect stowaways / extra cargo / etc.

Replies from: shokwave
comment by shokwave · 2010-12-07T16:59:28.528Z · LW(p) · GW(p)

The frustrating thing is that people produce a specific criticism ("In this story, they could have thrown tables out the airlock, or put up more signs!") and presume they have shattered the premise of the story (that there are situations where physical laws will require hard, horrifying choices; in these situations the physical laws will not bend, no matter how immoral a decision they require).

Replies from: thomblake
comment by thomblake · 2010-12-07T17:07:56.379Z · LW(p) · GW(p)

Ah. I don't think most folks would consider that very abstract notion "the premise of the story", though the author clearly thought it was the relevant detail. The characters behaved unrealistically, and shouldn't have been there in the first place. The same point is made very believably in many less contrived contexts, like stories about people trying to get on the Titanic's too-few lifeboats.

Replies from: shokwave
comment by shokwave · 2010-12-07T17:26:03.288Z · LW(p) · GW(p)

Well, the premise of the story was more to go directly against the grain of the current science fiction trend, which was clever-but-contrived escapes from seemingly physical-law-bound situations. So the author was restricted to science-fiction stories.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-03-20T21:28:24.481Z · LW(p) · GW(p)

Actually, the author kept writing "clever-but-contrived escapes" and it was the editor, John Campbell, who wanted to go against the grain:

I learned how strong the hand of the editor can be in shaping a story. John told me he had three times! sent "Cold Equations" back to Godwin, before he got the version he wanted. In the first two re-writes, Godwin kept coming up with ingenious ways to save the girl! Since the strength of this deservedly classic story lies in the fact the life of one young woman must be sacrificed to save the lives of many, it simply wouldn't have the same impact if she had lived.

John wasn't trying to take credit for having shaped one of the masterpieces in the SF field. His attitude and words clearly indicated he simply felt it was the responsibility of an editor to improve on any given story, where possible -- and he had done that.

http://www.challzine.net/23/23fivedays.html

comment by drc500free · 2010-12-09T19:30:04.688Z · LW(p) · GW(p)

Morality is in some ways a harder problem than Friendly AI. On the plus side, humans who don't control nuclear weapons aren't that powerful. On the minus side, morality has to run at the level of seven billion separate instances of a person who may have bad information.

So it needs heuristics that are robust against incomplete information. There's definitely an evolutionary just-so story about the penalty of publicly committing to a risky action. But even without the evolutionary social risk, there is a moral risk in permitting an interventionist murder when you aren't all-knowing.

This looks just like the Bayesian 101 example of a medical test that is 99% accurate for a disease that has a 1% occurrence rate. If you say that I'm in a very rare situation that requires me to commit murder, I have to assume that there are going to be many more situations that could be mistaken for this one. The "least convenient universe" story is tantalizing, but I think it leads astray here.
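
To make the analogy concrete, here is a minimal sketch of the arithmetic in Python, assuming "99% accurate" means both a 99% true-positive rate and a 99% true-negative rate (the numbers, like the variable names, are purely illustrative):

    # Bayes' theorem for a 99%-accurate test of a 1%-prevalence condition.
    prevalence = 0.01        # P(situation genuinely requires the awful act)
    sensitivity = 0.99       # P(evidence says it does | it genuinely does)
    false_positive = 0.01    # P(evidence says it does | it does not)

    p_evidence = sensitivity * prevalence + false_positive * (1 - prevalence)
    posterior = sensitivity * prevalence / p_evidence

    print(posterior)  # 0.5: a coin flip, despite the "99% accurate" evidence

Under these assumptions, a situation that looks like it demands murder is only 50% likely to actually demand it.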

comment by David_Gerard · 2010-12-06T09:15:16.887Z · LW(p) · GW(p)

Having posted lots in this thread about excellent reasons not to answer the question, I shall now pretend to be one of the students who frustrate Desrtopa so, and answer. Thus cutting myself off from becoming Prime Minister, but oh well.

The key to the problem is: I don't actually know or care about any of these people. So the question is answered in terms of the consequences (legal and social) to me, not to them.

e.g. in real life, action with a negative consequence tends to attract greater penalties than lack of action. So pushing one in front to save five is right out. Actively switching to kill one instead of leaving the switch to five, that one would be tricky - I might feel it was a less bad response and hence do it, despite possible penalties for having dared take an action instead of just floundering. (There, an actual answer.)

If I actually know and like any of these people, the problem gets more complicated. If all the friends are on one branch, they win, everyone else loses. If there are options of which friends I kill (and that phrase popped into my head as "which friends I kill" rather than "which friends die" - I seem not to be shirking responsibility), then I have some tricky calculation to do.

Whatever happens, I do expect I would be extremely upset and not fully functional for a little while afterwards.

There. Is that enough not to fall at the first hurdle in Philosophy 100?

comment by thomblake · 2010-12-07T17:04:07.890Z · LW(p) · GW(p)

As an ethicist who routinely rejects trolley problems, I feel I must respond to this.

The trolley problem was first formulated by Philippa Foot as a parody of the ridiculous ethical thought experiments developed by philosophers of the time. Its purpose was to cause the reader to observe that the thought experiment is a contrived scenario that will never occur (apparently, it serves that purpose in most untrained folks), and thus serves as an indictment of how divorced reasoning about ethics in philosophy had become from the real world of ethical decision-making.

When I hear a trolley problem, I immediately try to start filling in details. Who are the five people, and who is the one? Why are they on the trolley tracks? Why am I the only person who can do something about it? Are there really no other alternatives, and if so, how is this known to me?

And if the best "least convenient possible world" ends up being one which doesn't even remotely resemble reality, then I don't mind if my moral compass outputs an undefined value in those spaces; my morality is built for the real world.

Replies from: David_Gerard, Desrtopa
comment by David_Gerard · 2010-12-08T10:56:23.803Z · LW(p) · GW(p)

But trolley-style problems have real application, e.g. for politicians. Someone with actual political power will frequently face lose-lose problems that aren't hypothetical, and know that they will be blamed whatever they do or fail to do.

Replies from: handoflixue
comment by handoflixue · 2010-12-18T00:24:14.935Z · LW(p) · GW(p)

If you're just genuinely curious where people would go when push comes to shove, then all the creative solutions are obviously worthless data. If you're trying to get people to think about the real world, and firm up their own understanding, shouldn't we be berating the people who would blithely kill one person to save five without considering a creative approach?

I'd say one is occasionally, rarely, in a situation where immediate action is truly required, and the trolley problem is good for developing a "moral reflex" there - just as martial arts give one a physical reflex for a fight that allows no time for thought.

However, the more common situation is the one where a creative approach, a third option, is exactly what we want. By discouraging such responses, I'd think this reinforces the rules "don't try creative solutions" and "you have no power except this little bit" - it encourages an attitude of mindless acceptance of the situation as presented, and insists that everything should reduce to dry moral arithmetic.

I'd feel most comfortable around someone whose answer is "I'd try to find a creative solution but, given push comes to shove, I'd kill one to save five".

comment by Desrtopa · 2010-12-07T17:09:16.205Z · LW(p) · GW(p)

The trolley problem was first formulated by Philippa Foot as a parody of the ridiculous ethical thought experiments developed by philosophers of the time. Its purpose was to cause the reader to observe that the thought experiment is a contrived scenario that will never occur (apparently, it serves that purpose in most untrained folks), and thus serves as an indictment of how divorced reasoning about ethics in philosophy had become from the real world of ethical decision-making.

I've never heard this before, and nothing I've read on the history or uses of the problem as a tool of psychological study suggest that this is the case. Where did you hear this?

Replies from: thomblake
comment by thomblake · 2010-12-07T17:20:55.654Z · LW(p) · GW(p)

Where did you hear this?

I'm not sure. It's more or less the received wisdom in virtue ethics, for which in the 20th century Foot was a foundational figure. I'll see if I can find a reference, though I'm sure I got that impression from the original text.

Replies from: Jack
comment by Jack · 2010-12-07T17:29:28.016Z · LW(p) · GW(p)

I believe this is the original and she seems to be using these thought experiments unironically, though I haven't read closely.

comment by Kevin · 2010-12-05T09:14:30.762Z · LW(p) · GW(p)

I think you are overly generalizing against people who don't like or don't understand philosophy.

comment by shokwave · 2010-12-05T06:08:29.190Z · LW(p) · GW(p)

even when it is posed in its most inconvenient possible forms, where they have the time to collect themselves and make a reasoned choice, but no possibility of implementing alternative solutions.

I am a conscientious "third-alternativer" on trolley problems, and to me this seems like an abuse of the least convenient possible world principle. If there is a world with no possibility of implementing alternative solutions, I will pick the outcome with the best consequences, but I don't believe there actually is a world with no possibility of alternatives - I reject the "possible" part of your least convenient possible world.

It would be like arguing to an atheist: "The least convenient possible world is one where the Christian God exists with probability 1."

Replies from: prase, Desrtopa
comment by prase · 2010-12-05T10:17:50.088Z · LW(p) · GW(p)

I am an atheist, and I have no problem answering questions of the type "if creationism were true, would you support its teaching in schools" or "if the Christian God exists, would you pray every day" (both answers are yes, if that matters). What's the problem with those hypotheticals? The questions are well formed, and although they are useless in the sense that their premise is almost certainly false, the answers can still reveal something about my psychology. I don't think answering such questions would turn me into a creationist.

Replies from: Nornagest
comment by Nornagest · 2010-12-05T19:22:30.609Z · LW(p) · GW(p)

IAWTC in principle, but have noticed in practice that similarly formed questions almost always segue into an appeal to popularity or an appeal to uncertainty. Since dealing with these arguments is time-consuming and frustrating (they're clearly fallacious, but that's not obvious to most audiences), it usually works better to reject the premises at step one.

Same goes for most trolleylike problems posed in casual debate.

comment by Desrtopa · 2010-12-05T06:28:59.473Z · LW(p) · GW(p)

The least convenient world for the purposes of what argument? The point of the least convenient possible world principle is to keep you from taking outs on dilemmas, outs that would prevent you from learning anything about your actual moral principles.

The relevance of the trolley problem is not, for the most part, to situations where there are only two alternatives (of which there are few), but to situations where there are no options without negative repercussions (of which there are many).

Replies from: shokwave
comment by shokwave · 2010-12-05T06:43:49.567Z · LW(p) · GW(p)

no options without negative repercussions

And in that case I pick the option with the least negative repercussions. I guess that shows that I am consequentialist in my morality.

The relevance of the trolley problem is not, for the most part, to situations where there are only two alternatives (of which there are few)

I expressed concern in the other trolley problem thread that there are in fact many situations that appear to have two negative options and no obvious alternatives; when faced with these problems people may attempt to solve them with "trolley-problem logic" rather than looking for third alternatives, which leads to them systematically performing worse on these kinds of moral problems.

Replies from: Jack
comment by Jack · 2010-12-05T11:28:13.491Z · LW(p) · GW(p)

Talking to people about philosophical thought experiments seems extremely unlikely to affect their problem solving abilities in the real world. The trolley problem is a transparently unrealistic scenario, convoluted so that answers reveal part of the structure of someone's moral code. It isn't presented the way real-time crises are presented nor are participants encouraged to "solve it". Obviously looking for third options is a good idea in seemingly lose-lose scenarios but something is wrong with people if they are, in fact, incapable of accepting that for the purposes of a philosophical thought experiment there are only two choices and then making a decision between the two.

Replies from: shokwave
comment by shokwave · 2010-12-05T13:05:53.399Z · LW(p) · GW(p)

With all the literature on priming and pattern-matching (the common case being people presented with the real-world cryonics option pattern-matching it to Pascal's Wager and rejecting it), I don't think this possibility can be rejected out of hand. I don't think trolley problems are in need of censoring; I know what the purpose of trolley problems is, and I can give you that information without having to accidentally prime myself to harm one friend to stop em from harming the entire friendship group.

Also,

philosophical thought experiments seems extremely unlikely to affect ... problem solving abilities in the real world

The field of decision theory seems somewhat predicated on this not being the case.

Obviously looking for third options is a good idea in seemingly lose-lose scenarios

Something about it mustn't be all that obvious - or maybe, it's obvious in hindsight.

something is wrong with people if they are, in fact, incapable of accepting that for the purposes of a philosophical thought experiment there are only two choices and then making a decision between the two.

I don't think any trolley-problem-rejector is actually incapable of accepting that. I think wedrifid is right, what happens is they come up with their answer (push the fat guy), they attempt to phrase it in a way that doesn't sound like murder (stop the cart with his ... body ...), they realise that no matter how they say it, the obvious answer is going to make them look like a cold-blooded killer (hey everyone! e just said e'd push a fat guy in front of a runaway cart!), and so they reject the question. Saying their rejection shows there's something wrong with them is the spinning-it-badly they were worried about in the first place (hey everyone! e can't even answer a simple question!).

Replies from: Jack
comment by Jack · 2010-12-05T14:01:16.948Z · LW(p) · GW(p)

I know what the purpose of trolley problems is, and I can give you that information without having to accidentally prime myself to harm one friend to stop em from harming the entire friendship group.

I'm sure it isn't surprising that most people lack the typical Less Wrong poster's ability to articulate the abstract. The trolley problem is important precisely because it lets us get this information from people who aren't so articulate.

Even if people aren't capable of answering a question without it priming them (which, loosely speaking, is probably true of all questions), that's a bad reason not to answer the question unless they think they're about to face some kind of crisis with a lot riding on their decision.

The field of decision theory seems somewhat predicated on this not being the case.

The field of decision theory is predicated on philosophical thought experiments priming the decision making of those who engage with them?

Something about it mustn't be all that obvious - or maybe, it's obvious in hindsight.

It's obvious in the abstract.

I don't think any trolley-problem-rejector is actually incapable of accepting that. I think wedrifid is right, what happens is they come up with their answer (push the fat guy), they attempt to phrase it in a way that doesn't sound like murder (stop the cart with his ... body ...), they realise that no matter how they say it, the obvious answer is going to make them look like a cold-blooded killer (hey everyone! e just said e'd push a fat guy in front of a runaway cart!), and so they reject the question. Saying their rejection shows there's something wrong with them is the spinning-it-badly they were worried about in the first place (hey everyone! e can't even answer a simple question!).

I'm not convinced there are many trolley-problem-rejectors, but certainly the kind of trolley-problem-rejector the OP talks about is easily explained by wedrifid's comment (and probably by several other explanations). The thesis that all trolley-problem-rejectors are pushers who realize they're in the minority is really interesting, though. When I was saying something was wrong with the problem-rejectors, I meant the idea of a principled rejection, not a rejection based on peer pressure, social fears, and signaling.

Incidentally, I have trouble answering the problems on an object level; I think because I've spent too much time on the meta-level questions, the object-level question no longer has a meaningful answer for me. I'd say both switching and pushing are acceptable but non-obligatory or supererogatory; but that's just an expression of my value pluralism. If you ask what I personally would do, I guess I wouldn't push the guy in front of the train, but that doesn't feel like it communicates anything meaningful about my moral intuitions.

Replies from: shokwave
comment by shokwave · 2010-12-05T14:10:45.788Z · LW(p) · GW(p)

The field of decision theory is predicated on philosophical thought experiments priming the decision making of those who engage with them?

Sorry, I meant that the field of decision theory is based on the idea that philosophical thought experiments (like the prisoner's dilemma, stag hunt, etc.) can affect your real-world problem solving skills (i.e. improve them).

The thesis that all trolley-problem-rejectors are pushers who realize they're in the minority is really interesting though

If I could develop it, I would probably say something along the lines of "The trolley problem is a cage match, deontological ethics against consequentialist. Rejectors are consequentialists who have a large weight on the consequences of breaking with deontological prescriptions. Rejecting the question is preferable to lying about one's own ethics, or breaking with one's ethical environment."

Replies from: David_Gerard
comment by David_Gerard · 2010-12-05T14:26:14.213Z · LW(p) · GW(p)

I think it's more generally explicable by lose-lose counterfactuals being in common use in the real world (politics, schoolyard) for purposes of entrapment - a rejection of lose-lose counterfactuals in general, rather than of the trolley question in particular. This would also explain why philosophy lecturers have such a hard time getting many people not to just outright reject counterfactuals: a philosophy class will for many be the first time a lose-lose counterfactual isn't being used as a form of entrapment.

Edit: TheOtherDave below nails it, I think: it's not just lose-lose counterfactuals, people heuristically treat any hypothetical as a possible entrapment and default to the safe option of refusing to play. If they don't know you, they aren't just being stupid.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-05T16:51:24.807Z · LW(p) · GW(p)

IME this is a special case of a more general refusal to answer "hypothetical questions", even when they aren't lose-lose.

I used to run into this a lot... someone says something, I ask some question about it of the form "So, are you saying that if X, then Y?" and they simply refuse to answer the question on the (sometimes unarticulated) grounds that I'm probably trying to trick them. (Tone of voice and body language are really important here; I started running into this reaction less when I became more careful to project an air of "this is interesting and I'm exploring it" rather than "this is false and I am challenging it".)

This also used to infuriate me: I would react to it as an expression of distrust. It helped to explicitly understand what was going on, though... once I recognized that it actually was an expression of distrust, and that the distrust was entirely reasonable if they couldn't read my mind, I stopped getting so angry about it. (Which in turn helped with the body-language and tone issues.)

comment by handoflixue · 2010-12-18T00:35:11.091Z · LW(p) · GW(p)

I'd honestly find the far more plausible answer to be that people just have trouble with truly direct, unambiguous communication. My own experience is that either I'm very bad at such communication, or else other people are very bad at receiving it. When I ask extremely specific questions, people will usually assume a more generalized motive to asking it, and try to answer THAT question. I've had conversations with very smart people who kept re-interpreting my questions because they assumed I was trying to make a specific point or disprove some specific detail.

I'd actually find it very interesting to study how wording affects this question. You have the improbable scenario of the trolley, you have the far more realistic scenario of the crying baby, and then you have the simple and direct question "would you kill a person to save five others?"


Lastly, I'd say I have a very strong objection to this passage: "However, trolleylike dilemmas are actually quite common in real life, when you take the scenario not as a case where only two options are available, but as a metaphor for any situation where all the available choices have negative repercussions"

The Trolley scenario is a strong binary decision with perfect information and absolutely no creative thinking or alternate solution possible. Do you really think that comes up frequently in real life? If not, why not use an exercise that accommodates and praises creative solutions instead of rejecting them as being outside the binary scope of the exercise?

Replies from: Desrtopa
comment by Desrtopa · 2011-02-17T06:48:40.177Z · LW(p) · GW(p)

Kind of late to get back to this, but

The Trolley scenario is a strong binary decision with perfect information and absolutely no creative thinking or alternate solution possible. Do you really think that comes up frequently in real life? If not, why not use an exercise that accommodates and praises creative solutions instead of rejecting them as being outside the binary scope of the exercise?

Real life trolleylike dilemmas are generally ones where creative thinking has already been done, but has not turned up any solutions without serious downsides. In such cases, deferring the decision for a perfect solution, when enough time has been dedicated to creative thinking that more is unlikely to deliver a new, better solution, is itself a failure condition.

comment by ugquestions · 2010-12-08T06:12:11.604Z · LW(p) · GW(p)

The top 10% of humanity accumulates 30% of the world's wealth. 20% of humanity dies from preventable, premature death (and suffers horribly).

The proposition...

  • 10% of the top 10% have all their wealth taken from them (lottery selection process).
  • They are forced to work as hard and effectively as they had previously, and are given only enough of the profits they produce to live modestly.
  • They lose everything, work for 5 years, and receive 10% of their original wealth back.
  • The next 10% of the top 10% is then selected.
  • The wealth taken is used to ensure the survival of the 20% dying from preventable premature death.

In this scenario 1% of people are forced to live modestly in order to save up to 20% of humanity. No one need kill or be killed.

It would probably be reasonable to say the top 20% of earners would be against this proposal, and the majority of the bottom 40% would be in favour. If you're reading this, you are likely one of the other 40% of humankind, who can choose to support or reject the proposal. What would you say?

I am aware there are many holes in the proposition (unintended consequences, etc.); however, this is a hypothetical based on a real situation that exists now, one we are all contributing to in one way or another.
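
For what it's worth, the raw headcounts behind the proposition work out as follows (a minimal sketch; the 7-billion population figure is borrowed from drc500free's comment elsewhere in this thread, and everything else comes straight from the percentages above):

```python
population = 7_000_000_000

top_decile = 0.10 * population  # the wealthiest 10%
selected = 0.10 * top_decile    # the 1% chosen by lottery to live modestly
at_risk = 0.20 * population     # those facing preventable, premature death

print(f"constrained to a modest lifestyle: {selected:,.0f}")          # 70,000,000
print(f"potentially saved:                 {at_risk:,.0f}")           # 1,400,000,000
print(f"saved per person constrained:      {at_risk / selected:.0f}") # 20
```

That 20-to-1 ratio is this proposition's version of the trolley's five-to-one.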

Replies from: Elizabeth, cata, Jiro
comment by Elizabeth · 2010-12-09T06:37:03.658Z · LW(p) · GW(p)

There is a major flaw in your proposal: the bottom 40% would not be in favor. Some of them would be, but there is a demonstrable bias which causes people to be irrationally optimistic about their own future wealth. This bias is a major factor in the Republicans maintaining much of their base, among other things.

However, to answer your question, while I would not favor your proposal, I would favor a tax on all of that top ten percent which would garner the same revenue as your proposal.

Replies from: ugquestions
comment by ugquestions · 2010-12-09T07:41:46.592Z · LW(p) · GW(p)

An increase in tax would only create an increase in product prices as the wealthy try to recoup their losses. This would adversely affect the very people you would be trying to help. The middle class, whose support you would require, would also be affected negatively, and the proposal would then be overturned.

Increasing taxes would not work.

Replies from: wnoise
comment by wnoise · 2010-12-09T09:27:34.864Z · LW(p) · GW(p)

Huh? You're going to have to explain how increasing the tax (on the wealthy) would lead to increased product prices. They might try to recoup their losses. (Or they might decide it's not worth working as hard for less reward -- this is the usual assumption in economics.) But what's the mechanism that leads from that to raised prices? There is an optimal price to set to maximize profit. Raising prices past that point isn't going to increase profits, because the volume sold will be lower.
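
A toy model may help here (my own illustrative sketch with made-up linear-demand parameters, not anything from the thread): a lump-sum tax on accumulated wealth or profits is just a constant subtracted from profit, so it cannot move the profit-maximizing price.

```python
# Toy model: demand q(p) = a - b*p, unit cost c.
# Profit is (p - c) * q(p) - lump_tax; the lump-sum term is
# constant in p, so the optimal price is unaffected by it.

a, b, c = 100.0, 2.0, 10.0  # made-up demand and cost parameters

def profit(p, lump_tax=0.0):
    return (p - c) * (a - b * p) - lump_tax

prices = [c + i * 0.01 for i in range(4000)]  # scan p from 10.00 to 49.99
best_untaxed = max(prices, key=profit)
best_taxed = max(prices, key=lambda p: profit(p, lump_tax=500.0))

print(best_untaxed, best_taxed)  # both ~30.0, the analytic optimum (a/b + c)/2
```

So whether the pass-through worry holds depends on the tax being levied per unit sold (which does shift the optimal price) rather than on wealth (which, in this toy at least, doesn't).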

Replies from: ugquestions
comment by ugquestions · 2010-12-09T23:14:06.294Z · LW(p) · GW(p)

I guess I was thinking of necessities like food, water, electricity, and medicine, the lack of which is causing the preventable premature deaths. Passing the costs of production (including increases in tax) on to consumers in order to maintain or grow profit margins is at the heart of our economic reality.

"Not worth working as hard for less reward" is the reason for the lottery among the top 10% of earners. Most individuals of this kind would believe this is a lottery they would not win, and would therefore continue to work as they do now. A tax increase on all of the top 10% would modify their behaviour, and at least some proportion of the cost would inevitably be passed on to all consumers.

comment by cata · 2010-12-09T07:34:50.916Z · LW(p) · GW(p)

This is just too complicated a scenario to boil down to such a simple question. The efficacy of that kind of redistribution would depend on all sorts of other properties of the economy and of society. I can imagine cultures in which that would work well, and others in which it would trigger a bloodbath. I don't think it's meaningful to ask whether someone would support it "in general."

Replies from: ugquestions
comment by ugquestions · 2010-12-09T07:53:26.308Z · LW(p) · GW(p)

I was aware of the many possible negative consequences such an action could have (and the impossibility of it ever having a chance of happening). However, if there were majority support across a society, above 75%, would the basic idea of sacrificing a small number of people to a modest lifestyle in order to save a large number of people be something you could support? Would a bloodbath be triggered with such support? I pose the question, and think it's a meaningful question, because it is in a "general" sense a decision that societies and civilization as a whole (and by extension all individuals) are making every day.

I spend $70 a month on entertainment. If I redirect this money I could save 7 people a month from a preventable premature death. We all make these decisions. If the question were a choice between throwing the fat person in front of the trolley or throwing yourself, in order to save people, which would you prefer?

Also remember that it is the "fat person", the wealthy, who propels the trolley into these people, to varying degrees.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-01-26T12:28:56.940Z · LW(p) · GW(p)

I spend $70 a month on entertainment. If I redirect this money I could save 7 people a month from a preventable premature death.

IIRC, the actual cost of saving a life is about $100-$1000, but certainly not $10.

Replies from: ata
comment by ata · 2011-01-26T17:30:13.174Z · LW(p) · GW(p)

Unless you're willing to save expected lives instead of having a high chance of saving currently-existing lives, of course. (In which case (IIRC) the cost of saving around 8 expected lives is $1, by Anna Salamon's estimate.)

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-03-20T21:19:30.043Z · LW(p) · GW(p)

How does she estimate $0.13 per expected life saved?

comment by Jiro · 2014-10-15T18:43:17.348Z · LW(p) · GW(p)

Replying to another old post, but isn't this suggestion just Omelas, except that you're replacing the one child with the 1%?

comment by ugquestions · 2010-12-07T12:14:57.628Z · LW(p) · GW(p)

It appears that we live in a world where not only will most people refuse complicity in a disaster in order to save more lives, but where many people reject outright the idea that they should have any considered set of moral standards for making hard choices at all.

We live in a world where most people refuse complicity in a disaster in order to "maintain a certain quality of life even though it costs many lives".

Perhaps this is the reason for opting out of answering the question: acting is just too hard. The decision and its consequences are for someone else. For most people this is the life they live. The power to think about and decide what to do in a disastrous situation is always in someone else's hands, and therefore never needs to be considered or contemplated at all.

I see this as one of the worst aspects of centralized power arrangements in communities, because the "complicity" of all citizens in man-made disasters, whether through direct action or inaction, should be understood and acknowledged by all. Perhaps then prevention or a change in behaviour would produce better outcomes.

Then again, probably not.

comment by taw · 2010-12-05T15:30:18.601Z · LW(p) · GW(p)

This is still true:

  • Trolley problems make a lot of sense in deontological ethics, to test supposedly universal moral rules in extreme situations.
  • Trolley problems do not make much sense in consequentialist ethics, as optimal action for a consequentialist can differ drastically between the messy complicated real world and the idealized world of thought experiments.

If you're a consequentialist, trolley problems are entirely irrelevant.

Replies from: Bongo, None
comment by Bongo · 2010-12-05T17:34:09.373Z · LW(p) · GW(p)

The messy complicated real world never contains situations where you can sacrifice a few people to benefit many people?

Or if it does, in such situations we'll figure out the optimal action using completely different considerations from those we would use in the idealized case?

I don't believe either of those.

Replies from: taw, David_Gerard
comment by taw · 2010-12-05T19:55:24.607Z · LW(p) · GW(p)

The messy complicated real world always contains people with different agendas, massive uncertainty and disagreement about likely outcomes, moral hazard, and affected people pushing to get their desired result by any means available.

If you assume them away, the trolley problem has nothing to do with the real world.

Replies from: ewbrownv, David_Gerard
comment by ewbrownv · 2010-12-06T21:02:42.396Z · LW(p) · GW(p)

Exactly. The central problems of real-world morality center on dealing with the uncertainty, bias, and signaling issues of realistic high-stakes scenarios. By assuming all that complexity away, trolley problems end up about as relevant as an economics problem without money or preferences.

A more useful research program would focus on probing the effects of uncertainty and social issues on moral decision-making. But that makes for poor cocktail party conversation.

comment by David_Gerard · 2010-12-06T21:12:33.915Z · LW(p) · GW(p)

Trolley problems may be useful if you're e.g. an extremely smart person doing Philosophy, Politics and Economics at Oxford and you're destined for a career in politics where dealing with real-life lose-lose hypotheticals is going to be part of the job. Or if you want to understand such people, e.g. because you're on one of the metaphorical tracks.

comment by David_Gerard · 2010-12-05T17:45:57.070Z · LW(p) · GW(p)

Of course it does. This is why such hypotheticals are used to entrap politicians, the ones who usually have the job of making the decision.

It's not clear to me whether the avoidance or entrapment came first.

comment by [deleted] · 2010-12-06T00:08:39.652Z · LW(p) · GW(p)

If you're a consequentialist, trolley problems are easy.

Replies from: wedrifid
comment by wedrifid · 2010-12-06T02:27:50.116Z · LW(p) · GW(p)

If you're a consequentialist, trolley problems are easy.

Only if you know whether or not someone is watching!

That is, getting caught not acting like a deontologist is a consequence that must sometimes be avoided. This becomes relevant when considering, for example, whether to murder AGI developers with a relatively small chance of managing friendliness but a high chance of managing recursive self improvement.

Replies from: Desrtopa
comment by Desrtopa · 2010-12-07T17:14:25.737Z · LW(p) · GW(p)

This becomes relevant when considering, for example, whether to murder AGI developers with a relatively small chance of managing friendliness but a high chance of managing recursive self improvement.

Relevant, perhaps, but if you absolutely can't talk them out of it, the negative expected utility of allowing them to continue could greatly outweigh that of being imprisoned for murder.

Of course, it would take a very atypical person to actually carry through on that choice, but if humans weren't so poorly built for utility calculations we might not even need AGI in the first place.

comment by Snowyowl · 2010-12-05T05:38:29.031Z · LW(p) · GW(p)

I think there have been posts about this before. Well, this and the "if it's not my responsibility, it's not my problem" mindset, which the trolley problem also touches on.

comment by [deleted] · 2012-01-15T05:31:01.214Z · LW(p) · GW(p)

It dawns on me that there is a much more general tendency among most people to try to bail out of moral dilemmas or other hypotheticals. In my personal experience, I sometimes wish it were socially accepted to shout "Stop making up alternate courses of action in my thought experiments!", but alas, we all have to deal with the single inference step.

(Is there a generalization of that "take a third option" tendency on dilemmas and hypothetical situations?)

Replies from: wedrifid
comment by wedrifid · 2012-01-15T05:36:01.148Z · LW(p) · GW(p)

It dawns on me that there is a much more general tendency among most people to try to bail out of moral dilemmas or other hypotheticals.

And so they should. Moral dilemmas are a social trap! If you must answer at all, never answer directly.

I just went in search of a comment thread in which Eliezer and I both mentioned this issue. But it turns out that it was actually elsewhere in this thread.

comment by Bongo · 2010-12-05T05:47:36.194Z · LW(p) · GW(p)

Excellent post. Seems to me that your points about how people react to moral problems apply to decision problems as well.