The Unselfish Trolley Problem

post by elharo · 2013-05-17T10:51:56.068Z · LW · GW · Legacy · 132 comments

By now the Trolley Problem is well known amongst moral philosophers and LessWrong readers. In brief, there's a trolley hurtling down the tracks. The dastardly villain Snidely Whiplash has tied five people to the tracks. You have only seconds to act. You can save the five people by throwing a switch and transferring the trolley to another track. However, the evil villain has tied a sixth person to the alternate track. Should you throw the switch?

When first presented with this problem, almost everyone answers yes. Sacrifice the one to save five. It's not a very hard choice.

Now comes the hard question. There is no switch or alternate track. The trolley is still coming down the tracks, and there are still five people tied to it. You are instead standing on a bridge over the tracks. Next to you is a fat man. If you push the man onto the tracks, the trolley car will hit him and derail, saving the five people; but the fat man will die. Do you push him?

This is a really hard problem. Most people say no, they don't push. But really, what is the difference here? In both scenarios you are choosing to take one life in order to save five. It's a net gain of four lives. Especially if you call yourself a utilitarian, as many folks here do, how can you not push? If you do push, how will you feel about that choice afterwards?

Try not to Kobayashi Maru this question, at least not yet. I know you can criticize the scenario and find it unrealistic. For instance, you may say you won't push because the man might fight back, and you'd both fall but not till after the trolley had passed so everyone dies. So imagine the fat man in a wheelchair, so he can be lightly rolled off the bridge. And if you're too socially constrained to consider hurting a handicapped person, maybe the five people tied to the tracks are also in wheelchairs. If you think that being pushed off a bridge is more terrifying than being hit by a train, suppose the fat man is thoroughly anesthetized. Yes, this is an unrealistic thought experiment; but please play along for now.

Have your answer? Good. Now comes the third, final, and hardest question; especially for anybody who said they'd push the fat man. There is still no switch or alternate track. The trolley is still coming down the tracks, and there are still five people tied to it. You are still standing on a bridge over the tracks. But this time you're alone and the only way to stop the train is by jumping in front of it yourself. Do you jump? If you said you would push the fat man but you won't jump, why not?

Do you have a moral obligation to jump in front of the train? If you have a moral obligation to push someone else, don't you have a moral obligation to sacrifice yourself as well? Or if you won't sacrifice yourself, how can you justify sacrificing someone else? Is it morally more right to push someone else than to jump yourself? I'd argue the opposite...

Realistically you may not be able to bring yourself to jump. It's not exactly a moral decision. You're just not that brave. You accept that it's right for you to jump, and accept that you're not that moral. Fine. Now imagine someone is standing next to you, a skinny athletic person who's too small to stop the train themselves but strong enough to push you over into the path of the trolley. Do you still think the correct answer to the trolley problem is to push?

If we take it seriously, this is a hard problem. The best answer I know is Rawlsianism. You pick your answer in ignorance of who you'll be in the problem. You don't know whether you're the pusher, the pushed, or one of the people tied to the tracks. In this case, the answer is easy: push! There's a 6/7 chance you'll survive, so the selfish and utilitarian answers converge.
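To make the 6/7 figure concrete, here is a minimal sketch of that veil-of-ignorance arithmetic, assuming the seven roles (one pusher, one person on the bridge who gets pushed, five tied to the tracks) are equally likely and that pushing reliably stops the trolley:

```python
from fractions import Fraction

# Seven equally likely roles behind the veil of ignorance:
# 1 pusher, 1 person on the bridge (the pushed), 5 tied to the tracks.
roles = ["pusher"] + ["pushed"] + ["tied"] * 5
p_role = Fraction(1, len(roles))

def survival_probability(push: bool) -> Fraction:
    """Chance of surviving, averaged over which role you turn out to hold."""
    total = Fraction(0)
    for role in roles:
        if push:
            survives = role != "pushed"   # only the pushed person dies
        else:
            survives = role != "tied"     # the five on the tracks die
        total += p_role * (1 if survives else 0)
    return total

print(survival_probability(push=True))   # 6/7
print(survival_probability(push=False))  # 2/7
```

Behind the veil, pushing raises your survival chance from 2/7 to 6/7, which is why the selfish and utilitarian answers line up.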

We can play other variants. For instance, suppose Snidely kidnaps you and says "Tomorrow I'm going to flip a coin. Heads, I'll put you on the tracks with 4 other people (and put a different person on the bridge next to the pusher). Tails, I'll put you on the bridge next to a pusher." Should the pusher push? Actually that's an easy one, because you don't know where you'll end up, so you might as well save the four extra people in both scenarios. Your expected value is the same, and everyone else's is increased by pushing.

Now imagine Snidely says instead he'll roll a die. If it comes up 1-5, he puts six people including you on the track. If it comes up 6, he lets you go and puts the other five people on the track. However if you agree to be tied to the track without a roll, without even a chance of escape, he'll let the other five people go. What now? Suppose he rolls two dice and they both have to come up 6 for you to go free; but he'll still let everyone else go if you agree. Will you save the other five people at the cost of a 1/36 chance of saving your own life? How about three dice? four? How many dice must Snidely roll before you think the chance of saving your own life is outweighed by the certainty of saving five others? 
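Just to make that arithmetic concrete, here is a minimal sketch (assuming fair dice and that every die must show a six, as in the two-dice case above) of how fast the chance of going free shrinks as Snidely adds dice:

```python
from fractions import Fraction

# If you refuse the deal, you go free only when every one of Snidely's
# n dice comes up six; otherwise you die along with the other five
# (who would have gone free had you agreed).
def chance_of_freedom(n_dice: int) -> Fraction:
    return Fraction(1, 6) ** n_dice

for n in range(1, 6):
    print(n, "dice:", chance_of_freedom(n))
# prints 1/6, 1/36, 1/216, 1/1296, 1/7776
```

Whether a 1/216 or 1/1296 chance of your own survival outweighs five certain deaths is, of course, exactly the question being asked; the arithmetic only states the odds.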

Do you have your answers? Are you prepared to defend them? Good. Comment away, and you can even Kobayashi Maru the scenario or criticize the excessively contrived hypotheticals I've posed here. But be forewarned, in part 2 I'm going to show you an actual, non-hypothetical scenario where this problem becomes very real; indeed a situation I know many LessWrong readers are facing right now; and yes, it's a matter of life and death.

Update: It now occurs to me that the scenario can be tightened up considerably. Forget the bridge and the fat man. They're irrelevant details. Case 1 is as before: five people on one track, one on another. Pull the switch to save the five and kill the one. Still not a hard problem.

Case 2: same as before, except this time you are standing next to the one person tied to the track who will be hit by the trolley if you throw the switch. And they are conscious, can talk to you, and can see what you're doing. No one else will know what you did. Does this change your answer, and if so, why?

Case 3: same as before, except this time you are the one person tied to the track who will be hit by the trolley if you throw the switch.

Folks here are being refreshingly honest. I don't think anyone has yet said they would throw the switch in Case 3, and most of us (myself included) are simply admitting we're not that brave/altruistic/suicidal (assuming the five people on the other track are not our friends or family). So let's make it a little easier. Suppose in Case 3 someone else, not you, is tied to the track but can reach the switch. What now?


Update 2: Case 4: As in Case 3, you are tied to the track, five other unrelated people are tied to the opposite track, and you have access to a switch that will cause the trolley to change tracks. However, now the trolley is initially aimed at you. The five people on the other track are safe unless you throw the switch. Is there a difference between throwing the switch in this case and not throwing the switch in Case 3?

This case also raises the interesting question of legality. If there are any lawyers in the room, do you think a person who throws the switch in Case 4 (that is, saves themselves at the cost of five other lives) could be convicted of a crime? (Of course, the answer to this one may vary with jurisdiction.) Are there any actual precedents of cases like this?

132 comments

Comments sorted by top scores.

comment by SilasBarta · 2013-05-17T18:30:46.340Z · LW(p) · GW(p)

For instance, you may say you won't push because the man might fight back, and you'd both fall but not till after the trolley had passed so everyone dies. So imagine the fat man in a wheelchair, so he can be lightly rolled off the bridge. And if you're too socially constrained to consider hurting a handicapped person, maybe the five people tied to the tracks are also in wheelchairs. If you think that being pushed off a bridge is more terrifying than being hit by a train, suppose the fat man is thoroughly anesthetized.

Your modification of the problem to make "push the guy" the obvious answer still doesn't answer my objections to doing so, which are quite robust against modifications of the scenario that preserve "the sense" of the problem:

By intervening to push someone onto the track, you suddenly and unpredictably shift around the causal structure associated with danger in the world, on top of saving a few lives. Now, people have to worry about more heroes drafting sacrificial lambs "like that one guy did a few months ago" and have to go to greater lengths to get the same level of risk. ...

... I don't pretend that that is what most people are thinking when they encounter the problem, but the "unusualness" of pushing someone off a bridge is certainly affecting their intuition, and so concerns about stability probably play a role.

In short, problems that divorce the scenario from its social context, in trying to "purify" the question, do in fact throw away morally-relevant information, which people (perhaps indirectly) incorporate into their decision-making.

Let me illustrate the problem with such scenarios by posing another "moral" dilemma that makes the same attempt to delete relevant information in a poor attempt to find the pure answer:

"Dilemma: You are driving on a road. Should you drive on the right, drive on the left, or center your car? What is the moral way to drive?"

You can imagine how that might go:

Me: Well, that would depend on what system currently exists for using the road, and, if none, if the area is inhabited ...
Them: NO. Forget about all that. Pick a road side and defend your answer.
Me: But there is no correct side of the road apart from the social context ...
Them: Great, another one of these guys. Okay, pretend there's a terrorist who will kill five people if you don't drive on the left, and your driving otherwise has no effect on anyone's safety.
Me: Well, sure, in that case, you should drive on the left, but now you're talking about a fundamentally different ...
Them: Aha! So you are a lefter!


OTOH, if you have a modification to the trolley case that preserves "the sense" of it while obviating my objection, I'm interested in hearing it.

Replies from: Alsadius
comment by Alsadius · 2013-05-19T02:49:41.051Z · LW(p) · GW(p)

Everyone has been knocked unconscious by Snidely Whiplash except you, you can reach the switch, and you have to decide which track gets run over. Nobody will know what you did, or even that you did anything, except you. The news stories won't say "Fat man thrown on tracks to save lives!", they'll just say that a trolley ran over some people in an act of cartoon villainy.

Replies from: SilasBarta
comment by SilasBarta · 2013-05-19T03:36:29.559Z · LW(p) · GW(p)

That's the case where both groups are on a track, not the case where I could push a safely-positioned non-tracker onto the track. And in that case I don't generally object to changing which track the trolley is on anyway.

In any case, this aspect would again fundamentally change the problem, while still not changing the logic I gave above:

Nobody will know what you did, or even that you did anything, except you.

This (if applied to the fat man case I actually object to) is basically saying that I can rewrite physics to the point where even being on a bridge above a train does not protect you from being hit by it. Thus, everything I said before, about it becoming harder to assess and trade off against risk, would apply, and making the change would be inefficient for the same reasons. (i.e. I would prefer a world in which risks are easier to assess, not one in which you have to be miles from any dangerous thing just to be safe)

Replies from: Alsadius
comment by Alsadius · 2013-05-19T03:59:06.560Z · LW(p) · GW(p)

In the two-track setup, the people on only one of the tracks are going to get killed, even if you do nothing. Switching the train to a previously-safe track with someone on it is morally identical to throwing someone safe onto a single track, IMO.

Replies from: SilasBarta
comment by SilasBarta · 2013-05-19T04:06:39.297Z · LW(p) · GW(p)

That's an interesting opinion to hold. Would you care to go over the reasons I've given to find them different?

Replies from: Alsadius
comment by Alsadius · 2013-05-19T04:35:35.745Z · LW(p) · GW(p)

For clarity: from this post, I understood your objection to be primarily rooted in second-order effects. Your claim seems to be that you are not simply saving these people and killing those people by your actions, you are also destroying understanding of how the world works, wrecking incentive structures, and so on. If my understanding on this point is incorrect, please clarify.

Assuming the above is correct, my modification seems to deal with those objections cleanly. If you are the only one who knows what happened, then people aren't going to get the information that some crazy bastard threw some dude at a trolley; they're just going to go on assuming that sort of thing only happens in debates between philosophy geeks. It is never known to have happened, therefore the second-order effects from people's reactions to it having happened never come up, and you can look at the problem with regard to first-order effects alone.

Replies from: SilasBarta
comment by SilasBarta · 2013-05-19T05:04:17.210Z · LW(p) · GW(p)

Replacing "like that guy did a few months ago" in my comment with something agentless and Silas-free such as "like seems to happen these days" doesn't, AFAICT, change the relevance of my objection: people are still less able to manage risk, and a Pareto disimprovement has happened in that people have to spend more to get the same-utility risk/reward combo. So your change does not obviate my distinction and objection.

Replies from: Alsadius
comment by Alsadius · 2013-05-19T05:25:43.152Z · LW(p) · GW(p)

But it has to be a real known problem in order for people's actions to change. Given that a pure trolley problem hasn't yet happened in reality, keeping it secret if it did happen should be plenty sufficient to prevent societal harm from the reactions.

Replies from: SilasBarta
comment by SilasBarta · 2013-05-19T07:11:19.887Z · LW(p) · GW(p)

But if I say that it's a good idea here, I'm saying it's a good idea in any comparable case, and so it should be a discernible (and Pareto-inefficient) phenomenon.

Replies from: Alsadius
comment by Alsadius · 2013-05-19T09:18:07.059Z · LW(p) · GW(p)

But if you limit "comparable cases" to situations where you can do it in secret, that's not a problem.

Replies from: SilasBarta
comment by SilasBarta · 2013-05-19T18:57:12.604Z · LW(p) · GW(p)

Again, the problem is not that people could notice me as being responsible. The problem is that it's harder to assess dangers at all, so people have to increase their margins of safety all around. If someone wants to avoid death by errant trolleys, it's no longer enough to be on a bridge overpass; they have to be way, way removed.

The question, in other words, is: "would I prefer that causality were less constrained by locality?" No, I would not, regardless of whether I get the blame for it.

Replies from: Alsadius
comment by Alsadius · 2013-05-20T05:31:07.361Z · LW(p) · GW(p)

So your claim is that other people's reasoning processes work not based on evidence derived by their senses, but instead by magic. An event which they have no possible way of knowing about has happened, and you still expect them to take it into account and change their decisions accordingly. Do I have that about right?

Replies from: SilasBarta
comment by SilasBarta · 2013-05-20T14:00:55.496Z · LW(p) · GW(p)

If this kind of thing consistently happened (as it would have to, if I claim it should be done in every comparable case), then yes it would be discernible, without magic.

If this action is really, truly intended as a "one-off" action, then sure, you avoid that consequence, but you also avoid talking about morality altogether, since you've relaxed the constraint that moral rules be consistent.

Replies from: Alsadius
comment by Alsadius · 2013-05-20T18:14:08.180Z · LW(p) · GW(p)

So morality is irrelevant in sufficiently unlikely situations?

Replies from: SilasBarta
comment by SilasBarta · 2013-05-20T18:42:11.927Z · LW(p) · GW(p)

No, your criticism of a particular morality is irrelevant if you stipulate that the principle behind its solution doesn't generalize. That is, if you say, "what would you do here if we stipulated that the reasoning behind your decision didn't generalize?" then you've discarded the requirement of consistency and the debate is pointless.

Replies from: Alsadius
comment by Alsadius · 2013-05-20T20:56:42.935Z · LW(p) · GW(p)

I think of it more as establishing boundary conditions. Obviously, you can't use the trolley problem on its own as sufficient justification for Lenin's policy of breaking a few eggs. But if the pure version of the problem leads you to the conclusion that it's wrong to think about, then you avoid the discussion entirely, whereas if it's a proper approach in the pure problem, then the next step is trying to figure out the real-world limits.

Replies from: SilasBarta
comment by SilasBarta · 2013-05-20T21:34:45.863Z · LW(p) · GW(p)

In this situation, you're trying to claim that the (action in your favored) solution to the pure version of the problem requires such narrow conditions that I can safely assume it won't imply any recognizable regularity to which people could adapt. My point is that, in that case:

1) You're no longer talking about trolley-like problems at all (as in my earlier distinction between the "which side of road" problem and the "which side of road + bizarre terrorist" problem), and

2) Since there is no recognizable regularity to the solution, the situation does not even serve to illuminate a boundary.

Replies from: Alsadius
comment by Alsadius · 2013-05-21T03:54:12.947Z · LW(p) · GW(p)

I'm trying to say that the problem exists mostly to fix a boundary. If killing one to save five is not okay, even under the most benign possible circumstances, then that closes off large fields of argument that could possibly be had. If it is, then it limits people out of using especially absolutist arguments in situations like murder law.

(The other advantage is that it gets people thinking about exactly what they believe, which is generally a good thing)

Also, re your side-of-road problem, I could actually come up with an answer for you in a minimal setup - assuming a new island that's building a road network, I'd probably go for driving on the right, because more cars are manufactured that way, and because most people are right-handed and the centre console has more and touchier controls on it than the door.

comment by wedrifid · 2013-05-17T16:50:36.978Z · LW(p) · GW(p)

Especially if you call yourself a utilitarian, as many folks here do, how can you not push?

Some are utilitarian. Most are consequentialist with some degree of altruistic preference.

Have your answer?

Flip. Push. (All else being unrealistically equal.)

Good. Now comes the third, final, and hardest question; especially for anybody who said they'd push the fat man. There is still no switch or alternate track. The trolley is still coming down the tracks, and there are still five people tied to it. You are still standing on a bridge over the tracks. But this time you're alone and the only way to stop the train is by jumping in front of it yourself. Do you jump?

No. I don't want to kill myself. I would rather the victims of the psychopath lived than died, all else being equal. But I care about my own life more than 5 unknown strangers. The revealed preferences of the overwhelming majority of other humans are similar. The only way this question is 'hard' is that it could take some effort to come up with answers that sound virtuous.

If you said you would push the fat man but you won't jump, why not?

I'm not a utilitarian. I care more about my life than about the overwhelming majority of combinations of 5 other people. There are exceptions. People I like or admire and people who are instrumentally useful for contributing to the other altruistic causes I care for. Those are the groups of 5 that I would be willing to sacrifice myself for.

Do you have a moral obligation to jump in front of the train?

No. (And anyone who credibly tried to force that moral onto me or those I cared about could be considered a threat and countered appropriately.)

If you have a moral obligation to push someone else, don't you have a moral obligation to sacrifice yourself as well?

No. That doesn't follow. It is also an error to equivocate between "I would push the fat man" and "I concede that I have a moral obligation to push the fat man".

Or if you won't sacrifice yourself, how can you justify sacrificing someone else?

Exactly the same way you would justify sacrificing someone else if you would sacrifice yourself.

Do you have your answers? Are you prepared to defend them?

Defend them? Heck no. I may share my answers with someone who is curious. But defending them would imply that my decision to not commit suicide to save strangers somehow requires your permission or agreement.

But be forewarned, in part 2 I'm going to show you an actual, non-hypothetical scenario where this problem becomes very real; indeed a situation I know many LessWrong readers are facing right now; and yes, it's a matter of life and death.

So you had a specific agenda in mind. I pre-emptively reject whatever demands you are making of me via this style of persuasion and lend my support to anyone else who is morally pressured toward martyrdom.

Replies from: MugaSofer, drnickbone
comment by MugaSofer · 2013-05-19T18:45:44.303Z · LW(p) · GW(p)

You know, most people have a point in mind when they start writing something. It's not some sort of underhanded tactic.

Also, your own life by definition has greater instrumental value than others' because you can affect it. No non-virtuous sounding preferences required; certainly no trying to go from "revealed preferences" to someone's terminal values because obviously everyone who claims to be akratic or, y'know, realizes they were biased and acts to prevent it is just signalling.

Replies from: wedrifid
comment by wedrifid · 2013-05-19T23:25:46.159Z · LW(p) · GW(p)

You know, most people have a point in mind when they start writing something. It's not some sort of underhanded tactic.

Not something I claimed. I re-assert my previous position. I oppose the style of persuasion used in the grandparent. Specifically, the use of a chain of connotatively-fallacious rhetorical questions.

Also, your own life by definition has greater instrumental value than others' because you can affect it.

That is:

  1. Not something that follows by definition.
  2. Plainly false as a general claim. There are often going to be others that happen to have more instrumental value for achieving many instrumental goals for influencing the universe. For example, if someone cares about the survival of humanity a lot (i.e. more than about selfish goals), then the lives of certain people who are involved in combating existential risk are likely to be more instrumentally useful for said someone than their own.
Replies from: MugaSofer
comment by MugaSofer · 2013-05-20T21:25:56.059Z · LW(p) · GW(p)

Not something I claimed. I re-assert my previous position. I oppose the style of persuasion used in the grandparent. Specifically, the use of a chain of connotatively-fallacious rhetorical questions.

That's a lovely assertion and all, but I wasn't responding to it, sorry. (I didn't find the questions all that fallacious, myself; just a little sloppy.) Immediately before that statement you said "So you had a specific agenda in mind."

It was this, and the (perceived?) implications in light of the context, that I meant to reply to. Sorry if that wasn't clear.

There are often going to be others that happen to have more instrumental value for achieving many instrumental goals for influencing the universe. For example, if someone cares about the survival of humanity a lot (i.e. more than about selfish goals), then the lives of certain people who are involved in combating existential risk are likely to be more instrumentally useful for said someone than their own.

Oh, come on. I didn't say it was more instrumentally valuable than any conceivable other resource. It has greater instrumental value than other lives. Individual lives may come with additional resources based on the situation.

That's like responding to the statement "guns aren't instrumentally useful for avoiding attackers because you're more likely to injure yourself than an attacker" with "but what if that gun was the only thing standing between a psychopath and hundreds of innocent civilians? What if it was a brilliant futuristic gun that knew not to fire unless it was pointing at a certified Bad Person? It would be useful then!"

If someone says something that sounds obviously wrong, maybe stop and consider that you might be misinterpreting it? Principle of charity and all that.

(I really hope I don't turn out to have misinterpreted you, that would be too ironic.)

Replies from: wedrifid
comment by wedrifid · 2013-05-21T01:18:57.053Z · LW(p) · GW(p)

I didn't find the questions all that fallacious, myself; just a little sloppy.

A complementary explanation to the ones I have already given you is that this post is optimised for persuading people like yourself, not people like me. I prefer a state where posts use styles of reasoning more likely to be considered persuasive by people like myself. As such, I oppose this post.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-23T15:57:10.155Z · LW(p) · GW(p)

Well, if you phrase it like that ...

Why are you against diversity?! We should have posts for both people-like-you and people-like-me! Stop trying to monopolise LessWrong, people-like-wedrifid!!

EDIT: This has been a joke. We now return you to your regularly scheduled LessWrong.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-23T16:12:38.645Z · LW(p) · GW(p)

If exerting influence is indistinguishable from trying to monopolize the community, then I reluctantly endorse trying to monopolize the community.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-23T16:56:23.517Z · LW(p) · GW(p)

Sorry, I wasn't actually being serious. I'll edit my comment to make that clearer.

comment by drnickbone · 2013-05-17T17:23:26.939Z · LW(p) · GW(p)

I'm a bit puzzled by this. If you care about yourself more than the five strangers, then why push or flip?

Pushing is going to get you prosecuted for murder in most jurisdictions, and could even attract a death sentence in some of them. Flipping is less clear: you could get a manslaughter charge, or be sued by the family of the one tied to the alternate track. The five you saved might decide to contribute to your defence fund, but good luck with that.

Or suppose you construct the hypothetical so there is no legal comeback: still, why do you want to push a fat man off a bridge? It takes energy, you could pull a muscle, he could notice and hit back or pull you over too etc. etc.

Replies from: drethelin, wedrifid
comment by drethelin · 2013-05-17T17:48:31.721Z · LW(p) · GW(p)

You're being either really blind or deliberately obtuse. Caring more about your life than the lives of five strangers doesn't mean you care infinitely more about yourself than you do about them. Maybe you'll pull a muscle flipping the switch? It's entirely legitimate to say that you'll take some costs upon yourself to do a big favor for 5 strangers without being willing to take the ultimate cost upon yourself.

Replies from: drnickbone, drnickbone
comment by drnickbone · 2013-05-17T21:41:12.609Z · LW(p) · GW(p)

Apologies. You are quite right, I was indeed being "really blind" and pretty obtuse as well (though not deliberately so). I've now spotted that the original poster explicitly said to ignore all chances that the fat man would fight back, and presumably that extends to other external costs, such as retaliation by his relatives, the law etc. My bad.

I've also commented on this further down this thread. I now find my moral intuitions behaving very strangely in this scenario. I strongly suspect that my original intuitions were very closely related to all these knock-on factors which I've now been asked to ignore.

comment by drnickbone · 2013-05-17T18:13:02.021Z · LW(p) · GW(p)

No, I was pointing out that in all realistic ways of constructing the hypothetical there are going to be quite major risks and costs to oneself in pushing the fat man: an obvious one being that he easily could fight back. This may indeed be one of the factors behind different moral intuitions. (We have no instincts about the cost-to-self of flipping a switch: although that could also be very high in the modern world, it takes some thinking to realise it).

For what it's worth, my own answers are "no flip, no push and no jump" for precisely such reasons: all too risky to self. Though if I had family members or close friends on the lines, I'd react differently. If there were a hundred or a thousand people on the line, I'd probably react differently.

Replies from: lfghjkl
comment by lfghjkl · 2013-05-17T21:07:27.658Z · LW(p) · GW(p)

No, I was pointing out that in all realistic ways of constructing the hypothetical there are going to be quite major risks and costs to oneself in pushing the fat man

I'm guessing wedrifid isn't taking that into account because we were explicitly asked not to do that here:

Try not to Kobayashi Maru this question, at least not yet. I know you can criticize the scenario and find it unrealistic.

Replies from: drnickbone
comment by drnickbone · 2013-05-17T21:34:39.014Z · LW(p) · GW(p)

OK, my bad.

Thanks for the patient reminder to read the entire original post before jumping into commenting on the comments. I did in fact miss all the caveats about wheelchairs, light rolling, fat man being anaesthetised etc. Doh!

I guess elharo should also have stipulated that no-one has any avenging friends or relatives (or lawyers) in the entire scenario, and that the usual authorities are going to give a free pass to any law-breaking today. Maybe also that I'll forget the whole thing in the morning, so there will be no residual guilt, angst etc.

To be honest, making the wheelchair roll gently into the path of the trolley is now looking very analogous to switching a trolley between two tracks: both seem mechanical and impersonal, with little to tell them apart. I find that I have no strong intuitions any more: my remaining moral intuitions are extremely confused. The scenario is so contrived that I'm feeling no sympathy for anyone, and no real Kantian imperatives either. I might as well be asked whether I want to kill a Martian to save five Venusians. Weird.

comment by wedrifid · 2013-05-18T00:06:50.057Z · LW(p) · GW(p)

EDIT: I have now read your replies to other people's responses. I see you have already acknowledged the point. Consider this response retracted as redundant.

Flip. Push. (All else being unrealistically equal.)

Pushing is going to get you prosecuted for murder in most jurisdictions,

You are fighting the hypothetical. Note that my response refrained from fighting the hypothetical, but said so explicitly and acknowledged the completely absurd nature of the assumption that there are no other consequences to consider. That disclaimer should be sufficient here.

Or suppose you construct the hypothetical so there is no legal comeback: still, why do you want to push a fat man off a bridge?

Because I want to save 5 people.

It takes energy, you could pull a muscle, he could notice and hit back or pull you over too etc. etc.

Again, I chose not to fight the hypothetical. As such I refrained from opting out of answering the moral question by mentioning distracting details that are excluded as considerations by any rigorous introduction to the thought experiment.

comment by Larks · 2013-05-17T13:17:17.675Z · LW(p) · GW(p)

The best answer I know is Rawlsianism.

No! That is not Rawlsianism. Rawls was writing about how to establish principles of justice to regulate the major institutions of society; he was not establishing a decision procedure. I think you mean UDT.

Replies from: lukeprog, Jade, BerryPick6
comment by lukeprog · 2013-05-17T23:18:22.074Z · LW(p) · GW(p)

That is not Rawlsianism. Rawls was writing about how to establish principles of justice to regulate the major institutions of society; he was not establishing a decision procedure.

Yes. "Rawlsianism" is most commonly used to refer to Rawls' theory of political justice specifically (e.g. Kordana & Tabachnick 2006).

I will briefly remark, however, that Rawls' original work on the justification of ethical principles was in the context of decision procedures. His first paper on the topic, "Outline of a Decision Procedure for Ethics" (1951), is pretty explicit about that. Also, other philosophers have gone on to borrow the Rawlsian approach to political justice for the purpose of justifying certain decision procedures in ethics or practical decision-making, e.g. Daniels (1979).

comment by Jade · 2013-05-18T02:07:35.890Z · LW(p) · GW(p)

elharo was referring to 'veil of ignorance,' a concept like UDT applied by Rawls to policy decision-making.

comment by BerryPick6 · 2013-05-17T14:12:30.265Z · LW(p) · GW(p)

Thank you. Thank you. Thank you.

comment by falenas108 · 2013-05-17T12:44:40.593Z · LW(p) · GW(p)

Try not to Kobayashi Maru this question, at least not yet. I know you can criticize the scenario and find it unrealistic. For instance, you may say you won't push because the man might fight back, and you'd both fall but not till after the trolley had passed so everyone dies. So imagine the fat man in a wheelchair, so he can be lightly rolled off the bridge. And if you're too socially constrained to consider hurting a handicapped person, maybe the five people tied to the tracks are also in wheelchairs. If you think that being pushed off a bridge is more terrifying than being hit by a train, suppose the fat man is thoroughly anesthetized. Yes, this is an unrealistic thought experiment; but please play along for now.

Just so you know, the common term for this around here is don't fight the hypothetical.

Replies from: None
comment by [deleted] · 2013-05-17T14:07:17.050Z · LW(p) · GW(p)

http://slatestarcodex.com/2013/05/17/newtonian-ethics/ fits surprisingly well here for two things published just a few hours apart.

comment by Izeinwinter · 2013-05-19T06:16:45.485Z · LW(p) · GW(p)

The reason people object to the fat man is that we have very strong intuitions about how physical systems work, and that includes "things rolling on tracks stay on tracks, and keep rolling regardless of how fat a guy is put in front of them". It is basically a bad philosophy problem because the way the problem reads to most people is "let 5 people die, or up the killcount to 6" which is, you know, not a difficult choice. And no matter how much you state that it will work, this intuition sticks with people. The lever version does not have the same conflict, because the problem does not have to fight against that basic reaction.

Replies from: Pentashagon
comment by Pentashagon · 2013-05-20T07:37:48.178Z · LW(p) · GW(p)

The "jump in front of the trolley" question has the same problem. If I can stop the trolley by jumping in front of it, won't the first person tied to the tracks stop it as well?

Why can't we just state these hypothetical problems more directly?

Is it better that five people should die or that one person should die if there are no other options? I prefer the latter.

Is it better that five people should die or that I should directly kill one person if there are no other options? I prefer the latter.

Is it better that five people should die or that I should die if there are no other options? I prefer the former.

Is it better to value my own life more than arbitrarily many other people's lives? I don't think so.

comment by Kindly · 2013-05-17T14:40:02.168Z · LW(p) · GW(p)

If you offer me a $1:$1 bet that a six-sided die doesn't land on a six, I take it. But now you tell me that the die landed on a six, and want to make the same bet about its outcome. Of course I don't give the same answer!

My life is worth more to me than other lives; I couldn't say by how much exactly, so I'm not prepared to answer any of the dice-rolling questions. However, I am aware that to person C, person A and person B have equal-worth lives, unless one of them is C's spouse or child, and this provides an opportunity to make deals that benefit both me and other people who value their own lives more than mine.

So, for example, I would endorse the policy that bridges be manned in pairs, each of the two people being ready to push the other off. This is, effectively, a commitment to following the unselfish strategy, but one that applies to everyone. TDT offers a solution that doesn't require commitments; but there, we need the vague assumption that I'm implementing the same algorithm as everyone else in the problem, and I'm not too sure that this applies.

Oh and also I think I would jump for a hypothetical wife and daughter (or even a hypothetical son, imagine that), but at that point the question becomes less interesting.

comment by wedrifid · 2013-05-17T17:10:03.727Z · LW(p) · GW(p)

Now imagine Snidely says instead he'll roll a die. If it comes up 1-5, he puts six people including you on the track. If it comes up 6, he lets you go and puts the other five people on the track. However if you agree to be tied to the track without a roll, without even a chance of escape, he'll let the other five people go. What now? Suppose he rolls two dice and they both have to come up 6 for you to go free; but he'll still let everyone else go if you agree. Will you save the other five people at the cost of a 1/36 chance of saving your own life? How about three dice? four? How many dice must Snidely roll before you think the chance of saving your own life is outweighed by the certainty of saving five others?

Is this going to be a "Boo Cryonics! Buy mosquito nets for Africans." setup?

Replies from: AlexanderRM
comment by AlexanderRM · 2015-03-24T22:13:38.635Z · LW(p) · GW(p)

First of all, I'd like to strongly suggest that one instead consider the comparison of buying mosquito nets for Africans vs. investing in Cryonics for complete strangers (obviously after one gets cryonics for oneself). That separates the actual considerations from the selfishness question that often confuses utilitarian discussions. It seems like an optimal strategy would be to determine a set amount/portion of income which one is willing to donate to charities, and then decide what charities to donate to.

Anyway, with that out of the way: I want to point out that if you assume that the ability to make people immortal won't happen for another 100 years (or that it will be a regular technology rather than a singularity, and it will take more than 100 years for starving Africans to be able to afford it, which would be pretty horrifying and put more interesting factors into the problem), then the effects of Cryonics if successful are pretty dramatically different from simply "saving your life".

Of course, the chance (in our minds based on our current knowledge; in reality either might be settled) of Cryonics successfully making you immortal is well below 100%, and on the other hand there's some chance that some Africans you save with mosquito nets might live long enough to have their brain downloaded. In fact, a very near-future Singularity would bring in some pretty weird changes to utility considerations; for instance, with Cryonics you'd need to consider the chance of it happening before you even die, in which case the money you saved or invested to pay for it could have been used to save Africans.

...personally I'm partial to just investing charity money in technological research; either something one thinks will lead to brain downloading, or to medical technology or whatever will save the most lives in conventional circumstances (quite possibly cheaper treatments for tropical diseases, or something similar which primarily affects those too poor to fund research).

comment by roystgnr · 2013-05-20T03:46:45.640Z · LW(p) · GW(p)

The trouble with "push the fat man" answers is that in the long run they don't result in a world with more people saved from runaway trains, just a world in which fat men no longer feel safe walking near train tracks. The same applies to "I am morally required to jump" answers. Replace "in the long run" with "in the short run" if you are short-sighted enough to pre-announce your answer, as we're doing in this thread; the average fat man and/or morally shameable man is probably capable of figuring out the implications.

Your potential victims may not be Omega, but they may still be capable of figuring out when you're planning to two-box.

comment by AlexMennen · 2013-05-17T16:22:44.442Z · LW(p) · GW(p)

This isn't a hard problem at all. I would push someone else onto the tracks (in the idealized, hypothetical, trolley problem) but I wouldn't jump. The reason is that pushing the guy onto the tracks isn't about doing the Right Thing™; it's about getting what I want. I want as many people as possible to live, but I care about my own life a lot more than the lives of small numbers of other people. It shouldn't be too hard to predict my answers to each of your variants based on this.

I would take no action in any real-life trolley problem unless there were a lot more lives at stake, because I care much more about not getting convicted of murder than I do about a mere 4 expected lives of people I don't know, and I think my chances are better if I take no action.

Replies from: MugaSofer, shminux
comment by MugaSofer · 2013-05-19T19:13:07.765Z · LW(p) · GW(p)

The reason is that pushing the guy onto the tracks isn't about doing the Right Thing™; it's about getting what I want. I want as many people as possible to live

That feeling of wanting to help people is what is referred to as "morality".

Replies from: RomeoStevens
comment by RomeoStevens · 2013-05-21T18:54:19.874Z · LW(p) · GW(p)

No, it is referred to as altruism. Morality is a fuzzy grouping of concepts around aggregate preferences.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-23T15:47:42.253Z · LW(p) · GW(p)

Well, it's at the very least a part of morality, anyway.

Yeah, altruism is a more precise term, but it sounds less ... punchy compared to "morality".

comment by Shmi (shminux) · 2013-05-17T16:41:35.688Z · LW(p) · GW(p)

This isn't a hard problem at all.

I don't believe you:

The reason is that pushing the guy onto the tracks isn't about doing the Right Thing™; it's about getting what I want.

What would you do in the dice-roll cases? What do you currently do in the dice-roll cases, like driving or crossing the road?

I want as many people as possible to live, but I care about my own life a lot more than the lives of small numbers of other people.

What number is not small? What number of people you care about is not small?

because I care much more about not getting convicted of murder

Assume that there is no danger of anyone finding out, or that the judge is a perfect utilitarian, so this is not a consideration.

Replies from: AlexMennen
comment by AlexMennen · 2013-05-17T19:02:47.518Z · LW(p) · GW(p)

What would you do in the dice-roll cases?

I would cling to the small chance of living until that chance gets extremely tiny. I can't pinpoint how tiny it would have to be because I'm a human and humans suck at numbers.

What do you currently do in the dice-roll cases, like driving or crossing the road?

I don't do any sophisticated calculations. I just try to avoid accidents. What are you trying to get from my answer to that question?

What number is not small? What number of people you care about is not small?

I would sacrifice myself to prevent the entire human civilization from collapsing. I would not sacrifice myself to save 1000 other people. That leaves quite a large range, and I haven't pinned down where the breakeven point is. Deciding whether or not to sacrifice myself to save 10^5 other people is a lot harder than deciding whether or not to sacrifice myself to save 5 other people.

Assume that there is no danger of anyone finding out, or that the judge is a perfect utilitarian, so this is not a consideration.

I already said that I would kill one person to save five in the idealized trolley problem. My point was that if something like the trolley problem actually happened to me, it would not be the idealized trolley problem, and those assumptions you mention are false in real life, so I would not assume them while making my decision.

Edit: It's worth pointing out that people face opportunities to sacrifice their own welfare for others at much better than 1000:1 ratios all the time, and no one takes them except for a few weirdos like Toby Ord.

Replies from: army1987
comment by A1987dM (army1987) · 2013-05-19T10:33:19.392Z · LW(p) · GW(p)

I don't do any sophisticated calculations. I just try to avoid accidents.

http://www.smbc-comics.com/index.php?db=comics&id=2980#comic

comment by TheOtherDave · 2013-05-17T15:48:51.655Z · LW(p) · GW(p)

My answer to all three forms of the trolley problem is roughly the same... I almost undoubtedly wouldn't flip the switch, push the person, or sacrifice myself, but I endorse doing all three.

In the latter case, I would feel guilty about it afterwards (assuming the idealized version of the scenario where it's clear that I could have saved them by killing myself). In the former two cases, I would feel conflicting emotions and I'm not sure how I would resolve them.

Similar issues come up all the time with respect to, e.g., donating all of my assets to some life-saving charity.

I also often defect in Prisoner's Dilemmas, as long as I'm confessing my sins here.

comment by someonewrongonthenet · 2013-05-19T07:48:57.845Z · LW(p) · GW(p)

Though I am far too selfish to sacrifice myself, I just want to point out that anyone who would sacrifice themselves in this way (and not just because they are suicidal) had better also be living an extremely altruistic lifestyle right now.

As in, you are devoted to making money largely for the purpose of donation, or you run a very successful charity. If you have unique and specialized brilliance, you might instead get away with spending your time solving an important problem only you can solve, but the circumstances had better be extraordinary, and you are still donating almost all the income you get from that work.

Also, you have at least donated a kidney, and your excess blood and bone marrow are public property. You can forget about any immortality schemes you've got going (unless you think immortality for one person is some sort of giant moral positive that outweighs saving several lives, in which case all your donations are going towards other people's immortality)

Being extremely altruistic is hard.

Replies from: wedrifid
comment by wedrifid · 2013-05-19T12:26:25.215Z · LW(p) · GW(p)

Though I am far too selfish to sacrifice myself, I just want to point out that anyone who would sacrifice themselves in this way (and not just because they are suicidal) had better also be living an extremely altruistic lifestyle right now.

If such a person is already living an extremely altruistic lifestyle then they can reasonably expect to continue doing so. It would not be (coherently) altruistic to sacrifice themselves. It would be irrational self-indulgence.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-05-21T06:41:14.324Z · LW(p) · GW(p)

You're not gonna die from donating a kidney. At worst you run the risk of some minor health issues. You'll still be up and running to do more altruism for a long time; if you die a few years early, it will be at an age when your capacity to do altruism has greatly diminished.

Although you might be right: among the capable, there are many who might lose more than the $2,500 that GiveWell thinks it takes to save one life by donating a kidney, due to time spent not working. Blood and bone marrow are probably a different story.

But even if you take the part about donating kidneys and bone marrow out... with a few exceptions for certain individuals with specialized skillsets, you still have to single-mindedly focus on making as much money as you can for donation.

Replies from: wedrifid
comment by wedrifid · 2013-05-21T06:58:59.553Z · LW(p) · GW(p)

You're not gonna die from donating a kidney.

There seems to be some confusion. I am saying that in addition to the altruistic trait prescribing acts like kidney donation (and much more), that trait also means that ending one's own life in such a trade would be prohibited for most people. (Unless they have life insurance on themselves for as much or more than the expected value of their future altruistic acts and an efficient charity as beneficiary. But I'd consider exceptions that make dying a good thing somewhat beyond the scope of the ethics question.)

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-05-21T18:44:03.384Z · LW(p) · GW(p)

Oh, ok.

You are saying that someone who lives a life of austere self-sacrifice would not trade themselves in the trolley problem because, since the people they are saving are probably not extreme altruists, they could contribute more net utility by remaining alive and allowing the others to die?

I guess I agree. It would hinge on how many people were being saved. I'm not sure what the average person's net utility is (helping dependents such as children and spouse adds to the disutility resulting from death here), so I don't know how many lives it takes to justify the death of an extreme altruist of average ability.

Also, this only applies to extreme altruists - the rest of us can't use this as an excuse!

Replies from: wedrifid
comment by wedrifid · 2013-05-22T01:22:43.604Z · LW(p) · GW(p)

You are saying that someone who lives a life of austere self-sacrifice would not trade themselves in the trolley problem because, since the people they are saving are probably not extreme altruists, they could contribute more net utility by remaining alive and allowing the others to die?

Yes.

I guess I agree. It would hinge on how many people were being saved.

Yes. It would need to be more than you could save via earning as much money as possible and buying lives as cheaply as possible. Certainly more than 5 for most people (living in my country and most likely yours).

I'm not sure what the average person's net utility is (helping dependents such as children and spouse adds to the disutility resulting from death here)

The average person's net utility doesn't matter, since so far we have only been comparing saving average people to saving more average people. What could matter is if for some reason we believed that the 5 people on the track had more net utility per person than the (statistical) people expected to be saved by future altruistic efforts.

Also, this only applies to extreme altruists - the rest of us can't use this as an excuse!

For the extreme altruist this isn't an excuse (any more than a paperclip maximiser needs an excuse to create paperclips). The rest of us don't need an excuse (although given the extreme utility differences at play there are many people who could use it as an excuse but would refrain from suicide even without it). I find that thinking in terms of 'excuses' is something of a hindrance. The wrong intuitions come into play (i.e. the ones where we make up bullshit that sounds good to others instead of the ones where we make good decisions).

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-05-22T19:20:13.979Z · LW(p) · GW(p)

saving average people to saving more average people

Not so: we are comparing saving average people to saving people who would have died without aid.

It's possible (and I'm not making this claim, just pointing out the possibility) that the type of person whose life can be saved as cheaply as possible is contributing less net good to the lives of other people than the average person (who is self-sufficient). In real-world terms, it is plausible that the latter is more likely to support a family or even donate money. I'm not saying that some people have more intrinsic value, but the deaths of some people might weigh more heavily on the survivors than the deaths of others.

Since humans are social animals, the total dis-utility resulting from the death of a human is {intrinsic value of human life} + {value that this individual contributes to other humans}. As I said before:

helping dependents such as children and spouse adds to the dis-utility resulting from death

I find that thinking in terms of 'excuses' is something of a hindrance.

I agree. I feel that people should never use excuses, at least never within their own minds, but the fact is that humans aren't neat like utility functions, and do use excuses to negotiate clashing preferences within themselves. That is why I felt the need to point out that other people who are reading this conversation should not use the (correct) observation you made about extreme altruists and apply it to themselves; it's quite conceivable that someone who was struggling with this moral dilemma would use it as a way to get out of admitting that they aren't living up to their ideal of "good".

Replies from: wedrifid
comment by wedrifid · 2013-05-23T03:55:23.962Z · LW(p) · GW(p)

It's possible (and I'm not making this claim, just pointing out the possibility) that the type of person whose life can be saved as cheaply as possible is contributing less net good to the lives of other people than the average person (who is self-sufficient). In real-world terms, it is plausible that the latter is more likely to support a family or even donate money. I'm not saying that some people have more intrinsic value, but the deaths of some people might weigh more heavily on the survivors than the deaths of others.

Please see the sentence after the one you quoted.

That is why I felt the need to point out that other people who are reading this conversation should not use the (correct) observation you made about extreme altruists and apply it to themselves

I felt (and feel) obliged to point out to the same people that it is often an error to be persuaded to do Y by someone telling you that "X is no excuse not to do Y". Accepting that kind of framing can amount to allowing another to modify your preferences. Allowing others to change your preferences tends to be disadvantageous.

comment by glennonymous · 2013-05-17T12:13:13.404Z · LW(p) · GW(p)

Great post. Here's my unvarnished answer: I wouldn't jump, and the reasons why involve my knowledge that I have a 7-year-old daughter and the (Motivated Reasoning and egotism alert!!) idea that I have the potential to improve the lives of many people.

Now of course, it's EXTREMELY likely that one or more of the other people in this scenario is a parent, and for all I know one of them will invent a cure for cancer in the future. In point of fact, if I were to HONESTLY evaluate the possibility that one of the other players has a potential to improve the planet more than I do, the likelihood may be as great as the likelihood that one of the other players is also a parent. Which makes me think that yes, my incentives are screwed up here and the correct answer is: I should be as willing to jump as to push the fat man off the bridge.

I also note that, if my wife or daughter was one of the people tied to the track, I would unhesitatingly throw myself off. This makes me conclude that I should want to throw myself off the bridge (because the supposed, flimsy 'rational altruistic' reason, that I have the potential to help people, is revealed to be bogus). I still wonder, however, if there is any possible rational reason to not choose to sacrifice oneself in the scenario. I am unable to come up with one.

Replies from: ArisKatsaris, Skeeve, malcolmocean, MugaSofer, shminux
comment by ArisKatsaris · 2013-05-17T14:58:43.798Z · LW(p) · GW(p)

I still wonder, however, if there is any possible rational reason to not choose to sacrifice oneself in the scenario.

Of course there is -- e.g. if you care more for yourself than for other people, rationality doesn't compel you to sacrifice even a cent of your money, let alone your life, for the sake of others.

People must REALLY REALLY stop confusing what is "rational" and what is "moral". Rationality says nothing about what you value, only about how to achieve it.

They must also stop confusing "should", "would", and "I would prefer to".

Replies from: benelliott
comment by benelliott · 2013-05-17T17:59:43.187Z · LW(p) · GW(p)

I'm not sure what 'should' means if it doesn't somehow cash out as preference.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-05-17T18:22:19.992Z · LW(p) · GW(p)

I'm not sure what 'should' means if it doesn't somehow cash out as preference.

Yeah, "somehow" the two concepts are connected, we can see that, because moral considerations act on our preferences, and most moral philosophies take the preferences of others in considerations when deciding what's the moral thing to do.

But the first thing that you must see is that the concepts are not identical. "I prefer X to happen" and "I find X morally better" are different things.

Take random parent X and they'll care more about the well-being of their own child than about the welfare of a million other children in the far corner of the world. That doesn't mean they evaluate a world where a million other children suffer to be a morally better world than a world where just theirs does.

Here's what I think "should" means. I think "should" is an attempt to calculate our preferences abstractly, depersonalizing the provided context. To put it differently, I think "should" is what we believe we'd prefer to happen if we had no personal stakes involved, or what we believe we'd feel about the situation if our empathy were not centered on our closest and dearest.

EDIT TO ADD: If I had to guess further, I'd guess that the primary evolutionary reason for our sense of morality is probably not to drive us via guilt and duty but to drive us via moral outrage -- and that guilt is there only as our imagined perception of the moral outrage of others. To test that I'd like to see if there have been studies to determine whether people who are guilt-free (e.g. psychopaths) are also free of a sense of moral outrage.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-19T21:33:22.287Z · LW(p) · GW(p)

guilt is there only as our imagined perception of the moral outrage of others

Well, anonymity does lead to antisocial behavior in experiments ... and on 4chan, for that matter.

On the other hand, 4chan is also known for group hatefests of moral outrage which erupt into DDOS attacks and worse.

comment by Skeeve · 2013-05-17T13:39:04.349Z · LW(p) · GW(p)

I find myself thinking mostly along the same lines as you, and so far the best I've been able to come up with is "I'm willing to accept a certain amount of immorality when it comes to the welfare of my wife and child".

I'm not really comfortable with the implications of that, or with the fact that I'm not completely confident it isn't still a rationalization.

Replies from: Kawoomba, Petruchio, MugaSofer
comment by Kawoomba · 2013-05-19T21:45:39.376Z · LW(p) · GW(p)

certain amount of immorality

Is there an amount of human suffering of strangers to avoid which you'd consent to have your wife and child tortured to death?

Also, you're "allowed" your own values -- no need for rationalizations for your terminal values, whatever they may be. If the implications make you uncomfortable (maybe they aren't in accordance with facets of your self-image), well, there's not yet been a human with non-contradictory values so you're in good company.

Replies from: Skeeve, Richard_Kennaway
comment by Skeeve · 2013-05-20T12:25:32.146Z · LW(p) · GW(p)

Is there an amount of human suffering of strangers to avoid which you'd consent to have your wife and child tortured to death?

Initially, my first instinct was to try and find the biggest font I could to say 'no'. After actually stopping to think about it for a few minutes... I don't know. It would probably have to be enough suffering to the point where it would destabilize society, but I haven't come to any conclusions. Yet.

If the implications make you uncomfortable (maybe they aren't in accordance with facets of your self-image), well, there's not yet been a human with non-contradictory values so you're in good company.

Heh, well, I suppose you've got a point there, but I'd still like my self-image to be accurate. Though I suppose around here that kind of goes without saying.

Replies from: Kawoomba
comment by Kawoomba · 2013-05-20T12:42:46.499Z · LW(p) · GW(p)

Initially, my first instinct was to try and find the biggest font I could to say 'no'. After actually stopping to think about it for a few minutes... I don't know. It would probably have to be enough suffering to the point where it would destabilize society, but I haven't come to any conclusions. Yet.

That sounds a bit like muddling the hypothetical, along the lines of "well, if I don't let my family be tortured to death, all those strangers dying would destabilize society, which would also cause my loved ones harm".

No. Consider the death of those strangers to have no discernible impact whatsoever on your loved ones, and to keep the numbers lower, let's compare "x strangers tortured to death" versus "wife and child tortured to death". Solve for x. You wouldn't need to watch the deeds in either case (although feel free to say what would change if you'd need to watch when choosing against your family); it would be a button-choice scenario.

The difference between myself and many others on LW is that not only would I unabashedly decide in favor of my loved ones over an arbitrary number of strangers (whose fate wouldn't impact us), but I also find no fault with that choice, i.e. it is an accurate reflection of my prioritized values.

I'd still like my self-image to be accurate.

As the saying goes, "if the hill will not come to Skeeve, Skeeve will go to the hill". There's a better alternative to trying to rewrite your values to suit your self-image: constructing an honest self-image that reflects your values.

Replies from: Skeeve
comment by Skeeve · 2013-05-20T16:11:15.319Z · LW(p) · GW(p)

That sounds a bit like muddling the hypothetical, along the lines of "well, if I don't let my family be tortured to death, all those strangers dying would destabilize society, which would also cause my loved ones harm".

That was the sort of line I was thinking along, yes. Framing the question in that fashion... I'm having some trouble imagining numbers of people large enough. It would have to be something on the order of 'where x contains a majority of any given sentient species'.

The realization that I could willingly consign billions of people to death and be able to feel like I made the right decision in the morning is... unsettling.

As the saying goes, "if the hill will not come to Skeeve, Skeeve will go to the hill".

I wish I could upvote you a second time just for this line. But yes, this is pretty much what I meant; I didn't intend to imply that I wanted my self-image to be accurate and unchanging from what it is now, just that I'd prefer it to be accurate.

comment by Richard_Kennaway · 2013-05-20T15:14:20.187Z · LW(p) · GW(p)

Is there an amount of human suffering of strangers to avoid which you'd consent to have your wife and child tortured to death?

The hypothetical is being posed at what is, to me, an unsatisfactory degree of abstraction. How about a more concrete form?

You are fighting in the covert resistance against some appallingly oppressive regime. (Goodness knows the 20th century has enough examples to choose from.) You get the news that the regime is onto you and has taken your wife and child hostage. What do you do?

Replies from: Kawoomba
comment by Kawoomba · 2013-05-20T15:39:41.113Z · LW(p) · GW(p)

We may grok that scenario in decidedly different ways:

Maybe it would serve my wife and child best if I were successful in my resistance to some degree, to have a better bargaining position? Maybe if I gave myself up, the regime would lose any incentive to keep the hostages alive? At that point we'd just be navigating the intricacies of such added details. Better to stick with the intent of the actions: Personally, I'd take the course of action most likely to preserve my wife and child's well-being, but then I probably wouldn't have grown into a role which exposes my family to the regime as high-value bargaining chips.

comment by Petruchio · 2013-05-17T14:26:06.735Z · LW(p) · GW(p)

What is immorality then? Even a theist would say "morality is that which is good and should be done, and immorality is that which is not good and should not be done." If you think it would be immoral to spare your wife and child, then you are saying it is not a good thing and shouldn't be done. I am pretty sure protecting your family is a good thing, and most people would agree.

The problem, I think, is not that it is immoral to refuse to push your wife and child in front of a moving train, even to save five others, but that it is immoral to push any individual in front of a train to save some other individuals.

If you increase the numbers enough, though, I would think it changes, since you are not just saving others, but society, or civilization, or a town, or what have you. Sacrificing others for that is acceptable, but rarely does this require a single person's sacrifice, and it usually requires the consent and deliberation of the society under threat. Hence the draft.

Replies from: Skeeve
comment by Skeeve · 2013-05-17T14:51:41.542Z · LW(p) · GW(p)

What I mean by 'immorality' is that I, on reflection, believe I am willing to break rules that I wouldn't otherwise if it would benefit my family. Going back to the original switch problem, if it was ten people tied to the siding, and my wife and child tied to the main track, I'd flip the switch and send the train onto the siding.

I don't know if that's morally defensible, but it's still what I'd do.

Replies from: ArisKatsaris, Petruchio
comment by ArisKatsaris · 2013-05-17T15:03:01.372Z · LW(p) · GW(p)

I'm finding myself disappointed that so many people have trouble distinguishing between "would", "should", and "prefer".

You're just saying that:
a) you'd prefer to save your family,
b) you believe you would save your family,
c) you probably should not.

There's nothing at all contradictory in the above statements. You would do something and prefer to do something that you recognize you shouldn't. What you "prefer" and what you "would" and what you "should" are all different logical concepts, so there's no reason to think they always coincide, even though they often do.

Replies from: Skeeve, Petruchio, DSherron
comment by Skeeve · 2013-05-17T15:28:43.641Z · LW(p) · GW(p)

I don't think I was having any trouble distinguishing between "would", "should", and "prefer". Your analysis of my statement is spot on - it's exactly what I was intending to say.

If morality is (rather simplistically) defined as what we "should" do, I ought to be concerned when what I would do and what I should do don't line up, if I want to be a moral person.

comment by Petruchio · 2013-05-17T15:19:01.679Z · LW(p) · GW(p)

Ah, but they should coincide. And if this is a moral problem, it is in the realm of the "should". If it is a question of whether you are a moral person, then it is in the realm of the "would". As for "prefer", that is the most fluid concept, meaning either a weighing of contrasting values or your emotions on the matter.

comment by DSherron · 2013-05-22T15:42:58.841Z · LW(p) · GW(p)

This is incorrect. "Should" and "prefer" can't give different answers for yourself, unless you really muddle the entire issue of morality altogether. Hopefully we can all agree that there is no such thing as an objective morality written down on the grand Morality Rock (and even if there were there would be no reason to actually follow it or call it moral). If we can't then let me know and I'll defend that rather than the rest of this post.

The important question is; what the hell do we mean by "morality?" It's not something we can find written down somewhere on one of Jupiter's moons, so what exactly is it, where does it come from, and most importantly where do our intuitions and knowledge about it come from? The answer that seems most useful is that morality is the algorithm we want to use to determine what actions to take, if we could self-modify to be the kind of people we want to be. It comes from reflecting on our preferences and values and deciding which we think are really and truly important and which we would rather do without. We can't always do it perfectly right now, because we run on hostile hardware, but if we could reflect on all our choices perfectly then we would always choose the moral one. That seems to align with our intuitions of morality as the thing we wish we could do, even if we sometimes can't or don't due to akrasia or just lack of virtue. Thus, it is clear that there is a difference between what we "should" do, and what we "would" do (just as there is sometimes a difference between the best answer we can get for a math problem and the one we actually write down on the test). But it's clear that there is no difference between what we "should" do and what we would prefer we do. Even if you think my definition of morality is missing something, it should be clear that morality cannot come from anywhere other than our preferences. There simply isn't anywhere else we could get information about what we "should" do, which anyone in their right mind would not just ignore.

In short, if I would do x, and I prefer to do x, then why the heck would/should I care whether I should do x?! Morality in that case is completely meaningless; it's no more useful than whatever's written on the great Morality Rock. If I don't prefer to act morally (according to whatever system is given) then I don't care whether my action is "moral".

Replies from: ArisKatsaris, TheOtherDave
comment by ArisKatsaris · 2013-05-22T16:49:13.650Z · LW(p) · GW(p)

"Should" and "prefer" can't give different answers for yourself, unless you really muddle the entire issue of morality altogether.

But they do. "I know I shouldn't, but I want to". And since they do so often give different answers, they can give different answers.

Hopefully we can all agree that there is no such thing as an objective morality written down on the grand Morality Rock

I think we're both in agreement that when we talk about "morality" we are in reality discussing something that some part of our brain is calculating or attempting to calculate. The disagreement between us is about what that calculation is attempting to do.

The answer that seems most useful is that morality is the algorithm we want to use to determine what actions to take, if we could self-modify to be the kind of people we want to be.

First of all, even that's different from morality=preference -- my calculated Morality(X) of an action X wouldn't be calculated by my current Preference(CurrentAris, X), but rather by my estimated Preference(PreferredAris, X). So it would still allow that what I prefer to do is different from what I believe I should do.

Secondly, your definition doesn't seem to me to explain how I can judge some people more moral than me and yet NOT want to be as moral as they are -- can I invite you to read "The Jain's Death"?

SOME SPOILERS for "The Jain's Death" follow below...
Near the end of the first part of the comic, the Jain in question engages in a self-sacrificial action, which I don't consider morally mandatory -- I'm not even sure it's morally permissible -- and yet I consider her a more moral person than I am. I don't want to be as moral as she is.

My own answer about what morality entails is that it's an abstraction of our preferences in the attempted de-self-centering of the context. Let's say that a fire marshal has the option of saving either 20 children in an orphanage or your own child.

What you prefer to do is that he save your own child. What you recognize as moral is that he save the 20 children. That's because, if you had no stakes in the issue, that is what you would prefer he do. So morality is not preference, it's abstracted preference.

And abstracted preference feeds back to influence actual preference, but it doesn't fully replace the purely amoral preference.

So in that sense I can't depersonalize so much as to consider the Jain's action better than mine would have been, so I don't consider her action morally better. I don't want to depersonalize that much either -- so I don't want to be as moral as she is. But she is more moral than me, because I recognize that she does depersonalize more, and lets that abstraction move her actions further than I ever would want to.

I think my answer also explains why some people believe morality is objective and some view it as subjective. Because it's the subjective attempt at objectivity. :-)

In short, if I would do x, and I prefer to do x, then why the heck would/should I care whether I should do x?!

You would care about whether you should do x as a mere function of how our brains work -- we're wired so that the morality of a deed acts on our preferences. All other things being equal, the positive morality of a deed tends to act positively on our preferences.

Replies from: DSherron
comment by DSherron · 2013-05-22T17:50:20.158Z · LW(p) · GW(p)

You have a very specific, universal definition of morality, which does seem to meet some of our intuitions about the word but which is generally not at all useful outside of that. Specifically, for some reason when you say moral you mean unselfish. You mean what we would want to do if we, personally, were not involved. That captures some of our intuitions, but only does so insofar as that is a specific thing that sounds sort of good and that therefore tends to end up in a lot of moral systems. However, it is essentially a command from on high - thou shalt not place thine own interests above others. I, quite frankly, don't care what you think I should or shouldn't do. I like living. I value my life higher than yours, by a lot. I think that in general people should flip the switch on the trolley problem, because I am more likely to be one of the 5 saved than the 1 killed. I think that if I already know I am the one, they should not. I understand why they wouldn't care, and would flip it anyway, but I would do everything in my power (including use of the Dark Arts, bribes, threats, and lies) to convince them not to. And then I would walk away feeling sad that 5 people died, but nonetheless happy to be alive. I wouldn't say that my action was immoral; on reflection I'd still want to live.

The major sticking point, honestly, is that the concept of morality needs to be dissolved. It is a wrong question. The terms can be preserved, but I'm becoming more and more convinced that they shouldn't be. There is no such thing as a moral action. There is no such thing as good or evil. There are only things that I want, and things that you want, and things that other agents want. Clippy the paperclip maximizer is not evil, but I would kill him anyway (unless I could use him somehow with a plan to kill him later). I would adopt a binding contract to kill myself to save 5 others on the condition that everyone else does the same; but if I already know that I would be in a position to follow through on it then I would not adopt it. I don't think that somehow I "should" adopt it even though I don't want to, I just don't want to adopt it and should is irrelevant (it's exactly the same operation, mentally, as "want to").

Basically, you're trying to establish some standard of behavior and call it moral. And you're wrong. That's not what moral means in any sense other than that you have defined it to mean that. Which you can't do. You've gotten yourself highly confused in the process. Restate your whole point, but don't use the words moral or should anywhere (or synonyms). What you should find is that there's no longer any point to be made. "Moral" and "should" are buzzwords with no meaning, but they sound like they should be important, so everyone keeps talking about them and throwing out nice-sounding things and calling them moral, and being contradicted by other people throwing out other nice things and calling those moral. Sometimes I think the fundamentalist theists have it better figured out; "moral" is what God says it is, and you care because otherwise you're thrown into fire!

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-05-22T19:15:22.772Z · LW(p) · GW(p)

I think that in general people should flip the switch on the trolley problem, because I am more likely to be one of the 5 saved than the 1 killed. I think that if I already know I am the one, they should not.

Let's consider two scenarios:
X: You are the one, the train is running towards the five, and Bob chooses to flip the switch so that it kills you instead.
Y: You are among the five, the train is running towards the one, and Bob chooses to flip the switch so that it kills the five instead.

In both scenarios Bob flips the switch and as a result you die -- but I think that in the case of action Y, where you are one of the five, you'd also likely be experiencing a sense of moral outrage towards Bob that you would be lacking in the case of action X.

There are only things that I want, and things that you want, and things that other agents want.

There exist moral considerations in someone choosing his actions much like there exist considerations of taste in someone choosing his lunch. If you fail to acknowledge this, you'll simply be predicting the actions of moral individuals wrongly.

Restate your whole point, but don't use the words moral or should anywhere (or synonyms).

Okay. There's a mechanism in our brains that serves to calculate our abstracted preferences for behaviours -- in the sense of attempting to calculate a preference if we had no stakes in the given situation. The effects of this mechanism are several: it produces positive emotions towards people and behaviours that follow said abstracted preferences, negative emotions towards people and behaviours that don't follow said abstracted preferences, and it contributes to determining our own actions, causing negative self-loathing feelings (labelled guilt or shame) when we fail to follow said abstracted preferences.

What you should find is that there's no longer any point to be made. "Moral" and "should" are buzzwords with no meaning,

I think I did a good job above. You've failed to make your case to me that there is no meaning behind moral and should. We recognize the effects of morality (outrage, applause, guilt), but we're not self-aware enough about the mechanism of our moral calculation itself. But that isn't surprising to me: there's hardly any pattern-recognition in our brains whose mechanism we are self-aware about (I don't consciously think "such-a-nose and such a face-shape" when my brain recognizes the face of my mother).

The only difference between optical pattern recognition and moral pattern recognition is that the latter deals with behaviours rather than visible objects. To tell me that there's no morality is like telling me there's no such thing as a square. Well sure, there's no Squareness Rock somewhere in the universe, but it's an actual pattern that our brains recognize.

Replies from: DSherron
comment by DSherron · 2013-05-22T20:30:20.786Z · LW(p) · GW(p)

It seems like a rather different statement to say that there exists a mechanism in our brain which tends to make us want to act as though we had no stakes in the situation, as opposed to talking about what is moral. I'm no evo-psych specialist but it seems plausible that such a mechanism exists. I dispute the notion that such a mechanism encompasses what is usually meant by morality. Most moral systems do not resolve to simply satisfying that mechanism. Also, I see no reason to label that particular mechanism "moral", nor the output of it those things we "should" do (I don't just disagree with this on reflection; it's actually my intuition that "should" means what you want to do, while impartiality is a disconnected preference that I recognize but don't associate even a little bit with should. I don't seem to have an intuition about what morality means other than doing what you should, but then I get a little jarring sensation from the contact with my should intuition...). You've described something I agree with after the taboo, but which before it I definitely disagree with. It's just an issue of semantics at this point, but semantics are also important. "Morality" has really huge connotations for us; it's a bit disingenuous to pick one specific part of our preferences and call it "moral", or what we "should" do (even if that's the part of our brain that causes us to talk about morality, it's not what we mean by morality). I mean, I ignore parts of my preferences all the time. A thousand shards of desire and all that. Acting impartially is somewhere in my preferences, but it's pretty effectively drowned out by everything else (and I would self-modify away from it given the option - it's not worth giving anything up for on reflection, except as social customs dictate).

I can identify the mechanism you call moral outrage though. I experience (in my introspection of my self-simulation, so, you know, reliable data here /sarcasm) frustration that he would make a decision that would kill me for no reason (although it only just now occurred to me that he could be intentionally evil rather than stupid - that's odd). I oddly experience a much stronger reaction imagining him being an idiot than imagining him directly trying to kill me. Maybe it's a map from how my "should" algorithm is wired (you should do that which on reflection you want to do) onto the situation, which does make sense. I dislike the goals of the evil guy, but he's following them as he should. The stupid one is failing to follow them correctly (and harming me in the process - I don't get anywhere near as upset, although I do get some feeling from it, if he kills 5 to save me).

In short, using the word moral makes your point sound really different than when you don't. I agree with it, mostly, without "moral" or "should". I don't think that most people mean anything close to what you've been using those words to mean, so I recommend some added clarity when talking about it. As to the Squareness Rock, "square" is a useful concept regardless of how I learned it - and if it was a Harblan Rock that told me a Harblan was a rectangle with sides in a 2:9 ratio, I wouldn't care (unless there were special properties about Harblans). A Morality Rock only tells me some rules of behavior, which I don't care about at all unless they line up with the preferences I already had. There is no such thing as morality, except in the way it's encoded in individual human brains (if you want to call that morality, since I prefer simply calling it preferences); and your definition doesn't even come close to the entirety of what is encoded in human brains.

comment by TheOtherDave · 2013-05-22T17:31:40.994Z · LW(p) · GW(p)

"Should" and "prefer" can't give different answers for yourself, unless you really muddle the entire issue of morality altogether.

(shrug) I find that "prefer" can give different answers for myself all on its own.

The important question is; what the hell do we mean by "morality?"

I'm not sure that is an important question, actually. Let alone the important question. What makes you think so?

morality cannot come from anywhere other than our preferences.

No argument there, ultimately. But just because my beliefs about what I should do are ultimately grounded in terms of my preferences, it still doesn't follow that in every situation my beliefs about what I should do will be identical to my beliefs about what I prefer to do.

Given that those two things are potentially different, it's potentially useful to have ways of talking about the difference.

Replies from: DSherron
comment by DSherron · 2013-05-22T18:09:28.396Z · LW(p) · GW(p)

By the important question, I meant the important question with regard to the problem at hand. Ultimately I've since decided that the whole concept of morality is a sort of Wrong Question; discourse is vastly improved by eliminating the word altogether (and not replacing it with a synonym).

What is the process which determines what you should do? What mental process do you perform to decide that you should or shouldn't do x? When I try to pinpoint it I just keep finding myself using exactly the same thoughts as when I decide what I prefer to do. When I try to reflect back to my days as a Christian, I recall checking against a set of general rules of good and bad and determining where something lies on that spectrum. Should can mean something different from want in the sense of "according to the Christian Bible, you should use any means necessary to bring others to believe in Christ even if that hurts you." But when talking about yourself? What's the rule set you're comparing to? I want to default to comparing to your preferences. If you don't do that then you need to be a lot more specific about what you mean by "should", and indeed why the word is useful at all in that context.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-22T18:47:54.001Z · LW(p) · GW(p)

The mental process I go through to determine my preferences is highly scope-sensitive.

For example, the process underlying asking "which of the choices I have the practical ability to implement right now do I prefer?" is very different from "which of the choices I have the intellectual ability to conceive of right now do I prefer?" is very different from "do I prefer to choose from among my current choices, or defer choosing?"

Also, the answer I give to each of those questions depends a lot on what parts of my psyche I'm most identifying with at the moment I answer.

Many of my "should" statements refer to the results of the most far-mode, ego-less version of "prefer" that I've cached the results of evaluating. In those cases, yes, "should" is equivalent to (one and only one version of) "prefer." Even in those cases, though, "prefer" is not (generally) equivalent to "should," though in those cases I am generally happiest when my various other "prefers" converge on my "should".

There are also "should" statements I make which are really social constructs I've picked up uncritically. I make some effort to evaluate these as I identify them and either discard them or endorse them on other grounds, but I don't devote nearly the effort to that that would be required to complete the task. In many of those cases, my "should" isn't equivalent to any form of "prefer," and I am generally happiest in those cases when I discard that "should".

comment by Petruchio · 2013-05-17T15:01:45.037Z · LW(p) · GW(p)

I can see that. It would be a difficult choice, but I would do the same. I think it is morally defensible.

[Edit] On second thought, I am not a husband or father, but I would like to think I will one day have a family who has heroic virtue enough to be willing to sacrifice their lives for others. How I would behave is, again, subject to my emotions, but I would like to honor that wish.

comment by MugaSofer · 2013-05-19T21:33:59.819Z · LW(p) · GW(p)

As a non-parent, I endorse this comment.

comment by MalcolmOcean (malcolmocean) · 2013-05-17T17:06:10.363Z · LW(p) · GW(p)

There's a possible loophole: living with the grief of your dead family (and specifically, the knowledge that you could have prevented it) might prevent you from making the world so super-awesome.

comment by MugaSofer · 2013-05-19T21:27:49.018Z · LW(p) · GW(p)

for all I know one of them will invent a cure for cancer in the future

What are the actual odds of that, though? Compared to the good you do (you're on LW, so I'm guessing you're more likely to do some rational altruism and save more than five lives than they are).

I also note that, if my wife or daughter was one of the people tied to the track, I would unhesitatingly throw myself off. This makes me conclude that I should want to throw myself off the bridge (because the supposed, flimsy 'rational altruistic' reason -- that I have the potential to help people -- is revealed to be bogus).

I would assume you're massively biased/emotionally compromised with regard to that scenario, just for evopsych reasons. So I'd be iffy about using that as a yardstick.

That said, you also presumably know them better, so there's the risk that you're treating the five victims as faceless NPCs.

I still wonder, however, if there is any possible rational reason to not choose to sacrifice oneself in the scenario. I am unable to come up with one.

Ultimately, it comes down to instrumental values. The five count for five times the average person's future impact, and saving them is automatically a net gain of four lives, so you would have to be noticeably above average - but I'd say there's enough low-hanging fruit around that that's far from impossible.

After all, it's not like these people are signed up for cryonics.
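
To make the break-even point in the comment above concrete, here is a minimal sketch with entirely made-up numbers; neither "impact" figure is claimed anywhere in the thread, they are illustrative assumptions only.

```python
# Hypothetical numbers only: "impact" = extra lives a person is expected to save
# over the rest of their life. Neither value comes from the thread.
avg_impact = 1.0   # assumed future impact of an average person tied to the tracks
my_impact = 3.0    # assumed future impact of the would-be jumper

# Jump: the five survive and keep their future impact; yours is lost with you.
lives_if_jump = 5 + 5 * avg_impact

# Stay: the five die; only your own survival and future impact remain.
lives_if_stay = 1 + my_impact

print(lives_if_jump, lives_if_stay)  # 10.0 vs 4.0 with these numbers

# Break-even: 1 + my_impact = 5 + 5 * avg_impact, i.e. my_impact = 4 + 5 * avg_impact.
# With avg_impact = 1, you would need to expect to save nine extra lives yourself
# before staying put comes out ahead -- "noticeably above average", as the comment says.
```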

comment by Shmi (shminux) · 2013-05-17T17:16:29.143Z · LW(p) · GW(p)

Which makes me think that yes, my incentives are screwed up here and the correct answer is: I should be as willing to jump as to push the fat man off the bridge.

Beware of the straw Vulcan/Dickensian rule "The needs of the many...". This is deontology disguised as utilitarianism. Sometimes it works and sometimes it doesn't, and you don't have to feel bad when it doesn't.

comment by drnickbone · 2013-05-17T16:54:19.707Z · LW(p) · GW(p)

What's the difference between the cases? There is quite a literature on this, including the theory that pushing the fat man triggers an emotional revulsion which flipping a switch does not. And answers to the effect that it is immoral to push the fat man are just attempts to rationalise this revulsion. That's a cynical, but possible, explanation.

Another theory is based on intentions: by flipping the switch, you are not intending to kill the one person on the alternate track (you'd be very glad if they somehow escaped in time), whereas in pushing the fat man, you are intending to kill or seriously injure him (if he just bounces off the trolley car unharmed, without stopping it, then pushing him achieves nothing). This doesn't quite work though, since you are presumably hoping that the fat man survives somehow.

A distinction that I haven't seen discussed so much is the Kantian one: that it is wrong to use people as means to ends rather than treating them as ends in their own right. Flipping a switch doesn't use anyone, whereas pushing the fat man off is clearly using him to stop the trolley.

This could also account for some of the distinction in the "honourable suicide" case: if I use myself to achieve an end that is of high ethical value (saving lives), then this is not in principle different from using my body to achieve any of my other (ethical) ends. (The Kantian has to grant some sort of exception to use of oneself, otherwise we can't get up in the morning.) So I think this supports the sense that jumping off myself is morally permissible (and even commendable), whereas pushing someone else off isn't.

I'm not so sure about the case where the fat man wants to jump himself, but needs help being lifted. That seems a little bit too close to assisted suicide, and the sense that this is wrong is probably based not on Kantian distinctions, but on whether it is a good social rule to allow killing in those circumstances. (It seems not, because the defence after the fact would always be "Of course the fat man wanted to jump! Prove that he didn't!")

comment by Petruchio · 2013-05-17T13:17:47.654Z · LW(p) · GW(p)

Myself, I am not a utilitarian, but a deontologist. I would flip the switch, because I have been given a choice between two different losses, inescapably, and I would try to minimize that loss. As for pushing someone else in front of the trolley, I could not abide someone doing that to me or a loved one, throwing us from relative safety into absolute disaster. So I would not do it to another. It is not my sacrifice to make.

As for throwing myself in front of the trolley...

I would want to. In the calm state I am in right now, I would do it. In the moment, it is more probable than not that fear would take hold and I would not sacrifice myself for five others. But in this scenario, I would probably be too stressed to think to throw another person in front of a train, let alone myself. So if we are taking the effects of stress out of my cognitive calculations, I will take the effects of stress out of my moral calculations.

I would do it.

Replies from: benelliott
comment by benelliott · 2013-05-17T17:56:36.818Z · LW(p) · GW(p)

I could not abide someone doing that to me or a loved one, throwing us from relative safety into absolute disaster. So I would not do it to another. It is not my sacrifice to make.

I could not abide myself or a loved one being killed on the track. What makes their lives so much less important?

Replies from: Petruchio
comment by Petruchio · 2013-05-17T18:19:15.931Z · LW(p) · GW(p)

But would you approve of someone else doing the same thing? Again, to you or a loved one?

But I am starting to see the problem with fighting the hypothetical. It leads to arguments and borrowed offense, allowing the debate to drag on in perpetuity. I can hypothetically be able to endure or not endure anything, but this doesn't increase my rationality or utility.

This will conclude my posting on this page. Maybe OphanWilde's discussion will be a more appropriate topic than the Unselfish Trolley Problem.

comment by Kindly · 2013-05-18T14:49:03.570Z · LW(p) · GW(p)

Suppose in case 3 someone else, not you, is tied to the track but can reach the switch. What now?

I'm confused. If I'm not the one flipping the switch, what's the question you're asking?

Replies from: falenas108
comment by falenas108 · 2013-05-18T15:21:00.864Z · LW(p) · GW(p)

Would you still want them to flip the switch, even though it would result in your death?

Replies from: Kindly
comment by Kindly · 2013-05-18T16:50:37.288Z · LW(p) · GW(p)

Oh. This seems to be unnecessarily treading over previously covered ground. My short answer is "no".

My long answer would probably be some sort of formalization of "no, but I understand why they'd do it". I'd be happy with the cognitive algorithm that would make the other person flip the switch. But my feeling is that when you do the calculations, and the calculations say I should die, then demanding I should die is one thing... demanding I be happy about it is asking a bit much.

comment by CronoDAS · 2013-05-17T23:49:46.149Z · LW(p) · GW(p)

A variant:

Your country is being invaded by evil barbarians. They intend to steal any portable valuables that they can, which includes people who have value as slaves.

Should you volunteer to join your country's army? Would you?

Should your country institute conscription, if it will increase the chances of successfully fighting off the invaders?

If you answered "yes" to the previous question, would you vote to institute conscription, if it meant that you personally will be one of those people conscripted?

Replies from: wedrifid
comment by wedrifid · 2013-05-18T00:48:31.674Z · LW(p) · GW(p)

Should you volunteer to join your country's army?

The unpacking of the word 'should' here is more complicated than the remainder of the question by far.

Would you?

No. I don't unilaterally cooperate on commons problems. I also note that some people choosing to cooperate unilaterally can reduce the incentive for others to find a way to enforce a more effective and complete solution. For example, an army of 1,000,000 naive altruists may be expected to beat the barbarians but with 900,000 casualties. In that case the others have an incentive to freeload. However, an army of 10,000,000 conditional cooperators who constructed an enforcement mechanism may be expected to crush the enemy with overwhelming force, suffering a mere 50,000 casualties. In this case volunteering is an evil act, not a good one.

Should your country institute conscription, if it will increase the chances of successfully fighting off the invaders?

If the choice is between using a volunteer army of the most altruistic and gullible and using a conscripted army, then yes, I prefer the conscription option, particularly if it increases the chances of success. But it isn't my preferred mechanism.

The best way to get people to do a job is to pay them enough that they choose to. If people are not choosing to join the army that means they are not getting paid enough. The government has the power to coerce people into doing things and the preferred way of doing this in this case is taxation. Make everyone contribute to the war effort and let the market choose who pays in cash and who pays in risk of bodily harm but is compensated financially.

If you answered "yes" to the previous question, would you vote to institute conscription, if it meant that you personally will be one of those people conscripted?

Yes. Unless the country is sane enough that the conscription vote failing would result in a superior solution being constructed.

Replies from: TheOtherDave, CronoDAS, CronoDAS, None, wedrifid, MugaSofer
comment by TheOtherDave · 2013-05-18T02:29:33.526Z · LW(p) · GW(p)

For good or ill, many people in my country have a hard time thinking of themselves as part of a cooperative enterprise with much wealthier people who are cooperating through money while they cooperate through risking bodily harm.

Of course, I've never suggested that my country is particularly sane.

Replies from: wedrifid
comment by wedrifid · 2013-05-18T03:51:58.124Z · LW(p) · GW(p)

For good or ill, many people in my country have a hard time thinking of themselves as part of a cooperative enterprise with much wealthier people who are cooperating through money while they cooperate through risking bodily harm.

Fortunately your country is also not being invaded by evil barbarians intent on pillaging and enslaving you.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-18T04:07:08.373Z · LW(p) · GW(p)

Yes, that is fortunate.

Replies from: wedrifid
comment by wedrifid · 2013-05-18T04:49:50.006Z · LW(p) · GW(p)

I suspect you are confused, but the intended connotations are hidden behind too much indirection for me to be sure. I acknowledge the social benefit of strategic vagueness (I believe you refer to it as 'hint culture'), but in this case I do not consider it a behaviour to be encouraged.

comment by CronoDAS · 2013-05-18T06:40:59.658Z · LW(p) · GW(p)

Here's another dilemma, somewhat related. I don't know if it was here or somewhere else that I came across this.

A construction company wants to undertake a project that it expects to be profitable. However, this project will be dangerous and, if it is undertaken, some employees are going to die during its construction. The company has come up with two possible plans for building the project. In one plan, one employee, named John, is certain to die. In the other, there will be exactly three fatalities out of a group of 100, although nobody knows which employees they will be (and cannot know until it happens). The company can't legally force its employees to work on the project, but it can offer them money to do so. John will not accept any amount of money in exchange for certain death, but the company does have 100 employees that it can pay to accept a 3% chance of death. You, as a government decision-maker, now have three choices:

1) Compel John to sacrifice himself so the project can be completed with the fewest number of deaths.
2) Allow the company to implement its second plan, in which three employees, randomly selected from a group of 100, will die.
3) Deny the construction company permission to construct the project.

What choice do you make?
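
As a side note, the expected body count under each option is easy to make explicit. A tiny sketch follows; the only added assumption is treating each worker's 3% risk as independent, which gives the same expected count of three as the "exactly three fatalities" stipulated above.

```python
# Expected construction deaths under each of the three options above.
p_death = 0.03
workers = 100

option_1 = 1.0                  # compel John: one certain death, without consent
option_2 = workers * p_death    # paid volunteers: 3 expected deaths, with consent
option_3 = 0.0                  # deny the project: no deaths, but the project is forgone

print(option_1, option_2, option_3)  # 1.0 3.0 0.0
```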

Replies from: wedrifid, MugaSofer
comment by wedrifid · 2013-05-18T08:05:34.507Z · LW(p) · GW(p)

1) Compel John to sacrifice himself so the project can be completed with the fewest number of deaths.
2) Allow the company to implement its second plan, in which three employees, randomly selected from a group of 100, will die.
3) Deny the construction company permission to construct the project.

What choice do you make?

2.

comment by MugaSofer · 2013-05-19T16:53:41.873Z · LW(p) · GW(p)

Either two or three, depending on whether applying this as a general rule would cripple the economy.

If I worked for the company, however, I would probably choose number one, assuming I don't have the power to prevent the project, or that it would provide benefits that outweigh his death.

comment by CronoDAS · 2013-05-18T01:27:10.611Z · LW(p) · GW(p)

In the American Civil War, you could avoid conscription by paying $300 and hiring a substitute. This was widely regarded as unfair, as only fairly wealthy people could afford to pay, and was a major agitating factor in the New York City draft riots.

Replies from: wedrifid
comment by wedrifid · 2013-05-18T02:22:01.369Z · LW(p) · GW(p)

In the American Civil War, you could avoid conscription by paying $300 and hiring a substitute. This was widely regarded as unfair, as only fairly wealthy people could afford to pay, and was a major agitating factor in the New York City draft riots.

If my understanding of US history serves me, one of the sides in that war was also fighting for slavery. I am more than willing to defy the 'fairness' intuitions that some people have got up in arms about in the past.

As a purely practical matter of implementing an economically sane solution with minimal hysterics by silly people, I observe that implementing conscription and allowing people to buy out of it will tend to trigger entirely different 'fairness' instincts than implementing a wartime tax and paying soldiers the market rate. The latter solution would likely produce far less civil unrest than the former, and the fact that the economic incentives are equivalent is largely irrelevant given that the objection wasn't rational in the first place.

Bizarrely enough I would expect more unrest from the wealthy in the "war tax and pay market rates" scenario ("It isn't fair that you are taking all this money from me! How dare you use my money to pay these low status people $500,000 a year. They do not deserve that.") and objection from (people affiliating with) lower classes in the case of the "conscript and trade" scenario ("It isn't fair that rich people can buy their way out of fighting but poor people can't!"). Complete reversal of political support due to terminology change in an implementation detail. People are crazy; the world is mad. Not even biased self-interested political influence can be trusted to be coherent.

comment by [deleted] · 2013-05-18T21:18:15.876Z · LW(p) · GW(p)

The best way to get people to do a job is to pay them enough that they choose to. If people are not choosing to join the army that means they are not getting paid enough.

This strikes me as correct in many cases, but I worry about applying the general rule to military service in particular. Soldiers who see themselves as working for pay have a lot less incentive to take on risk for the sake of their employer. And from what I remember from a few studies, offering people one kind of prominent incentive drowns out others: offer a kid a dollar for doing something he otherwise enjoys, and not only will he be less willing to do it without the dollar in the future, but he'll often do a worse job, taking no pleasure in it.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-19T16:51:27.741Z · LW(p) · GW(p)

How about soldiers who see themselves as being forcibly conscripted?

Replies from: None
comment by [deleted] · 2013-05-19T17:09:55.303Z · LW(p) · GW(p)

I was objecting to the claim that the best way to get people to fight for their country was to pay them a lot of money. I think this is really quite a bad way to get people to do this particular job, and forcible conscription is pretty bad too. If those are your only two options, I don't really know which is worse. I'm sure that depends on the circumstances.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-20T21:31:50.333Z · LW(p) · GW(p)

It just struck me as odd that you didn't address the analogous argument for the other side.

Of course, you're right, ideally people should be joining out of the goodness of their hearts.

No, wait, ideally we should find the best soldier and clone them.

Replies from: None
comment by [deleted] · 2013-05-20T22:19:26.856Z · LW(p) · GW(p)

Or we could just do it with flying assassin robots.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-23T09:15:57.395Z · LW(p) · GW(p)

... clearly my ideal was not ideal enough compared to, say, reality.

comment by wedrifid · 2013-05-18T01:00:02.937Z · LW(p) · GW(p)

The best way to get people to do a job is to pay them enough that they choose to. If people are not choosing to join the army that means they are not getting paid enough. The government has the power to coerce people into doing things and the preferred way of doing this in this case is taxation. Make everyone contribute to the war effort and let the market choose who pays in cash and who pays in risk of bodily harm but is compensated financially.

In fact, neither the central government nor coercion is (in principle) required. A sane country could rely on the government purely for its role as a contract enforcer and solve such cooperation problems through normal market forces and assurance contracts. "Military Kickstarter", as it were. This of course generalises to an outright weirdtopia.

comment by MugaSofer · 2013-05-19T16:50:55.140Z · LW(p) · GW(p)

The best way to get people to do a job is to pay them enough that they choose to. If people are not choosing to join the army that means they are not getting paid enough. The government has the power to coerce people into doing things and the preferred way of doing this in this case is taxation. Make everyone contribute to the war effort and let the market choose who pays in cash and who pays in risk of bodily harm but is compensated financially.

Hmm. What if the economy can't support the level of taxation required to encourage enough soldiers? There's got to be a point at which this isn't the answer. (Libertarians would say it's the point at which you start taxing people.)

comment by Manfred · 2013-05-17T17:43:22.629Z · LW(p) · GW(p)

I'm a consequentialist, not a total utilitarian.

comment by Shmi (shminux) · 2013-05-17T16:46:26.518Z · LW(p) · GW(p)

You pick your answer in ignorance of who you'll be in the problem. You don't know whether you're the pusher, the pushed, or one of the people tied to the tracks. In this case, the answer is easy: push! There's a 6/7 chance you'll survive so the selfish and utilitarian answers converge.

You mean, the selfish answer becomes the utilitarian one. In general I find arguments which only work when you consciously refuse to take into account some of the information available to you (like whether you are the pusher) quite suspect. It seems anti-Bayesian.

Replies from: MugaSofer
comment by MugaSofer · 2013-05-19T18:53:10.213Z · LW(p) · GW(p)

It can be handy for dealing with the "hostile hardware" issue, when you don't want certain parts of you updating.

comment by Shmi (shminux) · 2013-05-17T15:14:42.125Z · LW(p) · GW(p)

Just to clarify, the trolley problem is related to the repugnant conclusion, where utilitarianism + additivity + transitivity + a dense set of outcomes lead to counterintuitive decisions. You can live with these decisions if you are a perfect utilitarian; if you are not, you have to break or weaken some of the assumptions.

comment by Lumifer · 2013-05-17T14:33:58.182Z · LW(p) · GW(p)

I am starting to think the rails for that trolley were laid around a mulberry bush.

The best answer I know...

What is "best"? If you want to compare (or at least rank) the options available to you, you need a metric. "Best" is vague enough to be useless. What yardstick do you want to use?

comment by Peter Wildeford (peter_hurford) · 2013-05-17T14:33:13.163Z · LW(p) · GW(p)

In a vacuum where I knew nothing in advance about myself or those on the track, I would morally advocate my jumping. But I think it's rather plausible that, should I stay alive, I'll go on to save far more than five lives in my lifetime, more than the average person would. So by not jumping, I actually save more people. (Though this might just be a convenient rationalization.)

comment by Courtney Landers (courtney-landers) · 2020-05-24T18:28:17.833Z · LW(p) · GW(p)

I would throw the switch to divert the train from the track with five people to the track with just the one, then immediately lie down on the tracks next to the person I'm condemning to die and beg his forgiveness while I wait for the train to kill us both. I don't have to live more than a moment with the distress murdering that man would cause me, and I might be able to provide him a moment of comfort knowing that he's not dying alone. Save the five, ease the suffering of the one to whatever minimal degree you are capable of, and die knowing you deserve to be on the tracks with him if you are willing to condemn him.

comment by DanielLC · 2013-05-17T22:58:19.805Z · LW(p) · GW(p)

I feel like I shouldn't sacrifice myself because I can save more than five lives. Then again, my not being able to do so would be part of the "all else being equal" thing that's implicitly assumed. If I take into account that I can save more lives, I also have to take into account that I could be arrested for pushing the fat man off the bridge.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-17T23:30:33.357Z · LW(p) · GW(p)

I feel like I shouldn't sacrifice myself because I can save more than five lives.

That seems like an odd reason, unless you assume that the five people tied to the tracks can't collectively save more lives than you can.

Replies from: DanielLC, juliawise
comment by DanielLC · 2013-05-18T00:35:36.090Z · LW(p) · GW(p)

I don't assume that they can't. Just that they won't. Most people don't seem to care much about saving lives.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-18T01:23:31.193Z · LW(p) · GW(p)

Ah, OK. As noted elsewhere, I was misled by you describing yourself as someone who can save lives, rather than as someone who will. Thanks for clarifying.

comment by juliawise · 2013-05-18T00:44:36.264Z · LW(p) · GW(p)

Maybe it's about how likely you are to save lives rather than how possible it is.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-18T01:21:59.510Z · LW(p) · GW(p)

Maybe. If so, I was misled by the fact that DanielLC didn't say he's likely to save lives, merely that he can.

comment by Pentashagon · 2013-05-20T08:13:09.804Z · LW(p) · GW(p)

Updated case 2: Ask the person tied to the track what they would do if our situations were reversed. Do that. My reasoning is that the people on the track are much more affected by the decision than the people off the track, and therefore my utility is probably maximized by letting them maximize their own utility. If I can hear the other 5 people, I'll let them take a vote and respect it.

Updated cases 3 and 4 are identical; it's just that humans think about actions kind of strangely. In both cases I would save myself unless I was feeling particularly depressed or altruistic at the moment.

The legality of updated case 4 may have been tested in motor vehicle accidents. Steering onto the sidewalk to avoid an oncoming runaway vehicle is virtually identical. My guess is that if there was no way to avoid either accident it would be difficult to show guilt for either choice.

I seem to recall some stories about train engineers who had to choose between staying on their runaway train to blow the warning whistle (or react in other limited ways to save people before an imminent collision) and jumping to safety. From my fuzzy memory it seemed like engineers who jumped could be charged with something like dereliction of duty. Captains of ships have similar requirements to remain with their vessel in distress even if it carries personal risk. So the answer to all of these questions may depend on whether the person who can flip the switch is working for the trolley company at the time.

comment by drethelin · 2013-05-17T18:01:34.271Z · LW(p) · GW(p)

This post is totally pointless. Get to the moral argument you're trying to make.