The Problem With Trolley Problems
post by lionhearted (Sebastian Marshall) · 2010-10-23T05:14:07.308Z · LW · GW · Legacy · 113 comments
A trolley problem is a device used increasingly often in philosophy to probe people's moral beliefs and to debate them. Here's an example from Wikipedia:
As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you - your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?
I believe trolley problems are fundamentally flawed - at best they're a waste of time, and at worst they lead to really sloppy thinking. Here are four reasons why:
1. It assumes perfect information about outcomes.
2. It ignores the global secondary effects that local choices create.
3. It ignores real human nature - which would be to freeze and be indecisive.
4. It usually gives you two choices and no alternatives, and in real life, there are always alternatives.
First, trolley problems assume perfect information about outcomes - which is rarely the case in real life, where you're making choices based on imperfect information and don't know for sure what your actions will cause.
Second, everything creates secondary effects. If putting people involuntarily in harm's way to save others were an acceptable practice, suddenly we'd all have to be really careful in any emergency. Imagine living in a world where anyone would be comfortable ending your life to save other people nearby - you'd have to be not only constantly checking your surroundings, but also constantly on guard against do-gooders willing to push you onto the tracks.
Third, it ignores human nature. Human nature is to freeze up when bad things happen unless you're explicitly trained to react; in real life, most people would freeze or panic instead of acting. To get over that, first responders, soldiers, medics, police, and firefighters go through training. That training includes dealing with questionable circumstances and how to evaluate them, so you don't have a society where your trained personnel act randomly in emergencies.
Fourth, it gives you two choices and no alternatives. I firmly reject this - I think there are almost always alternative ways to get there from here if you open your mind to them. Once you start thinking that your only choice is to push the one guy in front of the trolley or to stand there doing nothing, your mind is closed to all other alternatives.
At best, this means trolley problems are a harmless waste of time. But I don't think they're harmless.
I think "trolley problem" type thinking is commonly used in real life to advocate and justify bad policy.
Here's how it goes:
Activist says, "We've got to take from this rich fat cat and give it to these poor people, or the poor people will starve and die. If you take the money, the fat cat will buy less cars and yachts, and the poor people will become much more successful and happy."
You'll see all the flaws I described above in that statement.
First, it assumes perfect information. The activist says that taking more money will lead to fewer yachts and cars - useless consumption. He doesn't consider that people might first cut their charity budget, or their investment budget, or something else. Higher-tax jurisdictions, like Northern Europe, have very low levels of charitable giving. They also have relatively low levels of capital investment.
Second, it ignores secondary effects. The activist assumes he can milk the cow and the cow won't mind. In reality, people start spending their time on minimizing their tax burden instead of doing productive work. It ripples through society.
Third, it ignores human nature. Saying "the fat cat won't miss it" is false - everyone is loss averse.
Fourth, the biggest problem of all, it gives two choices and no alternatives. "Tax the fat cat, or the poor people starve" - is there no other way to encourage charitable giving? Could we give charity visas where anyone giving $500,000 in philanthropy to the poor can get fast-track residency into the USA? Could we give larger tax breaks to people who choose to take care of distant relatives as a dependent? Are there other ways? Once the debate gets constrained to, "We must do this, or starvation is the result" you've got problems.
And I think that these poor quality thoughts on policy are a direct descendant of trolley problems. It's the same line of thinking - perfect information, ignores secondary effects, ignores human nature, and gives two choices while leaving no other alternatives. That's not real life. That's sloppy thinking.
Edit: This is being very poorly received so far... well, it was quickly voted up to +3, and now it's down to -2, which means controversial but generally negative reception.
Do people disagree? I understand trolley problems are an established part of critical thinking in philosophy; however, I think they're flawed, and I wanted to highlight those flaws.
The best counterargument I see right now is that the value of a trolley problem is that it reduces everything to just the moral decision. That's an interesting point; however, I think you could come up with better hypotheticals that do the same without the flaws I listed. Or perhaps the particular politics example isn't popular? You could substitute in similar arguments about the prohibition of alcohol, and perhaps I ought to have done that to make it less controversial. In any event, I welcome discussion and disagreement.
Questions for you: I think that trolley problems contain perfect information about outcomes in advance of them happening, ignore secondary effects, ignore human nature, and give artificially false constraints. Do you agree with that part? I think that's pretty much fact. Now, I think that's bad. Agree/disagree there? Okay, finally, I think this kind of thinking seeps over into politics, and it's likewise bad there. Agree/disagree? I know this is a bit of a controversial argument since trolley problems are common in philosophy, but I'd encourage you to have a think on what I wrote and agree, disagree, and otherwise discuss.
113 comments
Comments sorted by top scores.
comment by AlanCrowe · 2010-10-23T19:19:37.759Z · LW(p) · GW(p)
There are psychologists, following in the footsteps of Stanley Milgram, who set up situations, candid-camera style, to see what people actually do. This is very different from asking hypothetical questions.
Taking the trolley problem at face value, we recognise it as the problem of military command. Five regiments are encircled. Staring at the map, the general realises that if he commits a sixth regiment to battle at a key point, the first five regiments will break out and survive to fight another day, but the sixth regiment will be trapped and annihilated. Of course the general sends the sixth regiment into battle. The trolley problem is set up to be unproblematic. Sacrifice one to save five? Yes!
So what is it probing? Why do we have difficulty with the pencil-and-paper exercise when the real-life answer is clear-cut? We are not in fact generals, chosen for our moral courage and licensed to take tough decisions. We are middle-class wankers playing a social game. If we are answering the trolley problem, rather than asking it, we have been trapped into playing a game of "Heads I win, tails you lose."
The way the game works is that the hypothetical set up is unreasonable, so the social aggressor gets to vary the hypotheses after the victim has answered. If you don't throw the fat man in front of the trolley you have murdered five and are bad. If you do throw the fat man in front of the trolley then you are being over-confident in thinking that you know the outcome, in thinking that the five men are really in mortal peril and in thinking that throwing the fat man under the trolley is the only way to save them. You murder the fat man out of arrogance and are evil.
The trolley problem probes how well the players dodge verbal blows in social combat.
↑ comment by Relsqui · 2010-10-23T20:10:38.800Z · LW(p) · GW(p)
The trolley problem probes how well the players dodge verbal blows in social combat.
I'm not sure which way you intended it, but I find this a good argument against them. I rarely choose to invent artificial conflict for fun, and never by putting someone else in an uncomfortable position.
comment by [deleted] · 2010-10-23T07:09:52.274Z · LW(p) · GW(p)
Your argument could be phrased as:
- trolley problems are a philosophical tool to help in debate about moral beliefs;
- people sometimes use these tools out of context;
- therefore trolley problems are "a waste of time at best".
This doesn't follow. They're only a waste of time at best if they are never used in context, or are inefficient even when they are - and you didn't discuss that at all.
You should have phrased that as: Even if trolley problems are good at testing moral intuitions in theory, discussing them might make people prone to these errors in real life moral thinking.
↑ comment by lionhearted (Sebastian Marshall) · 2010-10-23T07:22:38.427Z · LW(p) · GW(p)
Your argument could be phrased as ...
My argument is that putting forward a hypothetical situation with perfect foresight, ignoring secondary effects, ignoring human nature, and constraining decisions to two options leads to bad thinking.
You should have phrased that as: Even if trolley problems are good at testing moral intuitions in theory, discussing them might make people prone to these errors in real life moral thinking.
On the contrary - I don't think trolley problems are good at testing moral intuitions in theory.
↑ comment by [deleted] · 2010-10-23T11:45:21.709Z · LW(p) · GW(p)
My argument is that putting forward a hypothetical situation with perfect foresight, ignoring secondary effects, ignoring human nature, and constraining decisions to two options leads to bad thinking.
Yes, that is what you argue for. But an argument doesn't only contain observations; it needs a setup where you put the argument in context, and a conclusion where you show how your observations relate to your setup. Your setup is that trolley problems are a theoretical tool, but your observations all come from real-life situations that simply don't match it, and that diminishes the quality of your argument.
On the contrary - I don't think trolley problems are good at testing moral intuitions in theory.
And that is what you have in your setup but then don't substantiate. That is what doesn't follow from your observations.
comment by MatthewW · 2010-10-23T12:42:00.019Z · LW(p) · GW(p)
I think trolley problems suffer from a different type of oversimplification.
Suppose in your system of ethics the correct action in this sort of situation depends on why the various different people got tied to the various bits of track, or on why 'you' ended up being in the situation where you get to control the direction of the trolley.
In that case, the trolley problem has abstracted away the information you need (and would normally have in the real world) to choose the right action.
(Or if you have a formulation which explicitly mentions the 'mad philosopher' and you take that bit seriously, then the question becomes an odd corner case rather than a simplifying thought experiment.)
↑ comment by byrnema · 2010-10-24T21:38:10.897Z · LW(p) · GW(p)
Exactly. Context is very important. You can't just count deaths. For example, the example AlanCrowe gave above has an obvious answer because the military has a clear context: soldiers have already committed their lives and themselves to being 'one of a number'.
Based on the limited information of this trolley problem, I think my answer would have to consider that the entire universe would be a better place if 5 people died being run over by an unwitting machine than if 1 person died because he was deliberately pushed by one of his fellows.
Taking the constraints of the trolley problem at face value, one action a person might consider is asking the fat man to jump. If asked, ethically, the man should probably say yes. Given that, I am not sure it would be ethical to ask him. Finally, since the fat man could anticipate your asking, it might be most moral, then, to prevent him from jumping.
Thus over the course of a comment, I have surprised myself with the position that not only should you not push the man, you should prevent him from jumping if it occurs to him to do so. (That is, if his decision is impulsive, rather than a life commitment of self-sacrifice. I would not prevent a monk or a marine from saving the 5 persons.)
↑ comment by erratio · 2010-10-24T22:08:15.708Z · LW(p) · GW(p)
But if he does decide to jump, you have no way to know whether it's because he anticipated your asking or whether he came to that decision independently of you.
↑ comment by byrnema · 2010-10-24T22:43:49.527Z · LW(p) · GW(p)
Yeah, preventing the man from jumping given a probability that he really, desperately wants to do it might be the only moral dilemma.
In the movie, 'A Trolley Problem', he should threaten to kill me if I try to prevent him. Or I should precommit to killing all the people he saves if he saves them, so he must kill me to secure the 5 lives. This would be a voluntary sacrifice of my life to prevent an involuntary sacrifice of life.
I suppose 5 people should try to prevent him. If he kills all five of us, he really wanted to do it.
(I'm not sure exactly where this line of reasoning became inane, but at some point it did.)
comment by Joanna Morningstar (Jonathan_Lee) · 2010-10-23T08:31:47.787Z · LW(p) · GW(p)
The thrust of your argument appears to be that: 1) Trolley problems are idealised 2) Idealisation can be a dark art rhetorical technique in discussion of the real world. 3) Boo trolley problems!
There are a number of issues.
First and foremost, reversed stupidity is not intelligence. Even if you are granted the substance of your criticisms of the activist's position, this does not argue per se against trolley problems as dilemmas. The fact that they share features with a "Bad Thing" does not inherently make them bad.
Secondly, the whole point of considering trolley problems is to elucidate human nature and give some measure of training in cognition in stressful edge cases. The observation that humans freeze or behave inconsistently is important. This is why the trolley problems have to be trued in the sense that you object to - if they are not, many humans will avoid thinking about the ethical question being posed. In essence "I don't like your options, give me a more palatable one" is a fully general and utterly useless answer; it must be excluded.
Thirdly, your argument turns on the claim that merely admitting trolley problems as objects of thought somehow makes people more likely to accept dichotomies that "justify tyranny and oppression". This is risible. Even if the dichotomy is a false one, you surely should find one or the other branch preferable. It is perfectly admissible to say:
"I prefer this option (implicitly you presume that will be the taxation), but that if this argument is to be the basis for policy, then there are better alternatives foo, bar, etc., and that various important real world effects have been neglected."
Those familiar with the trolley problems and general philosophical dilemmas are more likely to be aware of the idealisations and voice these concerns cogently if idealisations are used in rhetoric or politics.
Fourthly, in terms of data, I would challenge you to find evidence suggesting that study of trolley problems leads to acceptance of tyranny. I would note (anecdotally) that communities where one can say "trolley problem" without needing to explain further seem to have a higher density of libertarians and anarchists than the general population.
So in rough summary: 1) Your conclusion does not follow from the argument. 2) Trolley problems are idealised because if they aren't, humans evade rather than engage. 3) Noting and calling out dark arts rhetoric is roughly orthogonal to thinking about trolley problems (conditional on thinking). 4) Citation needed wrt. increased tyranny in those who consider trolley problems.
↑ comment by Relsqui · 2010-10-23T18:25:22.356Z · LW(p) · GW(p)
"I prefer this option (implicitly you presume that will be the taxation), but that if this argument is to be the basis for policy, then ..."
This is dangerous, in the real world. If you say "of these two options, I prefer X," I would expect that to be misinterpreted by non-literal-minded people as "I support X." In any real-world situation, I think it's actually smarter and more useful to say something like, "This is the wrong choice--there's also the option of Z" without associating yourself with one of the options you don't actually support. Similarly:
you surely should find one or the other branch preferable
Personally, I'm wary in general of the suggestion that I "should" intrinsically have a preference about something. I reserve the right not to have a preference worth expressing and being held to until I've thought seriously about the question, and I may not have thought seriously about the question yet. If I understand correctly, the original poster's point was that trolley problems do not adequately map to reality, and therefore thinking seriously about them in that way is not worth the trouble.
↑ comment by lionhearted (Sebastian Marshall) · 2010-10-23T15:22:18.897Z · LW(p) · GW(p)
The thrust of your argument appears to be that: 1) Trolley problems are idealised 2) Idealisation can be a dark art rhetorical technique in discussion of the real world. 3) Boo trolley problems!
This is strange - this is the second comment that summarizes an argument I'm not actually making, and then argues against the made-up summary.
My argument isn't against idealization - which would be an argument against any sort of generalized hypothetical and against the majority of fiction ever made.
No, my argument is that trolley problems do not map to reality very well, and thus, time spent on them is potentially conducive to sloppy thinking. The four problems I listed were perfect foresight, ignoring secondary effects, ignoring human nature, and constraining decisions to two options - these all lead to a lower quality of thinking than a better constructed question would.
There's a host of real-world, realistic dilemmas you could use in place of a (flawed) trolley problem: layoffs/redundancies to try to make a company more profitable versus keeping the ship running as is (like Jack Welch at GE), military problems like fighting a retreating defensive action, policing problems like profiling, what burden of proof to require in a courtroom, a doctor being asked for performance-enhancing drugs with potentially fatal consequences... there's plenty of real-world, reality-based situations to use for dilemmas, and we would be better off for using them.
↑ comment by Joanna Morningstar (Jonathan_Lee) · 2010-10-24T09:13:16.868Z · LW(p) · GW(p)
From your own summary:
I think that trolley problems contain perfect information about outcomes in advance of them happening, ignore secondary effects, ignore human nature, and give artificially false constraints.
Which is to say they are idealised problems; they are trued dilemmas. Your remaining argument is fully general against any idealisation or truing of a problem that can also be used rhetorically. This is (I think) what Tordmor's summary is getting at; mine is doing the same.
Now, I think that's bad. Agree/disagree there?
So, I clearly disagree, and further you fail to actually establish this "badness". It is not problematic to think about simplified problems. The trolley problems demonstrate that instinctual ethics are sensitive to whether you have to "act" in some sense. I consider that a bug. The problem is that finding these bugs is harder in "real world" situations; people can avoid the actual point of the dilemma by appealing for more options.
In the examples you give, there is no similar pair of problems. The point isn't the utilitarianism in a single trolley problem; it's that when two tracks are replaced by a (canonically larger) person on the bridge and 5 workers further down, people change their answers.
Okay, finally, I think this kind of thinking seeps over into politics, and it's likewise bad there. Agree/disagree?
You don't establish this claim (I disagree). It is worth observing that the standard third "trolley" problem is 5 organ recipients and one healthy potential donor for all. The point is to establish that real world situations have more complexity -- your four problems.
The point of the trolley problems is to draw attention to the fact that the H.Sap inbuilt ethics is distinctly suboptimal in some circumstances. Your putative "better" dilemmas don't make that clear. Failing to note and account for these bugs is precisely "sloppy thinking". Being inconsistent in action on the basis of the varying descriptions of identical situations seems to be "sloppy thinking". Failing on Newcomb's problem is "sloppy thinking". Taking an "Activists" hypothetical as a true description of the world is "sloppy thinking". Knowing that the hardware you use is buggy? Not so much.
↑ comment by Relsqui · 2010-10-23T18:25:59.703Z · LW(p) · GW(p)
This is strange - this is the second comment that summarizes an argument I'm not actually making, and then argues against the made-up summary.
If the mistaken summaries are similar to each other, this may mean that the post did not get across the point you wanted it to get across.
↑ comment by lionhearted (Sebastian Marshall) · 2010-10-23T19:58:10.595Z · LW(p) · GW(p)
If the mistaken summaries are similar to each other
Nah, they were totally different summaries. Both used words I didn't say and that don't map at all to arguments I made... it's like they read something that's not there.
this may mean that the post did not get across the point you wanted it to get across.
That, or people mis-summarizing for argument's sake?
Either way, it's up to me to get the point across clearly. I thought this was a fairly simple, straightforward post, but apparently not.
comment by knb · 2010-10-24T01:13:13.458Z · LW(p) · GW(p)
I don't think trolley problems are used to argue for policies. Rather, the point of trolley problems is to reveal that the way humans normally do moral reasoning is not shut-up-and-multiply utilitarianism.
Activist says, "We've got to take from this rich fat cat and give it to these poor people, or the poor people will starve and die. If you take the money, the fat cat will buy less cars and yachts, and the poor people will become much more successful and happy."
While activists may try to trot out utilitarian justifications for their political arguments, nothing about trolley problems can be seen as bolstering their claims (either that redistribution is utility-maximizing, or that utilitarianism is itself the "correct" moral theory). Trolley problem research isn't normative, it's descriptive.
comment by Matt_Stevenson · 2010-10-23T06:30:22.063Z · LW(p) · GW(p)
I think you are looking at the Trolley Problem out of context.
The Trolley Problem isn't supposed to represent a real-world situation. It's a simplified thought experiment designed to illustrate the variability of morality in slightly differing scenarios. They don't offer solutions to moral questions; they highlight the problems.
↑ comment by lionhearted (Sebastian Marshall) · 2010-10-23T06:53:30.881Z · LW(p) · GW(p)
I think you are looking at the Trolley Problem out of context.
I understand the supposed purpose of trolley problems, but I think they're conducive to poor quality thinking nonetheless.
They don't offer solutions to moral questions; they highlight the problems.
Right, but I think there are better ways of going about it. I wanted to keep the post brief and information-dense so I didn't list alternative problems, but there are a number you could use based in real history. For instance, a city is about to be lost in war, and the military commander is going through his options - do you order some men to stay behind and fight to the death to cover the retreat of the others, ask for volunteers to do it, draw lots? Try to have everyone retreat, even though you think there's a larger chance your whole force could be destroyed? If some defenders stay, does the commander lead the defensive sacrificing force himself or lead the retreat? Etc, etc.
That sort of example would include imperfect information, secondary effects, human nature, and many different options. I think trolley problems are constructed so poorly that they're conducive to poor quality thought. There are plenty of examples you could use to discuss hard choices that don't suffer from those problems.
↑ comment by Matt_Stevenson · 2010-10-23T07:22:36.891Z · LW(p) · GW(p)
I would compare the trolley problem to a hypothetical physics problem. Just like a physicist will assume a frictionless surface and no air resistance, the trolley problem is important because it discards everything else. It is a reductionist attempt at exploring moral thought.
↑ comment by lionhearted (Sebastian Marshall) · 2010-10-23T07:32:08.265Z · LW(p) · GW(p)
Interesting thought, but it wouldn't be difficult to take the time to make situations more lifelike and realistic. There are plenty of real-life situations that let you explore moral thought without the flaws listed above.
↑ comment by Relsqui · 2010-10-23T18:29:03.272Z · LW(p) · GW(p)
it wouldn't be difficult to take the time to make situations more lifelike and realistic.
It isn't necessarily difficult for a good physicist to factor in friction and air resistance, either. But those are distractions, unnecessarily drawing effort and attention away from the specific force actually being studied. That's what the trolley problem also tries to do: create a simplified environment so that a single variable can be examined.
↑ comment by Perplexed · 2010-10-23T19:48:59.123Z · LW(p) · GW(p)
But physicists don't ignore friction when performing experiments; they do so only in teaching. If philosophers used trolley problems only to teach ethics ("Push one fat philosopher onto the tracks, to save two drug addicts.") or to teach metaethics ("An adherent of virtue ethics probably wouldn't push"), then I doubt that lionhearted would complain.
But we have psychologists using trolley problems to perform experiments (or, if from Harvard, to publish papers in which they claim to have conducted experiments). That is what I understand lionhearted to be objecting to.
↑ comment by NancyLebovitz · 2010-10-24T01:21:33.012Z · LW(p) · GW(p)
Nitpick: I think you're implying that no philosophers are drug addicts.
Suppose that both the people on the bridge are sufficiently heavy to stop the trolley. Should one of them sacrifice themself, or are both obligated to try to preserve their lives by fighting not to be thrown off?
↑ comment by [deleted] · 2010-10-25T19:43:47.311Z · LW(p) · GW(p)
Physicists ignore friction when teaching, when thinking, and when performing experiments. Doing so reduces confusion, and allows for greater understanding of the effects of friction once attention is turned to it.
The fact that the analogous situation in moral philosophy increases confusion is revealing.
↑ comment by Matt_Stevenson · 2010-10-23T22:54:42.677Z · LW(p) · GW(p)
I think a better example than frictionless surfaces and no air resistance would be idealized symmetries. Once something like Coulomb's Law was postulated, physicists would imagine the implications of charges on infinite wires and planes to make interesting predictions.
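To make the analogy concrete: Coulomb's inverse-square law for a point charge, integrated over an idealised infinitely long wire, predicts a field that falls off as 1/r rather than 1/r²:

\[E_{\text{point}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q}{r^2} \quad\longrightarrow\quad E_{\text{wire}} = \frac{\lambda}{2\pi\varepsilon_0\, r}\]

- the sort of clean, testable prediction that only an idealised symmetry makes available.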
We use the trolley problem and its variations as thought experiments in order to make predictions we can test further with MRIs and the like.
So a publication on interesting trolley problem results would be like a theoretical physics paper showing that relativity predicts some property of black holes.
↑ comment by djcb · 2010-10-23T13:30:40.611Z · LW(p) · GW(p)
The trolley problem is interesting in that it's a very simple way to show how most people's morals are not based on some framework like consequentialism (any flavor) or deontology or virtue ethics or... but on some vague intuitions that are not very consistent - with the ethical frameworks used post hoc for rationalization.
The problem could be complicated (made more realistic) by adding unknowns, probabilities and so on, but would that bring any new insights?
comment by SilasBarta · 2010-10-24T16:36:04.346Z · LW(p) · GW(p)
This is actually very similar to an intuition I've had about this problem. The difference is that I compare it to a different scenario, and regard it not as a reason to reject the trolley problem, but as a reason to justify the optimality of not pushing the innocent bystander onto the tracks.
You compared it to wealth redistribution attempting to optimize total utility, while I think a better comparison is discrimination law, something that matters close to me. (Edit: sorry for awkward phrasing, keeping it there because it was quoted.) In short, just as milking the cow makes it waste effort avoiding being milked, discrimination law simply makes employers drastically revise hiring practices so that nothing they use counts under the law as a "job posting".
So they rely more heavily on network hires and indirect signals of employee quality, and, ironically, make it much harder for people outside these networks -- usually the "protected" classes -- to even get a chance in the first place, no matter how good they could demonstrate their work to be!
People think they can just shove these "surprise" costs onto agents with "too much utility", ignoring the long-term ways they'll react. As I said before about the problem, one non-obvious result of shoving the guy onto the track is that it drastically shifts around the risk profiles of various activities. Previously, standing on a track was riskier than staying off it. But a policy of switching this risk around makes it harder overall for people to ascertain risks, which makes them spend wasteful effort avoiding risks (since they have to assume they're understated).
Generally, it really bothers me when people talk in terms of policies that attempt to shift utility from privileged group A to pitiable group B, not realizing that A will still get its privileges and exclude B -- it'll just become a more complex dance for B to figure out.
↑ comment by Tenek · 2010-10-25T16:16:52.868Z · LW(p) · GW(p)
I've been rereading this comment for the past 10 minutes and I have no idea whether this is an (attempted) arm's-length assessment of discrimination law (I say attempted because of the "matters close to me" acknowledgement) or the bitter result of the author being turned down for a job. At first glance it looks like the latter, but this is exactly the sort of situation where I would expect to see someone make a completely rational analysis and not pay any attention to how it's going to come across to someone who doesn't know you're not just another bigot. (I call it Larry Summers Syndrome: http://en.wikipedia.org/wiki/Lawrence_Summers#Differences_between_the_sexes)
It's one thing to talk about "risk profiles" or "incentives" in general terms, but when you actually want to implement something, it becomes a particular incentive, and there is no a priori reason to assume the cost will outweigh the benefit. When you concentrate on the existence of a cost (or benefit) and ignore the magnitude, you start making statements like "[the Bush tax cuts] increased revenue, because of the vibrancy of these tax cuts in the economy". Similarly, if you try to transfer utility from group A to group B, group A is going to be upset and try to minimize their loss - that doesn't mean that group A is going to completely negate it, or that group B is going to be worse off.
↑ comment by SilasBarta · 2010-10-25T20:19:24.699Z · LW(p) · GW(p)
Now I don't know what you're trying to say about me. My views on this don't count because you think I was turned down for a job and blamed discrimination law for it? Huh?
In any case, I agree that the costs of A's reaction don't necessarily negate the benefit. What I criticize is models that view such utility shifting as a one-shot enterprise without complications, which is a sadly common belief. What's more common -- just from my personal experience -- is that attempts to nobly shift utility this way result in making life more kafkaesque.
For example, when you ban IQ tests in employment screening, you don't get employers happily ignoring the information value of IQ tests -- rather, they just fob it off to a university, which will gladly give the IQ test and pass on a weaker measure of ability. If you mandate benefits, employers don't simply continue their hiring practices exactly as before but give workers more utility; rather, they cut back in other ways.
I believe I'm impacted by this mentality, because I would much rather be told, you don't qualify because of ____ than have to follow some complicated, unspoken signaling dance (i.e. the relative significance of having a network to having ability) that avoids officially-banned screening methods.
Yes, I've been turned down for jobs, but a) I've been gainfully employed in my field for 5 years, and b) my concern is not with being turned down, but with it being harder to find opportunities that could result in being turned down in the first place.
What would be a non-bigoted way to make the point I just did?
↑ comment by Tenek · 2010-10-26T14:56:56.628Z · LW(p) · GW(p)
"I have no idea what criteria were used when I'm rejected for a job, and I'm not even seeing the jobs that never get posted because it's easier to hire someone you know than go through the formal process and jump through all its hoops." Maybe.
I don't think your views don't count - I was hoping that I'd gone to sufficient lengths to point out that while it might have just been bitterness, there was a substantial chance it wasn't. Maybe I underestimated the LW rationalist:racist ratio... actually, probably by a huge margin. %#$@.
So what would happen if you traded the kafkaesque life for the officially-banned screening methods? Would you rather have twice the number of job opportunities and lose 3/4 of them right away because you're ? Or would you rather that other people get rejected for them, if you don't personally have many of the 'undesirable' attributes?
Finally, let's go to story mode. A friend of mine applied for a job. They weren't allowed to ask her about her religion. But she has a name commonly found among members of a particular one. She got the job, and became the only employee (out of a couple dozen) not sharing the faith of the rest of them. So I guess they took a guess at her religion based on her name, and chose using that metric. I have no idea whether this is a success or failure of antidiscrimination laws. Without them, she'd have had no chance. With them, they tried anyways. But at least it was possible, even if she didn't fit in... and quit a few years later, when her husband got cancer and they blamed it on her not praying.
↑ comment by SilasBarta · 2010-10-26T18:04:27.094Z · LW(p) · GW(p)
At risk of exhibiting confirmation bias, your anecdote makes my point for me. With the ban on discrimination, your friend got misleading signals of which employers she would like, and her employer relied on weaker signals of things they can't ask about, leading to an incorrect inference -- and later, a disastrous mismatch.
So, far from eliminating the pernicious effects of bigotry, the law made people waste effort trying to route their efforts around it. Had there been no law, people could openly communicate their preferences in both directions, without having to go through the complication of sending weaker signals because they can't send the banned ones. And with less "noise", the cost of bigoted preferences becomes clearer. (Explanation)
I don't see why you're using this as an example of why anti-discrimination laws are good.
So what would happen if you traded the kafkaesque life for the officially-banned screening methods? Would you rather have twice the number of job opportunities and lose 3/4 of them right away because you're ? Or would you rather that other people get rejected for them, if you don't personally have many of the 'undesirable' attributes?
Certainly, you can pick the numbers to reach the conclusion that you want. But averaging over all possibilities, and weighting by likelihood, yes, I would prefer the environment that makes all job applicants not a potential albatross for employers, for the same reason I would prefer not to be exempt from lawsuits -- yes, there's a narrow benefit to laws saying "you can't discriminate against people named Silas", and to a personal exemption from lawsuits. But realistically, the way people respond to this will more than eliminate the benefit. (In case it's not clear -- do you see a hazard in associating with a stranger when you're guaranteed to have no legal recourse against what they do?)
So yes, I would rather have the option to most clearly communicate preferences, than have to dance around them, which results in situations where someone can be turned down for a job because of false beliefs that they can't even refute. This remains true, even and especially if I'm in a less fortunate end of the applicant pool (which I have been, like everyone else who has been under 14/16/18 at some point in their life).
(Btw, I'm not the one modding you down.)
↑ comment by Tenek · 2010-10-26T20:08:28.137Z · LW(p) · GW(p)
I'm not using it as an example of why they're good. I'm offering it as an example because it's relevant to the topic.
Adding a cost to circumvent the law makes you less likely to do so, though. If you keep hiring people who are decidedly suboptimal because you have to use a lousy approximation of whatever characteristic you want, you might give up on it.
I get that you would rather, given that you're going to be rejected for your age/skin color/gender/etc., be told why. But if you want to reduce the use of those criteria, then banning them will stop the people who care a small amount (i.e. not enough to bother getting around the ban).
comment by Vladimir_Nesov · 2010-10-23T11:32:34.062Z · LW(p) · GW(p)
I believe trolley problems are fundamentally flawed - at best they're a waste of time, and at worst they lead to really sloppy thinking. Here are four reasons why:
- It assumes perfect information about outcomes.
- It ignores the global secondary effects that local choices create.
- It ignores real human nature - which would be to freeze and be indecisive.
- It usually gives you two choices and no alternatives, and in real life, there are always alternatives.
Note that these properties are characteristic of most thought experiments, not just the trolley problem.
Take (3), for example. It talks about descriptive adequacy of a thought experiment, while the goal of the enterprise is to figure out what should be done. By analogy, asking "How much is 233*4945?" shouldn't be faced with an argument that since real human beings are unlikely to give a correct response in a few seconds, trying to answer this question leads to sloppy thinking, as it doesn't reflect what really happens when people try to answer it on the spot.
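(For what it's worth, the arithmetic has a definite answer whether or not anyone can produce it on the spot: 233 × 4945 = 233 × 5000 − 233 × 55 = 1,165,000 − 12,815 = 1,152,185.)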
↑ comment by lionhearted (Sebastian Marshall) · 2010-10-23T15:30:46.223Z · LW(p) · GW(p)
It talks about descriptive adequacy of a thought experiment, while the goal of the enterprise is to figure out what should be done.
I partially agree. But the point is, in any emergency situation, you're going to default to your training if you're acting. Thus, individual moral intuitions give way to a host of other concerns, and to a body of history, literature, and tradition of the particular discipline (whether it be emergency first response, engineering, soldiering, policing, surgery, or any other life-or-death discipline).
If you're going to spend the thought cycles, much better to use a real discipline. Here's one - there are two run-down apartment buildings with roughly 200 people in them. Mortars were fired off the rooftops the night before, killing ~20 innocent civilians. The next day, military troops raid the buildings, arrest everyone, find a cache of weapons, and strongly suspect the people who used them are among the 200 arrested. Everyone says they don't know who did it. What do you do with those people?
It addresses the same questions a trolley problem does, except it doesn't have the flaws a trolley problem has.
↑ comment by jimrandomh · 2010-10-23T16:30:32.985Z · LW(p) · GW(p)
It's like a trolley problem, except better, because it doesn't have the flaws a trolley problem has.
Except that it has a different problem: trying to answer the question quickly derails into complex real-world issues, but you can't reliably predict which real-world issue it will derail into. If you use that example and try to talk about whether it's okay to punish innocents to save others from being mortared, some readers will want to talk about fingerprints and forensics, some will want to talk about how poverty caused the situation in the first place, and some will want to talk about anti-mortar defense hardware. A trolley problem focuses the conversation in a way that real world problems can't, and when talking about philosophical issues that're confusing to begin with, that focus is something you can't do without.
↑ comment by lionhearted (Sebastian Marshall) · 2010-10-23T16:40:13.424Z · LW(p) · GW(p)
Thank you for replying. The downvotes without replies are confusing - I'm not sure exactly what people take issue with, whether they disagree on particular grounds or just dislike the point viscerally.
A trolley problem focuses the conversation in a way that real world problems can't, and when talking about philosophical issues that're confusing to begin with, that focus is something you can't do without.
Trolley problems do that, but at some expense that I believe can lead to poor quality thought - constraining a situation to two possible decisions with predetermined outcomes. While a little messier, I think forcing people to actually think through a variety of scenarios and be creative is healthier, and you can still get at ethical systems. If you wanted to make it much simpler, there are still ways to do so without being forced into a constrained situation with predetermined outcomes. That's the issue I have - the idea that someone can tell you, "Here's your two options, and here's the outcomes from them" - I think this potentially primes people to listen to false dichotomies later, like in politics.
Maybe I'm mistaken, but I don't think so. At least, this is worth considering. Any time you get a false dichotomy with 20/20 foresight presented to you, I think, "That's a false dichotomy and you're claiming 20/20 foresight" would be a good answer. Considering how often even highly educated and smart people fall for a false dichotomy and believe someone who claims to know what the outcomes will be with certainty in advance, I believe this is a legitimate concern.
↑ comment by prase · 2010-10-23T20:16:41.764Z · LW(p) · GW(p)
But the point is, in any emergency situation, you're going to default to your training if you're acting.
Trolley problems aren't conceived as a model of an emergency situation. The emergency part is there mainly to emphasise the restricted choice: to push or not to push, with no time for anything else. You can easily imagine a contrived trolley scenario with plenty of time to decide.
I don't understand the analogy between trolley problems and your mortar scenario.
comment by Morendil · 2010-10-23T08:01:45.829Z · LW(p) · GW(p)
Related - Philippa Foot, renowned philosopher and unknown anthropologist. Foot died earlier this month. She was the originator of the trolley problem, in 1967.
comment by JoshuaZ · 2010-10-23T22:52:34.557Z · LW(p) · GW(p)
The point of using perfect information problems is that they should be simpler to handle. If a moral system can't handle the perfect information problems then it certainly can't handle the more complicated problems where there is a lack of perfect information. In this regard, this is similar to looking at Newcomb's Problem. The problem itself will never come up in that form. But if a decision theory can't give a coherent response to Newcomb's then there's a problem.
↑ comment by Vladimir_M · 2010-10-23T23:04:59.245Z · LW(p) · GW(p)
JoshuaZ:
The point of using perfect information problems is that they should be simpler to handle. If a moral system can't handle the perfect information problems then it certainly can't handle the more complicated problems where there is a lack of perfect information.
Suppose however that system A gets somewhat confused on the simple perfect-information problem, while system B handles it with perfect clarity -- but when realistic complications are introduced, system B ends up being far more confused and inadequate than A, which maintains roughly the same level of confusion. In this situation, analysis based on the simple problem will suggest a wrong conclusion about the overall merits of A and B.
I believe that this is in fact the case with utilitarianism versus virtue ethics. Utilitarianism will give you clear and unambiguous answers in unrealistic simple problems with perfect-information, perfectly predictable consequences, and an intuitively obvious way to sum and compare utilities. Virtue ethics might get somewhat confused and arbitrary in these situations, but it's not much worse for real-world problems -- in which utilitarianism is usually impossible to apply in a coherent and sensible way.
↑ comment by [deleted] · 2010-10-25T19:35:40.885Z · LW(p) · GW(p)
Someone who claims to be confused about the trolley problem with clearly enumerated options and outcomes, but not confused about a real world problem with options and outcomes that are difficult to enumerate and predict, is being dishonest about his level of confusion. A virtue ethicist should be able to tell me whether pushing the fat man in front of the train is more virtuous, less virtuous, or as virtuous as letting the five other folks die.
↑ comment by Vladimir_M · 2010-10-25T20:14:29.412Z · LW(p) · GW(p)
I think you misunderstood my comment, and in any case, that's a non sequitur, because the problem is not only with the complexity, but also the artificiality of the situation. I'll try to state my position more clearly.
Let's divide moral problems into three categories, based on: (a) how plausible the situation is in reality, and (b) whether the problem is unrealistically oversimplified in terms of knowledge, predictability, and inter-personal utility comparisons:
1. Plausible scenario, realistically complex.
2. Implausible scenario, realistically complex.
3. Implausible scenario, oversimplified.
(The fourth logical possibility is not realistic, since any plausible scenario will feature realistic complications.) For example, trolley problems are in category (3), while problems that appear in reality are always in categories (1) and (2), and overwhelmingly in (1).
My claim is that utilitarianism provides an exact methodology for working with type 3 problems, but it completely fails for types 1 and 2, practically without exception. On the other hand, virtue ethics turns out to be more fuzzy and subjective when compared with utilitarianism in type 3 problems (though it still handles them tolerably well), but unlike utilitarianism, it is also capable of handling types 1 and 2, and it usually handles the first (and most important) type extremely well. Therefore, it is fallacious to make general conclusions about the merits of these approaches from thought experiments with type 3 problems.
↑ comment by [deleted] · 2010-10-25T21:04:30.378Z · LW(p) · GW(p)
I am not a utilitarian.
Virtue ethics handles scenarios of type 1 (plausible scenarios that are realistically complex) extremely well.
I agree with this similar statement: communities of people committed to being virtuous have good outcomes (as evaluated by Sewing-Machine). I do not agree with this similar statement: communities of people committed to being virtuous are less confused about morality than I am.
comment by Perplexed · 2010-10-23T20:50:02.053Z · LW(p) · GW(p)
Trolley problems appear not just in philosophy - some psychologists are using them in experiments as well. Here is one recent example.
In this case, at least, I think that many of your objections to the trolley problem don't apply. The researchers really are not interested in the ethics of deciding to sacrifice a fat man, they are interested in how the decision to sacrifice might change when the decision maker is on various drugs. And they already have brain imaging results for the trolley problem - so of course they would want to use the same problem in this experiment.
comment by sludgepuddle · 2010-10-23T20:31:42.979Z · LW(p) · GW(p)
Isn't The Least Convenient Possible World directly relevant here? I'm surprised it hasn't been mentioned yet.
↑ comment by shokwave · 2010-10-24T03:59:59.168Z · LW(p) · GW(p)
It occurs to me that the Least Convenient Possible World principle, and its application in producing trolley problems, is actually a dangerous idea. The best response in any situation that looks like a trolley problem is to figure out how to defuse the situation. So maybe you can change the tracks so the trolley runs down a different line; maybe you can derail it with a rock on the tracks; maybe you can warn the five people or somehow rescue them; perhaps you could even jump onto the trolley and apply the brakes. These options are surely less feasible than using the fat man's body, but the cost of the 'fat man' course of action is one life. (Naively, if the expected outcome of the third way is less than 1 life lost, the third way is preferable.)
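To put rough numbers on that naive comparison: suppose a third option saves all five with probability p and otherwise all five die, while pushing the fat man costs exactly one life with certainty. Expected deaths for the third way are 5(1 − p), so it is preferable whenever

\[5(1-p) < 1 \iff p > \tfrac{4}{5},\]

i.e. even a third option that fails one time in five does no worse than the 'sure' sacrifice.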
This is a little bit like that Mad Psychologist joke:
A Mad Psychologist accosts you and a friend of yours in the street, and forces a gun into your hand. "Shoot your friend, or shoot yourself. Are you a selfless kindhearted hero, or a black-hearted selfish monster? Who will you choose? Muahaha!" You point the gun at the Psychologist and ask, "Can't I just shoot you?"
The trolley problems tend to forbid this kind of thinking, and the Least Convenient Possible World works to defeat this kind of thinking. But I think that this third-way thinking is important, that when faced with being gored by the left horn or the right horn of the bull, you ought to choose to leap between the horns over the bull's head, and that if you force people to answer this trolley problem with X or Y but never Z, they will stop looking for Zs in the real world.
Alternatively, read conchis's post, as it is far more succinct and far less emotive.
↑ comment by NancyLebovitz · 2010-10-24T12:33:03.540Z · LW(p) · GW(p)
I don't know if your alternatives are that much less plausible than thinking you can throw someone who weighs a good bit more than you do and is presumably resisting, and have them land with sufficient precision to stop the trolley.
↑ comment by shokwave · 2010-10-24T14:23:00.372Z · LW(p) · GW(p)
I rather think they are more plausible and will save lives more surely than the fat man's corpse, but the thought experiment strongly implies that the fat man course of action will surely succeed - and I wanted to make my point without breaking any of the rules of the thought experiment, so as not to distract critics from the central argument.
↑ comment by prase · 2010-10-25T11:41:37.255Z · LW(p) · GW(p)
I strongly suspect that since many people don't like whatever conclusions can be inferred from trolley problems, they try to dismiss trolley problems as "dangerous". If I find something really dangerous, it is the willingness to label uncomfortable ideas as dangerous when there are no better arguments around. The historical set of "dangerous" ideas includes heliocentrism, evolution, atheism, and legal homosexuality.
Actually, nobody has yet demonstrated that, in reality, people who are used to thinking about trolley problems or other simplified thought experiments are more prone to bad thinking.
comment by simplicio · 2010-10-24T03:09:19.422Z · LW(p) · GW(p)
- It assumes perfect information about outcomes.
- It ignores the global secondary effects that local choices create.
- It ignores real human nature - which would be to freeze and be indecisive.
- It usually gives you two choices and no alternatives, and in real life, there are always alternatives.
I broadly agree with this, but there's another reason trolley problems are flawed. Namely: it is hard to deconvolute one's judgment of impracticality (a la 4) from one's judgment of moral impermissibility. Pushing a fat guy is just such an implausibly stupid way to stop a trolley that my intuition is going to keep shouting NO at that problem, no matter how much you verbally specify that I have perfect knowledge it will work.
↑ comment by ata · 2010-10-24T04:03:42.522Z · LW(p) · GW(p)
I wonder if it's better or worse to construct problems that are implausible from the very start, instead of being potentially realistic up to a certain point where you're asked to suspend disbelief. (Similar to how we do decision problems here, with Omega being portrayed as a superintelligence from another galaxy who is nearly omniscient and whose sole goal appears to be giving people confusing decision problems. IIRC, conventional treatments of decision theory often portray the Predictor as a human and do not explain why his predictions tend to be accurate, only specifying that he has previously been right 99% or 100% of the time. I suspect that format tends to encourage people to make excuses not to answer the real question.) So, suppose instead of the traditional trolley problem, we say "An invincible demon appears before you with a hostage tied to a chair, and he gives you a gun. He tells you that you can shoot the hostage in the head or untie her and set her free, and that if and only if you set her free, he will go off and kill five other people at random. What do you do?" Does that make it better or worse, in terms of your ability to separate the implausibility of the situation from your ability to come to a moral judgment?
↑ comment by Perplexed · 2010-10-24T04:30:25.633Z · LW(p) · GW(p)
Your version adds an irrelevancy - the possible moral agency of the demon provides an out for the test subject: "It is not my fault those 5 people died; the demon did it." It is much more difficult to shift moral responsibility to the trolley.
↑ comment by hacksoncode · 2011-01-21T17:23:45.970Z · LW(p) · GW(p)
I'm not sure why it's perceived as more difficult. The trolley didn't just appear magically on the tracks. Someone put it there and set it moving (or negligently allowed it to).
↑ comment by TheOtherDave · 2011-01-21T18:06:47.826Z · LW(p) · GW(p)
Well, I perceive it as more difficult because of my intuitions about how culpability travels up a causal chain.
For example, if someone dies because of a bullet fired into their heart from a gun shot by a hand controlled by a brain B following an instruction given by agent A, my judgment of culpability travels unattenuated through the bullet and the gun and the hand. To what degree it grounds out in B and A depends on to what degree I consider B autonomous... if B is an advanced ballistics-targeting computer I might be willing to call it a brain but still unwilling to hold it culpable for the death, for example. Either way, the bulk of the culpability grounds out there. I may go further and look at the social structures and contingent history that led to A and B (and the hand, bullet, gun, heart, etc.) being the way they are, but that will at best be in addition to the initial judgment, and I probably won't bother.
Similarly, if five people are hit by a trolley that rolled down a track that agent A chose not to stop, my intuitions of culpability ground out in A. Again, I may go further and look at the train switching systems and so on and so forth, but that will be in addition to the initial judgment, and I probably won't bother.
I find it helpful to remember that intuitions about culpability are distinct from beliefs about responsibility.
↑ comment by torekp · 2010-10-25T00:32:53.777Z · LW(p) · GW(p)
Problem #6: the situations are almost invariably underspecified. (Problem 2 is a special case of this.) The moral judgments elicited depend on features that are not explicit, about which the reader can only make assumptions. Such as, how did the five people get on the tracks? Kidnapped and tied up by Dick Dastardly? Do they work for the railroad (and might they then also be responsible for the maintenance of the trolley)? And so on.
When a researcher uses contrived problems to test people's moral intuitions, it would help to include a free-form question inviting the respondent to say what other information they need to form a moral judgment. That way, the next time the "trolley problem" is trotted out, the researchers will be in a better position to understand which features make a difference to the moral verdicts.
ETA: didn't see MatthewW's similar point until after I replied.
comment by jimrandomh · 2010-10-23T13:47:47.052Z · LW(p) · GW(p)
Yes, trolley problems are simple and reality is complex. We know this. The point of a trolley problem is not to provide a complete model for decision-making in general, but to extract single data points about our preferences. Imperfect information must be dealt with using probability distributions and expected utility; secondary effects must be included in our expected utility calculations. Indecision is irrelevant in the same sense that freezing up when facing Omega's problem under a time limit might leave someone with zero boxes. And of course we do have to spend resources looking for third options, but that doesn't mean every problem will ultimately provide one.
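To make the expected-utility framing concrete, here is a minimal sketch in Python; the probabilities and the simple lives-lost utility are invented purely for illustration, not taken from the comment above.

```python
# Minimal expected-utility sketch for a trolley-style choice under
# imperfect information. All numbers are invented for illustration.

# Each action maps to a distribution over outcomes: (probability, lives_lost).
actions = {
    "push":       [(0.7, 1),   # the push works: only the one man dies
                   (0.3, 6)],  # the push fails: he dies and so do the five
    "do_nothing": [(0.9, 5),   # the trolley hits the five
                   (0.1, 0)],  # it stops or derails on its own
}

def expected_lives_lost(outcomes):
    """Expected utility here is just (negated) expected lives lost."""
    return sum(p * lives for p, lives in outcomes)

for action, outcomes in actions.items():
    print(action, expected_lives_lost(outcomes))
# push       -> 2.5 expected lives lost
# do_nothing -> 4.5 expected lives lost
```

Under these made-up numbers pushing wins; with different numbers the ordering can flip, which is exactly why imperfect information belongs in the probabilities rather than in objections to the thought experiment. Secondary effects would enter as extra terms in the outcome utilities, not as a separate kind of reasoning.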
Also, this would be better without the example from politics. If you want to face a mind-killing problem, face it directly, taking special care to defuse the mind-killing aspects. If you just need an example for an unrelated point, then talk about Louis XVI during the French Revolution.
Replies from: fche↑ comment by fche · 2010-10-23T14:34:39.487Z · LW(p) · GW(p)
I think the point is that such a mind-killing problem doesn't answer anything. A reasonable person may say "both puzzle options are awful, I don't want to play", but that doesn't mean that the same person can't give a moral argument for action or inaction in a more realistic situation.
comment by taw · 2010-10-24T09:22:05.418Z · LW(p) · GW(p)
Here's a better, shorter version of your post:
- Trolley problems make a lot of sense in deontological ethics, to test supposedly universal moral rules in extreme situations.
- Trolley problems do not make much sense in consequentialist ethics, as the optimal action for a consequentialist can differ drastically between the messy, complicated real world and the idealized world of thought experiments.
Also, politics is the mind-killer, don't use examples like that if you can help it.
comment by Mass_Driver · 2010-10-24T06:38:50.224Z · LW(p) · GW(p)
No vote; post was at +2 and that seems appropriate to me.
Trolley problems have four weaknesses: true.
It's bad that trolley problems have weaknesses: sure, but you didn't propose an alternative way of forcing people to reason about thorny moral problems. Criticizing trolley problems without proposing an alternative is like criticizing liberal democracy without proposing an alternative -- easy, valid, and pointless.
Trolley thinking seeps into politics: highly unlikely; most people-thoughts about politics are had by people who remember nothing from any philosophy class they ever had. To the extent that people assume perfect information, deal in extremes, etc., it's because of generic human biases, and not because people's rationality is being hijacked by long exposure to trolley contemplation.
Biases in political thinking are bad: true, but trivial.
Replies from: None, Relsqui, ciphergoth↑ comment by [deleted] · 2010-10-24T16:14:00.937Z · LW(p) · GW(p)
It's bad that trolley problems have weaknesses: sure, but you didn't propose an alternative way of forcing people to reason about thorny moral problems. Criticizing trolley problems without proposing an alternative is like criticizing liberal democracy without proposing an alternative -- easy, valid, and pointless.
With that comment, I have to ask what you thought of the Less Wrong post about how critics do matter, even when they don't always have alternatives.
Replies from: Mass_Driver↑ comment by Mass_Driver · 2010-10-24T17:04:15.327Z · LW(p) · GW(p)
You mean you missed it?
↑ comment by Relsqui · 2010-10-24T08:16:22.355Z · LW(p) · GW(p)
most people-thoughts about politics are had by people who remember nothing from any philosophy class they ever had
Got a citation, or just a snarky opinion?
Replies from: wnoise, Mass_Driver↑ comment by Mass_Driver · 2010-10-24T17:07:24.933Z · LW(p) · GW(p)
Just a snarky opinion. But if you disagree, stop 10 people on any street that isn't in a college town and politely ask them if they know what a trolley problem is. Bet you 2 or less say "yes." Alternatively, ask them if they know of a way of pushing people to give an answer to a precise ethical question without dodging the dilemma. Bet you 1 or less can name such a method.
↑ comment by Paul Crowley (ciphergoth) · 2010-10-24T10:15:09.957Z · LW(p) · GW(p)
I found this comment really useful, thanks!
comment by prase · 2010-10-23T20:49:05.531Z · LW(p) · GW(p)
I think that trolley problems contain perfect information about outcomes in advance of them happening, ...
True.
... ignore secondary effects, ...
Depends on what you mean. The problem is stated with a simple question: do you push the lever / the fat man? You are not instructed to ignore whatever effects it may have. A trolley problem may be stated with some additional remark like "nobody will ever learn about your choice", which can implicitly suggest ignoring some possible real-world effects, but that isn't inherently present in every trolley problem.
... ignore human nature, ...
No. Some answers to the dilemma tend to ignore human nature, but the problem itself doesn't. And of course, there are many moral questions whose answer is both natural and correct with respect to whatever ethical theory we use, but those wouldn't make good material for interesting and non-trivial discussions about morality.
... and give artificially false constraints.
True.
Now, I think that's bad. Agree/disagree there?
Disagree. You object to the fact that the trolley problem is an idealised scenario. That objection seems to me about as valid as arguing that formal logic is bad because it ignores the biases and heuristics that all real people use.
... I think this kind of thinking seeps over into politics,
Any kind of thinking about decisions and morality can seep into politics. I don't think that too much reductionism is a frequent problem in politics, but even if it is, it is a problem of any formalised decision theory, or social science in general.
comment by Jonii · 2010-10-25T14:48:54.624Z · LW(p) · GW(p)
If you can come up with a better hypothetical, please do present it. The trolley one is used so often most likely because there are no good alternatives.
Also, you seem to have missed the point of the trolley problem as it has been presented. The main point in thinking about it is to notice that (a) you have ethical intuitions, and (b) they can flip to the polar opposite if the starting position is altered even a bit. That's just healthy experimentation that's supposed to provoke thought. I can't see how changing this example would prevent people from justifying their thoughts in politics with misguided simplifications.
comment by PeerInfinity · 2010-10-24T17:42:33.703Z · LW(p) · GW(p)
An obvious third alternative to the trolley problem: If you yourself are fat enough to save the 5 people from the trolley, then you jump onto the tracks yourself; you don't push the other guy.
But if you're not fat enough, then yes, of course you push the fat guy onto the tracks, without hesitation. And you plead guilty to his murder. And you go to jail. One person dying and one person going to jail is preferable to 5 people dying. Or, if you're so afraid of going to jail that you would rather die, then you can also jump onto the tracks after pushing the fat man, to be extra sure of being able to stop the trolley.
Technically, you should jump onto the tracks only if you think that by doing so you can increase the probability of saving the 5 people by more than 20 percentage points.
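A quick back-of-the-envelope check of that figure, assuming (as the comment implicitly does) that all six lives are valued equally at one unit each:

```latex
% Let \Delta p be the increase in the probability that the five are saved
% if you jump. Jumping costs your own life (one unit) for certain.
\[
\underbrace{5\,\Delta p}_{\text{expected lives saved by jumping}}
\;>\;
\underbrace{1}_{\text{your own life}}
\quad\Longleftrightarrow\quad
\Delta p \;>\; \tfrac{1}{5} = 20\ \text{percentage points}.
\]
```

So the 20% threshold is just the equal-weights break-even point; weighting your own life differently moves it.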
Here is an interesting blog post on the topic of third alternatives to philosophical puzzles like these: http://tailsteak.com/archive.php?num=497
I still consider myself a "Classical Utilitarian", by the way, even though I am aware of some of the disturbing consequences of this belief system.
And I agree with the main point of your post, and upvoted it. But the real purpose of trolley problems is to explore edge-cases of moral systems, not to advocate or justify real life policy.
Replies from: Vladimir_Nesov, nerzhin↑ comment by Vladimir_Nesov · 2010-10-24T21:38:04.461Z · LW(p) · GW(p)
An obvious third alternative to the trolley problem: If you yourself are fat enough to save the 5 people from the trolley, then you jump onto the tracks yourself; you don't push the other guy.
To yourself, your own life could be significantly more important than the lives of others, so the tradeoff is different, and you can't easily argue that the moral worth of 5 people is greater than the moral worth of your own life; whereas when you consider 6 arbitrary people, it does obviously follow that the (expected) moral worth of 5 people is greater than the moral worth of 1.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2010-10-25T06:51:37.302Z · LW(p) · GW(p)
To yourself, your own life could be significantly more important than the lives of others, so the tradeoff is different
If that's just a matter of personal utility functions then the problem vanishes into tautology. You act in accordance with whatever that function says; or in everyday language, you do whatever you want -- you will anyway. If not:
and you can't easily argue that the moral worth of 5 people is greater than the moral worth of your own life; whereas when you consider 6 arbitrary people, it does obviously follow that the (expected) moral worth of 5 people is greater than the moral worth of 1.
You can very easily argue that, and most moral systems do, including utilitarianism. (Eg. greater love hath no man etc.) You may personally not like the conclusion that you should jump in front of the trolley, but that doesn't change the calculation, it just means you're not doing the shutting up part. Or as it is written, "You gotta do what you gotta do".
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-10-25T10:30:56.468Z · LW(p) · GW(p)
If that's just a matter of personal utility functions then the problem vanishes into tautology. You act in accordance with whatever that function says; or in everyday language, you do whatever you want -- you will anyway.
Not necessarily at all, as you can also err. It's not easy to figure out what you want; hence it's difficult to argue about, compared to 5>1 for arbitrary people.
You can very easily argue that [your life is less valuable than that of 5 strangers], and most moral systems do, including utilitarianism.
But not as easily, and I don't even agree it's a correct conclusion, while for 5>1 arbitrary people I'd guess almost everyone agrees.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2010-10-25T13:01:43.195Z · LW(p) · GW(p)
But not as easily, and I don't even agree it's a correct conclusion, while for 5>1 arbitrary people I'd guess almost everyone agrees.
That's just the part where one fails to shut up while calculating. No version of moral utilitarianism that I've seen distinguishes the agent making the decision from any other. If the calculation obliges you to push the fat man, it equally obliges you to jump if you're the fat man. If a surgeon should harvest a healthy person's organs to save five other people, the healthy person should volunteer for the sacrifice. Religions typically enjoin self-sacrifice. The only prominent systems I can think of that take the opposite line are Objectivism and Nietzsche's writings, and nobody has boomed them here.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-10-25T17:34:36.660Z · LW(p) · GW(p)
Just from knowing that A and B are some randomly chosen fruits, I can't make a value judgment and declare that I prefer A to B, because states of knowledge are identical. But if it's known that A=apple and B=kiwi, then I can well prefer A to B. Likewise, it's not possible to have a preference between two people based on identical states of knowledge about them, but it's possible to do so if we know more. People generally prefer themselves to relatives and friends to similar strangers to distant strangers.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2010-10-26T06:57:53.560Z · LW(p) · GW(p)
People generally prefer themselves to relatives and friends to similar strangers to distant strangers.
The trolley problem isn't about what people generally prefer, but about exploring moral principles and intuitions with a thought experiment. Moral systems, with the exceptions I noted, generally do not prefer the agent. Beyond the agent, they usually do prefer close people to distant ones (Christianity being an exception here), but in the trolley problem, all of the people are distant to the agent.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-10-26T10:02:19.777Z · LW(p) · GW(p)
The trolley problem isn't about what people generally prefer, but about exploring moral principles and intuitions with a thought experiment.
What's that about, if not what people prefer?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2010-10-26T10:36:45.680Z · LW(p) · GW(p)
I already pointed out that most moral principles do not specially favour the agent, while most people's preferences do. Nobody wants to be the one who dies that others may live, yet some people have made that decision. Whatever moral principles and intuitions are, therefore, they are something different from "what people prefer".
But I am fairly sure you know all this already, and I am at a loss to see where you are going with this.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-10-26T17:30:20.636Z · LW(p) · GW(p)
Nobody wants to be the one who dies that others may live, yet some people have made that decision.
Was that a good decision (not a rhetorical question)? Who judges? I understand that the aggregated preference of humanity has a neutral point of view, and so in any given situation prefers the lives of 5 given normal people to the life of 1 given normal person. But is there any good reason to be interested in this valuation when making your own decisions?
Note that having a preference for your own life over the lives of others could still lead to decisions similar to those you'd expect from a neutral-point-of-view preference. Through logical correlation of decisions made by different people, your decision to follow a given principle makes other people follow it in similar situations, which might benefit you enough for the causal effect of (say) losing your own life to be outweighed by the acausal effect of having your life saved counterfactually. This would be exactly the case where one personally prefers to die so that others may live (so that others could've died so that you could've lived). It's not all about preference; even perfectly selfish agents would choose to self-sacrifice, given some assumptions.
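A toy illustration of that last claim, under two invented assumptions: in any such incident you are equally likely to be any of the six people involved, and everyone's decision is perfectly correlated with yours (they all follow whatever policy you do).

```python
# Survival probability per trolley incident for a purely selfish agent,
# under two universal policies. Assumptions (invented for illustration):
# you are equally likely to be any of the six people involved, and all
# agents' decisions are perfectly correlated with yours.

P_ONE = 1 / 6    # chance you are the one who would be sacrificed
P_FIVE = 5 / 6   # chance you are among the five on the tracks

survive_if_all_sacrifice = P_FIVE  # you die only when you're the one
survive_if_none_sacrifice = P_ONE  # the five always die; you live only
                                   # when you happen to be the one

print(survive_if_all_sacrifice)    # ~0.83
print(survive_if_none_sacrifice)   # ~0.17
```

Under those assumptions the selfish agent prefers the sacrifice policy, even though following it sometimes kills him; drop the perfect correlation and the argument weakens, as the replies below note.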
Replies from: NihilCredo↑ comment by NihilCredo · 2010-10-26T17:55:56.162Z · LW(p) · GW(p)
Acausal relationships between human agents are astronomically overestimated on LW.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-10-26T18:23:57.852Z · LW(p) · GW(p)
That was a normative note, not a descriptive one. If all people acted according to a better decision theory, their actions would (presumably - I still don't have a good understanding of this) look like having a neutral point of view, despite their preferences remaining self-centered. Of course, if most people act as they actually do, then any given person won't have enough acausal control over others.
Replies from: NihilCredo↑ comment by NihilCredo · 2010-10-26T18:34:27.565Z · LW(p) · GW(p)
Fair enough. The only small note I'd like to add is that the phrase "if all people acted according to a [sufficiently] better decision theory" does not seem to quite convey how distant from reality - or just realism - such a proposition is. It's less in the ballpark of "if everyone had IQ 230" than in that of "if everyone uploaded and then took the time to thoroughly grok and rewrite their own code".
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-10-26T18:39:03.956Z · LW(p) · GW(p)
I don't think that's true, as people can be as simple (in given situations) as they wish to be, thus allowing others to model them, if that's desirable. If you are precommitted to choosing option A no matter what, it doesn't matter that you have a brain with a hundred billion neurons; you can be modeled as easily as a constant answer.
Replies from: NihilCredo↑ comment by NihilCredo · 2010-10-26T18:58:55.824Z · LW(p) · GW(p)
You cannot precommit "no matter what" in real life. If you are an agent at all - if your variable appears in the problem - that means you can renege on your precommitment, even if it means a terrible punishment. (But usually the punishment stays on the same order of magnitude as the importance of the choice, allowing the choice to be non-obvious - possibly the rulemaker's tribute to human scope insensitivity. Not that this condition is even that necessary since people also fail to realise the most predictable and immediate consequences of their actions on a regular basis. "X sounded like a good idea at the time", even if X is carjacking a bulldozer.)
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-10-26T20:11:20.874Z · LW(p) · GW(p)
You cannot precommit "no matter what" in real life.
This is not a problem of IQ.
↑ comment by nerzhin · 2010-10-24T21:20:32.574Z · LW(p) · GW(p)
If you yourself are fat enough to save the 5 people from the trolley, then you jump onto the tracks yourself
This implies that if you are designing an AI that is expected to encounter trolley-like problems, it should precommit to eating lots of ice cream.
Replies from: PeerInfinity↑ comment by PeerInfinity · 2010-10-25T02:41:47.725Z · LW(p) · GW(p)
Ah, but what about a scenario where the only way to save the 5 people is to sacrifice the life of someone who is thin enough to fit through a small opening? Eating ice cream would be a bad idea in that case.
Replies from: shokwave↑ comment by shokwave · 2010-10-25T10:45:26.725Z · LW(p) · GW(p)
All this shows is that it's possible to construct two thought experiments which require precommitment to mutually exclusive courses of action in order to succeed. Knowing of only one, you would precommit to the correct course of action, but knowing both, what are your options? Reject the concept of a correct moral answer, reject the concept of thought experiments, reject one of the two thought experiments, or reject one of the premises of either thought experiment?
I think I would reject a premise: that the course of action offered is the one and only way to help. Either that, or bite the bullet and accept that there are actual situations in which a moral system will condemn all options - almost the beginnings of a proof of the incompleteness of moral theory.
Of course, it doesn't show that all possible moral theories are incomplete, just that any theory which founders on a trolley problem is potentially incomplete - but then, something tells me that given a moral theory, it wouldn't be hard to describe a trolley problem that is unsolvable in that theory.
comment by gmweinberg · 2010-10-23T19:08:03.845Z · LW(p) · GW(p)
Well, if the point of trolley problems is to gain some insight into how we form moral judgments, I suspect they don't do even that particularly well, since many respondents are going to give what they think is the approved answer, which is possibly different from what they would actually do. At best they provide insight into why we might think of certain actions as moral or immoral.
But I sometimes see things like trolley problems used argue that there is something wrong with peoples' decision making processes, based on apparent inconsistencies in their responses. I think this is a crock. The fact is, we often have to make decisions very quickly based on woefully incomplete information. Yes we use heuristics, and yes sometimes which heuristic gets applied (and thus which choice we make) depends on how the problem is phrased, and that means sometimes we will give "inconsistent" answers. This is not a defect, it is the inevitable result of not having the luxury of infinite time to consider one's response.
comment by Will_Sawin · 2010-10-23T14:36:43.964Z · LW(p) · GW(p)
Your "ignore secondary effects" claim is weak - trolley-type situations would happen very rarely and there'd be no point in responding.
Yes, the artificial constraints are bad, but they're necessary to get at the fundamental moral issue.
I don't think it seeps into politics. Similarity does not imply causation.
comment by Relsqui · 2010-10-23T05:33:13.038Z · LW(p) · GW(p)
Couple of typos:
- It the global secondary effects that local choices create.
Your sentence no verb.
We've got to take this rich fat cat and give it to these poor people
Give the cat to the poor people?
Replies from: lionhearted↑ comment by lionhearted (Sebastian Marshall) (lionhearted) · 2010-10-23T05:38:25.647Z · LW(p) · GW(p)
Cheers. I fixed the first one, but missed the second one. Both fixed now.
Replies from: Relsqui
comment by Desrtopa · 2010-12-05T04:59:09.445Z · LW(p) · GW(p)
I also think that you've misunderstood the significance of the trolley problem. As it happens, I was already intending to write a new post on the trolley problem when I came across this looking for related articles, so here it is, and I hope it explains why I find this post to be a worrying type of response to the dilemma.
comment by oliverbeatson · 2010-10-24T19:57:38.369Z · LW(p) · GW(p)
Ceteris paribus, I cannot imagine how many people stupid enough to be sitting about on a train track (probably violating property rights, for a start) it would take before saving them becomes worth sacrificing the average bystander.
Replies from: jimrandomh↑ comment by jimrandomh · 2010-10-24T20:44:42.124Z · LW(p) · GW(p)
Least convenient possible world: replace "sitting on a train track" with "tied to a train track by a villain".
Replies from: oliverbeatson↑ comment by oliverbeatson · 2010-11-02T22:58:42.470Z · LW(p) · GW(p)
This happened to me just last week.
In response to the downvote: Hmm, I wonder what fraction of people on railway tracks are there because they are reckless, and what fraction are victims who are not generally at fault? I assumed my 'ceteris paribus' covered this sufficiently, but perhaps villainous train-plots are more the norm than I thought. Given this, a policy of sacrificing innocent bystanders to stop trains subsidises the risk of recklessly hanging around train tracks, and so prevents the emergence of a mechanism whereby people stop ending up on train tracks - which is fairly surely, on net, suboptimal for the cause of not having people die in train accidents. Alternatively, I may have missed the point. This appeals to me as a possibility.
comment by magfrump · 2010-10-23T20:36:08.879Z · LW(p) · GW(p)
You make the claim that trolley problems ignore human nature and are thus conducive to sloppy thinking. It is claimed that people who know of trolley problems are more likely to be "anarchists and libertarians" and less likely to accept tyranny. Ignoring the fact that this is orthogonal to your point, I would endorse the stronger claim that people who know of trolley problems are in general better thinkers.
On the other hand, people who know trolley problems have probably taken a university philosophy class and are thus in a totally different demographic than the general population. And just because people who have taken a class about thinking clearly think more clearly doesn't mean that the tools we use in those classes are very good. Or that they are any good at all.
When you say that trolley problems ignore human nature, I think there is a deeper truth to that. When you ask people to decide between letting people die and pushing someone in front of a train, first off magnitude bias kicks in and they see it as "passive bad vs. active bad." Then virtue ethics kicks in, and they may see the situation as "me being bad" versus "something bad happening elsewhere."
When a general makes this decision for his troops, he is already in the position of having made decisions that put five platoons in danger; for his virtue-ethics sense of being a good person to kick in, he has to rescue the people he put in danger.
It has been brought up that humans may operate more naturally under virtue ethics; the standard trolley problem seems to me to be about times when virtue ethics and consequentialism conflict unpleasantly.
People have mentioned that the trolley problem specifically comes up rarely. I agree; I think that it is perfectly possible to pursue the virtues which make you a better consequentialist. So why spend time focusing on the conflict, as trolley problems do, rather than looking for a better set of philosophical tools?
comment by [deleted] · 2010-10-23T11:56:33.572Z · LW(p) · GW(p)
I think that trolley problems contain perfect information about outcomes in advance of them happening, ignore secondary effects, ignore human nature, and give artificially false constraints. Do you agree with that part?
"false constratins" carries a negative connotation. Here you setup your emotional argument later. This site is about rationality and you made a mistake here. Even if it is understandable and common human thinking it is that which is "sloppy thinking". Otherwise this premise is more or less correct.
Now, I think that's bad. Agree/disagree there?
Disagree. With the same argument you could say that animal testing is bad because it doesn't recognize the differences between humans and animals. The point is, it does. As long as you know the limits of the results of animal testing, they still serve as a pointer towards how humans will react to substances. And as long as you are aware that trolley problems are not real life problems they still serve as a pointer towards what a rational morality needs to answer.
Okay, finally, I think this kind of thinking seeps over into politics, and it's likewise bad there. Agree/disagree?
Agreed, except for the "likewise." If people make choices based on thought experiments without thinking about whether they really apply, that is bad. But you have only shown how people could do that, not that they actually do. If your post was meant as a general warning against mindlessly reducing real-life situations to thought experiments, it would make a valid point; however, the structure of your post doesn't make that clear.
comment by lionhearted (Sebastian Marshall) (lionhearted) · 2010-10-23T07:34:45.926Z · LW(p) · GW(p)
Tangent - is there any way, as on Reddit, to see how many upvotes and downvotes this has gotten? I see it yo-yoing back and forth; people don't seem neutral about this one. I'd be curious to see the exact numbers.