Please Don't Fight the Hypothetical
post by TimS · 2012-04-20T14:29:06.247Z · LW · GW · Legacy · 65 comments
It is a common part of moral reasoning to propose hypothetical scenarios. Whether it is our own Torture v. Specks or the more famous Trolley problem, asking these types of questions helps the participants formalize and understand their moral positions. Yet one common response to hypothetical scenarios is to challenge some axiom of the problem. This article is a request that people stop doing that, and an explanation of why this is an error.
First, a brief digression into law, which is frequently taught using hypothetical questioning. Under the Model Penal Code:
A person acts knowingly with respect to a material element of an offense when:
(i) if the element involves the nature of his conduct or the attendant circumstances, he is aware that his conduct is of that nature or that such circumstances exist; and
(ii) if the element involves a result of his conduct, he is aware that it is practically certain that his conduct will cause such a result.
Hypothetical: If Bob sets fire to a house with Charlie inside, killing Charlie, is Bob guilty of the knowing killing of Charlie? Bob genuinely believes that throwing salt over one's shoulder when one sets a building on fire protects all the inhabitants and ensures that they will not be harmed - and he did throw salt over his shoulder in this instance.
Let us take it as a given that setting fire to a house with someone inside is practically certain to kill them. Nonetheless, Bob did not knowingly kill Charlie because Bob was not aware of the consequence of his action. Bob had a false belief that prevented him from having the belief required under the MPC to show knowledge.
The obvious response here is that, in practice, the known facts will lead to Bob's conviction of the crime at trial. This is irrelevant. Bob will be convicted at trial because the jury will not believe that Bob actually held the belief he asserts. Unless Bob is insane or mentally deficient, the jury would be right to disbelieve Bob. But that misses the point of the hypothetical.
The purpose of the hypothetical is to distinguish between one type of mental state and a different type of mental state. If you don't understand that Bob is innocent of knowing killing if he truly believed that Charlie was safe, then you don't understand the MPC definition of knowing. How mental states are proven at real trials, or whether knowing killing should be the only criminal statute about killing, are different topics. Talking about those topics will not help you understand the MPC definition of knowing. Talking about those other topics is functionally identical to saying that you don't care about understanding the MPC definition of knowing.
Likewise, people who respond to the Trolley problem by saying that they would call the police are not talking about the moral intuitions that the Trolley problem intends to explore. There's nothing wrong with you if those problems are not interesting to you. But fighting the hypothetical by challenging the premises of the scenario is exactly the same as saying, "I don't find this topic interesting for whatever reason, and wish to talk about something I am interested in."
In short, fighting the premises of a hypothetical scenario is changing the topic to focus on something different from the topic of conversation intended by the presenter of the hypothetical question. When changing the topic is appropriate is a different discussion, but it is obtuse to fail to notice that one is changing the subject.
Edit: My thesis is "Notice and Justify changing the subject," not "Don't change the subject."
65 comments
Comments sorted by top scores.
comment by David_Gerard · 2012-04-20T19:49:00.738Z · LW(p) · GW(p)
People are very good at fighting hypotheticals because hypotheticals are typically an attempt to elicit a response that can be used against them. This infuriates tutors in Philosophy 100 classes, of course ...
Replies from: TheOtherDave, brazil84
↑ comment by TheOtherDave · 2012-04-20T19:58:32.539Z · LW(p) · GW(p)
Sure. It used to infuriate me, too.
I have since then adopted the habit of responding with some variation of "I observe that you're fighting the hypothetical; I infer that's because you don't trust my motives in bringing it up. That's fair. Perhaps we can drop this subject for now, and return to it at some later time when I've earned your trust?"
Replies from: David_Gerard
↑ comment by David_Gerard · 2012-04-20T21:55:23.751Z · LW(p) · GW(p)
That's brilliant. Or looks brilliant, anyway - how does it tend to work out in practice?
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-04-20T23:13:44.792Z · LW(p) · GW(p)
Poorly.
Though better than anything else I've ever tried.
The only thing I can really say in its favor is it keeps me from getting quite so infuriated.
Edit: More precisely... the thing that works better is not approaching people who don't trust my motives with hypothetical questions. But assuming I've screwed that up, it works better than anything I've tried.
Replies from: David_Gerard
↑ comment by David_Gerard · 2012-04-20T23:28:40.362Z · LW(p) · GW(p)
Heh. Oh well, it looked good!
↑ comment by brazil84 · 2014-05-12T13:34:22.742Z · LW(p) · GW(p)
People are very good at fighting hypotheticals because hypotheticals are typically an attempt to elicit a response that can be used against them. This infuriates tutors in Philosophy 100 classes, of course ...
Maybe that's so, but people need to lighten up a bit. On a discussion board such as this one, if you concede a point and later think better of it, there is nothing stopping you from explicitly revising your position. And actually learning something in the process.
comment by ewbrownv · 2012-04-20T19:01:01.497Z · LW(p) · GW(p)
Sadly, we don't live in a world where all hypotheticals are actually neutral exercises in deductive logic. In real debates it's quite common to see people constructing hypotheticals that implicitly assume their position on some issue is the correct one. If you accept one of these hypotheticals you've already lost the argument, regardless of the actual merits of the case. Thus, when you find yourself confronted with a hypothetical based on an incoherent ontology, corrupt definitions, or other examples of confused or dishonest thinking, often the only viable response is to challenge the validity of the hypothetical itself.
In other words, 'don't fight the hypothetical' is generally equivalent to 'let your opponent define the terms of the debate however he pleases' - rarely good advice, especially outside of a classroom setting.
Replies from: TimS, orthonormal, brazil84, MugaSofer
↑ comment by TimS · 2012-04-20T19:25:46.699Z · LW(p) · GW(p)
I don't think I disagree. Fighting the hypo is the equivalent of changing the subject. Sometimes, that's a good idea. I certainly endorse dodging loaded questions. Even better is calling people on the loaded question, if you are brave enough and think it will help. (Unfortunately, that's usually a question of relative status, not relative rationality - but trying can still sometimes raise the sanity line).
My point was only that people sometimes get confused about what is and what isn't staying on topic.
↑ comment by orthonormal · 2012-04-20T19:16:47.933Z · LW(p) · GW(p)
Is there a way to tell the difference between helpful hypotheticals that illustrate confusing topics in stark terms (like Newcomb's Dilemma) and malicious ones that try and frame a political argument for rhetorical purposes (I can think of some examples, I'm sure you can as well, but let's try to avoid particulars)?
Replies from: billswift
↑ comment by billswift · 2012-04-20T21:07:39.216Z · LW(p) · GW(p)
As I pointed out on the thread about noticing when you're rationalizing:
You can find multiple, independent considerations to support almost any course of action. The warning sign is when you don't find points against a course of action. There are almost always multiple points both for and against any course of action you may be considering.
If the hypothetical is structured so as to eliminate most courses of action, it has probably been purposely framed that way. Hypotheticals are intended to illuminate potential choices; if all but one choice has been eliminated, it is a biased alternative (a loaded question).
Note that this is one reason I like considering fiction in ethical (and political theory, which is basically ethics writ large) reasoning - the scenarios are much simpler and more explicit than real-world ones, but richer and less likely to be philosophically biased than scenarios designed for that purpose.
↑ comment by brazil84 · 2014-05-12T13:28:18.828Z · LW(p) · GW(p)
Sadly, we don't live in a world where all hypotheticals are actually neutral exercises in deductive logic. In real debates it's quite common to see people constructing hypotheticals that implicitly assume their position on some issue is the correct one. If you accept one of these hypotheticals you've already lost the argument, regardless of the actual merits of the case.
I'm not sure exactly what you have in mind, but I think in that situation the best response is to be explicit about what's going on.
e.g. "I would definitely choose Option A as I think that generally speaking and all things being equal, it's better not to torture puppy dogs. But so what? What does that have to do with my original point?"
↑ comment by MugaSofer · 2012-12-24T22:10:26.565Z · LW(p) · GW(p)
Thus, when you find yourself confronted with a hypothetical based on an incoherent ontology, corrupt definitions, or other examples of confused or dishonest thinking, often the only viable response is to challenge the validity of the hypothetical itself.
I generally just ironman their flawed hypothetical and carry on. Sometimes it turns out that the flaws are easily fixed and doing so does not invalidate their original point.
comment by JGWeissman · 2012-04-20T16:21:05.076Z · LW(p) · GW(p)
See also Least Convenient Possible World.
Replies from: roystgnr
↑ comment by roystgnr · 2012-04-20T20:34:22.262Z · LW(p) · GW(p)
For a broad enough definition of "possible", the least convenient possible world is always maximally inconvenient, no matter what your epistemology or morality or decision theory is. Any formalized deductive logic is always going to have true-but-unproveable theorems. Any priors, Occam-like or not, are always going to lead to wildly incorrect posteriors when inducting based on data from a sufficiently perverse universe. Any decision theory is always going to fail against an Omega whose behavior is "see if he's running that decision theory, then punish him accordingly".
Normally that's not the end of the world. There is probably no Omega who hates us, and so we don't have to worry about our Universe being a deceptive design or about an otherwise rational decision-making process being punished because of its rationality.
But when we start answering hypotheticals, then we are potentially talking about deceptively designed universes whose conceptual existence is a deliberate attempt to punish viewpoints not held by the hypothesizer! That doesn't mean that hypotheticals aren't useful tools or shouldn't usually be answered, but it might still be reasonable to keep the possibility of exceptions in mind.
comment by Daniel_Starr · 2012-04-20T19:43:37.308Z · LW(p) · GW(p)
One good reason to "fight the hypothetical" is that many people propose hypotheticals insincerely.
Sometimes this is obvious: "But what if having a gun was the only way to kill Hitler?" or alternatively "But what if a world ban on guns had prevented Hitler?" are clearly not sincere questions but strawman arguments offered to try to sneak you into agreement with a much more mundane and debatable choice.
But the same happens with things like the Trolley problem or Torture v. Specks. Many "irrational" answers to hypotheticals come out of reflexes that are perfectly rational under ordinary circumstances. If the hypothetical-proposer isn't willing to allow that the underlying reflex has its uses, the hypothetical-proposer is ignorant or untrustworthy.
For example, while in principle killing by inaction is as bad as killing by action, in practice it's much easier to harm with a nonconformist action than to help -- consider how many times in a day you could kill someone if you suddenly wanted to. So a reflexive bias to have a higher standard of evidence for harmful action than harmful inaction has its uses.
So when someone is using hypotheticals to prove "people are stupid", they're not proving that in quite the way they think. Should we really play along with them?
Replies from: TimS
↑ comment by TimS · 2012-04-20T19:56:52.691Z · LW(p) · GW(p)
But the same happens with things like the Trolley problem or Torture v. Specks. Many "irrational" answers to hypotheticals come out of reflexes that are perfectly rational under ordinary circumstances. If the hypothetical-proposer isn't willing to allow that the underlying reflex has its uses, the hypothetical-proposer is ignorant or untrustworthy.
It depends a bit on how well the discussion is going - if it isn't trying to reach true beliefs then that is your problem, not the appropriateness of the hypothetical.
But if the discussion is going well, then knee-jerk reactions to the hypothetical aren't helpful. The fact that a reflex answer usually works is no evidence that it handles edge cases well. And if it doesn't handle edge cases well, then it might not be a very coherent position after all.
Replies from: Daniel_Starr
↑ comment by Daniel_Starr · 2012-04-20T20:10:02.598Z · LW(p) · GW(p)
if it isn't trying to reach true beliefs then that is your problem, not the appropriateness of the hypothetical.
Yeah, that's what I often find - otherwise smart people using an edge case to argue unreasonable but "clever" contrarian things about ordinary behavior.
"I found an inconsistency, therefore your behavior comes from social signalling" is bad thinking, even when a smart and accomplished person does it.
So if someone posts a hypothetical, my first meta-question is whether they go into it assuming that they should be curious, rather than contemptuous, about inconsistent responses. "Frustrated" I respect, and "confused" I respect, but "contemptuous"...
If you're comfortable being contemptuous about ordinary human behavior, you have to prove a whole heck of a lot to me about your practical success in life before I play along with your theoretical construct.
Ordinary human behavior can be your opponent, but you can't take it lightly - it's what allowed you to exist in the first place.
Replies from: TimS
comment by Luke_A_Somers · 2012-04-21T01:13:52.599Z · LW(p) · GW(p)
I remember taking the ridiculous 'moral compass quiz' which was basically 101 repetitions of the trolley problem, sometimes by inaction, stated differently each time (and then symmetrized so each action variant was replaced by inaction). We were explicitly told to assume our knowledge is certain. Some of the circumstances assumed long chains of improbable-seeming events between sacrificing someone and saving 5 others.
I told 'em that maintaining uncertainty is irremovable from morality. Morality minus the capability of considering that you're wrong is broken, and leads to horrible outcomes.
Replies from: fubarobfusco, pedanterrific
↑ comment by fubarobfusco · 2012-04-21T01:42:44.944Z · LW(p) · GW(p)
We were explicitly told to assume our knowledge is certain. Some of the circumstances assumed long chains of improbable-seeming events between sacrificing someone and saving 5 others.
Eliezer commented on this back in "Ends Don't Justify Means (Among Humans)", attempting to reconcile consequentialism with the possibility (observed in human politics) that humans may be running on hardware incapable of consequentialism accurate enough for extreme cases:
And now the philosopher comes and presents their "thought experiment"—setting up a scenario in which, by stipulation, the only possible way to save five innocent lives is to murder one innocent person, and this murder is certain to save the five lives. "There's a train heading to run over five innocent people, who you can't possibly warn to jump out of the way, but you can push one innocent person into the path of the train, which will stop the train. These are your only options; what do you do?"
An altruistic human, who has accepted certain deontological prohibitions—which seem well justified by some historical statistics on the results of reasoning in certain ways on untrustworthy hardware—may experience some mental distress, on encountering this thought experiment.
So here's a reply to that philosopher's scenario, which I have yet to hear any philosopher's victim give:
"You stipulate that the only possible way to save five innocent lives is to murder one innocent person, and this murder will definitely save the five lives, and that these facts are known to me with effective certainty. But since I am running on corrupted hardware, I can't occupy the epistemic state you want me to imagine. Therefore I reply that, in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder the one innocent person to save five, and moreover all its peers would agree. However, I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings."
Now, to me this seems like a dodge. I think the universe is sufficiently unkind that we can justly be forced to consider situations of this sort. The sort of person who goes around proposing that sort of thought experiment, might well deserve that sort of answer. But any human legal system does embody some answer to the question "How many innocent people can we put in jail to get the guilty ones?", even if the number isn't written down.
As a human, I try to abide by the deontological prohibitions that humans have made to live in peace with one another. But I don't think that our deontological prohibitions are literally inherently nonconsequentially terminally right. I endorse "the end doesn't justify the means" as a principle to guide humans running on corrupted hardware, but I wouldn't endorse it as a principle for a society of AIs that make well-calibrated estimates. (If you have one AI in a society of humans, that does bring in other considerations, like whether the humans learn from your example.)
I have to admit, though, this does seem uncomfortably like the old aphorism quod licet Jovi, non licet bovi — "what is permitted to Jupiter is not permitted to a cow."
Replies from: Random832
↑ comment by Random832 · 2012-04-23T12:33:32.920Z · LW(p) · GW(p)
But since I am running on corrupted hardware, I can't occupy the epistemic state you want me to imagine.
It occurs to me that many (maybe even most) hypotheticals require you to accept an unreasonable epistemic state. Even something so simple as trusting that Omega is telling the truth [and that his "fair coin" was a quantum random number generator rather than, say, a metal disc that he flipped with a deterministic amount of force, but that's easier to grant as simple sloppy wording]
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-04-23T13:04:44.268Z · LW(p) · GW(p)
In general, thought experiments that depend on an achievable epistemic state can actually be performed and don't need to remain thought experiments.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2012-04-23T14:16:09.424Z · LW(p) · GW(p)
They can depend on an achievable epistemic state, but be horribly impractical or immoral to set up (hello trolley problems).
↑ comment by pedanterrific · 2012-04-21T01:28:32.134Z · LW(p) · GW(p)
Okay... so, you are 99 percent certain of your information. Does that change your answer?
Replies from: Luke_A_Somers, Viliam_Bur
↑ comment by Luke_A_Somers · 2012-04-23T14:06:37.676Z · LW(p) · GW(p)
Somehow, I doubt I could achieve any more than 1% confidence that, say, the best plan to save 5 children in a burning building was to stab the fireman with a piece of glass and knock his ladder over and pull him off it so his body fell nearly straight down and could serve as a cushion for the 5 children, who would each get off the body soon enough to let the next one follow. Actually, I was supposed to assume not only that this is the best plan, but that it is certain to work, and that if I don't carry it out the 5 children will certainly die, but not the fireman or me.
Or, alternately, that I'm on a motorboat, and there's a shark in the water who will eat 5 people unless I hit the accelerator soon enough and hard enough that one of my existing passengers will certainly be knocked off the back, and will certainly be eaten by the shark (perhaps it was a second shark, which would raise the likelihood that I couldn't do some fancy boatwork to get 'em all). I do not have time to tell anyone to hold on - I absolutely MUST goose the gas to get there that extra half second early that somehow makes the difference between all five being eaten and none of the five being eaten.
Replies from: pedanterrific
↑ comment by pedanterrific · 2012-04-23T15:57:36.932Z · LW(p) · GW(p)
So your issue isn't actually with (moral) reasoning under uncertainty or the trolley problem in general, it's just with highly specific, really bad examples. Gotcha.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2012-04-24T15:13:21.362Z · LW(p) · GW(p)
I think in general, if your plan is complicated, involves causing someone else a large up-front cost, and you have very high confidence in it, the moral thing to do is to audit your certainty.
↑ comment by Viliam_Bur · 2012-04-23T12:13:13.173Z · LW(p) · GW(p)
Just because I feel 99% certain of some information, it does not mean that I am right in 99% of situations. This should be included in the calculation.
Even if I were a perfect Bayesian reasoner, most people aren't. Are we going to solve this one specific situation, or are we creating a general rule that all people will follow? Because it may be better to let 5 people die once than to create a precedent that will allow all kinds of irrational folks to go around and kill a random person every time they feel that by doing so they have prevented hypothetical deaths of five other persons.
(If you want to go on and ask whether it is good to kill one person to prevent a 99% chance of five people dying, assuming that we are absolutely sure about all these data, and assuming that this sets no kind of precedent or slippery slope for people in similar circumstances... then the answer is: yes. -- But in real life the probability that such a situation happens is much smaller than the probability that I misunderstood the situation.)
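A minimal sketch of the kind of calculation this points at, with made-up numbers and a simplifying assumption of my own (if the "99% certain" belief is wrong, the one person is killed while the five were never actually in danger):

```python
# Illustrative sketch only: the numbers and the "if wrong, nobody was saved"
# assumption are mine, not anything stated in the thread.

def expected_deaths(act: bool, true_hit_rate: float) -> float:
    """Expected deaths from one use of the rule 'kill one to save five
    whenever you feel 99% certain', given how often that feeling is right.

    Assumes: if the belief is right, acting trades 1 death for 5 saved;
    if it is wrong, the 1 person is killed and the 5 were never in danger.
    """
    if act:
        return 1.0                   # the one person dies whether or not I was right
    return 5.0 * true_hit_rate       # the five die only if the belief was right

for rate in (0.99, 0.50, 0.20, 0.05):
    act, refrain = expected_deaths(True, rate), expected_deaths(False, rate)
    print(f"felt 99%, actually right {rate:.0%}: act={act:.2f}, refrain={refrain:.2f}")

# Under these assumptions, acting is only net harmful once the real hit rate
# drops below 1/5 -- the regime a general rule applied by poorly calibrated
# people could easily end up in.
```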
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-04-23T13:13:25.155Z · LW(p) · GW(p)
Sure, but knowing that doesn't necessarily help. If I, in my travels, find myself in what seems to me to be a situation where I'm standing by a switch on a train track, while what I estimate to be a train approaches in such a way that I expect it will go down track A if left alone or track B if I pull the switch, and I observe what appear to be six people, one of whom is tied to track B and five of whom are tied to track A, it is of course possible that all of my observations and estimations are incorrect. But I'm still left with the question of what to do.
I mean, sure, if I pull the switch and it turns out that the five people who I thought were tied to track A are just lifelike mannequins, then I've just traded away a world in which nobody dies for a world in which someone dies, which isn't a choice I endorse.
On the other hand, if I don't pull the switch and it turns out that the person I thought was tied to track B is just a lifelike mannequin, then I've just traded away a world in which nobody dies for a world in which five people die, which isn't a choice I endorse either.
Any choice I make might be wrong, and might result in unnecessary deaths. But that doesn't justify any particular choice, including the choice to not intervene.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2012-04-23T14:14:01.914Z · LW(p) · GW(p)
The example you listed doesn't come close to addressing the topic. It isn't trolley problems in general - it's a particular variant of trolley problems where the rest of the information in the problem is fighting the certainty you are told to assume about the options.
comment by thomblake · 2012-04-20T21:04:58.274Z · LW(p) · GW(p)
Likewise, people who respond to the Trolley problem by saying that they would call the police are not talking about the moral intuitions that the Trolley problem intends to explore. There's nothing wrong with you if those problems are not interesting to you. But fighting the hypothetical by challenging the premises of the scenario is exactly the same as saying, "I don't find this topic interesting for whatever reason, and wish to talk about something I am interested in."
In moral reasoning, that isn't necessarily the case.
Thought experiments in the ethics literature are used to determine one's intuitions about morality, and the subject there is not the intuitions, but morality, and the intuitions are merely a means to an end. But, ethics thought experiments are often completely unrealistic - exactly the places we'd expect our normal heuristics to break down, and thus places where our intuition is useless. So while in a sense denying the hypothetical is changing the subject, in this case it's more like changing the subject back to what it was supposed to be. ("You want to talk about my intuitions about this case, but they're irrelevant, so let's talk about something relevant")
Replies from: novalis
comment by LyleN · 2014-11-25T19:58:19.365Z · LW(p) · GW(p)
In some situations you can keep people from fighting the hypothetical by asking a question which explicitly states the point of the hypothetical, instead of asking something vague.
E.g., for Newcomb's paradox, instead of asking "What do you choose?" (potential answer: "I don't really need a million dollars, so I'll just take the box with the $1000") ask "which choice of box(es) maximizes expected monetary gain?"
E.g., for the Monty Hall problem, instead of asking "Would you switch doors?" (potential answers: "I'd probably just stick with my gut" or "I like goats!") ask "does switching or not switching maximize your chances of getting the car?"
E.g., for the Prisoner's Dilemma, instead of "What do you do?" (potential answer: "try to escape!") ask "Which of the offered choices minimizes expected jail time?"
Additionally, since they are framed in a less personal way, these kinds of questions may be less likely to be perceived as traps or set-ups.
Unfortunately it's much trickier to apply this strategy to hypotheticals about moral intuition, because usually the purpose of these is to see what considerations the other person in particular attaches to a question like "what do you do?" (E.g., you can't just ask "which choice maximizes utility?" instead of "what do you do?" in a trolley problem, without circumventing the whole original point of the question). You may still be able to rein in responses by specifically asking something like "would you flip or not flip the lever?", though.
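For the reframed Monty Hall question above, the answer is sharp enough to check by simulation. A quick sketch (the door numbering and the host's deterministic choice are simplifying assumptions of mine, but they don't change the standard result):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round: car behind a random door, player picks door 0,
    host opens a goat door, player then switches or stays."""
    car = random.randrange(3)
    pick = 0
    # Host opens some door that is neither the player's pick nor the car.
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print(f"always {'switch' if switch else 'stay'}: win rate ~ {wins / trials:.3f}")
# Staying wins about 1/3 of the time; switching wins about 2/3.
```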
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2014-11-25T22:21:46.939Z · LW(p) · GW(p)
FWIW, my answer in trolley problems is usually some variant of "Well, what I would do is probably dither ineffectually between alternatives until the onrushing train takes the choice out of my hands, but what I endorse doing is killing the smaller group." (Similarly, the truth is that faced with the canonical PD I would probably do whatever the cop tells me to do, whether I think it's the right thing to do or not. Fear is like that, sometimes. But that's really not the point of the question.)
comment by Evan_Guiney · 2012-04-20T20:09:23.298Z · LW(p) · GW(p)
I agree that challenging the axioms of a hypothetical can be understood as equivalent to changing the topic. But it simply doesn't follow from this that the proper way to respond to a hypothetical is never to change the topic! That's a patent logical fallacy. Your article is correct in most of its analysis, but the normative conclusion in the title is just not justified.
Do you really propose that the way to respond to Chalmers' Zombie hypothetical is to accept the axioms? But the axioms are ridiculous, and that's the problem!
In general, given a sufficiently strong set of axioms, a hypothetical can be constructed to irrefutably argue for any conclusion at all, with complete logical validity. Outlandish conclusions will require outlandish axioms, but they will be available. This might be ok if axioms were always clearly marked and explicit, but such is an impossibility with a language like English and its many hidden assumptions, and is doubly impossible given the need to reason with our brains which incorporate many axioms (read: biases) to which we have no conscious access.
Replies from: TimS
↑ comment by TimS · 2012-04-20T20:13:51.099Z · LW(p) · GW(p)
The problem is that people don't always seem to notice that they are changing the topic. If you intentionally change the topic, that's one thing. If you unintentionally change the topic, I assert you are making a mistake.
It is not my intent to argue that changing the topic is not appropriate, because it often is the appropriate response.
comment by Random832 · 2012-04-20T17:50:24.026Z · LW(p) · GW(p)
Torture v. Specks
The problem with that one is it comes across as an attempt to define the objection out of existence - it basically demands that you assume that X negative utility spread out across a large number of people really is just as bad as X negative utility concentrated on one person. "Shut up and multiply" only works if you assume that the numbers can be multiplied in that way.
That's also the only way an interesting discussion can be held about it - if that premise is granted, all you have to do is make the number of specks higher and higher until the numbers balance out.
(And it's in no way equivalent to the trolley problem because the trolley problem is comparing deaths with deaths)
Replies from: orthonormal, TimS
↑ comment by orthonormal · 2012-04-20T19:14:24.029Z · LW(p) · GW(p)
For some reason, people keep thinking that Torture vs. Specks was written as an argument for utilitarianism. That makes no sense, because it's the sort of thing that makes utilitarians squirm and deontologists gloat. What it is, instead, is a demand that if you're going to call yourself a utilitarian, you'd better really mean it.
EY's actual arguments for utilitarianism are an attempt to get you to conclude that you should choose Torture over Specks, despite the fact that it feels wrong on a gut level.
Replies from: wedrifid, Alicorn
↑ comment by wedrifid · 2012-04-21T03:40:23.298Z · LW(p) · GW(p)
For some reason, people keep thinking that Torture vs. Specks was written as an argument for utilitarianism. That makes no sense, because it's the sort of thing that makes utilitarians squirm and deontologists gloat.
That gloating makes even less sense! There are people who gloat that their morality advocates doing that much additional harm to people? That sounds like a terrible move!
It seems to me that by the time you evaluate which one of two options is worse you have arrived both at the decision you would advocate and the decision you would be proud of. The only remaining cause for the boasting to be biased, after you have thought it through, would be if you thought the target audience was made up largely of people on your team.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-04-21T17:38:29.897Z · LW(p) · GW(p)
TvDS is a thought experiment in which (particular flavors of) deontology support a conclusion that most people find comfortable ("torture is bad, dust specks in your eye are no big deal") and (particular flavors of) utilitarianism support a conclusion that most people find uncomfortable ("torture is no big deal, dust specks in your eye are bad").
It makes perfect sense to me that people find satisfying being exposed to arguments in which their previously held positions make them feel comfortable, and find disquieting being exposed to arguments in which their previously held positions make them feel uncomfortable.
Replies from: wedrifid
↑ comment by wedrifid · 2012-04-21T17:46:31.396Z · LW(p) · GW(p)
My point is that the motive for the boast is just that most people are naturally deontologists and so can be anticipated to agree with the deontological boast. Aside from that it is trivially the case that people can be expected to be proud of reaching the correct moral decision based on the fact that they arrived at any decision at all.
↑ comment by Alicorn · 2012-04-20T19:21:20.047Z · LW(p) · GW(p)
it's the sort of thing that makes utilitarians squirm and deontologists gloat
*gloat*
That is even more fun as an emote than I thought it would be.
Replies from: siodine
↑ comment by siodine · 2012-04-20T19:38:52.505Z · LW(p) · GW(p)
Do you have some preexisting explanation for why you're a deontologist?
Replies from: TheOtherDave, Alicorn
↑ comment by TheOtherDave · 2012-04-20T19:55:49.189Z · LW(p) · GW(p)
I am experiencing a strong desire at this moment for Alicorn to reply "Because it's the right thing to be."
It is only marginally stronger than my desire for her to reply "Because I expect it to have good results," though.
Replies from: siodine, thomblake, Alicorn
↑ comment by Alicorn · 2012-04-20T20:13:05.643Z · LW(p) · GW(p)
I think "because it's the right thing to be" sounds more virtue-ethicist than deontologist.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-04-20T20:29:34.655Z · LW(p) · GW(p)
Is "because I should be" better?
Or do I not understand deontology well enough to make this joke?
↑ comment by TimS · 2012-04-20T17:59:30.550Z · LW(p) · GW(p)
I choose specks, but I found the discussion very helpful nonetheless.
Specifically, I learned that if you believe suffering is additive in any way, choosing torture is the only answer that makes sense. If you don't believe that (and I don't), then your references to "negative utility" are not as well defined as you think.
Edit: In other words, I think Torture v. Specks is just a restatement of the Repugnant Conclusion
Replies from: APMason, Random832, Nornagest
↑ comment by APMason · 2012-04-20T20:05:27.815Z · LW(p) · GW(p)
Edit: In other words, I think Torture v. Specks is just a restatement of the Repugnant Conclusion.
The Repugnant Conclusion can be rejected by average-utilitarianism, whereas in Torture vs. Dustspecks average-utilitarianism still tells you to torture, because the disutility of 50 years of torture divided among 3^^^3 people is less than the disutility of 3^^^3 dustspecks divided among 3^^^3 people. That's an important structural difference to the thought experiment.
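A toy version of that comparison, with a stand-in for 3^^^3 (which can't be represented directly) and made-up utility values, just to show the structure of the average-utilitarian argument:

```python
# Toy illustration with made-up numbers; 3^^^3 is far too large to represent,
# so N is just a stand-in -- the point is only that the conclusion holds for
# any sufficiently large N.

N = 10**30                    # stand-in for 3^^^3 people
torture_disutility = 10**9    # 50 years of torture, in arbitrary utility units
speck_disutility = 1e-6       # one dust speck, in the same units

avg_if_torture = torture_disutility / N        # torture's harm averaged over everyone
avg_if_specks = (speck_disutility * N) / N     # one speck each, so the average stays fixed

print(f"average disutility, torture option: {avg_if_torture:.3e}")
print(f"average disutility, specks option:  {avg_if_specks:.3e}")
# As N grows, the torture option's average harm shrinks toward zero while the
# specks option's does not, so average utilitarianism also says to choose torture.
```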
↑ comment by Random832 · 2012-04-20T18:16:52.598Z · LW(p) · GW(p)
Specifically, I learned that if you believe suffering is additive in any way, choosing torture is the only answer that makes sense.
Right. The problem was that the people on that side seemed to have a tendency to ridicule the belief that it is not.
Replies from: TimS
↑ comment by TimS · 2012-04-20T19:12:36.590Z · LW(p) · GW(p)
Yes, the ridicule was annoying, although I think many have learned their lesson.
The problem with our position is that it leaves us vulnerable to being Dutch-booked by opponents who are willing to be sufficiently cruel. (How much would you pay not to be tortured? Why not that amount plus $10?)
Replies from: David_Gerard
↑ comment by David_Gerard · 2012-04-20T22:38:01.465Z · LW(p) · GW(p)
Yes, the ridicule was annoying, although I think many have learned their lesson.
Hmm ... what examples of learning their lesson are you thinking of?
Replies from: TimS
↑ comment by TimS · 2012-04-21T00:38:52.602Z · LW(p) · GW(p)
This is a much more mature response to the debate.
Replies from: orthonormal
↑ comment by orthonormal · 2012-04-22T18:06:48.839Z · LW(p) · GW(p)
Let's be clear: I do subscribe to utilitarianism, just not a naive one. (Long-range consequences and advanced decision theories make a big difference.) If I had magical levels of certainty of the problem statement, then I'd bite the bullet and pick torture. But in real life, that's an impossible state for a human being to occupy on object-level problems.
Truly meta-level problems are perhaps different; given a genie that magically understands human moral intuitions and is truly motivated to help humanity, I would ask it to reconcile our contradictory intuitions in a utilitarian way rather than in a deontological way. (It would take a fair bit of work to turn this hypothetical into something that makes real sense to ask, but one example is how to structure CEV.)
Does that make sense as a statement of where I stand?
comment by MinibearRex · 2012-04-26T21:14:32.093Z · LW(p) · GW(p)
I have on many occasions criticized a thought experiment, but I generally manage to avoid the specific type of criticism that you're talking about. The real problem is that a shockingly large number of the thought experiments discussed in philosophy are experiments of the form "If you kill someone, but they don't actually die, did you do something morally wrong?". The hypothetical situation is incoherent.
Earlier this week, I got into an argument in which someone attempted to demonstrate that things such as rights and liberty were fundamentally good, and not consequentially valuable, by asking whether a society in which all authority was concentrated in the hands of a few people, but which did not have problems with social injustice or rights violations, etc, was morally good. I was arguing that the hypothetical is flawed. If you concentrate power into the hands of a few people, you get negative consequences. I was indeed fighting the hypothetical, but I think there are cases in which doing so is a good idea.
comment by Bruno_Coelho · 2012-04-22T01:47:04.027Z · LW(p) · GW(p)
Hypothetical scenarios in conversations are used to win a debate, not to make predictions or seek truth. In most personal conversations we don't have access to the relevant data, or time to process it.
In papers maybe they should be more realistic, but people who use hypothetical scenarios like trolley cases tend to favor intuitions most of the time.
comment by hankx7787 · 2012-04-20T17:08:31.345Z · LW(p) · GW(p)
Always bite the bullet and address the heart of the hypothetical.
This is a great article that goes into detail on this point: http://lesswrong.com/lw/85h/better_disagreement/
Replies from: TimS
↑ comment by TimS · 2012-04-20T18:02:24.810Z · LW(p) · GW(p)
Hank, that's one of my favorite articles.
Not sure why you are getting down-voted, but if you think "biting the bullet" is equivalent to "not fighting the hypo" then I've failed to communicate to you why fighting the hypo is bad.
Replies from: hankx7787