Rationality of sometimes missing the point of the stated question, and of a certain type of defensive reasoning

post by Dmytry · 2011-12-29T13:09:35.483Z · LW · GW · Legacy · 18 comments

Imagine that you are being asked a question; a moral question involving an imaginary world. From prior experience, you have learnt that people behave in a certain way: they are, for the most part, applied thinkers, and whatever your answer is, it will become a cached thought that will be applied in the real world, should the situation arise. The whole rationale behind thinking about imaginary worlds may be to create cached thoughts.

Your answer probably won't stay segregated in the well-defined imaginary world for any longer than it takes the person who asked the question to switch topics; it is the real-world consequences you should be most concerned about.

Given this, would it not be rational to perhaps miss the point, and answer that sort of question in the real-world way?

To give a specific example, consider this question from The Least Convenient Possible World:

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?

First of all, note that the question is not the abstract "If [you are absolutely certain that] the only way to save 10 innocent people is to kill 1 innocent person, is it moral to kill?". There are a lot of details. We are even told that this one person is a traveller; I am not exactly sure why, but I would guess it appeals to kin-selection-related instincts: the traveller has lower utility to the village than a resident.

In light of how people process answers to such detailed questions, and how the answers are incorporated into thought patterns - which might end up used in the real world - is it not in fact most rational not to address that kind of question exactly as specified, but to point out that one of the patients could be taken apart for the benefit of the other nine? And to point out the poor quality of life and low life expectancy of the surviving patients?

Indeed, as a solution one could gather all the patients and let them discuss how to solve the problem; perhaps one will decide to be terminated, perhaps they will decide to draw straws, perhaps only those with the worst prognosis will draw straws. If they're comatose, one could have a panel of twelve peers make the decision. There could easily be trillions of possible solutions to this not-so-abstract problem, and 'trillions' is not a figure of speech here. Privileging one solution is similar to privileging a hypothesis.

In this example, the utility of any villager to the doctor can be higher than that of the traveller, who will never return; hence the doctor would opt to take the traveller apart for spare parts, instead of picking one of the patients based on some cost-benefit metric and sacrificing that patient for the good of the others. The choice we're asked about turns out to be just one of the options, chosen selfishly; it is the deep selfishness of the doctor that makes him realize that killing the traveller may be justified, but not realize the same about one of the patients, for his selfishness biased his thought towards exploring one line of reasoning but not the other.

Of course one can say that I missed the point, and one can employ backward reasoning and tweak the example by stating that those people are aliens, and that the traveller is totally histocompatible with each patient while none of the patients are compatible with each other (that's how alien immune systems work: there are some rare mutant aliens whose tissues are not rejected by any other).

But to do so would be to completely lose the point of why we should expend mental effort searching for alternative solutions. Yes, it is defensive thinking - but what does it defend us from? In this case it defends us from making a decision based on incomplete reasoning or a faulty model. All real-world decisions are, too, made in imaginary worlds - in what we imagine the world to be.

Morality requires a sort of 'due process': a good-faith reasoning effort to find the best solution rather than the first solution that the selfish subroutines conveniently present for consideration; to probe the model for faults; to try to think outside the highly abbreviated version of the real world one might initially construct when considering the circumstances.

The imaginary-world situation here is just an example, and so the answer is an example of the reasoning that should be applied to such situations - reasoning that strives to explore the solution space and test the model for accuracy.

Something else, tangential to the main point of this article: if I had 10 differently broken cars and 1 working one, I wouldn't even think of taking apart the working one for spare parts; I'd take apart one of the broken ones. The same would apply to, e.g., having 11 children, 1 healthy and 10 in need of replacements of different organs. The option one would think of is to take the one least likely to survive and sacrifice it for the other 9; no one in their right mind would even think of taking apart the healthy one without very compelling prior reasons. This seems to be something we would only consider for a stranger. There may be hidden kin-selection-based cognitive biases that affect our moral reasoning.

edit: I don't know if it is OK to be editing published articles, but I'm a bit of an obsessive-compulsive perfectionist and I plan on improving it for publishing it on lesswrong (edit: I mean, not lesswrong discussion), so I am going to take the liberty of improving some of the points, but perhaps also removing the duplicate argumentation and cutting down the verbosity.

18 comments


comment by Eugine_Nier · 2011-12-30T03:29:00.336Z · LW(p) · GW(p)

By the way, in this post, Eliezer gives the following answer to the train dilemma:

"You stipulate that the only possible way to save five innocent lives is to murder one innocent person, and this murder will definitely save the five lives, and that these facts are known to me with effective certainty. But since I am running on corrupted hardware, I can't occupy the epistemic state you want me to imagine. Therefore I reply that, in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder the one innocent person to save five, and moreover all its peers would agree. However, I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings."

comment by TimS · 2011-12-29T13:57:36.396Z · LW(p) · GW(p)

If you don't find the issue interesting, that's fine. But if you fight the hypo, then you aren't having the same conversation as everyone else. As I said, that's a reasonable choice, but it's rude to pretend you are participating in a conversation when you really aren't.

Replies from: Dmytry
comment by Dmytry · 2011-12-29T14:24:49.414Z · LW(p) · GW(p)

The hypo here does not stipulate that the traveller is compatible with everyone while the patients are mutually transplant-incompatible. One has to fight the hypo (invent things that were omitted from it) not to arrive at the alternative answer (sacrifice one of the patients). I think it is rude to expect people to fight for your hypo; if you mean that sacrificing the traveller is the only solution, state it explicitly.

edit:

Furthermore, I'm not arguing about what is polite. I'm discussing what is the rational thing to do. Here is an example: someone asks me a question about electronics with idealised components. I think of the real components he likely has in mind, and see that the end result could burn down his house. I write an answer addressing the non-idealized circuit, perhaps outlining how to avoid the burned-down-house scenario. That person can find it impolite, of course, but it may often be a very good answer for that person to hear (and people are usually not very invested in their designs, so it's unlikely the person would be offended; I would more likely be thanked for that answer). The person may be speaking hypothetically, but years later he may need to build the thing.

Replies from: TimS
comment by TimS · 2011-12-29T14:55:30.360Z · LW(p) · GW(p)

The organ transplant discussion is a thought experiment, not an engineering problem. Assuming away the practical difficulties in order to figure out what we think is "right" is the whole point of the conversation.

There are lots of moral theories that can easily justify letting the patients die. For example, Jewish law holds that people facing moral choices can violate (just about) any rule, but cannot kill. That's not the utilitarian answer, but Rabbi Sacks would probably not be surprised to learn that he isn't a utilitarian. If he nonetheless tried to assert he was a utilitarian by adding facts to the hypothetical, that's rude. In short, the whole point is to learn about the shape of your ideas, not technical facts about medicine.

ETA: To be clear, my moral theory is not utilitarian.

Replies from: Dmytry
comment by Dmytry · 2011-12-29T15:05:56.782Z · LW(p) · GW(p)

Well, then take my post as a thought experiment as well: a thought experiment creating a situation where it could be right to consider the practical difficulties even though the person asking the question wants you not to.

edit: in particular, here, the person asking that question could IMO be rather more enlightened by the unwanted answer than by just being given the answer he wants to hear: "Yes, fine, the utility is maximized by killing the traveller" (or by some non-utilitarian stuff that he can consider irrational). Conversation involves giving people answers they may not want to hear.

Replies from: TimS
comment by TimS · 2011-12-29T15:14:39.460Z · LW(p) · GW(p)

A person asking a question is always interested in additional relevant facts. Which additional facts are relevant depends on whether the question is a thought experiment or something more practical (like your engineering example). Not all facts are relevant all the time.

Replies from: Dmytry
comment by Dmytry · 2011-12-29T15:28:41.440Z · LW(p) · GW(p)

Well, I think it is relevant to the question whether its explicit set-up permits alternative solutions distinct from those listed. I'm not much of a utilitarian either, but I do believe that real-world complications are extremely relevant to morality.

It's actually a very odd and special form of question to those who grew up on write-in rather than multiple-choice answers. Here the person asking the question himself selects, out of a multitude of possible solutions, one as special. While in some cultures it is rude to assume that not all of the solution space was explored, in other cultures it is not.

comment by prase · 2011-12-30T15:59:26.691Z · LW(p) · GW(p)

If the stated question lacks realism by omission, i.e. if the person who is asking doesn't realise that the question assumes conditions which don't hold in the real world, then I would endorse pointing this out and, perhaps, answering the more realistic version of the question. But if the discussion partners explicitly state they are not interested in realism and they want to have answered the question as it was stated with all assumptions whether they are real-world relevant or not, then insisting on answering a modified question instead is rude. Answering a modified question while pretending to be speaking about the original one is then not only rude but also dishonest. Now being rude and dishonest can sometimes be the rational thing to do (signalling concerns come to mind), but as a general policy for discussing hypothetical scenarios it is not. There is always the option of saying "I am not interested in the question (because of the inherent unrealistic assumptions)", and it is almost always superior.

Replies from: Dmytry
comment by Dmytry · 2011-12-31T10:00:05.740Z · LW(p) · GW(p)

Well, IMO one should not be asking a long-winded hypothetical if one's point is merely to ask "If [you are absolutely certain that] the only way to save 10 innocent people is to kill 1 innocent person, is it moral to kill?".

Why would one make a big hypothetical, which takes work, and build a special non-realistic world, when the hypothetical amounts to a much shorter question one could have asked? What could the goal of this behaviour be? (Is it polite to waste everyone's time?) I do assume people are rational; I hold rationality in high regard; so I assume there is some motivation for such effort. But I just can't see any motivation I agree with for making both a hypothetical and an alternate world instead of a simpler question which is apparently 'the point' of the exercise.

Replies from: prase
comment by prase · 2012-01-02T12:30:21.378Z · LW(p) · GW(p)

I suppose that the 'big' hypothetical is constructed to make our imagination more vivid. "Is it moral to save ten people by killing one innocent?" sounds like a purely theoretical question; describing details, on the other hand, activates our instinctive responses. 'The point' is to show the conflict between the gut reaction and consequentialist reasoning; the conflict wouldn't be apparent enough if the situation were described in a brief, abstract way.

Be sure to realise that this is a pretty standard and probably necessary technique in all thought experiments which are meant to illustrate flaws in human intuitive reasoning, and perhaps in other thought experiments as well. It's almost the definition of a "thought experiment" that it describes an unrealistic situation in realistic detail. Why did Searle describe his Chinese room, when he could simply have said "if understanding is equivalent to processing a specific algorithm, then one could learn the algorithm without understanding the underlying sense, which is a paradox"? Because that wouldn't have worked as intended.

If you nevertheless think that the details in thought experiments are wasting your time, you should plainly say so, rather than answering a different question or speculating that your interlocutor's motivations differ from what they say.

Replies from: Dmytry
comment by Dmytry · 2012-01-02T17:25:18.291Z · LW(p) · GW(p)

But it's not the Chinese room thought experiment. And answering a different question is simply the result of actually visualizing this vivid detail in your head (which apparently not everyone is equally capable of) and giving an answer to the detail as visualized, instead of to the abstract binary question this detail is supposed to reduce to.

When I read the question I gave as an example, I visualized this detail, along with the implied detail (if you tell me it's a tiger, I will visualize stripes; if you imply it's Homo sapiens, I will visualize how a sick Homo sapiens recovers, how the Homo sapiens immune system works, et cetera), and I immediately see a ton of solutions. That's the way vivid examples are: they don't yield binary choices. I don't see why we should give answers better befitting the simple abstract questions to the vivid examples.

With regard to the Chinese room experiment: what you should be visualizing is a hundred billion people simulating the neurons of a Chinese speaker (who will be talking some 400x slower than realtime). Or a single person who speaks Chinese some trillions of times slower than realtime. Or a room that works like an Eliza bot in Chinese. In none of those cases are you amazed at the paradox: a hundred billion people form a hive mind; billions of person-years of human labour to answer a simple question easily amounts to a separate being; and a room that talks like Eliza is unimpressive.

Sorry, I am not vividly visualizing a single-person Chinese room that speaks Chinese at anything but trillions of times slower than realtime. Searle must find someone else to trick into vividly visualizing an impossibility, incorporating it as a cached thought, and then self-contradicting in an amusing way. The whole paradox stems from vividly imagining a room that answers questions in a few seconds, minutes maybe, rather than a room where the worker performs trillions of calculations, with terabytes of storage space, before he makes an answer. The world where the Chinese room works as usually imagined is a world where mathematics does not even apply.

Replies from: prase
comment by prase · 2012-01-02T19:58:56.348Z · LW(p) · GW(p)

I have mentioned the Chinese room as an example of details included in the thought experiments to activate certain intuitive reactions, especially in response to this:

But I just can't see any motivation I agree with for making both a hypothetical and an alternate world instead of a simpler question which is apparently 'the point' of the exercise.

I certainly don't defend Searle's conclusion.

That's the way vivid examples are: they don't yield binary choices. I don't see why we should give answers better befitting the simple abstract questions to the vivid examples.

No real-world situation yields binary choices. For any question of the form "do you prefer A to B or vice versa?" you are free to answer "in fact I prefer C". Only be aware that some people (me included) find this way of non-answering questions annoying; my experience is that it's a pattern often used in endless evasive debates where people speak past each other without moving anywhere. There is a certain advantage in binary questions: they may not reflect all aspects of realistic decisions, but they are conducive to efficient communication.

Replies from: Dmytry
comment by Dmytry · 2012-01-08T00:12:15.026Z · LW(p) · GW(p)

Well, that's fair enough, but I find thought experiments like the Chinese room irritating as well; they typically try to coax you into making some reasoning mistake while reasoning visually with unrealistic assumptions, and then it can be quite difficult to vocalize what's wrong, or even to realize you made a mistake (the Chinese room is a perfect example of this). If one wants me to answer the question - is it moral to kill one person when that is absolutely the only way to save ten people, all with the same life expectancy - one should ask that question. To which I would answer something along these lines: the error rate in this sort of decision leads to far more deaths than it prevents, which makes the strategy of forbidding such decisions beforehand the correct strategy to precommit to (I am a game programmer, so when I think of a future decision, I think of how to decide).
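The precommitment argument above can be sketched numerically. This is only an illustration with made-up numbers: assume that whenever an agent believes that killing one will save ten, that belief is correct only with probability `p_correct`, and a false positive means an innocent dies for nothing. The question is then which precommitted policy has higher expected value:

```python
# Illustrative sketch (invented numbers): compare two precommitted policies
# for "kill one to save ten" opportunities, where the agent's judgement
# that the trade is real is only right with probability p_correct.

def expected_net_lives(p_correct, lives_saved=10, lives_cost=1):
    # Policy "allow": act on the judgement every time.
    #   right  -> net gain of (lives_saved - lives_cost)
    #   wrong  -> an innocent dies and nobody is saved
    allow = p_correct * (lives_saved - lives_cost) + (1 - p_correct) * (-lives_cost)
    # Policy "forbid": precommit to never making the trade.
    forbid = 0.0
    return allow, forbid

for p in (0.99, 0.5, 0.05):
    allow, forbid = expected_net_lives(p)
    better = "allow" if allow > forbid else "forbid"
    print(f"p_correct={p}: allow={allow:+.2f}, forbid={forbid:+.2f} -> {better}")
```

With these numbers the break-even point is `p_correct = 0.1`; if real-world judgement in such situations is wrong more often than that, the blanket prohibition wins in expectation, which is the point of the comment.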

I don't like the following process:

You have an abstract moral question.

You take time to make up a much less abstract, more verbose, and more vivid example.

I am expected to ignore all the vivid detail, 'get the point', and answer the abstract moral question. (Plus I can be asked to, so to say, visualize a tiger, and then be told "but I said nothing of stripes" if in my visualization the tiger has stripes and they matter.)

Replies from: prase
comment by prase · 2012-01-08T13:22:20.403Z · LW(p) · GW(p)

I share your irritation with the Chinese room experiment. I don't have the same objection to the discussed hospital scenario; the level of non-realism is much lower in the latter. The Chinese room tacitly assumes that all involved agents are normal people (so that our intuitions about knowing and understanding hold) while also assuming the man in the room can learn a vast algorithm which we have not even been able to develop as a computer program yet. In the hospital case, the non-realism is of the sort "this doesn't usually happen".

Consider the scenario put in this form:

You have this dialogue with your doctor:

doctor: "I've had nightmares recently because what I've done. I feel I can't keep it for myself anymore and it can easily be you whom I tell my secret, if you don't object, of course."
you: "Go on."
"Well, I have killed a man. I have done it to save others, but still I suspect I might have done something very wrong. There were ten patients in the hospital, all in need of organ transplants. Each needed a different organ, and each of them was in a serious danger of dying if a donor doesn't appear quickly. Then a stranger wandered in. He wanted a routine checkup, but from the blood test I realised that, accidentally, he would probably be an ideal donor for all ten patients we had in the hospital. You know, we don't receive many donors in our hospital and we had little time. Almost certainly, this was the last chance to save those people."
"But, you couldn't be sure that the transplants would be successful, just based on a simple blood test."
"Of course, when I got the idea, I told the man that I need to do more tests to rule out my suspicion of a serious disease. I also asked him questions about his personal life to find out whether he had children or family who would regret his death. It turned out to not be the case."
"Yes, but even with an ideal donor, the quality and lenght of life of the transplantees are usually poor."
"Actually, according to my statistics half of the patients will survive twenty years with modest inconveniences. That's five people. One or two of the remaining five are going to die in a couple of years, but still, I was buying twenty years of life for five patients who would die in few weeks otherwise. The stranger was in his fifties so he could live for thirty years more."
"But, wasn't there another solution? You could kill one of the patients and use his organs, for example."
"Do you think this didn't occur to me? It couldn't be done. The patients were closely monitored and their families would sue the hospital under the slightest suspicion. If the truth comes out, the hospital will certainly lose the trust of the public, and perhaps be closed, causing many unnecessary deaths in the future. I was able to kill the stranger and arrange it as a traffic accident with head injury. I haven't been able to do that with the patients, or make up an alternative plan to secretly kill any of them."
"But there have to be thousands of alternative solutions. Literally."
"Maybe there were. I had thought about it for several days and no alternative solution had occured to me. After few days, the stranger insisted he couldn't stay longer. At that point I hadn't any alternative solution available, I was choosing between basically two alternatives. I chose to kill."

So, what are you going to do? Will you call the police? Will you morally blame the doctor? In this setting you can't call for alternative solutions. The scenario is not probable, of course, but there are no blatantly absurd assumptions which would allow you to discard it as completely implausible, as in the Chinese room case.

Replies from: Dmytry
comment by Dmytry · 2012-01-09T12:28:30.184Z · LW(p) · GW(p)

Well, look at how you had to arrive at this example. There had to be an iteration with a traveller, and the example had to be adjusted so that this traveller is an ideal donor for ten people, none of whom is a good donor for the remaining nine. We're down to probabilities easily below 10^-10, meaning 'not expected ever to have happened in the history of medicine'. (Whereas the number of worldwide cases where someone got killed for organs is easily in the tens of thousands.) The human immune system does not work so conveniently for your argument, so you'll have to drop the transplant example and come up with something else.

This should serve as a quite effective demonstration of how extremely rare such circumstances are - so rare that you cannot reason about them without the aid of another person who strikes down your example repeatedly, forcing you to refine it. At the same time, cases where something like this is done for personal gain, and then rationalized as selfless and altruistic, are commonplace.

Privileging those exceedingly improbable situations with the same level of consideration as the much more probable situations is a case of extreme bias.

The issue with rare situations is that the false-positive rate can be dramatically larger than the rate at which the event actually happens, meaning that the majority of detected events are false positives. When you carelessly increase the number of lives saved in your taking-apart-the-traveller example, you increase the gain linearly but decrease the probability of this bizarre histocompatibility coincidence exponentially.
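The linear-versus-exponential point can be made concrete with a toy calculation. The per-patient match probability here is an invented illustrative figure, not real transplant statistics:

```python
# Toy illustration (invented figure): the benefit of the traveller scenario
# grows linearly with the number of patients, but the chance that one donor
# is histocompatible with every patient shrinks exponentially.

p_match = 0.1  # assumed chance that a random donor matches any one patient

for n_patients in (1, 2, 5, 10):
    gain = n_patients                     # lives saved: linear in n
    p_all_match = p_match ** n_patients   # joint compatibility: exponential in n
    print(f"{n_patients:2d} patients: gain={gain:2d}, P(all match)={p_all_match:.0e}")
```

With this assumed figure, the ten-patient case already lands at roughly the 10^-10 level mentioned below, while the gain has only grown tenfold.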

If I heard that story from a doctor, I would think: what is the probability of this histocompatibility coincidence? Very, very low - I am guessing below 1E-10, likely well below. What is the probability that the doctor is succumbing to a delusional mental disorder of some kind? Far larger, on the order of 1/1000 to 1/10000. So when you hear such a story, there is still a very low probability that it is true, and the most likely explanation is that the doctor is simply nuts (and most likely he just made the entire thing up for the sake of argument or something). Meaning that, from a utilitarian standpoint, it would be more optimal to do nothing (based on the belief that the story was entirely made up) or to call the police (based on the belief that he actually did kill someone, or is planning to). [Of course I would try to estimate the probabilities as reliably as I could before calling the police, to a far greater degree of confidence than in an argument here.]
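The comparison of priors above can be run through Bayes explicitly, using the comment's own rough figures and the simplifying assumption that either hypothesis would make the doctor equally likely to tell the story (so the posterior odds equal the prior odds):

```python
# Sketch using the comment's own rough numbers: given the doctor's story,
# which hypothesis dominates - a histocompatibility coincidence at ~1e-10,
# or a delusional/fabricating doctor at ~1e-3 to 1e-4?

p_coincidence = 1e-10   # prior: the story is true as told
p_delusion = 1e-4       # prior: doctor delusional/lying (conservative end)

# Assuming equal likelihood of telling the story under either hypothesis,
# the posterior probability that the story is true:
posterior_true = p_coincidence / (p_coincidence + p_delusion)
print(f"P(story true | story told) ~ {posterior_true:.1e}")  # around 1e-6
```

Even with the conservative 1/10000 prior for delusion, the story being true comes out around a one-in-a-million posterior, which is the comment's point: most "detections" of such a scenario are false positives.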

edit: a good example is the ambulance story here: http://lesswrong.com/lw/if/your_strength_as_a_rationalist/

Basically, thought experiments like that are usually just a way of forcing a person to make a mistake when reasoning non-verbally, in the hope that he won't be able to vocalize the mistake (or even realize he made one). In this case, the mistake the example tries to trick the reader into is ignoring the error rate of the agent making the decision, in circumstances where that rate is BY FAR (by at least six orders of magnitude, I'd say, for the ten patients) the dominating number in the utility equation.

Likewise, the Chinese room tries to trick the reader into an error of some 14 orders of magnitude. Such mind-bogglingly huge errors slip past reason; we are not accustomed to being this wrong.

comment by J_Taylor · 2011-12-30T06:05:13.503Z · LW(p) · GW(p)

Are you a philosopher? Or perhaps some other type of person who benefits from appearing to be a highly objective reasoner?

If not, you probably benefit from appearing empathetic and humanistic. In private, with trusted allies, it is fine to talk about the more, let us say, repugnant propositions that can be derived from one's beliefs. However, in public, one must be wary of what bullets one bites.

comment by dlthomas · 2011-12-29T23:19:11.600Z · LW(p) · GW(p)

edit: I don't know if it is OK to be editing published articles, but I'm a bit of an obsessive-compulsive perfectionist and I plan on improving it for publishing it on lesswrong, so I am going to take the liberty of improving some of the points, but perhaps also removing the duplicate argumentation and cutting down the verbosity.

I think it's definitely okay to edit published articles, provided that the editing 1) is small scale and simply adds clarity or fixes typos and does not interact with any existing comments, or 2) it is made plain what editing was done.

comment by CronoDAS · 2011-12-30T01:59:14.995Z · LW(p) · GW(p)

Both TV Tropes and Eliezer endorse looking for alternative options. ;)