The Trolley Problem in popular culture: Torchwood Series 3

post by botogol · 2009-07-27T22:46:57.377Z · LW · GW · Legacy · 87 comments

Contents

  Numbers
  Quality
  Choice
  Rationality at the limit

It's just possible that some lesswrong readers may be unfamiliar with Torchwood: It's a British sci-fi TV series, a spin-off from the more famous, and very long-running cult show Dr Who.

Two weeks ago Torchwood Series 3 aired. It took the form of a single story arc, told over five days and shown in five parts on consecutive nights. What hopefully made it interesting to rationalist lesswrong readers who are not (yet) Whovians was not only the space monsters (1) but also the show's determined and methodical exploration of an iterated Trolley Problem: in a process familiar to seasoned thought-experimenters, the characters were tested with a dilemma followed by a succession of variations of increasing complexity, with their choices ascertained and the implications discussed and reckoned with.

A hypothetical, iterated rationalist dilemma... with space monsters... and monsters a great deal scarier - and messier - than Omega - what's not to like?

So, on the off chance that you missed it, and as a summer diversion from more academic lesswrong fare, I thought a brief description of how a familiar dilemma was handled on popular British TV this month might be of passing interest (warning: spoilers follow).

The details of the scenario need not concern us too much here (and readers are warned not to expend too much mental energy exploring the various implausibilities, for want of distraction), but suffice to say that the 456, a race of evil aliens, landed on Earth and demanded that a certain number of children be turned over to them to suffer a horrible fate-worse-than-death, or else humanity would face the familiar prospect of all-out attack and the likely destruction of mankind.

Resistance, it almost goes without saying, was futile.

The problems faced by the team could be roughly sorted into four themes:

The Numbers dilemma: is it worth sacrificing any number of children to save the rest?

The Quality dilemma: does it make any difference which children?

The Choice dilemma: how should the sacrificial cohort be chosen?

The limits of human rationality: are there certain 'rational' decisions that are simply too much to expect a human being to be able to make?

Actually, despite my jocular tone in the first paragraph, I don't want to make too light of this series, as it was disturbing viewing.

Anyway, that being said: rationalist lesswrong community members may want to think dispassionately about their answers before I reveal the conclusions that Russell T Davies (the writer) came to:

Numbers

Quality

Choice

This was handled by the politicians who considered two dimensions in the selection:

Rationality at the limit

On the question of 'how close', a straightforward evolutionary approach was used: children of the decision-makers were safe, and so were grandchildren.

"And our nephews?" "Don't push it".

But the limits of rationality, it seems, are dependent upon gender: While it was recognised that no woman could be expected to agree to the rational sacrifice of her child, it was expected by some that men might have to, and in the end the main character - male - sacrificed a grandchild.

And that's it. Perhaps not a complete disposal of the trolley problem, but nevertheless an interesting excursion into the realms of philosophical dilemmas for a popular drama. Rationalism is a meme - pass it on.


(1) Like many TV aliens, surprisingly able to construct spaceships without the benefit of an opposable thumb.
(2) Yes, that was actually the 1965 back-story.

87 comments

Comments sorted by top scores.

comment by ShardPhoenix · 2009-07-28T10:41:52.300Z · LW(p) · GW(p)

It annoys me how people on TV or in movies who have to make tough, unpopular decisions are almost never shown to be right - there always turns out to be a deus ex machina or other third way out. From a spoiler given in another comment here it seems like this is yet another case of that - not that that's surprising when RTD is writing.

Replies from: jwdink, CronoDAS
comment by jwdink · 2009-07-29T02:59:31.826Z · LW(p) · GW(p)

A good example of this (I think) is The Dark Knight, with the two ferries.

Replies from: michaelkeenan
comment by michaelkeenan · 2009-07-29T06:21:38.756Z · LW(p) · GW(p)

Agreed. The one that annoys me the most is in the first Spiderman movie (spoiler warning) when the Green Goblin drops Mary-Jane and a tram full of child hostages, forcing Spiderman to choose who to save. I was excited to see what his choice would be...but then he just saves everyone.

Replies from: wedrifid
comment by wedrifid · 2009-07-29T08:08:56.407Z · LW(p) · GW(p)

And if there is, in fact, anyone who still needs a spoiler warning for Spiderman one: The last ten minutes really spoilt it for me, skip 'em.

comment by CronoDAS · 2009-07-28T19:57:23.186Z · LW(p) · GW(p)

TV Tropes has more on this.

Replies from: NQbass7
comment by NQbass7 · 2009-07-29T15:03:09.938Z · LW(p) · GW(p)

Well I WAS planning on getting some work done today.. but now...

comment by mikem · 2009-07-28T09:10:47.196Z · LW(p) · GW(p)

Lesswrongers will be encouraged to learn that the Torchwood characters were rationalists to a man and woman - there was little hesitation in agreeing to the 456's demands.

Are you joking? They weren't rationalists, they were selfish. There is a distinction. They were looking after their own asses and those of their families (note that the complicit politicians specifically excluded their own families' children from selection, regardless of 'worth').

children - or units as they were plausibly referred to

What do you mean by 'plausibly'? They were referred to as units in order to dehumanize them. Because the people referring to the children as such recognized that what they were doing was abhorrently wrong, and so had to mask the fact, even to themselves, by obscuring the reality of what they were discussing: the wholesale slaughter of their fellows.

... governments paying attention to round up the orphans, the refugees and the unloved - for the unexpectedly rational reason of minimising the suffering of the survivors

That's laughable. It had nothing to do with minimizing suffering; that was a rationalization. They were doing it for the same reason any government targets the vulnerable: because there are few willing to protect them and argue for them. It was pretty clear if you watched the show that the children being targeted were hardly 'unloved'.

You can't consider the scenario without considering the precedent that it would set. The notion that there are wide swaths of the population -- children, who've never even had the opportunity to truly prove themselves or do much of anything -- who are completely without worth and sacrificeable at the whim of the government is untenable in a society that values things like individuality, personal autonomy, the pursuit of happiness and, well, human life! They would not be saving humanity, they would be mutilating it.

The poster failed to mention that the sacrificed children were being sentenced to an eternal fate worse than death.

And there is a difference between the actions of the government and the actions of the main character. One of them was fighting the monsters. The others were the monsters' business partners.

Replies from: cousin_it
comment by cousin_it · 2009-07-28T09:40:28.718Z · LW(p) · GW(p)

The poster failed to mention that the sacrificed children were being sentenced to an eternal fate worse than death.

I really wonder what other LWers will say about this. Would you prefer to give one person huge disutility, or destroy humankind? For extra fun consider a 1/2^^^3 chance of 3^^^3 disutility to that one person.

Eliezer in particular considers his utility function to be provably unbounded in the positive direction at least, thinks we have much more potential for pain than pleasure, thinks destroying humankind has finite disutility on the order of magnitude of "billions of lives lost" (otherwise he'd oppose the LHC no matter what), and he's an altruist and expected utility consequentialist. Together this seems to imply that he will have to get pretty inventive to avoid destroying humankind.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-10T22:12:08.406Z · LW(p) · GW(p)

1/2^^^3 = 2^^(2^^2) = 2^^(2^2) = 2^^4 = 2^2^2^2 = 65536.

Replies from: Larks, cousin_it, cousin_it
comment by Larks · 2009-09-10T22:22:02.492Z · LW(p) · GW(p)

1/65536, surely?

Replies from: Eliezer_Yudkowsky
comment by cousin_it · 2009-09-10T23:03:38.463Z · LW(p) · GW(p)

Oh, shit. Well... uhhhh.. in the least convenient impossible possible world it isn't! :-)

comment by cousin_it · 2009-09-14T04:53:37.472Z · LW(p) · GW(p)

Come to think, I don't even see how your observation makes the question any easier.

?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-14T18:26:22.644Z · LW(p) · GW(p)

1/65536 probability of someone suffering 3^^^3 disutilons? If humanity's lifespan is finite, that's far worse than wiping out humanity. (If humanity's lifespan is infinite, or could be infinite with probability greater than 1/65536, the reverse is true.)
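
For readers unused to Knuth's up-arrow notation, here is a minimal Python sketch showing where the 65536 comes from (the function name is made up for illustration; the arithmetic itself is standard):

```python
def up(a, b, n):
    """Knuth's up-arrow: a ^(n arrows) b. With n == 1 it is plain exponentiation."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    # a ^^...^ b = a ^^...^ (a ^^...^ (b-1)) with one fewer arrow on the outside
    return up(a, up(a, b - 1, n), n - 1)

print(up(2, 3, 3))  # 2^^^3 = 2^^4 = 2^2^2^2 = 65536, hence the 1/65536 above

# 3^^^3 = 3^^(3^^3) = 3^^7625597484987 is a tower of 3s far too tall to
# evaluate; an expected-disutility term of the form (1/65536) * 3^^^3
# therefore dwarfs any finite figure like "billions of lives lost".
```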

Replies from: cousin_it
comment by cousin_it · 2009-09-14T18:39:36.329Z · LW(p) · GW(p)

I'll take that for an answer. Now let's go over the question again: if humanity's lifespan is potentially huge... counting "expected deaths from the LHC" is the wrong way to calculate disutility... the right way is to take the huge future into account... then everyone should oppose the LHC no matter what? Why aren't you doing it then - I recall you hoped to live for infinity years?

Replies from: CarlShulman
comment by CarlShulman · 2009-09-14T18:56:38.355Z · LW(p) · GW(p)

The very small probability of a disaster caused directly by the LHC is swamped by the possible effects (positive or negative) of increased knowledge of physics. Intervening too stridently would be very costly in terms of existential risk: prominent physicists would be annoyed at the interference (asking why those efforts were not being dedicated to nuclear disarmament or biodefence efforts, etc) and could discredit concern with exotic existential risks (e.g. AI) in their retaliation.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-14T19:17:47.880Z · LW(p) · GW(p)

Agree with all except the first sentence.

Replies from: cousin_it, CarlShulman, rhollerith_dot_com
comment by cousin_it · 2009-09-14T22:27:10.620Z · LW(p) · GW(p)

...Okay. You do sound like an expected utility consequentialist, I didn't quite believe that before. Here's an upvote. One more question and we're done.

Your loved one is going to be copied a large number of times. Would you prefer all copies to get a dust speck in the eye, or one copy to be tortured for 50 years?

comment by CarlShulman · 2009-09-14T19:23:03.446Z · LW(p) · GW(p)

Hmm? In light of Bostrom and Tegmark's Nature article?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-14T19:50:12.167Z · LW(p) · GW(p)

We don't know enough physics to last until the end of time, but we know enough to build computers; if I made policy for Earth, I would put off high-energy physics experiments until after the Singularity. It's a question of timing. But I don't make such policy, of course, and I agree with the rest of the logic for why I shouldn't bother trying.

Replies from: CarlShulman
comment by CarlShulman · 2009-09-14T20:09:04.225Z · LW(p) · GW(p)

I would postpone high-energy physics as well, but your argument seems mostly orthogonal to the claim you said you disagreed with.

New physics knowledge from the LHC could (with low probability, but much higher probability than a direct disaster) bring about powerful new technology: e.g. vastly more powerful computers that speed up AI development, or cheap energy sources that facilitate the creation of a global singleton. Given the past history of serendipitous scientific discovery and the Bostrom-Tegmark evidence against direct disaster, I think much more of the expected importance of the LHC comes from the former than from the latter.

comment by RHollerith (rhollerith_dot_com) · 2009-09-14T20:49:03.794Z · LW(p) · GW(p)

Same here. Unless I am missing something (and please do tell me if I am), the knowledge gained by the LHC is very unlikely to help much to increase the rationality of civilization or to reduce existential risk, so the experiment can wait a few decades, centuries or millennia till civilization has become vastly more rational (and consequently vastly better able to assess the existential risk of doing the experiment).

comment by abigailgem · 2009-07-28T09:04:19.814Z · LW(p) · GW(p)

Edit: major plot spoiler in this comment.

You miss out a major point of the story: that those who agree to sacrifice others' children are dishonourable, and that this matters; and that the main character, who sacrifices only one child (his grandchild) to save all the rest, suffers terribly for this.

I would not argue from fictional evidence, but the storytellers seem keen to point this out. Also, when deciding to sacrifice children, all possible other courses of action must be eliminated first.

Edit: for me, the main interest of the trolley problem is the emotional response. Would you kill one to save five, if saving the five was certain if you killed the one, and impossible otherwise? Er, yes, I hope so, though I think such a situation, with such certainty, is unlikely. How do you feel about trolley problems generally? Horror and disgust. Then I see that even if I am not going to be in that situation, I may be in situations where I must behave rationally, and Stoically fight down emotional responses.

Replies from: nerzhin
comment by nerzhin · 2009-07-28T17:54:19.429Z · LW(p) · GW(p)

How do you feel about trolley problems generally? Horror and disgust. Then I see that even if I am not going to be in that situation, I may be in situations where I must behave rationally, and Stoically fight down emotional responses.

I think of trolley problems as being related to Newcomb's problem.

If you expect to encounter Newcomb-like problems, you change your source code to precommit yourself to one-box, even though it seems less rational. Evolution expects us to encounter trolley-like problems, and has changed our source code (that's the "horror and disgust") so that we are in some sense precommitted to not throw the switch. And in general, if you're not considering theoretical dilemmas in an armchair, there are very good reasons for that source code change.
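
To make the "change your source code" metaphor concrete, here is a tiny illustrative Python sketch of the standard Newcomb payoffs (assuming the usual toy dollar figures and a perfectly accurate predictor):

```python
def newcomb_payoff(one_boxer: bool) -> int:
    """Toy Newcomb's problem with a perfectly accurate predictor.
    Box A always holds $1,000; box B holds $1,000,000 only if the predictor
    read a one-boxing disposition in the agent's 'source code'.
    """
    box_a = 1_000
    box_b = 1_000_000 if one_boxer else 0
    return box_b if one_boxer else box_a + box_b

assert newcomb_payoff(one_boxer=True) == 1_000_000   # precommitted one-boxer
assert newcomb_payoff(one_boxer=False) == 1_000      # two-boxer finds box B empty
```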

comment by taw · 2009-07-28T15:48:09.199Z · LW(p) · GW(p)

There have been many similar situations historically - food supply was more or less equal to food demand, so if food supply got suddenly lower for whatever reason, there wasn't enough food for everyone, and some people had to die.

The usual algorithm was that the poor people would be priced out of the food market, until enough of them died to restore Malthusian equilibrium. Most of the dead would be children.

How is that morally different from the situation described in the post?

Replies from: mikem, None
comment by mikem · 2009-07-29T03:20:38.644Z · LW(p) · GW(p)

I can think of a couple of differences:

The poor people during a famine at least have a fighting chance, if a slim one: somehow, by hook or by crook, they might attain money or food, or leave for a region where there is no famine.

Also, a famine is a matter of public knowledge, which allows the possibility for a society to collectively (or fragmentedly) come up with a solution. In the Torchwood scenario, [small spoiler warning] the true nature of the threat and the solution devised by the executive branch were being kept a secret. In fact, they were actively suppressing groups who were moving for alternative stances towards the alien threat. If it were public knowledge, the to-be-sacrificed class would at least have the option of revolting against the powers/system/'algorithm' which was mandating their extermination.

comment by [deleted] · 2009-07-29T19:30:50.730Z · LW(p) · GW(p)

In the not-enough-food scenario, you have more bystander effect: the merchant doesn't necessarily feel like e's killing people by raising prices.

comment by Nanani · 2009-07-29T01:07:51.202Z · LW(p) · GW(p)

What if the children sacrificed are the ones who would be expected to die soon anyway? Young cancer patients, for instance, or any child born with a defect that would have led to a very early death without modern medical care.

Their suffering would be diminished since it wouldn't last as long, and as a dark "plus", their parents were already facing the prospect of the child dying young.

I didn't see the show so I don't know if there was a caveat saying they had to be healthy children. Incidentally did the aliens describe the fate worse than death at all? Is it torture, or more like Borg-assimilation?

Replies from: Vladimir_Nesov, mikem
comment by Vladimir_Nesov · 2009-07-29T11:51:46.452Z · LW(p) · GW(p)

This criterion can't gather the required 10%.

comment by mikem · 2009-07-29T03:07:28.747Z · LW(p) · GW(p)

Spoiler warning!

The children were being incorporated into the bodies of the aliens. In the words of the rather terse (and uber-creepy) alien ambassador:

They create chemicals. The chemicals ... are good. We feel ... good. The chemicals are good.

Basically, they were being used as prosthetic glands secreting a narcotic, which the aliens found pleasant. That's why they wanted them. The aliens were shooting up children. Somehow the (original 1965) children were preserved in their adolescent state, presumably indefinitely, melded into the aliens' monstrous bodies. The nature of their subjective experience was left undisclosed, but it is hard to imagine it was pleasant, or that, being reduced to glands, they had any sort of autonomy.

So yeah, being a cancer patient or otherwise unhealthy child did not appear to be any 'advantage', at least not from the perspective of the child, since presumably their suffering existence would only be extended forever. Hard to see how the parents would see this as a plus either.

comment by gwern · 2009-07-28T08:42:15.359Z · LW(p) · GW(p)

"Answer: in a UK-centric political twist they chose those attending the schools with the poorest exam results."

Wait, what? Why not just go with those with the lowest scores period? Why use an indirect test like that (which probably has as much to do with socioeconomic status anyway)?

Replies from: ShardPhoenix
comment by ShardPhoenix · 2009-07-28T10:40:34.587Z · LW(p) · GW(p)

Presumably mainly because it's satirical, but it'd also be a heck of a lot simpler to round people up all together at school than to pick them out one by one.

Replies from: whpearson
comment by whpearson · 2009-07-28T12:27:42.848Z · LW(p) · GW(p)

They were on a tight time schedule, so that was the reason they used.

comment by botogol · 2009-07-29T09:02:57.674Z · LW(p) · GW(p)

What a lot of comments! (And I was worried that it was all too trivial. Lesson: never underestimate the power of Dr Who.) Thanks, all.

@Nanani - yes, indeed, the initial round-up of 600 or so was composed of waifs and strays like that, including the ill. But when the demand of 10% was acceded to, there wasn't time to handpick.

@ShardPhoenix - I agree, and a strength of this story was that there was no easy way out. The scenario was played out right to the end, with the main character forced to make a rational sacrifice. OK, he found a way for it to be just one child, but there was still a choice.

@mikem - I disagree. Yes, there were selfish cabinet members simply looking out for their own (this was dealt with in several contexts - there was an assumption that the interests of one's own child are beyond the limit of human rationality); however, the decision to accede to this, and actually make it policy, was taken by the prime minister for rational reasons. He recognised that unless he spared the children of the decision-makers and enforcers, there would be no decisions and no enforcing. It was purely rational. (And 'units' - yes, I meant that it was plausible that such a sinister euphemism would be employed.)

@jwdink - yes, I was surprised they took that route (the rational give-in rather than fight to the death); in TV-Land it was an unusual decision. That's why I wrote the post about it :-)

Replies from: thomblake
comment by thomblake · 2009-07-29T18:47:18.618Z · LW(p) · GW(p)

Threaded comments are your friend. Just click "reply" on the comment you want to reply to. Bonus: the relevant person will be notified of your response.

comment by jwdink · 2009-07-29T03:00:32.088Z · LW(p) · GW(p)

That's horrible. They should've fought the space monsters in an all-out war. Better to die like that than to give up your dignity. I'm surprised they took that route on the show.

Replies from: SilasBarta, thomblake, NQbass7
comment by SilasBarta · 2009-07-29T16:20:01.036Z · LW(p) · GW(p)

That's not as bad an idea as your post's -3 rating suggests. First of all, what's to ensure the aliens even keep their word? (I haven't seen this episode, so I don't know how that's handled.) For all we know, this could just be their way of "trolling" us so that we get into an intraspecies flamewar and are thus unprepared for their actual plans, which are to attack and take whatever living children they can.

In that case, the "nuclear option" is to "kill the children before the aliens get to them" .. which ends the human race anyway. And if the human race is going to end anyway, why not take as many of them down with us as we can?

Replies from: jwdink
comment by jwdink · 2009-07-29T16:37:54.654Z · LW(p) · GW(p)

I'm surprised that was so downvoted too.

Perhaps I should rephrase it: I don't want to assert that it would've been objectively better for them to not give up the children. But can someone explain to me why it's MORE rational to give up in this situation?

Replies from: Aurini
comment by Aurini · 2009-08-02T07:31:32.837Z · LW(p) · GW(p)

I think it's my fault. I posted a... rather unpopular article about compromise.

I agree with your hawk/brinksmanship analysis of the strategy. I've found in life that 'the easy way out' is usually not so easy. I'm still trying to break it down into game-theory language appropriate for this site, however.

Replies from: Cyan
comment by Cyan · 2009-08-02T16:11:46.146Z · LW(p) · GW(p)

By calling the downvotes your fault, it seems that you're asserting that your post is a causal ancestor of the downvotes, which is a bit off, I think. I'd guess that the downvotes of your post and the downvotes of jwdink's comment have a common cause (that being a certain utility-maximizing mindset and/or set of preferences), but are not otherwise causally related.

comment by thomblake · 2009-07-29T18:14:15.413Z · LW(p) · GW(p)

I agree with SilasBarta below. Resistance may be futile but we'll give them a hell of a fight. They won't get our children even over our dead bodies.

comment by NQbass7 · 2009-07-29T15:07:57.458Z · LW(p) · GW(p)

How many lives is your dignity worth? Would you be willing to actually kill people for your dignity, or are you only willing to make that transaction if someone else is holding the knife?

Replies from: jwdink
comment by jwdink · 2009-07-29T16:38:51.997Z · LW(p) · GW(p)

I don't quite understand how your rhetorical question is analogous here. Can you flesh it out a bit?

I don't think the notion of dignity is completely meaningless. After all, we don't just want the maximum number of people to be happy, we also want people to get what they deserve-- in other words, we want people to deserve their happiness. If only 10% of the world were decent people, and everyone else were immoral, which scenario would seem the more morally agreeable: the scenario in which the 10% good people were ensured perennial happiness at the expense of the other 90%'s misery, or the reversed scenario?

I'm just seeing something parallel here: it's not brute number of people living that matters, so much as those people having worthwhile existences. After sacrificing their children on a gamble, do these people really deserve the peace they get?

(Would you also assert that Ozymandias' decision in The Watchmen was morally good?)

Replies from: eirenicon, Vladimir_Nesov, Psy-Kosh
comment by eirenicon · 2009-07-29T18:07:12.001Z · LW(p) · GW(p)

What do the space monsters deserve? If you factor in their happiness, it's an even more complicated problem. The space monsters need n human children to be happy. If you give them up, you have happy space monsters and (6 billion - n) happy (if not immediately, in the long term) humans. If you refuse, assuming the space monsters are unbeatable, you have happy space monsters and zero happy humans. The first scenario is better for both space monsters and humans. Sure, in the second scenario, the humans theoretically don't lose their dignity, but what does dignity mean to the dead?

To put it in another light, what if this situation happened a hundred years ago? Would you be upset that the people alive at the time caved in to the aliens' demands, or would you prefer the human race had been wiped out?

Replies from: jwdink
comment by jwdink · 2009-07-29T19:13:08.674Z · LW(p) · GW(p)

What do the space monsters deserve?

Haha, I was not factoring that in. I assumed they were evil. Perhaps that was close minded of me, though.

The first scenario is better for both space monsters and humans. Sure, in the second scenario, the humans theoretically don't lose their dignity, but what does dignity mean to the dead?

Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view, I'm just trying to figure out why it's irrational, while the utilitarian sacrifice of children is more rational.

To put it in another light, what if this situation happened a hundred years ago? Would you be upset that the people alive at the time caved in to the aliens' demands, or would you prefer the human race had been wiped out?

There are plenty of variables you can slide up and down to make one feel more or less comfortable with the scenario. But we already knew that, didn't we? That's what the original trolley problem tells us: that pushing someone off a bridge feels morally different than switching the tracks of a trolley. My concern is that I can't figure out how to call one impulse (the discomfort at destroying autonomy) an objectively irrelevant mere impulse, and another impulse (the comfort at preserving life) an objectively good fact. It seems difficult to throw just the bathwater out here, but I'd really like to preserve the baby. (See my other post above, in response to Nesov.)

Replies from: Vladimir_Nesov, eirenicon
comment by Vladimir_Nesov · 2009-07-29T19:39:56.282Z · LW(p) · GW(p)

Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view, I'm just trying to figure out why it's irrational, while the utilitarian sacrifice of children is more rational.

Utilitarian calculation is a more rational process of arriving at a decision, even though for a specific question you can argue that the output of this process (a decision) is inferior to the output of some other process, such as free-running deliberation or random guessing. When you are comparing the decisions of sacrificing the children and war to the death, the first isn't "intrinsically utilitarian", and the second isn't "intrinsically emotional".

Which of the decisions is (actually) the better one depends on the preferences of the one who decides, and preferences are not necessarily reflected well in actions and choices. It's instrumentally irrational for the agent to choose poorly according to its preferences. Systematic processes for decision-making allow agents to explicitly encode their preferences, and thus avoid some of the mistakes made with ad-hoc decision-making. Such systematic processes may be constructed in preference-independent fashion, and then given preferences as parameters.

Utilitarian calculation is a systematic process for computing a decision in situations that are expected to break intuitive decision-making. The output of a utilitarian calculation is expected to be better than an intuitive decision, but there are situations when utilitarian calculation goes wrong. For example, the extent to which you value things could be specified incorrectly, or a transformation that computes how much you value N things based on how much you value one thing may be wrong. In other cases, the problem could be reduced to a calculation incorrectly, losing important context.

However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculation outputs "war to the death" as the right decision.
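
As a purely illustrative sketch of "preferences as parameters" - every weight and number below is invented, not a claim about the right values - the same calculation can output either decision depending on what is plugged in:

```python
def decide(value_per_life, value_of_dignity, n_children=2e8, population=6.7e9,
           p_win_war=0.0):
    """Crude expected-utility comparison with preferences as parameters.
    All figures are made up: roughly 10% of the world's children demanded,
    a near-zero chance of winning an open war, and dignity modelled as a
    per-person cost of surrender.
    """
    u_surrender = -value_per_life * n_children - value_of_dignity * population
    u_war = -(1 - p_win_war) * value_per_life * population
    return "surrender" if u_surrender > u_war else "war to the death"

print(decide(value_per_life=1.0, value_of_dignity=0.0))    # -> surrender
print(decide(value_per_life=1.0, value_of_dignity=100.0))  # -> war to the death
```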

Replies from: None, jwdink
comment by [deleted] · 2009-07-31T03:35:05.874Z · LW(p) · GW(p)

I'm not convinced utilitarian reasoning can always be applied to situations where two preferences come into conflict: calculating "secondary" uncertain factors which could influence the value of each decision ruins the possibility of exactness. Even in the trolley problem, in all its simplicity, each decision has repercussions whose values have some uncertainty. Thus a decision doesn't always have a strict value, but a probable value distribution! We make a trolley decision by 1) considering only so many iterations in trying to get a value distribution, and 2) seeing if there is a satisfying lack of overlap between the two. When the two distributions overlap too much (and you know that they are approximate, due to the intractability of getting a perfect distribution), it's really a wild guess to say one decision is best.

Utilitarian calculation helps the process, by providing means of deciding when each value probability distribution is sharply enough defined, and whether the overlap meets your internal maximum overlap criteria (presuming that's sharply defined!), but no amount of reasoning can solve every moral dilemma a person might face.
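
A minimal sketch of the overlapping-value-distributions idea (the two distributions, the sample size and the 90% threshold are all arbitrary choices for illustration):

```python
import random

def compare(value_a, value_b, n=10_000, threshold=0.9):
    """Sample the uncertain value of each decision n times and see how often
    A comes out ahead. If neither side wins clearly, the two distributions
    overlap too much to call the choice more than a guess.
    """
    wins = sum(value_a() > value_b() for _ in range(n))
    share = wins / n
    if share >= threshold:
        return "A looks better"
    if share <= 1 - threshold:
        return "B looks better"
    return "too much overlap to call"

# Two made-up decisions: A has the higher mean value, but both are very uncertain.
a = lambda: random.gauss(5, 4)
b = lambda: random.gauss(3, 4)
print(compare(a, b))  # typically prints "too much overlap to call"
```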

comment by jwdink · 2009-07-29T20:18:19.780Z · LW(p) · GW(p)

Which of the decisions is (actually) the better one depends on the preferences of the one who decides

So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?

However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculation outputs "war to the death" as the right decision.

I don't see why I should agree with this statement. I was understanding a utilitarian calculation as either a) the greatest happiness for the greatest number of people or b) the greatest preferences satisfied for the greatest number of people. If a), then it seems like it might predictably give you answers that are at odds with moral intuitions, and have no way of justifying itself against these intuitions. If b), then there's nothing irrational about deciding to go to war with the aliens.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-29T20:38:01.131Z · LW(p) · GW(p)

So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?

You can't decide your preference, preference is not what you actually do, it is what you should do, and it's encoded in your decision-making capabilities in a nontrivial way, so that you aren't necessarily capable of seeing what it is.

Compare preference to a solution to an equation: you can see the equation, you can take it apart into its constituent terms, but its solution is nowhere to be found explicitly. Yet this solution is (say) uniquely defined by the equation, and approximate methods for solving the equation (analogized to the actual decisions) tend to give their results in the general ballpark of the exact solution.

Replies from: jwdink
comment by jwdink · 2009-07-29T21:13:34.244Z · LW(p) · GW(p)

You can't decide your preference, preference is not what you actually do, it is what you should do, and it's encoded in your decision-making capabilities in a nontrivial way, so that you aren't necessarily capable of seeing what it is.

You've lost me.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-29T21:18:53.310Z · LW(p) · GW(p)

The analogy in the next paragraph was meant to clarify. Do you see the analogy?

A person in this analogy is an equation together with an algorithm for approximately solving that equation. Decisions that the person makes are the approximate solutions, while preference is the exact solution hidden in the equation that the person can't solve exactly. The decision algorithm tries to make decisions as close to the exact solution as it can. The exact solution is what the person should do, while the output of the approximate algorithm is what the person actually does.

Replies from: jwdink
comment by jwdink · 2009-07-29T22:25:07.900Z · LW(p) · GW(p)

I suppose I'm questioning the validity of the analogy: equations are by nature descriptive, while what one ought to do is prescriptive. Are you familiar with the Is-Ought problem?

Replies from: None, Vladimir_Nesov
comment by [deleted] · 2009-07-31T03:12:58.658Z · LW(p) · GW(p)

jwdink, I don't think Vladimir Nesov is making an Is-Ought error. Think of this: You have values (preferences, desired ends, emotional "impulses" or whatever) which are a physical part of your nature. Everything you decide to do, you do because you Want to. If you refuse to acknowledge any criteria for behavior as valuable to you, you're saying that what feels valuable to you isn't valuable to you. This is a contradiction!

An Is-Ought problem arises when you attempt to derive a Then without an If. Here, the If is given: If you value what you value, then you should do what is right in accordance with your values.

Replies from: jwdink
comment by jwdink · 2009-08-04T17:35:34.690Z · LW(p) · GW(p)

But there seemed to be some suggestion that an avoidance of sacrificing the children, even at the risk of everyone's lives, was a "less rational" value. If it's a value, it's a value... how do you call certain values invalid, or not "real" preferences?

Replies from: None
comment by [deleted] · 2009-08-06T23:22:46.078Z · LW(p) · GW(p)

I missed where Vladimir made that suggestion, though I'm sure others have. You can have an irrational value, if it's really a means and not an end (which is another value), but you don't recognize that, and call the means a value itself. Means to an end can of course be evaluated as rational. If anyone made the suggestion you mention, they probably presumed a single "basic" value of preserving lives, and considered the method of deciding to be a means, but denoted as a value.

(Of course, a value can be both a means and an end, which presents fun new complications...)

Replies from: jwdink, Vladimir_Nesov
comment by jwdink · 2009-08-07T20:58:11.858Z · LW(p) · GW(p)

I agree generally that this is what an irrational value would mean. However, the presiding implicit assumption was that the utilitarian ends were the correct ones, and therefore the presiding explicit assumption (or at least, I thought it was presiding... now I can't seem to get anyone to defend it, so maybe not) was that the most efficient means to these particular ends were the most rational.

Maybe I was misunderstanding the presiding assumption, though. It was just stuff like this:

Lesswrongers will be encouraged to learn that the Torchwood characters were rationalists to a man and woman - there was little hesitation in agreeing to the 456's demands.

Or this, in response to a call to "dignity":

How many lives is your dignity worth? Would you be willing to actually kill people for your dignity, or are you only willing to make that transaction if someone else is holding the knife?

comment by Vladimir_Nesov · 2009-08-06T23:26:18.790Z · LW(p) · GW(p)

I think I hear you, but this comment is way confusing.

Replies from: jwdink
comment by jwdink · 2009-08-08T00:21:10.953Z · LW(p) · GW(p)

Haha, we must have very different criteria for "confusing." I found that post very clear, and I've struggled quite a bit with most of your posts. No offense meant, of course: I'm just not very versed in the LW vernacular.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-08T00:36:42.318Z · LW(p) · GW(p)

My comments can be confusing, or difficult to get over the wider inferential gaps. In this case I meant that nickernst's comment could just be expressed much more clearly.

comment by Vladimir_Nesov · 2009-07-29T22:35:03.914Z · LW(p) · GW(p)

The problem is a confusion. Human preference is something implemented in the very real human brain.

Replies from: jwdink
comment by jwdink · 2009-07-30T18:58:51.570Z · LW(p) · GW(p)

That's not a particularly helpful or elucidating response. Can you flesh out your position? It's impossible to tell what it is based on the paltry statements you've provided. Are you asserting that the "equation" or "hidden preference" is the same for all humans, or ought to be the same, and therefore is something objective/rational?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-31T10:42:34.641Z · LW(p) · GW(p)

Preference of a given human is defined by their brain, and can be somewhat different from person to person, but not too much. There is nothing "objective" about this preference, but for each person there is one true preference that is their own, and the same could be said for humanity as a whole, with the whole planet defining its preference instead of just one brain. The focus on the brain isn't very accurate though, since environment plays its part as well.

I can't do justice to the centuries-old problem with a few words, but the idea is more or less this. Whatever the concept of "preference" means, when the human philosophers talk about it, their words are caused by something in the world: "preference" must be either a mechanism in their brain, a name of their confusion, or something else. It's not epiphenomenal. Searching for the "ought" in the world outside human minds is more or less a guaranteed failure, especially if the answer is expected to be found explicitly, as an exemplar of perfection rather than evidence about what perfection is, to be interpreted in a nontrivial way. The history of failure to find an answer while looking in the wrong place doesn't prove that the answer is nowhere to be found, that there is now positive knowledge about the absence of the answer in the world.

Replies from: jwdink
comment by jwdink · 2009-08-04T17:37:20.684Z · LW(p) · GW(p)

Okay, so I'll ask again: why couldn't the humans real preference be to not sacrifice the children? Remember, you said:

You can't decide your preference, preference is not what you actually do, it is what you should do

You haven't really elucidated this. You're either pulling an ought out of nowhere, or you're saying "preference is what you should do if you want to win". In the latter case, you still haven't explained why giving up the children is winning, and not doing so is not winning.

And the link you gave doesn't help at all, since, if we're going to be looking at moral impulses common to all cultures and humans, I'm pretty sure not sacrificing children is one of them. See: Jonathan Haidt

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-05T01:14:13.737Z · LW(p) · GW(p)

Okay, so I'll ask again: why couldn't the humans real preference be to not sacrifice the children?
[...]
In the latter case, you still haven't explained why giving up the children is winning, and not doing so is not winning.

It seems like you are seeing my replies as soldier-arguments for the object-level question about the sacrifice of children, stumped on a particular conclusion that sacrificing children is right, while I'm merely giving opinion-neutral meta-comments about the semantics of such opinions. (I'm not sure I'm reading this right.)

You can't decide your preference, preference is not what you actually do, it is what you should do.

You haven't really elucidated this. You're either pulling an ought out of nowhere, or you're saying "preference is what you should do if you want to win".

Preference defines what constitutes winning: your actions rank high in the preference order if they determine a world that ranks high in the preference order. Preference can't be reduced to winning or actions, as these are all sides of the same structure.

Replies from: jwdink
comment by jwdink · 2009-08-05T04:38:50.313Z · LW(p) · GW(p)

It seems like you are seeing my replies as soldier-arguments for the object-level question about the sacrifice of children, stumped on a particular conclusion that sacrificing children is right, while I'm merely giving opinion-neutral meta-comments about the semantics of such opinions. (I'm not sure I'm reading this right.)

...so you're NOT attempting to respond to my original question? My original question was "what's irrational about not sacrificing the children?"

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-05T05:58:04.848Z · LW(p) · GW(p)

There is nothing intrinsically irrational about any action, rationality or irrationality depends on preference, which is the point I was trying to communicate. Any question about "rationality" of a decision is a question about correctness of preference-optimization. So, my reply to your original question is that the question is ill-posed, and the content of the reply was explanation as to why.

Replies from: jwdink
comment by jwdink · 2009-08-05T16:57:57.956Z · LW(p) · GW(p)

Okay, that's fine. So you'll agree that the various people--who were saying that the decision made in the show was the rational route--these people were speaking (at least somewhat) improperly?

comment by eirenicon · 2009-07-29T20:02:58.871Z · LW(p) · GW(p)

Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view, I'm just trying to figure out why it's irrational, while the utilitarian sacrifice of children is more rational.

If a decision decreases [personal] utility, is it not irrational?

Some people would say that it is dishonourable to hand over your wallet to a crackhead with a knife. When I was actually in that situation, though (hint: not as the crackhead), I didn't think about my dignity. I just thought that refusing would be the dumbest, least rational possible decision. The only time I've ever been in a fight is when I couldn't run away. If behaving honourably is rational then being rational is a good way to get killed. I'm not saying that being rational always leads to morally satisfactory decisions. I am saying that sometimes you have to choose moral satisfaction over rationality... or the reverse.

As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless to effect it) survive honourably?

Replies from: jwdink
comment by jwdink · 2009-07-29T20:12:57.501Z · LW(p) · GW(p)

If a decision decreases utility, is it not irrational?

I don't see how you could go about proving this.

As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless to effect it) survive honourably?

Well, wait. Are we dealing with the happiness that results in the aftermath, or are we dealing with the moral value of the actions themselves? Surely these two are discrete. Don't the intentions behind an action factor into the morality of the action? Or are the results all that matter? If intentions are irrelevant, does that mean that inanimate objects (entities without intentions, good or bad) can do morally good things? If a tornado diverts from a city at the last minute, was that a morally good action?

I think intentions matter. It might be the case that, 100 years later, the next generation will be happier. That doesn't mean that the decision to sacrifice those children was the morally good decision-- in the same way that, despite the tornado-free city being a happier city, it doesn't mean the tornado's diversion was a morally good thing.

Replies from: eirenicon
comment by eirenicon · 2009-07-29T20:44:26.933Z · LW(p) · GW(p)

I should have said "decreases personal utility." When I say rationality, I mean rationality. Decreasing personal utility is the opposite of "winning".

Replies from: jwdink
comment by jwdink · 2009-07-29T21:19:01.187Z · LW(p) · GW(p)

Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

Couldn't these people care about not sacrificing autonomy, and therefore this would be a value that they're successfully fulfilling?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-29T21:27:48.021Z · LW(p) · GW(p)

Yes they could care about either outcome. The question is whether they did, whether their true hidden preferences said that a given outcome is preferable.

Replies from: jwdink
comment by jwdink · 2009-07-29T22:25:48.298Z · LW(p) · GW(p)

What would be an example of a hidden preference? The post to which you linked didn't explicitly mention that concept at all.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-29T22:37:27.302Z · LW(p) · GW(p)

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

Replies from: jwdink, pjeby
comment by jwdink · 2009-07-30T19:00:07.651Z · LW(p) · GW(p)

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

Okay... so again, I'll ask... why is it irrational to NOT sacrifice the children? How does it go against hidden preference (which, perhaps, it would be prudent to define)?

Replies from: orthonormal
comment by orthonormal · 2009-07-30T19:37:58.604Z · LW(p) · GW(p)

I understand your frustration, since we don't seem to be saying much to support our claims here. We've discussed relevant issues of metaethics quite heavily on Less Wrong, but we should be willing to enter the debate again as new readers arrive and raise their points.

However, there's a lot of material that's already been said elsewhere, so I hope you'll pardon me for pointing you towards a few early posts of interest right now instead of trying to summarize it in one go.

Torture vs. Dust Specks kicked off the arguing; Eliezer began arguing for his own position in Circular Altruism and The "Intuitions" Behind "Utilitarianism". Searching LW for keywords like "specks" or "utilitarian" should bring up more recent posts as well, but these three sum up more or less what I'd say in response to your question.

(There's a whole metaethics sequence later on (see the whole list of Eliezer's posts from Overcoming Bias), but that's less germane to your immediate question.)

Replies from: jwdink
comment by jwdink · 2009-07-30T21:26:32.492Z · LW(p) · GW(p)

Oh, it's no problem if you point me elsewhere. I should've specified that that would be fine. I just wanted some definition. The only link that was given, I believe, was one defining rationality. Thanks for the links, I'll check them out.

comment by pjeby · 2009-07-30T05:45:35.077Z · LW(p) · GW(p)

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

It's especially hard if you use models based on utility maximizing rather than on predicted error minimization, or if you assume that human values are coherent even within a given individual, let alone humanity as a whole.

That being said, it is certainly possible to map a subset of one's preferences as they pertain to some specific subject, and to do a fair amount of pruning and tuning. One's preferences are not necessarily opaque to reflection; they're mostly just nonobvious.

comment by Vladimir_Nesov · 2009-07-29T17:44:49.978Z · LW(p) · GW(p)

See Shut up and multiply.

Replies from: jwdink
comment by jwdink · 2009-07-29T19:05:28.588Z · LW(p) · GW(p)

Yeah, the sentiment expressed in that post is usually my instinct too.

But then again, that's the problem: it's an instinct. If my utilitarian impulse is just another impulse, then why does it automatically outweigh any other moral impulses I have, such as a value of human autonomy? If my utilitarian impulse is NOT just an impulse, but somehow is objectively more rational and outranks other moral impulses, then I have yet to see a proof of this.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-07-29T19:47:52.693Z · LW(p) · GW(p)

"shut up and multiply" is, in principle, a way to weigh various considerations like the value of autonomy, etc etc etc...

It's not "here's shut up and multiply" vs "some other value here", but "plug in your values + actual current situation including possible courses of action and compute"

Some of us are then saying "it is our moral position that human lives are so incredibly valuable that a measure of dignity for a few doesn't outweigh the massively greater suffering/etc that would result from the implied battle that would ensue from the 'battle of honor' route"

Replies from: jwdink
comment by jwdink · 2009-07-29T20:08:13.745Z · LW(p) · GW(p)

Ah, then I misunderstood. A better way of phrasing my challenge might be: it sounds like we might have different algorithms, so prove to me that your algorithm is more rational.

No one has answered this challenge.

comment by Psy-Kosh · 2009-07-29T17:37:00.695Z · LW(p) · GW(p)

If you take an action that you know will result in a greater amount of death/suffering, just for the sake of your own personal dignity, do you actually deserve any dignity from that?

ie, one can rephrase the situation as "are you so selfish as to put your own personal dignity above many many human lives?" (note, I have not watched the Torchwood episodes in question, merely going at this based on the description here.)

IF fighting them or otherwise resisting is known to be futile and IF there's sufficient reason to suspect that they will keep their word on the matter, then the question becomes "just about everyone gets killed" vs "most survive, but some number of kids get taken to suffer, well, whatever the experience of being used as a drug is. (eventual death within a human lifespan? do they remain conscious long past that? etc etc etc...)"

That doesn't make the second option "good", but if the choices available amount to those two options, then we need to choose one.

"Everyone gets killed, but at least we get some 'warm fuzzies of dignity'" would actually seem to potentially be a highly immoral decision.

Having said that... don't give up searching for alternatives or ways to fight the monsters-in-question that don't result in automatic defeat. What's said above applies to the pathological dilemma in the least convenient possible world where we assume there really are no plausible alternatives.

Replies from: jwdink
comment by jwdink · 2009-07-29T19:24:17.184Z · LW(p) · GW(p)

Well, sure, when you phrase it like that. But your language begs the question: it assumes that the desire for dignity/autonomy is just an impulse/fuzzy feeling, while the desire to preserve human life is an objective good that is the proper aim for all (see my other posts above). This sounds probable to me, but it doesn't sound obvious / rationally derived / etc.

I could, after all, phrase it in the reverse manner. IF I assume that dignity/autonomy is objectively good:

then the question becomes "everyone preserves their objectively good dignity" vs. "just about everyone loses their dignity for destroying human autonomy, but we get that warm fuzzy feeling of saving some people." In this situation, "Everyone loses their dignity, but at least they get to survive--in the way that any other undignified organism (an amoeba) survives" would actually seem to be a highly immoral decision.

I'm not endorsing either view, necessarily. I'm just trying to figure out how you can claim one of these views is more rational or logical than the other.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-07-29T19:49:27.974Z · LW(p) · GW(p)

Well then... I'd say a morality that puts the dignity of a few people (the decision makers) as having more importance than, well, the lives and well being of the majority of the human race is not a very good morality.

ie, I am claiming "it seems to be that a consequence of my morality is that..."

Alternately "sure, maybe you value 'battle of honor' more than human lives, but then your values don't seem to count as something I'd call morality"

Replies from: jwdink
comment by jwdink · 2009-07-29T20:13:51.509Z · LW(p) · GW(p)

Well then... I'd say a morality that puts the dignity of a few people (the decision makers) as having more importance than, well, the lives and well being of the majority of the human race is not a very good morality.

Okay. Would you say this statement is based on reason?