[SEQ RERUN] One Life Against the World
post by Tyrrell_McAllister · 2011-06-12T01:38:39.413Z · LW · GW · Legacy · 17 comments
Today's post, One Life Against the World, was originally published on 18 May 2007. A summary (taken from the LW wiki):
Saving one life and saving the whole world provide the same warm glow. But, however valuable a life is, the whole world is billions of times as valuable. The duty to save lives doesn't stop after the first saved life. Choosing to save one life when you could have saved two is as bad as murder.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Scope Insensitivity, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
17 comments
Comments sorted by top scores.
comment by JLrep · 2011-06-12T07:12:29.073Z · LW(p) · GW(p)
The premise that human lives can be treated in straightforward arithmetical terms (e.g., that two lives are twice as valuable as one, and twelve are three times as valuable as four, and so on) seems to me to lead to some disquieting places.
Specifically, it seems that any time we can definitely save two lives by killing one person, we ought to do so without hesitation, or at least seriously consider it. Yes, there is damage done by the killing—grieving loved ones, the loss of a good chef—but if the value of a human life is as high as we tend to think it is, it probably outweighs that damage. If six people will almost certainly survive with organ transplants, and will almost certainly die without, and the only match is the taxi driver outside, then get her on the operating table ASAP. If any of the sick people are paramedics, or if the taxi driver tends not to pay her credit card bills, then let us move all the quicker.
The only barrier to such behavior would be a demand for greater and more methodical inquiry into the precise value of a human life, the bearing of personal factors on that value (age, health, quality of life), and the overall effect of whether particular people live or die (a person might be more worth saving if they are working on a cure for a deadly disease—but also if they are a pillar of the community whose death would tend to cause depression in those around them). And if this barrier is the only thing standing in our way, it would seem that we ought to be doing everything we can to overcome it, so that we can get started on the proactive business.
↑ comment by MixedNuts · 2011-06-12T13:02:35.212Z · LW(p) · GW(p)
Nitpick: most utilitarians would refuse to harvest the taxi driver's organs, because bad things happen when people don't trust doctors.
But yeah, that's pretty much what we think (see the trolley problem). Utilitarians view "Should I kill one to save two?" as a choice between one death and two deaths, which is pretty straightforward. Whether you have blood on your hands isn't relevant - your feelings of guilt aren't worth one human life.
And refusing linear aggregation is disquieting as well. The sick child pleads for medication, and you rush to pay for it - then someone tells you "There are a million healthy children over there" and you say "Okay then" and go buy yourself a laptop.
↑ comment by JLrep · 2011-06-13T08:20:56.618Z · LW(p) · GW(p)
I have a related question, as one still new to lesswrong: are there existing sequences on the philosophy behind/connected to utilitarianism, by which I mean the notion that human lives, or life in general, have value? I assume there is either a sequence regarding this, or else a consensus which is generally accepted by the readers of this site (a consensus which, I hope, is nevertheless written out somewhere).
↑ comment by fubarobfusco · 2011-06-12T20:12:16.448Z · LW(p) · GW(p)
I've decided I really don't like a lot of ethics thought-experiments. Making people contemplate a horrible but artificially constrained scenario gets them to react strongly, but the constraints placed on the thought-experiment block us from using it to reason about almost everything that actually motivates that strong reaction.
Part of a real-world reason not to push someone in front of a runaway trolley to stop it is that it might not work. The person might fight back; the trolley's brakes might not actually be failing; the person might not be heavy enough to stop the trolley. But the trolley problem requires that we set these practical considerations aside and consider exactly two world-branches: kill the one person, or kill the five.
Another part of a real-world reason not to push someone in front of a runaway trolley is that other people might do bad things to you and your loved ones because you did it. You might go to jail, lose your job, be unable to provide for your kids, be dumped by your spouse or partner. If you're a doctor who saves lives every week, it would be pretty evil for you to throw your career away for the chance of saving a few people on a trolley track. If you're working on existential risks, your getting put in jail might literally cost us the world. But the trolley problem doesn't ask us to think about those consequences, just the immediate short-term ones: kill the one person, or kill the five.
In other words, the trolley problem doesn't ask us to exercise our full wisdom to solve a tricky situation. It asks us to be much stupider than we are in ordinary life; to blank out most of the likely consequences; to ignore many of the strongest and most worthwhile reasons that we might have to do (or refrain from doing) something. The only way the suggestion to reduce moral decision-making to "5 deaths > 1 death" can sound even remotely reasonable is to turn off most of your brain.
(I should add: My friend with a master's degree in philosophy thinks I'm totally missing the point of the distinction between ethical philosophy and applied morality.)
comment by AlphaOmega · 2011-06-12T08:38:48.096Z · LW(p) · GW(p)
Maximizing human life is an absurd idea in general. Does Yudkowsky not believe in Malthusian limits, and does he really take seriously such fantastic notions as "intergalactic civilizations"? Maybe he should rethink his ethics to incorporate more realistic notions like limits and sustainability and fewer tropes from science fiction.
Step back and take a look at yourselves and the state of the world, folks. Monkeys at keyboards fantasizing about colonizing other galaxies and their computers going FOOM! while their civilization crumbles is quite an amusing spectacle!
↑ comment by MixedNuts · 2011-06-12T12:53:26.169Z · LW(p) · GW(p)
"Rethink his ethics" because you think his goal is impossible? That's the purest example of sour grapes, like, ever.
Also, jaded cynicism is worthless. If civilization is collapsing, go prop it up.
↑ comment by AlphaOmega · 2011-06-12T18:14:28.872Z · LW(p) · GW(p)
Basing your ethics on far-fetched notions like "intergalactic civilizations" and the "Singularity" is the purest example of science fiction delusion. I would characterize this movement as an exercise in collective delusion -- very much like any other religion. Which I don't have a problem with, as long as you don't take your delusions too seriously and start thinking you have a holy mission to save the universe from the heathens. Unfortunately, that is exactly the sense I get from Mr. Yudkowsky and some of his more fanatical followers...
↑ comment by JenniferRM · 2011-06-12T19:37:50.073Z · LW(p) · GW(p)
See, this is one of the predictions people get totally wrong when they try to interpret singularity activism using religion as a template. It's not "saving the universe from the heathens"; it's "optimizing the universe on behalf of everyone, even people who are foolish, shortsighted, and/or misinformed".
Well-formed criticism (even if mean-spirited or uncharitable) is very useful, because it helps identify problems that can be corrected once recognized, and it reduces the likelihood of an insanity spiral due to people agreeing with each other as a form of monkey grooming while trying to work together effectively. Poorly formed criticism is just noise.
You should talk more about the detailed mechanisms and processes of magical thinking, and less about trivial pattern matches to science fiction.
↑ comment by AlphaOmega · 2011-06-12T20:17:14.901Z · LW(p) · GW(p)
Please spare me your "optimizations on my behalf" and refrain from telling me what I should talk about. Your language gives you away -- it's the same old grandiose totalitarian mindset in a new guise. Are these criticisms well-formed enough for you?
↑ comment by JenniferRM · 2011-06-13T00:04:44.191Z · LW(p) · GW(p)
Yes, thank you, that's much more precise :-)
Fears of competitive exclusion and loss of autonomy seem entirely reasonable issues to be raised by anyone who thoughtfully considers the status quo trajectory of exponential technological improvements and looming resource limitations. However, it seems to me that singularitarians are generally aiming to respond honestly and ethically to these concerns, rather than actively doing something that would make the concerns more pressing.
If this isn't clear to people who know enough to troll with as much precision and familiarity as you, then I'd guess that something might be going wrong somewhere. Can you imagine something that you could see in this community that would allay some of your political concerns? What would constitute counter evidence for future scenarios whose prospects make you unhappy?
↑ comment by MixedNuts · 2011-06-12T19:40:20.851Z · LW(p) · GW(p)
(Emotional level note: Please upgrade your politeness level. I've been rude earlier, but escalating is a bad move even then; I'm de-escalating now. Your current politeness level is generating signs of treating debate as a conflict, and of trolling.)
Basing your ethics
Can you clarify that phrase? I can only parse it as "deriving your ethics from", but ethical systems are derived from everyday observations like "Hey, it seems bad when people die", then reasoning about them. Then the ethics exist, and "intergalactic civilizations are desirable" comes from them.
Maybe you meant "designating those notions as the most desirable things"? They are consequences of the ethical system, yeah, but "The thing you desire most is impossible", while bad news, is no reason to change what you desire. (Which is why I called it sour grapes.)
delusion
You seem to confuse "A positive Singularity is desirable" (valuing lives, ethical systems) and "A positive Singularity is likely" (pattern-matching with sci-fi).
science fiction delusion
You are invoking the absurdity heuristic. "Intergalactic civilizations and singularities pattern-match science fiction, rather than newspapers." This isn't bad if you need a three-second judgement, but it is quite fallible (e.g., relativity, interracial marriage, atheism). It would be better to engage with the meat of the argument (why smarter-than-human intelligence is possible in principle, why AIs go flat or FOOM, why the stakes are high, why a supercritical AI is likely in practice (I don't actually know that one)), pinpoint something in particular, and say "That can't possibly be right" (backing it up with a model, a set of historical observations, or a gut feeling).
[religious vocabulary]
It's common knowledge on LW that both the rationality thing (LW) and the AI things (SIAI) are at unusually high risk of becoming cultish. If you can point to a particular problem, please do so; but reasoning by analogy ("They believe weird things, so do religions, therefore they're like a religion") proves little. (You know what else contained carbon? HITLER!)
you have a holy mission to save the universe
Are we talking feasibility, or desirability?
If feasibility, again, please point out specific problems. Or alternate ways to save the universe (incrementally, maybe, with charity for the poorest like VillageReach, or specialized research like SENS). Or more urgent risks to address ("civilization crumbles"). Or reasons why all avenues for big change are closed, and actions that might possibly slightly increase the probability of improving the world a little.
If desirability, well, yeah. People are dying. I need to stop that. Sure, it's hubris and reaching above myself, sure I'm going to waste a lot of money on the equivalent of alchemy and then do it again on the next promising project (and maybe get outright scammed at some point), sure after all that I'm going to fail anyway, but, you know, Amy is dead and that shouldn't happen to anyone else.
↑ comment by steven0461 · 2011-06-12T20:04:04.772Z · LW(p) · GW(p)
Your current politeness level is generating signs of treating debate as a conflict, and of trolling.
Right, so why feed him?
↑ comment by MixedNuts · 2011-06-12T20:10:05.211Z · LW(p) · GW(p)
Because honest debaters can think they're matching each other's politeness level and go from "Hey, you have a bug there" to "Choke on a bucket of cocks". If AlphaOmega refuses to de-escalate, or if ey still looks like a troll when polite, I'll shrug and walk away.
"Yo momma's a cultist" is worthless, but be wary of ignoring all dissenters - evaporative cooling happens. (OTOH, Usenet.)
Edit: Aaand yup, ey's an ass. Oh well, that'll teach me a lesson.
↑ comment by AlphaOmega · 2011-06-12T20:36:54.807Z · LW(p) · GW(p)
Trolls serve an important function in the memetic ecology. We are the antibodies against outbreaks of ideological insanity and terminal groupthink. I've developed an entire philosophy of trolling, and am obligated to engage in it as a kind of personal jihad.
↑ comment by jimrandomh · 2011-06-12T20:47:49.943Z · LW(p) · GW(p)
I've developed an entire philosophy of trolling, and am obligated to engage in it as a kind of personal jihad.
According to the web site linked in your profile, you are attempting to actively poison the memetic ecology by automated means. I'm not sure how to answer that, given that the whole site goes far over the top with comic book villainy, except to say that this particular brand of satire is probably dangerous to your mental health.
↑ comment by AlphaOmega · 2011-06-12T21:16:33.716Z · LW(p) · GW(p)
That site is obsolete. I create new sites every few months to reflect my current coordinates within the Multiverse of ideas. I am in the process of launching new "Multiversalism" memes which you can find at seanstrange.blogspot.com
There is no Universal truth system. In the language of cardinal numbers, Nihilism = 0, Universalism = 1, and Multiversalism = infinity.