Circular Altruism

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-22T18:00:00.000Z · LW · GW · Legacy · 310 comments

Followup to: Torture vs. Dust Specks, Zut Allais, Rationality Quotes 4

Suppose that a disease, or a monster, or a war, or something, is killing people.  And suppose you only have enough resources to implement one of the following two options:

  1. Save 400 lives, with certainty.
  2. Save 500 lives, with 90% probability; save no lives, 10% probability.

Most people choose option 1.  Which, I think, is foolish; because if you multiply 500 lives by 90% probability, you get an expected value of 450 lives, which exceeds the 400-life value of option 1.  (Lives saved don't diminish in marginal utility, so this is an appropriate calculation.)
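In code, the arithmetic is just this (a minimal sketch in Python, using only the numbers from the post):

```python
# Expected number of lives saved under each option.
option_1 = 400                 # save 400 lives, with certainty
option_2 = 0.90 * 500          # save 500 lives with 90% probability, else none

print(option_1, option_2)      # 400 450.0 -> option 2 has the higher expectation
```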

"What!" you cry, incensed.  "How can you gamble with human lives? How can you think about numbers when so much is at stake?  What if that 10% probability strikes, and everyone dies?  So much for your damned logic!  You're following your rationality off a cliff!"

Ah, but here's the interesting thing.  If you present the options this way:

  1. 100 people die, with certainty.
  2. 90% chance no one dies; 10% chance 500 people die.

Then a majority choose option 2.  Even though it's the same gamble.  You see, just as a certainty of saving 400 lives seems to feel so much more comfortable than an unsure gain, so too, a certain loss feels worse than an uncertain one.
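One way to see that it is literally the same gamble is to write each framing as a distribution over how many of the 500 people die; a small sketch (Python, numbers from the post):

```python
# Each framing written as a distribution {number of deaths: probability} over the
# same 500 people.
certain_as_saves  = {500 - 400: 1.0}              # "save 400 with certainty"
certain_as_deaths = {100: 1.0}                    # "100 people die, with certainty"
assert certain_as_saves == certain_as_deaths

gamble_as_saves  = {500 - 500: 0.9, 500 - 0: 0.1} # "save 500 at 90%; save none at 10%"
gamble_as_deaths = {0: 0.9, 500: 0.1}             # "90% no one dies; 10% 500 die"
assert gamble_as_saves == gamble_as_deaths        # same gamble, different description
```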

You can grandstand on the second description too:  "How can you condemn 100 people to certain death when there's such a good chance you can save them?  We'll all share the risk!  Even if it was only a 75% chance of saving everyone, it would still be worth it - so long as there's a chance - everyone makes it, or no one does!"

You know what?  This isn't about your feelings.  A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan.  Does computing the expected utility feel too cold-blooded for your taste?  Well, that feeling isn't even a feather in the scales, when a life is at stake.  Just shut up and multiply.

Previously on Overcoming Bias, I asked what was the least bad, bad thing that could happen, and suggested that it was getting a dust speck in your eye that irritated you for a fraction of a second, barely long enough to notice, before it got blinked away.  And conversely, a very bad thing to happen, if not the worst thing, would be getting tortured for 50 years.

Now, would you rather that a googolplex people got dust specks in their eyes, or that one person was tortured for 50 years?  I originally asked this question with a vastly larger number - an incomprehensible mathematical magnitude - but a googolplex works fine for this illustration.

Most people chose the dust specks over the torture.  Many were proud of this choice, and indignant that anyone should choose otherwise:  "How dare you condone torture!"

This matches research showing that there are "sacred values", like human lives, and "unsacred values", like money.  When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).

My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective.  The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life.  After rejecting the report, the agency decided not to implement the measure.

Trading off a sacred value (like refraining from torture) against an unsacred value (like dust specks) feels really awful.  To merely multiply utilities would be too cold-blooded - it would be following rationality off a cliff...

But let me ask you this.  Suppose you had to choose between one person being tortured for 50 years, and a googol people being tortured for 49 years, 364 days, 23 hours, 59 minutes and 59 seconds.  You would choose one person being tortured for 50 years, I do presume; otherwise I give up on you.

And similarly, if you had to choose between a googol people tortured for 49.9999999 years, and a googol-squared people being tortured for 49.9999998 years, you would pick the former.

A googolplex is ten to the googolth power.  That's a googol/100 factors of a googol.  So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort, and multiplying by a factor of a googol each time, until we choose between a googolplex people getting a dust speck in their eye, and a googolplex/googol people getting two dust specks in their eye.
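A schematic of that chain (a sketch only; the per-step harm values are illustrative placeholders on an arbitrary 0-to-1 scale, not anything specified in the post):

```python
googol = 10 ** 100

# Schematic of the chain: at step i there are googol**i people, each suffering a
# per-person harm that slides very gradually from 1.0 ("50 years of torture") down
# toward ~0 ("a dust speck").  The linear decrement is an illustrative placeholder;
# the argument only needs each step to lower the harm slightly while multiplying
# the headcount by a googol.
steps = googol // 100   # a googolplex is googol/100 factors of a googol

def chain(i):
    """(log10 of number of people, per-person harm) at step i of the gradation."""
    log10_people = 100 * i          # googol**i = 10**(100*i) people
    harm = 1.0 - i / steps          # 1.0 at i = 0, sliding toward ~0 at i = steps
    return log10_people, harm

print(chain(0))        # (0, 1.0): one person, 50 years of torture
print(chain(1))        # (100, ~1.0): a googol people, tortured a hair less
# ...and at i = steps, a googolplex people each getting a dust speck.
```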

If you find your preferences are circular here, that makes rather a mockery of moral grandstanding.  If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, you may have fun driving, but you aren't going anywhere.  Maybe you think it a great display of virtue to choose for a googolplex people to get dust specks rather than one person being tortured.  But if you would also trade a googolplex people getting one dust speck for a googolplex/googol people getting two dust specks et cetera, you sure aren't helping anyone.  Circular preferences may work for feeling noble, but not for feeding the hungry or healing the sick. 

Altruism isn't the warm fuzzy feeling you get from being altruistic.  If you're doing it for the spiritual benefit, that is nothing but selfishness.  The primary thing is to help others, whatever the means.  So shut up and multiply!

And if it seems to you that there is a fierceness to this maximization, like the bare sword of the law, or the burning of the sun - if it seems to you that at the center of this rationality there is a small cold flame -

Well, the other way might feel better inside you.  But it wouldn't work.

And I say also this to you:  That if you set aside your regret for all the spiritual satisfaction you could be having - if you wholeheartedly pursue the Way, without thinking that you are being cheated - if you give yourself over to rationality without holding back, you will find that rationality gives to you in return.

But that part only works if you don't go around saying to yourself, "It would feel better inside me if only I could be less rational."

Chimpanzees feel, but they don't multiply.  Should you be sad that you have the opportunity to do better?  You cannot attain your full potential if you regard your gift as a burden.

Added:  If you'd still take the dust specks, see Unknown's comment on the problem with qualitative versus quantitative distinctions.

310 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by anonymous9 · 2008-01-22T18:29:13.000Z · LW(p) · GW(p)

400 people die, with certainty.

Should that be 100?

comment by Paul_Crowley · 2008-01-22T18:29:40.000Z · LW(p) · GW(p)
  1. 400 people die, with certainty.
  2. 90% chance no one dies; 10% chance 500 people die.

ITYM 1. 100 people die, with certainty.

comment by CarlShulman · 2008-01-22T19:02:32.000Z · LW(p) · GW(p)

Care to test your skills against the Repugnant Conclusion? http://plato.stanford.edu/entries/repugnant-conclusion/

Replies from: Kingreaper, None, daniel-amdurer
comment by Kingreaper · 2010-07-22T00:21:09.348Z · LW(p) · GW(p)

A life barely worth living is worth living. I see no pressing need to disagree with the Repugnant Conclusion itself.

However, I suspect there is a lot of confusion between "a life barely worth living" and "a life barely good enough that the person won't commit suicide".

A life barely good enough that the person won't commit suicide is well into the negatives.

Replies from: steven0461
comment by steven0461 · 2010-07-22T00:27:14.659Z · LW(p) · GW(p)

Not to mention the confusion between "a life barely worth living" and "a life that has some typical number of bad experiences in it and barely any good experiences".

Replies from: AndyC
comment by AndyC · 2014-04-22T11:31:19.362Z · LW(p) · GW(p)

I don't understand why it's supposed to be somehow better to have more people, even if they are equally happy. 10 billion happy people is better than 5 billion equally happy people? Why? It makes no intuitive sense to me, I have no innate preference between the two (all else equal), and yet I'm supposed to accept it as a premise.

Replies from: AlexanderRM, altleft
comment by AlexanderRM · 2015-03-27T21:28:40.150Z · LW(p) · GW(p)

Isn't it usually brought up by people who want you to reject it as a premise, as an argument against hedonic positive utilitarianism?

Personally I do disagree with that premise and more generally with hedonic utilitarianism. My utility function is more like "choice" or "freedom" (an ideal world would be one where everyone can do whatever they want, and in a non-ideal one we should try to optimize to get as close to that as possible), so based on that I have no preference with regards to people who haven't been born yet, since they're incapable of choosing whether or not to be alive. (on the other hand my intuition is that bringing dead people back would be good if it were possible... I suppose that if the dead person didn't want to die at the moment of death, that would be compatible with my ideas, and I don't think it's that far off from my actual, intuitive reasons for feeling that way.)

comment by altleft · 2015-03-28T16:53:18.355Z · LW(p) · GW(p)

It makes some sense in terms of total happiness, since 10 billion happy people would give a higher total happiness than 5 billion happy people.

comment by [deleted] · 2014-06-23T02:56:18.646Z · LW(p) · GW(p)

But the Repugnant Conclusion is wrong. People who don't exist have no interest in existing; they don't have any interests, because they don't exist. To make the world a better place means making it a better place for people who already exist. If you add a new person to that pool of 'people who exist', then of course making the world a better place means making it a better place for that person as well. But there's no reason to go around adding imaginary babies (as in the example from part one of the linked article) to that pool for the sake of increasing total happiness. It's average happiness on a personal level -- not total happiness -- which makes people happy, and making people happy is sort of the whole point of 'making the world a better place'. Or else why bother? To be honest, the entire Repugnant Conclusion article felt a little silly to me.

comment by Jerdle (daniel-amdurer) · 2020-10-22T16:29:36.941Z · LW(p) · GW(p)

My answer to it is that it's a case of status quo bias. People see the world we live in as world A, and so status quo bias makes the repugnant conclusion repugnant. But, looking at the world, I see no reason to assume we aren't in world Z. So the question becomes, would it be acceptable to painlessly kill a large percentage of the population to make the rest happier, and the intuitive answer is no. But that is the same as saying world Z is better than world A, which is the repugnant conclusion.

comment by Elise_Conolly · 2008-01-22T19:26:58.000Z · LW(p) · GW(p)

Whilst your analysis of life-saving choices seems fairly uncontentious, I'm not entirely convinced that the arithmetic of different types of suffering adds together the way you assume. It seems at least plausible to me that where dust motes are individual points, torture is a section of a continuous line, and thus you can count the points, or you can measure the lengths of different lines, but no number of the former will add up to the latter.

Replies from: Kingreaper
comment by Kingreaper · 2010-07-22T00:05:39.887Z · LW(p) · GW(p)

A dust speck takes a finite time, not an instant. Unless I'm misunderstanding you, this makes them lines, not points.

Replies from: AndyC
comment by AndyC · 2014-04-22T11:36:40.829Z · LW(p) · GW(p)

You're misunderstanding. It has nothing to do with time -- it's not a time line. It means the dust motes are infinitesimal, while the torture is finite. A finite sum of infinitesimals is always infinitesimal.

Not that you really need to use a math analogy here. The point is just that there is a qualitative difference between specks of dust and torture. They're incommensurable. You cannot divide torture by a speck of dust, because neither one is a number to start with.

Replies from: AlexanderRM, dxu, Slider
comment by AlexanderRM · 2015-03-27T21:41:32.247Z · LW(p) · GW(p)

I think the dust motes vs. torture makes sense if you imagine a person being bombarded with dust motes for 50 years. I could easily imagine a continuous stream of dust motes being as bad as torture (although possibly the lack of variation would make it far less effective than what a skilled torturer could do).

Based on that, Eliezer's belief is just that the same number of dust motes spread out among many people is just as bad as one person getting hit by all of them. Which I will admit is a bit harder to justify. One possible way to make the argument is to think in terms of rules utilitarianism, and imagine a world where a huge number of people got the choice, then compare one where they all choose the torture vs. one where they all choose the dust motes- the former outcome would clearly be better. I'm pretty sure there are cases where this could be important in government policy.

comment by dxu · 2015-03-27T21:56:26.052Z · LW(p) · GW(p)

the dust motes are infinitesimal

This is an interesting claim. Either it implies that the human brain is capable of detecting infinitesimal differences in utility, or else it implies that you should have no preference between having a dust speck in your eye and not having one in your eye.

comment by Slider · 2019-05-20T18:30:08.575Z · LW(p) · GW(p)

There is a perfectly good way of treating this as numbers. Transfinite division is a thing. With X people experiencing infinitesimal discomfort and Y people experiencing finite discomfort, if X and Y are finite then torture is always worse. With X transfinite, dust specks could be worse. But in reverse, if you insist that the impacts are reals, i.e. finite, then there are finite multiples that overtake each other: for any r, y in R with r > 0 and y > r, there is a z such that rz > y.

comment by George_McCandless · 2008-01-22T19:28:34.000Z · LW(p) · GW(p)

I'm sorry, but I find this line of argument not very useful. If I remember correctly (which I may not be doing), a googolplex is larger than the estimated number of atoms in the universe. Nobody has any idea of what it implies except "really, really big", so when your concepts get up there, people have to do the math, since the numbers mean nothing. Most of us would agree that having a really, really large number of people bothered just a bit is better than having one person suffer for a long life. That has little to do with math and a lot to do with our perception of suffering and a feeling that each of us has only one life. Worrying about discontinuities in this kind of discussion seems almost puerile.

A more interesting discontinuity that we run into quite frequently is our willingness to make great efforts and sacrifices to save the lives of children and then decide that at the age of 18, young men cease to be children and we send them off to war. What happens in our brains when young men turn 18? Sure these 18 year olds are all testosterone fired up and looking for a fight, but the discontinuity of the moral logic is strange. Have you talked about this at all?

(By the way, one of the saddest museums in the world is in Salta, Argentina, where they display mummies of children who were made drunk and buried alive to placate a now long forgotten god, but that is getting off the point.)

Replies from: Strange7, Dojan
comment by Strange7 · 2011-07-02T15:34:36.871Z · LW(p) · GW(p)

What happens in our brains when young men turn 18?

They've probably already had sex once by then, and thus a fair chance to pass on their genes. Notice that we're not as eager to send 18-year-old women off to war.

comment by Dojan · 2011-10-27T15:22:56.394Z · LW(p) · GW(p)

Nobody has any idea of what it implies except "really, really big", so when your concepts get up there, people have to do the math, since the numbers mean nothing.

This applies just as much to numbers such as million and billion, which people mix up regularly; the problem, though, is that people don't do the math, despite not understanding the magnitudes of the numbers, and those numbers of people are actually around.

Personally, if I first try to visualize a crowd of a hundred people, and then a crowd of a thousand, the second crowd seems about three times as large. If I start with a thousand, and then try a hundred, this time around the hundred-people crowd seems a lot bigger than it did last time. And the bigger the numbers I try with, the worse it gets, and there is a long way to go to get to 7,000,000,000 (the number of people on Earth). All sorts of biases seem to be at work here, anchoring among them. Result: Shut up and multiply!

[Edit: Spelling]

Replies from: Normal_Anomaly, Dojan
comment by Normal_Anomaly · 2011-11-27T17:18:04.341Z · LW(p) · GW(p)

This is an excellent point, but your spelling errors are distracting. You said "av" seven times when you meant "a", and "ancoring" in the last line should be "anchoring".

Replies from: Dojan
comment by Dojan · 2011-11-28T06:07:13.423Z · LW(p) · GW(p)

Wow, I must have been half asleep when writing that...

comment by Dojan · 2011-12-02T17:08:12.501Z · LW(p) · GW(p)

This is further evidenced by the fact that most people don't know about the long and short scales, and never noticed.

comment by Unknown · 2008-01-22T19:34:05.000Z · LW(p) · GW(p)

One can easily make an argument like the torture vs. dust specks argument to show that the Repugnant Conclusion is not only not repugnant, but certainly true.

More intuitively, if it weren't true, we could find some population of 10,000 persons at some high standard of living, such that it would be morally praiseworthy to save their lives at the cost of a googolplex galaxies filled with intelligent beings. Most people would immediately say that this is false, and so the Repugnant Conclusion is true.

Replies from: AlexanderRM, dxu
comment by AlexanderRM · 2015-03-27T21:50:15.350Z · LW(p) · GW(p)

Note here that the difference is between the deaths of currently-living people, and preventing the births of potential people. In hedonic utilitarian terms it's the same, but you can have other utilitarian schemes (ex. choice utilitarianism as I commented above) where death either has an inherent negative value, or violates the person's preferences against dying.

BTW note that even if you draw no distinction, your thought experiment doesn't necessarily prove the Repugnant Conclusion. The third option is to say that because the Repugnant Conclusion is false, it must be that the automatic response to your thought experiment is incorrect, i.e. that it's OK to wipe out a googolplex galaxies full of people with lives barely worth living to save 10,000 people. Although I feel like most people, if they rejected the killing/preventing birth distinction, would go with the Repugnant Conclusion over that.

comment by dxu · 2015-03-27T21:57:48.750Z · LW(p) · GW(p)

Interestingly enough, I don't find the Repugnant Conclusion all that repugnant. Is there anyone else here who shares this intuition?

comment by Lee · 2008-01-22T19:49:56.000Z · LW(p) · GW(p)

Eliezer, I am skeptical that sloganeering ("shut up and calculate") will get you across this philosophical chasm: why do you define the best one-off choice as the choice that would be preferred over repeated trials?

comment by GreedyAlgorithm · 2008-01-22T20:24:55.000Z · LW(p) · GW(p)

Can someone please post a link to a paper on mathematics, philosophy, anything, that explains why there's this huge disconnect between "one-off choices" and "choices over repeated trials"? Lee?

Here's the way across the philosophical "chasm": write down the utility of the possible outcomes of your action. Use probability to find the expected utility. Do it for all your actions. Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences.

You might have a point if there existed a preference effector with incoherent preferences that could only ever effect one preference. I haven't thought a lot about that one. But since your incoherent preferences will show up in lots of decisions, I don't care if this specific decision will be "repeated" (note: none are ever really repeated exactly) or not. The point is that you'll just keep losing those pennies every time you make a decision.

  1. Save 400 lives, with certainty.
  2. Save 500 lives, with 90% probability; save no lives, 10% probability.

What are the outcomes? U(400 alive, 100 dead, I chose choice 1) = A, U(500 alive, 0 dead, I chose choice 2) = B, and U(0 alive, 500 dead, I chose choice 2) = C.

Remember that probability is a measure of what we don't know. The plausibility that a given situation is (will be) the case. If 1.0*A > 0.9*B + 0.1*C, then I prefer choice 1. Otherwise 2. Can you tell me what's left out here, or thrown in that shouldn't be? Which part of this do you have a disagreement with?
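For concreteness, a direct transcription of that comparison (Python; the utility values A, B, C are free parameters, and the linear-in-lives choice in the example call is an assumption, not something fixed by the comment):

```python
def preferred_choice(A, B, C):
    """Compare the two actions by expected utility.

    A = U(400 alive, 100 dead, I chose choice 1)
    B = U(500 alive,   0 dead, I chose choice 2)
    C = U(  0 alive, 500 dead, I chose choice 2)
    """
    eu_1 = 1.0 * A
    eu_2 = 0.9 * B + 0.1 * C
    return 1 if eu_1 > eu_2 else 2

# Example call assuming utility linear in lives saved (no diminishing marginal utility):
print(preferred_choice(A=400, B=500, C=0))   # -> 2, since 450 > 400
```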

Replies from: josinalvo
comment by josinalvo · 2014-12-05T19:00:50.664Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Prisoner%27s_dilemma#The_iterated_prisoners.27_dilemma

(just an example of such a disconnect, not a general defence of disconnects)

comment by Lee · 2008-01-22T20:32:44.000Z · LW(p) · GW(p)

Consider these two facts about me:

(1) It is NOT CLEAR to me that saving 1 person with certainty is morally equivalent to saving 2 people when a fair coin lands heads in a one-off deal.

(2) It is CLEAR to me that saving 1000 people with p=.99 is morally better than saving 1 person with certainty.

Models are supposed to hew to the facts. Your model diverges from the facts of human moral judgments, and you respond by exhorting us to live up to your model.

Why should we do that?

Replies from: DPiepgrass
comment by DPiepgrass · 2019-12-10T17:45:36.213Z · LW(p) · GW(p)

In a world sufficiently replete with aspiring rationalists there will be not just one chance to save lives probabilistically, but (over the centuries) many. By the law of large numbers, we can be confident that the outcome of following the expected-value strategy consistently (even if any particular person only makes a choice like this zero or one times in their life) will be that more total lives will be saved.
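A quick simulation of that law-of-large-numbers point (a sketch; the per-decision numbers are the ones from the post, and the number of trials is an arbitrary assumption):

```python
import random

def total_lives_saved(strategy, trials=100_000, seed=0):
    """Total lives saved over many independent decisions of the post's form."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        if strategy == "certain":
            total += 400                                  # save 400 for sure
        else:                                             # "expected-value" strategy
            total += 500 if rng.random() < 0.9 else 0     # save 500 at 90%, else none
    return total

print(total_lives_saved("certain"))          # 40,000,000
print(total_lives_saved("expected-value"))   # ~45,000,000: more lives saved overall
```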

Some people believe that "being virtuous" (or suchlike) is better than achieving a better society-level outcome. To that view I cannot say it better than Eliezer: "A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan."

I see a problem with Eliezer's strategy that is psychological rather than moral: if 500 people die, you may be devastated, especially if you find out later that the chance of failure was, say, 50% rather than 10%. Consequentialism asks us to take this into account. If you are a general making battle decisions, which would weigh on you more? The death of 500 (in your effort to save 100), or abandoning 100 to die at enemy hands, knowing you had a roughly 90% chance to save them? Could that adversely affect future decisions? (in specific scenarios we must also consider other things, e.g. in this case whether it's worth the cost in resources - military leaders know, or should know, that resources can be equated with lives as well...)

Note: I'm pretty confident Eliezer wouldn't object to you using your moral sense as a tiebreaker if you had the choice between saving one person with certainty and two people with 50% probability.

comment by Roland2 · 2008-01-22T20:34:52.000Z · LW(p) · GW(p)

Torture vs dust specks, let me see:

What would you choose for the next 50 days:

  1. Removing one milliliter of the daily water intake of 100,000 people.
  2. Removing 10 liters of the daily water intake of 1 person.

The consequence of choice 2 would be the death of one person.

Yudkowsky would choose 2, I would choose 1.

This is a question of threshold. Below certain thresholds things don't have much effect. So you cannot simply add up.

Another example:

  1. Put 1 coin on the head of each of 1,000,000 people.
  2. Put 100,000 coins on the head of one guy.

What do you choose? Can we add up the discomfort caused by the one coin on each of 1,000,000 people?

Replies from: Kingreaper, AgentME
comment by Kingreaper · 2010-07-22T00:16:18.705Z · LW(p) · GW(p)

These are simply false comparisons.

Had Eliezer talked about torturing someone through the use of a googolplex of dust specks, your comparison might have merit, but as is it seems to be deliberately missing the point.

Certainly, speaking for someone else is often inappropriate, and in this case is simple strawmanning.

Replies from: bgaesop
comment by bgaesop · 2011-01-02T01:27:28.413Z · LW(p) · GW(p)

I really don't see how his comparison is wrong. Could you explain in more depth, please?

Replies from: ata
comment by ata · 2011-01-02T01:52:52.314Z · LW(p) · GW(p)

The comparison is invalid because the torture and dust specks are being compared as negatively-valued ends in themselves. We're comparing U(torture one person for 50 years) and U(dust speck one person) * 3^^^3. But you can't determine whether to take 1 ml of water per day from 100,000 people or 10 liters of water per day from 1 person by adding up the total amount of water in each case, because water isn't utility.

Replies from: bgaesop
comment by bgaesop · 2011-01-04T01:19:59.355Z · LW(p) · GW(p)

Perhaps this is just my misunderstanding of utility, but I think his point was this: I don't understand how adding up utility is obviously a legitimate thing to do, just like how you claim that adding up water denial is obviously not a legitimate thing to do. In fact, it seems to me as though the negative utility of getting a dust speck in the eye is comparable to the negative utility of being denied a milliliter of water, while the negative utility of being tortured for a lifetime is more or less equivalent to the negative utility of dying of thirst. I don't see why it is that the one addition is valid while the other isn't.

If this is just me misunderstanding utility, could you please point me to some readings so that I can better understand it?

Replies from: ata
comment by ata · 2011-01-07T06:18:04.877Z · LW(p) · GW(p)

I don't understand how adding up utility is obviously a legitimate thing to do

To start, there's the Von Neumann–Morgenstern theorem, which shows that given some basic and fairly uncontroversial assumptions, any agent with consistent preferences can have those preferences expressed as a utility function. That does not require, of course, that the utility function be simple or even humanly plausible, so it is perfectly possible for a utility function to specify that SPECKS is preferred over TORTURE. But the idea that doing an undesirable thing to n distinct people should be around n times as bad as doing it to one person seems plausible and defensible, in human terms. There's some discussion of this in The "Intuitions" Behind "Utilitarianism".

(The water scenario isn't comparable to torture vs. specks mainly because, compared to 3^^^3, 100,000 is approximately zero. If we changed the water scenario to use 3^^^3 also, and if we assume that having one fewer milliliter of water each day is a negatively terminally-valued thing for at least a tiny fraction of those people, and if we assume that the one person who might die of dehydration wouldn't otherwise live for an extremely long time, then it seems that the latter option would indeed be preferable.)
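To make the additive-aggregation reading concrete, here is a minimal sketch assuming strictly linear aggregation and made-up per-person disutility numbers (the comparison is done in log space, since a googolplex of anything overflows ordinary floats; 3^^^3 would only make the gap larger):

```python
from math import log10

# Made-up per-person disutilities on a log10 scale, purely for illustration.
LOG_SPECK   = -12.0   # one dust speck: tiny, but assumed nonzero
LOG_TORTURE = 9.0     # 50 years of torture: enormous

log10_people_specks  = 10 ** 100   # log10(googolplex) = a googol
log10_people_torture = 0           # log10(1)

# Additive ("n times as bad") aggregation: total disutility = count * per-person,
# so the logs simply add.
log_total_specks  = log10_people_specks + LOG_SPECK
log_total_torture = log10_people_torture + LOG_TORTURE

# With any nonzero per-speck disutility, the googolplex of specks dominates.
print(log_total_specks > log_total_torture)   # True
```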

Replies from: Will_Sawin, roystgnr
comment by Will_Sawin · 2011-01-09T23:46:59.407Z · LW(p) · GW(p)

In particular, VNM connects utility with probability, so we can use an argument based on probability.

One person gaining N utility should be equally good no matter who it is, if utility is properly calibrated person-to-person.

One person gaining N utility should be equally good as one randomly selected person out of N people gaining N utility.

Now we analyze it from each person's perspective. They each have a 1/N chance of gaining N utility. This is 1 unit of expected utility, so they find it as good as surely gaining one unit of utility.

If they're all indifferent between one person gaining N and everyone gaining 1, who's to disagree?
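A numeric restatement of that argument, with N as an arbitrary free parameter (a sketch, not anything from the comment beyond the 1/N-chance-of-N step):

```python
from fractions import Fraction

N = 1000   # arbitrary number of people

# Lottery A: one person, chosen uniformly at random from the N, gains N utility.
# From any single person's point of view:
p_win = Fraction(1, N)
expected_utility_A = p_win * N      # 1/N chance of N = 1 unit of expected utility

# Lottery B: every one of the N people surely gains 1 unit of utility.
expected_utility_B = 1

# Each person, reasoning by expected utility, is indifferent between A and B;
# that per-person indifference is the step the argument rests on.
assert expected_utility_A == expected_utility_B
```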

Replies from: bgaesop
comment by bgaesop · 2011-01-15T21:55:04.531Z · LW(p) · GW(p)

One person gaining N utility should be equally good no matter who it is, if utility is properly calibrated person-to-person.

That... just seems kind of crazy. Why would it be equally Good to have Hitler gain a bunch of utility as to have me, for example, gain that? Or to have a rich person who has basically everything they want gain a modest amount of utility, versus a poor person who is close to starvation gaining the same? If this latter example isn't taking into account your calibration person to person, could you give an example of what could be given to Dick Cheney that would be of equivalent Good as giving a sandwich and a job to a very hungry homeless person?

If they're all indifferent between one person gaining N and everyone gaining 1, who's to disagree?

I for one would not prefer that, in most circumstances. This is why I would prefer definitely being given the price of a lottery ticket to playing the lottery (even assuming the lottery paid out 100% of its intake).

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-15T23:25:49.357Z · LW(p) · GW(p)
  1. You can assume that people start equal. A rich person already got a lot of utility, while the poor person already lost some. You can still do the math that derives utilitarianism in the final utilities just fine.

  2. Utility =/= Money. Under the VNM model I was using, utility is defined as the thing you are risk-neutral in. N units of utility is the thing which a 1/N chance of is worth the same as 1 unit of utility. So my statement is trivially true.

Let's say, in a certain scenario, each person i has utility u_i. We define U to be the sum of all the u_i, then by definition, each person is indifferent between having u_i and having a u_i/U chance of U and a (1-u_i)/U chance of 0. Since everyone is indifferent, this scenario is as good as the scenario in which one person, selected according to those probabilities, has U, and everyone else has 0. The goodness of such a scenario should be a function only of U.

  1. Politics is the mind-killer, don't bring controversial figures such as Dick Cheney up.

  2. The reason it is just to harm the unjust is not because their happiness is less valuable. It is because harming the unjust causes some to choose justice over injustice.

Replies from: bgaesop, JGWeissman
comment by bgaesop · 2011-01-16T19:21:14.591Z · LW(p) · GW(p)

Let's say, in a certain scenario, each person i has utility u_i. We define U to be the sum of all the u_i; then by definition, each person is indifferent between having u_i and having a u_i/U chance of U and a (1-u_i)/U chance of 0.

I am having a lot of trouble coming up with a real world example of something working out this way. Could you give one, please?

You can assume that people start equal.

I'm not sure I know what you mean by this. Are you saying that we should imagine people are conceived with 0 utility and then get or lose a bunch based on the circumstances they're born into, what their genetics ended up gifting them with, things like that?

In my conception of my utility function, I place value on increasing not merely the overall utility, but the most common level of utility, and decreasing the deviation in utility. That is, I would prefer a world with 100 people each with 10 utility to a world with 99 people with 1 utility and 1 person with 1000 utility, even though the latter has a higher sum of utility. Is there something inherently wrong about this?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-16T20:31:06.244Z · LW(p) · GW(p)

I am having a lot of trouble coming up with a real world example of something working out this way. Could you give one, please?

One could construct an extremely contrived real-world example rather trivially. A FAI has a plan that will make one person Space Emperor, with the choice of person depending on some sort of complex calculation. It is considering whether doing so would be a good idea or not.

The point is that a moral theory must consider such odd special cases. I can reformulate the argument to use a different strange scenario if you like, but the point isn't the specific scenario - it's the mathematical regularity.

Are you saying that we should imagine people are conceived with 0 utility and then get or lose a bunch based on the circumstances they're born into, what their genetics ended up gifting them with, things like that?

My argument is based on a mathematical intuition and can take many different forms. That comment came from asking you to accept that giving one person N utility is as good as giving another N utility, which may be hard to swallow.

So what I'm really saying is that all you need to accept is that, if we permute the utilities, so that instead of me having 10 and you 5, you have 10 and I 5, things don't get better or worse.

Starting at 0 is a red herring for which I apologize.

Is there something inherently wrong about this?

"Greetings, humans! I am a superintelligence with strange values, who is perfectly honest. In five minutes, I will randomly choose one of you and increase his/her utility to 1000. The others, however, will receive a utility of 1."

"My expected utility just increased from 10 to 10.99. I am happy about this!"

"So did mine! So am I"

etc........

"Let's check the random number generator ... Bob wins. Sucks for the rest of you."

The super-intelligence has just, apparently, done evil, after making two decisions:

The first, everyone affected approved of

The second, in carrying out the consequences of a pre-defined random process, was undoubtedly fair - while those who lost were unhappy, they have no cause for complaint.

This is a seeming contradiction.

Replies from: bgaesop
comment by bgaesop · 2011-01-16T21:09:44.755Z · LW(p) · GW(p)

One could construct an extremely contrived real-world example rather trivially.

When I say a real world example, I mean one that has actually already occurred in the real world. I don't see why I'm obligated to have my moral system function on scales that are physically impossible, or extraordinarily unlikely-such as having an omnipotent deity or alien force me to make a universe-shattering decision, or having to make decisions involving a physically impossible number of persons, like 3^^^^3.

I make no claims to perfection about my moral system. Maybe there is a moral system that would work perfectly in all circumstances, but I certainly don't know it. But it seems to me that a recurring theme on Less Wrong is that only a fool would have certainty 1 about anything, and this situation seems analogous. It seems to me to be an act of proper humility to say "I can't reason well with numbers like 3^^^^3 and in all likelihood I will never have to, so I will make do with my decent moral system that seems to not lead me to terrible consequences in the real world situations it's used in".

So what I'm really saying is that all you need to accept is that, if we permute the utilities, so that instead of me having 10 and you 5, you have 10 and I 5, things don't get better or worse.

This is a very different claim from what I thought you were first claiming. Let's examine a few different situations. I'm going to say what my judgment of them is, and I'm going to guess what yours is: please let me know if I'm correct. For all of these I am assuming that you and I are equally "moral", that is, we are both rational humanists who will try to help each other and everyone else.

I have 10 and you have 5, and then I have 11 and you have 4. I say this was a bad thing, I'm guessing you would say it is neutral.

I have 10 and you have 5, and then I have 9 and you have 6. I would say this is a good thing, I'm guessing you would say this is neutral.

I have 10 and you have 5, and then I have 5 and you have 10. I would say this is neutral, I think you would agree.

10 & 5 is bad, 9 & 6 is better, 7 & 8 = 8 & 7 is the best if we must use integers, 6 & 9 = 9 & 6 and 10 & 5 = 5 & 10.

"Greetings, humans! I am a superintelligence with strange values, who is perfectly honest. In five minutes, I will randomly choose one of you and increase his/her utility to 1000. The others, however, will receive a utility of 1."

"My expected utility just increased from 10 to 10.99, but the mode utility just decreased from 10 to 1, and the range of the utility just increased from 0 to 999. I am unhappy about this."

Thanks for taking the time to talk about all this, it's very interesting and educational. Do you have a recommendation for a book to read on Utilitarianism, to get perhaps a more elementary introduction to it?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-16T22:33:11.559Z · LW(p) · GW(p)

When I say a real world example, I mean one that has actually already occurred in the real world. I don't see why I'm obligated to have my moral system function on scales that are physically impossible, or extraordinarily unlikely-such as having an omnipotent deity or alien force me to make a universe-shattering decision, or having to make decisions involving a physically impossible number of persons, like 3^^^^3.

It should work in more realistic cases; it's just that the math is unclear. Suppose you are voting for different parties, and you think that your vote will affect two things: one, the inequality of utility, and two, how much that utility is based on predictable sources like inheritance and how much on unpredictable sources like luck. You might find that an increase to both inequality and luck would be a change that almost everyone would prefer, but that your moral system bans. Indeed, if your system does not linearly weight people's expected utilities, such a change must be possible.

I am using the strange cases, not to show horrible consequences, but to show inconsistencies between judgements in normal cases.

I have 10 and you have 5, and then I have 11 and you have 4. I say this was a bad thing, I'm guessing you would say it is neutral.

Utility is highly nonlinear in wealth or other non-psychometric aspects of one's well-being. I agree with everything you say I agree with.

"My expected utility just increased from 10 to 10.99, but the mode utility just decreased from 10 to 1, and the range of the utility just increased from 0 to 999. I am unhappy about this."

Surely these people can distinguish their own personal welfare from the good for humanity as a whole? So each individual person is thinking:

"Well, this benefits me, but it's bad overall."

This surely seems absurd.

Note that mode is a bad measure if the distribution of utility is bimodal, if, for example, women are oppressed, and range attaches enormous significance to the best-off and worst-off individuals compared with the best and the worst. It is, however, possible to come up with good measures of inequality.

Thanks for taking the time to talk about all this, it's very interesting and educational. Do you have a recommendation for a book to read on Utilitarianism, to get perhaps a more elementary introduction to it?

No problem. Sadly, I am an autodidact about utilitarianism. In particular, I came up with this argument on my own. I cannot recommend any particular source - I suggest you ask someone else. Do the Wiki and the Sequences say anything about it?

Replies from: bgaesop
comment by bgaesop · 2011-01-17T00:06:02.138Z · LW(p) · GW(p)

Note that mode is a bad measure if the distribution of utility is bimodal, if, for example, women are oppressed, and range attaches enormous significance to the best-off and worst-off individuals compared with the best and the worst. It is, however, possible to come up with good measures of inequality.

Yeah, I just don't really know enough about probability and statistics to pick a good term. You do see what I'm driving at, though, right? I don't see why it should be forbidden to take into account the distribution of utility, and prefer a more equal one.

One of my main outside-of-school projects this semester is to teach myself probability. I've got Intro to Probability by Grinstead and Snell sitting next to me at the moment.

Surely these people can distinguish there own personal welfare from the good for humanity as a whole? So each individual person is thinking:

"Well, this benefits me, but it's bad overall."

This surely seems absurd.

But it doesn't benefit the vast majority of them, and by my standards it doesn't benefit humanity as a whole. So each individual person is thinking "this may benefit me, but it's much more likely to harm me. Furthermore, I know what the outcome will be for the whole of humanity: increased inequality and decreased most-common-utility. Therefore, while it may help me, it probably won't, and it will definitely harm humanity, and so I oppose it."

Do the Wiki and the Sequences say anything about it?

Not enough; I want something book-length to read about this subject.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-17T03:40:33.223Z · LW(p) · GW(p)

I do see what you're driving at. I, however, think that the right way to incorporate egalitarianism into our decision-making is through a risk-averse utility function.

But it doesn't benefit the vast majority of them, and by my standards it doesn't benefit humanity as a whole. So each individual person is thinking "this may benefit me, but it's much more likely to harm me.

You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!

Not enough; I want something book-length to read about this subject.

Ask someone else.

Replies from: bgaesop
comment by bgaesop · 2011-01-17T19:52:02.120Z · LW(p) · GW(p)

You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!

Could you go more into what exactly risk-averse means? I am under the impression it means that they are unwilling to take certain bets, even though the bet increases their expected utility, if the odds are low enough that they will not gain the expected utility, which is more or less what I was trying to say there. Again, the reason I would not play even a fair lottery.

Ask someone else.

Okay. I'll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.

Replies from: Will_Sawin, Perplexed
comment by Will_Sawin · 2011-01-17T20:42:47.099Z · LW(p) · GW(p)

Could you go more into what exactly risk-averse means? I am under the impression it means that they are unwilling to take certain bets, even though the bet increases their expected utility, if the odds are low enough that they will not gain the expected utility, which is more or less what I was trying to say there. Again, the reason I would not play even a fair lottery.

Risk-averse means that your utility function is not linear in wealth. A simple utility function that is often used is utility=log(wealth). So having $1,000 would be a utility of 3, $10,000 a utility of 4, $100,000 a utility of 5, and so on. In this case one would be indifferent between a 50% chance of having $1000 and a 50% chance of $100,000, and a 100% chance of $10,000.

This creates behavior which is quite risk-averse. If you have $100,000, a one-in-a-million chance of $10,000,000 would be worth about 50 cents. The expected profit is $10, but the expected utility is .000002. A lottery which is fair in money would charge $10, while one that is fair in utility would charge $.50. This particular agent would play the second but not the first.

The Von Neumann-Morgenstern theorem says that, even if an agent does not maximize expected profit, it must maximize expected utility for some utility function, as long as it satisfies certain basic rationality constraints.
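A small check of the numbers in this example (Python; the log10 utility function and dollar amounts are the ones given above):

```python
from math import log10

def utility(wealth):
    """The example's risk-averse utility function: log10 of wealth."""
    return log10(wealth)

# Indifference claim: a 50/50 gamble between $1,000 and $100,000 vs. a sure $10,000.
eu_gamble = 0.5 * utility(1_000) + 0.5 * utility(100_000)   # 0.5*3 + 0.5*5 = 4
eu_sure   = utility(10_000)                                  # 4
assert abs(eu_gamble - eu_sure) < 1e-9

# Lottery example: wealth $100,000, one-in-a-million chance of ending up with $10,000,000.
p = 1e-6
eu_gain = p * (utility(10_000_000) - utility(100_000))       # 1e-6 * 2 = 0.000002

# Ticket price x at which the agent is indifferent:
# utility(100_000) - utility(100_000 - x) = eu_gain.
x = 100_000 * (1 - 10 ** (-eu_gain))
print(round(x, 2))       # ~0.46, i.e. roughly the "about 50 cents" above
print(p * 10_000_000)    # 10.0, the price that is fair in money
```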

Okay. I'll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.

Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.

Replies from: bgaesop
comment by bgaesop · 2011-01-17T22:41:59.742Z · LW(p) · GW(p)

Thanks for the explanation of risk averseness.

Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.

I just checked the front page after posting that reply and did just that

comment by Perplexed · 2011-01-17T20:59:25.011Z · LW(p) · GW(p)

Here is an earlier comment where I said essentially the same thing that Will_Sawin just said on this thread. Maybe it will help to have the same thing said twice in different words.

comment by JGWeissman · 2011-01-16T19:27:11.155Z · LW(p) · GW(p)

(1-u_i)/U

That should be (1-u_i/U).

Also, "_" is markdown for italics. To display underscores, use "\_".

comment by roystgnr · 2012-03-13T06:24:13.205Z · LW(p) · GW(p)

If you look at the assumptions behind VNM, I'm not at all sure that the "torture is worse than any amount of dust specks" crowd would agree that they're all uncontroversial.

In particular the axioms that Wikipedia labels (3) and (3') are almost begging the question.

Imagine a utility function that maps events, not onto R, but onto (R x R) with a lexicographical ordering. This satisfies completeness, transitivity, and independence; it just doesn't satisfy continuity or the Archimedean property.

But is that the end of the world? Look at continuity: if L is torture plus a dust speck (utility (-1,-1)), M is just torture (utility (-1,0)) and N is just a dust speck ((0,-1)), then must there really be a probability p such that pL + (1-p)N = M? Or would it instead be permissible to say that for p=1, torture plus dust speck is still strictly worse than torture, whereas for any p<1, any tiny probability of reducing the torture is worth a huge probability of adding that dust speck to it?

(edited to fix typos)
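A small check of that construction (a sketch; Python tuples compare lexicographically, which is all that's needed, and the specific p values tested are arbitrary):

```python
from fractions import Fraction

# Utilities in R x R under the lexicographic order described above:
# first coordinate tracks torture, second tracks dust specks (higher is better).
L = (Fraction(-1), Fraction(-1))   # torture plus a dust speck
M = (Fraction(-1), Fraction(0))    # torture alone
N = (Fraction(0),  Fraction(-1))   # a dust speck alone

def mix(p, a, b):
    """The lottery p*a + (1-p)*b, taken componentwise."""
    return tuple(p * x + (1 - p) * y for x, y in zip(a, b))

# Continuity would demand some p with mix(p, L, N) indifferent to M.  But:
for p in (Fraction(1), Fraction(999, 1000), Fraction(1, 2), Fraction(1, 10 ** 9)):
    mixed = mix(p, L, N)            # equals (-p, -1)
    if p == 1:
        assert mixed < M            # exactly L: strictly worse than torture alone
    else:
        assert mixed > M            # any p < 1: first coordinate -p > -1, so strictly better
# So this preference ordering violates the continuity axiom, as the comment says.
```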

comment by AgentME · 2011-08-09T21:09:09.360Z · LW(p) · GW(p)

Agree - I was kind of thinking of it as friction. Say you have 1000 boxes in a warehouse, all precisely where they need to be. Being close to their current positions is better than not. Is it better to A) apply 100 N of force over 1 second to 1 box, or B) 1 N of force over 1 second to all 1000 boxes? Well if they're frictionless and all on a level surface, do option A because it's easier to fix, but that's not how the world is. Say that 1 N against the boxes isn't even enough to defeat the static friction: that means in option B, none of the boxes will even move.

Back to the choice between A) having a googolplex of people have a speck of dust in their eye vs B) one person being tortured for 50 years: in option A, you have a googolplex of people who lead productive lives who don't even remember that anything out of the ordinary happened to them suddenly (assuming one single dust speck doesn't even pass the memorable threshold), and in option B, you have a googolplex - 1 of people leading productive lives who don't remember anything out of the ordinary happening, and one person being tortured and never accomplishing anything.

comment by Wendy_Collings · 2008-01-22T20:53:00.000Z · LW(p) · GW(p)

Eliezer, can you explain what you mean by saying "it's the same gamble"? If the point is to compare two options and choose one, then what matters is their values relative to each other. So, 400 certain lives saved is better than a 90% chance of 500 lives saved and 10% chance of 500 deaths, which is itself better than 400 certain deaths.

Perhaps it would help to define the parameters more clearly. Do your first two options have an upper limit of 500 deaths (as the second two options seem to), or is there no limit to the number of deaths that may occur apart from the lucky 400-500?

comment by Mike3 · 2008-01-22T20:59:54.000Z · LW(p) · GW(p)

Many were proud of this choice, and indignant that anyone should choose otherwise: "How dare you condone torture!"

I don't think that's a fair characterization of that debate. A good number of people, using many different reasons, thought something along the lines of negligible "harm" * 3^^^3 < 50 years of torture. That many people spraining their ankle or something would be a different story. Those harms are different enough that it's by no means obvious which we should prefer, and it's not clear that trying to multiply is really productive, whereas your examples in this entry are indeed obvious.

comment by Mike_Kenny · 2008-01-22T21:06:02.000Z · LW(p) · GW(p)

"The primary thing is to help others, whatever the means. So shut up and multiply!"

Would you submit to torture for 50 years to save countless people? I'm not sure I would, but I think I'm more comfortable with the idea of being self-interested and seeing all things through the prism of self interest.

Similar problem: if you had this choice - you can die peacefully and experience no afterlife, or literally experience hell for 100 years and be rewarded with an eternity of heaven - would you choose the latter? Calculating which provides the greatest utility, the latter would be preferable, but I'm not sure I would choose it.

comment by Adam_Safron · 2008-01-22T21:37:56.000Z · LW(p) · GW(p)

Eliezer, as I'm sure you know, not everything can be put on a linear scale. Momentary eye irritation is not the same thing as torture. Momentary eye irritations should be negligible in the moral calculus, even when multiplied by googolplex^^^googolplex. 50 years of torture could break someone's mind and lead to their destruction. You're usually right on the mark, but not this time.

Replies from: phob
comment by phob · 2011-01-04T17:53:47.455Z · LW(p) · GW(p)

Would you pay one cent to prevent one googolplex of people from having a momentary eye irritation?

Torture can be put on a money scale as well: many many countries use torture in war, but we don't spend huge amounts of money publicizing and shaming these people (which would reduce the amount of torture in the world).

In order to maximize the benefit of spending money, you must weigh sacred against unsacred.

Replies from: jeremysalwen, AndyC
comment by jeremysalwen · 2012-10-12T01:29:30.206Z · LW(p) · GW(p)

I certainly wouldn't pay that cent if there was an option of preventing 50 years of torture using that cent. There's nothing to say that my utility function can't take values in the surreals.

comment by AndyC · 2014-04-22T11:52:29.865Z · LW(p) · GW(p)

There's an interesting paper on microtransactions and how human rationality can't really handle decisions about values under a certain amount. The cognitive effort of making a decision outweighs the possible benefits of making the decision.

How much time would you spend making a decision about how to spend a penny? You can't make a decision in zero time, it's not physically possible. Rationally you have to round off the penny, and the speck of dust.

comment by tcpkac · 2008-01-22T22:09:47.000Z · LW(p) · GW(p)

To get back to the 'human life' examples EY quotes. Imagine instead the first scenario pair as being the last lifeboat on the Titanic. You can launch it safely with 40 people on board, or load in another 10 people, who would otherwise die a certain, wet, and icy death, and create a 1 in 10 chance that it will sink before the Carpathia arrives, killing all. I find that a strangely more convincing case for option 2. The scenarios as presented combine emotionally salient and abstract elements, with the result that the emotionally salient part will tend to be foreground, and the '% probabilities' as background. After all no-one ever saw anyone who was 10% dead (jokes apart).

comment by Adam_Safron · 2008-01-22T22:10:21.000Z · LW(p) · GW(p)

Eliezer's point would have been valid, had he chosen almost anything other than momentary eye irritation. Even the momentary eye-irritation example would work if the eye irritation would lead to serious harm (e.g. eye inflammation and blindness) in a small proportion of those afflicted with the speck of dust. If the predicted outcome was millions of people going blind (and then you have to consider the resulting costs to society), then Eliezer is absolutely right: shut up and do the math.

Replies from: HungryHobo
comment by HungryHobo · 2014-06-13T12:59:07.183Z · LW(p) · GW(p)

Imagine that you had the choice, but once you've made that choice it will be applied the same way whenever someone is about to be tortured: magic intervenes, saves that one person, and a googolplex other people get a speck in their eye.

It feels like it's not a big deal if it happens once or twice, but imagine that across all the universes where it applies it ended up triggering 3,153,600,000 times - not even half the population of our world.

Suddenly a googolplex of people are suffering constantly and half-blinded most of the time.

It feels small when it happens once, but the same choice has to apply when it happens again and again.

comment by Lee · 2008-01-22T22:16:05.000Z · LW(p) · GW(p)

GreedyAlgorithm, this is the conversation I want to have.

The sentence in your argument that I cannot swallow is this one: "Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences." This is circular, is it not?

You want to establish that any decision, x, should be made in accordance w/ maximum expected utility theory ("shut up and calculate"). You ask me to consider X = {x_i}, the set of many decisions over my life ("after a while"). You say that the expected value of U(X) is only maximized when the expected value of U(x_i) is maximized for each i. True enough. But why should I want to maximize the expected value of U(X)? That requires every bit as much (and perhaps the same) justification as maximizing the expected value of U(x_i) for each i, which is what you sought to establish.

Replies from: pandamodium
comment by pandamodium · 2010-12-02T13:29:53.899Z · LW(p) · GW(p)

This whole argument only washes if you assume that things work "normally" (e.g. like they do in the real field, i.e. are subject to the axioms that make addition/subtraction/calculus work). In fact we know that utility doesn't behave normally when considering multiple agents (as proved by Arrow's impossibility theorem), so the "correct" answer is that we can't have a true Pareto-optimal solution to the eye-dust-vs-torture problem. There is no reason why you couldn't construct a ring/field/group for utility which produced some of the solutions the OP dismisses, and in fact IMO those would be better representations of human utility than a straight normal interpretation.

comment by Lee · 2008-01-22T22:20:10.000Z · LW(p) · GW(p)

(I should say that I assumed that a bag of decisions is worth as much as the sum of the utilities of the individual decisions.)

comment by Chris_Hallquist · 2008-01-22T23:33:06.000Z · LW(p) · GW(p)

I'm seconding the worries of people like the anonymous of the first comment and Wendy. I look at the first, and I think "with no marginal utility, it's an expected value of 400 vs an expected value of 450." I look at the second and think "with no marginal utility, it's an expected value of -400 vs. an expected value of -50." Marginal utility considerations--plausible if these are the last 500 people on Earth--sway the first case much more easily than they do the second case.

comment by Ben_Jones · 2008-01-22T23:37:22.000Z · LW(p) · GW(p)

So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort...

Eliezer, your readiness to assume that all 'bad things' are on a continuous scale, linear or no, really surprises me. Put your enormous numbers away; they're not what people are taking umbrage at. Do you think that if a googol doesn't convince us, perhaps a googolplex will? Or maybe 3^^^3? If x and y are finite, there will always be a quantity of x that exceeds y, and vice versa. We get the maths, we just don't agree that the phenomena are comparable. Broken ankle? Stubbing your toe? Possibly - there is certainly more of a tangible link there - but you're still imposing on us all your judgment of how the mind experiences and deals with discomfort, and calling it rationality. It isn't.

Put simply - a dust mote registers exactly zero on my torture scale, and torture registers fundamentally off the scale (not just off the top, off) on my dust mote scale.

You're asking how many biscuits equal one steak, and then when one says 'there is no number', accusing him of scope insensitivity.

Replies from: phob
comment by phob · 2011-01-04T17:55:15.532Z · LW(p) · GW(p)

So you wouldn't pay one cent to prevent 3^^^3 people from getting a dust speck in their eye?

Replies from: Hul-Gil
comment by Hul-Gil · 2011-05-16T06:33:11.524Z · LW(p) · GW(p)

Sure. My loss of utility from losing the cent might be less than the gain in utility for those people to not get dust specks - but these are both what Ben might consider trivial events; it doesn't address the problem Ben Jones has with the assumption of a continuous scale. I'm not sure I'd pay $100 for any amount of people to not get specks in their eyes, because now we may have made the jump to a non-trivial cost for the addition of trivial payoffs.

Replies from: Salivanth
comment by Salivanth · 2012-05-01T12:23:30.359Z · LW(p) · GW(p)

Ben Jones didn't recognise the dust speck as "trivial" on his torture scale, he identified it as "zero". There is a difference: if dust-speck disutility is equal to zero, you shouldn't pay one cent to save 3^^^3 people from it. 0 * 3^^^3 = 0, and the disutility of losing one cent is non-zero. If you assign an epsilon of disutility to a dust speck, then 3^^^3 * epsilon is way more than 1 person suffering 50 years of torture. For all intents and purposes, 3^^^3 = infinity. The only way that infinity * X can avoid being worse than a finite disutility is if X is equal to 0. If X = 0.00000001, then torture is preferable to dust specks.

Replies from: Hul-Gil, inemnitable
comment by Hul-Gil · 2012-05-01T18:22:43.804Z · LW(p) · GW(p)

Well, he didn't actually identify dust mote disutility as zero; he says that dust motes register as zero on his torture scale. He goes on to mention that torture isn't on his dust-mote scale, so he isn't just using "torture scale" as a synonym for "disutility scale"; rather, he is emphasizing that there is more than just a single "(dis)utility scale" involved. I believe his contention is that the events (torture and dust-mote-in-the-eye) are fundamentally different in terms of "how the mind experiences and deals with [them]", such that no amount of dust motes can add up to the experience of torture... even if they (the motes) have a nonzero amount of disutility.

I believe I am making much the same distinction with my separation of disutility into trivial and non-trivial categories, where no amount of trivial disutility across multiple people can sum to the experience of non-trivial disutility. There is a fundamental gap in the scale (or different scales altogether, à la Jones), a difference in how different amounts of disutility work for humans. For a more concrete example of how this might work, suppose I steal one cent each from one billion different people, and Eliezer steals $100,000 from one person. The total amount of money I have stolen is greater than the amount that Eliezer has stolen; yet my victims will probably never even realize their loss, whereas the loss of $100,000 for one individual is significant. A cent does have a nonzero amount of purchasing power, but none of my victims have actually lost the ability to purchase anything; whereas Eliezer's, on the other hand, has lost the ability to purchase many, many things.

I believe utility for humans works in the same manner. Another thought experiment I found helpful is to imagine a certain amount of disutility, x, being experienced by one person. Let's suppose x is "being brutally tortured for a week straight". Call this situation A. Now divide this disutility among people until we have y people all experiencing (1/y)*x disutility - say, a dust speck in the eye each. Call this situation B. If we can add up disutility like Eliezer supposes in the main article, the total amount of disutility in either situation is the same. But now, ask yourself: which situation would you choose to bring about, if you were forced to pick one?

Would you just flip a coin?

I believe few, if any, would choose situation A. This brings me to a final point I've been wanting to make about this article, but have never gotten around to doing. Mr. Yudkowsky often defines rationality as winning - a reasonable definition, I think. But with this dust speck scenario, if we accept Mr. Yudkowsky's reasoning and choose the one-person-being-tortured option, we end up with a situation in which every participant would rather that the other option had been chosen! Certainly the individual being tortured would prefer that, and each potentially dust-specked individual* would gladly agree to experience an instant of dust-speckiness in order to save the former individual.

I don't think this is winning; no one is happier with this situation. Like Eliezer says in reference to Newcomb's problem, if rationality seems to be telling us to go with the choice that results in losing, perhaps we need to take another look at what we're calling rationality.


*Well, assuming a population like our own, not every single individual would agree to experience a dust speck in the eye to save the to-be-tortured individual; but I think it is clear that the vast majority would.

Replies from: Salivanth, Desrtopa, OnTheOtherHandle, Multiheaded, Yvain, Eliezer_Yudkowsky, AlexanderRM, dxu, DPiepgrass
comment by Salivanth · 2012-05-25T12:50:53.058Z · LW(p) · GW(p)

You might be right. I'll have to think about this, and reconsider my stance. One billion is obviously far less than 3^^^3, but you are right in that the 10 million dollars stolen by you would be preferable, to me, to the 100,000 dollars stolen by Eliezer. I also consider losing 100,000 dollars less than or equal to 100,000 times as bad as losing one dollar. This indicates one of two things:

A) My utility system is deeply flawed.
B) My utility system includes some sort of 'diffusion factor' wherein a disutility of X becomes <X when divided among several people, and the disutility becomes lower the more people it's divided among. In essence, there is some disutility in one person suffering a lot of disutility that isn't there when it's divided among a lot of people.

Of this, B seems more likely, and I didn't take it into account when considering torture vs. dust specks. In any case, some introspection on this should help me further define my utility function, so thanks for giving me something to think about.

comment by Desrtopa · 2012-05-25T13:39:34.935Z · LW(p) · GW(p)

A cent does have a nonzero amount of purchasing power, but none of my victims have actually lost the ability to purchase anything

Assuming that none of them end up one cent short for something they would otherwise have been able to pay for - which, out of a billion people, will probably happen to someone. It doesn't have to be their next purchase.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-08-20T01:01:32.983Z · LW(p) · GW(p)

But this is analogous to saying some tiny percentage of the people who got dust specks would be driving a car at that moment and lose control, resulting in an accident. That would be an entirely different ballgame, even if the percent of people this happened to was unimaginably tiny, because in an unimaginably vast population, lots of people are bound to die of gruesome dust-speck related accidents.

But Eliezer explicitly denied any externalities at all; in our hypothetical, the chances of accidents, blindness, etc. are literally zero. So the chance of not being able to afford a vital heart transplant or whatever for want of a penny must also be literally zero in the analogous hypothetical, no matter how ridiculously large the population gets.

Replies from: Desrtopa
comment by Desrtopa · 2012-08-20T13:03:51.096Z · LW(p) · GW(p)

Not being able to pay for something due to the loss of money isn't an externality, it's the only kind of direct consequence you're going to get. If you took a hundred thousand dollars from an individual, they might still be able to make their next purchase, but the direct consequence would be their being unable to pay for things they could previously have afforded.

comment by OnTheOtherHandle · 2012-08-20T00:57:02.898Z · LW(p) · GW(p)

Another thing that seems to be a factor, at least for me, is that there's a term in my utility function for "fairness," which usually translates to something roughly similar to "sharing of burdens." (I also have a term for "freedom," which is in conflict with fairness but is on the same scale and can be traded off against it.)

Why wouldn't this be a situation in which "the complexity of human value" comes into play? Why is it wrong to think something along the lines of, "I would be willing to make everyone a tiny bit worse off so that no one person has to suffer obscenely"? It's the rationale behind taxation, and while it's up for debate, many Less Wrongers support moderate taxation if it would help a few people a lot while hurting a bunch of people a little bit.

Think about it: the exact number of dollars taken from people in taxes doesn't go directly toward feeding the hungry. Some of it gets eaten up in bureaucratic inefficiencies, some of it goes to bribery and embezzlement, some of it goes to the military. This means that if you taxed 1,000,000 well-off people $1 each, but only ended up giving 100 hungry people $1000 each to stave off a painful death from starvation, we as utilitarians would be absolutely, 100% obligated to oppose this taxation system, not because it's inefficient, but because doing nothing would be better. There is to be no room for debate; it's $100,000 - $1,000,000 = net loss; let the 100 starving peasants die.

Note that you may be a libertarian and oppose taxation on other grounds, but most libertarians wouldn't say you are literally doing morality wrong if you think it's better to take $1 each from a million people, even if only $100,000 of it gets used to help the poor.

I could easily be finding ways to rationalize my own faulty intuitions - but I managed to change my mind about Newcomb's problem and about the first example given in the above post despite powerful initial intuitions, and I managed to work the latter out for myself. So I think, if I'm expected to change my mind here, I'm justified in holding out for an explanation or formulation that clicks with me.

Replies from: AndyC
comment by AndyC · 2014-04-22T12:07:25.068Z · LW(p) · GW(p)

That makes no sense. Just because one thing costs $1, and another thing costs $1000, does not mean that the first thing happening 1001 times is better than the second one happening once.

Preferences logically precede prices. If they didn't, nobody would be able to decide what they were willing to spend on anything in the first place. If utilitarianism requires that you decide the value of things based on their prices, then utilitarians are conformists without values of their own, who derive all of their value judgments from non-utilitarian market participants who actually have values.

(Besides, money that is spent on "overhead" does not magically disappear from the economy. Someone is still being paid to do something with that money, who in turn buys things with the money, and so on. And even if the money does disappear -- say, dollar bills are burnt in a furnace -- it still would not represent a loss of productive capacity in the economy. Taxing money and then completely destroying the money (shrinking the money supply) is sound monetary policy, and it occurs on a regular (cyclical) basis. Your whole argument here is a complete non-starter.)

comment by Multiheaded · 2012-11-20T10:55:58.110Z · LW(p) · GW(p)

As a rather firm speck-ist, I'd like to say that this is the best attempt at a formal explanation of speckism that I've read so far! I'm grateful for this, and pleased that I no longer need to use muddier and vaguer justifications.

comment by Scott Alexander (Yvain) · 2012-11-20T22:43:48.510Z · LW(p) · GW(p)

Thank you for trying to address this problem, as it's important and still bothers me.

But I don't find your idea of two different scales convincing. Consider electric shocks. We start with an imperceptibly low voltage and turn up the dial until the first level at which the victim is able to perceive slight discomfort (let's say one volt). Suppose we survey people and find that a one volt shock is about as unpleasant as a dust speck in the eye, and most people are indifferent between them.

Then we turn the dial up further, and by some level, let's say two hundred volts, the victim is in excruciating pain. We can survey people and find that a two hundred volt shock is equivalent to whatever kind of torture was being used in the original problem.

So one volt is equivalent to a dust speck (and so on the "trivial scale"), but two hundred volts is equivalent to torture (and so on the "nontrivial scale"). But this implies either that triviality exists only in degree (which ruins the entire argument, since enough triviality aggregated equals nontriviality) or that there must be a sharp discontinuity somewhere (e.g. a 21.32 volt shock is trivial, but a 21.33 volt shock is nontrivial). But the latter is absurd. Therefore there should not be separate trivial and nontrivial utility scales.

Replies from: fubarobfusco, AndyC, AlexanderRM
comment by fubarobfusco · 2012-11-20T22:55:39.614Z · LW(p) · GW(p)

Except perception doesn't work like that. We can have two qualitatively different perceptions arising from quantities of the same stimulus. We know that irritation and pain use different nerve endings, for instance; and electric shock in different quantities could turn on irritation at a lower threshold than pain. Similarly, a very dim light of a given frequency is perceived only as brightness via the rod cells, while a brighter light of the same frequency is perceived as color via the cone cells. A baby wailing may be perceived as unpleasant; turn it up to jet-engine volume and it will be perceived as painful.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-11-21T05:21:04.839Z · LW(p) · GW(p)

Okay, good point. But if we change the argument slightly, to the smallest perceivable amount of pain, it's still biting a pretty big bullet to say 3^^^3 of those is worse than 50 years of torture.

(the theory would also imply that an infinite amount of irritation is not as bad as a tiny amount of pain, which doesn't seem to be true)

Replies from: Nornagest, drnickbone
comment by Nornagest · 2012-11-21T05:42:48.602Z · LW(p) · GW(p)

the theory would also imply that an infinite amount of irritation is not as bad as a tiny amount of pain, which doesn't seem to be true)

I'm increasingly convinced that the whole Torture vs. Dust Specks scenario is sparking way more heat than light, but...

I can imagine situations where an infinite amount of some type of irritation integrated to something equivalent to some finite but non-tiny amount of pain. I can even imagine situations where that amount was a matter of preference: if you asked someone what finite level of pain they'd accept to prevent some permanent and annoying but non-painful condition, I'd expect the answers to differ significantly. Granted, "lifelong" is not "infinite", and there's hyperbolic discounting and various other issues to correct for, but even after these corrections a finite answer doesn't seem obviously wrong.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-11-23T23:35:27.792Z · LW(p) · GW(p)

Well, for one thing, pain is not negative utility ....

Pain is a specific set of physiological processes. Recent discoveries suggest that it shares some brain-space with other phenomena such as social rejection and math anxiety, which are phenomenologically distinct.

It is also phenomenologically distinct from the sensations of disgust, grief, shame, or dread — which are all unpleasant and inspire us to avoid their causes. Irritation, anxiety, and many other unpleasant sensations can take away from our ability to experience pleasure; many of them can also make us less effective at achieving our own goals.

In place of an individual experiencing "50 years of torture" in terms of physiological pain, we might consider 50 years of frustration, akin to the myth of Sisyphus or Tantalus; or 50 years of nightmare, akin to that inflicted on Alex Burgess by Morpheus in The Sandman ....

comment by drnickbone · 2012-11-21T08:33:52.317Z · LW(p) · GW(p)

(the theory would also imply that an infinite amount of irritation is not as bad as a tiny amount of pain, which doesn't seem to be true)

Hmm not sure. It seems quite plausible to me that for any n, an instance of real harm to one person is worse than n instances of completely harmless irritation to n people. Especially if we consider a bounded utility function; the n instances of irritation have to flatten out at some finite level of disutility, and there is no a priori reason to exclude torture to one person having a worse disutility than that asymptote.

Having said all that, I'm not sure I buy into the concept of completely harmless irritation. I doubt we'd perceive a dust speck as a disutility at all except for the fact that it has small probability of causing big harm (loss of life or offspring) somewhere down the line. A difficulty with the whole problem is the stipulation that the dust specks do nothing except cause slight irritation... no major harm results to any individual. However, throwing a dust speck in someone's eye would in practice have a very small probability of very real harm, such as distraction while operating dangerous machinery (driving, flying etc), starting an eye infection which leads to months of agony and loss of sight, a slight shock causing a stumble and broken limbs or leading to a bigger shock and heart attack. Even the very mild irritation may be enough to send an irritable person "over the edge" into punching a neighbour, or a gun rampage, or a borderline suicidal person into suicide. All these are spectacularly unlikely for each individual, but if you multiply by 3^^^3 people you still get order 3^^^3 instances of major harm.

Replies from: AndyC
comment by AndyC · 2014-04-22T12:34:54.628Z · LW(p) · GW(p)

With that many instances, it's even highly likely that at least one of the specks in the eye will offer a rare opportunity for some poor prisoner to escape his captors, who had intended to subject him to 50 years of torture.

comment by AndyC · 2014-04-22T12:25:49.956Z · LW(p) · GW(p)

First of all, you might benefit from looking up the beard fallacy.

To address the issue at hand directly, though:

Of course there are sharp discontinuities. Not just one sharp discontinuity, but countless. However, there is no particular voltage at which there is a discontinuity. Rather, increasing the voltage increases the probability of a discontinuity.

I will list a few discontinuities established by torture.

  1. Nightmares. As the electrocution experience becomes more severe, the probability that it will result in a nightmare increases. After 50 years of high voltage, hundreds or even thousands of such nightmares are likely to have occurred. However, 1 second of 1V is unlikely to result in even a single nightmare. The first nightmare is a sharp discontinuity. But furthermore, each additional nightmare is another sharp discontinuity.

  2. Stress responses to associational triggers. The first such stress response is a sharp discontinuity, but so is every other one. But please note that there is a discontinuity for each instance of stress response that follows in your life: each one is its own discontinuity. So, if you will experience 10,500 stress responses, that is 10,500 discontinuities. It's impossible to say beforehand what voltage or how many seconds will make the difference between 10,499 and 10,500, but in theory a probability could be assigned. I think there are already actual studies that have measured the increased stress response after electroshock, over short periods.

  3. Flashbacks. Again, the first flashback is a discontinuity; as is every other flashback. Every time you start crying during a flashback is another discontinuity.

  4. Social problems. The first relationship that fails (e.g., first woman that leaves you) because of the social ramifications of damage to your psyche is a discontinuity. Every time you flee from a social event: another discontinuity. Every fight that you have with your parents as a result of your torture (and the fact that you have become unrecognizable to them) is a discontinuity. Every time you fail to make eye contact is a discontinuity. If not for the torture, you would have made the eye contact, and every failure represents a forked path in your entire future social life.

I could go on, but you can look up the symptoms of PTSD yourself. I hope, however, that I have impressed upon you the fact that life constitutes a series of discrete events, not a continuous plane of quantifiable and summable utility lines. It's "sharp discontinuities" all the way down to elementary particles. Be careful with mathematical models involving a continuum.

Please note that flashbacks, nightmares, stress responses to triggers, and social problems do not result from specks of dust in the eye.

comment by AlexanderRM · 2015-03-27T22:03:59.860Z · LW(p) · GW(p)

A better metaphor: What if we replaced "getting a dust speck in your eye" with "being horribly tortured for one second"? Ignore the practical problems of the latter, just say the person experiences the exact same (average) pain as being horribly tortured, but for one second.

That allows us to directly compare the two experiences much better, and it seems to me it eliminates the "you can't compare the two experiences" objection - except, of course, for the long-term effects of torture; to get a perfect comparison we'd need a torture machine that not only does no physical damage, but no psychological damage either.

On the other hand, it does leave in OnTheOtherHandle's argument about "fairness" (specifically in the "sharing of burdens" definition, since otherwise we could just say the person tortured is selected at random). Which to me as a utilitarian makes perfect sense; I'm not sure if I agree or disagree with him on that.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-21T21:22:56.899Z · LW(p) · GW(p)

For a more concrete example of how this might work, suppose I steal one cent each from one billion different people, and Eliezer steals $100,000 from one person. The total amount of money I have stolen is greater than the amount that Eliezer has stolen; yet my victims will probably never even realize their loss, whereas the loss of $100,000 for one individual is significant. A cent does have a nonzero amount of purchasing power, but none of my victims have actually lost the ability to purchase anything; whereas Eliezer's, on the other hand, has lost the ability to purchase many, many things.

Isn't this a reductio of your argument? Stealing $10,000,000 has less economic effect than stealing $100,000, really? Well, why don't we just do it over and over, then, since it has no effect each time? If I repeated it enough times, you would suddenly decide that the average effect of each $10,000,000 theft, all told, had been much larger than the average effect of the $100,000 theft. So where is the point at which, suddenly, stealing 1 more cent from everyone has a much larger and disproportionate effect, enough to make up for all the "negligible" effects earlier?

See also: http://lesswrong.com/lw/n3/circular_altruism/

Replies from: Bugmaster, CCC
comment by Bugmaster · 2012-11-21T21:37:48.200Z · LW(p) · GW(p)

It seems like you and Hul-Gil are using different formulae for evaluating utility (or, rather, disutility); and, therefore, you are talking past each other.

While Hul-Gil is looking solely at the immediate purchasing power of each individual, you are considering ripple effects affecting the economy as a whole. Thus, while stealing a single penny from a single individual may have negligible disutility, removing 1e9 such pennies from 1e9 individuals will have a strong negative effect on the economy, thus reducing the effective purchasing power of everyone, your victims included.

This is a valid point, but it doesn't really lend any support to either side in your argument with Hul-Gil, since you're comparing apples and oranges.

Replies from: IainM
comment by IainM · 2012-11-22T13:34:46.460Z · LW(p) · GW(p)

I'm pretty sure Eliezer's point holds even if you only consider the immediate purchasing power of each individual.

Let us define thefts A and B:

A: Steal 1 cent from each of 1e9 individuals.
B: Steal 1e7 cents from 1 individual.

The claim here is that A has negligible disutility compared to B. However, we can define a new theft C as follows:

C: Steal 1e7 cents from each of 1e9 individuals.

Now, I don't discount the possibility that there are arguments to the contrary, but naively it seems that a C theft is 1e9 times as bad as a B theft. But a C theft is equivalent to 1e7 A thefts. So, necessarily, one of those A thefts must have been worse than a B theft - substantially worse. Eliezer's question is: if the first one is negligible, at what point do they become so much worse?
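
To make the implicit step concrete, here is a minimal sketch assuming disutility simply adds across people and across repeated thefts (the very assumption being probed), with one B theft normalised to one unit of harm purely for illustration:

```python
# Bookkeeping for the A/B/C thefts above, under naive additive disutility.
PEOPLE = 10**9            # individuals hit in thefts A and C
CENTS_EACH_IN_C = 10**7   # $100,000 per person in theft C
ROUNDS_OF_A_IN_C = CENTS_EACH_IN_C  # C = one A theft (1 cent each) repeated 1e7 times

disutility_B = 1.0                       # normalise one B theft to 1 unit of harm
disutility_C = PEOPLE * disutility_B     # naively, C is a B theft done to 1e9 people

avg_disutility_per_A_round = disutility_C / ROUNDS_OF_A_IN_C
print(avg_disutility_per_A_round)        # 100.0 -> on average, each A round inside C
                                         # must be worth ~100 B thefts, so at least one is.
```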

Replies from: Bugmaster
comment by Bugmaster · 2012-11-27T08:48:06.273Z · LW(p) · GW(p)

I think this is a question of ongoing collateral effects (not sure if "externalities" is the right word to use here). The examples that speak of money are additionally complicated by the fact that the purchasing power of money does not scale linearly with the amount of money you have.

Consider the following two scenarios:

A) Inflict -1e-3 utility on 1e9 individuals, with negligible consequences over time, or
B) Inflict -1e7 utility on a single individual, with further -1e7 consequences in the future.

vs.

C) Inflict -1e-3 utility on 1e9 individuals, leading to an additional -1e9 utility over time, or
D) Inflict a one-time -1e7 utility on a single individual, with no additional consequences.

Which one would you pick, A or B, and C or D? Of course, we can play with the numbers to make A and C more or less attractive.

I think the problem with Eliezer's "dust speck" scenario is that his disutility of option A -- i.e., the dust specks -- is basically epsilon, and since it has no additional costs, you might as well pick A. The alternative is a rather solid chunk of disutility -- the torture -- that will further add up even after the initial torture is over (due to ongoing physical and mental health problems).

The "grand theft penny" scenario can be seen as AB or CD, depending on how you think about money; and the right answer in either case might change depending on how much you think a penny is actually worth.

comment by CCC · 2012-11-22T14:45:43.604Z · LW(p) · GW(p)

Utility is not a linear function of money. A certain amount is necessary for existence (enough to obtain food, shelter, etc.) A person's first dollar is thus a good deal more valuable than a person's millionth dollar, which is in turn more valuable than their billionth dollar. There is clearly some additional utility from each additional dollar, but I suspect that the total utility may well be asymptotic.

The total disutility of stealing an amount of money, $X, from a person with total wealth $Y, is (at least approximately) equal to the difference in utility between $Y and $(Y-X). (There may be some additional disutility from the fact that a theft occurred - people may worry about being the next victim or falsely accuse someone else or so forth - but that should be roughly equivalent for any theft, and thus I shall disregard it.)

So. Stealing one dollar from a person who will starve without that dollar is therefore worse than stealing one dollar from a person who has a billion more dollars in the bank.

Stealing one dollar from each of one billion people, who will each starve without that dollar, is far, far worse than stealing $100 000 from one person who has another $1e100 in the bank.

Stealing $100 000 from a person who only had $100 000 to start with is worse than stealing $1 from each of one billion people, each of whom have another billion dollars in savings.


Now, if we assume a level playing field - that is, that every single person starts with the same amount of money (say, $1 000 000) and no-one will starve if they lose $100 000, then it begins to depend on the exact function used to find the utility of money.

There are functions such that a million thefts of $1 each result in less disutility than a single theft of $100 000. (If asked to find an example, I will take a simple exponential function and fiddle with the parameters until this is true.) However, if you continue adding additional thefts of $1 each from the same million people, an interesting effect takes place; each additional theft of $1 each from the same million people is worse than the previous one. By the time you hit the hundred-thousandth theft of $1 each from the same million people, that last theft is substantially more than ten times worse than a single theft of $100 000 from one person.
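
CCC doesn't pin the function down, so the sketch below supplies one possible exponential example; the curvature (a $20,000 scale), the $1,000,000 starting wealth, and the million-person population are illustrative assumptions, not figures from the thread:

```python
from math import exp

SCALE = 20_000        # assumed curvature: dollars over which utility 'flattens' (illustrative)
START = 1_000_000     # everyone starts with $1,000,000, as in the level playing field above
POPULATION = 1_000_000

def u(wealth: float) -> float:
    """A concave utility-of-money curve; the exact shape is an assumption."""
    return 1 - exp(-wealth / SCALE)

def dollar_theft_disutility(wealth: float) -> float:
    """Harm of taking $1 from someone currently holding `wealth`."""
    return u(wealth) - u(wealth - 1)

single_big_theft = u(START) - u(START - 100_000)   # one theft of $100,000 from one person

# Round j of "steal $1 from each of a million people": everyone sits at START - (j - 1).
round_1 = POPULATION * dollar_theft_disutility(START)
round_100_000 = POPULATION * dollar_theft_disutility(START - 99_999)

print(round_1 / single_big_theft)        # ~0.34: the first million $1 thefts do less harm
print(round_100_000 / single_big_theft)  # ~50:   the 100,000th round alone does far more
```

Where the crossover between the two regimes falls depends entirely on the assumed curvature, which is the issue cousin_it raises just below.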

Replies from: cousin_it
comment by cousin_it · 2012-11-22T16:21:17.121Z · LW(p) · GW(p)

Yeah, but also keep in mind that people's utility functions cannot be very concave. (My rephrasing is pretty misleading but I can't think of a better one, do read the linked post.)

Replies from: CCC
comment by CCC · 2012-11-23T07:18:32.314Z · LW(p) · GW(p)

Hmmm. The linked post talks about the perceived utility of money; that is, what the owner of the money thinks it is worth. This is not the same as the actual utility of money, which is what I am trying to use in the grandparent post.

I apologise if that was not clear, and I hope that this has cleared up any lingering misunderstandings.

comment by AlexanderRM · 2015-03-27T22:23:13.639Z · LW(p) · GW(p)

"But with this dust speck scenario, if we accept Mr. Yudkowsky's reasoning and choose the one-person-being-tortured option, we end up with a situation in which every participant would rather that the other option had been chosen! Certainly the individual being tortured would prefer that, and each potentially dust-specked individual* would gladly agree to experience an instant of dust-speckiness in order to save the former individual."

A question for comparison: would you rather have a 1/Googolplex chance of being tortured for 50 years, or lose 1 cent? (A better comparison in this case would be if you replaced "tortured for 50 years" with "death".)

Also: for the original metaphor, imagine that you aren't the only person being offered this choice, and that the people suffering the consequences are drawn from the same pool - which is how real life works, although in this world we have a population of 1 googolplex rather than 7 billion. If we replace "dust speck" with "horribly tortured for 1 second", and we give 1.5 billion people the same choice and presume they all make the same decision, then the choice is between 1.5 billion people being horribly tortured for 50 years each, and 1 googolplex people being horribly tortured for roughly 50 years each (1.5 billion seconds, one second at a time).

Replies from: Jiro
comment by Jiro · 2015-03-28T08:32:27.315Z · LW(p) · GW(p)

A question for comparison: would you rather have a 1/Googolplex chance of being tortured for 50 years, or lose 1 cent?

Whenever I drive, I have a greater than 1/googolplex chance of getting into an accident which would leave me suffering for 50 years, and I still drive. I'm not sure how to measure the benefit I get from driving, but there are at least some cases where it's pretty small, even if it's not exactly a cent.

Replies from: soreff
comment by soreff · 2015-03-28T16:45:57.827Z · LW(p) · GW(p)

Whenever one bends down to pick up a dropped penny, one has more than a 1/Googolplex chance of a slip-and-fall accident which would leave one suffering for 50 years.

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2015-03-28T20:33:53.262Z · LW(p) · GW(p)

But you also slightly improve your physical fitness which might reduce the probability of an accident further down the line by more than 1/10^10^100.

comment by dxu · 2015-03-27T22:39:30.749Z · LW(p) · GW(p)

This argument does not show that putting dust specks in the eyes of 3^^^3 people is better than torturing one person for 50 years. It shows that putting dust specks in the eyes of 3^^^3 people and then telling them they helped save someone from torture is better than torturing one person for 50 years.

Replies from: hairyfigment
comment by hairyfigment · 2015-03-28T00:43:11.095Z · LW(p) · GW(p)

Yes - though it does mean Eliezer has to assume that the reader's implausible state of knowledge is not and will not be shared by many of the 3^^^3.

Replies from: Manfred
comment by Manfred · 2015-03-28T00:49:34.688Z · LW(p) · GW(p)

Dust, it turns out, is not naturally occurring, but is only produced as a byproduct of thought experiments.

comment by DPiepgrass · 2019-12-10T18:39:14.511Z · LW(p) · GW(p)

The loss of $100,000 (or one cent) is more or less significant depending on the individual. Which is worse: stealing a cent from 100,000,000 people, or stealing $100,000 from a billionaire? What if the 100,000,000 people are very poor and the cent would buy half a slice of bread and they were hungry to start with? (Tiny dust specks, at least, have a comparable annoyance effect on almost everyone.)

Eliezer's main gaffe here is choosing a googolplex of people with dust specks when humans do not even have an intuition for googols. So let's scale the problem down to a level a human can understand: instead of a googolplex of dust specks versus 50 years of torture, let's take "50 years of torture versus a googol (1 followed by 100 zeros) dust specks", and scale it down linearly to "1 second of torture versus 6.33 x 10^90 dust specks, one per person" - which is still far more people than have ever lived, so let's make it "a dust speck once per minute for every person on Earth for their entire lives (while awake) and make it retroactive for all of our human ancestors too" (let's pretend for a moment that humans won't evolve a resistance to dust specks as a result). By doing this we are still eliminating virtually all of the dust specks.

So now we have one second of torture versus roughly 2 billion billions of dust specks, which is nothing at all compared to a googol of dust specks. Once the numbers are scaled down to a level that ordinary college graduates can begin to comprehend, I think many of them would change their answer. Indeed, some people might volunteer for one second of torture just to save themselves from getting a tiny dust speck in their eye every minute for the rest of their lives.
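
A quick check of the scaling arithmetic; the population and lifespan figures in the second half (about 100 billion humans ever, 55-year lives, 16 waking hours a day) are rough illustrative assumptions that land near the comment's "roughly 2 billion billions":

```python
SECONDS_OF_TORTURE = 50 * 365.25 * 24 * 3600          # 50 years ~ 1.58e9 seconds
GOOGOL = 10**100

specks_per_torture_second = GOOGOL / SECONDS_OF_TORTURE
print(f"{specks_per_torture_second:.2e}")             # ~6.34e90 specks per second of torture

# The comment's further scaled-down version: one speck per waking minute, for every
# human who has ever lived. The figures below are illustrative assumptions.
HUMANS_EVER = 100e9
WAKING_MINUTES_PER_LIFE = 55 * 365 * 16 * 60
total_specks = HUMANS_EVER * WAKING_MINUTES_PER_LIFE
print(f"{total_specks:.1e}")                          # ~1.9e18: about two billion billion specks
print(total_specks / specks_per_torture_second)       # a vanishing fraction of the scaled-down googol
```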

The fact that humans can't feel these numbers isn't something you teach by just saying it. You teach it by creating a tension between the feeling brain and the thinking brain. Due to your ego, I would guess your brain can better imagine feeling a tiny dust speck in its eye once per minute for your entire life - 20 million specks - than 20 million people getting a tiny dust speck in their eye once, but how is it any different morally? For most people also, 20 billion people with a dust speck feels just the same as 20 million. They both feel like "really big numbers", but in reality one number is a thousand times worse, and your thinking brain can see that. In this way, I hope you learn to trust your thinking brain more than your feeling one.


comment by inemnitable · 2012-06-14T21:25:53.326Z · LW(p) · GW(p)

If you assign an epsilon of disutility to a dust speck, then 3^^^3 * epsilon is way more than 1 person suffering 50 years of torture.

This doesn't follow. Epsilon is by definition arbitrary; I could set it to 1/4^^^4 if I want to.

If we accept Eliezer's proposition that the disutility of a dust speck is > 0, this doesn't prevent us from deciding that it is < epsilon when assigning a finite disutility to 50 years of torture.

Replies from: JaySwartz
comment by JaySwartz · 2012-11-22T00:02:09.726Z · LW(p) · GW(p)

For a site promoting rationality this entire thread is amazing for a variety of reasons (can you tell I'm new here?). The basic question is irrational. The decision for one situation over another is influenced by a large number of interconnected utilities.

A person, or an AI, does not come to a decision based on a single utility measure. The decision process draws on numerous utilities, many of which we do not yet know. Just a few utilities are morality, urgency, effort, acceptance, impact, area of impact and value.

Complicating all of this is the overlay of life experience that attaches a function of magnification to each utility impact decision. There are 7 billion, and growing, unique overlays in the world. These overlays can include unique personal, societal or other utilities and have dramatic impact on many of the core utilities as well.

While you can certainly assign some value to each choice, due to the above it will be a unique subjective value. The breadth of values does cluster around societal and common life-experience utilities, enabling some degree of segmentation. This enables generally acceptable decisions. The separation of the value spaces for many utilities precludes a single, unified decision. For example, a faith utility will have radically different value spaces for Christians and Buddhists. The optimum answer can be very different when the choices include faith utility considerations.

Also, the circular example of driving around the Bay Area is illogical from a variety of perspectives. The utility of each stop is ignored. The movement of the driver around the circle does not correlate to the premise that altruistic actions of an individual are circular.

For discussions to have utility value relative to rationality, it seems appropriate to use more advanced mathematical concepts. Examining the vagaries created when decisions include competing utility values, or are near the edges of utility spaces, is where we will expand our thinking.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-11-22T00:03:59.004Z · LW(p) · GW(p)

For a site promoting rationality this entire thread is amazing for a variety of reasons (can you tell I'm new here?). The basic question is irrational. The decision for one situation over another is influenced by a large number of interconnected utilities.

So in most forms of utilitarianism, there's still an overall utility function. Having multiple different functions amounts to the same thing as having a single function when one needs to figure out how to balance the competing interests.

Replies from: JaySwartz
comment by JaySwartz · 2012-11-22T00:45:58.121Z · LW(p) · GW(p)

Granted. My point is that the function needs to comprehend these factors to come to a more informed decision. A simple comparison of two values is inadequate. Some shading and weighting of the values is required, however subjective that may be. Devising a method to assess the amount of subjectivity would be an interesting discussion. Considering the composition of the value is the enlightening bit.

I also posit that a suite of algorithms should be comprehended with some trigger function in the overall algorithm. One of our skills is to change modes to suit a given situation. How sub-utilities impact the value(s) served up to the overall utility will vary with situational inputs.

The overall utility function needs to work with a collection of values and project each value combination forward in time, and/or back through history, to determine the best selection. The nature of the complexity of the process demands using more sophisticated means. Holding a discussion at the current level feels to me to be similar to discussing multiplication when faced with a calculus problem.

comment by denis_bider · 2008-01-22T23:49:38.000Z · LW(p) · GW(p)

Eliezer - the way question #1 is phrased, it is basically a choice between the following:

  1. Be perceived as a hero, with certainty.

  2. Be perceived as a hero with 90% probability, and continue not to be noticed with 10% probability.

This choice will be easy for most people. The expected 50 extra deaths are a reasonable sacrifice for the certainty of being perceived as a hero.

The way question #2 is phrased, it is similarly a choice between the following:

  1. Be perceived as a villain, with certainty.

  2. Not be noticed with 90% probability, and be perceived as a villain with 10% probability.

Again, the choice is obvious. Choose #2 to avoid being perceived as a villain.

If you argue that the above interpretations are then not altruistic, I think the "Repugnant Conclusion" link shows how futile it is to try to make actual "altruistic decisions".

comment by Tiiba2 · 2008-01-22T23:52:02.000Z · LW(p) · GW(p)

I don't think even everyone going blind is a good excuse for torturing a man for fifty years. How are they going to look him in the eye when he gets out?

The problem is not that I'm afraid of multiplying probability by utility, but that Eliezer is not following his own advice - his utility function is too simple.

comment by Caledonian2 · 2008-01-23T00:02:19.000Z · LW(p) · GW(p)

It will be interesting to see if this is one of the mistakes Eliezer quietly retracts, or one of the mistakes that he insists upon making over and over no matter what the criticism.

comment by Roland2 · 2008-01-23T01:07:42.000Z · LW(p) · GW(p)

I'm betting 10 credibility units on Yudkowsky publicly admitting that he was wrong on this one.

Replies from: phob
comment by phob · 2011-01-04T17:58:40.148Z · LW(p) · GW(p)

Want to put a time scale on that?

comment by gl · 2008-01-23T01:30:38.000Z · LW(p) · GW(p)

I think I understand the point of the recent series of posts, but I find them rather unsatisfying. It seems to me that there is a problem with translating emotional situations into probability calculations. This is a very real and interesting problem, but saying "shut up and multiply" is not a good way to approach it. Borrowing from 'A Technical Explanation', it's kind of like the blue tentacle question. When I am asked what I would do when faced with the choice between a googolplex of dust specks or 50 years of torture, my reaction is: But that would never happen! Or, perhaps, I would tell the psychopath who was trying to force me to make such a choice to go f- himself.

comment by Andrew3 · 2008-01-23T01:33:27.000Z · LW(p) · GW(p)

It seems a lot of people are willing to write off minimal discomfort and approximate it to 0 discomfort; I don't think that's fair at all.

If we are talking in terms of this 'discomfort', let's start out with two sets of K people each, drawn from a population of X >>> K people, with the same 'discomfort' applied to every member: set A and set B. One set must bear the discomfort; which set should we pick?

Clearly at the start, both are defined to be the same. So we then double the number in set A while halving their discomfort.

One way to define an activity A as 'half the discomfort' of B is to ask the average person how long they will endure activity A for, say, $100, and the same for B; if they are willing to endure A twice as long as B, let's call that half. There is no such thing as infinite discomfort, because we are dealing with people here; double the number of people and you double the discomfort.

Torturing two people for 50 years is twice as bad as torturing one person for 50 years. How do we work that out? Well, we have some non-infinite amount of discomfort for "torturing for 50 years".

Eventually, after many repetitions, we hit on some discomfort which 'suddenly' someone has arbitrarily said is ~0 discomfort, even though it is really some small discomfort. And since 0*K = 0, we can set the number of people in set A to be (X-K) (a bit unfair to put the people in set B into set A too, I think; give them a break), yet it will still be better to choose set A over set B: the sum of the discomforts is less in A than in B.

Let's say that we are now at the point where the discomfort of A is a speck of dust, and the discomfort of B is 50 years of torture. Let's now crank up the discomfort of A until we hit the precise point at which you just about start to care about your discomfort (just before it changes from ~0 to some number). I reckon a stubbed toe would be a good point, though I bet even more discomfort would be the true discontinuity point. K is now 1 person, and X is the entire human population of the planet. This is fine because it's ~0 * 6.6 billion = 0; yet you take a stumble after you stub a toe, you lose 0.1 minutes of your life in a bad way, and it's not nice - if it were nice, the discomfort would be negative!

But we are all happy, right? Everyone stubs their toe, and a man (or woman) is saved from 50 years of torture. However, that comes to 1256 years of stubbed toes - a total of 1256 years of lost living, time spent wincing at your sore toe rather than looking at the sky, and so on. Is that still acceptable? Double the people: 2512 years. Still fine? Keep on going, because to you (and people in general, if you're right), the discomfort is ~0.

Keep on doubling that number and watch those wasted, pain-filled years double and double. If you suddenly say 'OK, that's enough, a trillion stubbed toes is worse than 50 years of torture!' then we simply double the number of people tortured, and then half the population stub their toes for one guy, half for the other. Imagine the hundreds of thousands of years humanity has wasted on stubbed toes, and if you don't see that as bad, you should wonder if you're biased by scope insensitivity.
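
The 1256-year figure checks out, give or take the year-length convention; a short sketch of the arithmetic:

```python
POPULATION = 6.6e9            # world population used in the comment
MINUTES_PER_STUBBED_TOE = 0.1 # the comment's estimate of time lost per stubbed toe

total_minutes = POPULATION * MINUTES_PER_STUBBED_TOE
print(total_minutes / (60 * 24 * 365.25))   # ~1255 years of collective wincing
```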

comment by Benquo · 2008-01-23T01:45:35.000Z · LW(p) · GW(p)

Mr. Yudkowsky, I'm not sure the duration/intensity of the torture is the only bad thing relevant here. A friend of mine pointed out that a problem with 50 years of torture is that it permanently destroys someone's life. (I think it was in one of the "fake altruism" family of posts that you pointed out that belief of utility != utility of belief.) So the utility curve would be pretty flat for the first couple thousand dust specks, beginning to slope down in proportion to the pain through a few minutes of torture. After that, it would quickly become steeper as the torture began to materially alter the person tortured. Another factor to consider is the difference between pain during which you can do other things, and pain during which you can't. So the 50-year torturee's (or even a 1-minute torturee's) life is effectively shortened in a way that even a 1,000,000-dust-speck person's life is not. So I'm not sure people aren't implicitly including those factors sometimes, when they get mad about torture. I'd rather have five years of chronic back pain than five minutes of permanently soul-crushing torture.

You might argue that it's still irrational, but it's not as obvious as you make it out to be.

comment by Jef_Allbright · 2008-01-23T01:47:15.000Z · LW(p) · GW(p)

This form of reasoning, while correct within a specified context, is dangerously flawed with regard to application within contexts sufficiently complex that outcomes cannot be effectively modeled. This includes much of moral interest to humans. In such cases, as with evolutionary computation, an optimum strategy exploits best-known principles synergistically promoting a maximally coherent set of present values, rather than targeting illusory, realistically unspecifiable consequences. Your "rationality" is correct but incomplete. This speaks as well to the well-known paradoxes of all consequentialist ethics.

comment by Mario2 · 2008-01-23T02:12:44.000Z · LW(p) · GW(p)

I'm not sure I understand at what point the torture would no longer be justified. It's easy to say that a googolplex of people with dust specks is worse than one person being tortured, but there has to be some number at which this is no longer the case. At some point even your preferences should flip, but you never suggest a point where it would be acceptable. Would it be somewhere around 1.5-1.6 billion, assuming the dust specks were worth 1 second of pain? Is it acceptable if it is just 2 people affected? How many dust specks go into 1 year of torture? I think people would be more comfortable with your conclusion if you had some way to quantify it; right now all we have is your assertion that the math is in the dust speck's favor.

Replies from: AlexanderRM
comment by AlexanderRM · 2015-03-27T22:12:39.723Z · LW(p) · GW(p)

As I understand it, the math is in the dust speck's favor because EY used an arbitrarily large number such that it couldn't possibly be otherwise.

I think a better comparison would be between 1 second of torture (which I'd estimate is worth multiple dust specks, assuming it's not hard to get them out of your eye) and 50 years of torture, in which case yes, it would flip around 1.5 billion. That is of course assuming that you don't have a term in your utility function where sharing of burdens is valuable - I assume EY would be fine with that but would insist that you implement it in the intermediate calculations as well.
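
The 1.5 billion figure is simply 50 years counted in one-second units; a quick check:

```python
seconds_in_50_years = 50 * 365.25 * 24 * 3600
print(seconds_in_50_years)   # ~1.58e9, i.e. roughly 1.5-1.6 billion one-second units
```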

comment by Jef_Allbright · 2008-01-23T02:23:31.000Z · LW(p) · GW(p)

"I think people would be more comfortable with your conclusion if you had some way to quantify it; right now all we have is your assertion that the math is in the dust speck's favor."

The actual tipping point depends on your particular subjective assessment of relative utility. The actual tipping point doesn't matter; what matters is that there is crossover at some point; therefore such reasoning about preferences - like San Jose --> San Francisco --> Oakland --> San Jose - is incoherent.
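
One standard way to cash out "incoherent" here is the money-pump argument: an agent whose preferences run in a cycle will happily pay a little for each preferred move and end up exactly where it started, only poorer. A toy sketch, with the cities echoing the post's driving example and the fee and starting cash purely arbitrary:

```python
# A toy money pump against cyclic preferences:
# San Jose -> San Francisco -> Oakland -> San Jose.
prefers = {                  # prefers[x] is the city the agent will pay to move to from x
    "San Jose": "San Francisco",
    "San Francisco": "Oakland",
    "Oakland": "San Jose",
}

location, cash = "San Jose", 100.0
FEE = 1.0                    # price the agent willingly pays for each preferred move

for _ in range(30):          # run the cycle ten full times
    location = prefers[location]
    cash -= FEE

print(location, cash)        # San Jose 70.0 -> back where it started, $30 poorer
```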

comment by Nick_Tarleton · 2008-01-23T02:31:40.000Z · LW(p) · GW(p)

Roland, I'll take that bet.

The idea of an ethical discontinuity between something that can destroy a life (50 years of torture, or 1 year) and something that can't (1 minute of torture, a dust speck) has some intuitive plausibility, but ultimately I don't buy it. It very much seems like death must be in the same 'regime' as torture, but also that death is in the same regime as trivial harms, because people risk death for trivial benefit all the time - I imagine anyone here would drive across town for $100 or $500 or $1000, even though it's slightly more dangerous than staying at home. The life-destroying aspect means that the physical pain is only part (probably the smaller part) of the harm of prolonged torture, and that the badness of torture rises greater than linearly with duration, but doesn't necessarily make it incommensurable.

comment by randomwalker · 2008-01-23T02:36:18.000Z · LW(p) · GW(p)

I think "Shut up and Multiply" would be a good tagline for this blog, and a nice slogan for us anti-bias types in general!

comment by SS · 2008-01-23T03:38:37.000Z · LW(p) · GW(p)

How come these examples and subsequent narratives never mention the value of floors and diminishing returns? Is every life valued the same? If there were a monster or disease that would kill everyone in the world, there is a floor involved. Choice 1, saving 400 lives, ensures that humanity continues (assuming 400 people are enough to re-populate the world), while taking a 90% chance of saving 500 leaves a 10% chance that humanity on Earth ends. Would you agree that floors are important factors that do change the value of an optimal outcome when they are one-time events? In other words, the marginal utility of a life is diminishing in this example.

comment by SS · 2008-01-23T03:40:03.000Z · LW(p) · GW(p)

The idea of saving someone's life has great value to the person who did the saving. They are a hero even if it is only one life. The subsequent individuals diminish in the utility they deliver, because being a hero carries such a great return and only requires saving one person versus everyone. People who choose option 1 are either not doing the math, or valuing lives differently between individuals because of the effect it has on them.

comment by SS · 2008-01-23T03:41:38.000Z · LW(p) · GW(p)

The value placed on items is really what matters, because we don’t value everything the same. The true question is: why do we value them differently, or are we really just miscalculating the expected value? Every equation has to be learned, from 2+2=4 on, and maybe we are just heading up that learning curve.

comment by Keith_Adams · 2008-01-23T04:53:54.000Z · LW(p) · GW(p)

Save 400 lives, with certainty.
Save 500 lives, with 90% probability; save no lives, 10% probability.

I'm surprised how few people are reacting to the implausibility of this thought experiment. When not in statistics class, God rarely gives out priors. Probabilities other than 0+epsilon and 1-epsilon tend to come from human scholarship, which is an often imperfect process. It is hard to imagine a non-contrived situation where you would have as much confidence in the 90/10 outcome as in the certain outcome.

Suppose the "90/10" figure comes from cure rates in a study of 20-year-old men, but your 500 patients are mostly middle-aged. You have the choice of disarming a bomb that will kill 400 people with probability 1-epsilon, or of taking that "90/10" estimate, really, really seriously; I know which choice I would make.

comment by Matthew2 · 2008-01-23T04:54:16.000Z · LW(p) · GW(p)

Your conclusion follows very clearly from the research results, but it does not apply to the new situation. Doing the math is a false premise. Few people have personal experience of being tortured, and more importantly, no one who disagrees with you understands what you personally mean by the dust speck. Perhaps if it were sawdust, or getting pool water splashed in your eye, then it would finally register more clearly. Again, you (probably) haven't been tortured, but you have gone through life without even consciously registering a dust speck in your eye. With a little adjustment above a threshold, many people might switch sides. Pain is not linear.

comment by Caledonian2 · 2008-01-23T05:05:07.000Z · LW(p) · GW(p)

what matters is that there is crossover at some point

But there isn't necessarily one. That's the point - Eliezer is presuming that dust speck harm is additive and that enough of such harms will equal torture. This presumption does not seem to have a basis in rational argument.

comment by Unknown · 2008-01-23T06:23:36.000Z · LW(p) · GW(p)

The comments on this post are no better than those on the Torture vs. Dust Specks post. In other words, simply bring the word "torture" into the discussion and people automatically become irrational. It's happened to some of the other threads as well, when someone mentioned torture.

It strongly suggests that not many of the readers have made much progress in overcoming their biases.

By the way, Eliezer has corrected the original post; anonymous was correct about the numbers.

Replies from: JDM
comment by JDM · 2013-06-03T16:01:55.633Z · LW(p) · GW(p)

I would simply argue that a dust speck has 0 disutility.

Replies from: BerryPick6, shminux
comment by BerryPick6 · 2013-06-03T16:21:29.834Z · LW(p) · GW(p)

That'd be Fighting the Hypothetical.

Replies from: JDM
comment by JDM · 2013-06-03T16:55:19.825Z · LW(p) · GW(p)

It's an extremely hypothetical situation. However, why should it, ignoring externalities as the problem required, be measured at any disutility? That dust speck has no impact on my life in any way, other than making me blink. No pain is involved.

Replies from: BerryPick6
comment by BerryPick6 · 2013-06-03T17:40:54.897Z · LW(p) · GW(p)

Because it's one of the parameters of the thought experiment that a dust speck causes a miniscule amount of disutility.

comment by shminux · 2013-06-03T17:49:55.538Z · LW(p) · GW(p)

Pick some other inconvenience which has a small but non-zero disutility and repeat the exercise.

Replies from: JDM
comment by JDM · 2013-06-03T18:45:40.587Z · LW(p) · GW(p)

I'm not disputing the validity of the thought process. I don't think the example was well chosen, however. A dust speck, ignoring externalities, doesn't affect anything. Using even a pinprick would have made the example far better.

comment by komponisto2 · 2008-01-23T07:33:20.000Z · LW(p) · GW(p)

Lee:

Models are supposed to hew to the facts. Your model diverges from the facts of human moral judgments, and you respond by exhorting us to live up to your model.

Be careful not to confuse "is" and "ought". Eliezer is not proposing an empirical model of human psychology ("is"); what he is proposing is a normative theory ("ought"), according to which human intuitive judgements may turn out to be wrong.

If what you want is an empirical theory that accurately predicts the judgements people will make, see denis bider's comment of January 22, 2008 at 06:49 PM.

comment by Ben_Jones · 2008-01-23T10:35:32.000Z · LW(p) · GW(p)

I don't think even everyone going blind is a good excuse for torturing a man for fifty years. How are they going to look him in the eye when he gets out?

That's cold brother. Real cold....

The idea of an ethical discontinuity between something that can destroy a life (50 years of torture, or 1 year) and something that can't (1 minute of torture, a dust speck) has some intuitive plausibility, but...

Sorry, no. 'Torture' and 'dust speck' are not two different quantities of the same currency. I wouldn't even be confident trying to add up individual minutes of torture to equal one year. Humans do not experience the world like disinterested machines. They don't even experience a logarithmic progression of 'amount of discomfort.' 50 years of torture does things to the mind and body that one year (for 50 people) can never do. One year of torture does things one minute can never do. One minute of torture does things x dust specks in x people's eyes could never do. None of these things registers on each other's scales.

Cash, possessions, whatever, I'm with you and Eliezer. Pure human perception is different, even when you count neurons. And no, this isn't a blind irrational reaction to the key word 'torture'. This is how human beings work.

Something occurred to me reading through all this earlier. Do we put no weight on the fact that if you polled the 3^^^3 people and asked them whether they would all undergo one dust speck to save one person from 50 years of torture, they'd almost certainly all say yes? Who would say "no, look how many of us there are! Torture him!" I find this goes a long way to exploding the idea of 'cumulative discomfort'.

Replies from: momothefiddler
comment by momothefiddler · 2011-11-16T18:46:56.439Z · LW(p) · GW(p)

The issue with polling 3^^^3 people is that once they are all aware of the situation, it's no longer purely (3^^^3 dust specks) vs (50yrs torture). It becomes (3^^^3 dust specks plus 3^^^3 feelings of altruistically having saved a life) vs (50yrs torture). The reason most of the people polled would accept the dust speck is not because their utility of a speck is more than 1/3^^^3 their utility of torture. It's because their utility of (a speck plus feeling like a lifesaver) is more than their utility of (no speck plus feeling like a murderer).

comment by mitchell_porter2 · 2008-01-23T10:56:35.000Z · LW(p) · GW(p)

As was pointed out last time, if you insist that no quantity of dust-specks-in-individual-eyes is comparable to one instance of torture, then what is your boundary case? What about 'half-torture', 'quarter-torture', 'millionth-torture'? Once you posit a qualitative distinction between the badness of different classes of experience, such that no quantity of experiences in one class can possibly be worse than a single experience in the other class, then you have posited the existence of a sharp dividing line on what appears to be a continuum of possible individual experiences.

But if we adopt the converse position, and assume that all experiences are commensurable and additive aggregation of utility makes sense without exception - then we are saying that there is an exact quantity which measures precisely how much worse an instance of torture is than an instance of eye irritation. This is obscured by the original example, in which an inconceivably large number is employed to make the point that if you accept additive aggregation of utilities as a universal principle, then there must come a point when the specks are worse than the torture. But there must be a boundary case here as well: some number N such that, if there are more than N specks-in-eyes, it's worse than the torture, but if there are N or less, the torture wins out.

Can any advocates of additive aggregation of utility defend a particular value for N? Because if not, you're in the same boat with the incommensurabilists, unable to justify their magic dividing line.
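
To make concrete what the additivist is committed to: a toy sketch in which the two disutility numbers are pure inventions; the only point is that once you fix them, an exact crossover N falls out mechanically, and defending those inputs is the hard part.

```python
# Under additive aggregation, n specks are worse than the torture exactly
# when n * d_speck > d_torture, so the crossover N is a simple division.
from fractions import Fraction

d_speck = Fraction(1, 10**9)   # invented disutility of one dust speck
d_torture = Fraction(10**6)    # invented disutility of 50 years of torture

n_crossover = d_torture // d_speck + 1
assert (n_crossover - 1) * d_speck <= d_torture < n_crossover * d_speck
print(n_crossover)             # 1000000000000001 specks tip the scale
```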

Replies from: AndyC
comment by AndyC · 2014-04-22T12:50:56.428Z · LW(p) · GW(p)

I'm not unable to justify the "magic dividing line."

The world with the torture gives 3^^^3 people the opportunity to lead a full, thriving life.

The world with the specks gives 3^^^3+1 people the opportunity to lead a full, thriving life.

The second one is better.

Replies from: themusicgod1
comment by themusicgod1 · 2014-12-22T01:33:55.754Z · LW(p) · GW(p)

Couldn't you argue this the opposite way? That life is such misery, that extra torture isn't really adding to it.

The world with the torture gives 3^^^3+1 suffering souls a life of misery, suffering and torture.

The world with the specks gives 3^^^3+1 suffering souls a life of misery, suffering and torture, only basically everyone gets extra specks of dust in their eye.

In which case, the first is better?

It's not as much of a stretch as you might think..

comment by Unknown · 2008-01-23T11:12:13.000Z · LW(p) · GW(p)

Ben, according to your poll suggestion, we should forbid driving, because each particular person would no doubt be willing to drive a little bit slower to save lives, and ultimately having no one drive at all would save the most lives. But instead, people continue to drive, thereby trading many lives for their convenience.

Agreeing with these people, I'd be quite willing to undergo the torture personally, simply in order to prevent the dust specks for the others. And so this works in reverse against your poll.

Mitchell: "You're in the same boat with the incommensurabilists, unable to justify their magic dividing line." No, not at all. It is true that no one is going to give an exact value. But the issue is not whether you can give an exact value; the issue is whether the existence of such a value is reasonable or not. The incommensurabilists must say that there is some period of time, or some particular degree of pain, or whatever, such that a trillion people suffering for that length of time or that degree of pain would always be preferable to one person suffering for one second longer or suffering a pain ever so slightly greater. This is the claim which is unreasonable.

If someone is willing to make the torture and specks commensurable, it is true that this implies that there is some number where the specks become exactly equal to the torture. There is not at all the same intuitive problem here; it is much like the comparison made a while ago on Overcoming Bias between caning and prison time; if someone is given few enough strokes, he will prefer this to a certain amount of prison time, while if the number is continually increased, at some point he will prefer prison time.

comment by Remco_Gerlich · 2008-01-23T11:39:53.000Z · LW(p) · GW(p)

I think this article could have been improved by splitting it into two; one of them to discuss the original problem (is it better to save 400 for sure than to gamble on saving 500 with 90% probability), and the other to discuss the reasons why people pick the other one if you rephrase the question. They're both interesting, but presenting them at once makes the discussion too confused.

And the second half... specks of dust in the eye and torture can both be described as "bad things". That doesn't mean they're the same kind of thing with different magnitudes. That was mostly a waste of time to me.

comment by Vladimir_Nesov2 · 2008-01-23T11:46:04.000Z · LW(p) · GW(p)

Eliezer,

What do specks have to do with circularity? Whereas in the previous posts you explained that certain groups of decision problems are mathematically equivalent, independent of the actual decision, here you argue for a particular decision. Note that utility is not necessarily linear in the number of people.

comment by Unknown · 2008-01-23T12:13:36.000Z · LW(p) · GW(p)

It looks like there was an inferential distance problem resulting from the fact that many either haven't read or don't remember the original torture vs dust specks post. Eliezer may have to explain the circularity problem in more detail.

comment by Ben_Jones · 2008-01-23T12:28:16.000Z · LW(p) · GW(p)

ultimately having no one drive at all would save the most lives

And ultimately no dust specks and no torture and lollipops all round would be great for everyone. Stick to the deal as presented. You have a choice to make. Speed is quantifiable. Death is very quantifiable. Pain - even physical pain - goes in the same category as love, sadness, confusion. They are abstract nouns because you cannot hold, measure or count them. Does N losing lottery tickets spread equally over N people equal one dead relative's worth of grief?

Reconsider my poll scenario: Wouldn't the opinions of 3^^^3 people, all willing to bear the brunt of a dust speck for you, sway your judgment one little bit? Are you that certain of your rationality? You are about to submit yourself to 50 years of torture, and you have 3^^^3 people screaming at you 'don't bother, it's okay, no single person has a problem with just blinking once, even those who would opt for torture in your place!' What do you reply? 'Stop being so damned irrational! Just insert the rods!'

comment by Ben_Jones · 2008-01-23T13:07:39.000Z · LW(p) · GW(p)

Unknown,

such that a trillion people suffering for that length of time or that degree of pain would always be preferable to one person suffering for one second longer or suffering a pain ever so slightly greater.

As I wrote yesterday, a dust mote registers exactly zero on my torture scale, and torture registers fundamentally off the scale (not just off the top, off) on my dust mote scale. Torture can be placed, if not discretely quantified, on my torture scale. Dust motes can not. If it's a short enough space of time that you could convince me N dust motes would be worse, I'd say your idea of torture is different from mine.

comment by Unknown · 2008-01-23T13:17:50.000Z · LW(p) · GW(p)

Ben: the poll scenario might persuade me if all the people actually believed that the situation with the dust specks, as a whole, were better than the torture situation. But this isn't the case, or we couldn't be having this discussion. Each person merely thinks that he wouldn't mind suffering a speck as an individual in order to save someone from torture.

As for a speck registering zero on your torture scale: what about being tied down with your eyes taped open, and then a handful of sand thrown in your face? Does that register zero too? The point would be to take the minimum which is on the same scale, and proceed from there.

As for me, I don't have any specific torture scale. I do have a pain scale, and dust specks and torture are both on it.

comment by Larry_D'anna2 · 2008-01-23T13:24:27.000Z · LW(p) · GW(p)

My utility function doesn't add the way you seem to think it does. A googolplex of dusty eyes has the same tiny negative utility as one dusty eye as far as I'm concerned. Honestly, how could anyone possibly care how many people's eyes get dusty? It doesn't matter. Torture matters a lot. But that's not really even the point. The point is that a bad thing happening to n people isn't n times worse than a bad thing happening to one person.

comment by Nick_Tarleton · 2008-01-23T13:34:00.000Z · LW(p) · GW(p)

Do we put no weight on the fact that if you polled the 3^^^3 people and asked them whether they would all undergo one dust speck to save one person from 50 years of torture, they'd almost certainly all say yes?

What if they each are willing to be tortured for 25 years? Is it better to torture a googolplex people for 25 years than one person for 50 years?

comment by Caledonian2 · 2008-01-23T14:27:00.000Z · LW(p) · GW(p)

The assumption that harms are additive is a key part of the demonstration that harm/benefit calculations can be rational.

So, has it been demonstrated that one cannot be rational without making that assumption?

comment by Unknown3 · 2008-01-23T15:12:00.000Z · LW(p) · GW(p)

Caledonian, of course that cannot be demonstrated. But who needs a demonstration? Larry D'anna said, "A googolplex of dusty eyes has the same tiny negative utility as one dusty eye as far as I'm concerned." If this is the case, do a billion deaths have the same negative utility as one death?

To put it another way, everyone knows that harms are additive.

comment by Ben_Jones · 2008-01-23T15:42:00.000Z · LW(p) · GW(p)

what about being tied down with your eyes taped open, and then a handful of sand thrown in your face

Unknown - this is called torture, and as such would register on my torture scale. Is it as bad as waterboarding? No. Do I measure them on a comparable scale? Yes. Can I, hence, imagine a value for N where N(Sand) > (Waterboarding)? Yes, I can. I stand by my previous assertion.

However, I'm beginning to see that this is a problem of interpretation. I am fully on board with Eliezer's math, I'm happy to shut up and multiply lives by probabilities, and I do have genuine doubts about whether I'm basing my decision on squeamishness. I hope not. But currently I see no reason to think I am.

Each person merely thinks that he wouldn't mind suffering a speck as an individual in order to save someone from torture.

Each person merely thinks? But you would retort, confidently, that they are in fact erroneous and irrational, you are rational and correct, and ask that the rods be inserted posthaste? To be honest mate, even if I did personally believe N dust motes 'added up to torture', if all those people said 'don't bother, we'll take the dust', I'd do so. If only because the perceived authority of that many people asserting something would (by Eliezer's own logic) amount to enormous evidence that my decision was wrong. And this is vindicated (for people like me at least) in that if I were one of the 3^^^3 and you were the unlucky one, I'd urge you to reconsider along with everyone else.

What if they each are willing to be tortured for 25 years? Is it better to torture a googolplex people for 25 years than one person for 50 years?

These are two different questions. The first is about people telling you what they're willing to do. The second is you deciding what gets done. The second is the scenario that we're confronted with, and my previous comment addresses that question.

Larry, you're not right. Two people getting dust motes in their eye is worse than one. Two people getting tortured is worse than one.

comment by Anon14 · 2008-01-23T16:50:00.000Z · LW(p) · GW(p)

To put it another way, everyone knows that harms are additive.

Is this one of the intuitions that can be wrong, or one of those that can't?

comment by Unknown3 · 2008-01-23T16:50:00.000Z · LW(p) · GW(p)

Ben, I think you might not have understood what I was saying about the poll. My point was that each individual is simply saying that he does not have a problem with suffering a dust speck to save someone from torture. But the issue isn't whether one individual should suffer a dust speck to save someone, but whether the whole group should suffer dust specks for this purpose. And it isn't true that the whole group thinks that the whole group should suffer dust specks for this purpose. If it were, there wouldn't be any disagreement about this question, since I and others arguing this position would presumably be among the group. In other words, your argument from hypothetical authority fails because human opinions are not in fact that consistent.

Suppose that 1 person per googol (out of the 3^^^3 persons) is threatened with 50 years of torture? Should each member of the group accept a dust speck for each person threatened with torture, therefore burying the whole group in dust?

comment by Tiiba2 · 2008-01-23T18:10:00.000Z · LW(p) · GW(p)

After trying several ideas, I realized that my personal utility function converges, among its other features. And it's obvious in retrospect. After all, there's only so much horror I can feel. But while you call this nasty names like "scope insensitivity", I embrace it. It's my utility function. It's not good or bad or wrong or biased, it just is. (Scope insensitivity with regard to probabilities is, of course, still bad.)

I still think that one man should be tortured a lot instead of many being tortured slightly less, because higher individual suffering results in a higher point of convergence.

This also explains why our minds reject "Pascal's mugging".

comment by Ben_Jones · 2008-01-23T18:57:00.000Z · LW(p) · GW(p)

OK, my final response on the subject, which has had me unable to think about anything else all day. Thanks to all involved for helping me get my thoughts in order on this topic, and sorry for hijacking.

therefore burying the whole group in dust

You've forgotten the rules of the game. There's no 'burying everyone in dust.' You either have a speck of dust in your eye and blink it away, or you don't. And that's for every individual in the group. Playing with the numbers doesn't change the scenario much either.

My #1 complaint is that no-one seems bothered by things like this:

So we then double the number in set A while halving their discomfort.

Halving their discomfort? Care to go into some more depth on that? Would that be half as many neurons firing 'pain!'? Rods inserted half as deep? Thumbs only halfway screwed?

And this:

We can keep doing this, gradually - very gradually - diminishing the degree of discomfort, and multiplying by a factor of a googol each time, until...

As far as I know, there is no reason why I should agree that you can do anything of the sort. You might be able to divide torture by N and get dust mote, and you might not. But you certainly can't take it for granted then tell me I'm irrational.

comment by Jef_Allbright · 2008-01-23T20:06:00.000Z · LW(p) · GW(p)

Once again we've highlighted the immaturity of present-day moral thinking -- the kind that leads inevitably to Parfit's Repugnant Conclusion. But any paradox is merely a matter of insufficient context; in the bigger picture all the pieces must fit.

Here we have people struggling over the relative moral weight of torture versus dust specks, without recognizing that there is no objective measure of morality, but only objective measures of agreement on moral values.

The issue at hand can be modeled coherently in terms of the relevant distances (regardless of how highly dimensional, or what particular distance metric) between the assessor's preferred state and the assessor's perception of the alternative states. Regardless of the particular (necessarily subjective) model and evaluation function, there must be some scalar distance between the two states within the assessor's model (since a rational assessor can have only a single coherent model of reality, and the alternative states are not identical.) Furthermore introducing a multiplier on the order of a googolplex overwhelms any possible scale in any realizable model, leading to an effective infinity, forcing one (if one's reasoning is to be coherent) to view that state as dominant.

All of this (as presented by Eliezer) is perfectly rational -- but merely a special case and inappropriate to decision-making within a complex evolving context where actual consequences are effectively unpredictable.

If one faces a deep and wide chasm impeding desired trade with a neighboring tribe, should one rationally proceed to achieve the desired outcome: an optimum bridge?

Or should one focus not on perceived outcomes, but rather on most effectively expressing one's values-complex: i.e., valuing not the bridge, but effective interaction (including trade), and proceeding to exploit best-known principles promoting interaction, for example communications, air transport, replication rather than transport... and maybe even a bridge?

The underlying point is that within a complex evolutionary environment, specific outcomes can't be reliably predicted. Therefore to the extent that the system (within its environment of interaction) cannot be effectively modeled, an optimum strategy is one that leads to discovering the preferred future through the exercise of increasingly scientific (instrumental) principles promoting an increasingly coherent model of evolving values.

In the narrow case of a completely specified context, it's all the same. In the broader, more complex world we experience, it means the difference between coherence and paradox.

The Repugnant Conclusion fails (as does all consequentialist ethics when extrapolated) because it presumes to model a moral scenario incorporating an objective point of view. Same problem here.

comment by Larry_D'anna2 · 2008-01-23T20:11:00.000Z · LW(p) · GW(p)

Ben, you are right. Two people with dusty eyes is worse than one. But it isn't twice as bad. It's not even nearly twice as bad. On the other hand I would say that two people being tortured is almost twice as bad as one, but not quite. I'm sure I can't write down a formula for my utility function in terms of number of deaths, or dusty eyes, or tortures, but I know one thing: it is not linear. There's nothing inherently irrational about choosing a nonlinear utility function. So I will continue to prefer any number of dusty eyes to even one torture. I would also prefer a very large number of 1-day tortures to a single 50-year one. (far far more than 365 * 50). Am I being irrational? How?
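
For what it's worth, such a function is easy to write down coherently; here is a toy sketch of a bounded aggregation, where the cap and the per-person numbers are invented and stand in for nothing in particular.

```python
# Bounded (hence nonlinear) aggregation of a harm over n people:
# each extra sufferer makes things a little worse, but the total
# saturates at `cap`, so no number of dusty eyes reaches the torture.
import math

def total_disutility(n, per_person, cap):
    return cap * (1 - math.exp(-n * per_person / cap))

SPECK_PER_PERSON = 1e-6   # invented
SPECK_CAP = 1.0           # invented ceiling on total speck disutility
TORTURE = 1e9             # invented disutility of one 50-year torture

for n in (1, 10**6, 10**12, 10**100):
    print(n, total_disutility(n, SPECK_PER_PERSON, SPECK_CAP))
# The speck total creeps toward 1.0 and stops; the single torture (1e9)
# always dominates under this particular choice of numbers.
```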

comment by Caledonian2 · 2008-01-23T23:18:00.000Z · LW(p) · GW(p)
Caledonian, of course that cannot be demonstrated.

Of course? It is hardly obvious to me that such a thing is beyond demonstration, even if we currently do not know.

But who needs a demonstration?

People interested in rational thinking who aren't idiots. At the very least.

So, which factor rules you out?

comment by Paul_Gowder · 2008-01-23T23:49:00.000Z · LW(p) · GW(p)

I think I'm going to have to write another of my own posts on this (haven't I already?), when I have time. Which might not be for a while -- which might be never -- we'll see.

For now, let me ask you this Eliezer: often, we think that our intuitions about cases provide a reliable guide to morality. Without that, there's a serious question about where our moral principles come from. (I, for one, think that question has its most serious bite right on utilitarian moral principles... at least Kant, say, had an argument about how the nature of moral claims leads to his principles.)

So suppose -- hypothetically, and I do mean hypothetically -- that our best argument for the claim "one ought to maximize net welfare" comes by induction from our intuitions about individual cases. Could we then legitimately use that principle to defend the opposite of our intuitions about cases like this?

More later, I hope.

comment by Intelligent_Layman · 2008-01-24T04:17:00.000Z · LW(p) · GW(p)

Some of us would prefer to kill them all, regardless.

comment by mitchell_porter2 · 2008-01-24T05:17:00.000Z · LW(p) · GW(p)

Unknown: "There is not at all the same intuitive problem here; it is much like the comparison made a while ago on Overcoming Bias between caning and prison time; if someone is given few enough strokes, he will prefer this to a certain amount of prison time, while if the number is continually increased, at some point he will prefer prison time."

It may be a psychological fact that a person will always choose eventually. But this does not imply that those choices were made in a rationally consistent way, or that a rationally consistent extension of the decision procedures used would in fact involve addition of assigned utilities. Not only might decisions in otherwise unresolvable cases be made by the mental equivalent of flipping a coin, just to end the indecision, but what counts as unresolvable by immediate preference will itself depend on mood, circumstance, and other contingencies.

Similar considerations apply to speculations by moral rationalists regarding the form of an idealized personal utility function. Additivism and incommensurabilism both derive from simple moral intuitions - that harms are additive, that harms can be qualitatively different - and both have problems - determining ratios of badness exactly, locating that qualitative boundary exactly. Can we agree on that much?

comment by Unknown3 · 2008-01-24T06:11:00.000Z · LW(p) · GW(p)

I agree that as you defined the problems, both have problems. But I don't agree that the problems are equal, for the reason stated earlier. Suppose someone says that the boundary is that 1,526,216,123,000,252 dust specks are exactly equal to 50 years of torture (in fact, it's likely to be some relatively low number like this rather than anything like a googolplex.) It is true that proving this would be a problem. But it is no particular problem that 1,526,216,123,000,251 dust specks would be preferable to the torture, while 1,526,216,123,000,253 dust specks would be worse than the torture: the point is that the torture would differ from each of these values by an extremely tiny amount.

But suppose someone defines a qualitative boundary: 1,525,123 degrees of pain (given some sort of measure) has an intrinsically worse quality than 1,525,122 degrees, such that no amount of the latter can ever add up to the former. It seems to me that there is a problem which doesn't exist in the other case, namely that for a trillion people to suffer pain of 1,525,122 degrees for a trillion years is said to be preferable to one person suffering pain of 1,525,123 degrees for one year.

In other words: both positions have difficult to find boundaries, but one directly contradicts intuition in a way the other does not.

Replies from: Voltairina, RST
comment by Voltairina · 2013-08-21T01:40:20.430Z · LW(p) · GW(p)

I'm not totally convinced - there may be other factors that make such qualitative distinctions important. Such as exceeding the threshold to boiling. Or putting enough bricks in a sack to burst the bottom. Or allowing someone to go long enough without air that they cannot be resuscitated. It probably doesn't do any good to pose /arbitrary/ boundaries, for sure, but not all such qualitative distinctions are arbitrary...

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-08-21T02:06:06.849Z · LW(p) · GW(p)

Or allowing someone to go long enough without air that they cannot be resuscitated.

This is less of a single qualitative distinction than you would think, given the various degrees of neurological damage that can make a person more or less the same person that they were before.

Replies from: Voltairina
comment by Voltairina · 2013-08-21T03:34:08.046Z · LW(p) · GW(p)

Good point... you are right about that. It would be more of a matter of degrees of personhood, especially if you had advanced medical technologies available such as neural implants.

comment by RST · 2017-12-15T06:39:03.092Z · LW(p) · GW(p)

Suppose that the qualitative difference is between bearable and unbearable, in other words things that are above or below the pain tolerance. A pain just below the pain tolerance, when experienced for a small quantity of time, will remain bearable; however, if it is prolonged for a long time it will become unbearable, because human patience is limited. So, even if we give importance to qualitative differences, we can still choose to avoid torture and your second scenario, without going against our intuitions or being incoherent.

Moreover, we can think of qualitative differences as being like the colors on the spectrum of visible light: their edges are nebulous, but we can still agree that the grass is green and the sea is blue. This means that two very close points on the spectrum appear as part of the same color, but when their distance increases they become part of two different colors.

1,525,122 and 1,525,123 are so close that we can see them as shades of the same qualitative category. On the other hand, dust speck and torture are very distant from each other and we can consider them as part of two different qualitative categories.

Replies from: RST
comment by RST · 2017-12-24T22:15:45.667Z · LW(p) · GW(p)

To be more precise: let's assume that the time will be quite short (5 seconds, for example); in this case I think it is really better to let billions of people suffer 5 seconds of bearable pain than to let one person suffer 5 seconds of unbearable pain. After all, people can stand a bearable pain by definition.

However, pain tolerance is subjective and in real life we don't know exactly where the threshold is in every person, so we can prefer, as a heuristic rule, the option with fewer people involved when the pains are similar to each other (maybe we have evolved some system to make such approximations, a sort of threshold insensitivity).

comment by Another_Robin · 2008-01-24T18:03:00.000Z · LW(p) · GW(p)

My first reaction to this was, "I don't know; I don't understand 3^^^3 or a googol, or how to compare the suffering from a dust speck with torture." After I thought about it, I decided I was interpreting Eliezer's question like this: as the amount of suffering per person, say a, approaches zero but the number of people suffering, say n, goes to infinity, is the product a*n worse than somebody being tortured for 50 years? The limiting product is undefined, though, isn't it? If a goes to zero fast enough, for example by ceasing to be suffering when it falls below the threshold of notice, then the product is not as bad as the torture. I think several other commenters are thinking about it the same way implicitly, and impose conditions so the limit exists. Andrew did this by putting a lower bound on a, so of course the product gets big, but it's not the same question. Even leaving aside the other contributions to utility like life-altering effects, I'm having trouble making sense of this question.
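
A quick numerical illustration of why the limit is undefined as posed: whether a*n blows up, settles, or vanishes depends entirely on how fast a shrinks relative to n. The three decay rates below are arbitrary choices, not anything implied by the original problem.

```python
# Total "suffering" a*n as per-person suffering a shrinks while headcount n grows.
for n in (10**3, 10**6, 10**9, 10**12):
    a_slow = n ** -0.5     # a shrinks slower than 1/n  -> product diverges
    a_matched = 1.0 / n    # a shrinks exactly like 1/n -> product stays at 1
    a_fast = n ** -2.0     # a shrinks faster than 1/n  -> product goes to 0
    print(n, a_slow * n, a_matched * n, a_fast * n)
```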

comment by Paul_Gowder · 2008-01-24T21:09:00.000Z · LW(p) · GW(p)

I've written and saved a(nother) response; if you'd be so kind as to approve it?

comment by anon. · 2008-01-24T22:07:00.000Z · LW(p) · GW(p)

Any question of ethics is entirely answered by an arbitrarily chosen ethical system; therefore there are no "right" or "better" answers.

comment by Caledonian2 · 2008-01-25T00:32:00.000Z · LW(p) · GW(p)

Wrong, anon. If there are objective means by which ethical systems can be evaluated, there can be both better and right answers.

comment by mitchell_porter2 · 2008-01-25T02:08:00.000Z · LW(p) · GW(p)

Unknown, there is nothing inherently illogical about the idea of qualitative transitions. My thesis is that a speck of dust in the eye is a meaningless inconvenience, that torture is agony, and that any amount of genuinely meaningless inconvenience is preferable to any amount of agony. If those terms can be given objective meanings, then a boundary exists and it is a coherent position.

I just said genuinely meaningless. This is because, in the real world, there is going to be some small but nonzero probability that the speck of dust causes a car crash, for example, and this will surely be considerably more likely than a positive effect. When very large numbers are involved, this will make the specks worse than the torture.

But the original scenario does not ask us to consider consequences, so we are being asked to express a preference on the basis of the intrinsic badness of the two options.

comment by TGGP2 · 2008-01-25T07:56:00.000Z · LW(p) · GW(p)

Caledonian, what is an objective standard by which an ethical system can be evaluated?

comment by Ben_Jones · 2008-01-25T13:27:00.000Z · LW(p) · GW(p)

Mitchell, my sentiments exactly. Dust causing car crashes isn't part of the game as set up here - the idea is that you blink it away instantly, hence 'the least bad thing that can happen to you'.

The only stickler in the back of my mind is how I am (unconsciously?) categorising such things as 'inconvenience' or 'agony'. Where does stubbing my toe sit? How about cutting myself shaving? At what point do I switch to 3^^^3(Event) = Torture?

TGGP, are you familiar with the teachings of Jesus?

comment by Jef_Allbright · 2008-01-25T15:02:00.000Z · LW(p) · GW(p)

Anon wrote: "Any question of ethics is entirely answered by arbitrarily chosen ethical system, therefore there are no "right" or "better" answers."

Matters of preference are entirely subjective, but for any evolved agent they are far from arbitrary, and subject to increasing agreement to the extent that they reflect increasingly fundamental values in common.

comment by TGGP2 · 2008-01-25T15:44:00.000Z · LW(p) · GW(p)

TGGP, are you familiar with the teachings of Jesus?
Yes, I was raised Christian and I've read the Gospels. I don't think they provide an objective standard of morality, just the Jewish Pharisaic tradition filtered through a Hellenistic lens.

Matters of preference are entirely subjective, but for any evolved agent they are far from arbitrary, and subject to increasing agreement to the extent that they reflect increasingly fundamental values in common.
That is relevant to what ethics people may favor, but not to any truth or objective standard. Agreement among people is the result of subjective judgment.

comment by Paul_Gowder · 2008-01-25T15:55:00.000Z · LW(p) · GW(p)

TGGP -- how about internal consistency? How about formal requirements, if we believe that moral claims should have a certain form by virtue of their being moral claims? Those two have the potential to knock out a lot of candidates...

comment by Unknown3 · 2008-01-25T19:40:00.000Z · LW(p) · GW(p)

Ben and Mitchell: the problem is that "meaningless inconvenience" and "agony" do not seem to have a common boundary. But this is only because there could be many transitional stages such as "fairly inconvenient" and "seriously inconvenient," and so on. But sooner or later, you must come to stages which have a common boundary. Then the problem I mentioned will arise: in order to maintain your position, you will be forced to maintain that pain of a certain degree, suffered by any number of people and for any length of time, is preferable to a very slightly greater pain suffered by a single person for a very short time. This may not be logically incoherent but at least it is not very reasonable.

I say "a very slightly greater pain" because it is indeed evident that we experience pain as something like a continuum, where it is always possible for it to slowly increase or decrease. Even though it is possible for it to increase or decrease by a large amount suddenly, there is no necessity for this to happen.

comment by tcpkac · 2008-01-25T20:34:00.000Z · LW(p) · GW(p)

Great New Theorem in color perception : adding together 10 peoples' perceptions of light pink is equivalent to one person's perception of dark red. This is demonstrable, as there is a continuous scale between pink and red.

comment by Gary · 2008-01-25T21:30:00.000Z · LW(p) · GW(p)

Enough with the abstract. It's difficult to make a valid equation since dust = x, torture = y, and x != y. So why don't you just replace dust in the equation with torture? Like a really small amount of torture, but still torture. Maybe, say, everybody gets a nipple pierced unwillingly.

comment by Paul_Gowder · 2008-01-26T02:14:00.000Z · LW(p) · GW(p)

Tcpkac: wonderful intuition pump.

Gary: interesting -- my sense of the nipple piercing case is that yes, there's a number of unwilling nipple piercings that does add up to 50 years of torture. It might be a number larger than the earth can support, but it exists. I wonder why my intuition is different there. Is yours?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-26T03:49:00.000Z · LW(p) · GW(p)

Paul, is there a number of dust specks that add up to stubbing your toe - not smashing it or anything, but stubbing it painfully enough that you very definitely notice, and it throbs for a few seconds before fading?

comment by Caledonian2 · 2008-01-26T03:57:00.000Z · LW(p) · GW(p)

The fact that you felt it desirable to ask that question means that the metaphor has failed. The message you attempted to send has been overwhelmed by its own noise.

comment by Salutator · 2008-01-26T04:23:00.000Z · LW(p) · GW(p)

1. In this whole series of posts you are silently presupposing that utilitarianism is the only rational system of ethics. Which is strange, because if people have different utility functions, Arrow's impossibility theorem makes it impossible to arrive at a "rational" (in this blog's Bayesian-consistent abuse of the term) aggregate utility function. So irrationality is not only rational but the only rational option. Funny what people will sell as overcoming bias.

2. In this particular case the introductory example fails, because 1 killing != -1 saving. Removing a drowning man from the pool is obviously better than merely abstaining from drowning another man in the pool.

3. The feeling of superiority over all those biased proles is a bias. In fact it is very obviously among your main biases and consequently one you should spend a disproportionate amount of resources on overcoming.

Replies from: thomblake
comment by thomblake · 2012-05-01T18:29:14.310Z · LW(p) · GW(p)

Removing a drowning man from the pool is obviously better than merely abstaining from drowning another man in the pool.

I don't think it's obvious. Thought experiment: Steve is killing Ann by drowning, and Beth is about to drown by accident nearby. I have a cell phone connection open to Steve, and I have time to either convince Steve to stop drowning Ann, or to convince Steve to save Beth but still drown Ann. It is not obvious to me that I should choose the latter option.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-01T19:02:33.746Z · LW(p) · GW(p)

Do you mean to assert that choosing the latter option in your scenario and the former option in Salutator's scenario is inconsistent?

If so, you might want to unpack your thinking a little more, as I don't follow it. What you've described in your thought experiment isn't a choice between rescuing a drowning person and abstaining from drowning a person, and the difference seems potentially important.

Replies from: thomblake
comment by thomblake · 2012-05-01T19:04:59.530Z · LW(p) · GW(p)

The options I'm choosing between are Steve rescuing a drowning person and Steve abstaining from drowning a person. If one of those options is obviously better than the other, then the same relationship should hold when I can choose Steve's actions rather than my own.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-01T19:50:36.929Z · LW(p) · GW(p)

Ah!
Either I can convince Steve to stop drowning Ann, or [convince Steve to] save Beth.
I get it now.
I had read it as either I can convince Steve to stop drowning Ann, or [I can] save Beth.
Thanks for the clarification... I'd been genuinely confused.

Replies from: thomblake
comment by thomblake · 2012-05-01T21:00:39.630Z · LW(p) · GW(p)

I've edited it to hopefully make it unambiguous - I hope no one reads that as Steve convincing himself.

comment by Caledonian2 · 2008-01-26T05:11:00.000Z · LW(p) · GW(p)

What leads you to believe he expends any of his resources overcoming his biases? Past behavior suggests he's repeating his errors over and over again without correction.

comment by Unknown3 · 2008-01-26T05:51:00.000Z · LW(p) · GW(p)

The fact that Eliezer has changed his mind several times on Overcoming Bias is evidence that he expends some resources overcoming bias; if he didn't, we would expect exactly what you say. It is true that he hasn't changed his mind often, so this fact (at least by itself) is not evidence that he expends many resources in this way.

comment by Paul_Gowder · 2008-01-26T05:58:00.000Z · LW(p) · GW(p)

Eliezer -- no, I don't think there is. At least, not if the dust specks are distributed over multiple people. Maybe localized in one person -- a dust speck every tenth of a second for a sufficiently long period of time might add up to a toe stub.

comment by Caledonian2 · 2008-01-26T06:08:00.000Z · LW(p) · GW(p)
The fact that Eliezer has changed his mind several times on Overcoming Bias is evidence that he expends some resources overcoming bias

No, he could simply be biased towards maintaining a high status by accepting the dominant opinions in his social groups. If looking bad in others' eyes is something you wish to avoid, you'll reject arguments that others reject whether you think they're right or not.

comment by Unknown3 · 2008-01-26T06:10:00.000Z · LW(p) · GW(p)

Eliezer's question for Paul is not particularly subtle, so I presume he won't mind if I give away where it is leading. If Paul says yes, there is some number of dust specks which add up to a toe stubbing, then Eliezer can ask if there is some number of toe stubbings that add up to a nipple piercing. If he says yes to this, he will ultimately have to admit that there is some number of dust specks which add up to 50 years of torture.

Rather than actually going down this road, however, perhaps it would be as well if those who wish to say that the dust specks are always preferable to the torture considered the following facts:

1) Some people have a very good imagination. I could personally think of at least 100 gradations between a dust speck and a toe stubbing, 100 more between the toe stubbing and the nipple piercing, and as many as you like between the nipple piercing and the 50 years of torture.

2) Arguing about where to say no, the lesser pain can never add up to the slightly greater pain, would look a lot like creationists arguing about which transitional fossils are merely ape-like humans, and which are merely human-like apes. There is a point in the transitional fossils where the fossil is so intermediate that 50% of the creationists say that it is human, and 50% that it is an ape. Likewise, there will be a point where 50% of the Speckists say that dust specks can add up to this intermediate pain, but the intermediate pain can't add up to torture, and the other 50% will say that the intermediate pain can add up to torture, but the specks can't add up the intermediate pain. Do you really want to go down this path?

3) Is your intuition about the specks being preferable to the torture really greater than the intuition you violate by positing such an absolute division? Suppose we go down the path mentioned above, and at some point you say that specks can add up to pain X, but not to pain X+.00001 (a representation of the minute degree of greater pain in the next step if we choose a fine enough division). Do you really want to say that you prefer that a trillion persons (or a googol, or a googolplex, etc.) suffer pain X than that one person suffer pain X+.00001?

While writing this, Paul just answered no, the specks never add up to a toe stub. This actually suggests that he rounds down the speck to nothing; you don't even notice it. Remember however that originally Eliezer posited that you feel the irritation for a fraction of a second. So there is some pain there. However, Paul's answer to this question is simply a step down the path laid out above. I would like to see his answer to the above. Remember the (minimally) 100 gradations between the dust speck and the toe stub.

Replies from: RST
comment by RST · 2017-12-15T21:34:59.441Z · LW(p) · GW(p)

But consider this: the last exemplars of each species of hominids could reproduce with the first exemplars of the following one.

However, we probably wouldn’t be able to reproduce with Homo habilis.

This shows that small differences sum as the distance between the examined subjects increases, until we can clearly see that the two subjects are not part of the same category anymore.

The pains that are similar in intensity are still comparable. But there is too much difference between dust specks in the eye/stubbed toe and torture to consider them as part of the same category.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-26T06:19:00.000Z · LW(p) · GW(p)

Ah, but Unknown, knowing it can be done in the abstract isn't the same as seeing it done.

So, Paul: Do enough dust specks - which, by hypothesis, do produce a brief moment of irritation, enough to notice for a moment, so at least one pain neuron is firing - add up to experiencing a brief itch on your arm that you have to scratch? Also, do enough cases of feeling your foot come down momentarily on a hard pebble add up to a toe stub that hurts for a few seconds?

How about listening to loud music blasting from a car outside your window - would enough instances of that ever add up to one case of being forced to watch "Plan Nine from Outer Space" twice in a row?

And can you swear on a stack of copies of "The Origin of Species" that you would have given the same answers to all those questions, if I'd asked you before you'd ever heard of the Dust Specks Dilemma?

comment by Unknown3 · 2008-01-26T06:19:00.000Z · LW(p) · GW(p)

Caledonian, offering an alternative explanation for the evidence does not imply that it is not evidence that Eliezer expends some resources overcoming bias: it simply shows that the evidence is not conclusive. In fact, evidence usually can be explained in several different ways.

comment by Paul_Gowder · 2008-01-26T07:24:00.000Z · LW(p) · GW(p)

Eliezer -- depends again on whether we're aggregating across individuals or within one individual. From a utilitarian perspective (see The Post That Is To Come for a non-utilitarian take), that's my big objection to the specks thing. Slapping each of 100 people once each is not the same as slapping one person 100 times. The first is a series of slaps. The second is a beating.

Honestly, I'm not sure if I'd have given the same answer to all of those questions w/o having heard of the dust specks dilemma. I feel like that world is a little too weird -- the thing that motivates me to think about those questions is the dust specks dilemma. They're not the sort of things practical reason ordinarily has to worry about, or that we can ordinarily expect to have well-developed intuitions about!

comment by Brian_Hollar · 2008-01-26T07:34:00.000Z · LW(p) · GW(p)

I don't think this is too difficult to understand. In both situations, the deciders don't want to think of themselves as possibly responsible for avoidable death. In the first scenario, you don't want to be the guy who made a gamble and everyone dies. In the second, you don't want to choose for 100 people to die. People make different choices in the two situations because they want to minimize moral culpability.

Is that rational? Strictly speaking, maybe not. Is it human? Absolutely!

Replies from: phob
comment by phob · 2011-01-04T20:11:38.297Z · LW(p) · GW(p)

Rational yes, if other people know of the decision. If you never find out the result of the gamble, are not held responsible and have your memory wiped, then all confounding interests are wiped except the desire for people not to die. Only then are the irrational options actually irrational.

comment by tcpkac · 2008-01-26T10:41:00.000Z · LW(p) · GW(p)

An AGI project would presumably need a generally accepted, watertight, axiom-based, formal system of ethics, whose rules can reliably be applied right up to limit cases. I am guessing that that is the reason why Eliezer et al are arguing from the basis that such an animal exists.
If it does, please point to it. The FHI has ethics specialists on its staff; what do they have to say on the subject?
Based on the current discussion, such an animal, at least as far as 'generally accepted' goes, does not exist. My belief is that what we have are more or less consensual guidelines which apply to situations and choices within human experience. Unknown's examples, for instance, tend to be 'middle of the range' ones. When we get towards the limits of everyday experience, these guidelines break down.
Eliezer has not provided us with a formal framework within which summing over single experiences for multiple people can be compared to summing over multiple experiences for one person. For me it stops there.

comment by Ben_Jones · 2008-01-26T12:01:00.000Z · LW(p) · GW(p)

tcpkac, if a hundred people experiencing 49 years of torture is worse than one person experiencing 50 years, then yes, you can and must compare. Whether you extend this right down to 3^^^3 dust specks is another matter. There might not be a formal framework for something so subjective, but there are obvious inconsistencies with flat refusal to sum, even with something as abstract as 'pain'. Would an AGI need a formula for summing discomfort over multiple people/one person? Who gets to write that? Yeesh.

a brief moment of irritation, enough to notice for a moment, so at least one pain neuron is firing

I'm not a neurobiologist - is this how it works? Are there neurons whose specific job it is to deliver 'pain' messages? If we're reducing down to this level, can we actually measure pain? More importantly, even if we can, can we go on to assume that there is an uninterrupted progression in the neurological mechanism, all the way up this scale, from dust mote to torture? For me, it's not clear whether blinking away a dust mote falls under 'pain' or 'sensation'. Nipple piercing is much clearer, and hence I have no problem saying 3^^^3 Piercings > Torture.

comment by Unknown3 · 2008-01-26T14:35:00.000Z · LW(p) · GW(p)

Paul : "Slapping each of 100 people once each is not the same as slapping one person 100 times."

This is absolutely true. But no one has said that these two things are equal. The point is that it is possible to assign each case a value, and these values are comparable: either you prefer to slap each of 100 people once, or you prefer to slap one person 100 times. And once you begin assigning preferences, in the end you must admit that the dust specks, distributed over multiple people, are preferable to the torture in one individual. Your only alternatives to this will be to contradict your own preferences, or to admit to some absurd preference such as "I would rather torture a million people for 49 years than one person for 50."

comment by Unknown3 · 2008-01-26T14:38:00.000Z · LW(p) · GW(p)

Ben: as I said when I brought up the sand example, Eliezer used dust specks to illustrate the "least bad" bad thing that can happen. If you think that it is not even a bad thing, then of course the point will not apply. In this case you should simply move to the smallest thing which you consider to be actually bad.

comment by Caledonian2 · 2008-01-26T15:36:00.000Z · LW(p) · GW(p)
Caledonian, offering an alternative explanation for the evidence does not imply that it is not evidence that Eliezer expends some resources overcoming bias:

Of course it's not evidence for that scenario. There are alternative and simpler explanations.

If the data does not permit us to distinguish between A and ~A, it's not evidence for either state.

comment by Frank_Hirsch · 2008-01-27T00:17:00.000Z · LW(p) · GW(p)

I think the argument is misguided. Why? The choice is not only hypothetical but impossible. There is not the remotest possibility of a googolplex persons even existing.
So I'll tone it down to a more realistic "equation", then I'll argue that it's not an equation after all.
Then I'll admit that I'm lost, but so are you... =)
Let's assume 1e7 people experiencing pain of a certain intensity for one second vs. one person experiencing equal pain for 1e7 seconds (approx. 116 days).
Let's assume that every person in question has an expectancy of, say, 63 years of painless life. Then my situation is equivalent to either extending the painless life expectancy of 1e7 people from 63y-1s to 63y, or to extending it for one person from roughly 62.7y to 63y.
According to the law of diminishing returns, the former is definitely much less valuable than the latter.
But how much so? How to quantify this?
I have no idea, but I claim that neither do you... =)
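
One way to see why neither of us can quantify it: the comparison flips with the (unknown) shape of harm as a function of duration. A throwaway sketch, with three arbitrary made-up curves and nothing else assumed:

```python
# harm(T) for T seconds of pain, under three invented duration curves.
def linear(t):   return t
def convex(t):   return t ** 1.5   # pain compounds with duration
def concave(t):  return t ** 0.5   # pain saturates with duration

N = 10**7   # 1e7 people for one second each, vs. one person for 1e7 seconds
for harm in (linear, convex, concave):
    many_short = N * harm(1)
    one_long = harm(N)
    print(harm.__name__, many_short, one_long)
# linear: the two come out equal; convex: the one long pain is ~3000x worse;
# concave: the many short pains are ~3000x worse in aggregate.
```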

regards, frank

p.s.
I have a hunch that you couldn't fit enough people with specks in their eyes into the universe to make up for one 50-year-torture.

comment by Paul_Gowder · 2008-01-27T03:46:00.000Z · LW(p) · GW(p)

Unknown: I didn't deny that they're comparable, at least in the brute sense of my being able to express a preference. But I did deny that any number of distributed dust specks can ever add up to torture. And the reason I give for that denial is just that distributive problem. (Well, there are other reasons too, but one thing at a time.)

comment by Jeff4 · 2008-01-28T21:51:00.000Z · LW(p) · GW(p)

I agree, you have to just multiply.

But your math is an attempt to abstract human harm to numbers, and the problem I have (and I suspect others intuitively have) is that the abstraction is wrong. You've failed to understand the lessons of the science of happiness: we forget small painful things quickly. The cost of a speck in the eye, let us imagine, is 1 unit of harm. But that's only true in the moment the speck hits the eye. In the hour the speck hits the eye, because of the human ability to ignore or forget small pains, the extended cost is 0. This is why a googol of specks is better than torture, because a googol 0s...

The real problem is defining the boundary between momentary (transient) harm and extendable harm. This is a psychological math problem.

Replies from: BLiZme2
comment by BLiZme2 · 2010-11-14T03:52:23.966Z · LW(p) · GW(p)

I think this is fundamentally my issue. Even when we disregard all differences between people, making all their personal qualities like pain tolerance identical, there are still distinct lines where a given amount of pain inherently has direct consequences that any smaller amount of pain cannot possibly have: so no dust-caused car crashes, but the scars of torture count. Meaning that yes, I think, there is some point where any amount of pain with no inherent long-term consequences, a dust speck, is better than a case with inherent long-term consequences, all else equal. Unfortunately, in real life this is not normally the case. For example, all of those millions of stubbed toes, or whatever, alter your behavior in small ways that are likely on average to be negative, leading to non-inherent losses, as when a stubbed toe causes you to miss the elevator and thus be late and thus lose a job. So the secondary problems can, over a large sample, lead to as much or more long-term harm across the population; but all of that is outside the scope of the problem as presented, which I think makes this a poor question for deciding real actions and a difficult problem to discuss clearly.

comment by MartinH · 2009-02-08T15:20:00.000Z · LW(p) · GW(p)

I'm following a link from Pharyngula, and I don't have time to read the comments; my apologies if I'm repeating something.

I think you're up against the sorites paradox: you are confusing apples and oranges in comparing torture to a dust speck, and there is no practical way to implement the computation you propose.

People who whine about the dust specks in their eyes will get an unsympathetic response from me - I care zero about it. People who have been tortured for a minute will have my utmost concern and sympathy. Somewhere along the line, one turns into the other, but in your example, a googolplex of zeroes is still zero.

Torture is qualitatively different from pain, say the pain of a debilitating disease. Torture involves the intentional infliction of suffering for the sake of the suffering, extreme loss of control, the absence of sympathy and empathy, extreme uncertainty about the future and so on. The mental impact of torture is qualitatively different from accidental pain.

Universal informed consent and shared risk would seem to my moral gut to be necessary preconditions to make me stomach this kind of utilitarian calculus.

So this large population that agrees that the occasional victim enhances the overall utility would share the risk of becoming the victim. In that scenario, how many people would accept the lifetime torture lottery ticket in exchange for a lifetime free of dust motes? Without knowing the answer to this question, they can't estimate their own risk.

comment by Doug_S. · 2009-02-08T15:55:00.000Z · LW(p) · GW(p)

MartinH:

See the follow-up here.

(If a dust speck is zero, you could substitute "stubbed toe".)

Incidentally, my own answer to the torture vs. dust specks question was to bite the other bullet and say that, given any two different intensities of suffering, there is a sufficiently long finite duration of the greater intensity such that I'd pick a Nearly Infinite duration of the lesser degree over it. In other words, yeah, I'd condemn a Nearly Infinite number of people to 50 years of slightly less bad torture to spare a large enough finite group from 50 years of slightly worse torture.

In real life, I consider myself lucky that questions like that one are only hypothetical.

comment by Cool_thought. · 2009-03-02T22:47:00.000Z · LW(p) · GW(p)


I enjoyed the read, and it makes a lot of sense. However... it leaves me with a...
Hrm.
Well, I'm no mathematician, but between being a monkey and never multiplying and always feeling, and being a robot, always multiplying and never feeling, I think I'll stick with being human and do both.

comment by spamham · 2010-01-03T10:03:07.050Z · LW(p) · GW(p)

With all due respect, this post reminds me of why I find the expectation-calculation kind of rationality dangerous.

IMO examples such as the first, with known probabilities and a straightforward way to calculate utility, are a total red herring.

In more realistic examples, you'll have to make many judgment calls, such as the choice of model and your best estimates of the basic probabilities and utilities, which will ultimately be grounded on the fuzzy, biased intuitive level.

I think you might reply that this isn't a specific fault with your approach, and that everyone has to start with some axioms somewhere. Granted.

Now the problem, as I see it, is that picking these axioms (including quantitative estimates) once and for all, and then proceeding deductively, will magnify the effect of any initial choices. (Silly metaphor: a bit like getting from one point to another by calculating the angle and then walking in a straight line, instead of making corrections as you go. But, quitting the metaphor, I'm not just talking about divergence over time, but also about divergence along the deduction.)

So now you have a conclusion which is still based on the fuzzy and intuitive, but which has an air of mathematical exactness... If the model is complex enough, you can probably reach any desired conclusion by inconspicuous parameter twiddling.

My argument is far from "Omg it's so coldhearted to mix math and moral decisions!". I think math is an important tool in the analysis (incidentally, I'm a math student ;)), but that you should know its limitations and hidden assumptions in applying math to the real world.

I would consider an act of (intuitively wrongful) violence based on a 500-page utility expectation calculation no better than one based on elaborate logic grounded in scripture or ideology.

I think that, after being informed by rationality about all the value-neutral facts, intuition, as fallible as it is, should be the final arbiter.

I think these sacred (no religion implied) values you mention, and especially kindness, do serve an important purpose, namely as a safeguard against the subtly flawed logic I've been talking about.

Replies from: orthonormal, Vladimir_Nesov
comment by orthonormal · 2010-01-03T19:16:38.107Z · LW(p) · GW(p)

I understand your point, and I believe Eliezer isn't as naive as you might think. Compare Ethical Injunctions (which starts out looking irrelevant to this, but comes around by the end)...

comment by Vladimir_Nesov · 2010-01-03T21:31:15.633Z · LW(p) · GW(p)

To add to orthonormal's link (see also the other posts around these two in the list of all posts):

comment by simplicio · 2010-03-21T16:58:41.700Z · LW(p) · GW(p)

You know, I have always said that the trolley problem in which you push the fat man onto the tracks to save several lives is immoral, because you are treating him as a means rather than an end.

I've just noticed that if I reframe it in such a way that it's not so personal, my intuition changes. For example, suppose it were a question of pushing a trolley containing one (unknown) person in front of another (empty) runaway trolley, in order to stop the runaway one from hitting a third trolley containing 4 people. Suddenly I'm actively killing the guy.

Replies from: mwaser
comment by mwaser · 2010-10-27T21:12:47.814Z · LW(p) · GW(p)

Pushing a fat man onto the tracks to save several lives is generally considered to be immoral because you are USING a person to achieve some goal.

In your case, you are only using the trolley containing the man to stop the death of four people. You are NOT using the man because the trolley would work regardless of whether or not he is present. Thus, it is mere misfortune that he is present and killed -- exactly as if he were on a siding where you diverted a train to save ten people.

comment by tenshiko · 2010-12-02T03:22:29.610Z · LW(p) · GW(p)

The "shut up and multiply" article on the wiki (http://wiki.lesswrong.com/wiki/Shut_up_and_multiply - markup troubles...), taken in conjunction with the following out-of-context paragraph, strongly implies to readers of the wiki that this post is about the moral imperative to reproduce:

You know what? This isn't about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn't even a feather in the scales, when a life is at stake. Just shut up and multiply.

...is this intentional or unintentional subtext, and if the latter, do you intend to revise the word to "calculate", as some people quoting the post have done, or not bother, since apparently nobody but me noticed in the first place?

Replies from: Document
comment by Document · 2010-12-02T06:47:37.967Z · LW(p) · GW(p)

Possibly related:

"Shut up and calculate" apparently has an existing meaning related to quantum mechanics.

Replies from: wedrifid, tenshiko
comment by wedrifid · 2010-12-02T07:37:05.694Z · LW(p) · GW(p)

"I've wondered in the past if perhaps the best thing LW members could do if the singularity is more than 80 (4 generations) years away was simply to breed like Amish..."

An Amish with a cryonics facility. You know, that's unheard of!

comment by tenshiko · 2010-12-02T12:09:54.217Z · LW(p) · GW(p)

Huh. Dangit, I had ctrl+F'd for "children" and "reproduction" and "pregnancy" in this post and foolishly assumed that was conclusive evidence.

comment by TimFreeman · 2011-04-27T15:56:36.019Z · LW(p) · GW(p)

If that population of 400 or 500 people is all that's left of Homo Sapiens, then it's obvious to me that keeping 400 with probability 1 is better than keeping 500 with probability 0.9. Repopulating starting with 400 doesn't seem much harder than repopulating starting with 500, but repopulating starting with 0 is obviously impossible.

If we change the numbers a little bit, we get a different and still interesting example. I don't see an important difference between having 10 billion happy people and having 100 billion happy people. I can't visualize either number, and I have no reason to believe that having another 90 billion after the first 10 billion gives anything I care about.

Multiplying individual utility by number of people to get total utility is a mistake, IMO. I don't know what the correct solution is, but that's not it.

comment by Hul-Gil · 2011-05-16T06:10:05.076Z · LW(p) · GW(p)

What's worse, stealing one cent each from 5,000,000 people, or stealing $49,999 from one person? (Let us further assume that money has some utility value.)

If we decide we can just add the diminished wealth together, the former is clearly a dollar worse: $50,000 is stolen, as opposed to $49,999. But this doesn't take into account that loss of utility grows with each cent lost from the same person. Losing one cent won't bother me at all; everyone else who had a cent stolen would probably feel the same way. However, taking $49,999 from one person is enough to ruin their life: numerically, less was stolen overall, but the utility loss grows incredibly as it is concentrated.
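To see the concentration effect numerically, here is a minimal sketch assuming, purely for illustration, logarithmic utility over wealth and a hypothetical starting wealth of $50,000 per person:

    import math

    # Toy model: logarithmic utility over wealth (an assumption for this sketch,
    # not anyone's actual utility function), with everyone starting at $50,000.
    def utility(wealth):
        return math.log(wealth)

    start = 50_000.0

    # Case 1: one cent stolen from each of 5,000,000 people.
    loss_per_person = utility(start) - utility(start - 0.01)
    spread_loss = 5_000_000 * loss_per_person

    # Case 2: $49,999 stolen from one person, leaving them with $1.
    concentrated_loss = utility(start) - utility(start - 49_999)

    print(spread_loss)        # ~1.0 (tiny per person, modest even in total)
    print(concentrated_loss)  # ~10.8 (ruinous for that one person)

Under this crude model the spread-out theft costs about a tenth of the utility of the concentrated one, even though slightly more money is taken in total.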

Another case: is an eye-mote a second for a year (31,556,926 motes) in one person better than 31,556,927 motes spread out evenly among 31,556,927 people? The former case would involve serious loss of utilons, whereas a single mote is quickly fixed and forgotten: qualitatively different from constant irritation. The loss of utilons from dust motes can thus be concentrated and added, but not spread out and then added. (I think this may indicate that time and memory play a factor in this, since they do in the mechanism of suffering.)

In other words, a negligible amount of utility loss cannot be multiplied so that it is preferable to a concentrated, non-negligible utility loss. If none of the people involved in the negligible group suffer individually, they obviously can't be suffering as a group, either (what would be doing the suffering - a group is not an entity!).

However, I have read refutations of this that say "well just replace dust specks with something that does cause suffering." I have no problem with that; there may be "non-trivial" pain and "non-trivial" pleasure that can be added. So in the stubbed-toes example, it might be non-trivial, since it is concentrated enough to matter to the individual and cause suffering; and suffering is additive.

Perhaps there is such a line innately built into human biology, between "trivial" and "non-trivial". Eye-motes can't ever really degrade the quality of our lives, so they cannot be used in examples of this kind. But in the case of one person being tortured slightly worse versus ten people being tortured slightly less, the non-trivial suffering of the ten can be considered additive. This also solves that problem.

Replies from: Alicorn
comment by Alicorn · 2011-05-16T06:11:54.807Z · LW(p) · GW(p)

loss of utility per cent grows exponentially with each cent lost.

On this end of the scale, it grows (I'm not sure if it's exponential), but it doesn't grow indefinitely; eventually it starts falling.

Replies from: ata, Hul-Gil
comment by ata · 2011-05-16T06:50:59.809Z · LW(p) · GW(p)

What function is that? I thought human utility over money was roughly logarithmic, in which case loss of utility per cent lost would grow until (theoretically) hitting an asymptote. (Also, why would it make sense for it to eventually start falling?)

Replies from: Alicorn, Peter_de_Blanc
comment by Alicorn · 2011-05-16T07:01:39.927Z · LW(p) · GW(p)

I have no idea what function it is. I also don't really have a working understanding of what "logarithmic" is. It starts falling because when you're dealing in the thousands of dollars, the next dollar matters less than it did when you were dealing in the tens of dollars.

Replies from: ata
comment by ata · 2011-05-16T07:15:10.223Z · LW(p) · GW(p)

Oh, okay, I think we're talking about the same function in different terms. You're talking in terms of the utility function itself, and I was talking about how much the growth rate falls as the amount of money decreases from some positive starting point, since that's what Hul-Gil seemed to be talking about. (I think that would be hyperbolic rather than exponential, though.)

The utility function itself does grow indefinitely; just really slowly at some point. And at no point is its own growth speeding up rather than slowing down.
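A quick numeric sketch of that shape, assuming the roughly-logarithmic model under discussion (an illustration, not a claim about anyone's actual preferences):

    import math

    # Toy logarithmic utility over wealth (the assumption under discussion).
    def utility(dollars):
        return math.log(dollars)

    # The function keeps growing with wealth, but ever more slowly...
    for w in (1e2, 1e4, 1e6, 1e8):
        print(f"gain from an extra $100 at ${w:,.0f}: {utility(w + 100) - utility(w):.6f}")

    # ...while the loss per cent grows as wealth shrinks toward zero.
    for w in (1e4, 1e2, 1e0):
        print(f"loss per cent at ${w:,.0f}: {utility(w) - utility(w - 0.01):.8f}")

Near zero the loss per cent blows up without bound, which is presumably the asymptote mentioned elsewhere in this exchange.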

comment by Peter_de_Blanc · 2011-05-16T08:10:31.851Z · LW(p) · GW(p)

I thought human utility over money was roughly logarithmic, in which case loss of utility per cent lost would grow until (theoretically) hitting an asymptote.

So you're saying that being broke is infinite disutility. How seriously have you thought about the realism of this model?

Replies from: ata
comment by ata · 2011-05-16T08:51:14.603Z · LW(p) · GW(p)

Obviously I didn't mean that being broke (or anything) is infinite disutility. Am I mistaken that the utility of money is otherwise modeled as logarithmic generally?

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2011-05-16T10:25:21.954Z · LW(p) · GW(p)

Obviously I didn't mean that being broke (or anything) is infinite disutility.

Then what asymptote were you referring to?

Replies from: ata
comment by ata · 2011-05-17T05:33:19.456Z · LW(p) · GW(p)

It was in response to the "indefinitely" in the parent comment, but I think I was just thinking of the function and not about how to apply it to humans. So actually your original response was pretty much exactly correct.
It was a silly thing to say.

I wonder if it's correct, then, that the marginal disutility (according to whatever preferences are revealed by how people actually act) of the loss of another dollar actually does eventually start decreasing when a person is in enough debt. That seems humanly plausible.

comment by Hul-Gil · 2011-05-16T06:51:28.957Z · LW(p) · GW(p)

A good point. I've edited to rephrase.

comment by fool · 2011-06-26T01:58:15.425Z · LW(p) · GW(p)

I, like others, can do the maths just fine, so what? How does it follow that circular preferences over very long chains of remotely possible pairs of choices should cause me to doubt strong moral intuition? Because I would, under carefully contrived conditions, lose against the allegedly optimal solution... As for grandstanding, hah! To presume to call this brand of consequentialism "rationality" is already quite rhetorical. Never mind warm fuzzies, bare swords, flames, and chimpanzees.

comment by Multiheaded · 2011-07-02T14:41:42.273Z · LW(p) · GW(p)

As already stated somewhere above, with or without "sacred" values humans invariably believe in thresholds where the expected negative utility jumps exponentially. If I believe that lengthy torture is a vastly different event (for the individual in question, and we clearly aren't considering any ripples) from a quickly and cleanly amputated limb, I'll still adjust my preferences accordingly. I'm only acting on my beliefs about human consciousness. That's not irrational in itself. Therefore... sorry, tried two intuitive therefores but neither one checks out. I'll get back on it.

comment by fool · 2011-07-11T00:24:20.296Z · LW(p) · GW(p)

Now that I have read (and commented on) the "Savage axiom" (http://lesswrong.com/lw/5te/a_summary_of_savages_foundations_for_probability/) thread, I would like to note here on this thread that there are no computable solutions to the Savage axioms.

Now, of course, in this universe, googolplexes are utterly irrelevant. If an AI could harness every Planck volume of space in the observable universe to each perform one computation per Planck time, all the stars would burn out long before it got anywhere close to 2^1024 computations, which is a long way off from a googolplex. So it seems to me "circular altruism" on this level is of absolutely no consequence.
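A rough back-of-the-envelope check of that bound, with order-of-magnitude physical constants assumed purely for illustration:

    import math

    # Very rough, assumed order-of-magnitude inputs (illustration only).
    observable_universe_volume_m3 = 3.6e80   # sphere of radius ~4.4e26 m
    planck_volume_m3 = 4.2e-105              # (1.6e-35 m)**3
    planck_time_s = 5.4e-44
    stelliferous_era_s = 3e21                # ~10^14 years until the stars burn out

    planck_volumes = observable_universe_volume_m3 / planck_volume_m3  # ~10^185
    planck_times = stelliferous_era_s / planck_time_s                  # ~10^65

    total_computations = planck_volumes * planck_times
    print(f"~10^{math.log10(total_computations):.0f} computations")    # ~10^250
    print(f"2^1024 is ~10^{1024 * math.log10(2):.0f}")                 # ~10^308
    # Both are vanishingly small next to a googolplex, which is 10**(10**100).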

Of course we might still cling to the "thought experiment" aspect of it. I don't see why we should, but even if we do, it doesn't help: ideal rationality, in the Savage sense, isn't even computable. No AI, even with unlimited time and space to make up its mind, can be rational, in the sense of always choosing the course that maximises some utility function with respect to some subjective probability distribution in all situations. So something still has to give. Of course there are lots of ways to do this. We can be "rational within epsilon" if you like, but this epsilon will matter when considering these googolplex circularity arguments. I'm skeptical that there is anything coherent here at all.

comment by hairyfigment · 2011-08-02T19:17:07.073Z · LW(p) · GW(p)

A googolplex is ten to the googolth power. That's a googol/100 factors of a googol. So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort, and multiplying by a factor of a googol each time, until we choose between a googolplex people getting a dust speck in their eye, and a googolplex/googol people getting two dust specks in their eye.
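To unpack the quoted notation with a quick exponent check (comparing exponents of 10, since the numbers themselves are far too large to write out):

    # googol = 10**100, googolplex = 10**(10**100).  The quoted claim is that
    # googol ** (googol / 100) equals a googolplex; compare exponents of 10.
    googol = 10 ** 100

    googolplex_exponent = googol               # log10(googolplex)
    factored_exponent = (googol // 100) * 100  # log10(googol ** (googol/100))

    assert factored_exponent == googolplex_exponent
    print("A googolplex is indeed googol/100 factors of a googol.")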

Maybe the strange notation has me confused, but I don't see the contradiction here. Consider sneezes. One human sneezing N times in a row, where N>2, seems at least super-exponentially worse than N humans each sneezing once (assuming that no noticeable consequences of any of this last beyond the day). In fact, if we all sneezed simultaneously that would be pretty cool.

This next part doesn't directly address the original question. But if 3^^^3 humans know that by getting a dust speck in their eye they helped save someone from torture, the vast majority would likely feel happy about this and we wind up with a mountain of increased utility from Dust Specks relative to No Pain. Whereas an average-human torture victim who learns that the torture served to prevent dust specks might try to kill you bare-handed.

Replies from: MatthewBaker
comment by MatthewBaker · 2011-08-02T19:58:51.385Z · LW(p) · GW(p)

They would all die of dust specks due to 3^^^3! Or something.

comment by fool · 2011-08-03T10:02:51.413Z · LW(p) · GW(p)

How about:

  1. floor(3^(3^(3^(3 + sin(3^^^3))))) people are tortured for a day.

  2. floor(3^(3^(3^(3 + cos(3^^^3))))) people are tortured for a day.

Choose.

Well, why are certain notations for large integers to be taken seriously but not others? Shut up and do trig!

(Mainly though I want to claim dibs to the name "a googolplex simultaneous sneezes")

Replies from: None
comment by [deleted] · 2012-12-24T20:29:34.239Z · LW(p) · GW(p)

I choose 2. Justification: 3^^^3 is an odd multiple of 3, which is pretty close to being an odd multiple of π. Sin(π)=0 while cos(π)=-1. cos(3^^^3) is smaller in the latter case, which necessarily leads to a smaller overall number (by a significant amount).

comment by TuviaDulin · 2011-09-23T21:22:37.497Z · LW(p) · GW(p)

50 years of torture is going to ruin someone's life. Dust specks and stubbed toes are not going to ruin anyone's life, which makes the number of people with dust specks and stubbed toes irrelevant. That's the threshold. You can't multiply one to get to the other.

The amount humanity loses to a dust speck in someone's eye is exactly 0, unless that person was piloting an aircraft or something and crashes because of it, which - based on my reading of the premise - doesn't seem to be the case. A stubbed toe might cost more, but even that is only true if you treat humanity as an amorphous, hive-minded mass rather than a group of individuals.

Replies from: Desrtopa, ArisKatsaris
comment by Desrtopa · 2011-09-23T21:36:34.658Z · LW(p) · GW(p)

A stubbed toe can put someone in a bad mood which affects their behavior until it feels better, and that can put a damper on their whole day.

I intuitively see 3^^^^3 stubbed toes as worse than 50 years of torture, but not 3^^^^3 dust specks, but this is a scenario where I feel I should at the very least be highly suspicious of my intuition.

Replies from: TuviaDulin
comment by TuviaDulin · 2011-09-23T22:13:16.731Z · LW(p) · GW(p)

Ruining someone's day is still on the other side of the threshold from ruining someone's life. Now, if all the stubbed toes were going to happen to the same person, that would be different.

I guess I could say that the line is between being hurt, and being destroyed. The point at which I would start to look at the man being tortured as a preferable option is when the pain being suffered by the googolplex others would be bad enough to cause serious financial, social, or physical damage to each of them as individuals. That's the line.

Replies from: Desrtopa
comment by Desrtopa · 2011-09-24T02:39:55.110Z · LW(p) · GW(p)

If you give a significant fraction of 3^^^^3 people a bad day, it adds up to more time's worth of unhappy experience than a 50-year life that is miserable 100% of the time, more times over than we can possibly imagine.

A single second each in the lives of every person on Earth adds up to over 200 years of cumulative life experience. That's not even a drop in the ocean of 3^^^^3 people.
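For the arithmetic, assuming (for illustration) a world population of roughly seven billion:

    # One second from every person on Earth, expressed as cumulative lifetime.
    # World population of ~7 billion assumed purely for illustration.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365
    population = 7_000_000_000

    cumulative_years = population / SECONDS_PER_YEAR
    print(f"about {cumulative_years:.0f} cumulative years")   # about 222 years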

Of course, if you give single bad days to even an infinitesimally tiny fraction of that many people - say, a trillion - that's probably going to lead to at least a few million ruined relationships and lost jobs. 3^^^^3 stubbed toes would lead to more ruined lives than the number of people who've actually ever lived. But even if you assume no spillover effects, it's still a greater mass of cumulative negative experience than has occurred in the entirety of human history.

comment by ArisKatsaris · 2011-09-23T21:57:55.956Z · LW(p) · GW(p)

50 years of torture is going to ruin someone's life.

And a dust speck is going to ruin someone's fraction of a second. How many fractions of a second do a life make?

Replies from: TuviaDulin
comment by TuviaDulin · 2011-09-23T22:17:01.440Z · LW(p) · GW(p)

You're making it sound like all humans share a single consciousness and pool their life experiences. Every human has a different life, a different consciousness. Totalling the value of a second from any number of humans can never equal the value of a human lifetime, because you won't have caused any serious problems for any person.

Replies from: PhilGoetz, ArisKatsaris
comment by PhilGoetz · 2011-09-23T22:20:49.236Z · LW(p) · GW(p)

Define "serious". Specify one harm X that is just barely not serious, and one Y that is just a little worse, and is serious. Verify that you can find an N such that YN > 1 human life, and that there is no N such that XN > 1 human life.

Replies from: TuviaDulin
comment by TuviaDulin · 2011-09-23T22:26:53.076Z · LW(p) · GW(p)

X = losing a finger. Y = losing a hand.

Losing a finger is traumatic and produces chronic disfigurement and loss of some manual dexterity, but (as long as it isn't a thumb or index finger) it isn't going to truly handicap someone. Losing a hand WILL truly handicap someone. I would rather everyone lose a finger than one person lose a hand.

Replies from: TuviaDulin, ArisKatsaris
comment by TuviaDulin · 2011-09-23T22:28:54.870Z · LW(p) · GW(p)

Actually, let's make it closer. X = losing a finger, Y = losing a thumb. My answer would still be the same. Missing a finger isn't a huge setback. Missing a thumb is.

comment by ArisKatsaris · 2011-09-23T23:30:22.233Z · LW(p) · GW(p)

I would rather everyone lose a finger than one person lose a hand.

I'm pretty sure that if an invading alien fleet came and demanded every human lose a single finger, there'd be more than enough people that'd be willing to sacrifice their very lives to prevent that tribute -- and though I'm not sure I'd be as brave as that, I'd most certainly be willing to sacrifice my hand in order to save a finger of each of 6 billion people.

Replies from: TuviaDulin
comment by TuviaDulin · 2011-09-23T23:38:16.701Z · LW(p) · GW(p)

People would sacrifice their lives for it. However, would that choice be rational? Especially if we consider the likelihood that a war with the aliens might result in massive civilian casualties? Fighting is only a good idea if winning puts you in a better position than you would otherwise be in.

Being willing to sacrifice your hand is noble, and I would probably do the same thing. But if you're talking about someone ELSE'S hand, you need to look at what losing a finger really costs in life experience and working ability versus losing a hand.

comment by ArisKatsaris · 2011-09-23T22:44:36.468Z · LW(p) · GW(p)

Totalling the value of a second from any number of humans can never equal the value of a human lifetime,

I don't see why not.

, because you won't have caused any serious problems for any person.

You'll have caused an infinitesimal problem to a truly humongous number of people.

Even before I had discovered LessWrong or met the dust-speck-vs-torture problem, I had publicly wondered if some computer virus-creators (especially the authors of those famous viruses that affected millions of people worldwide, hijacking email services, etc.) were even worse in the results of their actions than your average murderer or average rapist. They stole some minutes out of many million people's lives. Minutes stolen from how many million people become morally equivalent to murdering a person?

So the issue remains: if dust specks aren't enough for you, how about breaking a leg of each of 3^^^^3 people? This doesn't ruin their whole lives, but it may ruin a whole month for them. Does the equation seem different now (talking about a month instead of a millisecond), such that you would prefer to have a single person tortured for 50 years instead of 3^^^^3 people having a leg broken?

Replies from: TuviaDulin
comment by TuviaDulin · 2011-09-23T22:53:52.128Z · LW(p) · GW(p)

Well, if those 3^^^^3 people being crippled for a month is going to shut down the galactic economy, then torturing someone for fifty years is preferable. If, on the other hand, we're just taking the suffering of one person who broke his leg and then having 3^^^^3 other people endure the same thing in ISOLATION (say, each person with a broken leg lives in a different parallel universe, so no society has to give more than one month's worth of worker's compensation), then I would rather have everyone break their legs.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-09-23T23:24:19.342Z · LW(p) · GW(p)

I see. This is then no longer about not causing "serious problems" -- because a broken leg is a serious problem.

But how far does your argument extend? Let's increase the amount of individual harm: how about 3^^^^3 people tortured for 3 months, vs a single person being tortured for 50 years? Which would you rather?

How about 3^^^^3 people imprisoned unjustly for ten years, in rather bad but not torturous conditions, vs a single person being tortured for 50 years? Which would you rather?

- For the sake of this discussion, we indeed consider the cases individually (we can imagine each case happening in a parallel universe, as you suggest).

Replies from: TuviaDulin
comment by TuviaDulin · 2011-09-23T23:30:33.161Z · LW(p) · GW(p)

Three months of torture is enough to cause immense and long-lasting psychological scarring. Ten years out of your life is something that changes the course of a person's life. I would rather someone be tortured for fifty years than have either of the above happen to a large number of people.

I think your choice of broken leg is pretty much exactly at the threshold. I can't think of anything worse than that that wouldn't stand a good chance of ruining someone's life.

comment by Tasky · 2011-09-24T00:13:03.343Z · LW(p) · GW(p)

I don't know if I understood your circular argument right, but you are basically saying that if 50 years of torture for one person (50yt1) < a dust speck for a googolplex of people (ds10^10^100), then 50yt1 > 49.9999999yt10^100 > 49.9999998yt10^200 > ... > ds10^10^100.

If this is not what you are saying, then I don't understand your point and ask you to elucidate it. If it is, then I think there is a serious flaw here: in the 50yt1 scenario, someone is suffering, i.e. feeling pain; in the ds10^10^100 scenario, there is mere annoyance. There therefore has to be a point in that sequence where one can consistently argue X10^Y < X'10^2Y, where X is the last "pain" and X' is the first mere annoyance, thereby interrupting the chain.

I hope this is understandable.

EDIT to avoid double post: I think the kind of reasoning you are using is very, very dangerous if you try a gradual transformation between two things that differ in quality, not just quantity. It is clear that the two extremes of the sequence differ in quality, but you are assuming the only thing that changes is quantity.

comment by TuviaDulin · 2011-09-26T16:06:21.886Z · LW(p) · GW(p)

Here's another word problem for you.

There is a disease - painful, but not usually life threatening - that is rapidly becoming a pandemic. Medical science is not going to be able to cure the disease for the next several decades, which means that many millions of people will have to endure it, and a few dozen will probably die. You can find a cure for the disease, but to do so you'll have to perform agonizing, ultimately lethal, experiments on a young and healthy human subject.

Do you do it?

Replies from: Multipartite
comment by Multipartite · 2011-09-26T16:21:53.711Z · LW(p) · GW(p)

I note the answer to this seems particularly straightforward if the few dozen who would probably die would also have been young and healthy at the time. Even more convenient if the subject is a volunteer, and/or if the experimenter (possibly with a staff of non-sentient robot record-keepers and essay-compilers, rather than humans?) performed the experiments on himself/herself/themself(?).

(I personally have an extremely strong desire to survive eternally, but I understand there are (/have historically been) people who would willingly risk death or even die for certain in order to save others. Perhaps if sacrificing myself was the only way to save my sister, say, though that's a somewhat unfair situation to suggest as relevant. Again, tempting to just use a less-egocentric volunteer instead if available.) (Results-based reasoning, rather than idealistic/cautious action-based reasoning. Particularly given public backlash, I can understand why a governmental body would choose to keep its hands as clean as possible instead and allow a massive tragedy rather than staining their hands with a sin. Hmm.)

Replies from: None
comment by [deleted] · 2011-09-26T16:24:23.320Z · LW(p) · GW(p)

Perhaps if sacrificing myself was the only way to save my sister, say, though that's a somewhat unfair situation to suggest as relevant.

Assume the least-convenient possible world. It's not like this one is fair either...

Replies from: Multipartite
comment by Multipartite · 2011-09-26T16:38:10.752Z · LW(p) · GW(p)

Indeed. nods

If sacrifice of myself was necessary to (hope to?) save the person mentioned, I hope that I would {be consistent with my current perception of my likely actions} and go through with it, though I do not claim complete certainty of my actions.

If those that would die from the hypothetical disease were the soon-to-die-anyway (very elderly/infirm), I would likely choose to spend my time on more significant areas of research (life extension, more-fatal/-painful diseases).

If all other significant areas had been dealt with or were being adequately dealt with, perhaps rendering the disease the only remaining ailment that humanity suffered from, I might carry out the research for the sake of completeness. I might also wait a few decades depending on whether or not it would be fixed even without doing so.

A problem here is that the more inconvenient I make one decision, the more convenient I make the other. If I jump ahead to a hypothetical case where the choices are completely balanced either way, I might just flip a coin, since I presumably wouldn't care which one I took.

Then again, the stacking could be chosen such that no matter which I took it would be emotionally devastating... though that conveniently (hah) comprises such a slim fraction of all possibilities that I gain by assuming there to always be some aspect that would make a difference or which could be exploited in some way: if there were, then I could find it and make a sound decision, and if there weren't, my position would not in fact change (by the nature of the balanced setup).

Stepping back and considering the least-convenient-possible-world argument, I notice that its primary function may be to get people to accept two options as both conceivable in different circumstances, rather than rejecting one on technicalities. If I already acknowledge that I would be inclined to make a different choice depending on different circumstances, am I freed from that application, I wonder?

comment by lessdazed · 2011-11-07T21:00:37.592Z · LW(p) · GW(p)

Health professionals and consumers may change their choices when the same risks and risk reductions are presented using alternative statistical formats. Based on the results of 35 studies reporting 83 comparisons, we found the risk of a health outcome is better understood when it is presented as a natural frequency rather than a percentage for diagnostic and screening tests. For interventions, and on average, people perceive risk reductions to be larger and are more persuaded to adopt a health intervention when its effect is presented in relative terms (eg using relative risk reduction which represents a proportional reduction) rather than in absolute terms (eg using absolute risk reduction which represents a simple difference). We found no differences between health professionals and consumers. The implications for clinical and public health practice are limited by the lack of research on how these alternative presentations affect actual behaviour. However, there are strong logical arguments for not reporting relative values alone, as they do not allow a fair comparison of benefits and harms as absolute values do.

http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD006776.pub2/abstract

This is awesome: Spin the risk!

comment by [deleted] · 2012-03-18T10:15:53.583Z · LW(p) · GW(p)

I reject the idea that human suffering is a linear function. Once you accept such an idea, it's not too difficult to say that, to avoid minor inconveniences for a sufficiently large number of people, we should torture one person for his whole life.

Here is a question to demonstrate:

Two people are scheduled to be tortured for no reason you know. One is to be tortured for two days, the other for three. You know that the subjects are equally resistant to torture.

You have the choice:

  • Reduce the time one subject is tortured from 3 to 2 days. Other is tortured for 2d.
  • Reduce the time one subject is tortured from 2 to 1 day. Other is tortured for 3d.

Which choice would you make?

I certainly hope that most so-called utilitarians can see the importance of that choice. To grow infatuated with a small cold flame is to blind yourself to the possibility of being wrong in your estimates. Before you say that torture is preferable to dust specks, ask yourself how human suffering scales from specks to torture.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-03-18T12:02:50.537Z · LW(p) · GW(p)

I reject the idea that human suffering is a linear function. Once you accept such an idea, it's not too difficult to say that, to avoid minor inconveniences for a sufficiently large number of people, we should torture one person for his whole life.

To reject the second concept, you don't just need to reject the idea of human suffering as a linear function; you need to reject the idea of human suffering as quantifiable at all -- whether it can be expressed as a linear or exponential or any other kind of function.

Here is a question to demonstrate:

Two people are scheduled to be tortured for no reason you know. One is to be tortured for two days, the other for three. You know that the subjects are equally resistant to torture. You have the choice:

  • Reduce the time one subject is tortured from 3 to 2 days. Other is tortured for 2d.
  • Reduce the time one subject is tortured from 2 to 1 day. Other is tortured for 3d.

Which choice would you make?

I don't actually understand the purpose of this question. For me to answer it correctly I'd have to know much more about the lasting effects of torture over 1, 2 or 3 days respectively, to figure out whether "1 day of torture + 3 days of torture" is a greater or lesser disutility than "2 times 2 days of torture".

None of the utilitarians here, as far as I know, has ever argued that you can just multiply the timespan of torture to find the disutility thereof. I think your argument is not actually attacking anything we recognize as utilitarianism.

Before you say that torture is preferable to dust specks, ask yourself how human suffering scales from specks to torture.

I attempted such a scaling in a thread on a different forum. For me it went a bit like this:
20,000 dust specks worse than a single papercut.
50 papercuts worse than a 5-minute hiccup
20 hiccups worse than a half-hour headache
50 half-hour headaches worse than an evening of diarrhea
400 diarrheas worse than a broken leg.
30 broken legs worse than a month of unjust imprisonment.
200 months of unjust imprisonment worse than a month of torture
100 torture-months worse than a torture year
10 torture-years worse than a torture of 5 years
10 torture-5-years worse than torture of 20 years
5 torture-20-years worse than torture of 50 years

That took me to 120,000,000,000,000,000,000 dust specks (one speck per person) being worse than 50 years of torture. So, basically, specks for the population of 20 billion Earths.

The number still seems a bit too small for me, and I'd currently probably revise upwards some of the steps above (e.g. the factor between diarrheas and broken legs, and perhaps a few other figures there).
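As a quick sanity check on the arithmetic, multiplying out the factors exactly as listed above (illustration only):

    # Conversion factors from the list above, in order from dust specks up to
    # 50 years of torture.
    factors = [20_000, 50, 20, 50, 400, 30, 200, 100, 10, 10, 5]

    specks_per_50_year_torture = 1
    for f in factors:
        specks_per_50_year_torture *= f

    print(specks_per_50_year_torture)        # 120,000,000,000,000,000,000 = 1.2e20
    print(specks_per_50_year_torture / 6e9)  # ~2e10, i.e. the "20 billion Earths"
                                             # (at roughly 6 billion people per Earth)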

Replies from: None
comment by [deleted] · 2012-03-18T12:34:43.173Z · LW(p) · GW(p)

Ah. But you see, I don't think one can get away with classifying any physical part of our universe as unquantifiable in principle (evidence leads me to believe that measurements of pain or damage are possible, for example, though they are not ideally accurate). And I do not argue that no judgements in favour of moderate individual damage vs. huge spread damage will be justified. Just that in the specific case of dust specks versus torture I don't think most of us should choose torture.

I don't actually understand the purpose of this question. For me to answer it correctly I'd have to know much more about the lasting effects of torture over 1, 2 or 3 days respectively, to figure out whether "1 day of torture + 3 days of torture" is a greater or lesser disutility than "2 times 2 days of torture".

The thing about that choice is that one does not have overwhelming evidence available. What would be your best estimate of the disutility function, given the evidence you currently possess? If you personally had to make such a choice, you would be forced to consider at least some disutility function, or admit that you are making a judgement without taking into account the well-being of the subjects. The whole idea behind the question is to point out that some kind of utility function is required, and it is that function that ultimately determines your answer.

As to the correct answer, I don't see how anyone could ever give a perfectly correct one (as there is no way to know in advance what specific effect torture will have on each of the subjects, though in the future I expect we will be able to give pretty good estimates), but if I were forced to make such a choice, I would definitely try to take the option with the least total damage to the subjects. And I do not currently think that 3 days of torture would be less damaging than two, nor that the second day would do more or equal harm compared to the third.

None of the utilitarians here, as far as I know, has ever argued that you can just multiply the timespan of torture to find the disutility thereof. I think your argument is not actually attacking anything we recognize as utilitarianism.

I sure hope so. I expect that my meaning was lost in the "so called" part. I am just terrified that some people will be tempted to don the robes of utilitarianism and argue in favour of oppressing some small groups for the benefit of the whole society at large. We may not favour such arguments in politics right now, but are you sure that in the future such a flawed call to pragmatism will not become a danger? I'm even more terrified by the fact that Eliezer posted this here without several explicit warnings about such a danger. Just as knowing about biases can be dangerous, this example is potentially lethal.

So the point I wanted to make is that before considering a choice in such a dilemma one has to carefully examine his (dis)utility functions, not just shut up and multiply.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-03-18T13:56:17.339Z · LW(p) · GW(p)

am just terrified that some people will be tempted to don the robes of utilitarianism and argue in favour of oppressing some small groups for the benefit of the whole society at large.

Oppression of minorities can happen via consequentialist claims (e.g. Stalin) but also via deontological claims (e.g. Islamists). Either way, such societies have proved themselves tyrannical for pretty much everyone, not just the most oppressed minorities. So in those cases it's not "oppressing the few for the benefit of the many"; it ends up being "oppressing the many for the benefit of the few". Likewise with slaveholding societies, etc.

A better example of oppressing the few for the benefit of the many is how our own (modern, Western) societies lock away criminals. We oppress prisoners (and frankly we oppress them too much and too needlessly, in my view) for the supposed benefit of the whole society. You may argue that we oppress them because they deserve it and NOT for the benefit of society -- but would you consider the existence of prisons justifiable if it were done just because the prisoners deserve it, and not because society as a whole also benefited from locking them away? I, for one, would not.

If you argue that we only lock away the people we believe guilty -- that's not true either. We detain suspects as well (and therefore oppress them in this manner) before their guilt is determined. And we fully expect that at least some of them will be found not guilty. We currently accept this oppression of the (relatively few) innocent for the benefit of society as a whole.

Replies from: None
comment by [deleted] · 2012-03-18T14:48:16.701Z · LW(p) · GW(p)

Indeed. But prisons can be justified from a pragmatic point of view. Certainly we detain these people for the benefit of the many, but we do not torture them, and lately there is a trend to give them more opportunities to work and create. I should note here that I absolutely abhor the death penalty, so let's not go off on that tangent.

As for Stalin, I am ashamed to admit that I cannot remember him actually using an appeal to pragmatism to convince anyone. Perhaps it was more like giving the audience a safe route out of challenging him after he had made the decision alone -- as in, "his argument sounds vaguely convincing, so I don't have to feel guilty about avoiding being the first to dissent and going to the Gulag." Can you see how much more convenient it may be for a dictator to give such a line of retreat? Usually dictators are not in a position where there is a real need to convince people of something by arguing, as far as I can see.

What I have in mind is a situation sometime in the not-distant future, when appeals to pragmatism have become more common in politics but the general population is not yet ready to spot skewed utility functions in such a dilemma. So it will indeed become possible to convince the majority of the population to willingly cooperate and oppress the few. Most people really don't require that much convincing when it comes to a "certain inconvenience for me vs. distant suffering for some strangers that I will probably never see" type of choice anyway. And when such a Master of rationality as Eliezer himself argued in favour of torturing some hapless chap for 50 years just so many people would be spared an inconvenience of blinking once, you can see where this will go. I'm just afraid that while Eliezer tries to instil the certainly useful principle of "shut up and multiply", he may well be setting some people up to prefer "the many" side of such a dilemma. One cannot be too cautious when teaching aspiring rationalists.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-03-18T15:26:49.876Z · LW(p) · GW(p)

But prisons can be justified from a pragmatic point of view.

What's the difference between the "pragmatic point of view" (which it seems you justify) and the "benefit of the many" (which I understand you don't justify)? This seems to me a meaningless distinction.

Certainly we detain these people for the benefit of the many, but we do not torture them

Well, most people don't perceive enough benefit to society in hurting prisoners more than they currently are hurt. So that's rather beside the point, isn't it? The point is that we detain and oppress the few for the benefit of the many.

and lately there is a trend to give them more opportunities to work and create.

Even assuming I accept that such a trend exists (I'm not sure about it), again we don't consider such opportunities to be against the benefit of the many. So it's beside the point.

So it will indeed become possible to convince the majority of the population to willingly cooperate and oppress the few.

As I already said, we already cooperate in order to oppress the few. We call those few "prisoners", whom we oppress for the benefit of the many.

And when such a Master of rationality as Eliezer himself argued in favour of torturing some hapless chap for 50 years just so many people would be spared an inconvenience of blinking once, you can see where this will go.

No, I'm sorry, but I really REALLY don't see where it's supposed to be going. In the current world people are tortured to death for much less reason than that. Not even for the small benefit of 3^^^^3 people, but for no benefit or even for negative benefit.

I'd rather argue with someone about torture on the terms of expected utility and disutility for the whole of humanity, rather than with someone who just repeats the mantra "If you oppose torture, then you're just a terrorist-lover who hates our freedoms" or for that matter the opposite "If you support torture for any reason whatsoever, even in extreme hypothetical scenarios, you're just as bad as the terrorists".

And currently it's the latter practice that seems dominant in actual discussions (and defenses also) of torture, not any utilitarian tactic of assigning utilities to expected outcomes.

Replies from: None
comment by [deleted] · 2012-03-18T16:13:11.658Z · LW(p) · GW(p)

What's the difference between the "pragmatic point of view" (which it seems you justify) and the "benefit of the many" (which I understand you don't justify)? This seems to me a meaningless distinction.

It seems that way because it is that way. I simply failed to communicate my idea properly. In fact I mentioned that

I do not argue that no judgements in favour of moderate individual damage vs. huge spread damage will be justified. Just that in the specific case of dust specks versus torture I don't think most of us should choose torture.

What I truly want is not to dismiss the "benefit of the many" (nothing wrong with it), but to bring into focus the issue of comparing utility functions, which in this case I think Eliezer messed up.

As I already said, we already cooperate in order to oppress the few. We call those few "prisoners", whom we oppress for the benefit of the many.

Yes, we do. And it seems that we both prefer to actually talk about such decisions in terms of utility gain or loss. But just because the two of us are being reasonable does not mean that everyone else will be. What worries me is that some people learning about "the Way" from Eliezer's post may acquire a bit of bias toward "the many" side of such dilemmas. And then, when the issue arises in the future, they will choose the wrong side and perhaps convince many others to take the wrong side.

No, I'm sorry, but I really REALLY don't see where it's supposed to be going. In the current world people are tortured to death for much less reason than that. Not even for the small benefit of 3^^^^3 people, but for no benefit or even for negative benefit.

Now, this is not certain, but I expect Eliezer to have a huge impact on the future of our species, because issues of thinking and deciding are indeed central to our daily lives. And any inadvertent mistake here or in his book will have noticeable consequences. Someone in the future will take out that book and point to how Eliezer prefers to condemn one person to torture instead of having 3^^^^3 people blink, and the audience may well be convinced that it is better in general to prefer "the many", because Eliezer will be an authority and their brains will just dump 3^^^^3 into the "many" mental bucket. Better to introduce a few cautionary lines into that post and book now, while there is time to do it.

I'd rather argue with someone about torture on the terms of expected utility and disutility for the whole of humanity, rather than with someone who just repeats the mantra "If you oppose torture, then you're just a terrorist-lover who hates our freedoms" or for that matter the opposite "If you support torture for any reason whatsoever, even in extreme hypothetical scenarios, you're just as bad as the terrorists".

So would I. I am not trying to argue with you here. As far as I can see we agree on pretty much everything so far. I probably just fail to convey my ideas most of the time.

comment by buybuydandavis · 2012-12-17T22:38:32.943Z · LW(p) · GW(p)

Altruism isn't the warm fuzzy feeling you get from being altruistic. If you're doing it for the spiritual benefit, that is nothing but selfishness. The primary thing is to help others, whatever the means. So shut up and multiply!

That's how you get a warm and fuzzy feeling if you're a consequentialist. If you're a deontologist, you get it by obedience to the Rule, or often more easily by saying "Yay, Rule!"

comment by [deleted] · 2012-12-20T17:00:29.522Z · LW(p) · GW(p)

How many units of physical pain per speck? How many units of perceived mistreatment per speck? How many instances of basic human rights infringements per speck?

I don't have a terminal value like the following: "People should never experience an infinitesimally irritating dust speck hitting their eye"

I do have a terminal value like the following: "People should never go through torture"

So in this case we can make another calculation, which is: a googolplex of instances that are compatible with my terminal values vs. a single event that is not compatible with my terminal values.

I'd like to add, however, that if these events are causally connected then dust specks would become the obvious choice. I'm sure there's a certain probability of getting into a car accident due to blinking, etc.; there are lots of other ways to make essentially the same argument. Anyway, that aspect was not emphasized in either post, so I take it it was not intended either.

If the initial option 1 were written as "Save 400 lives with certainty; 100 people die with certainty", it would be less misleading. Because if you interpret option 1 as no one dying, it actually is the correct choice, although it later becomes clear anyway.

Replies from: ArisKatsaris, ArisKatsaris
comment by ArisKatsaris · 2012-12-20T18:00:49.161Z · LW(p) · GW(p)

I don't have a terminal value like the following: "People should never experience an infinitesimally irritating dust speck hitting their eye". I do have a terminal value like the following: "People should never go through torture".

Are you sure you can identify your terminal values as well as that? Most people can't.

If so, can you please give a full list of your terminal values, or as full a list as you can make it? Thanks in advance.

Replies from: None
comment by [deleted] · 2012-12-20T18:45:04.080Z · LW(p) · GW(p)

Identifying a single value does not require you to identify all your values, as your sardonic comment seems to suggest. I chose that phrasing because it was plausible. In creating this example of terminal values I did not want to get into a full analysis of what's wrong here; I merely intended to point out that torture is not an obvious option, and to do that with a compact reply. The post seems to suggest convertibility between dust specks and torture; if you can come up with a couple of ways to weigh the situation where convertibility does not follow, it becomes a trivial issue to keep listing them. That, in my opinion, is sufficient to conclude that there is no obvious, ultimately and absolutely correct answer, and that you should proceed with care instead of shrugging and giving the torture verdict. Most of the actual problems with the dilemma do not stem from the number googolplex, but rather from this being a hypothetical setup which seems to eliminate consequences, and consequences are usually a big part of what people perceive as right and wrong. However, you can argue that when examining consequences you will eventually hit some kind of terminal values. So there you have it.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-12-20T19:08:15.347Z · LW(p) · GW(p)

If you don't consider this particular type of disutility (dust speck) convertible into the other (torture), the standard follow-up argument is to ask you to identify the smallest kind of disutility that might nonetheless be somehow convertible.

The typical list of examples includes "a year's unjust imprisonment", "a broken leg", "a splitting migraine", "a diarrhea", "an annoying hiccup", "a papercut", and "a stubbed toe".

Would any of these, if it took the place of the "dust speck", change your position so that it's now in favour of preferring to avert 3^^^3 repetitions of the lesser disutility, rather than averting the torture of the single person?

E.g. is it better to save a single person from being tortured for 50 years, or better to save 3^^^3 people from suffering a year's unjust imprisonment?

Replies from: None
comment by [deleted] · 2012-12-20T19:42:03.099Z · LW(p) · GW(p)

As you can see from both of my comments above, it's not the mathematical aspect that's problematic. Your choosing the word "disutility" means you've already accepted these units as convertible into a single currency.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-12-20T20:14:33.156Z · LW(p) · GW(p)

In what manner would you prefer someone to decide such dilemmas? Arguing that the various sufferings might not be convertible at all is more of an additional problem, not a solution -- not an algorithm that indicates how a person or an AI should decide.

I don't expect that you think an AI should explode in such a dilemma, nor that it should prefer to save neither the potential torture victim nor the potential dust-specked multitudes....

comment by ArisKatsaris · 2012-12-20T18:10:15.502Z · LW(p) · GW(p)

Because if you interpret option 1 as no one dying

Such a reading would, frankly, be at the very least extremely careless.

When the juxtaposition is between saving 400 lives and saving 500 lives, it's obvious that an additional 100 people are NOT being saved in the first scenario.

comment by Jagan · 2013-01-23T06:50:29.911Z · LW(p) · GW(p)

You've officially given me the best example of the inherent flaw in the utilitarian model of morality. Normally, I use the example of a man who is the sole provider for an arbitrarily large family murdering an old homeless man. Utilitarianism says he should go free. The murderer's family, of size X, will all experience disutility from his imprisonment. Call that Y. The homeless man, literally no one will miss. No family members to gain utility from exacting justice. Therefore, since X*Y > 0, the murderer should go back to providing for his family. I do not believe any rational person would consider that just, moral, or even reasonable.

I'm all for rational evaluations of problems, but rationality does not apply to moral arguments. Morality is an emotional response by its very nature. Rational arguments are fine when we're comparing large numbers of people: a plan that will save 400 lives vs. a plan that has a 90% chance to save 500 lives. That's not morality, that's rationality. It doesn't truly become about morality until it's personal. If you could save the lives of 3 people you've never met, would you let yourself be tortured? Would you torture someone? Regardless of your answer, it is easier said than done...

P.S. I'm not a psychologist, but I imagine if you had different answers to torturing vs. being tortured, that says something about you. Not sure what...

Replies from: CarlShulman, Peterdjones, MugaSofer
comment by CarlShulman · 2013-01-23T07:31:11.278Z · LW(p) · GW(p)

Normally, I use the example of a man who is the sole provider of an arbitrarily large family murdering an old homeless man. Utilitarianism says he should go free. The murderer's family, of size X, will all experience disutility from his imprisonment. Call that Y. The homeless man, literally no one will miss. No family members to gain utility from exacting justice. Therefore, since X*Y > 0, the murderer should go back to providing for his family. I do not believe any rational person would consider that just, moral, or even reasonable.

Err...effectively legalizing the murder of large classes of the population would tend to increase the murder rate, costing far more lives in aggregate, setting aside the dire consequences for social order and cooperation. You should use an example where the repellent recommendation actually increases rather than decreases happiness/welfare.

Replies from: Jagan
comment by Jagan · 2013-01-23T08:22:36.695Z · LW(p) · GW(p)

Well, I could qualify my example, saying surveillance ensures only people who provide zero utility are allowed to be murdered, but as I said, the article makes my point much better, even if it doesn't mean to. A single speck of dust, even an annoying and slightly painful one, in the eyes of X people NEVER adds up to 50 years of torture for an individual. It doesn't matter how large you make X, 7 billion, a googolplex, or 13^^^^^^^^41. It's irrelevant.

Replies from: TheOtherDave, MugaSofer
comment by TheOtherDave · 2013-01-23T16:02:54.868Z · LW(p) · GW(p)

Imagine that you find yourself visiting a hypothetical culture that acknowledges two importantly distinct classes of people: masters and slaves. By cultural convention, slaves are understood to have effectively no moral weight; causing their suffering, death, injury etc. is simply a property crime, analogous to vandalism. Slaves and masters are distinguished solely by a visible hereditable trait that you don't consider in any way relevant to their moral weight as people.

Shortly after your arrival, a thousand slaves are rounded up and killed. You, as a properly emotional moral thinker, presumably express your dismay at this, and the natives explain that you needn't worry; it was just a market correction and the economics of the situation are such that the masters are better off now. You explain in turn that your dismay is not economic in nature; it's because those slaves have moral weight.

They look at you, puzzled.

How might you go about explaining to them that they're wrong, and slaves really do have moral weight?

Some time later, you return home, and find yourself entertaining a visitor from another realm who is horrified by the discovery that a million old automobiles have recently been destroyed. You explain that it's OK, the materials are being recycled to make better products, and he explains in turn that his dismay is because automobiles have moral weight.

How might you go about explaining to him that he's wrong, and cars really don't have moral weight?

Replies from: Peterdjones
comment by Peterdjones · 2013-01-23T16:09:39.874Z · LW(p) · GW(p)

"You might have been a slave" is imaginable in a way that "you might have been an automobil" is not. See Rawls and Kant.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-23T16:52:25.025Z · LW(p) · GW(p)

Yup. But would they argue as Jagan did that "rationality does not apply to moral arguments. Morality is an emotional response by its very nature"? I'm specifically interested in Jagan's answers to my questions, given that assertion.

comment by MugaSofer · 2013-01-24T13:32:53.189Z · LW(p) · GW(p)

I could qualify my example, saying surveillance ensures only people who provide zero utility are allowed to be murdered,

If some people's lives are worth zero utility, then by definition they are worthless. That's what "zero utility" means. Did you mean something else? Because it seems to me that nobody is worthless in real life, and that's why your example doesn't work.

A single speck of dust, even an annoying and slightly painful one, in the eyes of X people NEVER adds up to 50 years of torture for an individual. It doesn't matter how large you make X, 7 billion, a googolplex, or 13^^^^^^^^41. It's irrelevant.

And you judge it irrelevant based on what? Scope insensitivity is a known bias in humans, so "instinct" is reliably going to go wrong in this case without mind-hacking. Two murders are worse than one murder, two groups of people with dust specks in their eyes are worse than one such group; at what point does this stop being true?

comment by Peterdjones · 2013-01-23T16:15:10.724Z · LW(p) · GW(p)

The murderer's family, of size X, will all experience disutility from his imprisonment. Call that Y. The homeless man, literally no one will miss. No family members to gain utility from exacting justice. Therefore, since X*Y > 0, the murderer should go back to providing for his family.

You're overlooking the disutility to the murdered man. Actually, what you describe is Prudent Predation, a famous objection to egoism, not utilitarianism.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T11:57:35.950Z · LW(p) · GW(p)

I think you forgot to finish this:

Actually, what you describe is Prudent Predation, a famous objection to egoism, not

Excellent point about the murdered man, though.

comment by MugaSofer · 2013-01-24T13:56:30.377Z · LW(p) · GW(p)

I'm all for rational evaluations of problems, but rationality does not apply to moral arguments. Morality is an emotional response by its very nature. Rational arguments are fine when we're comparing large numbers of people.

I don't understand this. Sure, small amounts often have more emotional force ("near mode") than large ones ("far mode".) But that doesn't make it right to let your bias hurt people. OTOH, you said "It doesn't truly become about morality until it's personal", so maybe you mean something unusual when you say "morality".

I'm not a psychologist, but I imagine if you had different answers to torturing vs. being tortured, that says something about you. Not sure what...

Humans are often unable to conform perfectly to their desires, even when they know what the best choice is. It's known as "akrasia". For example, addicts often want to stop taking the drugs. If you couldn't bring yourself to make that sacrifice, that doesn't mean you shouldn't, or that you believe you shouldn't. (Not saying you think it does, just noting for the record.)

comment by sjmp · 2013-04-17T16:15:42.624Z · LW(p) · GW(p)

You are making the assumption that the feeling caused by having a dust speck in your eye is in the same category as the feeling of being tortured for 50 years.

Would you rather have a googolplex people drink a glass of water or have one person tortured for 50 years? Would you rather have a googolplex people put on their underwear in the morning or have one person tortured for 50 years? If you put the feeling of a dust speck in the same category as the feelings arising from 50 years of torture, you can put pretty much anything in that category, and you end up preferring one person being tortured for 50 years to almost any physical phenomenon that would happen to a googolplex people.

And even if it is in the same category? I bet that just having a thought causes some extremely small activity in brain areas related to pain. Multiply that by a large enough number and the total pain value will be greater than the pain value of a person being tortured for 50 years! I would hope that there is no one who would prefer one person being tortured for 50 years to 3^^^3 persons having a thought...
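
A minimal sketch of that "multiply anything by a big enough number" arithmetic, with invented placeholder magnitudes (and done in log10 space, since a googolplex has far too many digits to write out):

```python
# Hypothetical figures only: the point is the addition in log space, not the numbers.
from math import log10

LOG10_GOOGOLPLEX = 10**100                            # log10 of the number of people affected

log10_disutility_of_a_stray_thought = log10(1e-30)    # vanishingly small "pain" per thought (assumed)
log10_disutility_of_50_year_torture = log10(1e12)     # enormous but finite (assumed)

# If disutilities simply add, total = N * tiny, so log10(total) = log10(N) + log10(tiny).
log10_total_tiny = LOG10_GOOGOLPLEX + log10_disutility_of_a_stray_thought

print(log10_total_tiny > log10_disutility_of_50_year_torture)   # True: the aggregate dwarfs the torture
```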

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-17T18:06:34.803Z · LW(p) · GW(p)

You are dodging an important part of the question.

The "dust speck" was originally adopted as a convenient label for the smallest imaginable unit of disutility. If I believe that disutility exists at all and that events can be ranked by how much disutility they cause, it seems to follows that there's some "smallest amount of disutility I'm willing to talk about." If it's not a dust speck for you, fine; pick a different example: stubbing your toe, maybe. Or if that's not bad enough to appear on your radar screen, cutting your toe off. The particular doesn't matter.

Whatever particular small problem you choose, then ask yourself how you compare small-problem-to-lots-of-people with large-problem-to-fewer-people.

If disutilities add across people, then for some number of people I arrive at the counterintuitive conclusion that 50 years of torture to one person is preferable to small-problem-to-lots-of-people. And if I resist the temptation to flinch, I can either learn something about my intuitions and how they break down when faced with very large and very small numbers, or I can endorse my intuitions and reject the idea that disutilities add across people.
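
To make the "if disutilities add across people" step concrete, here is a toy calculation with arbitrary assumed magnitudes (not anyone's actual utility function), showing where the crossover lands:

```python
import math

# Assumed, purely illustrative disutility values in arbitrary units.
disutility_stubbed_toe = 1e-6     # one stubbed toe
disutility_torture_50y = 1e9      # 50 years of torture

# Under simple addition, the smallest number of people N at which
# N stubbed toes outweigh one person's torture:
crossover_N = math.ceil(disutility_torture_50y / disutility_stubbed_toe)
print(crossover_N)                # 10**15 -- vastly smaller than 3^^^3
```

Rejecting the conclusion therefore means rejecting the additive step itself (or the ranking of harms), not quibbling over which particular small problem gets plugged in.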

Replies from: sjmp
comment by sjmp · 2013-04-17T19:46:24.020Z · LW(p) · GW(p)

Whatever particular small problem you choose, then ask yourself how you compare small-problem-to-lots-of-people with large-problem-to-fewer-people. If disutilities add across people, then for some number of people I arrive at the counterintuitive conclusion that 50 years of torture to one person is preferable to small-problem-to-lots-of-people.

It is counterintuitive, and at least for me it's REALLY counterintuitive. On whether to save 400 people or 500 people with a 90% chance, it didn't take me many seconds to choose the second option, but this feels very different. Now that you put it in terms of units of disutility instead of dust specks it is easier to think about, and on some level it does feel like the torture of one person would be the logical choice. And then part of my mind starts screaming that this is wrong.

Thanks for your reply though, I'll have to think about all this.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-17T20:16:18.729Z · LW(p) · GW(p)

I suspect it's really counterintuitive to most people. That's why it gets so much discussion, and in particular why so many people fight the hypothetical so hard. The "yeah, that makes sense, but then my brain starts screaming" reaction is pretty common.

And yes, I agree that if we compare things that are closer together in scale, our intuitions don't break down quite so dramatically.

comment by Voltairina · 2013-08-21T01:33:43.310Z · LW(p) · GW(p)

It would be a very different kind of evaluation if it were the /last/ 500 humans we were talking about - with a 90% chance that all would live and a 10% chance that all would die on one pathway, versus a guaranteed 100 dying on the other pathway. But since they are just /some group/ of 500 humans, with presumably other groups in other places, the gamble is worth taking - gambling in this way pays out in fewer lives lost, on average.
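
For the ordinary some-group-of-500 case, the expected-lives arithmetic referenced here is just the post's own numbers multiplied out; a quick check:

```python
# Expected lives saved under each option (numbers taken from the post itself).
p_success = 0.9
expected_gamble = p_success * 500 + (1 - p_success) * 0   # 450.0
expected_certain = 400

print(expected_gamble > expected_certain)   # True: the gamble saves more lives on average
```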

comment by christopherj · 2013-10-09T05:19:11.993Z · LW(p) · GW(p)

In the torture vs dust specks comparison, it is important not to discard the disutilities of unfairness, nor of moral hazards. One cannot publicly acknowledge the superiority of "one guy tortured" vs "lots of people mildly inconvenienced" without others, potentially including our politicians or our enemies, deciding that this supports their use of actual torture on actual people "for the greater good". Accepting torture has a negative utility for many people.

Also we humans value fairness, and prefer that things be evenly distributed (fairness has positive utility). The disutility of even a tiny fraction of those people knowing that someone was tortured so as to spare them from dust specks, when added together, would probably exceed that of the person being tortured. The danger of "shut up and multiply" is that someone might be multiplying the wrong things.

Rejecting the principle that we can't, in general, sacrifice one person for the good of many, also has disutility. If we were to accept torturing someone to prevent a lot of dust specks, imagine how much time would have to be spent arguing whether we can take away this other guy's property for some greater good (which might fail to deliver, and might have been suggested for someone's self-interest).

comment by Taurus_Londono · 2013-11-29T16:28:43.601Z · LW(p) · GW(p)

Raise your hand if you (yes you, the person reading this) will submit to 50 years of torture in order to avert the "least bad" dust speck momentarily finding its way into the eyes of an unimaginably large number of people.

Why was it not written "I, Eliezer Yudkowsky, should choose to submit to 50 years of torture in place of a googolplex people getting dust specks in their eyes"?

Why restrict yourself to the comforting distance of omniscience?

Did Miyamoto Musashi ever exhort the reader to ask his sword what he should want? Why is this not a case of using a tool as an end in and of itself rather than as a means to achieve a desired end?

Are you irrational if your something to protect is yourself...from torture?

Has anyone ever addressed whether or not this applies to the AGI Utility Monster whose experiential capacity would presumably exceed the ~7 billion humans who should rationally subserve Its interests (whatever they may be)?

Replies from: TheOtherDave, ArisKatsaris, hyporational
comment by TheOtherDave · 2013-11-29T18:21:47.050Z · LW(p) · GW(p)

I would not submit to 50 years of torture to avert a dust speck in the eyes of lots of people.
I suspect I also would not submit to 50 years of torture to avert a stranger being subjected to 55 years of torture.
It's not clear to me what, if anything, I should infer from this.

Replies from: hyporational, army1987
comment by hyporational · 2013-11-30T07:18:38.565Z · LW(p) · GW(p)

Ready the tar and feathers, but I wouldn't submit myself to even 1 year of torture to avert a stranger being tortured for 50 years, if no terrible social repercussions could be expected.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-30T15:40:42.330Z · LW(p) · GW(p)

Yup. I suspect that's true of the overwhelming majority of people. It's most likely true of me.

comment by A1987dM (army1987) · 2013-11-30T09:37:04.433Z · LW(p) · GW(p)

That you value yourself more than a stranger. (I don't think there's anything wrong with that, BTW, so long as this doesn't mean you'd defect in a PD against them.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-30T15:43:47.190Z · LW(p) · GW(p)

Sure. Sorry, what I meant was it's not clear what I should infer from this about the relative harmfulness of 50 years of torture, 55 years of torture, and Dust Specks.

Mostly, what it seems to imply is that "would I choose A over B?" doesn't necessarily have much to do with the harmfulness to the system as a whole of A and B.

comment by ArisKatsaris · 2013-11-29T20:10:27.284Z · LW(p) · GW(p)

I suffer under no delusion that I'm a morally perfect individual.

You seem to believe that to identify the morally correct path, one must also be willing to follow it. Morality pushes our wills in that direction, but selfishness has its own role to play, and here it pushes elsewhere.

But yes, I am willing to say that I should submit to 50 years of torture in order to save 3^^^3 people from getting dust specks in their eyes. I'll also openly admit that I am not willing to submit to such. This is not contradictory: "should" is a moral judgment, but being willing to be moral at such high cost is another thing entirely.

comment by hyporational · 2013-11-30T07:15:25.064Z · LW(p) · GW(p)

Why was it not written "I, Eliezer Yudkowsky, should choose to submit to 50 years of torture in place of a googolplex people getting dust specks in their eyes"?

Because then it's clearly not the same argument anymore, and would appeal only to people who subscribe to an even narrower form of incredibly altruistic utilitarianism, people who I personally suspect don't even exist, statistically speaking. If the person chosen for torture were random, it would make a bit more sense, but it would essentially be the same argument given the ridiculously high numbers involved.

comment by more_wrong · 2014-05-27T02:18:26.842Z · LW(p) · GW(p)

It depends on the actual situation and my goal.

Imagine I were a ship captain assigned to try to rescue a viable sample of a culture from a zone that was about to be genocided. I would be very likely to take the 400 peopleweights (including books or whatever else they valued as much as people) of evacuees, unless someone made a convincing case that the extra 100 people were vital cultural or genetic carriers. For definiteness, imagine my ship is rated to carry up to 400 peopleweight worth of passengers in almost any weather, but 500 people would overload it to the point of sinking during a storm of the sort that the weather experts say is 10 percent probable during the voyage to safe harbor.

People are not dollars or bales of cotton to be sold at market. You can't just count heads and multiply that number by utilons per head and say "This answer is best, any other answer is foolish."

Well obviously you can do that, but the main reward for doing so is the feeling that you are smarter than the poor dumb fools who believe that the world is complex and situation dependent. That is, you can give yourself a sort of warm fuzzy feeling of smug superiority by defeating the straw man you constructed as your foolish competitor in the Intelligence Sweepstakes.

That being said, if there really is no other information available, I would make the same choice Eliezer recommends; I just deny that it is the only non-foolish choice.

This applies to lottery tickets as well. A slim chance at escaping economic hell might be worth more than its nominal expected return value to a given individual. 100 million dollars might very well have a personal utility over a billion times the value of one dollar for example, if that person's deep goals would be facilitated mightily by the big win and not at all by a single dollar or any reasonable number of dollars they might expect to save over the available time. Also, if any entertainment dollar is not a foolish waste, then a dollar spent on a lottery ticket is worth its expected winning value plus its entertainment value, which varies /profoundly/ from person to person.
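
A toy model of that claim, with an invented utility function and invented odds (this is only a sketch of how a big win could dominate for a particular person, not a case for buying tickets):

```python
def personal_utility(dollars, goal_cost=100_000_000, goal_bonus=1e10):
    """Hypothetical agent: money is worth roughly face value, except that
    reaching the big goal carries an enormous extra personal payoff."""
    return dollars + (goal_bonus if dollars >= goal_cost else 0)

p_win = 1e-8                    # assumed jackpot odds, for illustration only
jackpot = 100_000_000

# Expected utility of spending the dollar on a ticket vs. keeping it.
eu_ticket = p_win * personal_utility(jackpot) + (1 - p_win) * personal_utility(0)
eu_dollar = personal_utility(1)

print(eu_ticket, eu_dollar)     # ~101.0 vs 1: for *this* agent the ticket wins, before entertainment value
```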

I myself prefer to give people $1 lottery tickets instead of $2.95 witty birthday cards. Am I wise or foolish in this? But posts here have branded all lottery purchases as foolish, so I must be a fool. I bow to the collective wisdom here and admit that I am a fool. There is a lot of other evidence that supports this conclusion :)

if you give yourself over to rationality without holding back, you will find that rationality gives to you in return.

I heartily agree, that's one reason I try to avoid trotting out applause lights to trigger other people into giving me warm fuzzies.

I am happy for one person to be tortured for 50 years to stave off the dust specks, as long as that person is me. In fact, this pretty much sums up my career in software development. It is not my favorite thing to do, but I endured cubicle hell for many years, partly in exchange for money, but also because of my deep belief that solving annoying little bugs and glitches that might inconvenience many, many people was an activity important enough to override my personal preferences. I could easily have found other combinations of pay and fun that pleased me better, so I have actually been through this dilemma in muted form in real life, and chose to suffer personally to hold off 'specks' like poorly designed user interfaces.

I do have great admiration for Eliezer, but he claims to want to be more rational and to welcome criticism intended to promote his progress on The Way, so I thought it would be OK to be critical of this post, which irked me because paragraph four is a straw man "fool" phrased in the second person, which seems like a sort of pre-emptive ad hominem against any reader of the post foolish enough to disagree with the premise of the writer. This seems like an extremely poor substitute for rational discourse, the sort of nonsense that could cost the writer Quirrell points, and none of us want that. I don't want to seem hostile, but since I am exactly the sort of fool who disagreed with the premise of paragraph 3, I do feel like I was being flamed a bit, and since I am apparently made of straw, flames make me nervous :)

Replies from: Jiro, ChristianKl
comment by Jiro · 2014-05-27T16:15:33.011Z · LW(p) · GW(p)

I myself prefer to give people $1 lottery tickets instead of $2.95 witty birthday cards. Am I wise or foolish in this?

You are foolish in this.

Birthday cards show that you specifically thought of someone's birthday and are celebrating it. Giving them something generic, regardless of value, doesn't serve the same purpose as a birthday card. By your reasoning you could not only substitute lottery tickets for birthday cards, you could substitute lottery tickets for saying the words "happy birthday" as well, thus never wishing them a happy birthday either.

Furthermore, since the lottery ticket is cheaper than the birthday card, and everyone knows this, and (apparently) this cheapness is one of your reasons for doing this, you are violating social expectations about when it is acceptable to be obviously cheap. (You can still be cheap, but you can't be obviously cheap about it.)

comment by ChristianKl · 2014-05-27T16:46:28.602Z · LW(p) · GW(p)

I myself prefer to give people $1 lottery tickets instead of $2.95 witty birthday cards. Am I wise or foolish in this? But posts here have branded all lottery purchases as foolish, so I must be a fool.

The foolish thing is to consider those two options the only choices.

comment by matteyas · 2014-10-18T00:22:01.285Z · LW(p) · GW(p)

This threshold thing is interesting. Just to make the idea itself solid, imagine this. You have a type of iron bar that bends completely elastically (no deformation) if a force of less than 100N is applied to it. Say the bars are more valuable if they have no such deformation. Would you apply 90N to 5 billion bars or 110N to one bar?

With this thought experiment, I reckon the idea is solidified and obvious, yes? The question that still remains, then, is whether dust specks in eyes is or is not affected by some threshold.

Though I suppose the issue could actually be dropped completely, if we now agree that the idea of a threshold is real. If there is a threshold and something is below it, then the disutility of doing it is indeed zero, regardless of how many times you do it. If something is above the threshold, shut up (or don't) and multiply.
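
A minimal encoding of that threshold idea, using the iron-bar numbers above (the damage units are arbitrary):

```python
def total_damage(force_per_bar, num_bars, threshold=100):
    """Below the threshold the bars stay elastic, so no damage accumulates,
    no matter how many bars you press on; above it, damage adds up as usual."""
    if force_per_bar < threshold:
        return 0
    return force_per_bar * num_bars     # crude stand-in for "amount of deformation"

print(total_damage(90, 5_000_000_000))  # 0   -- five billion bars, all unharmed
print(total_damage(110, 1))             # 110 -- one bar permanently bent
```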

comment by Elund · 2014-10-21T06:31:17.789Z · LW(p) · GW(p)

"Most people choose option 1." I find this hard to believe. Were they forced to answer quickly or under cognitive load, and without access to a calculator or pen and paper? I would appreciate it if you could edit the post to provide a citation.

comment by josinalvo · 2014-12-05T19:41:11.380Z · LW(p) · GW(p)

I am truly confused. This post does not endorse either side.

I just would like to note something about my cognitive process here: in the "step by step" argument, what I seem to be thinking is "rigorously the same torture" and "for more people". The argument may be sound, but it does not seem to be hitting my brain in a sound way.

comment by Epictetus · 2015-01-07T21:47:14.744Z · LW(p) · GW(p)

Let's turn the argument on its head: if you could abolish torture at the cost of everybody getting a speck of dust in their eye, would you do it?

Next question: If you could stop the Holocaust at the cost of N people getting dust specks in their eye, what's the maximum value you'd permit for N? That is, is there a number N with the property that if N people get dust specks in their eye, you prevent the Holocaust, but if N+1 people get dust specks in their eye, then the Holocaust proceeds on schedule?

Arguments that begin with something benign and proceed by degrees to silly conclusions have been around since ancient times. Dressing them up in the language of mathematics doesn't change their nature. The kind of utilitarian model presented here is not an accurate reflection of the real world. Locally you can get reasonable results if you decide to treat similar things on a linear scale, but for disparate things the linear approximation breaks down. You can still plug numbers into the model, but the answer is meaningless.

comment by dlrlw · 2015-01-21T15:24:27.729Z · LW(p) · GW(p)

The problem here is that you don't KNOW that the probability is 90%. What if it's 80%? or 60%? or 12%? In real life you will only run the experiment once. The probabilities are just a GUESS. The person who is making the guess has no idea what the real probabilities are. And as Mr. Yudkowsky has pointed out elsewhere, people consistently tend to underestimate the difficulty of a task. They can't even estimate with any accuracy how long it will take them to finish their homework. If you aren't in the business of saving people's lives in EXACTLY this same way, on a regular basis, the estimate of 90% is probably crap. And so is the estimate of 100% probability of saving 400 lives. All you can really say, is that you see fewer difficulties that way, from where you are standing now. It's a crap shoot, either way, because, once you get started, no matter which option you choose, difficulties you hadn't anticipated will arise.

This reminds me of 'the bridge experiment', where a test subject is given the opportunity to throw a fat person off a bridge in front of a train, and thereby save the lives of 5 persons trapped on the tracks up ahead. The psychologists bemoaned the lack of rationality of the test subjects, since most of them wouldn't throw the fat person off the bridge, and thus trade the lives of one person, for five. I was like, 'ARE YOU CRAZY? Do you think one fat person would DERAIL A TRAIN? What do you think cow catchers are for, fool? What if he BOUNCED a couple of times, and didn't end up on the rails? It's preposterous. The odds are 1000 to 1 against success. No sane person would take that bet.'

The psychologists supposedly fixed this concern by telling the test subjects that it was guaranteed that throwing the fat person off the bridge would succeed. Didn't work, because people STILL wouldn't buy into their preposterous plan.

Then the psychologists changed the experiment so that the test subject would just have to throw a switch on the track which would divert the train from the track where the five people were trapped to a track where just one person was trapped (still fat by the way). Far more of the test subjects said they would flip the switch than had said they would throw someone off the bridge. The psychologists suggested some preposterous sounding reason for the difference, I don't even remember what, but it seemed to me that the change was because the plan just seemed a lot more likely to succeed. The test subjects DISCOUNTED the assurances of the psychologists that the 'throw someone off the bridge plan' would succeed. And quite rationally too, if you ask me. What rational person would rely on the opinion of a psychologist on such a matter?

When the 90%/500 or 100%/400 question was posed, I felt myself having exactly the same reaction. I immediately felt DUBIOUS that the odds were actually 90%. I immediately discounted the odds. By quite a bit, in fact. Perhaps that was because of lack of self confidence, or hard won pessimism from years of real life experience, but I immediately discounted the odds. I bet a lot of other people did too. And I wouldn't take the bet, for exactly that reason. I didn't BELIEVE the odds, as given. I was skeptical. Interestingly enough though, I was less skeptical of the 'can't fail/100%' estimate, than of the 90% estimate. Maybe I could easily imagine a scenario where there was no chance of failure at all, but couldn't easily imagine a scenario where the odds were, reliably, 90%. Once you start throwing around numbers like 90%, in an imperfect world, what you're really saying is 'there is SOME chance of failure'. Estimating how much chance, would be very much a judgement call.

So maybe what you're looking at here isn't irrationality, or the inability to multiply, but rather rational pessimism about it being as easy as claimed.

comment by AlexanderRM · 2015-03-27T22:31:12.666Z · LW(p) · GW(p)

"My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life. After rejecting the report, the agency decided [i]not[/i] to implement the measure."

Does anyone know of a citation for this? Because I'd really like to be able to share it. I found this really, really hilarious until I realized that, according to Eliezer, it actually happened and killed people. Although it's still hilarious, just simultaneously horrifying. It sounds like somebody misunderstood the point of their own moral grandstanding. (On the other hand, I suppose a Deontologist could in fact say "you can't put a dollar value on human life" and literally mean "comparing human lives to dollars is inherently immoral", not "human lives have a value of infinity dollars". To me as a consequentialist the former seems even stupider than the latter, but in deontology it's acceptable moral reasoning.)

comment by David Keith Maslen · 2022-08-14T17:16:19.369Z · LW(p) · GW(p)

Small inconveniences have consequences, and a googolplex is a very large number. A speck of dust in a googolplex people's eyes is an inconceivably bad thing, and I think most people would understand that if they thought about what it entailed. What's the chance that a momentary distraction will cause an accident? One in a billion? One in a quadrillion? Anyway, you can be sure that a googolplex specks of dust will lead to large numbers of deaths and years of pain (much bigger than a googol). I suppose there is some minute way in which a speck of dust is more likely to cause a negative outcome than a positive one - if that's so, then using a "realistic" probability (say > 1e-100) the specks of dust seem by far the worse option. Shut up and multiply indeed!
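
Rough arithmetic behind that claim, done in log10 space because a googolplex cannot be written out; the per-speck accident probability is the guess from the paragraph above, not data:

```python
LOG10_GOOGOLPLEX = 10**100                 # log10 of the number of dust-specked people
log10_p_fatal_per_speck = -15              # "one in a quadrillion" chance a speck proves fatal (assumed)

# Expected fatalities = N * p, so log10(expected fatalities) = log10(N) + log10(p).
log10_expected_deaths = LOG10_GOOGOLPLEX + log10_p_fatal_per_speck
print(log10_expected_deaths)               # 10**100 - 15: the death toll itself is ~10**(10**100 - 15)
```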

I think the more interesting moral conclusion is that when you think about a probabilistic and uncertain world these minor inconveniences are not qualitatively different from major things like torture and death. The issue is that people don’t think about the variability in outcomes. I don’t think it’s a matter of “cold heartedly calculating”, but rather realizing that large numbers of small things have large consequences.

A more common example would be a traffic delay. Suppose you cause a 10-minute delay to all traffic world-wide. What do you get? First, there's the cost of some billions of dollars of lost work. Secondly, some ambulances arrive at hospitals too late for the patients to survive. Maybe some police and firemen are delayed.

But back to the original example. When you are talking about numbers like a googolplex, I'm not sure there is any negative thing that is noticeable or could be described that wouldn't multiply out to an obvious disaster that most people would consider to far outweigh the torture (i.e. torture of far greater numbers for longer).

So maybe the problem is not that people view some things as sacred, but rather that they haven't had the stakes explained to them?

The use of torture in the example also muddies the issue, I think. There are complicated reasons why people are against torture other than the pain inflicted. There is a (very imperfect) taboo against torture (and murder), and breaking this taboo is thought to possibly lead to more people being tortured. So the taboo nature of torture interferes with the ability or desire to calculate.