Torture vs. Dust Specks

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-30T02:50:28.000Z · LW · GW · Legacy · 629 comments

"What's the worst that can happen?" goes the optimistic saying.  It's probably a bad question to ask anyone with a creative imagination.  Let's consider the problem on an individual level: it's not really the worst that can happen, but would nonetheless be fairly bad, if you were horribly tortured for a number of years.  This is one of the worse things that can realistically happen to one person in today's world.

What's the least bad, bad thing that can happen?  Well, suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck.

For our next ingredient, we need a large number.  Let's use 3^^^3, written in Knuth's up-arrow notation:

3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall.  You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you've exponentiated 7625597484987 times.  That's 3^^^3.  It's the smallest simple inconceivably huge number I know.
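
For readers who want the recursion spelled out, here is a minimal Python sketch of Knuth's up-arrow notation (the standard hyperoperation recursion, nothing specific to this post). It can only evaluate tiny arguments; 3^^^3 itself is far beyond anything computable or storable.

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a ^...^ b with n up-arrows (Knuth's notation)."""
    if n == 1:
        return a ** b                      # one arrow is ordinary exponentiation
    if b == 0:
        return 1                           # base case of the hyperoperation recursion
    # n arrows: a ^(n) b = a ^(n-1) (a ^(n) (b - 1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3^3  = 27
print(up_arrow(3, 2, 3))   # 3^^3 = 3^27 = 7625597484987
# up_arrow(3, 3, 3) would be 3^^^3: a tower of 3s 7,625,597,484,987 layers tall.
# Evaluating it would exhaust time and memory long before finishing, which is
# rather the point of the example.
```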

Now here's the moral dilemma.  If neither event is going to happen to you personally, but you still had to choose one or the other:

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

I think the answer is obvious.  How about you?

629 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Tom_McCabe2 · 2007-10-30T03:25:11.000Z · LW(p) · GW(p)

Does this analysis focus on pure, monotone utility, or does it include the huge ripple effect putting dust specks into so many people's eyes would have? Are these people with normal lives, or created specifically for this one experience?

Replies from: lockeandkeynes, VAuroch
comment by lockeandkeynes · 2010-07-07T19:29:59.461Z · LW(p) · GW(p)

I think you can be allowed to imagine that any ripple effect caused by someone getting a barely-noticeable dust speck in their eyes (perhaps it makes someone mad enough to beat his dog) would be about the same as that of the torture (perhaps the torturers go home and beat their dogs because they're so desensitized to torturing).

comment by VAuroch · 2015-01-23T13:42:04.238Z · LW(p) · GW(p)

The ripple effect is real, but as in Pascal's Wager, for every possible situation where the timing is critical and something bad will happen if you are distracted for a moment, there's a counterbalancing situation where the timing is critical and something bad will happen unless you are distracted for a moment, so those probably balance out into noise.

Replies from: DragonGod
comment by DragonGod · 2017-06-08T13:41:07.871Z · LW(p) · GW(p)

I doubt this.

Replies from: VAuroch
comment by VAuroch · 2017-06-15T05:54:37.258Z · LW(p) · GW(p)

Why?

comment by g · 2007-10-30T03:36:34.000Z · LW(p) · GW(p)

The answer that's obvious to me is that my mental moral machinery -- both the bit that says "specks of dust in the eye can't outweigh torture, no matter how many there are" and the bit that says "however small the badness of a thing, enough repetition of it can make it arbitrarily awful" or "maximize expected sum of utilities" -- wasn't designed for questions with numbers like 3^^^3 in. In view of which, I profoundly mistrust any answer I might happen to find "obvious" to the question itself.

Replies from: RomanDavis, adamisom
comment by RomanDavis · 2010-05-24T13:54:25.848Z · LW(p) · GW(p)

Isn't this just appeal to humility? If not, what makes this different?

Replies from: MathMage
comment by MathMage · 2018-06-21T19:38:01.057Z · LW(p) · GW(p)

It is not humility to note that extrapolating models unimaginably far beyond their normal operating ranges is a fraught business. Just because we can apply a certain utility approximation to our monkeysphere, or even a few orders of magnitude above our monkeysphere, doesn't mean the limiting behavior matches our approximation.

comment by adamisom · 2012-11-04T03:29:05.128Z · LW(p) · GW(p)

In other words, your meta-cogitation is: 1 - do I trust my very certain intuition? or 2 - do I trust the heuristic from formal/mathematical thinking (which I see as useful partially and specifically to compensate for inaccuracies in our intuition)?

comment by Anon6 · 2007-10-30T03:48:25.000Z · LW(p) · GW(p)

Since there was a post on this blog a few days ago about how what seems obvious to the speaker might not be obvious to the listener, I thought I would point out that it was NOT AT ALL obvious to me what should be preferred: torture 1 man for 50 years, or a speck of dust in the eyes of 3^^^3 people. Can you please clarify/update what the point of the post was?

comment by Michael_G.R. · 2007-10-30T04:12:35.000Z · LW(p) · GW(p)

The dust speck is described as "barely enough to make you notice", so however many people it would happen to, it seems better than even something a lot less bad than 50 years of horrible torture. There are so many irritating things that a human barely notices in his/her life, what's an extra dust speck?

I think I'd trade the dust specks for even a kick in the groin.

But hey, maybe I'm missing something here...

Replies from: Insert_Idionym_Here
comment by Insert_Idionym_Here · 2011-10-31T18:17:34.496Z · LW(p) · GW(p)

If 3^^^3 people get dust in their eye, an extraordinary number of people will die. I'm not thinking even 1% of those affected will die, but perhaps 0.000000000000001% might, if that. But when dealing with numbers this huge, I think the death toll would measure greater than 7 billion. Knowing this, I would take the torture.

Replies from: thomblake
comment by thomblake · 2011-12-16T20:11:52.538Z · LW(p) · GW(p)

If 3^^^3 people get dust in their eye, an extraordinary number of people will die.

The premise assumes it's "barely enough to make you notice", which was supposed to rule out any other unpleasant side-effects.

Replies from: Insert_Idionym_Here
comment by Insert_Idionym_Here · 2011-12-16T21:23:49.733Z · LW(p) · GW(p)

No, I'm pretty sure it makes you notice. It's "enough". "barely enough", but still "enough". However, that doesn't seem to be what's really important. If I consider you to be correct in your interpretation of the dilemma, in that there are no other side effects, then yes, the 3^^^3 people getting dust in their eyes is a much better choice.

Replies from: dlthomas, dlthomas
comment by dlthomas · 2011-12-16T21:41:00.044Z · LW(p) · GW(p)

The thought experiment is, 3^^^3 bad events, each just so bad that you notice their badness. Considering consequences of the particular bad thing means that in fact there are other things as well that are depending on your choice, and that's a different thought experiment.

Replies from: Insert_Idionym_Here
comment by Insert_Idionym_Here · 2011-12-16T21:51:13.239Z · LW(p) · GW(p)

That is in no way what was said. Also, the idea of an event that somehow manages to have no effect aside from being bad is... insanely contrived. More contrived than the dilemma itself.

However, let's say that instead of 3^^^3 people getting dust in their eye, 3^^^3 people experience a single nano-second of despair, which is immediately erased from their memory to prevent any psychological damage. If I had a choice between that and torturing a person for 50 years, then I would probably choose the former.

Replies from: dlthomas
comment by dlthomas · 2011-12-16T22:00:31.501Z · LW(p) · GW(p)

That is in no way what was said. Also, the idea of an event that somehow manages to have no effect aside from being bad is... insanely contrived. More contrived than the dilemma itself.

The notion of 3^^^3 events of any sort is far more contrived than the elimination of knock-on effects of an event. There isn't enough matter in the universe to make that many dust specks, let alone the eyes to be hit and nervous systems to experience it. Of course it's contrived. It's a thought experiment. I don't assert that the original formulation makes it entirely clear; my point is to keep the focus on the actual relevant bit of the experiment - if you wander, you're answering a less interesting question.

Replies from: Insert_Idionym_Here
comment by Insert_Idionym_Here · 2011-12-16T22:10:15.873Z · LW(p) · GW(p)

I don't agree. The existence of 3^^^3 people, or 3^^^3 dust specks, is impossible because there isn't enough matter, as you said. The existence of an event that has only effects that are tailored to fit a particular person's idea of 'bad' does not fit my model of how causality works. That seems like a worse infraction, to me.

However, all of that is irrelevant, because I answered the more "interesting question" in the comment you quoted. To be blunt, why are we still talking about this?

Replies from: dlthomas
comment by dlthomas · 2011-12-16T22:21:51.351Z · LW(p) · GW(p)

I don't agree. The existence of 3^^^3 people, or 3^^^3 dust specks, is impossible because there isn't enough matter, as you said. The existence of an event that has only effects that are tailored to fit a particular person's idea of 'bad' does not fit my model of how causality works. That seems like a worse infraction, to me.

I'm not sure I agree, but "which impossible thing is more impossible" does seem an odd thing to be arguing about, so I'll not go into the reasons unless someone asks for them.

However, all of that is irrelevant, because I answered the more "interesting question" in the comment you quoted. To be blunt, why are we still talking about this?

I meant a more generalized you, in my last sentence. You in particular did indeed answer the more interesting question.

comment by dlthomas · 2011-12-16T21:44:35.768Z · LW(p) · GW(p)

[T]he 3^^^3 people getting dust in their eyes is a much better choice.

Can you explain a bit about your moral or decision theory that would lead you to conclude that?

Replies from: Insert_Idionym_Here
comment by Insert_Idionym_Here · 2011-12-16T21:56:26.747Z · LW(p) · GW(p)

Yes. I believe that because any suffering caused by the 3^^^3 dust specks is spread across 3^^^3 people, it is of lesser evil than torturing a man for 50 years. Assuming there to be no side effects to the dust specks.

Replies from: Nornagest, dlthomas, TimS
comment by Nornagest · 2011-12-16T22:08:21.603Z · LW(p) · GW(p)

That's not general enough to mean very much: it fits a number of deontological moral theories and a few utilitarian ones (what the right answer within virtue ethics is is far too dependent on assumptions to mean much), and seems to fit a number of others if you don't look too closely. Its validity depends greatly on which you've picked.

As best I can tell the most common utilitarian objection to TvDS is to deny that Specks are individually of moral significance, which seems to me to miss the point rather badly. Another is to treat various kinds of disutility as incommensurate with each other, which is at least consistent with the spirit of the argument but leads to some rather weird consequences around the edge cases.

Replies from: Insert_Idionym_Here
comment by Insert_Idionym_Here · 2011-12-16T22:21:55.011Z · LW(p) · GW(p)

No-one asked for a general explanation.

The best term I have found, the one that seems to describe the way I evaluate situations the most accurately, is consequentialism. However, that may still be inaccurate. I don't have a fully reliable way to determine what consequentialism entails; all I have is Wikipedia, at the moment.

I tend to just use cost-benefit analysis. I also have a mental, and quite arbitrary, scale of what things I do and don't value, and to what degree, to avoid situations where I am presented with multiple, equally beneficial choices. I also have a few heuristics. One of them essentially says that given a choice between a loss that is spread out amongst many, and an equal loss divided amongst the few, the former is the more moral choice. Does that help?

Replies from: Nornagest
comment by Nornagest · 2011-12-16T22:27:28.927Z · LW(p) · GW(p)

It helps me understand your reasoning, yes. But if you aren't arguing within a fairly consistent utilitarian framework, there's not much point in trying to convince others that the intuitive option is correct in a dilemma designed to illustrate counterintuitive consequences of utilitarianism.

So far it sounds like you're telling us that Specks is intuitively more reasonable than Torture, because the losses are so small and so widely distributed. Well, yes, it is. That's the point.

Replies from: Insert_Idionym_Here
comment by Insert_Idionym_Here · 2011-12-16T22:31:33.570Z · LW(p) · GW(p)

At what point is utilitarianism not completely arbitrary?

Replies from: Nornagest
comment by Nornagest · 2011-12-16T22:38:34.428Z · LW(p) · GW(p)

I'm not a moral realist. At some point it is completely arbitrary. The meta-ethics here are way outside the scope of this discussion; suffice it to say that I find it attractive as a first approximation of ethical behavior anyway, because it's a simple way of satisfying some basic axioms without going completely off the rails in situations that don't require Knuth up-arrow notation to describe.

But that's all a sideline: if the choice of moral theory is arbitrary, then arguing about the consequences of one you don't actually hold makes less sense than it otherwise would, not more.

Replies from: Insert_Idionym_Here
comment by Insert_Idionym_Here · 2011-12-16T22:45:15.497Z · LW(p) · GW(p)

I believe I suggested earlier that I don't know what moral theory I hold, because I am not sure of the terminology. So I may, in fact, be a utilitarian, and not know it, because I have not the vocabulary to say so. I asked "At what point is utilitarianism not completely arbitrary?" because I wanted to know more about utilitarianism. That's all.

Replies from: Nornagest
comment by Nornagest · 2011-12-16T22:50:08.561Z · LW(p) · GW(p)

Ah. Well, informally, if you're interested in pissing the fewest people off, which as best I can tell is the main point where moral abstractions intersect with physical reality, then it makes sense to evaluate the moral value of actions you're considering according to the degree to which they piss people off. That loosely corresponds to preference utilitarianism: specifically negative preference utilitarianism, but extending it to the general version isn't too tricky. I'm not a perfect preference utilitarian either (people are rather bad at knowing what they want; I think there are situations where what they actually want trumps their stated preference; but correspondence with stated preference is itself a preference and I'm not sure exactly where the inflection points lie), but that ought to suffice as an outline of motivations.

Replies from: Insert_Idionym_Here
comment by Insert_Idionym_Here · 2011-12-16T22:54:28.484Z · LW(p) · GW(p)

Thank you.

comment by dlthomas · 2011-12-16T22:12:45.826Z · LW(p) · GW(p)

That's not quite what I meant by "explain" - I had understood that to be your position, and was trying to get insight into your reasoning.

Drawing an analogy to mathematics, would you say that this is an axiom, or a theorem?

If an axiom, it clearly must be produced by a schema of some sort (as you clearly don't have 3^^^3 incompressible rules in your head). Can you explore somewhat the nature of that schema?

If a theorem, what sort of axioms, and how arranged, produce it?

comment by TimS · 2011-12-16T22:22:25.982Z · LW(p) · GW(p)

When I participated in this debate, this post convinced me that a utilitarian must believe that dust specks cause more overall suffering (or whatever badness measure you prefer). Since I already wasn't a utilitarian, this didn't bother me.

Replies from: dlthomas
comment by dlthomas · 2011-12-16T22:47:20.851Z · LW(p) · GW(p)

As a utilitarian (in broad strokes), I agree, and this doesn't bother me because this example is so far out of the range of what is possible that I don't object to saying, "yes, somewhere out there torture might be a better choice." I don't have to worry about that changing what the answer is around these parts.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-30T04:24:10.000Z · LW(p) · GW(p)

Anon, I deliberately didn't say what I thought, because I guessed that other people would think a different answer was "obvious". I didn't want to prejudice the responses.

Replies from: None
comment by [deleted] · 2015-03-24T19:46:30.187Z · LW(p) · GW(p)

So what do you think?

Replies from: dxu
comment by dxu · 2015-03-24T20:27:40.617Z · LW(p) · GW(p)

He gives his answer here.

Replies from: None
comment by [deleted] · 2015-03-25T13:00:46.161Z · LW(p) · GW(p)

Thank you!

Replies from: coldlyrationalogic
comment by coldlyrationalogic · 2015-04-10T14:43:43.996Z · LW(p) · GW(p)

Exactly. If Eliezer had gone out and said what he thought, nothing good would have come of it. The point is to make you think.

comment by Anon_prime · 2007-10-30T04:35:03.000Z · LW(p) · GW(p)

Even when applying the cold cruel calculus of moral utilitarianism, I think that most people acknowledge that egalitarianism in a society has value in itself, and assign it positive utility. Would you rather be born into a country where 9/10 people are destitute (<$1,000/yr), and the last is very wealthy ($100,000/yr)? Or, be born into a country where almost all people subsist on a modest ($6,000-8,000/yr) amount?

Any system that allocates benefits (say, wealth) more fairly might be preferable to one that allocates more wealth in a more unequal fashion. And, the same goes for negative benefits. The dust specks may result in more total misery, but there is utility in distributing that misery equally.

Replies from: DanielLC, Mestroyer, JasonCoston
comment by DanielLC · 2009-12-27T02:39:08.497Z · LW(p) · GW(p)

I don't believe egalitarianism has value in itself. Tell me, would you rather get all your wealth continuously throughout the year, or get a disproportionate amount on Christmas?

If wealth is evenly distributed, it will lead to more total happiness, but I don't see any advantage in happiness being evenly distributed.

I don't see how your comment relates to this post.

Replies from: byrnema
comment by byrnema · 2010-12-17T19:49:48.610Z · LW(p) · GW(p)

Perhaps it could be framed in terms of the utility of psychological comfort. Suppose that one person is tortured to avoid 3^^^3 people getting dust specks. Won't almost every one of those 3^^^3 people empathize with the tortured person enough to feel a pang of discomfort more uncomfortable than a dust speck?

Replies from: jimrandomh
comment by jimrandomh · 2010-12-17T20:18:36.299Z · LW(p) · GW(p)

Only if they find out that the tortured person exists, which would be an event that's not in the problem statement.

comment by Mestroyer · 2012-05-27T14:36:27.361Z · LW(p) · GW(p)

Well, there's valuing money at more utility per dollar when you have less money and less utility per dollar when you have more money, which makes perfect sense. But that's not the same as egalitarianism as part of utility.

comment by JasonCoston · 2013-08-17T08:25:47.762Z · LW(p) · GW(p)

Third-to-last sentence sets up a false dichotomy between "more fairly" and "more unequal."

comment by Kat · 2007-10-30T04:56:38.000Z · LW(p) · GW(p)

The dust specks seem like the "obvious" answer to me, but how large the tiny harm must be to cross the line where the unthinkably huge number of them outweighs a single tremendous one isn't something I could easily say, when clearly I don't think simply calculating the total amount of harm caused is the right measure.

comment by Kyle2 · 2007-10-30T05:13:46.000Z · LW(p) · GW(p)

It seems obvious to me to choose the dust specks, because that would mean that the human species would have to exist for an awfully long time for the total number of people to equal that number, and that minimum amount of annoyance would be something they were used to anyway.

comment by Paul_Gowder · 2007-10-30T05:31:12.000Z · LW(p) · GW(p)

I too see the dust specks as obvious, but for the simpler reason that I reject utilitarian sorts of comparisons like that. Torture is wicked, period. If one must go further, it seems like the suffering from torture is qualitatively worse than the suffering from any number of dust specks.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2014-08-12T19:30:58.533Z · LW(p) · GW(p)

I too see the dust specks as obvious, but for the simpler reason that I reject utilitarian sorts of comparisons like that. Torture is wicked, period.

I think you have misunderstood the point of the thought experiment. Eliezer could have imagined that the intense and prolonged suffering experienced by the victim was not intentionally caused, but was instead the result of natural causes. The "torture is wicked" reply cannot be used to resist the decision to bring about this scenario. (There may, of course, be other reasons for objecting to that decision.)

comment by michael_vassar3 · 2007-10-30T05:34:47.000Z · LW(p) · GW(p)

Anon prime: dollars are not utility. Economic egalitarianism is instrumentally desirable. We don't normally favor all types of equality, as Robin frequently points out.

Kyle: cute

Eliezer: My impulse is to choose the torture, even when I imagine very bad kinds of torture and very small annoyances (I think that one can go smaller than a dust mote, possibly something like a letter on the spine of a book that your eye sweeps over being in a shade less well selected a font). Then, however, I think of how much longer the torture could last and still not outweigh the trivial annoyances if I am to take the utilitarian perspective and my mind breaks. Condoning 50 years of torture, or even a day worth, is pretty much the same as condoning universes of agonium lasting for eons in the face of numbers like these, and I don't think that I can condone that for any amount of a trivial benefit.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-02T04:00:03.354Z · LW(p) · GW(p)

(This was my favorite reply, BTW.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-02T04:08:17.092Z · LW(p) · GW(p)

I admire the restraint involved in waiting nearly five years before selecting a favorite.

Replies from: Friendly-HI
comment by Friendly-HI · 2014-05-27T13:11:52.963Z · LW(p) · GW(p)

Well too bad he didn't wait a year longer then ;). I think preferring torture is the wrong answer for the same reason that I think universal health-care is a good idea. The financial cost of serious illness and injury is distributed over the taxpaying population so no single individual has to deal with a spike in medical costs ruining their life. And I think it's still the correct moral choice regardless of whether universal health-care happens to be more expensive or not.

Analogously, I think the exact same applies to dust vs torture. I don't think the correct moral choice is about minimizing the total area under the pain-curve at all, it's about avoiding severe pain-spikes for any given individual even at the cost of having a larger area under the curve. I don't think "shut up and multiply" applies here in its simplistic conception in the way it might apply in the scenario where you have to choose whether 400 people live for sure or 500 people live with .9 probability (and die with .1 probability).


Irrespective of the former however, the thought experiment is a bit problematic because it's more complex than apparent at first, if we really take it seriously. Eliezer said the dust-specks are "barely noticed", but being conscious or aware of something isn't an either-or thing; awareness falls on a continuum, so whatever "pain" the dust-specks cause has to be multiplied by how aware the person really is. If someone is tortured, that person is presumably very aware of the physical and emotional pain.

Other possible consequences like lasting damage or social repercussions not counting, I don't really care all that much about any kind of pain that happens to me while I'm not aware of it. I could probably figure out whether or not pain is actually registered in my brain during my upcoming operation under anesthesia, but the fact that I won't bother tells me very clearly that awareness of pain is an important weight we have to multiply in some fashion with the actual pain-registration in the brain.

That's just an additional consideration though. Even if we simplify it and imagine the pain is directly comparable and has no difference in quality at all, while the total quantity of pain is excessively higher in the dust scenario compared to the torture scenario, it changes nothing about my current choice.

So what does that tell me about the relationship between utility and morality? I don't accept that morality is just about the total lump sums of utility and disutility; I think we also have to consider the distribution of those in any given population. Why is that, I ask myself, and my brain offers the following answer to this question:

If I was the only agent in the entire universe and had to pick torture vs dust for myself (and obviously if I was immortal/ had a long enough life to experience all those dust specks), I would still prefer the larger area under the curve over the pain-spike, even if I assume direct comparability of the two kinds of pain. I suspect the reason for this choice is a type of time-discounting my brain does, I'd rather suffer a little pain every day for a trillion years than a big spike for 50 years. Considering that briefly speaking utility is (or at least I think should be defined as) a thing that only results from the interaction of minds and environments, my mind and its workings are definitely part of the equation that says what has utility and what doesn't. And my mind wants to suffer low disutility evenly distributed over a long time-period rather than suffer great disutility for a 50 year spike (assuming a trillion-year lifetime).
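
To make the time-discounting point concrete, here is a rough sketch with entirely made-up numbers (the pain values, day counts, and discount factor below are arbitrary assumptions): under a plain area-under-the-curve rule the thin-but-endless stream is worse, but with per-day exponential discounting the 50-year spike comes out worse instead.

```python
SPREAD_PAIN, SPREAD_DAYS = 1.0, 365 * 10**12     # a little pain every day, for about a trillion years
SPIKE_PAIN, SPIKE_DAYS = 200.0, 365 * 50         # intense pain for 50 years, then nothing

def area(pain_per_day, days):
    # total disutility as the plain area under a constant pain curve
    return pain_per_day * days

def discounted(pain_per_day, days, beta=0.9995):
    # closed form of sum(pain_per_day * beta**t for t in range(days))
    return pain_per_day * (1 - beta ** days) / (1 - beta)

print(area(SPREAD_PAIN, SPREAD_DAYS) > area(SPIKE_PAIN, SPIKE_DAYS))              # True: the spread has more total pain
print(discounted(SPREAD_PAIN, SPREAD_DAYS) < discounted(SPIKE_PAIN, SPIKE_DAYS))  # True: discounted, the spike is worse
```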

Replies from: Jiro
comment by Jiro · 2014-05-27T15:58:46.082Z · LW(p) · GW(p)

I don't think the correct moral choice is about minimizing the total area under the pain-curve at all, it's about avoiding severe pain-spikes for any given individual even at the cost of having a larger area under the curve.

If you're going to say that, you'll need some threshold, and pain over the threshold makes the whole society count as worse than pain under the threshold. This will mean that any number of people with pain X is better than one person with pain X + epsilon, where epsilon is very small but happens to push it over the threshold.

Alternately, you could say that the disutility of pain gradually changes, but that has other problems. I suggest you read up on the repugnant conclusion ( http://plato.stanford.edu/entries/repugnant-conclusion/ )--depending on exactly what you mean, what you suggest is similar to the proposed solutions, which don't really work.
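
The threshold objection can be made concrete with a small sketch; the cutoff of 1.0 and the pain values are arbitrary assumptions. A lexical rule that always ranks above-threshold pain as worse will prefer any number of just-under-threshold pains to a single pain an epsilon over the line.

```python
THRESHOLD = 1.0   # arbitrary cutoff between pain that "counts" and pain that doesn't

def society_badness(pains):
    # Lexical ranking: compare above-threshold pain first, and only use
    # sub-threshold pain as a tie-breaker.
    severe = sum(p for p in pains if p > THRESHOLD)
    mild = sum(p for p in pains if p <= THRESHOLD)
    return (severe, mild)

many_mild = [THRESHOLD - 0.001] * 10**6   # a vast number of pains just under the line
one_severe = [THRESHOLD + 0.001]          # one pain an epsilon over the line

# True: the single slightly-over-threshold pain ranks as worse than any number
# of sub-threshold pains, which is exactly the oddity described above.
print(society_badness(one_severe) > society_badness(many_mild))
```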

comment by Tiiba2 · 2007-10-30T05:50:39.000Z · LW(p) · GW(p)

Personally, I choose C: torture 3^^^3 people for 3^^^3 years. Why? Because I can.

Ahem. My morality is based on maximizing average welfare, while also avoiding extreme individual suffering, rather than cumulative welfare.

So torturing one man for fifty years is not preferable to annoying any number of people.

This is different when the many are also suffering extremely, though - then it may be worthwhile to torture one even more to save the rest.

comment by Jonathan_El-Bizri · 2007-10-30T06:00:51.000Z · LW(p) · GW(p)

Trivial annoyances and torture cannot be compared in this quantifiable manner. Torture is not only suffering, but lost opportunity due to imprisonment, permanent mental hardship, activation of pain and suffering processes in the mind, and a myriad of other unconsidered things.

And even if the torture was 'to have flecks of dust dropped in your eyes', you still can't compare a 'torturous amount' applied to one person to a substantial number dropped in the eyes of many people: we aren't talking about cpu cycles here - we are trying to quantify qualifiables.

If you revised the question, and stated exactly how the torture would affect the individual, and how they would react to it, and the same for each of the 'dust in the eyes' people (what if one goes blind? what of their mental capacity to deal with the hardship? what of the actual level of moisture in their eyes, and consequently the discomfort being felt?) then, maybe then, we could determine which was the worse outcome, and by how much.

There are simply too many assumptions that we have to make in this mortal world to determine the answer to such questions: you might as well ask how many angels dance on the head of a pin. Or you could start more simply and ask: if you were to torture two people in exactly the same way, which one would suffer more, and by how much?

And you notice, I haven't even started to think about the ethical side of the question...

Replies from: DanielLC, snewmark
comment by DanielLC · 2009-12-27T03:20:52.040Z · LW(p) · GW(p)

Can you compare apples and oranges? You certainly don't seem to have much trouble when you decide how to spend your money at the grocery store.

It was rather clear from the context that the "dust in the eye" was a very, very minor torture. People are not going blind. They are perfectly capable of dealing with it. It's just not 3^^^3 times as minor as the torture.

If you were to torture two people in exactly the same way, they'd suffer about equally. Why do you imply that's some sort of unanswerable question?

If you weren't talking about the ethical side, what were you talking about? He wasn't trying to compare everything about the two choices, just which was more ethical. It would be impossible if he didn't limit it like that.

comment by snewmark · 2016-06-02T16:12:52.070Z · LW(p) · GW(p)

And you notice, I haven't even started to think about the ethical side of the question...

I'm pretty sure the question itself revolves around ethics, as far as I can tell the question is: given these 2 choices, which would you consider, ethically speaking, the ideal option?

comment by Psy-Kosh · 2007-10-30T06:14:05.000Z · LW(p) · GW(p)

I think this all revolves around one question: Is "disutility of dust speck for N people" = N*"disutility of dust speck for one person"?

This, of course, depends on the properties of one's utility function.

How about this... Consider one person getting, say, ten dust specks per second for an hour vs 10×60×60 = 36,000 people getting a single dust speck each.

This is probably a better way to probe the issue at its core. Which of those situations is preferable? I would probably consider the second. However, I suspect one person getting a billion dust specks in their eye per second for an hour would be preferable to 1000 people getting a million per second for an hour.

Suffering isn't linear in dust specks. Well, actually, I'm not sure subjective states in general can be viewed in a linear way. At least, if there is a potentially valid "linear qualia theory", I'd be surprised.
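
One way both of those preferences can be held at once is if each person's disutility is a saturating function of how many specks they personally get; the curve and constants below are made-up assumptions, chosen only to illustrate the non-linearity.

```python
def per_person_disutility(n_specks: float, cap: float = 1.0, scale: float = 1e12) -> float:
    # Grows roughly quadratically for small n (a barrage concentrated on one
    # person hurts more than the same specks spread over many people), then
    # saturates at `cap` (there is only so much one person can suffer from specks).
    return cap * n_specks**2 / (n_specks**2 + scale)

f = per_person_disutility

# One person with ten specks/second for an hour vs 36,000 people with one speck each:
print(f(36_000) > 36_000 * f(1))       # True: the concentrated case is worse

# One person with a billion specks/second for an hour vs 1,000 people with a million/second:
print(f(3.6e12) < 1_000 * f(3.6e9))    # True: here the concentrated case is better
```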

But as far as the dust specks vs torture thing in the original question? I think I'd go with dust specks for all.

But that's one person vs buncha people with dustspecks.

comment by Psy-Kosh · 2007-10-30T06:24:01.000Z · LW(p) · GW(p)

Oh, just had a thought. A less extreme yet quite related real world situation/question would be this: What is appropriate punishment for spammers?

Yes, I understand there're a few additional issues here, that would make it more analogous to, say, if the potential torturee was planning on deliberately causing all those people a DSE (Dust Speck Event)

But still, the spammer issue gives us a more concrete version, involving quantities that don't make our brains explode, so considering that may help work out the principles by which these sorts of questions can be dealt with.

comment by Jonathan_El-Bizri · 2007-10-30T06:51:19.000Z · LW(p) · GW(p)

The problem with spammers isn't that they cause a singular dust speck event: it's that they cause multiple dust speck events repeatedly to individuals in the population in question. It's also a 'tragedy of the commons' question, since there is more than one spammer.

To respond to your question: What is appropriate punishment for spammers? I am sad to conclude that until Aubrey DeGray manages to conquer human mortality, or the singularity occurs, there is no suitable punishment for spammers.

After either of those, however, I would propose unblocking everyone's toilets and/or triple shifts as a Fry's Electronics floor lackey until the universal heat death, unless you have even >less< interesting suggestions.

comment by Pete_Carlton · 2007-10-30T06:52:08.000Z · LW(p) · GW(p)

If you could take all the pain and discomfort you will ever feel in your life, and compress it into a 12-hour interval, so you really feel ALL of it right then, and then after the 12 hours are up you have no ill effects - would you do it? I certainly would. In fact, I would probably make the trade even if it were 2 or 3 times longer-lasting and of the same intensity. But something doesn't make sense now... am I saying I would gladly double or triple the pain I feel over my whole life?

The upshot is that there are some very nonlinear phenomena involved with calculating amounts of suffering, as Psy-Kosh and others have pointed out. You may indeed move along one coordinate in "suffering-space" by 3^^^3 units, but it isn't just absolute magnitude that's relevant. That is, you cannot recapitulate the "effect" of fifty years of torturing with isolated dust specks. As the responses here make clear, we do not simply map magnitudes in suffering space to moral relevance, but instead we consider the actual locations and contours. (Compare: you decide to go for a 10-mile hike. But your enjoyment of the hike depends more on where you go, than the distance traveled.)

Replies from: JoeSchmoe
comment by JoeSchmoe · 2009-09-12T07:44:13.960Z · LW(p) · GW(p)

"If you could take all the pain and discomfort you will ever feel in your life, and compress it into a 12-hour interval, so you really feel ALL of it right then, and then after the 12 hours are up you have no ill effects - would you do it? I certainly would.""

Hubris. You don't know, can't know, how that pain would/could be instrumental in processing external stimuli in ways that enable you to make better decisions.

"The sort of pain that builds character, as they say".

The concept of processing 'pain' in all its forms is rooted very deep in humanity -- get rid of it entirely (as opposed to modulating it as we currently do), and you run a strong risk of throwing the baby out with the bathwater, especially if you then have an assurance that your life will have no pain going forward. There's a strong argument to be made for deference to traditional human experience in the face of the unknown.

comment by James_Bach · 2007-10-30T07:30:57.000Z · LW(p) · GW(p)

Yes the answer is obvious. The answer is that this question obviously does not yet have meaning. It's like an ink blot. Any meaning a person might think it has is completely inside his own mind. Is the inkblot a bunny? Is the inkblot a Grateful Dead concert? The right answer is not merely unknown, because there is no possible right answer.

A serious person-- one who take moral dilemmas seriously, anyway-- must learn more before proceeding.

The question is an inkblot because too many crucial variables have been left unspecified. For instance, in order for this to be an interesting moral dilemma I need to know that it is a situation that is physically possible, or else analogous to something that is possible. Otherwise, I can't know what other laws of physics or logic apply or don't apply, and therefore can't make an assessment. I need to know what my position is in this universe. I need to know why this power has been invested in me. I need to know the nature of the torture and who the person is who will be tortured. I need to consider such factors as what the torture may mean to other people who are aware of it (such as the people doing the torture). I need to know something about the costs and benefits involved. Will the person being tortured know they are being tortured? Or can it be arranged that they are born into the torture and consider it a normal part of their life. Will the person being tortured have volunteered to have been tortured? Will the dust motes have peppered the eyes of all those people anyway? Will the torture have happened anyway? Will choosing torture save other people from being tortured?

It would seem that torture is bad. On the other hand, just being alive is a form of torture. Each of us has a Sword of Damocles hanging over us. It's called mortality. Some people consider it torture when I keep telling them they haven't finished asking their question...

comment by douglas · 2007-10-30T07:45:40.000Z · LW(p) · GW(p)

The non-linear nature of 'qualia' and the difficulty of assigning a utility function to such things as 'minor annoyance' has been noted before. It seems to some insolvable. One solution presented by Dennett in 'Consciousness Explained' is to suggest that there is no such thing as qualia or subjective experience. There are only objective facts. As Searle calls it 'consciousness denied'. With this approach it would (at least theoretically) be possible to objectively determine the answer to this question based on something like the number of ergs needed to fire the neurons that would represent the outcomes of the two different choices. The idea of which would be the more/less pleasant experience is therefore not relevant as there is no subjective experience to be had in the first place. Of course I'm being sloppy here- the word choice would have to be re-defined to include that each action is determined by the physical configuration of the brain and that the chooser is in fact a fictional construct of that physical configuration. Otherwise, I admit that 3^^^3 people is not something I can easily contemplate, and that clouds my ability to think of an answer to this question.

comment by Psy-Kosh · 2007-10-30T07:49:17.000Z · LW(p) · GW(p)

Uh... If there's no such thing as qualia, there's no such thing as actual suffering, unless I misunderstand your description of Dennett's views.

But if my understanding is correct, and those views were correct, then wouldn't the answer be "nobody actually exists to care one way or another?" (Or am I sorely mistaken in interpreting that view?)

comment by James_Bach · 2007-10-30T07:54:48.000Z · LW(p) · GW(p)

Regarding your example of income disparity: I might rather be born into a system with very unequal incomes, if, as in America (in my personal and biased opinion), there is a reasonable chance of upping my income through persistence and pluck. I mean hey, that guy with all that money has to spend it somewhere-- perhaps he'll shop at my superstore!

But wait, what does wealth mean? In the case where everyone has the same income, where are they spending their money? Are they all buying the same things? Is this a totalitarian state? An economy without disparity is pretty disturbing to contemplate, because it means no one is making an effort to do better than other people, or else no one can do better. Money is not being concentrated or funnelled anywhere. Sounds like a pretty moribund economy.

If it's a situation where everyone always gets what they want and need, then wealth will have lost its conventional meaning, and no one will care whether one person is rich and another one isn't. What they will care about is the success of their God, their sports teams, and their children.

I guess what I'm saying is that there may be no interesting way to simplify interesting moral dilemmas without destroying the dilemma or rendering it irrelevant to natural dilemmas.

comment by J_Thomas · 2007-10-30T08:21:01.000Z · LW(p) · GW(p)

If even one in a hundred billion of the people is driving and has an accident because of the dust speck and gets killed, that's a tremendous number of deaths. If one in a hundred quadrillion of them survives the accident but is mangled and spends the next 50 years in pain, that's also a tremendous amount of torture.

If one in a hundred decillion of them is working in a nuclear power plant and the dust speck makes him have a nuclear accident....

We just aren't designed to think in terms of 3^^^3. It's too big. We don't habitually think much about one-in-a-million chances, much less one in a hundred decillion. But a hundred decillion is a very small number compared to 3^^^3.

Replies from: marenz, ata, homunq
comment by marenz · 2010-09-20T22:01:22.892Z · LW(p) · GW(p)

I would say that it is pretty easy to think in terms of 3^^^3. Just assume that everything that could happen due to a dust speck in your eye, will happen.

comment by ata · 2010-09-20T22:11:44.252Z · LW(p) · GW(p)

That is an interesting argument (I've considered it before) though I think it misses the point of the thought experiment. As I understand it, it's not about any of the possible consequences of the dust specks, but about specks as (very minor) intrinsically bad things themselves. It's about whether you're willing to measure the unpleasantness of getting a dust speck in your eye on the same scale as the unpleasantness of being tortured, as (vastly) different in degree rather than fundamentally different in kind.

comment by homunq · 2012-04-13T13:29:55.443Z · LW(p) · GW(p)

How do you know that more accidents are caused than avoided by dust specks?

(Of course I realize I'm saying "you" to a 5-year-old comment but you get the picture.)

comment by g · 2007-10-30T10:06:46.000Z · LW(p) · GW(p)

Douglas and Psy-Kosh: Dennett explicitly says that in denying that there are such things as qualia he is not denying the existence of conscious experience. Of course, Douglas may think that Dennett is lying or doesn't understand his own position as well as Douglas does.

James Bach and J Thomas: I think Eliezer is asking us to assume that there are no knock-on effects in either the torture or the dust-speck scenario, and the usual assumption in these "which economy would you rather have?" questions is that the numbers provided represent the situation after all parties concerned have exerted whatever effort they can. (So, e.g., if almost everyone is described as destitute, then it must be a society in which escaping destitution by hard work is very difficult.) Of course I agree with both of you that there's danger in this sort of simplification.

comment by Sebastian_Hagen2 · 2007-10-30T10:26:46.000Z · LW(p) · GW(p)

J Thomas: You're neglecting that there might be some positive side-effects for a small fraction of the people affected by the dust specks; in fact, there is some precedent for this. The resulting average effect is hard to estimate, but (considering that dust specks seem to mostly add entropy to the thought processes of the affected persons) would likely still be negative.

Copying g's assumption that higher-order effects should be neglected, I'd take the torture. For each of the 3^^^3 persons, the choice looks as follows:

1.) A 1/(3^^^3) chance of being tortured for 50 years.
2.) A 1 chance of getting a dust speck.

I'd definitely prefer the former. That probability is so close to zero that it vastly outweighs the differences in disutility.
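
Spelled out as a per-person expected-disutility calculation, the point is that any finite torture-to-speck ratio is swamped once the population is large enough. This is only a sketch: N is a stand-in for 3^^^3, which cannot be represented at all, and the figure of 10**9 speck-equivalents for fifty years of torture is a pure assumption.

```python
from fractions import Fraction

N = 10**100                        # stand-in for 3^^^3 (still vastly too small, but huge)
TORTURE_IN_SPECK_UNITS = 10**9     # assumed disutility of 50 years of torture, in speck units
SPECK = 1                          # disutility of one dust speck, taken as the unit

expected_if_torture_chosen = Fraction(TORTURE_IN_SPECK_UNITS, N)   # a 1/N chance of the torture
expected_if_specks_chosen = Fraction(SPECK)                        # the speck with certainty

print(expected_if_torture_chosen < expected_if_specks_chosen)      # True
```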

comment by Rick_Smith · 2007-10-30T10:49:23.000Z · LW(p) · GW(p)

Hmm, tricky one.

Do I get to pick the person who has to be tortured?

comment by Tomhs2 · 2007-10-30T11:03:53.000Z · LW(p) · GW(p)

As I read this I knew my answer would be the dust specks. Since then I have been mentally evaluating various methods for deciding on the ethics of the situation and have chosen the one that makes me feel better about the answer I instinctively chose.

I can tell you this though. I reckon I personally would choose max five minutes of torture to stop the dust specks event happening. So if the person threatened with 50yrs of torture was me, I'd choose the dust specks.

comment by Benquo · 2007-10-30T11:49:14.000Z · LW(p) · GW(p)

What if it were a repeatable choice?

Suppose you choose dust specks, say, 1,000,000,000 times. That's a considerable amount of torture inflicted on 3^^^3 people. I suspect that you could find the number of times equivalent to torturing each of those 3^^^3 people for 50 years, and that number would be smaller than 3^^^3. In other words, choose the dust speck enough times, and more people would be tortured effectually for longer than if you chose the 50-year torture an equivalent number of times.

If that math is correct, I'd have to go with the torture, not the dust specks.

Replies from: themusicgod1
comment by themusicgod1 · 2013-10-02T21:29:16.298Z · LW(p) · GW(p)

Likewise, if this was iterated 3^^^3+1 times (ie 3^^^3 plus the reader), it could easily be 50*3^^^3 (ie > 3^^^3+1) people tortured. The odds are that if it's possible for you to make this choice, then unless you have reason to believe otherwise, they may too, making this an implicit prisoner's dilemma of sorts. On the other side, 3^^^3 specks could possibly crush you and/or your local cluster of galaxies into a black hole, so there's that to consider if you consider the life within meaningful distance of every one of those 3^^^3 people valuable.

Replies from: Benquo
comment by Benquo · 2013-10-03T13:15:49.695Z · LW(p) · GW(p)

I'm not sure I follow your argument.

I'm going to assume that for a single person, 3^^3 dust specks = 50 years of torture. (My earlier figure seems wrong, but 3^^3 dust specks over 50 years is a little under 5,000 dust specks per second.) I'm going to ignore the +1 because these are big numbers already.

If this were iterated 3^^^3 times, then we have the choice between:

TORTURE: 3^^^3 people are each tortured for 50 years, once.

DUST SPECKS: 3^^^3 people are tortured for 50 years, repeated (3^^^3)/(3^^3)=3^(3^^3-3^3) times.
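
The per-second figure mentioned above is easy to check (assuming 365-day years):

```python
SPECKS = 3 ** 3 ** 3                            # 3^^3 = 3^27 = 7,625,597,484,987
SECONDS_IN_50_YEARS = 50 * 365 * 24 * 60 * 60   # about 1.58 billion seconds
print(SPECKS / SECONDS_IN_50_YEARS)             # ~4836 specks per second, a little under 5,000
```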

Replies from: themusicgod1
comment by themusicgod1 · 2013-10-04T18:59:51.596Z · LW(p) · GW(p)

The probability I'm the only person selected out of 3^^^3 for such a decision, p(i), is less than any reasonable estimate of how many people could be selected, imho. Let's say well below 700dB against. The chances are much greater that some proportion of those about to be dust specked or tortured also get this choice (p(k)). p(k)*3^^^3 > p(i) => 3^^^3 > p(i)/p(k) => true for any reasonable p(i)/p(k)

So this means that the effective number of dust particles given to each of us is going to be roughly (1-p(i))p(k)3^^^3.

I'm going to assume any amount of dust larger in mass than a few orders of magnitude above the Chandrasekhar limit (1e33 kg) is going to result in a black hole. I can even assume a significant error margin in my understanding of how black holes work, and the results do not change.

The smallest dust particle is probably a single hydrogen atom (really everything resolves to hydrogen at small enough quantities, right?). 1 mol of hydrogen weighs about 1 gram. So (1-p(i)) p(k) 3^^^3 (1 gram/mol)(6e-23 'specks'/mol)(1e-3 kg/g)(1e-33 kg/black hole) = roughly (3^^^3)(~1e-730) = roughly 3^^^3 black holes.

ie 3^(3_1^3_2^3_3^...^3_7e13 -730) = roughly 3^(3_1^3_2^3_3^...^3_7e13)

ie 3_1^3_2^3_3^...^3_7e13 - 730 = roughly 3_1^3_2^3_3^...^3_7e13.

In conclusion, I think at this level, I would choose 'cancel' / 'default' / 'roll a die and determine the choice randomly/not choose' BUT would woefully update my concept of the size of the universe to contain enough mass to even support a reasonably infinitesimal probability of some proportion of 3^^^3 specks of dust, and 3^^^3 people or at least some reasonable proportion thereof.

The question I have now is how is our model of the universe to update given this moral dilemma? What is the new radius of the universe given this situation? It can't be big enough for 3^^^3 dust specks piled on the edge of our universe outside of our light cone somewhere. Either way I think the new radius ought to be termed the "Yudkowsky Radius".

Replies from: Benquo
comment by Benquo · 2013-10-05T22:25:00.709Z · LW(p) · GW(p)

I don't really care what happens if you take the dust speck literally; the point is to exemplify an extremely small disutility.

Replies from: themusicgod1
comment by themusicgod1 · 2013-10-06T06:27:54.461Z · LW(p) · GW(p)

I suppose you could view the utility as a meaningful object in this frame and abstract away the dust, too, but in the end the dust-utility system is going to encompass both anyway, so solving the problem on either level is going to solve it on both.

comment by Zubon2 · 2007-10-30T12:30:32.000Z · LW(p) · GW(p)

Kyle wins.

Absent using this to guarantee the nigh-endless survival of the species, my math suggests that 3^^^3 beats anything. The problem is that the speck rounds down to 0 for me.

There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0. For the speck, I am going to blink in the next few seconds anyway.

That in no way addresses the intent of the question, since we can just increase it to the minimum that does not round down. Being poked with a blunt stick? Still hard, since I think every human being would take one stick over some poor soul being tortured. Do I really get to be the moral agent for 3^^^3 people?

As others have said, our moral intuitions do not work with 3^^^3.

Replies from: ThoughtSpeed
comment by ThoughtSpeed · 2017-06-19T02:11:54.707Z · LW(p) · GW(p)

There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0.

Why would that round down to zero? That's a lot more people having cancer than getting nuked!

(It would be hilarious if Zubon could actually respond after almost a decade)

comment by Robin_Hanson2 · 2007-10-30T12:30:53.000Z · LW(p) · GW(p)

Wow. The obvious answer is TORTURE, all else equal, and I'm pretty sure this is obvious to Eliezer too. But even though there are 26 comments here, and many of them probably know in their hearts torture is the right choice, no one but me has said so yet. What does that say about our abilities in moral reasoning?

comment by Caledonian2 · 2007-10-30T12:44:18.000Z · LW(p) · GW(p)

Given that human brains are known not to be able to intuitively process even moderately large numbers, I'd say the question can't meaningfully be asked - our ethical modules simply can't process it. 3^^^3 is too large - WAY too large.

comment by jason_braswell · 2007-10-30T13:37:10.000Z · LW(p) · GW(p)

I'm unconvinced that the number is too large for us to think clearly. Though it takes some machinery, humans reason about infinite quantities all the time and arrive at meaningful conclusions.

My intuitions strongly favor the dust speck scenario. Even if we forget 3^^^^3 and just say that an infinite number of people will experience the speck, I'd still favor it over the torture.

comment by cw · 2007-10-30T13:38:23.000Z · LW(p) · GW(p)

Robin is absolutely wrong, because different instances of human suffering cannot be added together in any meaningful way. The cumulative effect when placed on one person is far greater than the sum of many tiny nuisances experienced by many. Whereas small irritants such as a dust mote do not cause "suffering" in any standard sense of the word, the sum total of those motes concentrated at one time and placed into one person's eye could cause serious injury or even blindness. Dispersing the dust (either over time or across many people) mitigates the effect. If the dispersion is sufficient, there is actually no suffering at all. To extend the example, you could divide the dust mote into even smaller particles, until each individual would not even be aware of the impact.

So the question becomes, would you rather live in a world with little or no suffering (caused by this particular event) or a world where one person suffers badly, and those around him or her sit idly by, even though they reap very little or no benefit from the situation?

The notion of shifting human suffering onto one unlucky individual so that the rest of society can avoid minor inconveniences is morally reprehensible. That (I hope) is why no one has stood up and shouted yeay for torture.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2014-08-12T19:47:18.657Z · LW(p) · GW(p)

Robin is absolutely wrong, because different instances of human suffering cannot be added together in any meaningful way.

The problem with this claim is that you can construct a series of overlapping comparisons involving experiences that differ but slightly in how painful they are. Then, provided that the series has sufficiently many elements, you'll reach the conclusion that an experience of pain, no matter how intense, is preferable to arbitrarily many instances of the mildest pain imaginable.

(Strictly speaking, you could actually avoid this conclusion by assuming that painful experiences of a given intensity have diminishing marginal value and that this value converges to a finite quantity. Then if the limiting value of a very mild pain is less than the value of a single extremely painful experience, the continuity argument wouldn't work. However, I see no independent motivation for embracing a theory of value of this sort. Moreover, such a theory would have incredible implications, e.g., that to determine how bad someone's pain is one needs to consider whether sentient beings have already experienced pains of that intensity in remote regions of spacetime.)
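
As a sketch of that escape route (every constant below is a made-up assumption): if the k-th instance of a given mild pain only adds base * r**(k-1) disutility, the total converges to base / (1 - r), and whenever that limit sits below the disutility assigned to the intense experience, no number of mild pains can ever add up to it.

```python
MILD_BASE = 1.0      # disutility of the first instance of the mild pain (assumed unit)
R = 0.999            # each further instance counts slightly less (assumed discount)
TORTURE = 2_000.0    # assumed disutility of the single extremely painful experience

limit = MILD_BASE / (1 - R)                            # the most the mild pains can ever total (~1000)
ten_thousand = MILD_BASE * (1 - R**10_000) / (1 - R)   # partial geometric sum for 10,000 instances

print(limit < TORTURE)        # True: even unboundedly many mild pains stay below the torture
print(ten_thousand < limit)   # True: any finite number of instances stays below the limit
```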

Replies from: shminux
comment by Shmi (shminux) · 2014-08-12T21:45:14.195Z · LW(p) · GW(p)

you could actually avoid this conclusion by assuming that painful experiences of a given intensity have diminishing marginal value and that this value converges to a finite quantity.

Yeah, this is a common attempt to avoid this particular repugnant conclusion. This approach leads to conclusions like 3^^^3 mildly stabbed toes being better than a single moderately stabbed one. (Because if not, we can construct an unbroken chain of comparable pain experiences from specks to torture.)

However, I see no independent motivation for embracing a theory of value of this sort.

The motivation is there, to make dust specks and torture incomparable. Unfortunately, this approach doesn't work, as it results in infinitely many arbitrarily defined discontinuities.

comment by Constant2 · 2007-10-30T13:42:00.000Z · LW(p) · GW(p)

The obvious answer is TORTURE, all else equal, and I'm pretty sure this is obvious to Eliezer too.

That is the straightforward utilitarian answer, without any question. However, it is not the common intuition, and even if Eliezer agrees with you he is evidently aware that the common intuition disagrees, because otherwise he would not bother blogging it. It's the contradiction between intuition and philosophical conclusion that makes it an interesting topic.

comment by scott_clark · 2007-10-30T14:13:05.000Z · LW(p) · GW(p)

Robin's answer hinges on "all else being equal." That condition can tie up a lot of loose ends, it smooths over plenty of rough patches. But those ends unravel pretty quickly once you start to consider all the ways in which everything else is inherently unequal. I happen to think the dust speck is a 0 on the disutility meter, myself, and 3^^^3*0 disutilities = 0 disutility.

comment by Benoit_Essiambre · 2007-10-30T14:26:33.000Z · LW(p) · GW(p)

I believe that ideally speaking the best choice is the torture, but pragmatically, I think the dust speck answer can make more sense. Of course it is more intuitive morally, but I would go as far as saying that the utility can be higher for the dust specks situation (and thus our intuition is right). How? the problem is in this sentence: "If neither event is going to happen to you personally," the truth is that in the real world, we can't rely on this statement. Even if it is promised to us or made into a law, this type of statements often won't hold up very long. Precedents have to be taken into account when we make a decision based on utility. If we let someone be tortured now, we are building a precedent, a tradition of letting people being tortured. This has a very low utility for people living in the affected society. This is well summarized in the saying "What goes around comes around".

If you take the strictly idealistic situation described, the torture is the best choice. But if you instead deem the situation to be completely unrealistic and you pick a similar one by simply not giving 100% reliability to the sentence "If neither event is going to happen to you personally," the best choice can become the dust specks, depending on how likely you believe it is that a tradition of torture will be established. (And IMO traditions of torture and violence are the kind of thing that spreads easily, as they stimulate resentment and hatred in the groups most affected.) The torture situation has a real risk of getting worse, but the dust speck situation does not.

The scenario might have been different if torture were replaced by a kind of suffering that is not induced by humans. Say... an incredibly painful and long (but not contagious) illness.

Is it better to have the dust specks everywhere all the time or to have the existence of this illness once in history?

comment by gaverick · 2007-10-30T14:32:32.000Z · LW(p) · GW(p)

Torture. See Norcross: http://www.ruf.rice.edu/~norcross/ComparingHarms.pdf

Replies from: themusicgod1
comment by themusicgod1 · 2013-10-04T03:44:20.350Z · LW(p) · GW(p)

Your link is 404ing. Is http://spot.colorado.edu/~norcross/Comparingharms.pdf the same one?

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2014-08-12T19:53:37.802Z · LW(p) · GW(p)

Here's the link (both links above are dead).

Replies from: ignoranceprior
comment by ignoranceprior · 2017-08-27T00:06:42.599Z · LW(p) · GW(p)

Here's the latest working link (all three above are dead)

Also, here's an archive in case that one ever breaks!

comment by Michael_G.R. · 2007-10-30T14:42:08.000Z · LW(p) · GW(p)

Robin, could you explain your reasoning. I'm curious.

Humans get barely noticeable "dust speck equivalent" events so often in their lives that the number of people in Eliezer's post is irrelevant; it's simply not going to change their lives, even if it's a gazillion lives, even with a number bigger than Eliezer's (even considering the "butterfly effect", you can't say if the dust speck is going to change them for the better or worse -- but with 50 years of torture, you know it's going to be for the worse).

Subjectively for these people, it's going to be lost in the static and probably won't even be remembered a few seconds after the event. Torture won't be lost in static, and it won't be forgotten (if survived).

The alternative to torture is so mild and inconsequential, even if applied to a mind-boggling number of people, that it's almost like asking: Would you rather torture that guy or not?

comment by Benquo · 2007-10-30T14:52:12.000Z · LW(p) · GW(p)

@Robin,

"But even though there are 26 comments here, and many of them probably know in their hearts torture is the right choice, no one but me has said so yet."

I thought that Sebastian Hagen and I had said it. Or do you think we gave weasel answers? Mine was only contingent on my math being correct, and I thought his was similarly clear.

Perhaps I was unclear in a different way. By asking if the choice was repeatable, I didn't mean to dodge the question; I meant to make it more vivid. Moral questions are asked in a situation where many people are making moral choices all the time. If dust-speck displeasure is additive, then we should evaluate our choices based on their potential aggregate effects.

Essentially, it's a same-ratio problem, like showing that 6:4::9:6, because 6x3=9x2 and 4x3=6x2. If the aggregate of dust-specking can ever be greater than the equivalent aggregate of torturing, then it is always greater.

comment by Michael_G.R. · 2007-10-30T15:03:57.000Z · LW(p) · GW(p)

Hmm, thinking some more about this, I can see another angle (not the suffering angle, but the "being prudent about unintended consequences" angle):

If you had the choice between very very slightly changing the life of a huge number of people or changing a lot the life of only one person, the prudent choice might be to change the life of only one person (as horrible as that change might be).

Still, with the dust speck we can't really know if the net final outcome will be negative or positive. It might distract people who are about to have genius ideas, but it might also change chains of events that would lead to bad things. Averaged over so many people, it's probably going to stay very close to neutral, positive or negative. The torture of one person might also look very close to neutral if averaged with the other 3^^^3 people, but we know that it's going to be negative. Hmm..

comment by Recovering_irrationalist · 2007-10-30T15:23:11.000Z · LW(p) · GW(p)

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

The square of the number of milliseconds in 50 years is about 10^21.

Would you rather one person tortured for a millisecond (then no ill effects), or that 3^^^3/10^21 people get a dust speck per second for 50 centuries?

OK, so the utility/effect doesn't scale when you change the times. But even if each 1% of added dust/torture time made things ten times worse, once you reduce the dust-speckled population to reflect that, it's still countless universes' worth of people.

comment by Bob3 · 2007-10-30T15:27:29.000Z · LW(p) · GW(p)

I'm with Tomhs. The question has less value as a moral dilemma than as an opportunity to recognize how we think when we "know" the answer. I intentionally did not read the comments last night so I could examine my own thought process, and tried very hard to hold an open mind (my instinct was dust). It's been a useful and interesting experience. Much better than the brain teasers which I can generally get because I'm on heightened alert when reading El's posts. Here being on alert simply allowed me to try to avoid immediately giving in to my bias.

comment by Vladimir_Nesov · 2007-10-30T15:36:52.000Z · LW(p) · GW(p)

Averaging utility works only when the law of large numbers starts to play a role. It's a good general policy, as stuff subject to it happens all the time, enough to give sensible results over the human/civilization lifespan. So, if Eliezer's experiment is a singular event and similar events don't happen frequently enough, the answer is 3^^^3 specks. Otherwise, torture (as in this case, similar frequent-enough choices would lead to a tempest of specks in anyone's eye which is about 3^^^3 times worse than 50 years of torture, for each and every one of them).

comment by Robin_Hanson2 · 2007-10-30T15:39:17.000Z · LW(p) · GW(p)

Benquo, your first answer seemed equivocal, and so did Sebastian's on a first reading, but now I see that it was not.

comment by James_D._Miller · 2007-10-30T15:39:26.000Z · LW(p) · GW(p)

Torture,

Consider three possibilities:

(a) A dust speck hits you with probability one, (b) You face an additional probability 1/(3^^^3) of being tortured for 50 years, (c) You must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.

Most people would pick (c) over (a). Yet, 1/(3^^^3) is such a small number that by blinking your eyes one more time than you normally would, you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/(3^^^3). Thus, (b) must be better than (c). Consequently, most people should prefer (b) to (a).
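
A minimal sketch of the expected-disutility comparison behind this argument; 10^100 stands in for 3^^^3 (which cannot be represented directly), and the disutility figures are assumptions:

```python
from fractions import Fraction

# 10**100 stands in for 3^^^3, which is incomparably larger (this only
# strengthens the comparison).  The disutility figures are assumptions.
N = 10**100
speck = Fraction(1, 10**6)     # disutility of one dust speck
torture = Fraction(10**9)      # disutility of fifty years of torture

expected_b = torture / N       # option (b): a 1/N chance of the torture
print(expected_b < speck)      # True: (b) is far better than (a)'s certain speck
# Since blinking (option (c)) plausibly changes your lifetime torture risk by
# more than 1/N all by itself, (b) beats (c), and anyone preferring (c) to (a)
# should prefer (b) to (a) as well -- which is the argument above.
```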

Replies from: timujin
comment by timujin · 2014-10-24T12:33:52.976Z · LW(p) · GW(p)

You know, that actually persuaded me to override my intuitions and pick torture over dust specks.

Replies from: Jiro
comment by Jiro · 2014-10-24T14:38:19.349Z · LW(p) · GW(p)

You don't even have to go that far. Replace "dust specks" with "the inconvenience of not going outside the house" and "tiny chance of torture" with "tiny chance that being outside the house will lead to you getting killed".

Replies from: timujin
comment by timujin · 2014-10-26T10:12:46.936Z · LW(p) · GW(p)

Yeah, I understood the point.

comment by Mike_Kenny · 2007-10-30T15:48:46.000Z · LW(p) · GW(p)

There isn't any right answer. Answers to what is good or bad are a matter of taste, to borrow from Nietzsche.

To me the example has a messianic quality. One person suffers immensely to save others from suffering. Does the sense that there is a 'right' answer come from a Judeo-Christian sense of what is appropriate? Is this a sort of bias in line with biases towards expecting facts to conform to a story?

Also, this example suggests to me that the value pluralism of Cowen makes much more sense than some reductive approach that seeks to create one objective measure of good and bad. One person might seek to reduce instances of illness, another to maximize reported happiness, another to maximize a personal sense of beauty. IMO, there isn't a judge who will decide who is right and who is wrong, and the decisive factor is who can marshal the power to bring about his will, as unsavory as that might be (unless your side is winning).

comment by Tom_Crispin · 2007-10-30T16:34:13.000Z · LW(p) · GW(p)

Why is this a serious question? Given the physical unreality of the situation - the putative existence of 3^^^3 humans and the ability to actually create the option in the physical universe - why is this question taken seriously while something like "is it better to kill Santa Claus or the Easter Bunny?" is considered silly?

comment by Jef_Allbright · 2007-10-30T16:36:06.000Z · LW(p) · GW(p)

Fascinating, and scary, the extent to which we adhere to established models of moral reasoning despite the obvious inconsistencies. Someone here pointed out that the problem wasn't sufficiently defined, but then proceeded to offer examples of objective factors that would appear necessary to evaluation of a consequentialist solution. Robin seized upon the "obvious" answer that any significant amount of discomfort, over such a vast population, would easily dominate, with any conceivable scaling factor, the utilitarian value of the torture of a single individual. But I think he took the problem statement too literally; the discomfort of the dust mote was intended to be vanishingly small, over a vast population, thus keeping the problem interesting rather than "obvious."

But most interesting to me is that no one pointed out that fundamentally, the assessed goodness of any act is a function of the values (effective, but not necessarily explicit) of the assessor. And assessed morality as a function of group agreement on the "goodness" of an act, promoting the increasingly coherent values of the group over increasing scope of expected consequences.

Now the values of any agent will necessarily be rooted in an evolutionary branch of reality, and this is the basis for increasing agreement as we move toward the common root, but this evolving agreement in principle on the direction of increasing morality should never be considered to point to any particular destination of goodness or morality in any objective sense, for that way lies the "repugnant conclusion" and other paradoxes of utilitarianism.

Obvious? Not at all, for while we can increasingly converge on principles promoting "what works" to promote our increasingly coherent values over increasing scope, our expression of those values will increasingly diverge.

comment by George_Dvorsky · 2007-10-30T16:48:57.000Z · LW(p) · GW(p)

The hardships experienced by a man tortured for 50 years cannot compare to a trivial experience massively shared by a large number of individuals -- even on the scale that Eli describes. There is no accumulation of experiences, and it cannot be conflated into a larger meta dust-in-the-eye experience; it has to be analyzed as a series of discrete experiences.

As for larger social implications, the negative consequence of so many dust specked eyes would be negligible.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-30T16:51:42.000Z · LW(p) · GW(p)

Wow. People sure are coming up with interesting ways of avoiding the question.

comment by Jef_Allbright · 2007-10-30T17:07:14.000Z · LW(p) · GW(p)

Eliezer wrote "Wow. People sure are coming up with interesting ways of avoiding the question."

I posted earlier on what I consider the more interesting question of how to frame the problem in order to best approach a solution.

If I were to simply provide my "answer" to the problem, with the assumption that the dust in the eyes is likewise limited to 50 years, then I would argue that the dust is to be preferred to the torture, not on a utilitarian basis of relative weights of the consequences as specified, but on the bigger-picture view that my preferred future is one in which torture is abhorrent in principle (noting that this entails significant indirect consequences not specified in the problem statement.)

comment by g · 2007-10-30T17:10:07.000Z · LW(p) · GW(p)

Eliezer, are you suggesting that declining to make up one's mind in the face of a question that (1) we have excellent reason to mistrust our judgement about and (2) we have no actual need to have an answer to is somehow disreputable?

As for your link to the "motivated stopping" article, I don't quite see why declining to decide on this is any more "stopping" than choosing a definite one of the options. Or are you suggesting that it's an instance of motivated continuation? Perhaps it is, but (as you said in that article) the problem with excessive "continuation" is that it can waste resources and miss opportunities. I don't see either of those being an issue here, unless you're actually threatening to do one of those two things -- in which case I declare you a Pascal's mugger and take no notice.

comment by Brandon_Reinhart · 2007-10-30T17:15:00.000Z · LW(p) · GW(p)

What happens if there aren't 3^^^3 instanced people to get dust specks? Do those specks carry over such that person #1 gets a 2nd speck and so on? If so, you would elect to have the person tortured for 50 years for surely the alternative is to fill our universe with dust and annihilate all cultures and life.

comment by Neel_Krishnaswami · 2007-10-30T17:28:00.000Z · LW(p) · GW(p)

Robin, of course it's not obvious. It's only an obvious conclusion if the global utility function from the dust specks is an additive function of the individual utilities, and since we know that utility functions must be bounded to avoid Dutch books, we know that the global utility function cannot possibly be additive -- otherwise you could break the bound by choosing a large enough number of people (say, 3^^^3).


From a more metamathematical perspective, you can also question whether 3^^^3 is a number at all. It's perfectly straightforward to construct a perfectly consistent mathematics that rejects the axiom of infinity. Besides the philosophical justification for ultrafinitism (ie, infinite sets don't really exist), these theories correspond to various notions of bounded computation (such as logspace or polytime). This is a natural requirement, if we want to require moral judgements to be made quickly enough to be relevant to decision making -- and that rules out seriously computing with numbers like 3^^^3.
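
To see why "seriously computing with" numbers like 3^^^3 is off the table, here is a small sketch of Knuth's up-arrow notation; the recursion is the standard definition, and the point at which it stops being runnable is noted in the comments:

```python
def up_arrow(a, n, b):
    """a (up-arrow)^n b in Knuth's notation, via the standard recursion."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3^3   = 27
print(up_arrow(3, 2, 3))   # 3^^3  = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3) is a power tower of 3s that is 7,625,597,484,987 levels
# tall.  Even 3^^4 = 3^7625597484987 has trillions of digits, so the call
# up_arrow(3, 3, 3) can never finish on any physically realizable machine.
```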

Replies from: homunq
comment by homunq · 2012-04-13T13:50:03.686Z · LW(p) · GW(p)

I once read the following story about a Russian mathematician. I can't find the source right now.

Cast: Russian mathematician RM, other guy OG

RM: "Truly large numbers don't really exist in the same sense that small ones do."

OG: "That's ridiculous. Consider the powers of two. Does 2ˆ1 exist?""

RM: "Yes."

OG: "OK, does 2ˆ2 exist?"

RM: ".Yes."

OG: "So you'd agree that 2ˆ3 exists?"

RM: "...Yes."

OG: "How about 2ˆ4?"

RM: ".......Yes."

OG: "So this is silly. Where would you ever draw the boundary?"

RM: ".............................................................................................................................................."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-30T17:32:00.000Z · LW(p) · GW(p)

Eliezer, are you suggesting that declining to make up one's mind in the face of a question that (1) we have excellent reason to mistrust our judgement about and (2) we have no actual need to have an answer to is somehow disreputable?

Yes, I am.

Regarding (1), we pretty much always have excellent reason to mistrust our judgments, and then we have to choose anyway; inaction is also a choice. The null plan is a plan. As Russell and Norvig put it, refusing to act is like refusing to allow time to pass.

Regarding (2), whenever a tester finds a user input that crashes your program, it is always bad - it reveals a flaw in the code - even if it's not a user input that would plausibly occur; you're still supposed to fix it. "Would you kill Santa Claus or the Easter Bunny?" is an important question if and only if you have trouble deciding. I'd definitely kill the Easter Bunny, by the way, so I don't think it's an important question.

Followup dilemmas:

For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?

For those who would pick TORTURE, what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-02-13T18:29:27.906Z · LW(p) · GW(p)

Unless the 3^^^3 people are forming a hive mind, I pick the specks.

I'm terribly inexperienced in translating ethical preferences into money, but in that scenario I wouldn't pay the penny. A penny can be better used in buying more utility than removing specks from 3^^^3 eyeballs.

comment by Kaj_Sotala · 2007-10-30T17:40:00.000Z · LW(p) · GW(p)

Fascinating question. No matter how small the negative utility in the dust speck, multiplying it with a number such as 3^^^3 will make it way worse than torture. Yet I find the obvious answer to be the dust speck one, for reasons similar to what others have pointed out - the negative utility rounds down to zero.

But that doesn't really solve the problem, for what if the harm in question was slightly larger? At what point does it cease rounding down? I have no meaningful criteria to give for that one. Obviously there must be a point where it does cease doing so, for it certainly is much better to torture one person for 50 years than 3^^^3 people for 49 years.

It is quite counterintuitive, but I suppose I should choose the torture option. My other alternatives would be to reject utilitarianism (but I have no better substitutes for it) or to modify my ethical system so that it solves this problem, but I currently cannot come up with an unproblematic way of doing so.

Still, I can't quite bring myself to do so. I choose specks, and admit that my ethical system is not consistent yet. (Not that it would be a surprise - I've noticed that all my attempts at building entirely consistent ethical systems tend to cause unwanted results at one point or the other.)


For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?

A single penny to avoid one dust speck, or to avoid 3^^^3 dust specks? No to the first one. To the second one, depends on how often they occurred - if I somehow could live for 3^^^3 years, getting one dust speck in my eye per year, then no. If they actually inconvenienced me, then yes - a penny is just a penny.

comment by Jef_Allbright · 2007-10-30T17:48:00.000Z · LW(p) · GW(p)

"Regarding (1), we pretty much always have excellent reason to mistrust our judgments, and then we have to choose anyway; inaction is also a choice. The null plan is a plan. As Russell and Norvig put it, refusing to act is like refusing to allow time to pass."

This goes to the crux of the matter, why to the extent the future is uncertain, it is better to decide based on principles (representing wisdom encoded via evolutionary processes over time) rather than on the flat basis of expected consequences.

comment by Zubon · 2007-10-30T17:53:00.000Z · LW(p) · GW(p)

Would you condemn one person to be horribly tortured for fifty years without hope or rest, to save every qualia-experiencing being who will ever exist one blink?

Is the question significantly changed by this rephrasing? It makes SPECKS the default choice, and it changes 3^^^3 to "all." Are we better able to process "all" than 3^^^3, or can we really process "all" at all? Does it change your answer if we switch the default?

Would you force every qualia-experiencing being who will ever exist to blink one additional time to save one person from being horribly tortured for fifty years without hope or rest?

comment by Brandon_Reinhart · 2007-10-30T17:59:00.000Z · LW(p) · GW(p)

> For those who would pick TORTURE, what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

If you mean would I condemn all conscious beings to a googolplex of torture to avoid universal annihilation from a big "dust crunch", my answer is still probably yes. The alternative is universal doom. At least the tortured masses might have some small chance of finding a solution to their problem at some point. Or at least a googolplex years might pass, leaving some future civilization free to prosper. The dust is absolute doom for all potential futures.

Of course, I'm assuming that 3^^^3 conscious beings are unlikely to ever exist and so that dust would be applied over and over to the same people causing the universe to be filled with dust. Maybe this isn't how the mechanics of the problem work.

comment by Brandon_Reinhart · 2007-10-30T18:02:00.000Z · LW(p) · GW(p)

> Would you condemn one person to be horribly tortured for fifty years without hope or rest, to save every qualia-experiencing being who will ever exist one blink?

That's assuming you're interpreting the question correctly. That you aren't dealing with an evil genie.

comment by Zeus · 2007-10-30T18:04:00.000Z · LW(p) · GW(p)

You never said we couldn't choose who specifically gets tortured, so I'm assuming we can make that selection. Given that, the once agonizingly difficult choice is made trivially simple. I would choose 50 years of torture for the person who made me make this decision.

comment by Kat3 · 2007-10-30T18:20:00.000Z · LW(p) · GW(p)

Since I chose the specks -- no, I probably wouldn't pay a penny; avoiding the speck is not even worth the effort to decide to pay the penny or not. I would barely notice it; it's too insignificant to be worth paying even a tiny sum to avoid.

I suppose I too am "rounding down to zero"; a more significant harm would result in a different answer.

Replies from: phob
comment by phob · 2010-07-27T16:37:23.736Z · LW(p) · GW(p)

You're avoiding the question. What if a penny was automatically paid for you each time in the future to avoid dust specks floating in your eye? The question is whether the dust speck is worth at least a negative penny of disutility. For me, I would say yes.

comment by Michael_G.R. · 2007-10-30T18:36:00.000Z · LW(p) · GW(p)

"For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?"

To avoid all the dust specks, yeah, I'd pay a penny and more. Not a penny per speck, though ;)

The reason is to avoid having to deal with the "unintended consequences" of being responsible for that very very small change over such a large number of people. It's bound to have some significant indirect consequences, both positive and negative, on the far edges of the bell curve... the net impact could be negative, and a penny is little to pay to avoid responsibility for that possibility.

comment by Marcello · 2007-10-30T18:52:00.000Z · LW(p) · GW(p)

The first thing I thought when I read this question was that the dust specks were obviously preferable. Then I remembered that my intuition likes to round 3^^^3 down to something around twenty. Obviously, the dust specks are preferable to the torture for any number at all that I have any sort of intuitive grasp over.

But I found an argument that pretty much convinced me that the torture was the correct answer.

Suppose that instead of making this choice once, you will be faced with the same choice 10^17 times for the next fifty years (this number was chosen so that it is more than a million per second). If you have a problem imagining the ability to make more than a million choices per second, imagine that you have a dial in front of you which goes from zero to 10^17. If you set the dial to n, then 10^17-n people will get tortured starting now for the next fifty years, and n dust specks will fly into the eyes of each of 3^^^3 people during the next fifty years.

The dial starts at zero. For each unit that you turn the dial up, you are saving one person from being tortured by putting a dust speck in the eyes of each of the 3^^^3 people, the exact choice presented.

So, if you thought the correct answer was the dust specks, you'd turn the dial from zero to one right? And then you'd turn it from one to two, right?

But, if you turned the dial all the way up to 10^17, you'd effectively be rubbing the corneas of the 3^^^3 people with sandpaper for fifty years (of course, their corneas would wear through, and their eyes would come apart under that sort of abrasion. It would probably take less than a million dust specks per second to do that, but let's be conservative and make them smaller dust specks.) Even if you don't count the pain involved, they'd be blind forever. How many people would you blind in order to save one person from being tortured for fifty years? You probably wouldn't blind everyone on earth to save that one person from being tortured, and yet, there are (3^^^3)/(10^17) >> 7*10^9 people being blinded for each person you
have saved from torture.


So if your answer was the dust specks, you'd either end up turning the knob all the way up to 10^17, or you'd have to stop somewhere, because there's no escaping that in this scenario, there's a real dial in front of you, and you have to turn it to some n between 0 and 10^17.


If you left the dial on, say, 10^10, I'd ask "Tell me, what is so special about the difference between hitting someone with 10^10 dust specks versus hitting them with 10^10+1, that wasn't special about the difference between hitting them with zero versus one?" If anything, the more dust specks there are, the less of a difference one more would make.

There are easily 10^17 continuous gradations between no inconvenience and having one's eyes turned to pulp, and I don't really see what would make any of them terribly different from each other. Yet n=0 is obviously preferable to n=10^17, and so, each individual increment of n must be bad.
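
Here is a minimal sketch of the dial under naive linear aggregation, with 10^100 standing in for 3^^^3 and made-up per-event disutilities:

```python
# Marcello's dial under naive linear aggregation.  POP stands in for 3^^^3
# (not representable), and the per-event disutilities are assumptions.
POP = 10**100         # people who receive the dust specks
TORTURE = 10**9       # disutility of one 50-year torture
SPECK = 1             # disutility of one dust speck, taken as the unit

def total_disutility(n):
    """Dial at n: n people spared torture, n specks for each of POP people."""
    return (10**17 - n) * TORTURE + n * POP * SPECK

print(total_disutility(0) < total_disutility(1))      # True: never turn the dial
print(total_disutility(10**17) / total_disutility(0)) # about 1e91 times worse at the top
# Each click removes one torture but adds POP speck-units, so a linear
# aggregator leaves the dial at zero -- the conclusion being pressed here
# against the SPECKS answer.
```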

Replies from: aausch, XiXiDu
comment by aausch · 2010-02-12T21:08:22.260Z · LW(p) · GW(p)

The reasoning here seems very broken to me (I have no opinion on the conclusion yet):

Look at a version of the reverse dial. Say that you start with 3^^^3 people having 1,000,000 dust specks a second rubbed in their eye, and 0 people tortured. Each time you turn the dial up by 1, one person is moved from the "speck in the eye" list to the "tortured for 50 years" list, and the frequency is reduced by 1 speck/second. Would you turn the dial up to 1,000,000?

Replies from: phob
comment by phob · 2010-07-27T16:41:18.743Z · LW(p) · GW(p)

So because there is a continuum between the right answer (lots of torture) and the wrong answer (3^^^3 horribly blinded people), you would rather blind those people?

Replies from: Manfred
comment by Manfred · 2010-12-09T19:53:23.736Z · LW(p) · GW(p)

Nah, he was pretty clearly challenging the use of induction in the above post.

The larger problem is assuming linearity in an obviously nonlinear situation - this also explains why the induction appears to work either way. Applying 1 pound of force to someone's kneecap is simply not 1/10th as bad as applying 10 pounds of force to someone's kneecap.

comment by XiXiDu · 2010-12-09T20:35:51.879Z · LW(p) · GW(p)

This has nothing to do with the original question. You rephrased it so that it now asks if you'd rather torture one person or 3^^^3. Of course you'd rather torture one person than 3^^^3. That is not the same as choosing between torturing one person and 3^^^3 people getting dust specks in their eyes for a fraction of a second.

comment by Tom_Crispin · 2007-10-30T18:57:00.000Z · LW(p) · GW(p)

"... whenever a tester finds a user input that crashes your program, it is always bad - it reveals a flaw in the code - even if it's not a user input that would plausibly occur; you're still supposed to fix it. "Would you kill Santa Claus or the Easter Bunny?" is an important question if and only if you have trouble deciding. I'd definitely kill the Easter Bunny, by the way, so I don't think it's an important question."

I write code for a living; I do not claim that it crashes the program. Rather the answer is irrelevant as I don't think that the question is important or insightful regarding our moral judgements since it lacks physical plausibility. BTW, since one can think of God as "Santa Claus for grown-ups", the Easter Bunny lives.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-30T18:58:00.000Z · LW(p) · GW(p)

By "pay a penny to avoid the dust specks" I meant "avoid all dust specks", not just one dust speck. Obviously for one speck I'd rather have the penny.

Replies from: phob
comment by phob · 2010-07-27T16:43:27.528Z · LW(p) · GW(p)

So if someone would pay a penny, they should pick torture if it were 3^^^^3 people getting dust specks, which makes it suspect that they understood 3^^^3 in the first place.

comment by Recovering_irrationalist · 2007-10-30T19:11:00.000Z · LW(p) · GW(p)

what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

To reduce suffering in general rather than your own (it would be tough to live with), bring on the coddling grinders. (10^10^100)^2 is a joke next to 3^^^3.

Having said that, it depends on the qualia-experiencing population of all existence compared to the numbers affected, and whether you change existing lives or make new ones. If only a few googolplex-squared people-years exist anyway, I vote dust.

I also vote to kill the bunny.

comment by Sebastian_Hagen2 · 2007-10-30T19:27:00.000Z · LW(p) · GW(p)

For those who would pick TORTURE, what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

Torture, again. From the perspective of each affected individual, the choice becomes:

1.) A (10^(10^100))/(3^^^3) chance of being tortured for (10^(10^100)) years.
2.) A dust speck, with probability 1.
(or very slightly different numbers if the (10^(10^100)) people exist in addition to the 3^^^3 people; the difference is too small to be noticeable)

I'd still take the former. (10^(10^100))/(3^^^3) is still so close to zero that there's no way I can tell the difference without getting a larger universe for storing my memory first.

comment by g · 2007-10-30T19:44:00.000Z · LW(p) · GW(p)

Eliezer, it's the combination of (1) totally untrustworthy brain machinery and (2) no immediate need to make a choice that I'm suggesting means that withholding judgement is reasonable. I completely agree that you've found a bug; congratulations, you may file a bug report and add it to the many other bug reports already on file; but how do you get from there to the conclusion that the right thing to do is to make a choice between these two options?

When I read the question, I didn't go into a coma or become psychotic. I didn't even join a crazy religion or start beating my wife. If for some reason I actually had to make such a choice, I still wouldn't go nuts. So I think analogies with crashing software are inappropriate. (Again, I don't deny that there's a valid bug report. I'm just questioning its severity.)

So what we have here is an architectural problem with the software, which produces a failure mode in which input radically different from any that will ever actually be supplied provokes a small user-interface glitch. It would be nice to fix it, but it doesn't strike me as unreasonable if it doesn't make it through some people's triage.

(Santa Claus versus the Easter Bunny is much nearer to being a realistic question, and so far as I can tell there isn't anything in my mental machinery that fundamentally isn't equipped to consider it. Kill the bunny.)

comment by Gordon_Worley · 2007-10-30T20:05:00.000Z · LW(p) · GW(p)

Let's suppose we measure pain in pain points (pp). Any event which can cause pain is given a value in [0, 1], with 0 being no pain and 1 being the maximum amount of pain perceivable. To calculate the pp of an event, assign a value to the pain, say p, and then multiply it by the number of people who will experience the pain, n. So for the torture case, assume p = 1, then:

torture: 1*1 = 1 pp

For the speck-in-eye case, suppose it causes the least amount of pain greater than no pain possible. Denote this by e. Assume that the dust speck causes e amount of pain, and that it happens to n = 3^^^3 people. Then if e < 1/3^^^3

speck: 3^^^3 * e < 1 pp

and if e > 1/3^^^3

speck: 3^^^3 * e > 1 pp

So assuming our moral calculus is to always choose whichever option generates the least pp, we need only ask if e is greater than or less than 1/n.

If you've been paying attention, I now have an out to give no answer: we don't know what e is, so I can't decide (at least not based on pp). But I'll go ahead and wager a guess. Since 1/3^^^3 is very small, I think it most likely that any pain-sensing system of any present or future intelligence will have e > 1/3^^^3, so I must choose torture, because torture costs 1 pp but the specks cost more than 1 pp.

This doesn't feel like what, as a human, I would expect the answer to be. I want to say don't torture the poor guy and all the rest of us will suffer the speck so he need not be tortured. But I suspect this is human inability to deal with large numbers, because I think about how I would be willing to accept a speck so the guy wouldn't be tortured, since e pp < 1 pp, and every other individual, supposing they were pp-fearing people, would make the same short-sighted choice. But the net cost would be to distribute more pain with the specks than the torture ever would.

Weird how the human mind can find a logical answer and still expect a nonlogical answer to be the truth.
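
A minimal sketch of the pp bookkeeping above, again with 10^100 standing in for 3^^^3 and an arbitrarily chosen e (both are assumptions):

```python
from fractions import Fraction

n = 10**100                   # stand-in for 3^^^3 people (an assumption)
torture_pp = Fraction(1) * 1  # p = 1 for the torture, times one person
e = Fraction(1, 10**12)       # assumed pain of one speck on the [0, 1] scale

speck_pp = e * n              # p = e, times n people
print(speck_pp > torture_pp)  # True here, because e > 1/n
# The whole decision reduces to comparing e with 1/n: any e above 1/3^^^3
# makes the specks cost more pp than the torture, exactly as argued above.
```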

comment by Tom_McCabe · 2007-10-30T20:07:00.000Z · LW(p) · GW(p)

"Wow. People sure are coming up with interesting ways of avoiding the question."

My response was a real request for information - if this is a pure utility test, I would select the dust specks. If this were done to a complex, functioning society, adding dust specks into everyone's eyes would disrupt a great deal of important stuff - someone would almost certainly get killed in an accident due to the distraction, even on a planet with only 10^15 people and not 3^^^^3.

comment by Neel_Krishnaswami · 2007-10-30T20:19:00.000Z · LW(p) · GW(p)

Eliezer, in your response to g, are you suggesting that we should strive to ensure that our probability distribution over possible beliefs sums to 1? If so, I disagree: I don't think this can be considered a plausible requirement for rationality. When you have no information about the distribution, you ought to assign probabilities uniformly, according to Laplace's principle of indifference. But the principle of indifference only works for distributions over finite sets. So for infinite sets you have to make an arbitrary choice of distribution, which violates indifference.

comment by Tom_McCabe · 2007-10-30T20:21:00.000Z · LW(p) · GW(p)

"For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?"

Yes. Note that, for the obvious next question, I cannot think of an amount of money large enough such that I would rather keep it than use it to save a person from torture. Assuming that this is post-Singularity money which I cannot spend on other life-saving or torture-stopping efforts.

"You probably wouldn't blind everyone on earth to save that one person from being tortured, and yet, there are (3^^^3)/(10^17) >> 7*10^9 people being blinded for each person you have saved from torture."

This is cheating, to put it bluntly - my utility function does not assign the same value to blinding someone and putting six billion dust specks in everyone's eye, even though six billion specks are enough to blind people if you force them into their eyes all at once.

"I'd still take the former. (10(10100))/(3^^^3) is still so close to zero that there's no way I can tell the difference without getting a larger universe for storing my memory first."

The probability is effectively much greater than that, because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)). abs(U(torture)) would also get reduced, but by a much smaller factor, because the number is much smaller to begin with.

Replies from: phob
comment by phob · 2010-07-27T16:46:02.142Z · LW(p) · GW(p)

People are being tortured, and it wouldn't take too much money to prevent some of it. Obviously, there is already a price on torture.

comment by Pete_Carlton · 2007-10-30T20:28:00.000Z · LW(p) · GW(p)

My algorithm goes like this:
there are two variables, X and Y.
Adding a single additional dust speck to a person's eye over their entire lifetime increases X by 1 for every person this happens to.
A person being tortured for a few minutes increases Y by 1.

I would object to most situations where Y is greater than 1. But I have no preferences at all with regard to X.

See? Dust specks and torture are not the same. I do not lump them together as "disutility". To do so seems to me a preposterous oversimplification. In any case, it has to be argued that they are the same. If you assume they're the same, then you're just assuming the torture answer when you state the question - it's not a problem of ethical philosophy but a problem of addition.

comment by Mike7 · 2007-10-30T20:38:00.000Z · LW(p) · GW(p)

I am not convinced that this question can be converted into a personal choice where you face the decision of whether to take the speck or a 1/3^^^3 chance of being tortured. I would avoid the speck and take my chances with torture, and I think that is indeed an obvious choice.

I think a more apposite application of that translation might be:
If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day, I would always choose the speck, because I would never want to endure the inevitable 50 years of torture.

The difference is that framing the question as a one-off individual choice obscures the fact that in the example proffered, the torture is a certainty.

comment by Recovering_irrationalist · 2007-10-30T21:32:00.000Z · LW(p) · GW(p)

1/3^^^3 chance of being tortured... If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day, I would always choose the speck, because I would never want to endure the inevitable 50 years of torture.

That wouldn't make it inevitable. You could get away with it, but then you could get multiple tortures. Rolling 6 dice often won't get exactly one "1".

comment by Sebastian_Hagen2 · 2007-10-30T21:37:00.000Z · LW(p) · GW(p)

Tom McCabe wrote:
The probability is effectively much greater than that, because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)). abs(U(torture)) would also get reduced, but by a much smaller factor, because the number is much smaller to begin with.

Is there something wrong with viewing this from the perspective of the affected individuals (unique or not)? For any individual instance of a person, the probability of directly experiencing the torture is (10^(10^100))/(3^^^3), regardless of how many identical copies of this person exist.


Mike wrote:
I think a more apposite application of that translation might be:
If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day ...

I'm wondering how you would phrase the daily choice in this case, to get the properties you want. Perhaps like this:
1.) Add a period of (50*365)/3^^^3 days to the time period you will be tortured at the end of your life.
2.) Get a speck.

This isn't quite the same as the original question, as it gives choices between the two extremes. And in practice, this could get rather annoying, as just having to answer the question would be similarly bad to getting a speck. Leaving that aside, however, I'd still take the (ridiculously short) torture every day.

The difference is that framing the question as a one-off individual choice obscures the fact that in the example proffered, the torture is a certainty.
I don't think the math in my personal utility-estimation algorithm works out significantly differently depending on which of the cases is chosen.

comment by mr._M. · 2007-10-30T21:37:00.000Z · LW(p) · GW(p)

Answer depends on the person's POV on consciousness.

comment by Recovering_irrationalist · 2007-10-30T21:57:00.000Z · LW(p) · GW(p)

because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)).

If so, I want my anti-wish back. Evil Genie never said anything about compression. No wonder he has so many people to dust. I'm complaining to GOD Over Djinn.

If they're not compressed, surely a copy will still experience qualia? Does it matter that it's identical to another? If the sum experience of many copies is weighted as if there was just one, then I'm officially converting from infinite set agnostic to infinite set atheist.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-30T22:11:00.000Z · LW(p) · GW(p)

Bayesianism, Infinite Decisions, and Binding replies to Vann McGee's "An airtight Dutch book", defending the permissibility of an unbounded utility function.

An option that dominates in finite cases will always provably be part of the maximal option in finite problems; but in infinite problems, where there is no maximal option, the dominance of the option for the infinite case does not follow from its dominance in all finite cases.

If you allow a discontinuity where the utility of the infinite case is not the same as the limit of the utilities of the finite cases, then you have to allow a corresponding discontinuity in planning where the rational infinite plan is not the limit of the rational finite plans.

comment by douglas · 2007-10-30T22:21:00.000Z · LW(p) · GW(p)

It is clearly not so easy to have a non-subjective determination of utility.
After some thought I pick the torture. That is because the concept of 3^^^3 people means that no evolution will occur while that many people live. The one advantage to death is that it allows for evolution. It seems likely that we will have evolved into much more interesting life forms long before 3^^^3 of us have passed.
What's the utility of that?

comment by Mike7 · 2007-10-30T22:56:00.000Z · LW(p) · GW(p)

Recovering Irrationalist:
True: my expected value would be 50 years of torture, but I don't think that changes my argument much.

Sebastian:
I'm not sure I understand what you're trying to say. (50*365)/3^^^3 (which is basically the same thing as 1/3^^^3) days of torture wouldn't be anything at all, because it wouldn't be noticeable. I don't think you can divide time to that extent from the point of view of human consciousness.

I don't think the math in my personal utility-estimation algorithm works out significantly differently depending on which of the cases is chosen.
To the extent that you think that and it is reasonable, I suppose that would undermine my argument that the personal choice framework is the wrong way of looking at the question. I would choose the speck every day, and it seems like a clear choice to me, but perhaps that just reflects that I have the bias this thought experiment was meant to bring out.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-30T23:28:00.000Z · LW(p) · GW(p)

I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.

Some comments:

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words it has to be effectively flat. And I doubt they would have said anything different if I'd said 3^^^^3.

If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person. The only person who could be "acclimating" to 3^^^3 is you, a bystander who is insensitive to the inconceivably vast scope.

Scope insensitivity - extremely sublinear aggregation by individuals considering bad events happening to many people - can lead to mass defection in a multiplayer prisoner's dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today but this causes the world to get warmer by 0.000001 degree Celsius. This poses very little annoyance to any individual, and my utility function aggregates sublinearly over individuals, so I conclude that it's best to go skydiving. Then a billion people go skydiving and we all catch on fire. Which exact person in the chain should first refuse?

I may be influenced by having previously dealt with existential risks and people's tendency to ignore them.
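
The skydiving arithmetic above, as a minimal sketch (the warming per jump and the number of jumpers are taken from the example; everything else is trivial):

```python
# Numbers from the example above: each jump warms the world by 1e-6 degrees C.
per_jump_warming = 1e-6
jumpers = 10**9

individual_view = per_jump_warming            # what each decision-maker weighs
aggregate = per_jump_warming * jumpers        # what actually happens

print(f"warming any single jumper attributes to their jump: {individual_view} C")
print(f"warming if a billion people reason the same way:    {aggregate} C")
# 1000 degrees: every sublinear individual judgment was "fine", and the
# aggregate is catastrophic -- the same structure as the specks choice.
```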

Replies from: MrHen, Aharon, dlthomas, private_messaging, None, None
comment by MrHen · 2010-02-11T01:51:08.778Z · LW(p) · GW(p)

I will admit, that was a pretty awesome lesson to learn. Marcello's reasoning had it click in my head but the kicker that drove the point home was scaling it to 3^^^^3 instead of 3^^^3.

comment by Aharon · 2010-12-17T21:01:37.107Z · LW(p) · GW(p)

I think I understand why one should derive the conclusion to torture one person, given these premises.

What I don't understand is the premises. In the article about scope insensitivity you linked to, it was very clear that the scope of things made things worse. I don't understand why it should be wrong to round down the dust speck, or similar very small disutilities, to zero - Basically, what Scott Clark said: 3^^^3*0 disutilities = 0 disutility.

Replies from: dlthomas
comment by dlthomas · 2011-10-11T22:16:03.996Z · LW(p) · GW(p)

Rounding to zero is odd. In the absence of other considerations, you have no preference whether or not people get a dust speck in their eye?

It is also in violation of the structure of the thought experiment - a dust speck was chosen as the least bad bad thing that can happen to someone. If you would round it to zero, then you need to choose a slightly worse thing - I can't imagine your intuitions will be any less shocked by preferring torture to that slightly worse thing.

Replies from: lessdazed, TAG
comment by lessdazed · 2011-10-11T22:56:56.220Z · LW(p) · GW(p)

a dust speck was chosen as the least bad bad thing that can happen to someone.

That was a mistake, since so many people round it to zero.

Replies from: dlthomas
comment by dlthomas · 2011-10-11T23:14:46.007Z · LW(p) · GW(p)

It seems to have been. Since the criteria for the choice were laid out explicitly, though, I would have hoped that more people would notice that the thought experiment they solved so easily was not actually the one they had been given, and perform the necessary adjustment. This is obviously too optimistic - but perhaps it can itself serve as some kind of lesson about reasoning.

Replies from: Aharon
comment by Aharon · 2011-10-29T13:14:24.173Z · LW(p) · GW(p)

I concede that it is reasonable within the constraints of the thought experiment. However, I think it should be noted that this will never be more than a thought experiment and that if real-world numbers and real-world problems are used, it becomes less clear-cut, and the intuition of going against the 50 years of torture is a good starting point in some cases.

Replies from: coldlyrationalogic
comment by coldlyrationalogic · 2015-04-10T14:52:49.679Z · LW(p) · GW(p)

It's odd. If you think about it, Eliezer's Argument is absolutely correct. But it seems rather unintuitive even though I KNOW it's right. We humans are a bit silly sometimes. On the other hand, we did manage to figure this out, so it's not that bad.

comment by TAG · 2023-01-07T19:15:55.170Z · LW(p) · GW(p)

"In the absence of other considerations, you have no preference whether or not people get a dust speck in their eye?"

I can regard the moral significance as zero. I don't have to take the view that morality "is" preferences, of any kind or degree.

Excessive demandingness is a famous problem with utilitarianism: rounding down helps to curtail it.

comment by dlthomas · 2011-10-11T22:12:31.227Z · LW(p) · GW(p)

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant.

Sum(1/n^2, 1, 3^^^3) < Sum(1/n^2, 1, inf) = (pi^2)/6

So an algorithm like "order utilities from least to greatest, then sum with a weight of 1/n^2, where n is their position in the list" could pick dust specks over torture while recommending most people not go skydiving (as their benefit is outweighed by the detriment to those less fortunate).

This would mean that scope insensitivity, beyond a certain point, is a feature of our morality rather than a bias; I am not sure what my opinion of this outcome is.

That said, while giving an answer to the one problem that some seem more comfortable with, and to the second that everyone agrees on, I expect there are clear failure modes I haven't thought of.

Edited to add:

This of course holds for weights of 1/n^a for any a>1; the most convincing defeat of this proposition would be showing that weights of 1/n (or 1/(n log(n))) drop off quickly enough to lead to bad behavior.
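
A minimal sketch of the 1/n^2-weighted aggregation described above; the per-event disutility numbers are assumptions, and the only load-bearing fact is that the weights sum to at most pi^2/6:

```python
import math

SPECK = 1.0        # disutility of one dust speck, taken as the unit
TORTURE = 10**9    # assumed disutility of 50 years of torture

# Weighted total for N identical specks: sum of SPECK / n^2 over n = 1..N,
# which is bounded by SPECK * pi^2 / 6 no matter how large N gets.
bound = SPECK * math.pi**2 / 6
partial = sum(SPECK / n**2 for n in range(1, 10**6 + 1))
print(f"weighted total for a million specks: {partial:.6f}")
print(f"hard upper bound for any N at all:   {bound:.6f}")

# The torture is the single worst event, so it sits at position n = 1 with
# full weight: the comparison is 10^9 versus at most ~1.645 speck-units,
# and this weighting prefers SPECKS however many people are specked.
print(bound < TORTURE)   # True
```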

Replies from: dlthomas
comment by dlthomas · 2011-11-11T22:48:49.304Z · LW(p) · GW(p)

On recently encountering the Wikipedia page on Utility Monsters, and from there the Mere Addition Paradox, it occurs to me that this seems to neatly defang both.

Edited - rather, completely defangs the Mere Addition Paradox, may or may not completely defang Utility Monsters depending on details but at least reduces their impact.

comment by private_messaging · 2013-08-06T20:00:23.856Z · LW(p) · GW(p)

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words it has to be effectively flat. And I doubt they would have said anything different if I'd said 3^^^^3.

And why should they consider 3^^^^3 differently, if their function asymptotically approaches a limit? Besides, a human utility function would take in the whole picture, and then perhaps consider duplicates, uniqueness (you don't want your prehistoric tribe to lose the last man who knows how to make a stone axe), and so on, rather than evaluate one by one and then sum.

Scope insensitivity - extremely sublinear aggregation by individuals considering bad events happening to many people - can lead to mass defection in a multiplayer prisoner's dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today but this causes the world to get warmer by 0.000001 degree Celsius. This poses very little annoyance to any individual, and my utility function aggregates sublinearly over individuals, so I conclude that it's best to go skydiving. Then a billion people go skydiving and we all catch on fire. Which exact person in the chain should first refuse?

The false allure of oversimplified morality is in the ease of inventing hypothetical examples where it works great.

One could, of course, posit a colder planet. Most of the population would prefer that planet to be warmer, but if the temperature rise exceeds 5 Celsius, the gas hydrates melt, and everyone dies. And they all have to decide on the same day. Or one could posit a planet Linearium populated entirely by people who really love skydiving, who would want to skydive every day, but that would raise the global temperature by 100 Celsius, and they'd rather be alive than skydive every day and boil to death. They opt to skydive on their birthdays at the expense of a 0.3 degree global temperature rise, which each one of them finds to be an acceptable price to pay for getting to skydive on one's birthday.

comment by [deleted] · 2015-03-25T13:15:46.858Z · LW(p) · GW(p)

But still, WHY is torture better? What is even the problem with the dust specks? Some of the people who get dust specks in their eyes will die in accidents caused by the dust particles? Is this why the dust specks are so bad? But then, have we considered the fact that dust specks may save an equal number of people, who would otherwise die? I really don't get it and it bothers me a lot.

Replies from: Roho, dxu, None, Kindly
comment by Roho · 2015-03-25T14:34:39.273Z · LW(p) · GW(p)

Okeymaker, I think the argument is this:

Torturing one person for 50 years is better than torturing 10 persons for 40 years.

Torturing 10 persons for 40 years is better than torturing 1000 persons for 10 years.

Torturing 1000 persons for 10 years is better than torturing 1000000 persons for 1 year.

Torturing 10^6 persons for 1 year is better than torturing 10^9 persons for 1 month.

Torturing 10^9 persons for 1 month is better than torturing 10^12 persons for 1 week.

Torturing 10^12 persons for 1 week is better than torturing 10^15 persons for 1 day.

Torturing 10^15 persons for 1 day is better than torturing 10^18 persons for 1 hour.

Torturing 10^18 persons for 1 hour is better than torturing 10^21 persons for 1 minute.

Torturing 10^21 persons for 1 minute is better than torturing 10^30 persons for 1 second.

Torturing 10^30 persons for 1 second is better than torturing 10^100 persons for 1 millisecond.

Torturing for 1 millisecond is exactly what a dust speck does.

And if you disagree with the numbers, you can add a few millions. There is still plenty of space between 10^100 and 3^^^3.
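
For a sense of the trade made at each link, a small sketch tabulating the chain above (durations are approximate; only the orders of magnitude matter):

```python
# (people, approximate duration in seconds) for each link of the chain above.
YEAR = 3.15e7
chain = [
    (1,       50 * YEAR),
    (10,      40 * YEAR),
    (10**3,   10 * YEAR),
    (10**6,   1 * YEAR),
    (10**9,   YEAR / 12),   # one month
    (10**12,  7 * 86400),   # one week
    (10**15,  86400),       # one day
    (10**18,  3600),        # one hour
    (10**21,  60),          # one minute
    (10**30,  1),           # one second
    (10**100, 1e-3),        # one millisecond
]

for people, seconds in chain:
    print(f"{people:>9.1e} people x {seconds:>9.3g} s = {people * seconds:.3g} person-seconds")
# Per-person severity falls at every link while total person-time explodes;
# whether each step really is "better" is exactly what the chain forces you
# to decide.
```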

Replies from: None, private_messaging
comment by [deleted] · 2015-03-25T18:16:04.091Z · LW(p) · GW(p)

Yes, if this is the case (would be nice if Eliezer confirmed it) I can see where the logic halts from my perspective :)

Explanatory example if someone care:

Torturing 10^21 persons for 1 minute is better than torturing 10^30 persons for 1 second.

I disagree. From my moral standpoint AND from my utility function, where I am a bystander who perceives all humans as a cooperating system and wants to minimize the damage to it, I think that it is better for 10^30 persons to put up with 1 second of intense pain than for a single one to have to survive a whole minute. It is much, much easier to recover from one second of pain than from being tortured for a minute.

And a dust speck is virtually harmless. The potential harm it may cause should at least POSSIBLY be outweighed by the benefits, e.g. someone not being run over by a car because he stopped and scratched his eye.

Replies from: Roho, Jiro
comment by Roho · 2015-03-25T19:04:26.359Z · LW(p) · GW(p)

Okay, so let's zoom in here. What is preferable?

Torturing 1 person for 60 seconds

Torturing 100 persons for 59 seconds

Torturing 10000 persons for 58 seconds

etc.

Kind of a paradox of the heap. How many seconds of torture are still torture?

And 10^30 is really a lot of people. That's what Eliezer meant by "scope insensitivity". And all of them would be really grateful if you spared them their second of pain. Could that be worth a minute of pain?

comment by Jiro · 2015-03-25T19:29:07.577Z · LW(p) · GW(p)

The potential harm it may cause should at least POSSIBLY be outweighted by the benefits, e.g. someone not being run over by a car because he stopped and scratched his eye.

That's fighting the hypothetical. Assume that the speck is such that the harm caused by the speck slightly outweighs the benefits.

Replies from: None
comment by [deleted] · 2015-03-25T20:20:27.757Z · LW(p) · GW(p)

Or the benefits could slightly outweigh the harm.

You have to treat this option as a net win of 0 then, because you have no more info to go on, so the probabilities are 50/50. Option A: Torture. Net win is negative. Option B: Dust speck. Net win is zero. Make your choice.

Replies from: Quill_McGee
comment by Quill_McGee · 2015-03-25T21:43:26.603Z · LW(p) · GW(p)

In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm with no knock-on effects (no avoiding buses, no crashing cars...)

Replies from: private_messaging
comment by private_messaging · 2015-03-26T08:02:14.490Z · LW(p) · GW(p)

I thought the original point was to focus just on the inconvenience of the dust speck, rather than simply positing that out of 3^^^3 people who were dust-specked, one person would've gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma; it's merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in the eyes.

Replies from: Quill_McGee
comment by Quill_McGee · 2015-03-27T03:09:14.638Z · LW(p) · GW(p)

Exactly! No knock-on effects. Perhaps you meant to comment on the grandparent (great-grandparent? do I measure from this post or from your post?) instead?

Replies from: private_messaging
comment by private_messaging · 2015-03-27T12:34:48.555Z · LW(p) · GW(p)

Yeah, clicked the wrong button.

comment by private_messaging · 2015-03-26T08:05:19.632Z · LW(p) · GW(p)

Torturing a person for 1 millisecond is not necessarily even a possibility. It doesn't make any sense whatsoever; in 1 millisecond no interesting feedback loops can even close.

If we accept that torture is some class of computational processes that we wish to avoid, the badness definitely could be eating up your 3^^^3s in one way or the other. We have absolutely zero reason to expect linearity when some (however unknown) properties of a set of computations are involved. And the computational processes are not infinitely divisible into smaller lengths of time.

Replies from: TomStocker, dxu
comment by TomStocker · 2015-03-26T11:19:45.794Z · LW(p) · GW(p)

Agreed. Having lived in chronic pain supposedly worse than untrained childbirth, I'd say that even an hour has a seriously different capacity for suffering than a day, and a day is different from a week. For me the chain breaks down somewhere, even when multiplying between the 10^15 people for 1 day and the 10^21 people for one minute. You can't really feel an amount of pain in a minute that is comparable to a day, even allowing for orders of magnitude; it's just qualitatively different. Interested to hear pushback on this.

Replies from: Kindly
comment by Kindly · 2015-03-26T13:44:36.660Z · LW(p) · GW(p)

We could go from a day to a minute more slowly; for example, by increasing the number of people by a factor of a googolplex every time the torture time decreases by 1 second.

I absolutely agree that the length of torture increases how bad it is in nonlinear ways, but this doesn't mean we can't find exponential factors that dominate it at every point at least along the "less than 50 years" range.
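As a rough sanity check on the arithmetic of that suggestion (a sketch; the googolplex-per-second factor is the one proposed above, and the rest is just exponent bookkeeping):

    # Going from one day down to one minute in one-second steps, multiplying the
    # number of people by a googolplex (10**(10**100)) at each step.
    one_day = 24 * 60 * 60                          # 86,400 seconds
    one_minute = 60
    decrements = one_day - one_minute               # 86,340 one-second steps

    log10_googolplex = 10**100                      # googolplex = 10**(10**100)
    log10_total_factor = decrements * log10_googolplex

    print(decrements)                               # 86340
    # The final head-count is 10**(86,340 * 10**100) times the starting one;
    # even the exponent of that multiplier is a 105-digit number,
    # and yet the whole thing is still nothing next to 3^^^3.
    print(len(str(log10_total_factor)))             # 105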

Replies from: private_messaging, TomStocker
comment by private_messaging · 2015-03-26T20:21:38.768Z · LW(p) · GW(p)

That strikes me as a deliberate set up for a continuum fallacy.

Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?

I'd much prefer to have a [large number of exact copies of me] experience 1 second of headache than to have one me suffer it for a whole day, because those copies don't have any mechanism which could compound their suffering. They aren't even different subjectivities. I don't see any reason why a hypothetical mind upload of me running on multiple redundant hardware should be a utility monster, if it can't even tell subjectively how redundant its hardware is.

Some anaesthetics do something similar, preventing any new long-term memories, and people have no problem taking those for surgery. Something is still experiencing pain, but it isn't compounding into anything really bad (unless the drugs fail to work, or unless some form of long-term memory still works). That is a real example of a very strong preference for N independent experiences of 30 seconds of pain over 1 experience of 30*N seconds of pain.

Replies from: Kindly
comment by Kindly · 2015-03-26T22:30:03.855Z · LW(p) · GW(p)

It's not a continuum fallacy because I would accept "There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don't know the exact values of N and T" as an answer. If, on the other hand, the comparison goes the other way for any values of N and T, then you have to accept the transitive closure of those comparisons as well.

Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?

I'm not sure what you mean by this. I don't believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that's ridiculous. I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.

Regarding anaesthetics: I would prefer a memory inhibitor for a painful surgery to the absence of one, but I would still strongly prefer to feel less pain during the surgery even if I know I will not remember it one way or the other. Is this preference unusual?

Replies from: None, private_messaging
comment by [deleted] · 2015-03-26T22:50:11.946Z · LW(p) · GW(p)

I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.

This is where the argument for choosing torture falls apart for me, really. I don't think there is any number of people getting dust specks in their eyes that would be worse than torturing one person for fifty years. I have to assume my utility function over other people is asymptotic; the amount of disutility of choosing to let even an infinity of people get dust specks in their eyes is still less than the disutility of one person getting tortured for fifty years.

I'm not sure what you mean by this. I don't believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that's ridiculous.

I think he's questioning the idea that two people getting dust specks in their eyes is twice the disutility of one person getting dust specks, and that is the linearity he's referring to.

Personally, I think the problem stems from dust specks being such a minor inconvenience that it's basically below the noise threshold. I'd almost be indifferent between choosing for nothing to happen or choosing for everyone on Earth to get dust specks (assuming they don't cause crashes or anything).

Replies from: SimonJester23, SimonJester23
comment by SimonJester23 · 2015-03-27T02:37:30.825Z · LW(p) · GW(p)

There's the question of linearity- but if you use big enough numbers you can brute force any nonlinear relationship, as Yudkowsky correctly pointed out some years ago. Take Kindly's statement:

"There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don't know the exact values of N and T"

We can imagine a world where this statement is true (probably for a value of T really close to 1). And we can imagine knowing the correct values of N and T in that world. But even then, if a critical condition is met, it will be true that

"For all values of N, and for all T>1, there exists a value of A such that torturing N people for T seconds is better than torturing A*N people for T-1 seconds."

Sure, the value of A may be larger than 10^100... But then, 3^^^3 is already vastly larger than 10^100. And if it weren't big enough we could just throw a bigger number at the problem; there is no upper bound on the size of conceivable real numbers. So if we grant the critical condition in question, as Yudkowsky does/did in the original post...

Well, you basically have to concede that "torture" wins the argument, because even if you say that [hugenumber] of dust specks does not equate to a half-century of torture, that is NOT you winning the argument. That is just you trying to bid up the price of half a century of torture.

The critical condition that must be met here is simple, and is an underlying assumption of Yudkowsky's original post: All forms of suffering and inconvenience are represented by some real number quantity, with commensurate units to all other forms of suffering and inconvenience.

In other words, the "torture one person rather than allow 3^^^3 dust specks" option wins, quite predictably, if and only if it is true that the 'pain' component of the utility function is measured in one and only one dimension.

So the question is, basically, do you measure your utility function in terms of a single input variable?

If you do, then either you bury your head in the sand and develop a severe case of scope insensitivity... or you conclude that there has to be some number of dust specks worse than a single lifetime of torture.

If you don't, it raises a large complex of additional questions, but so far as I know, there may well be space to construct coherent, rational systems of ethics in that realm of ideas.

comment by SimonJester23 · 2015-03-30T22:33:41.066Z · LW(p) · GW(p)

It occurred to me to add something to my previous comments about the idea of harm being nonlinear, or something that we compute in multiple dimensions that are not commensurate.

One is that any deontological system of ethics automatically has at least two dimensions. One for general-purpose "utilons," and one for... call them "red flags." As soon as you accumulate even one red flag you are doing something capital-w Wrong in that system of ethics, regardless of the number of utilons you've accumulated.

The main argument justifying this is, of course, that you may think you have found a clever way to accumulate 3^^^3 utilons in exchange for a trivial amount of harm (torture ONLY one scapegoat!)... but the overall weighted average of all human moral reasoning suggests that people who think they've done this are usually wrong. Therefore, best to red-flag such methods, because they usually only sound clever.

Obviously, one may need to take this argument with a grain of salt, or 3^^^3 grains of salt. It depends on how strongly you feel bound to honor conclusions drawn by looking at the weighted average of past human decision-making.


The other observation that occurred to me is unrelated. It is about the idea of harm being nonlinear, which as I noted above is just plain not enough to invalidate the torture/specks argument by itself due to the ability to keep thwacking a nonlinear relationship with bigger numbers until it collapses.

Take as a thought-experiment an alternate Earth where, in the year 1000, population growth has stabilized at an equilibrium level, and will rise back to that equilibrium level in response to sudden population decrease. The equilibrium level is assumed to be stable in and of itself.

Imagine aliens arriving and killing 50% of all humans, chosen apparently at random. Then they wait until the population has returned to equilibrium (say, 150 years) and do it again. Then they repeat the process twice more.

The world population circa 1000 was roughly 300 million, so we estimate that this process would kill about 600 million people.

Now consider as an alternative, said aliens simply killing everyone, all at once. 300 million dead.

Which outcome is worse?

If harm is strictly linear, we would expect that one death plus one death is exactly as bad as two deaths. By the same logic, 300 megadeaths is only half as bad as 600 megadeaths, and if we inoculate ourselves against hyperbolic discounting...

Well, the "linear harm" theory smacks into a wall. Because it is very credible to claim that the extinction of the human species is much worse than merely twice as bad as the extinction of exactly half the human species. Many arguments can be presented, and no doubt have been presented on this very site. The first that comes to mind is that human extinction means the loss of all potential future value associated with humans, not just the loss of present value, or even the loss of some portion of the potential future.

We are forced to conclude that there is a "total extinction" term in our calculation of harm, one that rises very rapidly in an 'inflationary' way. And it would do this as the destruction wrought upon humanity reaches and passes a level beyond which the species could not recover- the aliens killing all humans except one is not noticeably better than killing all of them, nor is sparing any population less than a complete breeding population, but once a breeding population is spared, there is a fairly sudden drop in the total quantity of harm.

Now, again, in itself this does not strictly invalidate the Torture/Specks argument. Assuming that the harm associated with human extinction (or torturing one person) is any finite amount that could conceivably be equalled by adding up a finite number of specks in eyes, then by definition there is some "big enough" number of specks that the aliens would rationally decide to wipe out humanity rather than accept that many specks in that many eyes.

But I can't recall a similar argument for nonlinear harm measurement being presented in any of the comments I've sampled, and I thought it was interesting, so I wanted to mention it.

Replies from: private_messaging
comment by private_messaging · 2015-03-30T23:32:54.573Z · LW(p) · GW(p)

I mentioned duplication: among 3^^^3 people, most have to be exact duplicates of one another from birth to death.

In your extinction example, once you have substantially more than the breeding population, the extra people duplicate some aspects of your population (the ability to breed), which causes you to find the loss less bad.

The other observation that occurred to me is unrelated. It is about the idea of harm being nonlinear, which as I noted above is just plain not enough to invalidate the torture/specks argument by itself due to the ability to keep thwacking a nonlinear relationship with bigger numbers until it collapses.

Not every non-linear relationship can be thwacked with bigger and bigger numbers...

comment by private_messaging · 2015-03-26T23:01:46.304Z · LW(p) · GW(p)

don't know the exact values of N and T

For one thing N=1 T=1 trivially satisfies your condition...

I'm not sure what you mean by this.

I mean, suppose that you got yourself a function that takes in a description of what's going on in a region of spacetime and spits out a real number saying how bad it is.

Now, that function can do all sorts of perfectly reasonable things that could make it asymptotic for large numbers of people; for example, it could be counting distinct subjective experiences in there (otherwise a mind upload on massively redundant hardware is a utility monster, despite having a subjective experience identical to the same upload running once; that's much sillier than the usual utility monster, which at least feels much stronger feelings). This would impose a finite limit (for brains of finite complexity).

One thing that function can't do is have the general property that f(a union b) = f(a) + f(b), because then we could just subdivide our space into individual atoms, none of which is feeling anything.

Replies from: Kindly
comment by Kindly · 2015-03-27T00:17:18.353Z · LW(p) · GW(p)

For one thing N=1 T=1 trivially satisfies your condition...

Obviously I only meant to consider values of T and N that actually occur in the argument we were both talking about.

Replies from: private_messaging
comment by private_messaging · 2015-03-27T12:33:53.975Z · LW(p) · GW(p)

Well, I'm not sure what the point is, then, or what you're trying to infer from it.

comment by TomStocker · 2015-04-23T13:58:32.756Z · LW(p) · GW(p)

Obviously. Just important to remember that extremity of suffering is something we frequently fail to think well about.

Replies from: Kindly
comment by Kindly · 2015-04-23T14:34:33.518Z · LW(p) · GW(p)

Absolutely. We're bad at anything that we can't easily imagine. Probably, for many people, intuition for "torture vs. dust specks" imagines a guy with a broken arm on one side, and a hundred people saying 'ow' on the other.

The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn't take the number of people saved by an intervention into account; we just picture the typical effect on a single person.

What, I wonder, are the consequence of our poor imagination for extremity of suffering? For me, the prison system comes to mind: I don't know how bad being in prison is, but it probably becomes much worse than I imagine if you're there for 50 years, and we don't think about that at all when arguing (or voting) about prison sentences.

Replies from: dxu, TomStocker
comment by dxu · 2015-04-23T15:58:29.579Z · LW(p) · GW(p)

My heuristic for dealing with such situations is somewhat reminiscent of Hofstadter's Law: however bad you imagine it to be, it's worse than that, even when you take the preceding statement into account. In principle, this recursion should go on forever and lead to you regarding any sufficiently unimaginably bad situation as infinitely bad, but in practice, I've yet to have it overflow, probably because your judgment spontaneously regresses back to your original (inaccurate) representation of the situation unless consciously corrected for.

Replies from: Lumifer
comment by Lumifer · 2015-04-23T17:01:36.377Z · LW(p) · GW(p)

Obligatory xkcd.

Replies from: Nornagest
comment by Nornagest · 2015-04-23T18:48:00.927Z · LW(p) · GW(p)

That would have been a better comic without the commentary in the last panel.

Replies from: Lumifer
comment by Lumifer · 2015-04-23T18:59:53.042Z · LW(p) · GW(p)

But the alt text is great X-)

comment by TomStocker · 2015-05-12T12:39:47.192Z · LW(p) · GW(p)

My feeling is that situations like being caught doing something horrendous might or might not be subject to psychological adjustment - that many situations of suffering are subject to psychological adjustment and so might actually be not as bad as we thought. But chronic intense pain is, to some degree, literally unadjustable - you can adjust to being in intense suffering, but that doesn't make the intense suffering go away. That's why I think it's a special class of states of being - one that calls for action. What do people think?

comment by dxu · 2015-03-27T01:06:50.719Z · LW(p) · GW(p)

Okay, here's a new argument for you (originally proposed by James Miller, and which I have yet to see adequately addressed): assume that you live on a planet with a population of 3^^^3 distinct people. (The "planet" part is obviously not possible, and the "distinct" part may or may not be possible, but for the purposes of a discussion about morality, it's fine to assume these.)

Now let's suppose that you are given a choice: (a) everyone on the planet can get a dust speck in the eye right now, or (b) the entire planet holds a lottery, and the one person who "wins" (or "loses", more accurately) will be tortured for 50 years. Which would you choose?

If you are against torture (as you seem to be, from your comment), you will presumably choose (a). But now let's suppose you are allowed to blink just before the dust speck enters your eye. Call this choice (c). Seeing as you probably prefer not having a dust speck in your eye to having one in your eye, you will most likely prefer (c) to (a).

However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3. But since the lottery proposed in (b) only offers a 1/3^^^3 probability of being picked for the torture, (b) is preferable to (c).

Then, by the transitivity axiom, if you prefer (c) to (a) and (b) to (c), you must prefer (b) to (a).

Q.E.D.
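A toy version of that ordering, for concreteness (everything numeric below is a placeholder: N merely stands in for 3^^^3, which no computer can represent, and the harms and baseline kidnapping probability are invented; the only structural assumption carried over from the argument is that blinking adds more than 1/N to your torture risk):

    # Toy expected-harm comparison of options (a), (b), (c) above.
    from fractions import Fraction

    N = 10**100                           # stand-in for 3^^^3
    harm_speck = 1                        # harm of one dust speck (arbitrary units)
    harm_torture = 10**12                 # harm of 50 years of torture (same units)
    p_base = Fraction(1, 10**6)           # baseline chance of being grabbed and tortured
    blink_extra = Fraction(2, N)          # extra risk from blinking, assumed > 1/N

    ev_a = harm_speck + p_base * harm_torture         # (a) take the speck in the eye
    ev_c = (p_base + blink_extra) * harm_torture      # (c) blink it away
    ev_b = (p_base + Fraction(1, N)) * harm_torture   # (b) planet-wide torture lottery

    assert ev_c < ev_a   # you prefer blinking to taking the speck
    assert ev_b < ev_c   # and the lottery to the extra blink risk
    assert ev_b < ev_a   # so, by transitivity, the lottery beats the specks

With exact rational arithmetic all three assertions pass; the sketch is only the transitivity step made explicit, not a claim about the actual numbers.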

Replies from: EHeller, Epictetus
comment by EHeller · 2015-03-27T01:10:29.354Z · LW(p) · GW(p)

However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3.

This seems pretty unlikely to be true.

Replies from: dxu
comment by dxu · 2015-03-27T01:22:17.284Z · LW(p) · GW(p)

I think you underestimate the magnitude of 3^^^3 (and thereby overestimate the magnitude of 1/3^^^3).

Replies from: EHeller
comment by EHeller · 2015-03-27T01:38:45.685Z · LW(p) · GW(p)

Both numbers seem basically arbitrarily small (probability 0).

Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.

Replies from: dxu
comment by dxu · 2015-03-27T02:27:23.796Z · LW(p) · GW(p)

Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.

Well, I mean, obviously a single person can't be kidnapped more than once every 50 years (assuming that's how long each torture session lasts), and certainly not several times a day, since he/she wouldn't have finished being tortured quickly enough to be kidnapped again. But yes, the general sentiment of your comment is correct, I'd say. The prospect of a planet with daily kidnappings and 50-year-long torture sessions may seem strange, but that sort of thing is just what you get when you have a population count of 3^^^3.

Replies from: EHeller
comment by EHeller · 2015-03-27T05:55:00.306Z · LW(p) · GW(p)

I worked it out back of the envelope, and the probability of being kidnapped when you blink is only 1/5^^^5.

Replies from: dxu
comment by dxu · 2015-03-27T18:00:47.014Z · LW(p) · GW(p)

Well, now I know you're underestimating how big 3^^^3 is (and 5^^^5, too). But let's say somehow you're right, and the probability really is 1/5^^^5. All I have to do is modify the thought experiment so that the planet has 5^^^5 people instead of 3^^^3. There, problem solved.

So, new question: would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 5^^^5 people get dust specks in their eyes?

comment by Epictetus · 2015-03-27T02:04:09.613Z · LW(p) · GW(p)

However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3.

And the time spent setting up a lottery and carrying out the drawing also increases the probability that someone else gets captured and tortured in the intervening time, far more than blinking would. In fact, the probability goes up anyway in that fraction of a second, whether you blink or not. You can't stop time, so there's no reason to prefer (c) to (b).

Replies from: dxu
comment by dxu · 2015-03-27T02:17:39.062Z · LW(p) · GW(p)

In fact, the probability goes up anyway in that fraction of a second, whether you blink or not.

Ah, sorry; I wasn't clear. What I meant was that blinking increases your probability of being tortured beyond the normal "baseline" probability of torture. Obviously, even if you don't blink, there's still a probability of you being tortured. My claim is that blinking affects the probability of being tortured so that the probability is higher than it would be if you hadn't blinked (since you can't see for a fraction of a second while blinking, leaving you ever-so-slightly more vulnerable than you would be with your eyes open), and moreover that it would increase by more than 1/3^^^3. So basically what I'm saying is that P(torture|blink) > P(torture|~blink) + 1/3^^^3.

Replies from: Epictetus
comment by Epictetus · 2015-03-27T03:18:59.466Z · LW(p) · GW(p)

Let me see if I get this straight:

The choice comes down to dust specks at time T or dust specks at time T + dT, where the interval dT allows you time to blink. The argument is that in the interval dT, the probability of being captured and tortured increases by an amount greater than your odds in the lottery.

It seems to me that the blinking is immaterial. If the question were whether to hold the lottery today or put dust in everyone's eyes tomorrow, the argument should be unchanged. It appears to hinge on the notion that as time increases, so do the odds of something bad happening, and therefore you'd prefer to be in the present instead of the future.

The problem I have is that the future is going to happen anyway. Once the interval dT passes, the odds of someone being captured in that time will go up regardless of whether you chose the lottery or not.

comment by dxu · 2015-03-25T21:10:15.925Z · LW(p) · GW(p)

What is even the problem with the dust specks?

If I told you that a dust speck was about to float into your left eye in the next second, would you (a) take it full in the eye, or (b) blink to keep it out? If you say you would blink, you are implicitly acknowledging that you prefer not getting specked to getting specked, and thereby conceding that getting specked is worse than not getting specked. If you would take it full in the eye, well... you're weird.

comment by [deleted] · 2015-03-25T21:17:50.830Z · LW(p) · GW(p)

It's not (necessarily) about dust specks accidentally leading to major accidents. But if you think that having a dust speck in your eye may be even slightly annoying (whether you consciously know that or not), the cost you have from having it fly into your eye is not zero.

Now, something nonzero multiplied by a sufficiently large number will necessarily be larger than the cost of one human being's lifetime of torture.

Replies from: None, private_messaging
comment by [deleted] · 2015-03-26T10:20:11.570Z · LW(p) · GW(p)

Now you are getting it completely wrong. You can't add up harm from dust specks if it is happening to different people. Every individual has the capability to recover from it. Think about it. With that logic it is worse to rip a hair from every living being in the universe than to nuke New York. If the people in charge reasoned that way, we might have Armageddon in no time.

Replies from: helltank, None
comment by helltank · 2015-03-26T10:56:01.090Z · LW(p) · GW(p)

That's ridiculous. So mild pains don't count if they're done to many different people?

Let's give a more obvious example. It's better to kill one person than to amputate the right hands of 5000 people, because the total pain will be less.

Scaling down, we can say that it's better to amputate the right hands of 50,000 people than to torture one person to death, because the total pain will be less.

Keep repeating this in your head (see how consistent it feels, how it makes sense).

Now just extrapolate to the claim that it's better to have 3^^^3 people get dust specks in their eyes than to torture one person to death, because the total pain will be less. The hair-ripping argument isn't good enough because (number of people on Earth) × (pain from a hair rip) < (number of people in New York) × (pain of being nuked). The math doesn't add up in your straw-man example, unlike with the actual example given.
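To make the shape of that inequality concrete (the pain values are pure placeholders; only the direction of the comparison is the point):

    # Made-up numbers for the hair-rip vs. New York comparison above.
    people_on_earth = 8 * 10**9        # rough head-count
    people_in_new_york = 8 * 10**6     # rough head-count
    pain_hair_rip = 1                  # arbitrary units, tiny per person
    pain_being_nuked = 10**7           # arbitrary units, vastly larger per person

    total_hair_rip = people_on_earth * pain_hair_rip
    total_nuke = people_in_new_york * pain_being_nuked
    print(total_hair_rip < total_nuke)   # True with these numbers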

As a side note, you are also appealing to consequences.

Replies from: dxu
comment by dxu · 2015-03-26T16:09:18.908Z · LW(p) · GW(p)

(number of people on Earth) × (pain from a hair rip) < (number of people in New York) × (pain of being nuked)

I think Okeymaker was actually referring to all the people in the universe. While the number of "people" in the universe (defining a "person" as a conscious mind) isn't a known number, let's do as blossom does and assume Okeymaker was referring to the Level I multiverse. In that case, the calculation isn't nearly as clear-cut. (That being said, if I were considering a hypothetical like that, I would simply modus ponens Okeymaker's modus tollens and reply that I would prefer to nuke New York.)

comment by [deleted] · 2015-03-26T11:03:16.189Z · LW(p) · GW(p)

If

  1. Each human death has only finite cost. We sure act this way in our everyday lives, exchanging human lives for the convenience of driving around with cars etc.
  2. By our universe you do not mean only the observable universe, but include the level I multiverse

then yes, that is the whole point. A tiny amount of suffering multiplied by a sufficiently large number obviously is eventually larger than the fixed cost of nuking New York.

Unless you can tell me why my model for the costs of suffering distributed over multiple people is wrong, I don't see why I should change it. "I don't like the conclusions!!!" is not a valid objection.

If the people in charge reasoned that way, we might have Armageddon in no time.

If they ever justifiably start to reason that way, i.e. if they actually have the power to rip a hair from every living human being, I think we'll have larger problems than the potential nuking of New York.

Replies from: None
comment by [deleted] · 2015-03-26T12:27:28.993Z · LW(p) · GW(p)

Okay, I was trying to learn from this post, but now I see that I have to try to explain things myself in order for this communication to become useful. When it comes to pain, it is hard to explain why one person's great suffering is worse than many people suffering very, very little if you don't already understand it yourself. So let us change the currency from pain to money.

Let's say that you and I need to fund a large algae plantation so that the Earth's population can escape starvation due to lack of food. This project is of great importance for the whole world, so we can force anyone to become a sponsor, and this is good because we need the money FAST. We work for the whole world (read: Earth) and we want to minimize the damage from our actions. This project is really expensive, however... Should we:

a) Take one dollar from every person around the world earning at least a minimum wage, who can still afford housing, food, etc. even if we take that one dollar?

or should we

b) Take all the money (instantly) from Denmark and watch it collapse into bankruptcy?

If you ask me, it is obvious that we don't want Denmark to go bankrupt just because it may annoy some people to sacrifice 1 dollar.

Replies from: None, Jiro
comment by [deleted] · 2015-03-26T13:30:01.400Z · LW(p) · GW(p)

In this case I do not disagree with you. The number of people on earth is simply not large enough.

But if you asked me whether to take money from 3^^^3 people compared to throwing Denmark into bankruptcy, I would choose the latter.

Math should override intuition. So unless you can convince me of a model that is more reasonable than adding up costs/utilities, I don't think you will change my mind.

Replies from: None
comment by [deleted] · 2015-03-26T14:14:26.315Z · LW(p) · GW(p)

Now I see what is fundamentally wrong with the article and your reasoning, from MY perspective. You don't seem to understand the difference between a permanent sacrifice and a temporary one.

If we substitute index fingers for the dust specks, for example, I agree that it is reasonable to think that killing one person is far better than having 3 billion (we don't need 3^^^3 for this one) persons lose their index fingers, because that is a permanent sacrifice. At least for now, we can't just regrow fingers. Getting dust in your eye, on the other hand, is only temporary. You will get over it really quickly and forget all about it. But 50 years of torture is something you will never fully heal from; it will ruin a person's life and cause permanent damage.

comment by Jiro · 2015-03-26T15:51:09.732Z · LW(p) · GW(p)

If you ask me, it is obvious that we don't want Denmark to go bankrupt just because it may annoy some people to sacrifice 1 dollar.

The trouble is that there is a continuous sequence from

Take $1 from everyone

Take $1.01 from almost everyone

Take $1.02 from almost almost everyone

...

Take a lot of money from very few people (Denmark)

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly. You will have to say, for instance, taking $20 each from 1/20 the population of the world is good, but taking $20.01 each from slightly less than 1/10 the population of the world is bad. Can you say that?
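A quick sketch of that sequence (the endpoints are placeholders, not real economic figures; the point is only that each step changes things by about a percent or less while the endpoints are wildly different):

    # Interpolate smoothly from "take $1 from everyone" to "take a lot from few".
    world_population = 8 * 10**9
    denmark_population = 6 * 10**6      # stand-in for "very few people"

    steps = 1000
    per_step_people_ratio = (denmark_population / world_population) ** (1 / steps)
    per_step_amount_ratio = 20000.0 ** (1 / steps)
    print(per_step_people_ratio, per_step_amount_ratio)   # both within about 1% of 1.0

    for k in range(0, steps + 1, 100):  # print a few milestones along the sequence
        t = k / steps
        payers = int(world_population * (denmark_population / world_population) ** t)
        amount = 20000.0 ** t           # from $1 up to $20,000 per payer (placeholder)
        print(f"step {k}: take ${amount:,.2f} from {payers:,} people")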

Replies from: Lumifer, dxu, None, None
comment by Lumifer · 2015-03-26T16:42:44.205Z · LW(p) · GW(p)

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

If you think that 100°C water is hot and 0°C water is cold, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

Replies from: dxu, Jiro
comment by dxu · 2015-03-26T16:49:39.970Z · LW(p) · GW(p)

No, because temperature is (very close to) a continuum, whereas good/bad is binary. To see this more clearly, you can replace the question, "Is this action good or bad?" with "Would an omniscient, moral person choose to take this action?", and you can instantly see the answer can only be "yes" (good) or "no" (bad).

(Of course, it's not always clear which choice the answer is--hence why so many argue over it--but the answer has to be, in principle, either "yes" or "no".)

Replies from: Lumifer, Good_Burning_Plastic
comment by Lumifer · 2015-03-26T17:12:01.412Z · LW(p) · GW(p)

No, because temperature is (very close to) a continuum, whereas good/bad is a binary.

First, I'm not talking about temperature, but about categories "hot" and "cold".

Second, why in the world would good/bad be binary?

"Would an omniscient, moral person choose to take this action?"

I have no idea -- I don't know what an omniscient person (aka God) will do, and in any case the answer is likely to be "depends on which morality we are talking about".

Oh, and would an omniscient being call that water hot or cold?

Replies from: dxu
comment by dxu · 2015-03-26T17:33:37.301Z · LW(p) · GW(p)

First, I'm not talking about temperature, but about categories "hot" and "cold".

You'll need to define your terms for that, then. (And for the record, I don't use the words "hot" and "cold" exclusively; I also use terms like "warm" or "cool" or "this might be a great temperature for a swimming pool, but it's horrible for tea".)

Also, if you weren't talking about temperature, why bother mentioning degrees Celsius when talking about "hotness" and "coldness"? Clearly temperature has something to do with it, or else you wouldn't have mentioned it, right?

Second, why in the world would good/bad be binary?

Because you can always replace a question of goodness with the question "Would an omniscient, moral person choose to take this action?".

I have no idea -- I don't know what an omniscient person (aka God) will do,

Just because you have no idea what the answer could be doesn't mean the true answer can fall outside the possible space of answers. For instance, you can't answer the question of "Would an omniscient moral reasoner choose to take this action?" with something like "fish", because that falls outside of the answer space. In fact, there are only two possible answers: "yes" or "no". It might be one; it might be the other, but my original point was that the answer to the question is guaranteed to be either "yes" or "no", and that holds true even if you don't know what the answer is.

the answer is likely to be "depends on which morality we are talking about"

There is only one "morality" as far as this discussion is concerned. There might be other "moralities" held by aliens or whatever, but the human CEV is just that: the human CEV. I don't care about what the Babyeaters think is "moral", or the Pebblesorters, or any other alien species you care to substitute--I am human, and so are the other participants in this discussion. The answer to the question "which morality are we talking about?" is presupposed by the context of the discussion. If this thread included, say, Clippy, then your answer would be a valid one (although even then, I'd rather talk game theory with Clippy than morality--it's far more likely to get me somewhere with him/her/it), but as it is, it just seems like a rather unsubtle attempt to dodge the question.

Replies from: Lumifer
comment by Lumifer · 2015-03-26T17:39:30.996Z · LW(p) · GW(p)

In fact, there are only two possible answers: "yes" or "no"

I don't think so.

You're making a circular argument -- good/bad is binary because there are only two possible states. I do not agree that there are only two possible states.

There is only one "morality" for the participants of this discussion.

Really? Either I'm not a participant in this discussion or you're wrong. See: a binary outcome :-D

but the human CEV is just that: the human CEV

I have no idea what the human CEV is and even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.

Replies from: dxu
comment by dxu · 2015-03-26T17:43:58.413Z · LW(p) · GW(p)

You're making a circular argument -- good/bad is binary because there are only two possible states. I do not agree that there are only two possible states.

Name a third alternative that is actually an answer, as opposed to some sort of evasion ("it depends"), and I'll concede the point.

Also, I'm aware that this isn't your main point, but... how is the argument circular? I'm not saying something like, "It's binary, therefore there are two possible states, therefore it's binary"; I'm just saying "There are two possible states, therefore it's binary."

Either I'm not a participant in this discussion or you're wrong. See: a binary outcome :-D

Are you human? (y/n)

I have no idea what the human CEV is and even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.

Which part do you object to? The "coherent" part, the "extrapolated" part, or the "volition" part?

Replies from: Lumifer
comment by Lumifer · 2015-03-26T18:03:48.610Z · LW(p) · GW(p)

Name a third alternative that is actually an answer

"Doesn't matter".

First of all you're ignoring the existence of morally neutral questions. Should I scratch my butt? Lessee, would an omniscient perfectly moral being scratch his/her/its butt? Oh dear, I think we're in trouble now... X-D

Second, you're assuming atomicity of actions and that's a bad assumption. In your world actions are very limited -- they can be done or not done, but they cannot be done partially, they cannot be slightly modified or just done in a few different ways.

Third, you're assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences.

Fourth, for the great majority of dilemmas in life (e.g. "Should I take this job?", "Should I marry him/her?", "Should I buy a new phone?") the answer "what an omniscient moral being would choose" is perfectly useless.

Which part do you object to?

The concept of CEV seems to me to be the direct equivalent of "God's will" -- handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the "coherent" part while also having great doubts about the "extrapolated" part as well.

Replies from: dxu
comment by dxu · 2015-03-26T18:16:07.719Z · LW(p) · GW(p)

would an omniscient perfectly moral being scratch his/her/its butt?

(Side note: this conversation is taking a rather strange turn, but whatever.)

If its butt feels itchy, and it would prefer for its butt to not feel itchy, and the best way to make its butt not feel itchy is to scratch it, and there are no external moral consequences to its decision (like, say, someone threatening to kill 3^^^3 people iff it scratches its butt)... well, it's increasing its own utility by scratching its butt, isn't it? If it increases its own utility by doing so and doesn't decrease net utility elsewhere, then that's a net increase in utility. Scratch away, I say.

Second, you're assuming atomicity of actions and that's a bad assumption. In your world actions are very limited -- they can be done or not done, but they cannot be done partially, they cannot be slightly modified or just done in a few different ways.

Sure. I agree I did just handwave a lot of stuff with respect to what an "action" is... but would you agree that, conditional on having a good definition of "action", we can evaluate "actions" morally? (Moral by human standards, of course, not Pebblesorter standards.)

Third, you're assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences.

Agreed, but if you come up with a way to make good/moral decisions in the idealized situation of omniscience, you can generalize to uncertain situations simply by applying probability theory.

Fourth, for the great majority of dilemmas in life (e.g. "Should I take this job?", "Should I marry him/her?", "Should I buy a new phone?") the answer "what an omniscient moral being would choose" is perfectly useless.

Again, I agree... but then, knowledge of the Banach-Tarski paradox isn't of much use to most people.

The concept of CEV seems to me to be the direct equivalent of "God's will" -- handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the "coherent" part while also having great doubts about the "extrapolated" part as well.

Fair enough. I don't have enough domain expertise to really analyze your position in depth, but at a glance, it seems reasonable.

Replies from: Lumifer
comment by Lumifer · 2015-03-26T18:29:36.464Z · LW(p) · GW(p)

it's increasing its own utility

The assumption that morality boils down to utility is a rather huge assumption :-)

would you agree that, conditional on having a good definition of "action", we can evaluate "actions" morally?

Conditional on having a good definition of "action" and on having a good definition of "morally".

you can generalize to uncertain situations simply by applying probability theory

I don't think so, at least not "simply". An omniscient being has no risk and no risk aversion, for example.

isn't of much use to most people

Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries... :-)

Replies from: dxu
comment by dxu · 2015-03-27T00:45:43.278Z · LW(p) · GW(p)

The assumption that morality boils down to utility is a rather huge assumption :-)

It's not an assumption; it's a normative statement I choose to endorse. If you have some other system, feel free to endorse that... but then we'll be discussing morality, and not meta-morality or whatever system originally produced your objection to Jiro's distinction between good and bad.

on having a good definition of "morally"

Agree.

An omniscient being has no risk and no risk aversion, for example.

Well, it could have risk aversion. It's just that risk aversion never comes into play during its decision-making process due to its omniscience. Strip away that omniscience, and risk aversion very well might rear its head.

Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries... :-)

I disagree. Take the following two statements:

  1. Morality, properly formalized, would be useful for practical purposes.
  2. Morality is not currently properly formalized.

There is no contradiction in these two statements.

Replies from: Lumifer
comment by Lumifer · 2015-03-27T14:37:16.949Z · LW(p) · GW(p)

There is no contradiction in these two statements.

But they have a consequence: Morality currently is not useful for practical purposes.

That's... an interesting position. Are you willing to live with it? X-)

You can, of course, define morality in this particular way, but why would you do that?

comment by Good_Burning_Plastic · 2015-03-26T21:41:32.858Z · LW(p) · GW(p)

To see this more clearly, you can replace the question, "Is this action good or bad?" with "Would an omniscient, moral person choose to take this action?", and you can instantly see the answer can only be "yes" (good) or "no" (bad).

By that definition, almost all actions are bad.

Also, why the heck do you think there exist words for "better" and "worse"?

Replies from: dxu
comment by dxu · 2015-03-27T00:30:55.301Z · LW(p) · GW(p)

By that definition, almost all actions are bad.

True. I'm not sure why that matters, though. It seems trivially obvious to me that a random action selected out of the set of all possible actions would have an overwhelming probability of being bad. But most agents don't select actions randomly, so that doesn't seem to be a problem. After all, the key aspect of intelligence is that it allows you to hit extremely tiny targets in configuration space; the fact that most configurations of particles don't give you a car doesn't prevent human engineers from making cars. Why would the fact that most actions are bad prevent you from choosing a good one?

Also, why the heck do you think there exist words for "better" and "worse"?

Those are relative terms, meant to compare one action to another. That doesn't mean you can't classify an action as "good" or "bad"; for instance, if I decided to randomly select and kill 10 people today, that would be an unequivocally bad action, even if it would theoretically be "worse" if I decided to kill 11 people instead of 10. The difference between the two is like the difference between asking "Is this number bigger than that number?" and "Is this number positive or negative?".

comment by Jiro · 2015-03-26T19:23:20.177Z · LW(p) · GW(p)

My opinion would change gradually between 100 degrees and 0 degrees. Either I would use qualifiers so that there is no abrupt transition, or else I would consider something to be hot in a set of situations and the size of that set would decrease gradually.

comment by dxu · 2015-03-26T16:46:35.231Z · LW(p) · GW(p)

You will have to say, for instance, taking $20 each from 1/20 the population of the world is good, but taking $20.01 each from slightly less than 1/10 the population of the world is bad. (emphasis mine)

Typo here?

comment by [deleted] · 2015-03-26T19:27:58.175Z · LW(p) · GW(p)

YES, because that is how economics works! You can't take a lot of money from ONE person without him getting poor, but you CAN take money from a lot of people without ruining them! Money is a circulating resource, and just like pain, you can recover from small losses after a time.

comment by [deleted] · 2015-03-26T19:38:13.709Z · LW(p) · GW(p)

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

I think my last response starting with YES got lost somehow, so I will clarify here. I don't follow the sequence because I don't know where the critical limit is. Why? Because the critical limit depends on other factors which I can't foresee. Read up on basic global economics. But YES, in theory I can take a little money from everyone without ruining a single one of them, since it balances out, but if I take a lot of money from one person I make him poor. That is how economics works: you can recover from small losses easily, while some losses are too big to ever recover from, hence why some banks go bankrupt sometimes. And pain is similar, since I can recover from a dust speck in my eye, but not from being tortured for 50 years. The dust specks are not permanent sacrifices. If they were, I agree that they could stack up.

Replies from: Jiro
comment by Jiro · 2015-03-26T20:33:40.775Z · LW(p) · GW(p)

I don't follow the sequence because I don't know where the critical limit is.

You may not know exactly where the limit is, but the point isn't that the limit is at some exact number, the point is that there is a limit. There's some point where your reasoning makes you go from good to bad even though the change is very small. Do you accept that such a limit exists, even though you may not know exactly where it is?

Replies from: None
comment by [deleted] · 2015-03-26T20:35:06.342Z · LW(p) · GW(p)

Yes I do.

Replies from: Jiro
comment by Jiro · 2015-03-26T21:04:03.517Z · LW(p) · GW(p)

So you recognize that your original statement about $1 versus bankruptcy also forces you to draw the same conclusion about $20.00 versus $20.01 (or whatever the actual number is, since you don't know it).

But drawing that conclusion about $20.00 versus $20.01 is much harder to justify. Can you justify it? You have to be able to, since it is implied by your original statement.

Replies from: None
comment by [deleted] · 2015-03-26T21:17:22.743Z · LW(p) · GW(p)

No, I don't have to draw the same conclusion about $20.00 versus $20.01. I left a safety margin when I said 1 dollar, since I don't want to follow the sequence but am very, very sure that 1 dollar is a safe number. I don't know exactly how much I can risk taking from a random individual before I risk ruining him, but if I take only one dollar from a person who can afford a house and food, I am pretty safe.

Replies from: Jiro
comment by Jiro · 2015-03-26T22:00:21.060Z · LW(p) · GW(p)

No, I don't have to draw the same conclusion about $20.00 versus $20.01

Yes, you do. You just admitted it, although the number might not be 20. And whether you admit it or not, it logically follows from what you said up above.

Replies from: None
comment by [deleted] · 2015-03-26T22:06:44.237Z · LW(p) · GW(p)

Maybe I didn't understand you the first time.

You will have to say, for instance, taking $20 each from 1/20 the population of the world is good, but taking $20.01 each from slightly less than 1/10 the population of the world is bad. Can you say that? To answer that: well, yes, it MIGHT be the case, I don't know, and that is why I only ask for 1 dollar. Does that make it any clearer?

Replies from: Jiro
comment by Jiro · 2015-03-26T22:10:23.939Z · LW(p) · GW(p)

Your belief about $1 versus bankruptcy logically implies a similar belief about $20.00 versus $20.01 (or whatever the actual numbers are). You can't just answer that that "might" be the case--if your original belief is as described, that is the case. You have to be willing to defend the logical consequence of what you said, not just defend the exact words that you said.

Replies from: None
comment by [deleted] · 2015-03-26T22:14:10.306Z · LW(p) · GW(p)

What do you mean by "whatever the actual numbers are"? Numbers for what? For the amount it takes to ruin someone? As long as the individual donations don't ruin the donors, I accept a higher donation from a smaller population. Is that what you mean?

Replies from: Jiro
comment by Jiro · 2015-03-26T22:17:47.729Z · LW(p) · GW(p)

I just wrote 20 because I have to write something, but there is a number. This number has a value, even if you don't know it. Pretend I put the real number there instead of 20.

Replies from: None
comment by [deleted] · 2015-03-26T22:24:32.469Z · LW(p) · GW(p)

Yes, but still, what number? IF it is as I already suggested, the number for the amount of money that can be taken without ruining anyone, then I agree that we could take that amount of money instead of 1 dollar.

Replies from: dxu, Jiro
comment by dxu · 2015-03-27T00:52:32.552Z · LW(p) · GW(p)

the number for the amount of money that can be taken without ruining anyone

So you're saying there exists such a number, such that taking that amount of money from someone wouldn't ruin them, but taking that amount plus a tiny bit more (say, 1 cent) would?

comment by Jiro · 2015-03-27T04:52:27.524Z · LW(p) · GW(p)

I don't think you understand.

Your original statement about $1 versus bankruptcy logically implies that there is a number such that it is okay to take exactly that amount of money from a certain number of people, but wrong to take a very tiny amount more. Even though you don't know exactly what this number is, you know that it exists. Because this number is a logical consequence of what you said, you must be able to justify having such a number.

Replies from: None
comment by [deleted] · 2015-03-27T07:02:00.994Z · LW(p) · GW(p)

Yes, in my last comment I agreed to it. There is such a number. I don't think you understand my reasons why, which I already explained. It is wrong to take a tiny amount more, since that will ruin them. I can't know exactly what that amount is, since the global and local economy isn't that stable. Tapping out.

comment by private_messaging · 2015-03-26T22:28:25.636Z · LW(p) · GW(p)

Now, do you have any actual argument as to why the 'badness' function computed over a box containing two persons with a dust speck is exactly twice the badness of a box containing one person with a dust speck, all the way up to very large numbers (when you may even have exhausted the number of possible distinct people)?

I don't think you do. This is why this stuff strikes me as pseudomath. You don't even state your premises, let alone justify them.

Replies from: None
comment by [deleted] · 2015-03-26T23:31:26.247Z · LW(p) · GW(p)

You're right, I don't. And I do not really need it in this case.

What I need is a cost function C(e, n) - where e is some event and n is the number of people subjected to it, i.e. everyone gets their own - such that for ε > 0: C(e, n+m) > C(e, n) + ε for some m. I guess we can restrict e to "torture for 50 years" and "dust specks" so that this makes sense at all.

The reason I want such a cost function is that I believe it should be more than infinitesimally worse for 3^^^^3 people to suffer than for 3^^^3 people to suffer. I don't think there should ever be a point where you can go "Meh, not much of a big deal, no matter how many more people suffer."

If, however, the number of possible distinct people is finite (even after taking into account Level II and Level III multiverses) due to the discreteness of space and of the permitted physical constants, then yes, this is all null and void. But I currently have no particular reason to believe there is such a bound, while I do have reason to believe that the permitted physical constants come from a non-discrete set.

Replies from: private_messaging
comment by private_messaging · 2015-03-27T12:22:10.219Z · LW(p) · GW(p)

Well, within the 3^^^3 people you have every single possible brain replicated a gazillion times already (there are only so many ways you can arrange the atoms in the volume of a human head so that they compute something subjectively distinct, after all, and the number of such arrangements is unimaginably smaller than 3^^^3).

I don't think that, e.g., I must massively prioritize the happiness of a brain upload of me running on multiple redundant hardware (which subjectively feels the same as if it were running in one instance; it doesn't feel any stronger because there are more 'copies' of it running in perfect unison, and it can't even tell the difference. It won't affect the subjective experience if the CPUs running the same computation are slightly physically different).

Edit: also, again, pseudomath, because you could have C(dustspeck, n) = 1 - 1/(n+1); your property holds, but it is bounded, so if C(torture, 1) = 2 then you'll never exceed it with dust specks.

Seriously, you people (the LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It does feel intuitive that with your epsilon it's going to keep growing without limit, but that's simply not true.

Replies from: None
comment by [deleted] · 2015-03-27T14:40:59.996Z · LW(p) · GW(p)

I consider entities in computationally distinct universes to also be distinct entities, even if the arrangements of their neurons are the same. If I have an infinite (or sufficiently large) set of physical constants such that in those universes human beings could emerge, I will also have enough human beings.

edit: also again, pseudomath, because you could have C(dustspeck, n) = 1-1/(n+1) , your property holds but it is bounded, so if the c(torture, 1)=2 then you'll never exceed it with dust specks.

No. I will always find a larger number which is at least ε greater. I fixed ε before I talked about n,m. So I find numbers m_1,m_2,... such that C(dustspeck,m_j) > jε.

Besides which, even if I had somehow messed up, you're not here (I hope) to score easy points because my mathematical formalization is flawed when it is perfectly obvious where I want to go.
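
A minimal way to spell out the induction being claimed in this comment, reading the requirement as holding for every n (which is how it is used above); this is only a sketch, not anything the original commenter wrote out:

```latex
Assumption, as used above: for every $\varepsilon > 0$ and every $n$ there is an $m$ with
$C(e, n+m) > C(e, n) + \varepsilon$. Fix $\varepsilon > 0$, set $n_0 = 1$, and inductively pick
$n_{j+1} = n_j + m_j$ with $C(e, n_{j+1}) > C(e, n_j) + \varepsilon$. Then
$C(e, n_j) > C(e, n_0) + j\varepsilon$, so $C(e, \cdot)$ is unbounded and eventually exceeds
$C(\mathrm{torture}, 1)$, whatever finite value that has. A bounded function such as
$1 - 1/(n+1)$ meets the requirement only for small enough $\varepsilon$ at each $n$,
not for one fixed $\varepsilon$ and all $n$.
```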

Replies from: private_messaging
comment by private_messaging · 2015-03-27T18:40:57.120Z · LW(p) · GW(p)

Well, in my view, some details of implementation of a computation are totally indiscernible 'from the inside' and thus make no difference to the subjective experiences, qualia, and the like.

I definitely don't care if there's 1 me, 3^^^3 copies of me, or 3^^^^3, or 3^^^^^^3 , or the actual infinity (as the physics of our universe would suggest), where the copies are what thinks and perceives everything exactly the same over the lifetime. I'm not sure how counting copies as distinct would cope with an infinity of copies anyway. You have a torture of inf persons vs dust specks in inf*3^^^3 persons, then what?

Though it would be quite hilarious to see if someone here picks up the idea and starts arguing that because they're 'important', there must be a lot of copies of them in the future, and thus they are rightfully a utility monster.

comment by Kindly · 2015-03-26T13:50:17.686Z · LW(p) · GW(p)

Consider the flip side of the argument: would you rather get a dust speck in your eye or have a 1 in 3^^^3 chance of being tortured for 50 years?

We take much greater risks without a moment's thought every time we cross the street. The chance that a car comes out of nowhere and hits you in just the right way to both paralyze you and cause incredible pain to you for the rest of your life may be very small; but it's probably not smaller than 1 in 10^100, let alone than 1 in 3^^^3.

comment by [deleted] · 2015-04-10T21:23:42.226Z · LW(p) · GW(p)

I agree with this analysis provided there is some reason for linear aggregation.

Why should the utility of the world be the sum of the utilities of its inhabitants? Why not, for instance, the min of the utilities of its inhabitants?

I think that's what my intuition wants to do anyway: care about how badly off the worst-off person is, and try to improve that.

U1(world) = min_people(u(person)) instead of U2(world) = sum_people(u(person))

so U1(torture) = -big, U1(dust) = -tiny
U2(torture) = -big, U2(dust) = -outrageously massive

Thus, if you use U1, you choose dust because -tiny > -big,
but if you use U2, you choose torture because -big > -outrage.

But I see no real reason to prefer one intuition over the other, so my question is this:
Why linear aggregation of utilities?
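
A toy numerical illustration of the two aggregation rules above; the per-person disutilities and the stand-in for 3^^^3 are made-up placeholders, not canonical values:

```python
# World A: one person tortured, everyone else at 0.
# World B: N_SPECKS people each get a dust speck.
N_SPECKS = 10**100        # stand-in for 3^^^3 (anything astronomically large will do)
u_torture = -1_000_000    # assumed per-person disutility of 50 years of torture
u_speck = -1e-6           # assumed per-person disutility of one dust speck

U1_torture, U1_specks = u_torture, u_speck             # min over people
U2_torture, U2_specks = u_torture, u_speck * N_SPECKS  # sum over people

print("min-aggregation prefers:", "specks" if U1_specks > U1_torture else "torture")
print("sum-aggregation prefers:", "specks" if U2_specks > U2_torture else "torture")
```

With these numbers the min rule picks dust specks and the sum rule picks torture, which is exactly the flip described above.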

Replies from: Wes_W, jsartor7
comment by Wes_W · 2015-04-10T21:47:19.110Z · LW(p) · GW(p)

I think that's what my intuition wants to do anyway: care about how badly off the worst-off person is, and try to improve that.

I find it hard to believe that you believe that. Under that metric, for example, "pick a thousand happy people and kill their dogs" is a completely neutral act, along with lots of other extremely strange results.

Replies from: gjm, None
comment by gjm · 2015-04-10T23:28:44.427Z · LW(p) · GW(p)

Or, for a maybe more dramatic instance: "Find the world's unhappiest person and kill them". Of course total utilitarianism might also endorse doing that (as might quite a lot of people, horrible though it sounds, on considering just how wretched the lives of the world's unhappiest people probably are) -- but min-utilitarianism continues to endorse doing this even if everyone in the world -- including the soon-to-be-ex-unhappiest-person -- is extremely happy and very much wishes to go on living.

Replies from: Jiro
comment by Jiro · 2015-04-11T05:12:28.068Z · LW(p) · GW(p)

The specific problem which causes that is that most versions of utilitarianism don't allow the fact that someone desires not to be killed to affect the utility calculation, since after they have been killed, they no longer have utility.

Replies from: Wes_W
comment by Wes_W · 2015-04-11T05:46:10.709Z · LW(p) · GW(p)

Yes, this is a failure mode of (some forms of?) utilitarianism, but not the specific weirdness I was trying to get at, which was that if you aggregate by min(), then it's completely morally OK to do very bad things to huge numbers of people - in fact, it's no worse than radically improving huge numbers of lives - as long as you avoid affecting the one person who is worst-off. This is a very silly property for a moral system to have.

You can attempt to mitigate this property with too-clever objections, like "aha, but if you kill a happy person, then in the moment of their death they are temporarily the most unhappy person, so you have affected the metric after all". I don't think that actually works, but didn't want it to obscure the point, so I picked "kill their dog" as an example, because it's a clearly bad thing which definitely doesn't bump anyone to the bottom.

comment by [deleted] · 2015-04-10T23:47:56.393Z · LW(p) · GW(p)

Oh, good point, maybe a kind of lexicographic ordering could break ties.

So then, we disregard everyone who isn't affected by the possible action and maximize over the utilities of those who are.

But still, this prefers a million people being punched once to any one person being punched twice, which seems silly --- I'm just trying to parse out my intuition for choosing dust specks.

I get that other possible methods being flawed is a mark in favor of linear aggregation, but what positive reasons are there for it?

comment by jsartor7 · 2019-11-29T20:02:00.187Z · LW(p) · GW(p)

Min is a really bad metric - it means that, for example, my decision of whether to torture someone or not doesn't matter as long as someone out there is also getting tortured. So it doesn't actually lead to an answer of the dust speck problem. And if you limit it to the min of people involved, it leads to things like... "then it's better to break 1 billion people's non-dominant arms than one person's dominant arm" which in my opinion is absurd.

comment by Kaj_Sotala · 2007-10-30T23:45:00.000Z · LW(p) · GW(p)

If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person

I find this reasoning problematic, because in the dust speck case there is effectively nothing to acclimate to... the amount of inconvenience to the individual will always be smaller in the speck scenario (excluding secondary effects, such as the individual being distracted and ending up in a car crash, of course).

Which exact person in the chain should first refuse?

Now, this is considerably better reasoning - however, there was no clue to this being a decision that would be selected over and over by countless people. Had it been worded "you among many have to make the following choice...", I could agree with you. But the current wording implied that it was a once-a-universe sort of choice.

comment by RobinHanson · 2007-10-30T23:52:00.000Z · LW(p) · GW(p)

Well as long as we've gone to all the trouble to collect 85 comments on this topic, this seems like a great chance for a disagreement case study. It would be interesting to collect stats on who takes what side, and to relate that to their various kinds of relevant expertise. For the moment I am disturbed by the fact that Eliezer and I seem to be in a minority here, but comforted a bit by the fact that we seem to know decision theory better than most. But I'm open to new data on the balance of opinion and the balance of relevant expertise.

comment by Constant2 · 2007-10-31T00:07:00.000Z · LW(p) · GW(p)

The diagnosis of scope insensitivity presupposes that people are trying to perform a utilitarian calculation and failing. But there is an ordinary sense in which a sufficiently small harm is no wrong. A harm must reach a certain threshold before the victim is willing to bear the cost of seeking redress. Harms that fall below the threshold are shrugged off. And an unenforced law is no law. This holds even as the victims multiply. A class action lawsuit is possible, summing the minuscule harms, but our moral intuitions are probably not based on those.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-31T00:07:00.000Z · LW(p) · GW(p)

Now, this is considerably better reasoning - however, there was no clue to this being a decision that would be selected over and over by countless people. Had it been worded "you among many have to make the following choice...", I could agree with you. But the current wording implied that it was a once-a-universe sort of choice.

The choice doesn't have to be repeated to present you with the dilemma. Since all elements of the problem are finite - not countless, finite - if you refuse all actions in the chain, you should also refuse the start of the chain even when no future repetitions are presented as options. This kind of reasoning doesn't work for infinite cases, but it works for finite ones.

One potential counter to the "global heating" example is that at some point, people begin to die who would not otherwise have done so, and that should be the point of refusal. But for the case of dust specks - and we can imagine getting more than one dust speck in your eye per day - it doesn't seem like there should be any sharp borderline.

We face the real-world analogue of this problem every day, when we decide whether to tax everyone in the First World one penny in order to save one starving African child by mounting a large military rescue operation that swoops in, takes the one child, and leaves.

There is no "special penny" where this logic goes from good to bad. It's wrong when repeated because it's also wrong in the individual case. You just have to come to terms with scope sensitivity.

Replies from: homunq
comment by homunq · 2012-04-13T14:09:00.829Z · LW(p) · GW(p)

"Swoops in, takes one child, and leaves"... wow. I'd like to say I can't imagine being so insensitive as to think this would be a good thing to do (even if not worth the money), but I actually can.

And why would you use that horrible example, when the argument would work just fine if you substituted "A permanent presence devoted to giving one person three square meals a day."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-31T00:17:00.000Z · LW(p) · GW(p)

Actually, that was a poor example because taxing one penny has side effects. I would rather save one life and everyone in the world poked with a stick with no other side effects, because I put a substantial probability on lifespans being longer than many might anticipate. So even repeating this six billion times to save everyone's life at the price of 120 years of being repeatedly poked with a stick, would still be a good bargain.

Where there are no special inflection points, a bad repeated action should be a bad individual action, a good repeated action should be a good individual action. Talking about the repeated case changes your intuitions and gets around your scope insensitivity, it doesn't change the normative shape of the problem (IMHO).

comment by Paul_Gowder · 2007-10-31T00:34:00.000Z · LW(p) · GW(p)

Robin: dare I suggest that one area of relevant expertise is normative philosophy for-@#%(^^$-sake?!

It's just painful -- really, really, painful -- to see dozens of comments filled with blinkered nonsense like "the contradiction between intuition and philosophical conclusion" when the alleged "philosophical conclusion" hinges on some ridiculous simplistic Benthamite utilitarianism that nobody outside of certain economics departments and insular technocratic computer-geek blog communities actually accepts! My model for the torture case is swiftly becoming fifty years of reading the comments to this post.

The "obviousness" of the dust mote answer to people like Robin, Eliezer, and many commenters depends on the following three claims:

a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,

b) all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension)

c) it is a moral fact that we ought to select the world with more pleasure and less pain.

But each of those three claims is hotly, hotly contested. And almost nobody who has ever thought about the questions seriously believes all three. I expect there are a few (has anyone posed the three beliefs in that form to Peter Singer?), but, man, if you're a Bayesian and you update your beliefs about those three claims based on the general opinions of people with expertise in the relevant area, well, you ain't accepting all three. No way, no how.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-01-06T06:33:13.525Z · LW(p) · GW(p)
dare I suggest that one area of relevant expertise is normative philosophy for-@#%(^^$-sake?!

As someone who has studied moral philosophy for many years, I would like to point out that I agree with Robin and Eliezer, and that I know many professional moral philosophers who would agree with them, too, if presented with this moral dilemma. It is also worth noting that, many comments above, Gaverick Matheny provided a link to a paper by a professional moral philosopher, published in one of the two most prestigious moral philosophy journals in the English-speaking world, which defends essentially the same conclusion. And as the argument presented in that paper makes clear, the conclusion that one should torture need not be motivated by a theoretical commitment to some substantive thesis about the nature of pain or aggregation (as Gowder claims), but follows instead by transitivity from a series of comparisons that everyone--including those who deny that conclusion--finds intuitively plausible.

Replies from: BerryPick6
comment by BerryPick6 · 2013-01-10T03:30:48.608Z · LW(p) · GW(p)

If anyone still has a hard time believing that this is not an unorthodox position among Philosophers, I'd like to recommend Shelly Kagan's excellent The Limits of Morality, which discusses 'radical consequentialism' and defends a similar conclusion.

comment by Constant2 · 2007-10-31T00:57:00.000Z · LW(p) · GW(p)

dozens of comments filled with blinkered nonsense like "the contradiction between intuition and philosophical conclusion" when the alleged "philosophical conclusion" hinges on some ridiculous simplistic Benthamite utilitarianism that nobody outside of certain economics departments and insular technocratic computer-geek blog communities actually accepts!

You've quoted one of the few comments which your criticism does not apply to. I carry no water for utilitarian philosophy and was here highlighting its failure to capture moral intuition.

comment by Nick_Tarleton · 2007-10-31T00:58:00.000Z · LW(p) · GW(p)

all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension)

Is a consistent and complete preference ordering without this property possible?

comment by Tom_McCabe · 2007-10-31T01:07:00.000Z · LW(p) · GW(p)

"An option that dominates in finite cases will always provably be part of the maximal option in finite problems; but in infinite problems, where there is no maximal option, the dominance of the option for the infinite case does not follow from its dominance in all finite cases."

From Peter's proof, it seems like you should be able to prove that an arbitrarily large (but finite) utility function will be dominated by events with arbitrarily large (but finite) improbabilities.

"Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity."

And so we come to the billion-dollar question: Will scope insensitivity of this type be eliminated under CEV? So far as I can tell, a utility function is arbitrary; there is no truth which destroys it, and so the FAI will be unable to change around our renormalized utility functions by correcting for factual inaccuracy.

"Which exact person in the chain should first refuse?"

The point at which the negative utility of people catching on fire exceeds the positive utility of skydiving. If the temperature is 20 C, nobody will notice an increase of 0.00000001 C. If the temperature is 70 C, the aggregate negative utility could start to outweigh the positive utility. This is not a new idea; see http://en.wikipedia.org/wiki/Tragedy_of_the_commons.

"We face the real-world analogue of this problem every day, when we decide whether to tax everyone in the First World one penny in order to save one starving African child by mounting a large military rescue operation that swoops in, takes the one child, and leaves."

According to http://www.wider.unu.edu/research/2006-2007/2006-2007-1/wider-wdhw-launch-5-12-2006/wider-wdhw-press-release-5-12-2006.pdf, 10% of the world's adults, around 400 million people, own 85% of the world's wealth. Taxing them each one penny would give a total of $4 million, more than enough to mount this kind of a rescue operation. While incredibly wasteful, this would actually be preferable to some of the stuff we spend our money on; my local school district just voted to spend $9 million (current US dollars) to build a swimming pool. I don't even want to know how much we spend on $200 pants; probably more than $9 million in my town alone.

comment by Laura · 2007-10-31T01:13:00.000Z · LW(p) · GW(p)

Eliezer: "It's wrong when repeated because it's also wrong in the individual case. You just have to come to terms with scope sensitivity."

But determining whether or not a decision is right or wrong in the individual case requires that you be able to place a value on each outcome. We determine this value in part by using our knowledge of how frequently the outcomes occur and how much time/effort/money it takes to prevent or assuage them. Thus knowing the frequency with which we can expect an event to occur is integral to assigning it a value in the first place. The reason it would be wrong in the individual case to tax everyone in the First World the penny to save one African child is that there are so many starving children that doing the same for each one would become very expensive. It would not be obvious, however, if there was only one child in the world that needed rescuing. The value of life would increase because we could afford for it to, if people didn't die so frequently.

People in a village might be willing to help pay the costs when someone's house burns down. If 20 houses in the village burned down, the people might still contribute, but it is unlikely they will contribute 20 times as much. If house-burning became a rampant problem, people might stop contributing entirely, because it would seem futile for them to do so. Is this necessarily scope insensitivity? Or is it reasonable to determine values based on frequencies we can realistically expect?

comment by Kaj_Sotala · 2007-10-31T01:14:00.000Z · LW(p) · GW(p)

Where there are no special inflection points, a bad repeated action should be a bad individual action, a good repeated action should be a good individual action. Talking about the repeated case changes your intuitions and gets around your scope insensitivity, it doesn't change the normative shape of the problem (IMHO).

Hmm, I see your point. I can't help feeling that there are cases where repetition does matter, though. For instance, assuming for a moment that radical life-extension and the Singularity and all that won't happen, and assuming that we consider humanity's continued existence to be a valuable thing - how about the choice of having/not having children? Not having children causes a very small harm to everybody else in the same generation (they'll have fewer people supporting them when old). Doesn't your reasoning imply that every couple should be forced into having children even if they weren't of the type who'd want that (the "torture" option), to avoid causing a small harm to all the others? This even though society could continue to function without major trouble even if a fraction of the population did choose to remain childfree, for as long as sufficiently many others had enough children?

comment by Paul_Gowder · 2007-10-31T01:24:00.000Z · LW(p) · GW(p)

Constant, my reference to your quote wasn't aimed at you or your opinions, but rather at the sort of view which declares that the silly calculation is some kind of accepted or coherent moral theory. Sorry if it came off the other way.

Nick, good question. Who says that we have consistent and complete preference orderings? Certainly we don't have them across people (consider social choice theory). Even to say that we have them within individual people is contestable. There's a really interesting literature in philosophy, for example, on the incommensurability of goods. (The best introduction of which I'm aware consists in the essays in Ruth Chang, ed. 1997. Incommensurability, Incomparability, and Practical Reason Cambridge: Harvard University Press.)

That being said, it might be possible to have complete and consistent preference orderings with qualitative differences between kinds of pain, such that any amount of torture is worse than any amount of dust-speck-in-eye. And there are even utilitarian theories that incorporate that sort of difference. (See chapter 2 of John Stuart Mill's Utilitarianism, where he argues that intellectual pleasures are qualitatively superior to more base kinds. Many indeed interpret that chapter to suggest that any amount of an intellectual pleasure outweighs any amount of drinking, sex, chocolate, etc.) Which just goes to show that even utilitarians might not find the torture choice "obvious," if they deny b) like Mill.

comment by Recovering_irrationalist · 2007-10-31T01:56:00.000Z · LW(p) · GW(p)

Who says that we have consistent and complete preference orderings?

Who says you need them? The question wasn't to quantify an exact balance. You just need to be sure enough to make the decision that one side outweighs the other for the numbers involved.

By my values, all else equal, for all x between 1 millisecond and fifty years, 10^1000 people being tortured for time x is worse than one person being tortured for time x*2. Would you disagree?

So, 10^1000 people tortured for (fifty years)/2 is worse than one person tortured for fifty years.
Then, 10^2000 people tortured for (fifty years)/4 is worse than one person tortured for fifty years.

You see where I'm going with this. Do something similar with the dust specks and unless I prefer countless people getting countless years of intense dust harassment to one person getting a millisecond of pain, I vote torture.
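
A back-of-the-envelope sketch of how quickly that halving chain bottoms out, under the assumptions in this comment (50 years at the top, population multiplied by 10^1000 at each halving); the code is only illustrative:

```python
import math

duration_ms = 50 * 365.25 * 24 * 3600 * 1000  # 50 years in milliseconds
steps = math.ceil(math.log2(duration_ms))     # halvings until the duration is <= 1 ms
people_exponent = 1000 * steps                # population after `steps` steps is 10**(1000*steps)

print(steps)            # 41 halvings get from 50 years down to about a millisecond
print(people_exponent)  # so "only" 10^41000 people are involved, vastly fewer than 3^^^3
```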

I recognize this is my opinion and relies on your c) it is a moral fact that we ought to select the world with more pleasure and less pain not being hopelessly outweighed by other criteria. I think this is definitely a worthwhile thing to debate and that your input would be extremely valuable.

comment by mitchell_porter2 · 2007-10-31T02:07:00.000Z · LW(p) · GW(p)

Since Robin is interested in data... I chose SPECKS, and was shocked by the people who chose TORTURE on grounds of aggregated utility. I had not considered the possibility that a speck in the eye might cause a car crash (etc) for some of those 3^^^3 people, and it is the only reason I see for revising my original choice. I have no accredited expertise in anything relevant, but I know what decision theory is.

I see a widespread assumption that everything has a finite utility, and so no matter how much worse X is than Y, there must be a situation in which it is better to have one person experiencing X, rather than a large number of people experiencing Y. And it looks to me as if this assumption derives from nothing more than a particular formalism. In fact, it is extremely easy to have a utility function in which X unconditionally trumps Y, while still being quantitatively commensurable with some other option X'. You could do it with delta functions, for example. You would use ordinary scalars to represent the least important things to have preferences about, scalar multiples of a delta function to represent the utilities of things which are unconditionally more important than those, scalar multiples of a delta function squared to represent things that are even more important, and so on.

The qualitative distinction I would appeal to here could be dubbed pain versus inconvenience. A speck of dust in your eye is not pain. Torture, especially fifty years of it, is.

comment by Zubon · 2007-10-31T02:13:00.000Z · LW(p) · GW(p)

Eliezer, a problem seems to be that the speck does not serve the function you want it to in this example, at least not for all readers. In this case, many people see a special penny because there is some threshold value below which the least bad bad thing is not really bad. The speck is intended to be an example of the least bad bad thing, but we give it a badness rating of one minus .9-repeating.

(This seems to happen to a lot of arguments. "Take x, which is y." Well, no, x is not quite y, so the argument breaks down and the discussion follows some tangent. The Distributed Republic had a good post on this, but I cannot find it.)

We have a special penny because there is some amount of eye dust that becomes noticeable and could genuinely qualify as the least bad bad thing. If everyone on Earth gets this decision at once, and everyone suddenly gets >6,000,000,000 specks, that might be enough to crush all our skulls (how much does a speck weigh?). Somewhere between that and "one speck, one blink, ever" is a special penny.

If we can just stipulate "the smallest unit of suffering (or negative qualia, or your favorite term)," then we can move on to the more interesting parts of the discussion.

I also see a qualitative difference if there can be secondary effects or summation causes secondary effects. As noted above, if 3^^^3/10^20 people die due to freakishly unlikely accidents caused by blinking, the choice becomes trivial. Similarly, +0.000001C sums somewhat differently than specks. 1 speck/day/person for 3^^^3 days is still not an existential risk; 3^^^3 specks at once will kill everyone.

(I still say Kyle wins.)

comment by Pete_Carlton · 2007-10-31T02:28:00.000Z · LW(p) · GW(p)

Okay, here's the data: I choose SPECKS, and here is my background and reasons.

I am a cell biologist. That is perhaps not relevant.

My reasoning is that I do not think that there is much meaning in adding up individual instances of dust specks. Those of you who choose TORTURE seem to think that there is a net disutility that you obtain by multiplying epsilon by 3^^^3. This is obviously greater than the disutility of torturing one person.
I reject the premise that there is a meaningful sense in which these dust specks can "add up".

You can think in terms of biological inputs - simplifying, you can imagine a system with two registers. A dust speck in the eye raises register A by epsilon. Register A also resets to zero if a minute goes by without any dust specks. Torture immediately sets register B to 10. I am morally obliged to intervene if register B ever goes above 1. In this scheme register A is a morally irrelevant register. It trades in different units than register B. No matter how many instances of A*epsilon there are, it does not warrant intervention.

You are making a huge, unargued assumption if you treat both torture and dust-specks in equivalent terms of "disutility". I accept your question and argue for "SPECKS" by rejecting your premise of like units (which does make the question trivial). But I sympathize with people who reject your question outright.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-31T02:34:00.000Z · LW(p) · GW(p)

Mitchell, I acknowledge the defensibility of the position that there are tiers of incommensurable utilities. But to me it seems that the dust speck is a very, very small amount of badness, yet badness nonetheless. And that by the time it's multiplied to ~3^^^3 lifetimes of blinking, the badness should become incomprehensibly huge just like 3^^^3 is an incomprehensibly huge number.

One reason I have problems with assigning a hyperreal infinitesimal badness to the speck, is that it (a) doesn't seem like a good description of psychology (b) leads to total loss of that preference in smarter minds.

(B) If the value I assign to the momentary irritation of a dust speck is less than 1/3^^^3 the value of 50 years' torture, then I will never even bother to blink away the dust speck because I could spend the thought or the muscular movement on my eye on something with a better than 1/3^^^3 chance of saving someone from torture.

(A) People often also think that money, a mundane value, is incommensurate with human life, a sacred value, even though they very definitely don't attach infinitesimal value to money.

I think that what we're dealing with here is more like the irrationality of trying to impose and rationalize comfortable moral absolutes in defiance of expected utility, than anyone actually possessing a consistent utility function using hyperreal infinitesimal numbers.

The notion of sacred values seems to lead to irrationality in a lot of cases, some of it gross irrationality like scope neglect over human lives and "Can't Say No" spending.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-07-13T06:22:43.693Z · LW(p) · GW(p)

I'm not sure why surreal/hyperreal numbers result in, essentially, monofocus.

Consider this scale on the surreals:

  • Omega^2: Utility of universal immortality; dis-utility of an existential risk. Omega utility for potentially omega people.
  • Omega: Utility of a human life.
  • 1: One traditional utilon.
  • Epsilon: Dust speck in your eye.

Let's say you're a perfectly rational human (*cough cough*). You naturally start on the Omega^2 scale, with a certain finite amount of resources. Clearly, the worth of an omega of human lives is worth more than your own, so you do not repeat do not promptly donate them all to MIRI.

At least, not until you first calculate the approximate probability that your independent existence will make it more likely that someone somewhere will finally defeat death. Even if you have not the intelligence to do it yourself, or the social skills to keep someone else stable while they attack it, there's still the fact that you can give more to MIRI, over the long run, if you live on just enough to keep yourself psychologically and physiologically sound and then donate the rest to MIRI.

This is, essentially, the "sanity" term. Most of the calculation is done at this step, but because your life, across your lifespan, has some chance of solving death, you are not morally obligated to have yourself processed into Soylent Green.

This step interrupts for one of three reasons. One, you have reached a point where spending further resources, either on yourself or some existential-risk organization, does not predictably affect an existential risk. Two, all existential risks are dealt with, and death itself has died. (Yay!) Three, part of ensuring your own psychological soundness requires it - really, this just represents the fact that sometimes, a dollar (approx. one utilon) or a speck (epsilon utilons) can result in your death or significant misery, but nevertheless such concerns should still be resolved in order of decreasing utility.

At this point, we break to the Omega step, which works much the same way, balancing charity donations against your own life and QoL. Situations where spending money can save lives - say, a hospital or a charity - should be evaluated at this step.

Then we break to the unitary step, which is essentially entirely QoL for yourself or others.

Hypothetically, we might then break to the epsilon step - in practice, since even in a post-scarcity society you will never finish optimizing your unitaries, this step is only evaluated when it or something in it is promoted by causal dependence to a higher step.

So, returning to the original problem: Barring all other considerations, 3^^^3*epsilon is still an epsilon, while 50 years of torture is probably something like 3/4 Omega. With two tiers of difference, the result is obvious, and has been resolved with intuition.
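
One way to emulate this kind of tiered scale without full surreal arithmetic is lexicographic comparison of per-tier totals; a hedged sketch, with tier values invented purely for illustration:

```python
# Utility as a tuple of per-tier totals, compared lexicographically:
# (existential-risk tier, lives tier, ordinary utilons, dust-speck tier).
# A higher tier dominates lower tiers no matter how large the lower totals get.
def better(a, b):
    return a > b  # tuple comparison in Python is already lexicographic

torture = (0, -0.75, 0, 0)       # "50 years of torture ~ 3/4 of a life", as above
specks = (0, 0, 0, -(3 ** 27))   # stand-in for 3^^^3 specks; any finite count behaves the same

print(better(torture, specks))   # False: torture is worse, because the lives tier
                                 # outranks any number of speck-tier losses
```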

I'm going to conclude with something Hermione says in MoR, that I think applies here.

"But the thing that people forget sometimes, is that even though appearances can be misleading, they're usually not."

comment by Tom_McCabe · 2007-10-31T02:43:00.000Z · LW(p) · GW(p)

"The notion of sacred values seems to lead to irrationality in a lot of cases, some of it gross irrationality like scope neglect over human lives and "Can't Say No" spending."

Could you post a scenario where most people would choose the option which unambiguously causes greater harm, without getting into these kinds of debates about what "harm" means? Eg., where option A ends with shooting one person, and option B ends with shooting ten people, but option B sounds better initially? We have a hard enough time getting rid of irrationality, even in cases where we know what is rational.

comment by mitchell_porter2 · 2007-10-31T02:43:00.000Z · LW(p) · GW(p)

Eliezer: Why does anything have a utility at all? Let us suppose there are some things to which we attribute an intrinsic utility, negative or positive - those are our moral absolutes - and that there are others which only have a derivative utility, deriving from the intrinsic utility of some of their consequences. This is certainly one way to get incommensurables. If pain has intrinsic disutility and inconvenience does not, then no finite quantity of inconvenience can by itself trump the imperative of minimizing pain. But if the inconvenience might give rise to consequences with intrinsic disutility, that's different.

comment by Brandon_Reinhart · 2007-10-31T02:52:00.000Z · LW(p) · GW(p)

Dare I say that people may be overvaluing 50 years of a single human life? We know for a fact that some effect will be multiplied by 3^^^3 by our choice. We have no idea what strange and unexpected existential side effects this may have. It's worth avoiding the risk. If the question were posed with more detail, or specific limitations on the nature of the effects, we might be able to answer more confidently. But to risk not only human civilization, but ALL POSSIBLE CIVILIZATIONS, you must be DAMN SURE you are right. 3^^^3 makes even incredibly small doubts significant.

comment by Brandon_Reinhart · 2007-10-31T02:57:00.000Z · LW(p) · GW(p)

I wonder if my answers make me fail some kind of test of AI friendliness. What would the friendly AI do in this situation? Probably write poetry.

comment by Laura · 2007-10-31T04:14:00.000Z · LW(p) · GW(p)

For Robin's statistics:
Given no other data but the choice, I would have to choose torture. If we don't know anything about the consequences of the blinking or how many times the choice is being made, we can't know that we are not causing huge amounts of harm. If the question deliberately eliminated these unknowns- ie the badness was limited to an eyeblink that does not immediately result in some disaster for someone or blindness for another, and you really are the one and only person making the choice ever, then I'd go with the dust-- But these qualifications are huge when you consider 3^^^3. How can we say the eyeblink didn't distract a surgeon and cause a slip of his knife? Given enough trials, something like that is bound to happen.

comment by Benquo · 2007-10-31T04:22:00.000Z · LW(p) · GW(p)

@Paul, I was trying to find a solution that didn't assume "b) all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension).", but rather established it for the case at hand. Unless it's specifically stated in the hypothetical that this is a true 1-shot choice (which we know it isn't in the real world, as we make analogous choices all the time), I think it's legitimate to assume the aggregate result of the test repeated by everyone. Thus, I'm not invoking utilitarian calculation, but Kantian absolutism! ;) I mean to appeal to your practical intuition by suggesting that a constant barrage of specks will create an experience of a like kind with torture.

@Robin Hanson, what little expertise I have is in the liberal arts and sciences; Euclid and Ptolemy, Aristotle and Kant, Einstein and Sophocles, etc.

comment by Paul_Gowder · 2007-10-31T05:17:00.000Z · LW(p) · GW(p)

Eliezer -- I think the issues we're getting into now require discussion that's too involved to handle in the comments. Thus, I've composed my own post on this question. Would you please be so kind as to approve it?

Recovering irrationalist: I think the hopefully-forthcoming-post-of-my-own will constitute one kind of answer to your comment. One other might be that one can, in fact, prefer huge dust harassment to a little torture. Yet a third might be that we can't aggregate the pain of dust harassment across people, so that there's some amount of single-person dust harassment that will be worse than some amount of torture, but if we spread that out, it's not.

comment by Sebastian_Hagen2 · 2007-10-31T10:27:00.000Z · LW(p) · GW(p)

For Robin's statistics:
Torture on the first problem, and torture again on the followup dilemma.

relevant expertise: I study probability theory, rationality and cognitive biases as a hobby. I don't claim any real expertise in any of these areas.

comment by Kaj_Sotala · 2007-10-31T10:29:00.000Z · LW(p) · GW(p)

I think one of the reasons I finally chose specks is that, unlike what was implied, the suffering does not simply "add up": 3^^^3 people getting one dust speck in their eye is most certainly not equal to one person getting 3^^^3 dust specks in his eyes. It's not "3^^^3 units of disutility, total", it's one unit of disutility per person.

That still doesn't really answer the "one person for 50 years or two people for 49 years" question, though - by my reasoning, the second option would be preferable, while obviously the first option is the preferable one. I might need to come up with a guideline stating that only experiences of suffering within a few orders of magnitude are directly comparable with each other, or some such, but it does feel like a crude hack. Ah well.

If statistics are being gathered, I'm a second year cognitive science student.

comment by John_Mark_Rozendaal · 2007-10-31T10:41:00.000Z · LW(p) · GW(p)

It is my impression that human beings almost universally desire something like "justice" or "fairness." If everybody had the dust speck problem, it would hardly be perceived as a problem. If one person is being tortured, both the tortured person and others perceive unfairness, and society has a problem with this.

Actually, we all DO get dust motes in our eyes from time to time, and this is not a public policy issue.
In fact relatively small numbers of people ARE being tortured today, and this is a big problem both for the victims and for people who care about justice.

comment by John_Mark_Rozendaal · 2007-10-31T10:59:00.000Z · LW(p) · GW(p)

Beyond the distracting arithmetic lesson, this question reeks of Christianity, positing a situation in which one person's suffering can take away the suffering of others.

Replies from: homunq
comment by homunq · 2012-04-13T14:17:29.497Z · LW(p) · GW(p)

This comment reeks of fuzzy reasoning.

comment by pdf23ds · 2007-10-31T11:48:00.000Z · LW(p) · GW(p)

For the moment I am disturbed by the fact that Eliezer and I seem to be in a minority here, but comforted a bit by the fact that we seem to know decision theory better than most. But I'm open to new data on the balance of opinion and the balance of relevant expertise.

It seems like selection bias might make this data much less useful. (It applied in my case, at least.) The people who chose TORTURE were likely among those with the most familiarity with Eliezer's writings, and so were able to predict that he would agree with them, and so felt less inclined to respond. Also, voicing their opinion would be publicly taking an unpopular position, which people instinctively shy away from.

comment by Recovering_irrationalist · 2007-10-31T13:30:00.000Z · LW(p) · GW(p)

Paul: Yet a third might be that we can't aggregate the pain of dust harassment across people, so that there's some amount of single-person dust harassment that will be worse than some amount of torture, but if we spread that out, it's not.

My induction argument covers that. As long as, all else equal, you believe:

  • A googolplex people tortured for time x is worse than one person tortured for time x+0.00001%.
  • A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect.
  • A googolplex people being dust speckled every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experiencable.
  • If a is worse than b and b is worse than c then a is worse than c.

...you can show that, all else equal, to reduce suffering you pick TORTURE. As far as I can see anyway; I've been wrong before. Again, I acknowledge that it depends on how much you care about reducing suffering compared to other concerns, such as an arbitrary cut-off point, abhorrence of using maths to answer such questions, or sacred values, which certainly can have utility by keeping worse irrationalities in check.

    comment by Nick_Tarleton · 2007-10-31T13:48:00.000Z · LW(p) · GW(p)

    A googolplex people being dust speckled every second of their life without further ill effect

    I don't think this is directly comparable, because the disutility of additional dust specking to one person in a short period of time probably grows faster than linearly - if I have to blink every second for an hour, I'll probably get extremely frustrated on top of the slight discomfort of the specks themselves. I would say that one person getting specked every second of their life is significantly worse than a couple billion people getting specked once.

    comment by Recovering_irrationalist · 2007-10-31T14:00:00.000Z · LW(p) · GW(p)

    the disutility of additional dust specking to one person in a short period of time probably grows faster than linearly

    That's why I used a googolplex people to balance the growth. All else equal, do you disagree with: "A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect" for the range concerned?

    one person getting specked every second of their life is significantly worse than a couple billion people getting specked once.

    I agree. I never said it wasn't.

    Have to run - will elaborate later.

    comment by Nick_Tarleton · 2007-10-31T14:27:00.000Z · LW(p) · GW(p)

    All else equal, do you disagree with: "A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect" for the range concerned?

    I agree with that. My point is that agreeing that "A googolplex people being dust speckled every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experiencable" doesn't oblige me to agree that "A few billion* googolplexes of people being dust specked once without further ill effect is worse than one person being horribly tortured for the shortest period experiencable". (Unless "a further ill effect" is meant to exclude not only car accidents but superlinear personal emotional effects, but that would be stupid.)

    * 1 billion seconds = 31.7 years

    I think that what we're dealing with here is more like the irrationality of trying to impose and rationalize comfortable moral absolutes in defiance of expected utility

    Since real problems never possess the degree of certainty that this dilemma does, holding certain heuristics as absolutes may be the utility-maximizing thing to do. In a realistic version of this problem, you would have to consider the results of empowering whatever agent is doing this to torture people with supposedly good but nonverifiable results. If it's a human or group of humans, not such a good idea; if it's a Friendly AI, maybe you can trust it but can't it figure out a better way to achieve the result? (There is a Pascal's Mugging problem here.)

    One more thing for TORTURErs to think about: if every one of those 3^^^3 people is willing to individually suffer a dust speck in order to prevent someone from suffering torture, is TORTURE still the right answer? I lean towards SPECK on considering this, although I'm less sure about the case of torturing 3^^^3 people for a minute each vs. 1 person for 50 years.

    comment by Psy-Kosh · 2007-10-31T17:13:00.000Z · LW(p) · GW(p)

    Just thought I'd comment that the more I think about the question, the more confusing it becomes. I'm inclined to think that if we consider the max utility state of every person having maximal fulfilment, and a "dust speck" as the minimal amount of "unfulfilment" from the top a person can experience, then two people experiencing a single "dust speck" is not quite as bad as a single person two "dust specks" below optimal. I think the reason I'm thinking that is that the second speck takes away more proportionally than the first speck did.

    Oh, one other thing. I was assuming for my replies both here and in the other thread that we're only talking about the actual "moment of suffering" caused by a dust speck event, with no potential "side effects"

    If we consider that those can have consequences, I'm pretty sure that on average those would be negative/harmful, and when the law of large numbers is invoked via stupendously large numbers, well, in that case I'm going with TORTURE.

    For the moment at least. :)

    comment by Recovering_irrationalist · 2007-10-31T17:58:00.000Z · LW(p) · GW(p)

    I agree with that. My point is that agreeing that "A googolplex people being dust speckled every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experiencable" doesn't oblige me to agree that "A few billion* googolplexes of people being dust specked once without further ill effect is worse than one person being horribly tortured for the shortest period experiencable".

    Neither would I, you don't need to. :-)

    The only reason I can pull this off is because 3^^^3 is such a ludicrous number of people, allowing me to actually divide my army by a googolplex a silly number of times. You couldn't cut the series up fine enough with a mere six billion people.

    If you agree with my first two statements listed, you can use them (and your vast googolplex-cutter-proof army) to infer a series of small steps from each of Eliezer's options, meeting in the middle at my third statement in the list. You then have a series of steps when a is worse than b, b than c, c than d, all the way from SPECS to my third statement to TORTURE.

    If for some reason you object to one of the first 3 statements, my vast horde of 3^^^3 minions will just cut the series up even finer.

    If that's not clear it's probably my fault - I've never had to explain anything like this before.

    if every one of those 3^^^3 people is willing to individually suffer a dust speck in order to prevent someone from suffering torture, is TORTURE still the right answer?

    I sure would, but I wouldn't ask 3^^^3 others to.

    comment by jonvon · 2007-10-31T18:03:00.000Z · LW(p) · GW(p)

    ok, without reading the above comments... (i did read a few of them, including robin hanson's first comment - don't know if he weighed in again).

    dust specks over torture.

    the apparatus of the eye handles dust specks all day long. i just blinked. it's quite possible there was a dust speck in there somewhere. i just don't see how that adds up to anything, even if a very large number is invoked. in fact with a very large number like the one described it is likely that human beings would evolve more efficient tear ducts, or faster blinking, or something like that. we would adapt and be stronger.

    torturing one person for fifty years however puts a stain on the whole human race. it affects all of us, even if the torture is carried out fifty miles underground in complete secrecy.

    comment by Paul_Gowder · 2007-10-31T18:46:00.000Z · LW(p) · GW(p)

    Recovering irrationalist: in your induction argument, my first stab would be to deny the last premise (transitivity of moral judgments). I'm not sure why moral judgments have to be transitive.

    Next, I'd deny the second-to-last premise (for one thing, I don't know what it means to be horribly tortured for the shortest period possible -- part of the tortureness of torture is that it lasts a while).

    comment by Neel_Krishnaswami · 2007-10-31T19:04:00.000Z · LW(p) · GW(p)

    Eliezer, both you and Robin are assuming the additivity of utility. This is not justifiable, because it is false for any computationally feasible rational agent.

    If you have a bounded amount of computation to make a decision, we can see that the number of distinctions a utility function can make is in turn bounded. Concretely, if you have N bits of memory, a utility function using that much memory can distinguish at most 2^N states. Obviously, this is not compatible with additivity of disutility, because by picking enough people you can identify more distinct states than the 2^N distinctions your computational process can make.
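
A compact statement of the counting step being used here, with N the assumed memory bound (a sketch in my notation, not the commenter's):

```latex
With $N$ bits of memory, $|\mathrm{range}(U)| \le 2^N$. Strict additivity would require
$U(0) < U(1) < \dots < U(2^N)$, i.e. $2^N + 1$ distinct outputs, which is impossible.
So such a $U$ can be monotone in the number of specks, but must stop strictly
increasing after finitely many of them.
```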

    Now, the reason for adopting additivity comes from the intuition that 1) hurting two people is at least as bad as hurting one, and 2) that people are morally equal, so that it doesn't matter which people are hurt. Note that these intuitions mathematically only require that harm should be monotone in the number of people with dust specks in their eyes. Furthermore, this requirement is compatible with the finite computation requirements -- it implies that there is a finite number of specks beyond which disutility does not increase.

    If we want to generalize away from the specific number N of bits we have available, we can take an order-theoretic viewpoint, and simply require that all increasing chains of utilities have limits. (As an aside, this idea lies at the heart of the denotational semantics of programming languages.) This forms a natural restriction on the domain of utility functions, corresponding to the idea that utility functions are bounded.

    comment by Tom_Breton · 2007-10-31T20:00:00.000Z · LW(p) · GW(p)

    It's truly amazing the contortions many people have gone through rather than appear to endorse torture. I see many attempts to redefine the question, categorical answers that basically ignore the scalar, and what Eliezer called "motivated continuation".

    One type of dodge in particular caught my attention. Paul Gowder phrased it most clearly, so I'll use his text for reference:

    ...depends on the following three claims:

    a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,

    "Unproblematically" vastly overstates what is required here. The question doesn't require unproblematic aggregation; any slight tendency of aggregation will do just fine. We could stipulate that pain aggregates as the hundredth root of N and the question would still have the same answer. That is an insanely modest assumption, ie that it takes 2^100 people having a dust mote before we can be sure there is twice as much suffering as for one person having a dust mote.

    "b" is actually inapplicable to the stated question and it's "a" again anyways - just add "type" or "mode" to the second conjunction in "a".

    c) it is a moral fact that we ought to select the world with more pleasure and less pain.

    I see only three possibilities for challenging this, none of which affects the question at hand.

    • Favor a desideratum that roughly aligns with "pleasure" but not quite, such as "health". Not a problem.
    • Focus on some special situation where paining others is arguably desirable, such as deterrence, "negative reinforcement", or retributive justice. ISTM that's already been idealized away in the question formulation.
    • Just don't care about others' utility, eg Rand-style selfishness.
    Replies from: Kenny
    comment by Kenny · 2013-01-06T05:05:39.745Z · LW(p) · GW(p)

    The "Rand-style selfishness" mars an otherwise sound comment.

    comment by Recovering_irrationalist · 2007-10-31T20:11:00.000Z · LW(p) · GW(p)

    Recovering irrationalist: in your induction argument, my first stab would be to deny the last premise (transitivity of moral judgments). I'm not sure why moral judgments have to be transitive.

    I acknowledged it won't hold for every moral. There are some pretty barking ones out there. I say it holds for choosing the option that creates less suffering. For finite values, transitivity should work fine.

    Next, I'd deny the second-to-last premise (for one thing, I don't know what it means to be horribly tortured for the shortest period possible -- part of the tortureness of torture is that it lasts a while).

    Fine, I still have plenty of googolplex-divisions left. Cut the series as fine as you like. Have billions of intervening levels of discomfort from speck->itch->ouch->"fifty years of reading the comments to this post." The point is if you slowly morph from TORTURE to SPECKS in very small steps, every step gets worse, because the population multiplies enormously while the pain differs by an incredibly tiny amount.

    comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-31T20:24:00.000Z · LW(p) · GW(p)

    Recovering irrationalist, I hadn't thought of things in precisely that way - just "3^^4 is really damn big, never mind 3^^7625597484987" - but now that you point it out, the argument by googolplex gradations seems to me like a much stronger version of the arguments I would have put forth.

    It only requires 3^^5 = 3^(3^7625597484987) to get more googolplex factors than you can shake a stick at. But why not use a googol instead of a googolplex, so we can stick with 3^^4? If anything, the case is more persuasive with a googol because a googol is more comprehensible than a googolplex. It's all about scope neglect, remember - googolplex just fades into a featureless big number, but a googol is ten thousand trillion trillion trillion trillion trillion trillion trillion trillion.

    comment by Neel_Krishnaswami · 2007-10-31T20:41:00.000Z · LW(p) · GW(p)

    Tom, your claim is false. Consider the disutility function

    D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))

    Now, with this function, disutility increases monotonically with the number of people with specks in their eyes, satisfying your "slight aggregation" requirement. However, it's also easy to see that going from 0 to 1 person tortured is worse than going from 0 to any number of people getting dust specks in their eyes, including 3^^^3.

    The basic objection to this kind of functional form is that it's not additive. However, it's wrong to assume an additive form, because that assumption mandates unbounded utilities, which are a bad idea, because they are not computationally realistic and admit Dutch books. With bounded utility functions, you have to confront the aggregation problem head-on, and depending on how you choose to do it, you can get different answers. Decision theory does not affirmatively tell you how to judge this problem. If you think it does, then you're wrong.
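
A quick numeric check of that claim, taking the formula above at face value (the example numbers are only illustrative):

```python
def D(torture, specks):
    # Bounded disutility function proposed in the comment above.
    return 10 * (torture / (torture + 1)) + (specks / (specks + 1))

print(D(1, 0))                  # 5.0: one person tortured, nobody specked
print(D(0, 10**100))            # ~1.0: the specks term is bounded above by 1
print(D(0, 10**100) < D(1, 0))  # True: no number of specks ever outweighs one torture
```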

    comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-31T20:48:00.000Z · LW(p) · GW(p)

    Again, not everyone agrees with the argument that unbounded utility functions give rise to Dutch books. Unbounded utilities only admit Dutch books if you do allow a discontinuity between infinite rewards and the limit of increasing finite rewards, but you don't allow a discontinuity between infinite planning and the limit of increasing finite plans.

    comment by Silas · 2007-10-31T20:58:00.000Z · LW(p) · GW(p)

    Oh geez. Originally I had considered this question uninteresting so I ignored it, but considering the increasing devotion to it in later posts, I guess I should give my answer.

    My justification, but not my answer, depends upon how the change is made.

    -If the offer is made to all of humanity before being implemented ("Do you want to be the 'lots of people get specks' race or the 'one guy gets severe torture' race?") I believe people could all agree to the specks by "buying out" whoever eventually gets the torture. For an immeasurably small amount, less than the pain of a speck, they can together amass funds sufficient to return the tortured individual to his indifference curve. OTOH, the person getting the torture couldn't possibly buy out that many people. (In other words, the specks are Kaldor-Hicks efficient.)

    -If the offer, at my decision, would just be thrown onto humanity without possibility of advance negotiation, I would still take the specks, because even if only the people who feel bad for the tortured make a small contribution, it will still be comparable to what they had to offer in the above paragraph; such is the nature of large numbers of people.

    I don't think this is the result of my revulsion toward the torture, although I have that. I think my decision stems from how such large (and superlinearly increasing) utility differences imply the possibility of "evening it out" through some transfer.

    comment by Recovering_irrationalist · 2007-10-31T21:09:00.000Z · LW(p) · GW(p)

    the argument by googolplex gradations seems to me like a much stronger version of the arguments I would have put forth.

    You just warmed my heart for the day :-)

    But why not use a googol instead of a googolplex

    Shock and awe tactics. I wanted a featureless big number of featureless big numbers, to avoid wiggle-outs, and scream "your intuition ain't from these parts". In my head, FBNs always carry more weight than regular ones. Now you mention it, their gravity could get lightened by incomprehensibility, but we were already counting to 3^^^3.

    Googol is better. Fewer readers will have to google it.

    comment by Tom_Breton · 2007-10-31T21:39:00.000Z · LW(p) · GW(p)

    @Neel.

    Then I only need to make the condition slightly stronger: "Any slight tendency to aggregation that doesn't beg the question." Ie, that doesn't place a mathematical upper limit on disutility(Specks) that is lower than disutility(Torture=1). I trust you can see how that would be simply begging the question. Your formulation:

    D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))

    ...doesn't meet this test.

    Contrary to what you think, it doesn't require unbounded utility. Limiting the lower bound of the range to (say) 2 * disutility(torture) will suffice. The rest of your message assumes it does.

    For completeness, I note that introducing numbers comparable to 3^^^3 in an attempt to undo the 3^^^3 scaling would cause a formulation to fail the "slight" condition, modest though it is.

    comment by Jef_Allbright · 2007-10-31T22:24:00.000Z · LW(p) · GW(p)

    With so many so deep in reductionist thinking, I'm compelled to stir the pot by asking how one justifies the assumption that the SPECK is a net negative at all, aggregate or not, extended consequences or not? Wouldn't such a mild irritant, over such a vast and diverse population, act as an excellent stimulus for positive adaptations (non-genetic, of course) and likely positive extended consequences?

    comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-31T22:26:00.000Z · LW(p) · GW(p)

    A brilliant idea, Jef! I volunteer you to test it out. Start blowing dust around your house today.

    comment by Psy-Kosh · 2007-10-31T22:31:00.000Z · LW(p) · GW(p)

    Hrm... Recovering's induction argument is starting to sway me toward TORTURE.

    More to the point, that and some other comments are starting to sway me away from the thought that the disutility of single dust speck events becomes sublinear as the number of people experiencing them increases (with total population held constant).

    I think if I made some errors, they were partly caused by "I really don't want to say TORTURE", and partly caused by my mistaking the exact nature of the nonlinearity. I maintain that "one person experiencing two dust specks" is not equal to, and is actually worse, I think, than two people each experiencing one dust speck, but now I'm starting to suspect that two people each experiencing one dust speck is exactly twice as bad as one person experiencing one dust speck. (Assuming, as we shift the number of people experiencing DSE, that we hold the total population constant.)

    Thus, I'm going to tentatively shift my answer to TORTURE.

    comment by Jef_Allbright · 2007-10-31T22:35:00.000Z · LW(p) · GW(p)

    "A brilliant idea, Jef! I volunteer you to test it out. Start blowing dust around your house today."

    Although only one person, I've already begun, and have entered in my inventor's notebook some apparently novel thinking on not only dust, but mites, dog hair, smart eyedrops, and nanobot swarms!

    comment by g · 2007-10-31T22:42:00.000Z · LW(p) · GW(p)

    Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture=1) is begging the question in favour of SPECKS, then why isn't *not* having such an upper limit begging the question in favour of TORTURE?

    I find it rather surprising that so many people agree that utility functions may be drastically nonlinear but are apparently completely certain that they know quite a bit about how they behave in cases as exotic as this one.

    comment by Tom_Breton · 2007-11-01T00:05:00.000Z · LW(p) · GW(p)

    Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture=1) is begging the question in favour of SPECKS, then why isn't *not* having such an upper limit begging the question in favour of TORTURE?

    It should be obvious why. The constraint in the first one is neither argued for nor agreed on and by itself entails the conclusion being argued for. There's no such element in the second.

    comment by g · 2007-11-01T01:05:00.000Z · LW(p) · GW(p)

    I think we may be at cross purposes; my apologies if we are and it's my fault. Let me try to be clearer.

    Any particular utility function (if it's real-valued and total) "begs the question" in the sense that it either prefers SPECKS to TORTURE, or prefers TORTURE to SPECKS, or puts them exactly equal. I don't see how this can possibly be considered a defect, but if it is one then all utility functions have it, not just ones that prefer SPECKS to TORTURE.

    Saying "Clearly SPECKS is better than TORTURE, because here's my utility function and it says SPECKS is better" would be begging the question (absent arguments in support of that utility function). I don't see anyone doing that. Neel's saying "You can't rule out the possibility that SPECKS is better than TORTURE by saying that no real utility function prefers SPECKS, because here's one possible utility function that says SPECKS is better". So far as I can tell you're rejecting that argument on the grounds that any utility function that prefers SPECKS is ipso facto obviously unacceptable; that is begging the question.

    comment by Neel_Krishnaswami · 2007-11-01T01:29:00.000Z · LW(p) · GW(p)

    g: that's exactly what I'm saying. In fact, you can show something stronger than that.

    Suppose that we have an agent with rational preferences, and who is minimally ethical, in the sense that they always prefer fewer people with dust specks in their eyes, and fewer people being tortured. This seems to be something everyone agrees on.

    Now, because they have rational preferences, we know that a bounded utility function consistent with their preferences exists. Furthermore, the fact that they are minimally ethical implies that this function is monotone in the number of people being tortured, and monotone in the number of people with dust specks in their eyes. The combination of a bound on the utility function, plus the monotonicity of their preferences, means that the utility function has a well-defined limit as the number of people with specks in their eyes goes to infinity. However, the existence of the limit doesn't tell you what it is -- it may be any value within the bounds.

    Concretely, we can supply utility functions that justify either choice, and are consistent with minimal ethics. (I'll assume the bound is the [0,1] interval.) In particular, all disutility functions of the form:

    U(T, S) = A(T/(T+1)) + B(S/(S+1))

    satisfy minimal ethics, for all positive A and B such that A plus B is less than one. Since A and B are free parameters, you can choose them to make either specks or torture preferred.

    Likewise, Robin and Eliezer seem to have an implicit disutility function of the form

    U_ER(T, S) = AT + BS

    If you normalize to get [0,1] bounds, you can make something up like

    U'(T, S) = (AT + BS)/(AT + BS + 1).

    Now, note U' also satisfies minimal ethics, in that if T is set to 1, then in the limit as S goes to infinity, U' will still always go to one and exceed A/(A+1). So that's why they tend to have the intuition that torture is the right answer. (Incidentally, this disproves my suggestion that bounded utility functions vitiate the force of E's argument -- but the bounds proved helpful in the end by letting us use limit analysis. So my focus on this point was accidentally correct!)

    Now, consider yet another disutility function,

    U''(T,S) = (ST + T)/(ST + T + 1)

    This is also minimally ethical, and doesn't have any of the free parameters that Tom didn't like. But this function also always implies a preference for any number of dust specks to even a single instance of torture.

    Basically, if you think the answer is obvious, then you have to make some additional assumptions about the structure of the aggregate preference relation.
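    A small numerical sketch of the two families above (the parameter values and the stand-in for 3^^^3 are my own illustrative choices, not part of the comment):

    ```python
    # U(T, S) = A*T/(T+1) + B*S/(S+1): the free parameters A, B (positive,
    # A + B < 1) decide which scenario comes out worse.
    # U'(T, S) = (A*T + B*S)/(A*T + B*S + 1): the normalized additive form,
    # which always ends up ranking enough specks as worse than one torture.
    HUGE = 10**100  # stand-in for 3^^^3

    def U(T, S, A, B):
        return A * T / (T + 1) + B * S / (S + 1)

    def U_prime(T, S, A, B):
        x = A * T + B * S
        return x / (x + 1)

    for A, B in [(0.8, 0.1), (0.2, 0.5)]:
        torture, specks = U(1, 0, A, B), U(0, HUGE, A, B)
        choice = "TORTURE" if torture < specks else "SPECKS"
        print(f"U with A={A}, B={B}: choose {choice}")

    # U' ranks the specks scenario as worse once S is large enough, for any A, B:
    print(U_prime(1, 0, 0.8, 0.1) < U_prime(0, HUGE, 0.8, 0.1))  # True
    ```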

    comment by Wei_Dai2 · 2007-11-01T01:38:00.000Z · LW(p) · GW(p)

    I have argued in previous comments that the utility of a person should be discounted by his or her measure, which may be based on algorithmic complexity. If this "torture vs specks" dilemma is to have the same force under this assumption, we'd have to reword it a bit:

    Would you prefer that the measure of people horribly tortured for fifty years increases by x/3^^^3, or that the measure of people who get dust specks in their eyes increases by x?

    I argue that no one, not even a superintelligence, can actually face such a choice. Because x is at most 1, x/3^^^3 is at most 1/3^^^3. But how can you increase the measure of something by more than 0 but no more than 1/3^^^3? You might, perhaps, generate a random number between 0 and 3^^^3 and do something only if that random number is 0. But algorithmic information theory says that for any program (even a superintelligence), there are pseudorandom sequences that it cannot distinguish from truly random sequences, and the prior probability that your random number generator is generating such a pseudorandom sequence is much higher than 1/3^^^3. Therefore the probability of that "random" number being 0 (or being any other number that you can think of) is actually much larger than 1/3^^^3.

    Therefore, if someone tells you "measure of ... increases by x/3^^^3", in your mind you've got to be thinking "... increases by y" for some y much larger than 1/3^^^3. I think my theory explains both those who answer SPECKS and those who say no answer is possible.

    comment by Andrew_Macdonald · 2007-11-01T01:47:00.000Z · LW(p) · GW(p)

    A consistent utilitarian would choose the torture, but I don't think it's the moral choice.

    Let's bring this a little closer to home. Hypothetically, let's say you get to live your life again 3^^^3 times. Would you prefer to have an additional dust speck in your eye in each of your future lives, or else be tortured for 50 years in a single one of them?

    Any takers for the torture?

    Replies from: Salivanth
    comment by Salivanth · 2012-04-05T03:18:45.560Z · LW(p) · GW(p)

    Man that's a good one. It's certainly interesting to know that my ability to override intuition when it comes to large numbers is far less effective when the question is applied to me personally. I'm assuming that this question assumes no other ill effects from the specks. And I know I should pick the torture. I know that if the torture is the best outcome for other people, it's the best outcome for myself. But if I was given that choice in real life, I don't think I would as of writing this comment.

    I have some correcting to do.

    Replies from: Salivanth
    comment by Salivanth · 2012-05-25T12:22:45.092Z · LW(p) · GW(p)

    Actually, I ended up resolving this at some point. I would in fact pick the dust specks in this case, because the situations aren't identical. I'd spend a lot of time in my 3^^^3 lives worrying about whether I'm going to start being tortured for 50 years, but I wouldn't worry about the dust specks. Technically, the disutility of the dust specks is worse, but my brain can't comprehend the number "3^^^3", so it would worry more about the torture happening to me. Add in the disutility of worrying about the torture, even a small amount, across 3^^^3 / 2 lives, and it's clear that I should pick the dust specks for myself in this situation, regardless of whether or not I choose torture in the original problem.

    Replies from: AnotherIdiot
    comment by AnotherIdiot · 2012-06-27T23:29:53.866Z · LW(p) · GW(p)

    This is sort of avoiding the question. What if you made the choice, but then had your memory erased about the whole dilemma right afterwards? Assuming you knew before making your choice that your memory would be erased, of course.

    Replies from: Salivanth
    comment by Salivanth · 2012-07-06T03:49:07.149Z · LW(p) · GW(p)

    Then I choose the torture. I've grown a bit more comfortable with overriding intuition in regards to extremely large numbers since my original reply 3 months ago.

    comment by Recovering_irrationalist · 2007-11-01T02:01:00.000Z · LW(p) · GW(p)

    I'll take it, as long as it's no more likely to be one of the earliest lives. I don't trust any universe that can make 3^^^3 of me not to be a simulation that would get pulled early.

    Hrm... Recovering's induction argument is starting to sway me toward TORTURE.

    Interesting. The idea of convincing others to decide TORTURE is bothering me much more than my own decision.

    I hope these ideas never get argued out of context!

    comment by Caledonian2 · 2007-11-01T02:53:00.000Z · LW(p) · GW(p)

    Cooking something for two hours at 350 degrees isn't equivalent to cooking something at 700 degrees for one hour.

    I'd rather accept one additional dust speck per lifetime in 3^^^3 lives than have one lifetime out of 3^^^3 lives involve fifty years of torture.

    Of course, that's me saying that, with my single life. If I actually had that many lives to live, I might become so bored that I'd opt for the torture merely for a change of pace.

    comment by Psy-Kosh · 2007-11-01T03:07:00.000Z · LW(p) · GW(p)

    Recovering: *chuckles* No, I meant that thinking about that, and rethinking what properties I'd actually consider reasonable in a utility function, led me to reject my earlier claim of the specific nonlinearity behind my assumption that as you increase the number of people who receive a speck, the disutility is sublinear; I now believe it to be linear. So huge bigbigbigbiggigantaenormous numbers of specks would, of course, eventually have to have more disutility than the torture. But since Knuth arrow notation had to be invoked to get to that point, I don't think there's any worry that I'm off to get my "rack winding certificate" :P

    But yeah, out of context this debate would sound like complete nonsense... "crazy geeks find it difficult to decide between dust specks and extreme torture."

    I do have to admit though, Andrew's comment about an individual living 3^^^3 times and so on has me thinking again. If "keep memories and so on of all previous lives = yes" (so it's really one really long lifespan) and "permanent physical and psychological damage post torture = no", then I may take that. I think. Arrrgh, stop messing with my head. Actually, no, don't stop, this is fun! :)

    comment by Mike7 · 2007-11-01T04:04:00.000Z · LW(p) · GW(p)

    I'd take it.
    I find your choice/intuition completely baffling, and I would guess that far less than 1% of people would agree with you on this, for whatever that's worth (surely it's worth something.) I am a consequentialist and have studied consequentialist philosophy extensively (I would not call myself an expert), and you seem to be clinging to a very crude form of utilitarianism that has been abandoned by pretty much every utilitarian philosopher (not to mention those who reject utilitarianism!). In fact, your argument reads like a reductio ad absurdum of the point you are trying to make. To wit: if we think of things in equivalent, additive utility units, you get this result that torture is preferable. But that is absurd, and I think almost everyone would be able to appreciate the absurdity when faced with the 3^^^3 lives scenario. Even if you gave everyone a one week lecture on scope insensitivity.

    So... I don't think I want you to be one of the people to initially program AI that might influence my life...

    comment by michael_vassar · 2007-11-01T05:11:00.000Z · LW(p) · GW(p)

    No Mike, your intuition for really large numbers is non-baffling, probably typical, but clearly wrong, as judged by another non-Utilitarian consequentialist (this item is clear even to egoists).

    Personally I'd take the torture over the dust specks even if the number was just an ordinary incomprehensible number, like say the number of biological humans who could live in artificial environments that could be built in one galaxy (about 10^46, given a 100-year life span and a 300W energy budget for each of them; 300W of terminal entropy dump into a 3K background from 300K is a large budget). It's totally clear to me that a second of torture isn't a billion billion billion times worse than getting a dust speck in my eye, and that there are only about 1.5 billion seconds in a 50 year period. That leaves about a 10^10 : 1 preference for the torture.

    The only consideration that dulls my certainty here is that I'm not convinced that my utility function can even encompass these sorts of ordinary incomprehensible numbers, but it seems to me that there is at least a one-in-a-billion chance that it can.
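    For what it's worth, the arithmetic above checks out; a quick sketch (the 10^46 population and the 10^27 "billion billion billion" exchange rate are the comment's own figures, the rest is unit conversion):

    ```python
    people = 10**46                            # humans one galaxy could support
    torture_seconds = 50 * 365.25 * 24 * 3600  # ~1.58e9 seconds in 50 years
    worst_case_rate = 10**27                   # specks per torture-second, upper bound

    torture_in_speck_units = torture_seconds * worst_case_rate   # ~1.6e36
    print(people / torture_in_speck_units)     # ~6e9, i.e. roughly 10^10 : 1
    ```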

    comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-01T05:28:00.000Z · LW(p) · GW(p)

    So, if additive utility functions are naive, does that mean I can swap around your preferences at random like jerking around a puppet on a string, just by having a sealed box in the next galaxy over where I keep a googol individuals who are already being tortured for fifty years, or already getting dust specks in their eyes, or already being poked with a stick, etc., which your actions cannot possibly affect one way or the other?

    It seems I can arbitrarily vary your "non-additive" utilities, and hence your priorities, simply by messing with the numbers of existing people having various experiences in a sealed box in a galaxy a googol light years away.

    This seems remarkably reminiscent of E. T. Jaynes's experience with the "sophisticated" philosophers who sniffed that of course naive Bayesian probability theory had to be abandoned in the face of paradox #239; which paradox Jaynes would proceed to slice into confetti using "naive" Bayesian theory, but this time with rigorous math instead of the various mistakes the "sophisticated" philosophers had made.

    There are reasons for preferring certain kinds of simplicity.

    comment by Mike7 · 2007-11-01T07:08:00.000Z · LW(p) · GW(p)

    Michael Vassar:
    Well, in the prior comment, I was coming at it as an egoist, as the example demands.
    It's totally clear to me that a second of torture isn't a billion billion billion times worse than getting a dust speck in my eye, and that there are only about 1.5 billion seconds in a 50 year period. That leaves about a 10^10 : 1 preference for the torture.
    I reject the notion that each (time, utility) event can be calculated in the way you suggest. Successive speck-type experiences for an individual (or 1,000 successive dust specks for 1,000,000 individuals) over the time period we are talking about would easily overtake 50 years of torture. It doesn't make sense to tally (total human disutility of torture (1 person/50 years in this case)) × (some quantification of the disutility of a time unit of torture) vs. (total human speck disutility) × (some quantification of a unit of speck disutility).
    The universe is made up of distinct beings (animals included), not the sum of utilities (which is just a useful construct.)
    All of this is to say:
    If we are to choose for ourselves between these scenarios, I think it is incredibly bizarre to prefer 3^^^3 satisfying lives plus one indescribably horrible life over the alternative 3^^^3 lives that are each only infinitesimally worse. I think doing so ignores basic human psychology, from whence our preferences arise.

    comment by mitchell_porter2 · 2007-11-01T10:31:00.000Z · LW(p) · GW(p)

    To continue this business of looking at the problem from different angles:

    Another formulation, complementary to Andrew Macdonald's, would be: Should 3^^^3 people each volunteer to experience a speck in the eye, in order to save one person from fifty years of torture?

    And with respect to utility functions: Another nonlinear way to aggregate individual disutilities x, y, z... is just to take the maximum, and to say that a situation is only as bad as the worst thing happening to any individual in that situation. This could be defended if one's assignment of utilities was based on intensity of experience, for example. There is no-one actually having a bad experience with 3^^^3 times the badness of a speck in the eye. As for the fact that two people suffering identically turns out to be no worse than just one - accepting a few counterintuitive conclusions is a small price to pay for simplicity, right?

    comment by John_Mark_Rozendaal · 2007-11-01T10:32:00.000Z · LW(p) · GW(p)

    I find it positively bizarre to see so much interest in the arithmetic here, as if knowing how many dust flecks go into a year of torture, just as one knows that sixteen ounces go into one pint, would inform the answer.

    What happens to the debate if we absolutely know the equation:

    3^^^3 dustflecks = 50 years of torture

    or

    3^^^3 dustflecks = 600 years of torture

    or

    3^^^3 dustfleck = 2 years of torture ?

    comment by John_Mark_Rozendaal · 2007-11-01T11:23:00.000Z · LW(p) · GW(p)

    The nation of Nod has a population of 3^^^3. By amazing coincidence, every person in the nation of Nod has $3^^^3 in the bank. (With a money supply like that, those dollars are not worth much.) By yet another coincidence, the government needs to raise revenues of $3^^^3. (It is a very efficient government and doesn't need much money.) Should the money be raised by taking $1 from each person, or by simply taking the entire amount from one person?

    comment by Recovering_irrationalist · 2007-11-01T12:23:00.000Z · LW(p) · GW(p)

    I take $1 from each person. It's not the same dilemma.

    ----

    Ri: The idea of convincing others to decide TORTURE is bothering me much more than my own decision.

    PK: I don't think there's any worry that I'm off to get my "rack winding certificate" :P

    Yes, I know. :-) I was just curious about the biases making me feel that way.

    individual living 3^^^3 times...keep memories and so on of all previous lives

    3^^^3 lives worth of memories? Even at one bit per life, that makes you far from human. Besides, you're likely to get tortured in googolplexes of those lifetimes anyway.

    Arrrgh, stop messing with my head. Actually, no, don't stop, this is fun! :)

    OK, here goes... it's this life. Tonight, you start fifty years of being loved at by countless sadistic Barney the Dinosaurs. Or, for all 3^^^3 lives you (at your present age) have to sing along to one of his songs. BARNEYLOVE or SONGS?

    comment by Sebastian_Hagen2 · 2007-11-01T14:18:00.000Z · LW(p) · GW(p)

    Andrew Macdonald asked:
    Any takers for the torture?
    Assuming the torture-life is randomly chosen from the 3^^^3 sized pool, definitely torture. If I have a strong reason to expect the torture life to be found close to the beginning of the sequence, similar considerations as for the next answer apply.

    Recovering irrationalist asks:
    OK here goes... it's this life. Tonight, you start fifty years being loved at by countless sadistic Barney the Dinosaurs. Or, for all 3^^^3 lives you (at your present age) have to singalong to one of his songs. BARNEYLOVE or SONGS?
    The answer depends on whether I expect to make it through the 50 year ordeal without permanent psychological damage. If I know with close to certainty that I will, the answer is BARNEYLOVE. Otherwise, it's SONGS; while I might still acquire irreversible psychological damage, it would probably take much longer, giving me a chance to live relatively sane for a long time before then.

    comment by Zubon · 2007-11-01T15:25:00.000Z · LW(p) · GW(p)

    Cooking something for two hours at 350 degrees isn't equivalent to cooking something at 700 degrees for one hour.

    Caledonian has made a great analogy for the point that is being made on either side. May I over-work it?

    They are not equivalent, but there is some length of time at 350 degrees that will burn as badly as 700 degrees. In 3^^^3 seconds, your lasagna will be ... okay, entropy will have consumed your lasagna by then, but it turns into a cloud of smoke at some point.

    Correct me if I am wrong here, but I don't think there is any length of time at 75 or 100 degrees that will burn as badly as one hour at 700 degrees. It just will not cook at all. Your food will sit there and rot, rather than burning.

    There must be some minimum temperature at which various things can burn. Given enough time at that temperature, it is the equivalent of just setting it on fire. Below that temperature, it is qualitatively different. You do not get bronze no matter how long you leave copper and tin at room temperature.

    (Or maybe I am wrong there. Maybe a couple of molecules will move properly at room temperature over a few centuries, so the whole mass becomes bronze in less than 3^^^3 seconds. I assume that anything physically possible will happen at some point in 3^^^3 seconds.)

    Are there any SPECKS advocates who say we should pick two people tortured for 49.5 years rather than one for 50 years? If there is any degree of summation possible, 3^^^3 will get us there.

    But, SPECKS can reply, there can be levels across which summation is not possible. If lasagna physically cannot burn at 75 degrees, even letting it "cook" for 33^^^^33 seconds, then it will never be as badly burned as one hour at 700 degrees.

    "Did I say 75?" TORTURE replies. "I meant whatever the minimum possible is for lasagna to burn, plus 1/3^^3 degrees." SPECKS must grant victory in that case, but wins at 2/3^^3 degrees lower.

    Which just returns the whole thing back to the primordial question-begging on either side, whether specks can ever sum to torture. If any number of beings needing to blink ever adds to 10 seconds of torture, TORTURE is in a very strong position, unless you are again arguing that 10 seconds of TORTURE is like 75 degrees, and there is some magic penny somewhere.

    (Am I completely wrong? Aren't physics and chemistry full of magic pennies like escape velocities and temperatures needed for physical reactions?)

    TORTURE must argue that yes, it is the sort of thing that adds. SPECKS must argue that it is like asking how many blades of grass you must add to get a battleship. "Mu."

    comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-01T15:37:00.000Z · LW(p) · GW(p)

    Zubon, we could formalize this with a tiered utility function (one not order-isomorphic to the reals, but containing several strata each order-isomorphic to the reals).

    But then there is a magic penny, a single sharp divide where no matter how many googols of pieces you break it into, it is better to torture 3^^^3 people for 9.99 seconds than to torture one person for 10.01 seconds. There is a price for departing the simple utility function, and reasons to prefer certain kinds of simplicity. I'll admit you can't slice it down further than the essentially digital brain; at some point, neurons do or don't fire. This rules out divisions of genuine googolplexes, rather than simple billions of fine gradations. But if you admit a tiered utility function, it will sooner or later come down to one neuron firing.

    And I'll bet that most Speckists disagree on which neuron firing is the magical one. So that for all their horror at us Unspeckists, they will be just as horrified at each other, when one of them claims that thirty seconds of waterboarding is better than 3^^^3 people poked with needles, and the other disagrees.
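    A minimal sketch of such a tiered utility function (the 10-second cutoff and the population figure are arbitrary assumptions of mine, chosen only to show where the "magic penny" lands):

    ```python
    # Disutility as a lexicographically compared pair: (upper-tier amount,
    # lower-tier amount).  Anything at or above the cutoff lands in the upper
    # stratum, which no quantity of lower-stratum suffering can outweigh.
    TIER_CUTOFF = 10.0   # seconds of torture; the arbitrary "magic penny"

    def disutility(people, seconds_each):
        amount = people * seconds_each
        if seconds_each >= TIER_CUTOFF:
            return (amount, 0.0)   # upper tier
        return (0.0, amount)       # lower tier

    HUGE = 10.0**100  # any astronomically large population will do

    # One person at 10.01 s ranks as worse than HUGE people at 9.99 s,
    # even though the lower-tier total is astronomically larger:
    print(disutility(1, 10.01) > disutility(HUGE, 9.99))  # True
    ```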

    comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-01T16:11:00.000Z · LW(p) · GW(p)

    ...except that, if I'm right about the biases involved, the Speckists won't be horrified at each other.

    If you trade off thirty seconds of waterboarding for one person against twenty seconds of waterboarding for two people, you're not visibly treading on a "sacred" value against a "mundane" value. It will rouse no moral indignation.

    Indeed, if I'm right about the bias here, the Speckists will never be able to identify a discrete jump in utility across a single neuron firing, even though the transition from dust speck to torture can be broken up into a series of such jumps. There's no difference of a single neuron firing that leads to the feeling of a comparison between a sacred and an unsacred value. The feeling of sacredness, itself, is quantitative and comes upon you in gradual increments of neurons firing - even though it supposedly describes a utility cliff with a slope higher than 3^^^3.

    The prohibition against torture is clearly very sacred, and a dust speck is clearly very unsacred, so there must be a cliff sharper than 3^^^3 between them. But the distinction between one dust speck and two dust specks doesn't seem to involve a comparison between a sacred and mundane value, and the distinction between 50 and 49.99 years of torture doesn't seem to involve a comparison between a sacred and a mundane value...

    So we're left with cyclical preferences. The one will trade 3 people suffering 49.99 years of torture for 1 person suffering 50 years of torture; after having previously traded 9 people suffering 49.98 years of torture for 3 people suffering 49.99 years of torture; and so on back to the starting point where it's better for 3^999999999 people to feel two dust specks than for 3^1000000000 people to feel one dust speck; having traded, a moment before, one person suffering 50 years of torture for 3^1000000000 people feeling one dust speck.

    Replies from: RST, RST
    comment by RST · 2017-12-19T13:15:45.539Z · LW(p) · GW(p)

    I think that we “speckists” see injuries as poisons: they can destroy people's lives only if they reach a certain concentration. So a greater but far more diluted pain can be less dangerous than a smaller but more concentrated one. 50 and 49 years of torture are still far over the threshold. One or two dust specks, on the other hand, are far below it.

    comment by RST · 2017-12-22T09:09:10.293Z · LW(p) · GW(p)

    I think it's worse for 3^999999999 people to feel two dust specks than for 3^1000000000 people to feel one dust speck. After all, the next step is that it is worse for 3^1000000000 people to feel one dust speck than for 3^1000000001 people to feel less than one dust speck, which seems right.

    comment by Evan · 2007-11-02T00:02:00.000Z · LW(p) · GW(p)

    Assuming that there are 3^^^3 distinct individuals in existence, I think the answer is pretty obvious - pick the torture. However, we cannot possibly hope to visualize so many individuals; it's a pointlessly large number. In fact, I would go as low as saying that one quadrillion human beings with dust specks in their eyes outweighs one individual's 50 years of torture. Consider: one quadrillion seconds of minute but noticeable pain versus a scant fifty years of tortured hell. One quadrillion seconds is about 31,709,792 years. Let's just go with 32 million years. Then factor in the magnitudes - torture is far worse than dust specks - is 50 years versus 32 million good enough odds for you?

    However, that being said, the question is yet another installment of lifeboat ethics, and has little bearing on the real world. If we are ever forced to make such a decision, that's one thing, but in the meantime let's work through systemic issues that might lead to such a situation instead.
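    A two-line check of the conversion in the comment above (treating each speck as roughly one second of minute pain, as that comment does):

    ```python
    seconds = 10**15                       # one quadrillion speck-seconds
    years = seconds / (365 * 24 * 3600)    # 365-day years
    print(f"{years:,.0f} years")           # about 31,709,792, i.e. roughly 32 million
    ```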

    comment by iwdw · 2007-11-03T00:19:00.000Z · LW(p) · GW(p)

    My initial reaction (before I started to think...) was to pick the dust specks, given that my biases made the suffering caused by the dust specks morally equivalent to zero, and 0^^^3 is still 0.

    However, given that the problem stated an actual physical phenomenon (dust specks), and not a hypothetical minimal annoyance, then you kind of have to take the other consequences of the sudden appearance of the dust specks under consideration, don't you?

    If I was omnipotent, and I could make everyone on Earth get a dust speck in their eye right now, how many car accidents would occur? Heavy machinery accidents? Workplace accidents? Even if the chance is vanishingly small -- let's say 6 accidents occur on Earth because everyone got a dust speck in their eye. That's one in a billion.

    That's one accident for every 10^9 people. Now, what percentage of those are fatal? Transport Canada currently lists 23.7% of car accidents in 2003 as resulting in a fatality, which is about 1 in 4. Let's be nice, assume that everywhere else on Earth is safer, and take that down to 1 in 100 accidents being fatal.

    Now, if everyone in existence gets a dust speck in their eye because of my decision, assuming the hypothetical 3^^^3 people live in something approximating the lifestyles on Earth, I've conceivably doomed 1 in 10^11 people to death.

    That is, my cloud of dust specks has killed 3^^^3 / 10^11 people.
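    A quick run of that chain of estimates (the accident and fatality rates are the comment's own illustrative guesses; the 6-billion population figure is my assumption behind "one in a billion"):

    ```python
    population_of_earth = 6e9          # rough 2007 figure
    accidents_caused = 6               # "let's say 6 accidents occur"
    p_accident = accidents_caused / population_of_earth   # 1e-9 per person
    p_fatal_given_accident = 1 / 100                      # the generous estimate
    p_death_per_person = p_accident * p_fatal_given_accident

    print(p_death_per_person)   # about 1e-11: one death per 10^11 specked people
    # Expected deaths among 3^^^3 people: 3^^^3 * 1e-11, still unimaginably many.
    ```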

    Replies from: ArisKatsaris
    comment by ArisKatsaris · 2011-02-21T21:28:40.142Z · LW(p) · GW(p)

    It is cheating to answer this by using worse individual consequences than the dust specks themselves.

    The very point of the question is the infinitesimality of each individual disutility.

    Replies from: Nornagest
    comment by Nornagest · 2011-02-21T21:40:29.977Z · LW(p) · GW(p)

    The more I think about the question, the more I'm convinced that it attempts to demonstrate the commensurability of disutility by invoking the commensurability of disutility.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2011-02-21T22:05:43.884Z · LW(p) · GW(p)

    I don't see how it's attempting to demonstrate the commensurability of disutility at all; it seems to be using the assumed commensurability of disutility to challenge intuitions about disutility. Can you say more about what is convincing you?

    Replies from: Nornagest
    comment by Nornagest · 2011-02-21T22:31:02.631Z · LW(p) · GW(p)

    If the OP's challenging a moral intuition that doesn't at some point reduce to commensurability, then I don't know what it is. It asks us to imagine the worst thing that could happen to a random person, and then the least perceptibly bad thing that could happen, and seems to be making the argument that an unimaginably huge number of the latter would trump a single instance of the former. What's that a reductio for, if not the assumption that torture (or anything comparably bad) carries a special kind of disutility?

    On the other hand I'm not sure what the post was written in response to, if anything, so there might be some contextual information there that I'm missing.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2011-02-21T22:47:07.325Z · LW(p) · GW(p)

    I'm... puzzled by this exchange.

    But, yes, agreed that a lot of objections to this post implicitly assert that torture is incommensurable with dust-specks, and EY is challenging that intuition.

    comment by rake · 2007-11-06T06:11:00.000Z · LW(p) · GW(p)

    I have a question/answer in relation to this post that seems to be off-topic for the forum. Click on my name if interested.

    comment by trever · 2007-11-06T10:00:00.000Z · LW(p) · GW(p)

    If the poor bastard being tortured is G. W. Bush, I'm all for it . . .

    comment by Kellopyy · 2007-11-22T20:53:00.000Z · LW(p) · GW(p)

    Since I would not be one of the people affected, I would not consider myself able to make that decision alone. In fact my preferences are irrelevant in that situation, even if I consider the situation to be obvious.

    To have a situation with 3^^^3 people, we must have at least that many people capable of existing in some meaningful way. I assume we cannot query them about their preferences in any meaningful (omniscient) way. As I cannot choose who will be tortured or who gets dust specks, I have to make a collective decision.

    I think my solution would be to take three different groups of randomly chosen people. The first group would be asked the original question and given a chance to discuss and change their minds. The second group would be asked whether they would save 3^^^3 people from dust specks by accepting torture. The third group would be asked whether they would agree to be dust-specked to give the person to be tortured a 1/3^^^3 chance of being saved.

    If one of the latter tests showed a significant preference for one of the situations, I would assume that situation is for some reason more acceptable when people are given the chance to choose. If it seemed that people were either willing to change the scenario in both situations, or unwilling to change it in either, I would rely on the stated preference of the first group and go by that.

    I do not think this solution is good enough.

    comment by Chris7 · 2007-11-30T14:33:00.000Z · LW(p) · GW(p)

    Evolution seems to have favoured the capacity for empathy (the specks choice) over the capacity for utility calculation, even though utility calculation would have been a 'no brainer' for the brain capacity we have.
    The whole concept reminds me of the Turing test. Turing, as a mathematician, just seems to have completely failed to understand that we don't assign rationality, or sentience, to another object by deduction. We do it by analogy.

    comment by Jeffrey_Herrlich · 2008-02-04T23:21:00.000Z · LW(p) · GW(p)

    I know that this is only a hypothetical example, but I must admit that I'm fairly shocked at the number of people indicating that they would select the torture option (as long as it wasn't them being tortured). We should be wary of the temptation to support something unorthodox for the effect of: "Hey, look at what a hardcore rationalist I can be." Real decisions have real effects on real people.

    comment by g · 2008-02-05T01:24:00.000Z · LW(p) · GW(p)

    And we should be wary of selecting something orthodox for fear of provoking shock and outrage. Do you have any reason to believe that the people who say they prefer TORTURE to SPECKS are motivated by the desire to prove their rationalist credentials, or that they don't appreciate that their decisions have real consequences?

    comment by Unknown3 · 2008-02-05T04:24:00.000Z · LW(p) · GW(p)

    Jeffrey, on one of the other threads, I volunteered to be the one tortured to save the others from the specks.

    As for "Real decisions have real effects on real people," that's absolutely correct, and that's the reason to prefer the torture. The utility function implied by preferring the specks would also prefer lowering all the speed limits in the world in order to save lives, and ultimately would ban the use of cars. It would promote raising taxes by a small amount in order to reduce the amount of violent crime (including crimes involving torture of real people), and ultimately would promote raising taxes on everyone until everyone could barely survive on what remains.

    Yes, real decisions have real effects on real people. That's why it's necessary to consider the total effect, not merely the effect on each person considered as an isolated individual, as those who favor the specks are doing.

    comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-02-05T05:22:00.000Z · LW(p) · GW(p)

    Following your heart and not your head - refusing to multiply - has also wrought plenty of havoc on the world, historically speaking. It's a questionable assertion (to say the least) that condoning irrationality has less damaging side effects than condoning torture.

    Replies from: rkyeun
    comment by rkyeun · 2012-08-02T00:28:01.238Z · LW(p) · GW(p)

    I think you've constructed your utility wrong in this instance. Without losing track of scope, we have 3^^^3 motes of dust in 3^^^3 eyes. And yes, that outweighs 50 years of torture, if and only if people have zero tolerance. But people don't break down into sobbing messes at the (literally at least) slightest provocation. There is a small threshold of badness that can happen to someone without them caring, and as long as all 3^^^3 of them only get epsilon below that, the total suffering for all 3^^^3 of them summed is exactly 0. We have 3^^^3 people, and 3^^^3 motes of dust, but also 3^^^3 separate emotional shock absorbers that take that speck of dust without flinching.

    It is non-linear. If you keep adding dust, eventually it starts breaking people's shock absorbers. And once those 3^^^3 people start experiencing nonzero suffering, it would quickly add up to more than fifty man-years of torture. Then the equation stops favoring dust motes. And here I hope I have some other recourse, because "If you ever find yourself thinking that torture is the right thing to do," is one of my warnings. I hope I can come out clever enough to take a third option where nobody gets tortured.
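    A toy version of that "shock absorber" model (my own formalization, not the commenter's; the tolerance and speck sizes are arbitrary):

    ```python
    TOLERANCE = 1.0    # units of nuisance a person absorbs without caring
    SPECK = 0.001      # one dust speck, well under the tolerance

    def felt_suffering(load):
        return max(0.0, load - TOLERANCE)

    # One speck per person, any number of people: everyone stays under
    # tolerance, so the aggregate is exactly zero under this model.
    print(felt_suffering(SPECK) * 10**100)   # 0.0

    # Pile enough specks onto the same people and the absorbers break:
    print(felt_suffering(SPECK * 5000))      # 4.0 units per person
    ```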

    Replies from: polymathwannabe, gjm
    comment by polymathwannabe · 2014-02-13T18:45:21.017Z · LW(p) · GW(p)

    I wish I could upvote this 3^^^3 times.

    comment by gjm · 2014-02-13T18:52:00.009Z · LW(p) · GW(p)

    that can happen to someone without them noticing

    But Eliezer's original description said this:

    suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck.

    It's an essential part of the setup that the disutility of a "dust speck" is not zero.

    Replies from: rkyeun
    comment by rkyeun · 2014-03-16T05:21:27.092Z · LW(p) · GW(p)

    Let me change "noticing" to "caring" then. Thank you for the correction.

    comment by Jeffrey_Herrlich · 2008-02-05T21:41:00.000Z · LW(p) · GW(p)

    "Following your heart and not your head - refusing to multiply - has also wrought plenty of havoc on the world, historically speaking. It's a questionable assertion (to say the least) that condoning irrationality has less damaging side effects than condoning torture."

    I'm not really convinced that multiplication of the dust-speck effect is relevant. Subjective experience is restricted to individuals, not collectives. To me, this specific exercise reduces to a simpler question: Would it be better (more ethical) to torture individual A for 50 years, or inflict a dust speck on individual B?

    If the goal is to be a utilitarian ethicist with the well-being of humanity as your highest priority, then something may be wrong with your model when the vast majority of humans would choose the option that you wouldn't. (As I suspect they would.) Utility isn't all that matters to most people. Is utilitarianism the only "real" ethics?

    My criticisms can sometimes come across the wrong way. (And I know that you actually do care about humanity, Eli.) I don't mean to judge here, just strongly disagree. Not that I retract what I wrote; I don't.

    comment by g · 2008-02-05T22:33:00.000Z · LW(p) · GW(p)

    Jeffrey wrote: "To me, this specific exercise reduces to a simpler question: Would it be better (more ethical) to torture individual A for 50 years, or inflict a dust speck on individual B?" Gosh. The only justification I can see for that equivalence would be some general belief that badness is simply independent of numbers. Suppose the question were: Which is better, for one person to be tortured for 50 years or for everyone on earth to be tortured for 49 years? Would you really choose the latter? Would you not, in fact, jump at the chance to be the single person for 50 years if that were the only way to get that outcome rather than the other one?

    In any case: since you now appear to be conceding that it's possible for someone to prefer TORTURE to SPECKS for reasons other than a childish desire to shock, are you retracting your original accusation and analysis of motives? ... Oh, wait, I see you've explicitly said you aren't. So, you know that one leading proponent of the TORTURE option actually does care about humanity; you agree (if I've understood you right) that utilitarian analysis can lead to the conclusion that TORTURE is the less-bad option; I assume you agree that reasonable people can be utilitarians; you've seen that one person explicitly said s/he'd be willing to be the one tortured; but in spite of all this, you don't retract your characterization of that view as shocking; you don't retract your implication that people who expressed a preference for TORTURE did so because they want to show how uncompromisingly rationalist they are; you don't retract your implication that those people don't appreciate that real decisions have real effects on real people. I find that ... well, "fairly shocking", actually.

    (It shouldn't matter, but: I was not one of those advocating TORTURE, nor one of those opposing it. If you care, you can find my opinions above.)

    comment by Unknown3 · 2008-02-06T06:21:00.000Z · LW(p) · GW(p)

    Jeffrey, do you really think serial killing is no worse than murdering a single individual, since "Subjective experience is restricted to individuals"?

    In fact, if you kill someone fast enough, he may not subjectively experience it at all. In that case, is it no worse than a dust speck?

    comment by Jeffrey_Herrlich · 2008-02-06T19:01:00.000Z · LW(p) · GW(p)

    "Suppose the question were: Which is better, for one person to be tortured for 50 years or for everyone on earth to be tortured for 49 years? Would you really choose the latter? Would you not, in fact, jump at the chance to be the single person for 50 years if that were the only way to get that outcome rather than the other one?"

    My criticism was for this specific initial example, which yes did seem "obvious" to me. Very few, if any, ethical opinions can be generalized over any situation and still seem reasonable. At least by my definition of "reasonable".

    Notice that I didn't single anyone out as being "bad". Morality is subjective and I don't dispute that. "Every man is right by his own mind". I cautioned that we shouldn't allow a desire to stand out to factor into a decision such as this. I know well that theatrics isn't an uncommon element on mailing lists/blogs. This example shocked me because toy decisions can become real decisions. I have a hunch that I wouldn't be the only person shocked by this. If this specific example were put before all of humanity, I imagine that the people who were not shocked by it would be the minority. I don't think that I'm being unreasonable.

    comment by Jeffrey_Herrlich · 2008-02-07T19:01:00.000Z · LW(p) · GW(p)

    I can see myself spending too much time here, so I'm going to finish up and y'all can have the last word. I'll admit that it's possible that one or more of you actually would sacrifice yourself to save others from a dust speck. Needless to say, I think it would be a huge mistake on your part. I definitely wouldn't want you to do it on my behalf, if for nothing more than selfish reasons: I don't want it weighing on my conscience. Hopefully this is a moot point anyway, since it should be possible to avoid both unwanted dust specks and unwanted torture (e.g. via a Friendly AI). We should hope that torture dies away with the other tragedies of our past, and isn't perpetuated into our not-yet-tarnished future.

    comment by Bogdan_Butnaru · 2008-10-22T13:26:00.000Z · LW(p) · GW(p)

    I know you're all getting a bit bored, but I'm curious what you think about a different scenario:

    What if you have to choose between (a) for the next 3^^^3 days, you get one extra speck in your eye per day beyond the normal amount, and for 50 years of that period you're placed in stasis, or (b) you get the normal amount of specks in your eyes, but during the next 3^^^3 days you'll pass through 50 years of atrocious torture.

    Everything else is considered equal in the other cases, including the fact that (i) your total lifespan will be the same in both cases (more than 3^^^3 days), (ii) the specks are guaranteed to not cause any physical effects other than those mentioned in the original post (i.e., you're minimally annoyed and blink once more each day; there are no "tricks" about hidden consequences of specks), (iii) any other occurrence of specks in the eye (yours or others') or torture (you or others) will happen exactly the same for either choice, (iv) the 50 years of either stasis or torture would happen at the same points and (v) after the end of the 3^^^3 days the state of the world is exactly the same except for you (e.g., the genie doesn't come back with something tricky).

    Also assume that for the 3^^^3 days you are human-shaped and human-minded, except that your memory (and your ability to use it) is stretched to work over that duration the way a typical human's does over a typical life.

    Does your answer change if either:
    A) it's guaranteed that everything else is perfectly equal (e.g., the two possible cases will magically be forbidden to interfere with any of your decisions during the 3^^^3 days, but afterwards you'll remember them; in the case of torture, any remaining trauma will remain until healed "physically". More succinctly, there are no side effects during the 3^^^3 days, and none other than the "normal" ones afterwards).
    B) the 50 years of torture happen at the start, end, or distributed throughout the period.
    C) we replace the life period with either (i) your entire lifespan or (ii) infinity, and/or the period of torture with (i) any constant length larger than one year or (ii) any constant fraction of the lifespan discussed.
    D) you are magically justified to put absolute certain trust in the offer (i.e., you're sure the genie isn't tricking you).
    E) replace "speck in the eye" by "one hair on your body grows by half the normal amount" for each day.

    Of course, you don't have to address every variation mentioned, just those that you think relevant.

    comment by Bogdan_Butnaru · 2008-10-22T13:40:00.000Z · LW(p) · GW(p)

    OK, I see I got a bit long-winded. The interesting part of my question is whether you'd make the same decision if it's about you instead of others. The answer is obvious, of course ;-)

    The other details/versions I mentioned are only intended to explore the "contour of the value space" of the other posters. (I'm sure Eliezer has a term for this, but I forget it.)

    comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-10-22T15:43:00.000Z · LW(p) · GW(p)

    Bogdan's presented almost exactly the argument that I too came up with while reading this thread. I would choose the specks in that argument and also in the original scenario (as long as I am not committing to the same choice being repeated an arbitrary number of times, and I am not causing more people to crash their cars than I cause not to crash their cars; the latter seems like an unlikely assumption, but thought experiments are allowed to make unlikely assumptions, and I'm interested in the moral question posed when we accept the assumption). Based on the comments above, I expect that Eliezer is perfectly consistent and would choose torture, though (as in the scenario with 3^^^3 repeated lives).

    Eliezer and Marcello do seem to be correct in that, in order to be consistent, I would have to choose a cut-off point such that n dust specks in 3^^^3 eyes would be less bad than one torture, but n+1 dust specks would be worse. I agree that it seems counterintuitive that adding just one speck could make the situation "infinitely" worse, especially since the speckists won't be able to agree exactly where the cut-off point is.

    But it's only the infinity that's unique to speckism. Suppose that you had to choose between inflicting one minute of torture on one person, or putting n dust specks into that person's eye over the next fifty years. If you're a consistent expected utility altruist, there must be some n such that you would choose n specks, but not n+1 specks. What makes the n+1st speck different? Nothing, it just happens to be the cut-off point you must choose if you don't want to choose 10^57 specks over torture, nor torture over zero specks. If you make ten altruists consider the question independently, will they arrive at exactly the same value of n? Prolly not.

    The above argument does not destroy my faith in decision theory, so it doesn't destroy my provisional acceptance of speckism, either.

    comment by retired_urologist · 2008-10-22T15:52:00.000Z · LW(p) · GW(p)

    I came across this post only today, because of the current comment in the "recent comments" column. Clearly, it was an exercise that drew an unusual amount of response. It further reinforces my impression of much of the OB blog, posted in August, and denied by email.

    comment by Tim7 · 2009-02-20T22:58:00.000Z · LW(p) · GW(p)

    I think you should ask people whether they would consent to having a dust speck fly into their eye to save someone from torture, until you have at least 3^^^3 who agree. When you have enough people, just put dust specks into their eyes and save the other person.

    comment by homunq · 2009-02-21T01:08:00.000Z · LW(p) · GW(p)

    The question is, of course, silly. It is perfectly rational to decline to answer. I choose to try to answer.

    It is also perfectly rational to say "it depends". If you really think "a dust speck in 3^^^3 eyes" gives a uniquely defined probability distribution over different subsets of possibilityverse, you are being ridiculous. But let's pretend it did - let's pretend we had 3^^^^3 parallel Eliezers, standing on flat golden surfaces in 1G and one atmosphere, for just long enough to ask each other enough questions to define the problem properly. (I'm sorry, Eliezer, if by stating that possibility, I've increased the "true"ness of that part of the probabilityverse by ((3^^^3+1)/3^^^3) :) ).

    You can also say "I've thought about it, but I don't trust my thought processes". That is not my position.

    My position is that this question does not, in fact, have an answer. I think that that fact is very important.

    It's not that the numbers are meaningless. 3^^^3 is a very exact number, and you can prove any number of things about it. A different question using ridiculous numbers - say, would you rather torture 4^^^4 people for 5 minutes or 3^^^3 of them for 50 years - has a single correct answer which is very clear (of course, the 3^^^3 ones; 4^^^4 >>> (3^^^3)^2). (Unless there were very bizarre extra conditions on the problem.)

    It's just that there is no universal moral utility function which inputs a probability distribution over a finite subset of the possibilityverse and outputs a number. It's more like relativistic causality - substitute "better" for "after". A is after B, and B is spacelike-separated from C, but C can also be spacelike-separated from A. The dust specks and the torture are incomparable - spacelike-separated, so to speak.

    I think that, philosophically, that makes a big difference. If you philosophically can't always go around morally comparing near-infinite sets, then it's silly to try to approximate how you would behave if you could. Which means you consider the moral value of the consequences which you could possibly anticipate. So yeah, if you are working on AI, you are morally obligated to think about FAI, because that's intentional action, and you would have to be a monster to say you didn't care. But you don't get to use FAI and the singularity to trump the here-and-now, because in many ways they're just not comparable.

    Which means, to me, for instance, that people can understand the singularity idea and believe it has a non-0 probability, and have abilities or resources that would be meaningful to the FAI effort, and still morally choose to simply live as "good people" in a more traditional sense (have a good life in which they make the people with whom they interact overall happier). It's not just a lack of ability to trace the consequences; it's also the possibility that the consequences of this or that outcome will be literally incomparable by any finite halting algorithm, whereas even our desperately-limited brains have decent approximations of algorithms for morally comparing the effect of, say, posting on OB versus washing the dishes.

    Going to wash the dishes now.

    comment by homunq · 2009-02-21T01:37:00.000Z · LW(p) · GW(p)

    Tim: You're right - if you are a reasonably attractive and charismatic person. Otherwise, the question (from both sides) is worse than the dust speck.

    (Asking people also puts you in the picture. You would have to be willing to spend an eternity asking people a silly question, and to learn all possible linguistic vocalizations in order to do so. There are many fewer vocalizations than possible languages, and many fewer possible human languages than 3^^^3. You will spend more time going from one person of the SAME language to another, at 1 femtosecond per journey, than you would spend learning all possible human languages. That would be true even if the people were fully shuffled by language - just 1 femtosecond each for all the times when coincidence gives you two of the same language in a row. 3^^^3 is that big.)

    comment by HughRistik · 2009-09-11T00:36:05.035Z · LW(p) · GW(p)

    Torture is not the obvious answer, because torture-based suffering and dust-speck-based suffering are not scalar quantities with the same units.

    To be able to make a comparison between two quantities, the units must be the same. That's why we can say that 3 people suffering torture for 49.99 years is worse than 1 person suffering torture for 50 years. Intensity × Duration × Number of People gives us units of PainIntensity-Person-Years, or something like that.

    Yet torture-based suffering and dust-speck-based suffering are not measured in the same units. Consequently, we cannot solve this question as a simple math problem. For example, the correct units of torture-based suffering might involve Sanity-Destroying-Pain. There is no reason to believe that we can quantitatively compare Easily-Recoverable-Pain to Sanity-Destroying-Pain; at least, the comparison is not just a math problem.

    To be able to do the math, we would have to convert both types of suffering to the same units of disutility. Some folks here seem to think that no matter what the conversion functions are, 3^^^3 is just so big that the converted disutility of 3^^^3 dust specks is greater than the converted disutility of 50 years of torture for one person. But determination of the correct disutility conversion functions is itself a philosophical problem that cannot be waved away, and it's impossible to evaluate that claim until those conversion functions have at least been hinted at.

    One way to get different types of suffering to have the same units would be to represent them as vectors, and find a way to get the magnitude of those vectors.

    The torture position seems to do the math by using pain intensity as a scalar. Yet there is no reason to believe that suffering is a scalar quantity, or that the disutility accorded to suffering is a scalar quantity. Even pain intensity is a case where "quantity has a quality all of its own": as you increase it, the suffering goes through qualitative changes. For example, if just a 10% increase in pain duration/intensity causes Post-Traumatic Stress Disorder, that pain is more than 10% worse, because it has become a qualitatively different type of suffering. The units change.

    Suffering may well be better represented as a vector. Other dimensions in the vector might include variables such as chance of Post-Traumatic Stress Disorder (0 in the case of dust specks which are uncomfortable but not traumatic, and approaching 100% in the case of torture), non-recovery chance (0% in the case of dust specks, approaching 100% in the case of torture), recovery time (<1 second in the case of dust specks, approaching infinity in the case of 50 years of torture), insanity, human rights violation, career-destruction, mental-health destruction, life destruction...

    Choosing pain intensity alone, over the other variables relevant to suffering, is begging the question. We could cherry-pick another dimension out of the vector to get a different result, such as life destruction. LifeDestructionChance(50YearsOfTorture) could be greater than LifeDestructionChance(DustSpeck) * 3^^^3 (I might be committing scope insensitivity saying this, but the point is that the answer isn't self-evident). Of course, life destruction isn't the only variable relevant to the calculation of suffering, but neither is pain intensity.

    Now, if there is a way to take the magnitude of a suffering vector (another philosophical problem), it's not at all self-evident that Magnitude( SpeckVector ) * 3^^^3 > Magnitude( 50YearsOfTortureVector), because the SpeckVector has virtually all its dimensions approaching 0 while the TortureVector has many dimensions approaching infinity or their max value (which I think reflects why people think torture is so bad). That would depend on what the dimensions of those vectors are and how the magnitude function works.
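
    One way to make the vector proposal above concrete is a small sketch like the following (Python; the dimension names, the example numbers, and the Euclidean norm as the magnitude function are all illustrative assumptions, not a worked-out theory of suffering):

    ```python
    import math

    # A sketch of the vector proposal above. The dimension names, the example
    # numbers, and the Euclidean norm as the "magnitude function" are all
    # illustrative assumptions, not a worked-out theory of suffering.

    def magnitude(v):
        """One possible magnitude function: the Euclidean norm of the vector."""
        return math.sqrt(sum(x * x for x in v.values()))

    speck = {
        "pain_intensity": 1e-6,          # barely noticeable
        "ptsd_chance": 0.0,
        "non_recovery_chance": 0.0,
        "recovery_time_years": 1e-9,     # well under a second
        "life_destruction_chance": 0.0,
    }

    torture_50y = {
        "pain_intensity": 1.0,           # near the top of the scale
        "ptsd_chance": 0.99,
        "non_recovery_chance": 0.95,
        "recovery_time_years": 50.0,
        "life_destruction_chance": 0.9,
    }

    N = 3 ** 27   # stand-in for a huge population; 3^^^3 itself won't fit in memory

    print(magnitude(speck) * N > magnitude(torture_50y))
    # With these made-up numbers this prints True; different assumed scales or a
    # different magnitude function could flip it, which is the commenter's point.
    ```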

    Replies from: Cyan
    comment by Cyan · 2009-09-11T01:54:59.622Z · LW(p) · GW(p)

    But determination of the correct disutility conversion functions is itself a philosophical problem that cannot be waved away, and it's impossible to evaluate that claim until those conversion functions have at least been hinted at.

    You seem to have gotten hung up on 3^^^3, which is really just a placeholder for "some finite number so large it boggles the mind". If you accept that all types of pain can be measured on a common disutility scale, then all you need is a non-zero conversion factor, and the repugnant conclusion follows (for some mind-bogglingly large number of specks). I think that if a line of argument that rescues your rebuttal exists, it involves lexicographic preferences.
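
    For readers unfamiliar with the term, here is a minimal sketch of what lexicographic preferences would look like in this setting (the tier names and the assignment of specks and torture to different tiers are illustrative assumptions, not anything the commenters endorse):

    ```python
    from functools import total_ordering

    # Harms are sorted into tiers, and no quantity of a lower-tier harm ever
    # outweighs a single higher-tier harm. The tier names and the claim that
    # specks and torture sit in different tiers are illustrative assumptions.

    TIERS = ("minor_annoyance", "serious_injury", "torture")   # low to high priority

    @total_ordering
    class Harm:
        def __init__(self, amounts):
            # amounts: dict mapping tier name -> quantity of harm in that tier
            self.amounts = amounts

        def _key(self):
            # Compare the highest-priority tier first; lower tiers only break ties.
            return tuple(self.amounts.get(t, 0) for t in reversed(TIERS))

        def __eq__(self, other):
            return self._key() == other._key()

        def __lt__(self, other):
            return self._key() < other._key()

    specks = Harm({"minor_annoyance": 3 ** 27})   # stand-in for 3^^^3 specks
    torture = Harm({"torture": 1})

    print(specks < torture)   # True under lexicographic rules, however many specks
    ```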

    comment by Bugle · 2009-09-12T13:05:05.835Z · LW(p) · GW(p)

    There is a false choice being offered, because every person in every lifetime is going to experience getting something in their eye. I get a bug flying into my eye on a regular basis whenever I go running (3 of them the last time!), and it'll probably have happened thousands of times to me by the end of my life. It's pretty much a certainty of human experience (although I suppose it's statistically possible for some people to go through life without ever getting anything in their eyes).

    Is the choice being offered to make all humanity's eyes for all eternity immune to small inconveniences such as bugs, dust or eyelashes? Otherwise we really aren't being offered anything at all.

    Replies from: Bugle
    comment by Bugle · 2009-09-12T18:00:53.787Z · LW(p) · GW(p)

    Although if we factor in consequences - say, being distracted by a dust speck in the eye while driving or doing any other such critical activity - then statistically those trillions of dust specks have the potential to cause untold amounts of damage and suffering.

    comment by Nubulous · 2009-09-12T22:45:16.767Z · LW(p) · GW(p)

    Doesn't "harm", to a consequentialist, consist of every circumstance in which things could be better, but aren't ? If a speck in the eye counts, then why not, for example, being insufficiently entertained ?

    If you accept consequentialism, isn't it morally right to torture someone to death so long as enough people find it funny ?

    Replies from: Alicorn
    comment by Alicorn · 2009-09-12T23:02:38.370Z · LW(p) · GW(p)

    I'm picking on this comment because it prompted this thought, but really, this is a pervasive problem: consequentialism is a gigantic family of theories, not just one. They are all still wrong, but for any single counterexample, such as "it's okay to torture people if lots of people would be thereby amused", there is generally at least one theory or subfamily of theories that have that counterexample covered.

    Replies from: PowerSet
    comment by PowerSet · 2009-09-13T07:56:44.175Z · LW(p) · GW(p)

    Isn't it paradoxical to argue against consequentialism based on its consequences?

    The reason you can't torture people is that those members of your population who aren't as dumb as bricks will realize that the same could happen to them. Such anxiety among the more intelligent members of your society should outweigh the fun experienced by the more easily amused.

    Replies from: Alicorn
    comment by Alicorn · 2009-09-13T12:51:54.850Z · LW(p) · GW(p)

    I typically argue against consequentialism based on appeals to intuition and its implications, which are only "consequences" in the sense used by consequentialism if you do some fancy equivocating.

    The reason you can't torture people is that those members of your population who aren't as dumb as bricks will realize that the same could happen to them. Such anxiety among the more intelligent members of your society should outweigh the fun experienced by the more easily amused.

    Pfft. It is trivially easy to come up with thought experiments where this isn't the case. You can increase the ratio of bricks-to-brights until doing the arithmetic leads to the result that you should go ahead and torture folks. You can choose folks to torture on the basis of well-publicized, uncommon criteria, so that the vast majority of people rightly expect it won't happen to them or anyone they care about. You can outright lie to the population, and say that the people you torture are all volunteers (possibly even masochists who are secretly enjoying themselves) contributing to the entertainment of society for altruistic reasons. Heck, after you've tortured them for a while, you can probably get them to deliver speeches about how thrilled they are to be making this sacrifice for the common morale, on the promise that you'll kill them quicker if they make it convincing.

    All that having been said, there are consequentialist theories that do not oblige or permit the torture of some people to amuse the others. Among them are things like side-constraints rights-based consequentialisms, certain judicious applications of deferred-hedon/dolor consequentialisms, and negative utilitarianism (depending on how the entertainment of the larger population cashes out in the math).

    comment by R_Nebblesworth · 2009-09-13T05:34:52.763Z · LW(p) · GW(p)

    It seems that many, including Yudkowsky, answer this question by making the most basic mistake, i.e. by cheating - assuming facts not in evidence.

    We don't know anything about (1) the side-effects of picking SPECKS (such as car crashes); and definitely don't know that (2) the torture victim can "acclimate". (2) in particular seems like cheating in a big way - especially given the statement "without hope or rest".

    There's nothing rational about posing a hypothetical and then adding in additional facts in your answer. However, that's a great way to avoid the question presented.

    Replies from: R_Nebblesworth
    comment by R_Nebblesworth · 2009-09-14T00:14:54.046Z · LW(p) · GW(p)

    I've received minus 2 points (that's bad I guess?) with no replies, which is very illuminating... I suppose I'm just repeating the above points on lexicographic preferences.

    Any answer to the question involves making value choices about the relative harms associated with torture and specks, I can't see how there's an "obvious" answer at all, unless one is arrogant enough to assume their value choices are universal and beyond challenge.

    Unless you add facts and assumptions not stated, the question compares torture x 50 years to 1 dust speck in an infinite number of people's eyes, one time. Am I missing something? Because it seems it can't be answered without reference to value choices - which to anyone who doesn't share those values will naturally appear irrational.

    Replies from: CarlShulman
    comment by CarlShulman · 2009-09-14T00:39:00.957Z · LW(p) · GW(p)

    "I've received minus 2 points (that's bad I guess?) with no replies, which is very illuminating... "

    I think this is mainly because your comment seemed uninformed by the relevant background but was presented with a condescending and negative tone. Comments with both these characteristics tend to get downvoted, but if you cut back on one or the other you should get better responses.

    "It seems that many, including Yudkowsky, answer this question by making the most basic mistake, i.e. by cheating - assuming facts not in evidence."

    http://lesswrong.com/lw/2k/the_least_convenient_possible_world/

    "Any answer to the question involves making value choices"

    Yes it does.

    "compares torture x 50 years to 1 dust speck in an infinite number people's eyes"

    3^^^3 is a (very large) finite number.

    "It can't be answered without reference to value choices - which to anyone who doesn't share those values will naturally appear irrational."

    Moral anti-realists don't have to view differences in values as reflecting irrationality.

    Replies from: R_Nebblesworth, thomblake
    comment by R_Nebblesworth · 2009-09-14T00:46:28.262Z · LW(p) · GW(p)

    Fair enough, apologies for the tone.

    But if answering the question involves making arbitrary value choices I don't understand how there can possibly be an obvious answer.

    Replies from: CarlShulman
    comment by CarlShulman · 2009-09-14T00:52:57.020Z · LW(p) · GW(p)

    There isn't for agents in general, but most humans will in fact trade off probabilities of big bads (death, torture, etc) against minor harms, and so preferring SPECKS indicates a seeming incoherency of values.

    Replies from: R_Nebblesworth
    comment by R_Nebblesworth · 2009-09-14T00:57:31.580Z · LW(p) · GW(p)

    Thanks for the patient explanation.

    comment by thomblake · 2009-09-14T14:49:10.068Z · LW(p) · GW(p)

    Comments with both these characteristics tend to get downvoted, but if you cut back on one or the other you should get better responses.

    I'd just like to note that comments informed by the relevant background but condescending and negative are often voted down as well. Though Annoyance seems to have relatively high karma anyway.

    Replies from: CarlShulman, bogus
    comment by CarlShulman · 2009-09-14T14:52:56.252Z · LW(p) · GW(p)

    I considered that, which is why I said that the responses would be "better."

    comment by bogus · 2009-09-14T14:53:26.130Z · LW(p) · GW(p)

    I'd just like to note that comments informed by the relevant background but condescending and negative are often voted down as well.

    I agree. See DS3618 for a crystal-clear example.

    Replies from: thomblake, CarlShulman
    comment by thomblake · 2009-09-14T15:10:13.728Z · LW(p) · GW(p)

    I strongly doubt that person counts as "informed by the relevant background".

    comment by CarlShulman · 2009-09-14T16:39:49.946Z · LW(p) · GW(p)

    I don't think that case is crystal-clear, could you explain this a bit more?

    Looking at DS3618's comments, he (I estimate gender based on writing style and the demographics of this forum and of the CMU PhD program he claims to have entered) had some good (although obvious) points regarding peer-review and Flare. Those comments were upvoted.

    The comments that were downvoted seem to have been very negative and low in informed content.

    He claimed that calling intelligent design creationism "creationism" was "wrong" because ID is logically separable from young earth creationism and incorporates the idea of 'irreducible complexity.' However, arguments from design, including forms of 'irreducible complexity' argument, have been creationist standbys for centuries. Rudely chewing someone out for not defining creationism in a particular narrow fashion, the fashion advanced by the Discovery Institute as part of an organized campaign to evade court rulings, does deserve downvoting. Suggesting that the Discovery Institute, including Behe, isn't a Christian front group is also pretty indefensible given the public info on it (e.g. the "wedge strategy" and numerous similar statements by DI members to Christian audiences, which show it to be a two-faced organization).

    This comment implicitly demanded that no one note limitations of the brain without first building AGI, and was lacking in content.

    DS3618 also claims to have a stratospheric IQ, but makes numerous spelling and grammatical errors. Perhaps he is not a native English speaker, but this does shift probability mass to the hypothesis that he is a troll or sock puppet.

    He says that he entered the CMU PhD program without a bachelor's degree based on industry experience. This is possible, as CMU's PhD program has no formal admissions requirements according to its document. However, given base rates, and the context of the claim, it is suspiciously convenient and shifts further probability mass towards the troll hypothesis. I suppose one could go through the CMU Computer Science PhD student directory to find someone without a B.S. and with his stated work background to confirm his identity (only reporting whether there is such a person, not making the anonymous DS3618's identity public without his consent).

    comment by shibl · 2009-10-16T06:42:59.034Z · LW(p) · GW(p)

    The obvious answer is that torture is preferable.

    If you had to pick for yourself between a 1/3^^^3 chance of 50 years of torture and a dust speck, you would pick the chance of torture.

    We actually do this every day: we eat foods that can poison us rather than be hungry, we cross the road rather than stay at home, etc.

    Imagine there is a safety improvement to your car that will cost 0.0001 cents but will save you from an event that will happen once in 1000 universe lifetimes. Would you pay for it?

    Replies from: thomblake
    comment by thomblake · 2009-10-16T12:45:12.336Z · LW(p) · GW(p)

    I don't think it's very controversial that TORTURE is the right choice if you're maximizing overall net utility (or in your example, maximizing expected utility). But some of us would still choose SPECKS.

    comment by ABranco · 2009-10-25T03:28:53.681Z · LW(p) · GW(p)

    Very-Related Question: Typical homeopathic dilutions are 10^(-60). On average, this would require giving two billion doses per second to six billion people for 4 billion years to deliver a single molecule of the original material to any patient.

    Could one argue that if we administer a homeopathic pill of vitamin C in the above dilution to every living person for the next 3^^^3 generations, the impact would be a humongous amount of flu-elimination?

    If anyone convinces me that the answer is yes, I might agree to be a Torturer. Otherwise, I assume that the negligibility of the speck, plus people's resilience, would leave no lasting effects. The disutility would vanish in milliseconds. If people wouldn't even notice or remember the specks after a while, it'd equate to zero disutility.

    It's not that I can't do the maths. It's that the evil of the speck seems too diluted to do harm.

    Just like homeopathy is too diluted to do good.

    Replies from: RobinZ, Nick_Tarleton
    comment by RobinZ · 2009-10-25T03:39:34.013Z · LW(p) · GW(p)

    Could one argue that if we administer a homeopathic pill of vitamin C in the above dilution to every living person for the next 3^^^3 generations, the impact would be a humongous amount of flu-elimination?

    Easily. 3^^^3 = 3^^7,625,597,484,987, a power tower of threes 7,625,597,484,987 layers tall, is so much larger than 10^60 that it is almost certain that many people will receive significant doses of vitamin C. Heck, 3^243 ~= 8.719e115 >> 10^60, and even that is utterly dwarfed by 3^^4, let alone 3^^^3. If there is any causal relationship at all between receiving a dose of vitamin C and flu resistance (which I believe you imply for the purposes of the question), then a tremendous number of people will be protected from the flu -- a number still, for all practical purposes, as large as 3^^^3 itself.

    Replies from: ABranco
    comment by ABranco · 2009-10-25T04:21:58.452Z · LW(p) · GW(p)

    almost certain that many people will receive significant doses of vitamin C

    Not what I said.

    Each person will receive vitamin C diluted in the ratio of 10^(-60) (see reference here). The amount is the same for everyone, constant. Strictly one dose per person (as it was one speck per person).

    But the number of persons are all people alive in the next 3^^^3 generations.

    If there is any causal relationship at all between receiving a dose of vitamin C and flu resistance

    ...which wouldn't mean the effect is linear at all. Above a certain dose it can be lethal; below, it can have no effect.


    Does it sound reasonable that if you eat one nanogram of bread during severe starvation, it would retard your death by precisely zero seconds?

    Replies from: RobinZ, pengvado
    comment by RobinZ · 2009-10-25T04:38:21.740Z · LW(p) · GW(p)

    But each patient receives fewer than 10^60 molecules in total -- one must assume some probability distribution on the number of active molecules if we are to suppose any medication is delivered at all. Assuming the dilutions are performed as prescribed in a typical homeopathic preparation, a minuscule fraction of doses will randomly contain significantly more than the expected concentration, but even so the logarithm of that fraction will be of the same order of magnitude as the logarithm of 10^-60 -- and it will therefore still multiply to a tremendous number across 3^^^3 cases.

    That said, even if you assume that the distribution is exactly as even as possible -- every patient receives either zero or one molecule of vitamin C -- there will be a minuscule probability that the effect of that one molecule will be at the tipping point. Truly minuscule -- probably on the order of 10^-20 to 10^-25, a few in one Avogadro's number -- but this still corresponds to aiding 1 in 10^80 to 10^85 people, which multiplies to a tremendous number in 3^^^3 cases.
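
    A rough numerical sketch of this expected-value argument, in Python (the one-gram dose size and the vitamin C molar mass are assumptions added for illustration; the dilution factor and the "tipping point" probability are the figures quoted in this thread):

    ```python
    import math

    # Back-of-the-envelope version of the expected-value argument above. The
    # one-gram dose and the molar mass of vitamin C are assumptions added for
    # illustration; the dilution factor and the "tipping point" probability are
    # the figures quoted in this thread.

    AVOGADRO = 6.022e23
    dose_grams = 1.0                # assumed size of one homeopathic dose
    molar_mass_vit_c = 176.12       # g/mol, ascorbic acid
    dilution = 1e-60                # typical homeopathic dilution, per the comment

    expected_molecules = dose_grams / molar_mass_vit_c * AVOGADRO * dilution
    print(expected_molecules)       # ~3.4e-39 molecules per dose: far less than one

    p_tip = 1e-23                   # RobinZ's guessed chance that one molecule matters
    p_helped_per_patient = expected_molecules * p_tip    # ~3.4e-62

    # 3^^^3 patients can't be represented directly, so compare logarithms. Even
    # the far smaller 3^^4 = 3^7,625,597,484,987 has log10 of about 3.6e12, and
    # subtracting ~62 from that exponent changes essentially nothing.
    log10_3_up_up_4 = 7_625_597_484_987 * math.log10(3)
    print(log10_3_up_up_4)                                       # ~3.64e12
    print(log10_3_up_up_4 + math.log10(p_helped_per_patient))    # still ~3.64e12
    ```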

    Replies from: ABranco
    comment by ABranco · 2009-10-25T05:09:30.935Z · LW(p) · GW(p)

    Mathematically, I have to agree with your reply: you either have no molecules or at least one. And then, your calculations hold true. And I'm wrong.

    Physiologically, though, my argument is that the "nanoutility" this molecule would add would have such a negligible effect that nothing would change in the person's life by any practical measure. It would pass completely unnoticed (zero!) — for each person in the 3^^^3 generations.

    I assume a fuzzy scale of flu, so that no single molecule would turn sure-flu to sure-non-flu. As I assumed with the specks.

    Replies from: RobinZ
    comment by RobinZ · 2009-10-25T05:15:10.978Z · LW(p) · GW(p)

    Even if you perform the more sophisticated analysis, the probability of the flu should shift slightly -- and that slightly will be on the order of 10^-23, as before. And that times 3^^^3...

    comment by pengvado · 2009-10-25T05:45:45.695Z · LW(p) · GW(p)

    Does it sound reasonable that if you eat one nanogram of bread during severe starvation, it would retard your death by precisely zero seconds?

    No. You use energy at some finite rate (I'll assume 2000 kilocalories/day, dunno how much starvation affects this). A nanogram of bread contains a nonzero amount of energy (~2.5 microcalories). So it increases your life expectancy by a nonzero time (~100 nanoseconds). A similar analysis can be performed for anything down to and including a single molecule.
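
    A quick check of that arithmetic (Python; the 2.5 kcal/g energy density of bread is an assumed figure in the same spirit as the comment's numbers):

    ```python
    # Checking the arithmetic above. The 2.5 kcal/g energy density of bread is an
    # assumed figure in the same spirit as the comment's numbers.

    kcal_per_gram_bread = 2.5
    energy_kcal = kcal_per_gram_bread * 1e-9          # one nanogram: 2.5e-9 kcal
    burn_rate_kcal_per_s = 2000 / (24 * 60 * 60)      # 2000 kcal/day
    extra_seconds = energy_kcal / burn_rate_kcal_per_s
    print(extra_seconds)        # ~1.1e-7 s, i.e. about 100 nanoseconds
    ```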

    comment by Nick_Tarleton · 2009-10-25T05:22:37.176Z · LW(p) · GW(p)

    That's not really the point. The "dust speck" just means the mildest possible harm that a person can suffer; if you don't think a dust speck with no long-term consequences can be harmful, you should mentally substitute a stubbed toe (with no long-term consequences) or the like.

    comment by DanielLC · 2009-12-27T03:33:52.116Z · LW(p) · GW(p)

    I doubt anybody's going to read a comment this far down, but what the heck.

    Perhaps going from nothing to a million dust specks isn't a million times as bad as going from nothing to one dust speck. One thing is certain though: going from nothing to a million dust specks is exactly as bad as going from nothing to one dust speck plus going from one dust speck to two dust specks etc.

    If going from nothing to one dust speck isn't a millionth as bad as nothing to a million dust specks, it has to be made up somewhere else, like going from 999,999 to a million dust specks being more than a millionth as bad.

    What if the 3^^^3 were also horribly tortured for fifty years? Would going from that to that plus a dust speck change everything? It's now the worst dust speck you're adding, right?

    comment by dissidia · 2010-02-14T20:50:43.508Z · LW(p) · GW(p)

    Ask yourself this to make the question easier. What would you prefer: getting 3^^^3 dust specks in your eye, or being hit with a spiked whip for 50 years?

    You must live long enough to feel the 3^^^3 specks in your eye, and each one lasts a fraction of a second. You can feel nothing else but that speck in your eye.

    So, it boils down to this question. Would you rather be whipped for 50 years, or get specks in your eye for over a googolplex of years?

    If I could possibly assign a measure of disutility to a speck of dust in the eye, and compare it to the negative utility of a year of depression, or of being whipped once, or of having arms broken, it seems impossible that the 50 years of torture could come out with a more negative value.

    Replies from: Alicorn
    comment by Alicorn · 2010-02-14T22:40:42.573Z · LW(p) · GW(p)

    I asked this here.

    comment by Metacognition · 2010-02-18T20:20:40.077Z · LW(p) · GW(p)

    In the real world the possibility of torture obviously hurts more people than just the person being tortured. By theorizing about the utility of torture you are actually subjecting possibly billions of people to periodic bouts of fear and pain.

    comment by MrHen · 2010-02-27T06:02:33.067Z · LW(p) · GW(p)

    Forgive me if this has been covered before. The internet here is flaking out and it makes it hard to search for answers.

    What is the correct answer to the following scenario: Is it preferable to have one person be tortured if it gives 3^^^3 people a miniscule amount of pleasure?

    The source of this question was me pondering the claim, "Pain is temporary; a good story lasts forever."

    Replies from: Blueberry, wedrifid
    comment by Blueberry · 2010-02-27T08:14:07.754Z · LW(p) · GW(p)

    Is it preferable to have one person be tortured if it gives 3^^^3 people a miniscule amount of pleasure?

    Great question, and if it has been covered before on this site, I haven't seen it. Philosophers have discussed whether or not "sadistic" pleasure from others' suffering should be included in utilitarian calculations, and in fact this is one of the classic arguments against (some types of) utilitarianism, along with the utility monster and the organ lottery.

    One possible answer is that utilitarians should maximize other terminal values besides just pleasure, and that sadistic pleasures like this go against the total of our terminal values, so utilitarians shouldn't allow these to cancel out torture.

    comment by wedrifid · 2010-02-27T08:57:38.947Z · LW(p) · GW(p)

    What is the correct answer to the following scenario: Is it preferable to have one person be tortured if it gives 3^^^3 people a miniscule amount of pleasure?

    Yes.

    comment by gimpf · 2010-05-15T17:04:31.667Z · LW(p) · GW(p)

    So, I'm very late into this game, and not through all the sequences (where the answer might already be given), but still, I am very interested in your positions (probably nobody answers, but who knows):

    1. Is there a natural number N for which you'd kill one person vs. giving N people a single dust speck? (I assume this depends on whether one expects an ever-lasting universe.)
    2. Do you "integrate" utility over time (or "experience-moments", as per timeless bla), or is it better to just maximize the "final" point, however one got there?
    3. Does breaking up the utility function into several categories really allow dutch-booking, as is indicated in one of the comments? (I hope you understand what I mean with the categories; you've a total strict-order for them, with no two identical, elements within categories "add up", but not even an infinite number of "bad" things in one category can add up to a single one in the next higher one)
    4. If "no" for 3, then: For a (current) human we only have neurons, and a real break-point can probably not be determined; but a re-engineered person could implement such a thing. Is it then preferable?

    I expect "yes" for 1, and I have to expect "yes" for 3 (I personally do not see this, but I'm bad at math, and have to trust the comments anyway). If "no" for 3, I still expect "no" for 4, per simplicity-argument, retold many times.

    I'm very curious for answer on question 2. Once Eliezer quoted "the end does not justify the means", but this sentence is so very much re-interpretable that it's worthless (even if he said otherwise). But as per updating: why should the order of when information is revealed change the final result? Whatever.

    When the answers of these questions are somewhere in the sequences, just ignore this, I will sooner or later get to them.

    Replies from: AlephNeil
    comment by AlephNeil · 2010-05-15T18:39:50.149Z · LW(p) · GW(p)

    Is there a natural number N for which you'd kill one person vs. giving N people a single dust speck? (I assume this depends on whether one expects an ever-lasting universe.)

    I don't think this question (or the one discussed in the OP) admits a meaningful answer. It seems a pity to just 'pour cold water over them', but I don't know what else to say - whatever 'moral truths' there are in the world simply don't reach as far as such absurd scenarios.

    Do you "integrate" utility over time (or "experience-moments", as per timeless bla), or is it better to just maximize the "final" point, however one got there?

    Depends what game you're playing, surely. If you're playing 'Invest For Retirement' and the utility function measures the size of your retirement fund, then naturally the 'final' point is what matters.

    On the other hand, if you're playing 'Enjoy Your Retirement' and the utility function measures how much money you have to spend on a monthly basis, then what's important is the "integrated" utility.

    Two points of interest here:

    (1) Final utility in 'Invest for retirement' equals integrated utility in 'Enjoy your retirement' (modulo some faffing around with discount rates).

    (2) The game of 'Enjoy your retirement' is notable insofar as it's a game with a guaranteed final utility of zero (or -infinity if you prefer).

    comment by lockeandkeynes · 2010-07-07T19:31:12.559Z · LW(p) · GW(p)

    I'd gladly get a speck of dust in my eye as many times as I can, and I'm sure those 3^^^3 people would join me, to keep one guy from being tortured for 50 years.

    Replies from: Vladimir_Nesov, RobinZ, Nick_Tarleton
    comment by Vladimir_Nesov · 2010-07-07T19:34:49.564Z · LW(p) · GW(p)

    Maybe you will indeed, but should you?

    comment by RobinZ · 2010-07-07T20:01:46.643Z · LW(p) · GW(p)

    Suppose some fraction of the 3^^^3 dropped out. How many dust specks would you be willing to take? Two? Ten? A thousand? A million? A billion? That's half a millimeter in diameter, now, and we're only at 10^9. How about 10^12? 10^15? 10^18? We're around half a meter in diameter now, approaching or exceeding the size of a football, and we've not even reached 3^^4 - and remember that 3^^^3 is 3^^3^^3 = 3^^7,625,597,484,987.

    What, you think that all of the 3^^^3 will go for it? All of them, chipping in to save one person who was getting 50 years of torture? In a universe with 3^^^3 people in it, how many people do you think are being tortured? Our planet has had around 10^11 human beings in history. If we say that only one of those 10^11 people were ever tortured for 50 years in history - or even that there were a one-in-a-thousand chance of it, one in 10^14 - how many people would be tortured for 50 years among the more than 3^^^3 we are positing? And do you think that all 3^^^3 will choose the same one you did?

    Would you consider that, perhaps, one dust speck is a bit much to pay to save one part in 3^^^3 of a victim?
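
    For what it's worth, the diameters quoted above follow from cube-root scaling of the combined volume; a minimal sketch, assuming a single speck about half a micrometre across (a number chosen to reproduce the comment's figures):

    ```python
    # Cube-root scaling behind the diameters quoted above: merging n specks into
    # one sphere multiplies the diameter by n^(1/3). The 0.5-micrometre base
    # diameter is an assumption chosen to reproduce the comment's figures.

    base_diameter_m = 0.5e-6

    def combined_diameter(n_specks):
        """Diameter of a single sphere with the combined volume of n specks."""
        return base_diameter_m * n_specks ** (1.0 / 3.0)

    for n in (1e9, 1e12, 1e15, 1e18):
        print(f"{n:.0e} specks -> {combined_diameter(n):.3g} m")
    # 1e9  specks -> ~0.0005 m (half a millimetre)
    # 1e18 specks -> ~0.5 m   (larger than a football)
    ```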

    Replies from: Vladimir_Nesov
    comment by Vladimir_Nesov · 2010-07-07T20:22:00.701Z · LW(p) · GW(p)

    Would you consider that, perhaps, one dust speck is a bit much to pay to save one part in 3^^^3 of a victim?

    When multiple agents coordinate, their decision delivers the whole outcome, not a part of it. Whatever you decide, everyone who reasons similarly will decide as well. Thus, you have absolute control over which outcome to bring about, even if you are only one of a gazillion like-minded voters.

    Here, you decide whether to save one person at the cost of harming 3^^^3 people. This is not equivalent to saving 1/3^^^3 of a person at the cost of harming one person, because saving 1/3^^^3 of a person is not something that could actually happen; it is at best a utilitarian simplification, which you must make explicit and not confuse for a decision-theoretic construction.

    Replies from: RobinZ
    comment by RobinZ · 2010-07-07T22:14:10.368Z · LW(p) · GW(p)

    If it were a one-shot deal with no cheaper alternative, I could see agreeing. But that still leaves the other 3^^^3/10^14 victims and this won't scale to deal with those.

    comment by Nick_Tarleton · 2010-07-08T00:04:39.043Z · LW(p) · GW(p)

    This seems to work nearly as well for any harm less than being tortured for 50 years — say, being tortured for 25 years.

    Replies from: cousin_it
    comment by cousin_it · 2010-07-08T00:07:12.262Z · LW(p) · GW(p)

    I wouldn't volunteer for 25 years of torture to save a random person from 50. A relative, maybe.

    comment by kaimialana · 2010-07-27T18:40:38.911Z · LW(p) · GW(p)

    "Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

    I think the answer is obvious. How about you?"

    Yes, Eliezer, the answer is obvious. The answer is that this is a false dilemma, and that I should go searching for the third alternative, with neither 3^^^3 dust specks nor 50 years of torture. These are not optimal alternatives.

    comment by bojangles · 2010-12-16T21:59:06.270Z · LW(p) · GW(p)

    Construct a thought experiment in which every single one of those 3^^^3 is asked whether he would accept a dust speck in the eye to save someone from being tortured, and take the answers as a vote. If the majority would deem it personally acceptable, then acceptable it is.

    Replies from: benelliott, HonoreDB, ata
    comment by benelliott · 2010-12-16T22:08:36.793Z · LW(p) · GW(p)

    This doesn't work at all. If you ask each of them to make that decision, you are asking them to compare their one dust speck with somebody else's one instance of torture. Comparing 1 dust speck with torture 3^^^3 times is not even remotely the same as comparing 3^^^3 dust specks with torture.

    If you ask me whether 1 is greater than 3 I will say no. If you ask me 5 times I will say no every time. But if you ask me whether 5 is greater than 3 I will say yes.

    The only way to make it fair would be to ask them to compare themselves and the other 3^^^3 - 1 getting dust specks with torture, but I don't see why asking them should get you a better answer than asking anyone else.

    Replies from: topynate
    comment by topynate · 2010-12-16T22:17:20.222Z · LW(p) · GW(p)

    Compare two scenarios: in the first, the vote is on whether every one of the 3^^^3 people is dust-specked or not. In the second, only those who vote in favour are dust-specked, and then only if there's a majority. But these are kind of the same scenario: what's at stake in the second scenario is at least half of 3^^^3 dust-specks, which is about the same as 3^^^3 dust-specks. So the question "would you vote in favour of 3^^^3 people, including yourself, being dust-specked?" is the same as "would you be willing to pay one dust-speck in your eye to save a person from 50 years of torture, conditional on about 3^^^3 other people also being willing?"

    Replies from: benelliott
    comment by benelliott · 2010-12-16T23:08:34.924Z · LW(p) · GW(p)

    Let me try and get this straight: you are presenting me with a number of moral dilemmas and asking me what I would do in them.

    1) Me and 3^^^^3 - 1 other people all vote on whether we get dust specks in the eye or some other person gets tortured.

    I vote for torture. It is astonishingly unlikely that my vote will decide, but if it doesn't then it doesn't matter what I vote, so the decision is just the same as if it was all up to me.

    2) Me and 3^^^^3 - 1 other people all vote on whether everyone who voted for this option gets a dust speck in the eye or some other person gets tortured.

    This is a different dilemma, since I have to weigh up three things instead of two: the chance that my vote will save about 3^^^^3 people from being dust-specked if I vote for torture, the chance that my vote will save one person from being tortured if I vote for dust specks, and the (much higher) chance that my vote will save me and only me from being dust-specked if I vote for torture.

    I remember reading somewhere that the chance of my vote being decisive in such a situation is roughly inversely proportional to the square root of the number of voters (please correct me if this is wrong). Assuming this is the case, I still vote for torture, since the term for saving everyone else from dust specks still dwarfs the other two.

    3) I have to choose whether I will receive a dust speck or whether someone else will be tortured, but my decision doesn't matter unless at least half of 3^^^^3 - 1 other people would be willing to choose the dust speck.

    Once again the dilemma has changed: this time I have lost my ability to save other people from dust specks, and the probability of my successfully saving someone from torture has massively increased. I can safely ignore the case where the majority of others choose torture, since my decision doesn't matter then. Given that the others choose dust specks, I am not so selfish as to save myself from a dust speck rather than someone else from torture.

    You try to make it look like scenarios 2 and 3 are the same, but they are actually very, very different.

    The bottom line is that no amount of clever wrangling you do with votes or conditionals can turn 3^^^^3 people into one person. If it could, I would be very worried, since it would imply that the number of people you harm doesn't matter, only the amount of harm you do. In other words, if I'm offered the choice between one person dying and ten people dying, then it doesn't matter which I pick.

    Replies from: topynate
    comment by topynate · 2010-12-16T23:49:12.357Z · LW(p) · GW(p)

    Assuming a roughly 50-50 split the inverse square-root rule is right. Now my issue is why you incorporate that factor in scenario 2, but not scenario 3. I honestly thought I was just rephrasing the problem, but you seem to see it differently? I should clarify that this isn't you unconditionally receiving a speck if you're willing to, but only if half the remainder are also so willing.

    The point of voting, for me, is not an attempt to induce scope insensitivity by personalizing the decision, but to incorporate the preferences of the vast majority (3^^^^3 out of 3^^^^3 + 1) of participants about the situation they find themselves in, into your calculation of what to do. The Torture vs. Specks problem in its standard form asks for you to decide on behalf of 3^^^^3 people what should happen to them; voting is a procedure by which they can decide.

    [Edit: On second thought, I retract my assertion that scenario 1) and 2) have roughly the same stakes. That in scenario 1) huge numbers of people who prefer not to be dust-specked can get dust-specked, and in scenario 2) no one who prefers not to be dust-specked is dust-specked, makes much more of a difference than a simple doubling of the number of specks.]

    By the way, the problem as stated involves 3^^^3, not 3^^^^3, people, but this can't possibly matter so nevermind.
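
    For reference, a quick numerical check of the inverse square-root rule discussed above (Python; it assumes each of the other voters votes 50-50 at random, which is the case where the rule applies):

    ```python
    import math

    # The "inverse square-root" rule: if the other N voters each vote at random
    # (the 50-50 case assumed above), the chance that your vote is decisive is
    # the chance they tie, which is roughly sqrt(2 / (pi * N)).

    def p_decisive(n_other_voters):
        """Exact tie probability for an even number of fair-coin voters."""
        k = n_other_voters // 2
        return math.comb(n_other_voters, k) / 2 ** n_other_voters

    for n in (10, 100, 10_000, 100_000):
        exact = p_decisive(n)
        approx = math.sqrt(2 / (math.pi * n))
        print(n, exact, approx)
    # The two columns agree increasingly well as N grows, and both shrink only
    # like 1/sqrt(N) -- nowhere near fast enough to matter against 3^^^^3.
    ```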

    Replies from: benelliott
    comment by benelliott · 2010-12-17T07:14:49.479Z · LW(p) · GW(p)

    There are actually two differences between 2 and 3. The first is that in 2 my chance of affecting the torture is negligible, whereas in 3 it is quite high. The second difference is that in 2 I have the power to save huge numbers of others from dust specks, and it is this difference which is important to me, since when I have that power it dwarfs the other factors so much as to be the only deciding factor in my decision. In your 'rephrasing' of it you conveniently ignore the fact that I can still do this, so I assumed I no longer could, which made the two scenarios very different.

    I think also, as a general principle, any argument of the type you are formulating which does not pay attention to the specific utilities of torture and dust-specks, instead just playing around with who makes the decision, can also be used to justify killing 3^^^^3 people to save one person from being killed in a slightly more painful manner.

    comment by HonoreDB · 2010-12-16T22:13:40.647Z · LW(p) · GW(p)

    The point of Torture vs. Dust Specks is that our moral intuition dramatically conflicts with strict utilitarianism.

    Your thought experiment helps express your moral intuition, but it doesn't do anything to resolve the conflict.

    Although, come to think of it, I think there's an argument to be made that the majority would answer no. If we interpret 3^^^3 people to mean qualitatively distinct individuals, there's not enough room in humanspace for all of those people to be human--the vast majority will be nonhumans. It can be argued, at least, that if you pick a random nonhuman individual, that individual will not be altruistic towards humans.

    comment by ata · 2010-12-16T22:52:56.812Z · LW(p) · GW(p)

    How about each of those 3^^^3 is asked whether they would accept a dust speck in the eye to save someone from 1/3^^^3 of 50 years of torture, and everyone's choice is granted? (i.e. the ones who say they'd accept a dust speck get a dust speck, and the person is tortured for an amount of time proportional to the number of people who refused.)

    I'm not quite sure what I'd expect to have happen in that case. That's harder than the moral question because we have to imagine a world that actually contains 3^^^3 different (i.e. not perfectly decision-theoretically correlated) people, and any kind of projection about that kind of world would pretty much be making stuff up. But as for the moral question of what a person in this situation should say, I'd say the reasoning is about the same — getting a dust speck in your eye is worse than 50/3^^^3 years of torture, so refuse the speck.

    (That's actually an interesting way of looking at it, because we could also put it in terms of each person choosing whether they get specked or they themselves get tortured for 50/3^^^3 years, in which case the choice is really obvious — but if you're still working with 3^^^3 people, and they all go with the infinitesimal moment of torture, that still adds up to a total of 50 years of torture.)

    Edit: Actually, for that last scenario, forget 50/3^^^3 years, that's way less than a Planck interval. So let's instead multiply it by enough for it to be noticeable to a human mind, and reduce the intensity of the torture by the same factor.

    comment by Reivalyn · 2010-12-16T23:02:26.026Z · LW(p) · GW(p)

    Interesting question. I think a similar real-world situation is when people cut in line.

    Suppose there is a line of 100 people, and the line is moving at a rate of 1 person per minute.

    Is it ok for a new person to cut to the front of the line, because it only costs each person 1 extra minute, or should the new person stand at the back of the line and endure a full 100 minute wait?

    Of course, not everyone in line endures the same wait duration; a person near the front will have a significantly shorter wait than a person near the back. To address that issue one could average the wait times of everyone in line and say that there is an average wait time of 49.5 minutes per person in line [Avg(n) = (n-1)/2, so Avg(100) = 49.5].

    Is it ok for a second person to also cut to the front of the line? How many people should be allowed to cut to the front, and which people of those who could possibly cut to the front should be allowed to do so?
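
    A small sketch of the line-cutting arithmetic (Python; it assumes cutting in delays each person behind by exactly one service slot):

    ```python
    # Line-cutting arithmetic: one person cutting in imposes one extra minute on
    # each of the 100 people behind them, which totals exactly the 100 minutes
    # they would otherwise have waited themselves.

    line_length = 100    # people already in line
    rate = 1             # minutes per person served

    cost_of_cutting = line_length * rate    # 100 person-minutes, spread over everyone
    cost_of_waiting = line_length * rate    # 100 minutes, borne by the newcomer alone

    average_wait = sum(range(line_length)) / line_length    # (0 + 1 + ... + 99) / 100
    print(cost_of_cutting, cost_of_waiting, average_wait)   # 100 100 49.5
    ```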

    Replies from: Vaniver
    comment by Vaniver · 2010-12-16T23:04:58.514Z · LW(p) · GW(p)

    Is it ok for a new person to cut to the front of the line, because it only costs each person 1 extra minute, or should the new person stand at the back of the line and endure a full 100 minute wait?

    This is one of the reasons why utilitarianism makes me cringe. "We can do first-order calculations and come up with a good answer! What could go wrong?"

    comment by [deleted] · 2011-02-05T06:12:55.034Z · LW(p) · GW(p)

    I would prefer the dust motes, and strongly. Pain trumps inconvenience.

    And yet...we accept automobiles, which kill tens of thousands of people per year, to avoid inconvenience. (That is, automobiles in the hands of regular people, not just trained professionals like ambulance drivers.) But it's hard to calculate the benefits of having a vehicle.

    Reducing the national speed limit to 30mph would probably save thousands of lives. I would find it unconscionable to keep the speed limit high if everyone were immortal. At present, such a measure would trade lives for parts of lives, and it's a matter of math to say which is better...though we could easily rearrange our lives to obviate most travel.

    Replies from: wedrifid
    comment by wedrifid · 2011-02-05T06:34:12.823Z · LW(p) · GW(p)

    Reducing the national speed limit to 30mph would probably save thousands of lives. I would find it unconscionable to keep the speed limit high if everyone were immortal.

    I had to read that twice before I realised you meant "immortal like an elf" rather than "immortal like Jack Harkness and Connor MacLeod".

    comment by rstarkov · 2011-02-21T20:34:15.440Z · LW(p) · GW(p)

    Idea 1: dust specks, because on a linear scale (which seems to be always assumed in discussions of utility here) I think 50 years of torture is more than 3^^^3 times worse than a dust speck in one's eye.

    Idea 2: dust specks, because most people arbitrarily place bad things into incomparable categories. The death of your loved one is deemed to be infinitely worse than being stuck in an airport for an hour. It is incomparable; any number of 1-hour waits is less bad than a single loved one dying.

    Replies from: ata
    comment by ata · 2011-02-21T22:02:13.406Z · LW(p) · GW(p)

    Idea 1: dust specks, because on a linear scale (which seems to be always assumed in discussions of utility here) I think 50 years of torture is more than 3^^^3 times worse than a dust speck in one's eye.

    How much would you have to decrease the amount of torture, or increase the number of dust specks, before the dust specks would be worse?

    Replies from: rstarkov
    comment by rstarkov · 2011-02-21T23:30:04.370Z · LW(p) · GW(p)

    I don't know. I don't suppose you claim to know at which point the number of dust specks is small enough that they are preferable to 50 years of torture?

    (which is why I think that Idea 2 is a better way to reason about this)

    comment by AlephNeil · 2011-03-23T23:44:47.323Z · LW(p) · GW(p)

    I think it might be interesting to reflect on the possibility that among the 3^^^3 dust speck victims there might be a smaller-but-still-vast number of people being subjected to varying lengths of "constantly-having-dust-thrown-in-their-eyes torture". Throwing one more dust speck at each of them is, up to permuting the victims, like giving a smaller-but-still-vast number of people 50 years of dust speck torture instead of leaving them alone.

    (Don't know if anyone else has already made this point - I haven't read all the comments.)

    comment by TimFreeman · 2011-04-13T02:53:16.241Z · LW(p) · GW(p)

    These ethical questions become relevant if we're implementing a Friendly AI, and they are only of academic interest if I interpret them literally as a question about me.

    If it's a question about me, I'd probably go with the dust specks. A small fraction of those people will have time to get to me, and none of them are likely to bother me over just a dust speck. If I were to advocate the torture, the victim or someone who knows him might find me and try to get revenge. I just gave you a data point about the psychology of one unmodified human, which is relatively useless, so I don't think that's the question you really wanted answered.

    Perhaps the question is really what a non-buggy omnipotent Friendly AI would do. If it has been constructed to care equally about that absurd number of people, IMO it should choose torture. If it's not omnipotent, then it has to consider revenge of the victim, so the correct answer depends on the details of how omnipotent it isn't.

    comment by [deleted] · 2011-09-08T19:13:16.880Z · LW(p) · GW(p)

    I wonder if some people's aversion to "just answering the question", as Eliezer notes in the comments many times, has to do with the perceived cost of signalling agreement with the premises.

    It's straightforward to me that answering should take the question at face value; it's a thought experiment, you're not being asked to commit to a course of action. And going by the question as asked the answer for any utilitarian is "torture", since even a very small increment of suffering multiplied by a large enough number of people (or an infinite number) will outweigh a great amount of suffering by one person.

    Signalling that would be highly problematic for some people because of what might be read into our answer -- does Eliezer expect that signalling assent here means signalling assent to other, as-yet-unknown conclusions he's made about (whatever issue where that bears some resemblance)? Does Eliezer intend to codify the terms of this premise into the basis for a decision theory underlying the cognitive architecture of a putative Friendly AI? Does Eliezer think that the real world, in short, maps to his gedankenexperiment sufficiently well that the terms of this scenario can meaningfully stand in for decisions made in that domain by real actors (human or otherwise)?

    For my own part I'd be very, very hesitant to signal any of that. Hence I find it difficult to answer the question as asked. It's analogous to my discomfort with the Ticking Time Bomb scenario -- by a straight reading of the premise you should trade a finite chance of finding and disabling the bomb, thereby saving a million lives, for the act of torturing the person who planted it. The logic is internally-consistent, but it doesn't map to any real-world situation I can plausibly imagine (where torture is not terribly effective in soliciting confessions, and the scenario of a "ticking time bomb with a single suspect unwilling to talk mere minutes beforehand" has AFAIK never happened as presented, and would be extremely difficult to set up).

    I recognize the internal consistency, yet I'm troubled by my uncertainty about what the author thinks I'm signing up for when I reply.

    comment by A1987dM (army1987) · 2011-09-14T19:03:11.020Z · LW(p) · GW(p)

    I choose the specks. My utility function u(what happens to person 1, what happens to person 2, ..., what happens to person N) doesn't equal f_1(what happens to person 1) + f_2(what happens to person 2) + ... + f_N(what happens to person N) for any choice of f_1, ..., f_N, not even allowing them to be different; in particular, u(each of n people gets one speck in their eye) doesn't grow without bound as n approaches infinity -- it approaches a finite limit, and that limit is less negative than u(one person gets tortured for 50 years).
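
    One disutility function with the shape described above, as a minimal sketch (the functional form and the constants are illustrative assumptions, not the commenter's actual numbers):

    ```python
    import math

    # One disutility function with the shape described above: the total
    # disutility of n one-speck harms is bounded, so no number of specks ever
    # reaches the disutility of the torture. The functional form and constants
    # are illustrative assumptions, not the commenter's actual numbers.

    SPECK_LIMIT = -1.0          # asymptotic disutility of arbitrarily many specks
    TORTURE_UTILITY = -1000.0   # assumed disutility of one person tortured 50 years
    K = 1e6                     # how quickly the speck disutility saturates

    def u_specks(n):
        """Approaches SPECK_LIMIT, and never passes it, as n grows without bound."""
        return SPECK_LIMIT * (1 - math.exp(-n / K))

    for n in (1, 10**6, 10**12, 10**100):
        print(n, u_specks(n))
    print(u_specks(10**100) > TORTURE_UTILITY)   # True: specks never outweigh torture
    ```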

    comment by kilobug · 2011-10-02T09:33:43.451Z · LW(p) · GW(p)

    I spent quite a while thinking about this one, and here is my "answer".

    My first line of questioning is "can we just multiply and compare the sufferings?" Well, no. Our utility functions are complicated. We don't even fully know them. We don't know exactly which are terminal values and which are intermediate values. But it's not just "maximize total happiness" (with suffering being negative happiness). My utility function also values things like fairness (it may be because I'm a primate, but still, I value it). The "happiness" part of my utility function will be higher for torture, the "fairness" part of it lower. Since I don't know the exact coefficients of those two parts, I can't really "shut up and multiply".

    But... well... 3^^^3 is well... really a lot. I can't get out this way, even adding correcting terms, even if it's not totally linear, even taking into account fairness, well, 3^^^3 is still going to trump over 1.

    So for any realistic computation I would make of my utility function, it seems that "torture" will score higher than "dust speck". So should I choose torture? Well, I'm not sure yet, for I have ethical rules. What's an ethical rule? It's an internal law (somehow, a cached thought, from my own computation or coming from outside) that says "don't ever do that". It includes "do not torture!"; it includes "nothing can ever justify torturing someone for 50 years". What are those rules for? They are there to protect me from making mistakes, because I can't trust myself fully. I have biases. I don't have full knowledge. I have a limited amount of time to make decisions, and I only run at 100Hz. So I need safeguards. I need rules that I'll follow even when my computation tells me I shouldn't. Those rules can be overridden, but they need to be overridden by something with almost absolute certitude, and of the same (or higher) level. No amount of dust specks can trigger an override of the "no torture" rule. I know my history well enough to know that when you allow yourself to torture, because you're "sure" that if you don't something worse will happen, you end up becoming the worse thing yourself. I have high ideals and the will to change the world for the better - therefore I need rules to prevent me from becoming Stalin or the Holy Inquisition. And that's typically the case here. 3^^^3 people will receive a dust speck? Well, too bad. Sure, that outcome will be less optimal by my utility function than allowing just one person to be tortured. But I don't trust myself to sentence that person to be tortured. So I'll choose dust specks, for me and everyone.

    If you allow me to argue from fictional evidence, this reminds me of the end of Asimov's Robot cycle (Robots and Empire, mostly). Warning: spoilers coming. If you haven't read it, go read it, and skip the rest of this paragraph ;) So... when the two robots, Daneel and Giskard, realize the limitations of the First Law: « A robot may not injure a human being or, through inaction, allow a human being to come to harm. », and try to craft the Zeroth Law: « A robot may not harm humanity, or, by inaction, allow humanity to come to harm. », they end up facing a very difficult problem - one they'll need Psychohistory to solve, and even then only partially. It's relatively easy to know that a human being is in danger, or suffering, and how to help them. It's much, much harder to know that humanity is in danger and how to help it. That's a deep reason behind ethical rules: torturing someone is just plain wrong. I may think it's good in this situation, because it'll prevent a terror attack, or help win the war against that horrible Enemy, or deter crime, or save 3^^^3 people from a dust speck. But I just don't trust myself enough to go as far as torturing someone because I computed it would do good overall.

    And the last important point on the issue is social rules. There is, in twenty-first-century Western societies at least, a strong taboo on torture. That taboo is a shield. It means that when a president of the USA uses torture, he loses the election (of course, it's much more complicated, but I think it did play a role). It makes using torture a very, very costly strategy. We have the same with political violence. When the cops attacked an anti-war protest at the Charonne metro station on Feb 8, 1962, killing 9 demonstrators including a 16-year-old boy, that was the end of the Algerian War. Of course, it wasn't just that. De Gaulle was already trying to stop the war; it was lost. But the uproar (nearly half a million people attended the victims' funeral) was so great that the political cost of still supporting the war became much bigger, and the end of the war was hastened.

    I won't take the responsibility of weakening those taboos (against torture, against political violence, ...) by breaking them myself. The consequences for society, for further people using more torture later on, are too scary.

    So, to conclude, I'll choose dust specks. Not because my utility function scores higher on dust specks, but because I can't trust myself enough to wield something as horrible as torture (I have ethical rules, and I'll follow them even when my computations tell me to do otherwise, for they're the only safeguard I know against becoming Stalin), and because I value the societal taboo against torture far too much to take the responsibility of lowering it.

    Now... I have a feeling of discontent at reaching that conclusion, because it coincides with my initial gut-level reaction to the post. It somehow feels like I wrote the bottom line first and then the rationalization. But... I did my best; I did overcome the first "excuse" (non-linearity and valuing fairness) my mind gave me. And I don't find flaws in the other two. And well, reversed stupidity is not intelligence: reaching the same conclusion I had intuitively doesn't always mean it's a wrong conclusion.

    comment by [deleted] · 2011-10-07T20:59:24.955Z · LW(p) · GW(p)

    Let me attempt to shut up and multiply.

    Let's make the assumption that a single second of torture is equivalent to 1 billion dust specks to the eye. Since that many dust specks is enough to sandblast your eye, it seems a reasonable approximation.

    This means that 50 years of this torture is equivalent to giving one single person (50 × 365.25 × 24 × 60 × 60 × 1,000,000,000) dust specks to the eye.

    According to Google's calculator,

    (50 × 365.25 × 24 × 60 × 60 × 1,000,000,000) / (3^39) = 0.389354356
    (50 × 365.25 × 24 × 60 × 60 × 1,000,000,000) / (3^38) = 1.16806307

    Ergo, if someone offers you a choice between 50 years of Torture, or 3^^3 (= 3^27) people getting Specks, pick Specks.

    But if someone offers you a choice between 50 years of Torture, or 3^50 people getting Specks, pick Torture.

    This appears to be a fair attempt to shut up and multiply.
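    A minimal Python sketch of that arithmetic (my own check, using the billion-specks-per-second exchange rate assumed above):

        import math

        # Re-running the comment's arithmetic under its assumed exchange rate.
        seconds_of_torture = 50 * 365.25 * 24 * 60 * 60   # fifty years, in seconds
        specks_per_second = 1_000_000_000                  # assumed conversion rate

        torture_in_specks = seconds_of_torture * specks_per_second
        print(f"{torture_in_specks:.3e}")                  # ~1.578e+18 specks

        print(torture_in_specks / 3**39)                   # ~0.389 (matches above)
        print(torture_in_specks / 3**38)                   # ~1.168 (matches above)

        # The crossover sits between 3^38 and 3^39 people:
        print(math.log(torture_in_specks, 3))              # ~38.1
        # So 3^^3 = 3^27 people getting specks is less bad than the torture,
        # while 3^50 people (let alone 3^^^3) getting specks is worse.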

    However, 3^^^3 is incomprehensibly bigger than any of that.

    You could turn every atom in the observable universe into a speck of dust. At Wikipedia's almost 10^80 atoms, that is still not enough dust. http://en.wikipedia.org/wiki/Observable_universe

    You could turn every cubic Planck length in the observable universe into a speck of dust. At Answerbag's 2.5 × 10^184 cubic Planck lengths, that's still not enough dust. http://www.answerbag.com/q_view/33135

    At this point, I thought maybe another universe made of 10^80 computronium atoms is running universes like ours as simulations on individual atoms. That means 10^80 × 2.5 × 10^184 cubic Planck lengths of dust. But that's still not enough dust. Again: 2.5 × 10^264 specks of dust is still WAY less than 3^^^3.

    At this point, I considered checking whether I could get enough dust specks if I literally converted everything in all Everett branches since the Big Bang into dust, but my math abilities fail me. I'll try coming back to this later.
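    For what it's worth, the same dead end can be checked with nothing but logarithms. 3^^^3 cannot be written out, so the sketch below uses 3^^4 = 3^(3^27), which is itself incomparably smaller than 3^^^3, as a yardstick; the 10^80 and 2.5 × 10^184 figures are the ones quoted above:

        import math

        # How many digits do these candidate speck counts have?
        atoms_in_universe = 1e80
        planck_volumes = 2.5e184
        simulated_universes = atoms_in_universe * planck_volumes   # ~2.5e264
        print(math.log10(simulated_universes))                     # ~264 digits

        # Compare with 3^^4 = 3^(3^27), the rung just above 3^^3 on the tower.
        digits_of_3_up_up_4 = 3**27 * math.log10(3)
        print(f"{digits_of_3_up_up_4:.3e}")                        # ~3.6e12 digits

        # 2.5e264 has 265 digits; 3^^4 already has trillions of digits; and
        # 3^^^3 is a tower of 7,625,597,484,987 threes.  No physical count of
        # dust gets anywhere near it.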

    Edit: My multiplication symbols were getting turned into Italics. Should be fixed now.

    comment by [deleted] · 2012-02-01T09:59:13.741Z · LW(p) · GW(p)

    I tentatively like to measure human experience with logarithms and exponentials. Our hearing is logarithmic loudness-wise, hence the unit dB. Human experiences are rarely linear, thus it is almost never true that f(x*a) = f(x)*a.

    In the above hypothetical, we can imagine the dust specks and the torture. If we propose that NO dust speck ever does anything other than cause mild annoyance - never one enters the eye of a driver who blinks at an inopportune time and crashes - then I would propose we can say: awfulness(pain) = k^pain.

    A dust speck causes approximately Dust = epsilon Dols (a unit of pain; think the opposite of hedons), while intense, effective torture causes possibly several kiloDols per second. Now it is simply a matter of saying Torture = W kDol/s * 50 years, for some reasonable W; lastly, compare k^Dust * 3^^^3 <=> k^Torture.
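    A rough sketch of where that comparison lands, with k, epsilon and W picked as illustrative placeholders (none of these numbers come from the comment itself):

        import math

        # Compare k^Dust * N  <=>  k^Torture by computing the breakeven N.
        k = 2.0                                        # each extra dol doubles awfulness (assumption)
        dust = 0.001                                   # dols per dust speck (assumption)
        W = 1.0                                        # kDol per second of torture (assumption)
        torture = W * 1000 * 50 * 365.25 * 24 * 3600   # total dols over fifty years

        # Specks are worse than torture exactly when N > k^(torture - dust).
        breakeven_digits = (torture - dust) * math.log10(k)
        print(f"{breakeven_digits:.3e}")               # ~4.7e11 digits in the breakeven N

        # A breakeven with half a trillion digits is enormous, but 3^^4 alone
        # has ~3.6e12 digits and 3^^^3 towers far beyond that -- so even this
        # strongly nonlinear model ends up ranking 3^^^3 specks as worse,
        # at least for these made-up parameters.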

    comment by gRR · 2012-02-08T17:32:59.636Z · LW(p) · GW(p)

    My utility function has non-zero terms for preferences of other people. If I asked each one of the 3^^^3 people whether they would prefer a dust speck if it would save someone a horrible fifty-year torture, they (my simulation of them) would say YES in 20*3^^^3-feet letters.

    Replies from: army1987, TheOtherDave
    comment by A1987dM (army1987) · 2012-02-08T17:41:19.646Z · LW(p) · GW(p)

    Conversely, if you asked somebody if they'd be willing to be tortured for 50 years in order to save 3^^^3 people from getting each a dust speck in the eye, they'd likely say NO FREAKIN' WAY!!!.

    BTW, welcome to Less Wrong -- you can introduce yourself in the welcome thread.

    comment by TheOtherDave · 2012-02-09T00:34:01.256Z · LW(p) · GW(p)

    If I asked each of a million people if they would give up a dollar's worth of value if it would give an arbitrarily selected person ten thousand dollars' worth, and they each said yes, would it follow that destroying a million dollars' worth of value in exchange for ten thousand dollars' worth was a good idea?

    If, additionally, my utility function had non-zero terms for the preferences of other people, would it follow then?

    Replies from: Maelin, gRR
    comment by Maelin · 2012-02-09T01:08:59.159Z · LW(p) · GW(p)

    I feel like this is misinterpreting gRR's comment. gRR is not claiming that nonutilitarian choices are preferable because the utility function has non-zero terms for preferences of other people. It is a necessary condition, but not a sufficient one.

    My model of other people says that a significantly smaller percentage of people would accept losing a dollar in order to grant one person ten grand, than would accept a dust speck in order to save one person 50 years of torture.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2012-02-09T03:24:09.927Z · LW(p) · GW(p)

    My model of other people says that a significantly smaller percentage of people would accept losing a dollar in order to grant one person ten grand, than would accept a dust speck in order to save one person 50 years of torture.

    As does mine.

    gRR is not claiming that nonutilitarian choices are preferable because the utility function has non-zero terms for preferences of other people. It is a necessary condition, but not a sufficient one.

    That's consistent with my understanding of their claim as well.

    I feel like this is misinterpreting gRR's comment.

    Can you expand further on why you feel like this?

    Replies from: Maelin
    comment by Maelin · 2012-02-09T05:15:23.807Z · LW(p) · GW(p)

    Sure, although updating upon reading your response, I now suspect that I have misinterpreted your comment. But I'll explain how I saw things when I first commented.

    Basically it looked like you were perceiving gRR's argument as a specific instance of the following general argument:

    (1a) lots of people might agree to take a small decrease in utility in order to provide lots of utility / avoid lots of disutility for an individual even if the total decrease in utility over all the people is substantially larger than the individual utility granted / disutility averted

    (2a) whenever lots of people would agree to that, it is a good idea to do it

    (3) therefore it is a good idea to take small amounts of utility from many people to give lots of utility / prevent lots of disutility to one person provided all/an overwhelming majority of the people agree to it

    You were then trying to reveal the fault in gRR's general argument by presenting a different example ($1m -> $10k) and asking if the same argument would still hold there (which you presume it wouldn't). Then you suggested throwing in another premise, (1b) I have nonzero terms for others' preferences, presumably replacing (2a) by (2b), which adds the requirement of (1b), and asking if that would make the argument hold.

    But gRR was not asserting that general argument - in particular, not premise (2a)/(2b). So it seemed like you seemed to be trying to tear down an argument that gRR was not constructing.

    comment by gRR · 2012-02-09T02:31:07.858Z · LW(p) · GW(p)

    It wouldn't follow that it is a good idea, or an efficient idea. But it would follow that it is the preferred idea, as calculated by my utility function, which has non-zero terms for the preferences of other people.

    Fortunately, my simulation of other people doesn't suddenly wish to help an arbitrary person by donating a dollar with 99% transaction cost.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2012-02-09T03:29:14.469Z · LW(p) · GW(p)

    Hm. As with Maelin's comment above, I seem to agree with every part of this comment, but I don't understand where it's going. Perhaps I missed your original point altogether.

    Replies from: gRR
    comment by gRR · 2012-02-09T04:00:22.224Z · LW(p) · GW(p)

    My point was that the "SPECKS!!" answer to the original problem, which is intuitively obvious to (I think) most people here, is not necessarily wrong. It can directly follow from expected utility maximization, if the utility function values the choice of people, even if the choice is "economically" suboptimal.

    Replies from: TimS, TheOtherDave
    comment by TimS · 2012-02-09T04:17:29.662Z · LW(p) · GW(p)

    A substantial part of talking about utility functions is to assert we are trying to maximize something about utility (total, average, or whatnot). It seems very strange to say that we can maximize utility by being inefficient in our conversion of other resources into utility. If your goal is to avoid certain "efficient" conversions for other reasons, then it doesn't make a lot of sense to say that you are really trying to implement a utility function.

    In other words, Walzer's Spheres of Justice concept, which states that some trade-offs are morally impermissible, is not really implementable in a utility function. To the extent that he (or I) might be modeled by a utility function, there are inevitably going to be errors in what the function predicts I would want or very strange discontinuities in the function.

    Replies from: gRR
    comment by gRR · 2012-02-09T05:23:34.228Z · LW(p) · GW(p)

    But I am trying to maximize the total utility, just a different one.

    Ok, let me put it this way. I will drop the terms for other people's preferences from my utility function. It is now entirely self-centered. But it still values the good feeling I get if I'm allowed to participate in saving someone from fifty years of torture. The value of this feeling is much more than the minuscule negative utility of a dust speck. Now, assume some reasonable percent of the 3^^^3 people are like me in this respect. Maximizing the total utility for everybody results in: SPECKS!!

    Now an objection can be stated: by the conditions of the problem, I cannot change the utilities of the 3^^^3 people. They are given and equal to a minuscule negative value corresponding to the small speck of dust. Evil forces give me the sadistic choice and don't allow me to share the good news with everyone. Ok. But I can still imagine what the people would have preferred if given a choice. So I add a term for their preference to my utility function. I'm behaving like a representative of the people in a government. Or like a Friendly AI trying to implement their CEV.

    In other words, Walzer's Spheres of Justice concept, which states that some trade-offs are morally impermissible, is not really implementable in a utility function.

    My arguments have nothing to do with Walzer's Spheres of Justice concept, AFAICT.

    Replies from: TimS
    comment by TimS · 2012-02-09T05:49:18.174Z · LW(p) · GW(p)

    Now, assume some reasonable percent of the 3^^^3 people are like me in this respect. Maximizing the total utility for everybody results in: SPECKS!!

    The point of picking a number the size of 3^^^3 is that it is so large that this statement is false. Even if 99% are like you, I can keep adding ^ and falsify the statement. If utility is additive at all, torture is the better choice.

    My reference to Walzer was simply to note that many interesting moral theories exist that do not accept that utility is additive. I don't accept that utility is additive.

    Replies from: gRR
    comment by gRR · 2012-02-09T06:16:54.495Z · LW(p) · GW(p)

    Now, assume some reasonable percent of the 3^^^3 people are like me in this respect. Maximizing the total utility for everybody results in: SPECKS!!

    The point of picking a number the size of 3^^^3 is that it is so large that this statement is false.

    Why would it ever be false, no matter how large the number?

    Let S = negated disutility of speck, a small positive number. Let F = utility of good feeling of protecting someone from torture. Let P = the fraction of people who are like me (for whom F is positive), 0 < P <= 1. Then the total utility for N people, no matter what N, is N(PF - S), which is >0 as long as P*F > S.
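    A tiny numerical illustration of that claim (all values below are made-up placeholders, not numbers from the thread): the sign of the total never depends on N.

        # Total utility for N people is N * (P*F - S); only the sign of P*F - S matters.
        S = 1e-9    # magnitude of the disutility of one speck (assumption)
        F = 1e-3    # utility of the good feeling of helping prevent the torture (assumption)
        P = 0.01    # fraction of people who share that feeling (assumption)

        for N in (10**6, 10**12, 10**100):
            total = N * (P * F - S)
            print(N, total > 0)     # True whenever P*F > S, no matter how large N gets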

    I don't accept that utility is additive.

    Well, we can agree that utility is complicated. I think it's possible to keep it additive by shifting complexities to the details of its calculation.

    Replies from: TimS
    comment by TimS · 2012-02-09T06:39:32.864Z · LW(p) · GW(p)

    F = utility of good feeling of protecting someone from torture.

    This knowledge among the participants is adding to the thought experiment. The original question:

    Which is worse: (a) 3^^^3 dust specks, or (b) one person tortured.

    You are asking:

    Which is worse: (a) 3^^^3 dust specks, or (b) one person tortured AND 3^^^3 people empathizing with the suffering of that person

    Notice how your formulation has 3^^^3 in both options, while the original question does not.

    Replies from: gRR
    comment by gRR · 2012-02-09T06:59:48.148Z · LW(p) · GW(p)

    Yes, I stated and answered this exact objection two comments ago.

    Replies from: TimS
    comment by TimS · 2012-02-09T07:19:26.745Z · LW(p) · GW(p)

    I have come to believe that - like a metaphorical Groundhog Day - every conversation on this topic is the same lines from the same play, with different actors.

    This is the part of the play where I repeat more forcefully that you are fighting the hypo, but don't seem to be realizing that you are fighting the hypo.

    In the end, the lesson of the problem is not about the badness of torture or what things count as positive utility, but about learning what commitments you make with various assertions about the way moral decisions should be made.

    Replies from: fubarobfusco, fezziwig
    comment by fubarobfusco · 2012-02-09T16:23:58.830Z · LW(p) · GW(p)

    It sounds to me as if you're asserting that the ignorance of the 3^^^3 people to the fact that their specklessness depends on torture, makes a positive moral difference in the matter.

    Replies from: wedrifid
    comment by wedrifid · 2012-02-09T16:31:57.752Z · LW(p) · GW(p)

    It sounds to me as if you're asserting that the ignorance of the 3^^^3 people to the fact that their specklessness depends on torture, makes a positive moral difference in the matter.

    That doesn't seem unreasonable. That knowledge is probably worse than the speck.

    Replies from: fubarobfusco
    comment by fubarobfusco · 2012-02-09T16:59:44.615Z · LW(p) · GW(p)

    Sure, it does have the odd implication that discovering or publicizing unpleasant truths can be morally wrong, though.

    Replies from: TimS
    comment by TimS · 2012-02-09T21:06:55.490Z · LW(p) · GW(p)

    That's a really good point. Does the "repugnant conclusion" problem for total utilitarians imply that they think informing others of bad news can be morally wrong in ordinary circumstances? Or is it just the product of a poor definition of utility?

    I take it as fairly uncontroversial that a benevolent lie is morally acceptable when no change in the listener's decisions is possible. That is, falsely saying "Your son survived the plane crash" to a father who is literally moments from dying seems morally acceptable, because the father isn't going to decide anything differently based on that statement. But that's an unusual circumstance, so I don't think it should trouble us.

    Those of us who think torture is worse (i.e. are not total utilitarians) probably are not committed to any position on the revealing-unpleasant-truths-conundrum. Right?

    Replies from: fubarobfusco
    comment by fubarobfusco · 2012-02-10T00:49:42.021Z · LW(p) · GW(p)

    That is, falsely saying "Your son survived the plane crash" to the father who is literally moments from dying seems morally acceptable because the father isn't going to decide anything differently based on that statement. But that's an unusual circumstance, so I don't think it should trouble us.

    Agreed. Lying to others to manipulate them deprives them of the ability to make their own choices — which is part of complex human values — but in this case the father doesn't have any relevant choice to deprive him of.

    Those of us who think torture is worse (i.e. are not total utilitarians) probably are not committed to any position on the revealing-unpleasant-truths-conundrum. Right?

    Not that I can tell.

    I suppose another way of looking at this is a collective-action or extrapolated-volition problem. Each individual in the SPECKS case might prefer a momentary dust speck over the knowledge that their momentary comfort implied someone else's 50 years of torture. However, a consequentialist agent choosing TORTURE over SPECKS is doing so in the belief that SPECKS is actually worse. Can that agent be implementing the extrapolated volition of the individuals?

    comment by fezziwig · 2012-02-09T18:54:15.834Z · LW(p) · GW(p)

    This is the part of the play where I repeat more forcefully that you are fighting the hypo, but don't seem to be realizing that you are fighting the hypo.

    I don't realize it either; I'm not sure that it's true. Forgive me if I'm missing something obvious, but:

    • gRR wants to include the preferences of the people getting dust-specked in his utility function.
    • But as you point out, he can't; the hypothetical doesn't allow it.
    • So instead, he includes his extrapolation of what their preferences would be if they were informed, and attempts to act on their behalf.

    You can argue that that's a silly way to construct a utility function (you seem to be heading that way in your third paragraph), but that's a different objection.

    Replies from: TimS
    comment by TimS · 2012-02-09T20:56:12.525Z · LW(p) · GW(p)

    If you want to answer a question that isn't asked by the hypothetical, you are fighting the hypo. That's basically the paradigmatic example of "fighting the hypo."

    I think gRR has the right answer to the question he is asking. But it is a different question than the one Eliezer was asking, and it teaches different lessons. To the extent that gRR thinks he has rebutted the lessons from Eliezer's question, he's incorrect.

    Replies from: gRR
    comment by gRR · 2012-02-10T13:51:31.172Z · LW(p) · GW(p)

    I'm not sure why you think I'm asking a different question. Do you mean to say that in Eliezer's original problem all of the utilities are fixed, including mine? But then the question appears entirely without content:

    "Here are two numbers, this one is bigger than that one, your task is to always choose the biggest number. Now which number do you choose?"

    Besides, if this is indeed what Eliezer meant, then his choice of "torture" for one of the numbers is inconsistent. Torture always has utility implications for other people, not just the person being tortured. I hypothesize that this is what makes it different (non-additive, non-commensurable, etc.) for some moral philosophers.

    Replies from: TimS
    comment by TimS · 2012-02-10T14:12:19.647Z · LW(p) · GW(p)

    As fubarobfusco pointed out, your argument includes the implication that discovering or publicizing unpleasant truths can be morally wrong (because the participants were ignorant in the original formulation). It's not obvious to me that any moral theory is committed to that position.

    And without that moral conclusion, I think Eliezer is correct that a total utilitarian is committed to believing that choosing TORTURE over SPECKS maximizes total utility. The repugnant conclusion really is that repugnant. All of that was not an obvious result to me.

    Replies from: gRR
    comment by gRR · 2012-02-10T15:19:43.100Z · LW(p) · GW(p)

    Any utility function that does not give an explicit, overwhelmingly positive value to truth, and does give an explicit positive value to "pleasure", would obviously include the implication that discovering or publicizing unpleasant truths can be morally wrong. I don't see why it is relevant.

    If all the utilities are specified by the problem text completely, then TORTURE maximizes the total utility by definition. There's nothing to be committed about. But in this case, "torture" is just a label. It cannot refer to a real torture, because a real torture would produce different utility changes for people.

    comment by TheOtherDave · 2012-02-09T11:40:52.048Z · LW(p) · GW(p)

    Well, OK, sure, but... can't anything follow from expected utility maximization, the way you're approaching it? For all (X, Y), if someone chooses X over Y, that can directly follow from expected utility maximization, if the utility function values X more than Y.

    If that means the choice of X over Y is not necessarily wrong, OK, but it seems therefore to follow that no choice is necessarily wrong.

    I suspect I'm still missing your point.

    Replies from: gRR
    comment by gRR · 2012-02-09T14:43:14.561Z · LW(p) · GW(p)

    Given: a paradoxical (to everybody except some moral philosophers) answer "TORTURE" appears to follow from expected utility maximization.

    Possibility 1: the theory is right, everybody is wrong.

    But in the domain of moral philosophy, our preferences should be treated with more respect than elsewhere. We cherish some of our biases. They are what makes us human; we wouldn't want to lose them, even if they sometimes give an "inefficient" answer from the point of view of the simplest greedy utility function.

    These biases are probably reflectively consistent - even if we knew more, we would still wish to have them. At least, I can hypothesize that they are, until proven otherwise. Simply showing me the inefficiency doesn't make me wish not to have the bias. I value efficiency, but I value my humanity more.

    Possibility 2: the theory (expected utility maximization) is wrong.

    But the theory is rather nice and elegant, I wouldn't wish to throw it away. So, maybe there's another way to fix the paradox? Maybe, something wrong with the problem definition? And lo and behold - yes, there is.

    Possibility 3: the problem is wrong

    As the problem is stated, the preferences of the 3^^^3 people are not taken into account. It is assumed that the people don't know and will never know about the situation - because their total utility change from the whole affair is either nothing or a single small negative value.

    If people were aware of the situation, their utility changes would be different - a large negative value from knowing about the tortured person's plight and being forcibly forbidden to help, or a positive value from knowing they helped. Well, there would also be a negative value from moral philosophers who would know and worry about inefficiency, but I think it would be a relatively small value, after all.

    Unfortunately, in the context of the problem, the people are unaware. The choice for the whole of humanity is given to me alone. What should I do? Should I play dictator and make a choice that would be repudiated by everyone, if they only knew? This seems wrong, somehow. Oh! I can simulate them, ask what they would prefer, and give their preference a positive term within my own utility function. I would be the representative of the people in a government, or an AI trying to implement their CEV.

    Result: SPECKS!! Hurray! :)

    Replies from: TheOtherDave
    comment by TheOtherDave · 2012-02-09T23:23:46.999Z · LW(p) · GW(p)

    OK. I think I understand you now. Thanks for clarifying.

    comment by Dmytry · 2012-02-09T07:48:53.911Z · LW(p) · GW(p)

    The mathematical object used for moral calculations need not behave like the real numbers.

    My way of seeing it is that the barely noticeable speck of dust will be strictly smaller than torture no matter how many instances of the speck of dust happen. That's just how my "moral numbers" operate. The speck of dust equals A>0, the torture equals B>0, and A*N<B holds for any finite N. I forbid infinities (the number of distinct beings is finite).

    If you think that's necessarily irrational you have a lot of mathematics to learn. You can start with ordinal numbers.

    edit: note, I am ignoring consequences of the specks in the eyes, as I think they are not the point of the exercise and only obfuscate everything, plus one has to make assumptions like specks ending up in the eyes of people who are driving.

    Replies from: TimS
    comment by TimS · 2012-02-09T08:07:59.538Z · LW(p) · GW(p)

    If I understand correctly, then I agree with you. But this viewpoint has consequences.

    Replies from: Dmytry
    comment by Dmytry · 2012-02-09T08:37:30.351Z · LW(p) · GW(p)

    The linked post still assumes that discomfort space is one-dimensional, which it need not be. The decision outcomes do need to behave like comparison does (if a>b and b>c it must follow that a>c), but that's about it.

    Bottom line is, we can't very well reflect on how we think about this issue, so it's hard to come up with a model that works the same as your head, and which you can reflect on, calculate with a computer, etc.

    By the way, consider a being made of 10^30 parts with 10^30 states each. That's quite a big being, way bigger than a human. The number of distinct states of such a being is (10^30)^(10^30) = 10^(30*10^30), which is unimaginably smaller than 3^^^3. You can pick beings that are to humans as humans are to amoebas, repeat that step many times, and still be waaay short of 3^^^3. The guys who chose torture: congrats on also having a demonstrable reasoning failure when reasoning about huge numbers.

    edit: embarrassing math glitch of my own. It is difficult to reason about huge numbers and easy to miss something, such as the number of "people" exceeding the number of possible human mind states by an unimaginable margin.
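    A quick way to see how far short it falls, using the state count quoted above and repeated base-3 logarithms (a stand-in for tower height, since the numbers themselves cannot be written out):

        import math

        # The big being has (10^30)^(10^30) = 10^(3e31) distinct states.
        log10_states = 30 * 1e30                 # log10 of the state count

        # Count how many base-3 logs it takes to fall below 3.
        x = log10_states / math.log10(3)         # x = log3(state count)
        height = 1
        while x >= 3:
            x = math.log(x, 3)
            height += 1
        print(height)                            # 4: between 3^^4 and 3^^5 on the tower scale

        # 3^^^3 is a tower of 7,625,597,484,987 threes, so even one person per
        # possible state of such a being falls unimaginably short of 3^^^3 people.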

    comment by gRR · 2012-02-09T07:53:41.373Z · LW(p) · GW(p)

    Choosing TORTURE is making a decision to condemn someone to fifty years of torture, while knowing that 3^^^3 people would not want you to do so, would beg you not to, and would react with horror and revulsion if/when they knew you did it. And you must do it for the sake of some global principle or something. I'd say it puts one at least into the Well-Intentioned Extremist / Knight Templar category, if not outright villain.

    If an AI had made a choice like that, against known wishes of practically everyone, I'd say it was rather unfriendly.

    ADDED: Detailed

    comment by danlucraft · 2012-03-19T17:42:59.204Z · LW(p) · GW(p)

    People who choose torture, if the question was instead framed as the following would you still choose torture?

    "Assuming you know your lifespan will be at least 3^^^3 days, would you choose to experience 50 years worth of torture, inflicted a day at a time at intervals spread evenly across your life span starting tomorrow, or one dust speck a day for the next 3^^^3 days of your life?"

    Replies from: ArisKatsaris, Nornagest, TheOtherDave
    comment by ArisKatsaris · 2012-03-19T18:41:10.247Z · LW(p) · GW(p)

    I've heard this rephrasing before, but it means less than you might think. Human instinct tells us to postpone the bad as much as possible. Put aside the dust-speck issue for the moment: let's compare torture to torture. I'd be tempted to choose 1000 years of torture over a single year of torture, if the 1000 years are a few million years in the future but the single year had to start now.

    Does this fact mean I need to concede that 1000 years of torture are less bad than a single year? Surely not. It just illustrates human hyperbolic discounting.

    comment by Nornagest · 2012-03-19T18:47:41.919Z · LW(p) · GW(p)

    Clever, but not, I think, very illuminating -- 3^^^3 is just as fantastically, intuition-breakingly huge as it ever was, and using the word "tomorrow" adds a nasty hyperbolic discounting exploit on top of that. All the basic logic of the original still seems to apply, and so does the conclusion: if a dust speck is in any way commensurate with torture (a condition assumed by the OP, but denied by enough objections that I think it's worth pointing out explicitly), pick Torture, otherwise pick Specks.

    One of the frustrating things about the OP is that most of the objections to it are based on more or less clever intuition pumps, while the post itself is essentially making a utilitarian case for ignoring your intuitions. Tends to lead to a lot of people talking past each other.

    comment by TheOtherDave · 2012-03-19T20:26:00.233Z · LW(p) · GW(p)

    I would almost undoubtedly choose a dust speck a day for the rest of my life. So would most people.

    The question remains whether that would be the right choice... and, if so, how to capture the principles underlying that choice in a generalizable way.

    For example, in terms of human intuition, it's clear that the difference between suffering for a day and suffering for five years plus one day is not the same as the difference between suffering for fifty years and suffering for fifty-five years, nor between zero days and five years. The numbers matter.

    But it's not clear to me how to project the principles underlying that intuition onto numbers that my intuition chokes on.

    Replies from: Dmytry
    comment by Dmytry · 2012-03-19T20:40:06.225Z · LW(p) · GW(p)

    I would almost undoubtedly choose a dust speck a day for the rest of my life. So would most people.

    Could it be that the 50 years' worth of torture would also amount to more than a dust speck of daily discomfort - caused by having been psychologically traumatized by the torture - for the remaining 3^^^3 days?

    What if the 50 years of torture come at the end of the lifespan?

    I still would rather just take the dust speck now and then, though. Nothing forbids me from having a function more nonlinear than 3^^^^[n]3; as a messily wired neural network I can easily implement imprecise algebra on numbers that are far beyond any up-arrow notation, or even numbers x, y, z... such that x times any finite integer is still < y, y times any finite integer is still < z, and so on. Infinities are not hard to implement at all. Consider comparisons on arrays done so that a > b whenever a[1] > b[1], with a[2] consulted only to break ties. I'm using strings when I need that property in software, so that I can always make some value that will take precedence.

    edit: Note that one could think of the comparison between real values in the above example as a comparison between a[1]*bignumber + a[2], which may seem sensible, and then learn of the up-arrows, get mind-boggled, and reason that the up-arrows in a[2] will be larger than the big number. But they will never change the outcome of the comparison under the actual logic, where a[1] always matters more than a[2].
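    One way to sketch that kind of ordering in code (Python tuples compare lexicographically, so the first slot always wins before the second is consulted; the particular numbers are placeholders, since 3^^^3 itself cannot be represented):

        # Non-Archimedean "moral numbers": utility = (torture_units, speck_units).
        # No finite value in the second slot can ever outweigh the first slot.
        torture = (1, 0)              # one person tortured, no specks
        specks = (0, 3**27)           # stand-in for "an enormous number of specks"

        print(specks < torture)       # True: the specks always rank as the lesser harm
        print((0, 10**100) < (1, 0))  # still True, however big the second slot grows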

    Replies from: TheOtherDave
    comment by TheOtherDave · 2012-03-19T21:22:38.888Z · LW(p) · GW(p)

    Sure, if I factor in the knock-on effects of 50 years of torture (or otherwise ignore the original thought experiment and substitute my own) I might come to different results.

    Leaving that aside, though, I agree that the nature of my utility function in suffering is absolutely relevant here, and it's entirely possible for that function to be such that BIGNUMBER x SMALLSUFFERING is worth less than SMALLNUMBER x BIGSUFFERING even if BIGNUMBER >>>>>> SMALLNUMBER.

    The key word here is possible though. I don't really know that it is.

    comment by TraderJoe · 2012-05-10T13:18:56.806Z · LW(p) · GW(p)

    Common sense tells me the torture is worse. Common sense is what tells me the earth is flat. Mathematics tells me the dust specks scenario is worse. I trust mathematics and will damn one person to torture.

    comment by PhilosophyTutor · 2012-05-10T23:49:58.956Z · LW(p) · GW(p)

    This "moral dilemma" only has force if you accept strict Bentham-style utilitarianism, which treats all benefits and harms as vectors on a one-dimensional line, and cares about nothing except the net total of benefits and harms. That was the state of the art of moral philosophy in the year 1800, but it's 2012 now.

    There are published moral philosophies which handle the speck/torture scenario without undue problems. For example if you accepted Rawls-style, risk-averse choice from a position where you are unaware whether you will be one of the speck-victims or the torture victim, you would immediately choose the specks. Choosing the specks maximises the welfare of the least well off (they are subject to a speck, not torture) and, if you don't know which role you will play, eliminates the risk you might be the torture victim.

    (Bentham-style utility calculations are completely risk-neutral and care only about expected return on investment. However nothing about the universe I'm aware of requires you to be this way, as opposed to being risk-averse).

    Or for that matter, if you held a modified version of utilitarianism that subscribed to some notion of "justice" or "what people deserve", and cared about how utility was distributed between persons instead of being solely concerned with the strict mathematical sum of all utility and disutility, you could just say that you don't care how many dust specks you pile up: the degree of unfairness in a distribution where 3^^^3 people are spared a dust speck and one person gets tortured makes the torture scenario a less preferable distribution.

    I know Eliezer's on record as advising people not to read philosophy, but I think this is a case where that advice is misguided.

    Replies from: steven0461
    comment by steven0461 · 2012-05-11T00:22:24.926Z · LW(p) · GW(p)

    Rawls's Wager: the least well-off person lives in a different part of the multiverse than we do, so we should spend all our resources researching trans-multiverse travel in a hopeless attempt to rescue that person. Nobody else matters anyway.

    Replies from: PhilosophyTutor
    comment by PhilosophyTutor · 2012-05-11T06:25:56.057Z · LW(p) · GW(p)

    If this is a problem for Rawls, then Bentham has exactly the same problem given that you can hypothesise the existence of a gizmo that creates 3^^^3 units of positive utility which is hidden in a different part of the multiverse. Or for that matter a gizmo which will inflict 3^^^3 dust specks on the eyes of the multiverse if we don't find it and stop it. Tell me that you think that's an unlikely hypothesis and I'll just raise the relevant utility or disutility to the power of 3^^^3 again as often as it takes to overcome the degree of improbability you place on the hypothesis.

    However I think it takes a mischievous reading of Rawls to make this a problem. Given that the risk of the trans-multiverse travel project being hopeless (as you stipulate) is substantial and these hypothetical choosers are meant to be risk-averse, not altruistic, I think you could consistently argue that the genuinely risk-averse choice is not to pursue the project since they don't know this worse-off person exists nor that they could do anything about it if that person did exist.

    That said, diachronous (cross-time) moral obligations are a very deep philosophical problem. Given that the number of potential future people is unboundedly large, and those people are at least potentially very badly off, if you try to use moral philosophies developed to handle current-time problems and apply them to far-future diachronous problems it's very hard to avoid the conclusion that we should dedicate 100% of the world's surplus resources and all our free time to doing all sorts of strange and potentially contradictory things to benefit far-future people or protect them from possible harms.

    This isn't a problem that Bentham's hedonistic utilitarianism, nor Eliezer's gloss on it, handles any more satisfactorily than any other theory as far as I can tell.

    comment by White_Shark · 2012-06-30T00:12:22.279Z · LW(p) · GW(p)

    The dust speck is a slight irritation. Hearing about someone being tortured is a bigger irritation. Also, pain depends greatly on concentration. Something that hurts "twice as much" is actually much worse: let's say it is a hundred times worse. Of course this levels off (it is a curve) at some point, but in this case that is not a problem, as we can say that the torture is very close to the physical maximum and the specks are very close to the physical minimum of pain. The difference between the speck and the torture is immense. Difference in time = 1.5 million. Difference in hurting = 2 million. So we can have a huge number (like 2 million to the power of 24 million to the power of 1.5 million). This number is going to be huge. Even if this does not add up to our number of specks, one can see that one can define parameters to make either side the better choice. In the end it is just a moral question.

    comment by BenjaminB · 2012-07-17T03:08:42.701Z · LW(p) · GW(p)

    At first, I picked the dust specks as being the preferable answer, and it seemed obvious. What eventually turned me around was when I considered the opposite situation -- with GOOD things happening, rather than BAD things. Would I prefer that one person experience 50 years of the most happiness realistic in today's world, or that 3^^^3 people experience the least good, good thing?

    Replies from: shminux, Alicorn
    comment by Shmi (shminux) · 2012-07-17T05:45:08.028Z · LW(p) · GW(p)

    Why do you think that there has to be a symmetry between positive and negative utility?

    comment by jacoblyles · 2012-07-18T09:08:28.068Z · LW(p) · GW(p)

    I was very surprised to find that a supporter of the Complexity of Value hypothesis and the author who warns against simple utility functions advocates torture using simple pseudo-scientific utility calculus.

    My utility function has constraints that prevent me from doing awful things to people, unless it would prevent equally awful things done to other people. That this is a widely shared moral intuition is demonstrated by the reaction in the comments section. Since you recognize the complexity of human value, my widely-shared preferences are presumably valid.

    In fact, the mental discomfort caused by people who heard of the torture would swamp the disutility from the dust specks. Which brings us to an interesting question - is morality carried by events or by information about events? If nobody else knew of my choice, would that make it better?

    For a utilitarian, the answer is clearly that the information about morally significant events is what matters. I imagine so-called friendly AI bots built on utilitarian principles doing lots of awful things in secret to achieve their ends.

    Also, I'm interested to hear how many torturers would change their mind if we kill the guy instead of just torturing him. How far does your "utility is all that matters" philosophy go?

    Replies from: TheOtherDave, thomblake
    comment by TheOtherDave · 2012-07-18T14:02:52.404Z · LW(p) · GW(p)

    There's something really odd about characterizing "torture is preferable to this utterly unrealizable thing" as "advocating torture."

    It's not obviously wrong... I mean, someone who wanted to advocate torture could start out from that kind of position, and then once they'd brought their audience along swap it out for simply "torture is preferable to alternatives", using the same kind of rhetorical techniques you use here... but it doesn't seem especially justified in this case. Mostly, it seems like you want to argue that torture is bad whether or not anyone disagrees with you.

    Anyway, to answer your question: to a total utilitarian, what matters is total utility-change. That includes knock-on effects, including mental discomfort due to hearing about the torture, and the way torturing increases the likelihood of future torture of others, and all kinds of other stuff. So transmitting information about events is itself an event with moral consequences, to be evaluated by its consequences. It's possible that keeping the torture a secret would have net positive utility; it's possible it would have net negative utility.

    All of which is why the original thought experiment explicitly left the knock-on effects out, although many people are unwilling or unable to follow the rules of that thought experiment and end up discussing more real-world plausible variants of it instead (as you do here).

    For a utilitarian, the answer is clearly that the information about morally significant events is what matters.

    Well, in some bizarre sense that's true. I mean, if I'm being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me) a utilitarian presumably concludes that this is not an event of moral significance. (It's decidedly unclear in what sense it's an event at all.)

    I imagine so-called friendly AI bots built on utilitarian principles doing lots of awful things in secret to achieve its ends.

    Sure, that seems likely.

    I'm interested to hear how many torturers would change their mind if we kill the guy instead of just torturing him. How far does your "utility is all that matters" philosophy go?

    I endorse killing someone over allowing a greater amount of bad stuff to happen, if those are my choices. Does that answer your question? (I also reject your implication that killing someone is necessarily worse than torturing them for 50 years, incidentally. Sometimes it is, sometimes it isn't. Given that choice, I would prefer to die... and in many scenarios I endorse that choice.)

    Replies from: army1987, DaFranker
    comment by A1987dM (army1987) · 2012-07-18T16:20:44.225Z · LW(p) · GW(p)

    There's something really odd about characterizing "torture is preferable to this utterly unrealizable thing" as "advocating torture."

    You know, in natural language “x is better than y” often has the connotation “x is good”, and people go at lengths to avoid such wordings if they don't want that connotation. For example, “‘light’ cigarettes are no safer than regular ones” is logically equivalent to “regular cigarettes are at least as safe as ‘light’ ones”, but I can't imagine an anti-smoking campaign saying the latter.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2012-07-18T16:35:37.385Z · LW(p) · GW(p)

    Fair enough. For maximal precision I suppose I ought to have said "I reject your characterization of..." rather than "There's something really odd about characterizing...," but I felt some polite indirection was called for.

    comment by DaFranker · 2012-07-19T16:49:46.316Z · LW(p) · GW(p)

    Well, in some bizarre sense that's true. I mean, if I'm being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me) a utilitarian presumably concludes that this is not an event of moral significance. (It's decidedly unclear in what sense it's an event at all.)

    Well, assuming the torture is artificially bounded to absolute impactlessness, then yes, it is irrelevant (in fact, it arguably doesn't even exist). However, a good rationalist utilitarian will retroactively consider future effects of the torture, supposing it is not so bounded, and once the fact of the torture can then be deduced, it does retroactively become a morally significant event in a timeless perspective, if I understand the theory properly.

    comment by thomblake · 2012-07-18T16:14:44.920Z · LW(p) · GW(p)

    The point was not necessarily to advocate torture. It's to take the math seriously.

    In fact, the mental discomfort caused by people who heard of the torture would swamp the disutility from the dust specks.

    Just how many people do you expect to hear about the torture? Have you taken seriously how big a number 3^^^3 is? By how many utilons do you expect their disutility to exceed the disutility from the dust specks?

    Replies from: jacoblyles
    comment by jacoblyles · 2012-07-18T19:06:17.771Z · LW(p) · GW(p)

    First, I don't buy the process of summing utilons across people as a valid one. Lots of philosophers have objected to it. This is a bullet-biting club, and I get that. I'm just not biting those bullets. I don't think 400 years of criticism of Utilitarianism can be solved by biting all the bullets. And in Eliezer's recent writings, it appears he is beginning to understand this. Which is great. It is reducing the odds he becomes a moral monster.

    Second, I value things other than maximizing utilons. I got the impression that Eliezer/Less Wrong agreed with me on that from the Complex Values post and posts about the evils of paperclip maximizers. So great evils are qualitatively different to me from small evils, even small evils done to a great number of people!

    I get what you're trying to do here. You're trying to demonstrate that ordinary people are innumerate, and you all are getting a utility spike from imagining you're more rational than them by choosing the "right" (naive hyper-rational utilitarian-algebraist) answer. But I don't think it's that simple when we're talking about morality. If it were, the philosophical project that's lasted 2500 years would finally be over!

    Replies from: thomblake
    comment by thomblake · 2012-07-18T19:23:11.878Z · LW(p) · GW(p)

    You were the one who claimed that the mental discomfort from hearing about torture would swamp the disutility from the dust specks - I assumed from that, that you thought they were commensurable. I thought it was odd that you thought they were commensurable but thought the math worked out in the opposite direction.

    I believe Eliezer's post was not so much directed at folks who disagree with utilitarianism - rather, it's supposed to be about taking the math seriously, for those who are. If you're not a utilitarian, you can freely regard it as another reductio.

    You don't have to be any sort of simple or naive utilitarian to encounter this problem. As long as goods are in any way commensurable, you need to actually do the math. And it's hard to make a case for a utilitarianism in which goods are not commensurable - in practice, we can spend money towards any sort of good, and we don't favor only spending money on the highest-order ones, so that strongly suggests commensurability.

    comment by Decius · 2012-07-18T19:01:44.653Z · LW(p) · GW(p)

    No. One of those actions, or something different, happens if I take no action. Assuming that neither the one person nor the 3^^^3 people have consented to allow me to harm them, I must choose the course of action by which I harm nobody, and the abstract force harms people.

    If you instead offer me the choice where I prevent the harm (and that the 3^^^3+1 people all consent to allow me to do so), then I choose to prevent the torture.

    My maximal expected utility comes from a universe in which I have taken zero additional actions without the consent of every other party involved. With that satisfied, I seek to maximize my own happiness. It would make me happier to prevent a significant harm than to prevent an insignificant harm, and both would be preferable to preventing no harm, all other things being equal.

    If the people in question consented to the treatment, then the decision is amoral, and I would choose to inflict the insignificant harm.

    From a strict utility perspective, if you describe the value of the torture as -1, do you describe the value of the speck of dust in one person's eye as less than -1/(3^^^3)? There is some epsilon for which it is preferable to have harm of epsilon done to any real number of people than to have a harm of 1 done to one person. Admitting that does not prohibit you from comparing epsilons, either.

    comment by fubarobfusco · 2012-07-29T14:41:07.574Z · LW(p) · GW(p)

    How bad is the torture option?

    Let's say a human brain can have ten thoughts per second; or, the rate of human awareness is ten perceptions per second. Fifty years of torture then means roughly sixteen billion tortured thoughts, or perceptions of torture.

    Let's say a human brain can distinguish twenty logarithmic degrees of discomfort, with the lowest being "no discomfort at all", the second-lowest being a dust speck, and the highest being torture. In other words, a single moment of torture is 2^19 = 524288 times worse than a dust speck; and a dust speck is the smallest discomfort possible. Let's call a unit of discomfort a "dol" (from Latin dolor).

    In other words, the torture option means roughly 1.6 × 10^10 moments × 2^19 dols; whereas the dust-specks option means 3^^^3 moments × 1 dol.

    The assumptions going into this argument are the speed of human thought or perception, and the scale of human discomfort or pain. These are not accurately known today, but there must exist finite limits — humans do not think or perceive infinitely fast; and the worst unpleasantness we can experience is not infinitely bad. I have assumed a log scale for discomfort because we use log scales for other senses, e.g. brightness of light and volume of sound. However, all these assumptions can be empirically corrected based on facts about human neurology.

    Torture is really, really bad. But it is not infinitely bad.
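    A quick tally of the torture side under those assumptions (ten perception-moments per second, torture at the top of twenty log-spaced grades):

        # Total discomfort of the torture option, under the assumptions above.
        moments = 50 * 365.25 * 24 * 3600 * 10    # ~1.6e10 perception-moments in 50 years
        dols_per_moment = 2 ** 19                 # worst of 20 logarithmic grades
        torture_total = moments * dols_per_moment
        print(f"{torture_total:.3e}")             # ~8.3e15 dols -- huge, but finite

        # The specks option is 3^^^3 moments at 1 dol each; 3^^^3 cannot even be
        # written down, so on this simple additive scale the specks total is
        # incomparably larger than ~8.3e15 dols.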

    That said, there may be other factors in the moral calculation of which to prefer. For instance, the moral badness of causing a particular level of discomfort may not be linear in the amount of discomfort: causing three dols once may be worse than causing one dol three times. However, this seems difficult to justify. Discomfort is subjective, which is to say, it is measured by the beholder — and the beholder only has so much brain to measure it with.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2012-07-29T15:01:54.662Z · LW(p) · GW(p)

    I suspect that I would prefer the false memory of having been tortured for five minutes to the false memory of having been tortured for a year, assuming the memories are close replicas of what memories of the actual event would be like. I would relatedly prefer that someone else experience the former rather than the latter, even if I'm perfectly aware the memory is false. This suggests to me that whatever I'm doing to make my moral judgments that torture is bad, it's not just summing the number of perception-moments... there are an equal number of perception-moments in those two cases, after all. (Specifically, none at all.)

    That said, this line of thinking quickly runs aground on the "no knock-on effects" condition of the initial thought experiment.

    Replies from: fubarobfusco, Elithrion
    comment by fubarobfusco · 2012-07-29T15:08:32.541Z · LW(p) · GW(p)

    This suggests to me that whatever I'm doing to make my moral judgments that torture is bad, it's not just summing the number of perception-moments... there are an equal number of perception-moments in those two cases, after all. (Specifically, none at all.)

    True — we need a term for moments of discomfort caused by contemplation, not just ones caused by perception.

    It seems to me, though, that your brain can only perceive a finite number of gradations of unpleasant contemplation, too. The memory of being tortured for five minutes, the memory of being tortured for a year, and the memory of having gotten a dust speck in your eye could occupy points on this scale of unpleasantness.

    comment by Elithrion · 2013-01-21T00:09:14.836Z · LW(p) · GW(p)

    I suspect that I would prefer the false memory of having been tortured for five minutes to the false memory of having been tortured for a year, assuming the memories are close replicas of what memories of the actual event would be like.

    Actually, from what I read about related research in "Thinking, Fast and Slow", it's not clear that you would (or that the difference would be as large as you might expect, at least). It seems that memories of pain depend largely on the most intense moment of pain and on the final moment of pain, not necessarily on duration.

    For example, in one experiment (I read the book a week ago and write from memory), subjects were asked to put their hand in a bowl of cold water (a painful experience) for two minutes, then they were asked to put their hands in cold water for two minutes, followed by the water being warmed gradually over another 5 minutes. (There were reasonable controls, obviously.) Then they were asked which experience to repeat. The majority chose experience two, even though intuitively it is strictly worse than experience one.

    Of course, you'd have to find the actual related paper(s), check how high the correlation/ignoring-duration effect is, check if there's significant inter-individual variation (whether maybe you're an unusual person who cares about duration), but, regardless, there are significant reasons to doubt your intuitions in this scenario.

    Replies from: MugaSofer
    comment by MugaSofer · 2013-01-21T09:17:03.362Z · LW(p) · GW(p)

    ... huh.

    I wonder if we might actually value experiences this way?

    Replies from: Elithrion
    comment by Elithrion · 2013-01-21T19:54:34.522Z · LW(p) · GW(p)

    Daniel Kahneman suggests that we do. We remember things imperfectly and optimize for the way we remember things. Wiki has a quick summary.

    comment by rkyeun · 2012-08-01T23:51:41.296Z · LW(p) · GW(p)

    I think I have to go with the dust specks. Tomorrow, all 3^^^3 of those people will have forgotten entirely about the speck of dust. It is an event nearly indistinguishable from thermal noise. People, all of them everywhere, get dust specks in their eyes just going about their daily lives with no ill effect.

    The torture actually hurts someone. And in a way that's rather non-recoverable. Recoverability plays a large part in my moral calculations.

    But there's a limit to how many times I can make that trade. 3^^^3 people is a LOT of people, and it doesn't take a significant fraction of THAT at all before I have to stop saving torture victims, lest everyone everywhere's lives consist of nothing but a sandblaster to the face.

    Replies from: DaFranker
    comment by DaFranker · 2012-08-02T00:41:40.542Z · LW(p) · GW(p)

    What you're doing there is positing a "qualitative threshold" of sorts, where the anti-hedons from the dust specks cause absolutely zero disutility whatsoever. That can be an acceptable real-world evaluation within a loaded subjective context.

    However, the problem states that the dust specks have non-zero disutility. This means that they do have some sort of predicted net negative impact somewhere. If that impact is merely to slow down the brain's visual recognition of one word by even 0.03 seconds, in a manner directly caused by the dust speck and avoided without it, then over 3^^^3 people that is still more man-hours of work lost than the sum of all lifetimes of all humans on Earth to this day, ever. If that is not a tragic loss much more dire than one person being tortured, I don't see what could be. And I'm obviously being generous there with that "0.03 seconds" estimate.
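    A rough check of that claim (the lifetime figures below are my own ballpark assumptions, not numbers from the comment):

        # How many people at 0.03 seconds each does it take to exceed the sum of
        # all human lifetimes ever lived?
        people_ever = 1.1e11                             # ~110 billion humans ever born (rough)
        avg_lifetime_seconds = 40 * 365.25 * 24 * 3600   # assume ~40 years average lifespan
        all_lifetimes = people_ever * avg_lifetime_seconds
        print(f"{all_lifetimes:.2e}")                    # ~1.4e20 person-seconds

        delay = 0.03                                     # seconds lost per dust speck
        breakeven_people = all_lifetimes / delay
        print(f"{breakeven_people:.2e}")                 # ~4.6e21 people

        # 4.6e21 is a 22-digit number; 3^^^3 people exceed it beyond description,
        # so the aggregate lost time dwarfs every human lifetime ever lived.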

    Theoretically, all this accumulated lost time could mean the difference between the extinction or survival of the human race in the face of a pan-galactic super-cataclysmic event, simply by throwing us off the particular Planck-level-exactly-timed course of events that would have allowed us to find a way to survive just barely, by a few (total, relatively absolute) seconds too close for comfort.

    That last assumes the deciding agent has the superintelligent power to actually compute this. If we are calculating from unknown future causal utilities, and the expected utility of a dust speck is still non-zero and negative, then it is a simple abstraction of the above example and the rational choice is still simply the torture.

    Replies from: rkyeun, Eliezer_Yudkowsky
    comment by rkyeun · 2012-08-02T02:08:35.944Z · LW(p) · GW(p)

    If you ask me the slightly different question, where I choose between 50 years of torture applied to one man, or 3^^^3 specks of dust falling one each into 3^^^3 people's eyes plus all humanity being destroyed, I will give a different answer. In particular, I will abstain, because my moral calculation would then favor the torture over the destruction of the human race, but I have a built-in failure mode where I refuse to torture someone even if I somehow think it is the right thing to do.

    But that is not the question I was asked. We could also have the man tortured for fifty years and then the human race gets wiped out BECAUSE the pan-galactic cataclysm favors civilizations who don't make the choice to torture people rather than face trivial inconveniences.

    Consider this alternate proposal:

    Hello Sir and/or Madam:

    I am trying to collect 3^^^3 signatures in order to prevent a man from being tortured for 50 years. Would you be willing to accept a single speck of dust into your eye towards this goal? Perhaps more? You may sign as many times as you are comfortable with. I eagerly await your response.

    Sincerely,

    rkyeun

    PS: Do you know any masochists who might enjoy 50 years of torture?

    BCC: 3^^^3-1 other people.

    comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-02T04:28:59.134Z · LW(p) · GW(p)

    We did specify no long-term consequences - otherwise the argument instantly passes, just because at least 3^^7625597484986 people would certainly die in car accidents due to blinking. (3^^^3 is 3 to the power of that.)

    Replies from: DaFranker, Kawoomba
    comment by DaFranker · 2012-08-02T11:26:44.380Z · LW(p) · GW(p)

    I admit the argument of long-term "side effects" like extinction of the human race was gratuitous on my part. I'm just intuitively convinced that such possibilities would count towards the expected disutility of the dust motes in a superintelligent perfect rationalist's calculations. They might even be the only reason there is any expected disutility at all, for all I know.

    Otherwise, my puny tall-monkey brain wiring has a hard time imagining how a micro-fractional anti-hedon would actually count for anything other than absolute zero expected utility in the calculations of any agent with imperfect knowledge.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2012-08-02T14:08:06.713Z · LW(p) · GW(p)

    Sure. Admittedly, when there are 3^^^3 humans around, torturing me for fifty years is also such a negligible amount of suffering relative to the current lived human experience that it, too, has an expected cost that rounds to zero in the calculations of any agent with imperfect knowledge, unless they have some particular reason to care about me, which in that world is vanishingly unlikely.

    Replies from: DaFranker
    comment by DaFranker · 2012-08-02T15:15:59.591Z · LW(p) · GW(p)

    Heh.

    When put like that, my original post / arguments sure seem not to have been thought through as much as I thought I had.

    Now, rather than thinking the solution obvious, I'm leaning more towards the idea that this eventually reduces to the problem of building a good utility function, one that also assigns the right utility value to the expected utility calculated by other beings based on unknown (or known?) other utility functions that may or may not irrationally assign disproportionate disutility to respective hedon-values.

    Otherwise, it's rather obvious that a perfect superintelligence might find a way to make the tortured victim enjoy the torture and become enhanced by it, while also remaining a productive member of society during all fifty years of torture (or some other completely ideal solution we can't even remotely imagine) - though this might be in direct contradiction with the implicit premise of torture being inherently bad, depending on interpretation/definition/etc.

    EDIT: Which, upon reading up a bit more of the old comments on the issue, seems fairly close to the general consensus back in late 2007.

    comment by Kawoomba · 2012-08-03T19:16:35.853Z · LW(p) · GW(p)

    If you still use "^" to refer to Knuth's up-arrow notation, then 3^^^3 != 3^(3^^26).

    3^^^3 = 3^^(3^^3) = 3^^(3^27) != 3^(3^^27)
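
    (For anyone unsure how the notation unwinds, here is the standard recursive definition of the up-arrow as a minimal Python sketch, feasible only for tiny inputs; the function name is just illustrative. Under this definition 3^^^3 = 3^^(3^^3), and each 3^^k in turn equals 3^(3^^(k-1)).)

    ```python
    def up_arrow(a, arrows, b):
        """Knuth's up-arrow a ^...^ b with the given number of arrows (tiny inputs only)."""
        if arrows == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, arrows - 1, up_arrow(a, arrows, b - 1))

    print(up_arrow(3, 1, 3))   # 3^3  = 27
    print(up_arrow(3, 2, 3))   # 3^^3 = 3^(3^3) = 7625597484987
    # up_arrow(3, 3, 3) would be 3^^^3 = 3^^7625597484987, far beyond any possible computation.
    ```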

    Replies from: Eliezer_Yudkowsky
    comment by Ronny Fernandez (ronny-fernandez) · 2012-08-10T22:00:54.348Z · LW(p) · GW(p)

    If asked independently whether or not I would take a dust speck in the eye to spare a stranger 50 years of torture, I would say "sure". I suspect most people would if asked independently. It should make no difference to each of those 3^^^3 dust speck victims that there are another (3^^^3)-1 people that would also take the dust speck if asked.

    It seems then that there are thresholds in human value. Human value might be better modeled by surreals than reals. In such a system we could represent the utility of 50 years of torture as -Ω and represent the utility of a dust speck in one's eye as -1. This way, no matter how many dust specks end up in eyes, they don't add up to torturing someone for 50 years. However, we would still minimize torture first, and then minimize dust specks.

    The greater problem is to exhibit a general procedure for when we should treat one fate as being infinitely worse than another, vs. treating it as merely being some finite amount worse.
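
    (A minimal sketch of one way to encode such a threshold, assuming a simple lexicographic ordering in which torture counts are compared first and speck counts only break ties; the class and field names are hypothetical, and this is an illustration of the proposal, not a claim that it is the right value model.)

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TieredDisutility:
        """Lexicographic ('surreal-flavoured') disutility: tortures dominate specks."""
        tortures: int   # count of 50-year tortures
        specks: int     # count of dust specks

        def __lt__(self, other):
            # Compare torture counts first; speck counts only break ties.
            return (self.tortures, self.specks) < (other.tortures, other.specks)

    # No number of dust specks ever adds up to a single torture under this ordering...
    assert TieredDisutility(tortures=0, specks=3**27) < TieredDisutility(tortures=1, specks=0)
    # ...yet among speck-only outcomes, fewer specks is still strictly better.
    assert TieredDisutility(0, 10) < TieredDisutility(0, 11)
    ```

    (This is essentially the ordering that the surreal-valued proposal above induces for these two kinds of outcome: minimize the infinite-weight term first, then the finite one.)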

    Replies from: ronny-fernandez, Kindly, fubarobfusco
    comment by Ronny Fernandez (ronny-fernandez) · 2012-08-10T22:06:00.541Z · LW(p) · GW(p)

    Here's a suggestion: if someone going through fate A is incapable of noticing whether or not they're going through fate B, then fate A is infinitely worse than fate B.

    comment by Kindly · 2012-08-10T23:36:52.437Z · LW(p) · GW(p)

    That's a fairly manipulative way of asking you to make that decision, though. If I were asked whether or not I would take a hard punch in the arm to spare a stranger a broken bone, I would answer "sure", and I suspect most people would, as well. However, it is pretty much clear to me that 3^^^3 people getting punched is much much worse than one person breaking a bone.

    comment by fubarobfusco · 2012-08-11T06:01:21.252Z · LW(p) · GW(p)

    It should make no difference to each of those 3^^^3 dust speck victims that there are another (3^^^3)-1 people that would also take the dust speck if asked.

    That rests on the assumption that each person only cares about their own dust speck and the possible torture victim. If people are allowed to care about the aggregate quantity of suffering, then this choice might represent an Abilene paradox.

    comment by Irgy · 2012-09-08T01:30:39.025Z · LW(p) · GW(p)

    The other day, I got some dirt in my eye, and I thought "That selfish bastard, wouldn't go and get tortured and now we all have to put up with this s#@$".

    comment by mantis · 2012-09-09T18:54:47.021Z · LW(p) · GW(p)

    I don't see that it's necessary -- or possible, for that matter -- for me to assign dust specks and torture to a single, continuous utility function. On a scale of disutility that includes such events as "being horribly tortured," the disutility of a momentary irritation such as a dust speck in the eye has a value of precisely zero -- not 0.000...0001, but just plain 0, and of course, 0 x 3^^^3 = 0.

    Furthermore, I think the "minor irritations" scale on which dust specks fall might increase linearly with the time of exposure, and would certainly increase linearly with the number of individuals exposed to it. On the other hand, the disutility of torture, given my understanding of how memory and anticipation affect people's experience of pain, would increase exponentially with time over the range of a few microseconds to a few days, then level off to something less than a linear increase with acclimatization over the range of days to years. It would increase linearly with the number of people suffering a given degree of pain for a given amount of time. (All other things being equal, of course. People's pain tolerance varies with age, experience, and genetics; it would be much worse to inflict any given amount of pain on a young child than on an adult who's already gone through, say, Navy S.E.A.L. training, and thus demonstrated a far higher-than-average pain tolerance.)

    Thus, it would be enormously worse to inflict X amount of pain on one individual for sixty minutes than on 60 individuals for one minute each, which in turn would be much worse than inflicting the same pain on 3600 individuals for one second each -- and if we could spread it out to a microsecond each for 3,600,000,000 people, the disutility might vanish altogether as the "experience" becomes too brief for the human nervous system to register at all, and thus ceases to be an experience. However, once we get past where acclimatization inflects the curve, it would be much worse to torture 52 people for one week each than to torture one person for an entire year. It might even be worse to torture ten people for one week each than one for an entire year -- I'm not sure of the precise values involved in this utility function, and happily, at the fine scale, I'll probably never need to work them out (the empirical test is possible in principle, of course, but could only be performed in practice by a fiend like Josef Mengele).
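
    (A toy Python sketch of a curve with roughly that shape, purely to make the claim concrete; the quadratic-then-square-root form and the three-day knee are placeholder assumptions, not an endorsement of any particular rate.)

    ```python
    def torture_disutility(seconds, knee=3 * 24 * 3600):
        """Toy disutility-vs-duration curve: faster-than-linear growth up to a knee
        of 'a few days', slower-than-linear growth after acclimatization sets in.
        The exponents and the knee are illustrative placeholders, not claims."""
        if seconds <= knee:
            return (seconds / knee) ** 2     # convex: concentrating pain in time is worse
        return (seconds / knee) ** 0.5       # concave past the knee; continuous at the knee

    # Linear in the number of people, super-linear in duration below the knee:
    one_person_one_hour = torture_disutility(3600)
    sixty_people_one_minute = 60 * torture_disutility(60)
    print(one_person_one_hour / sixty_people_one_minute)   # 60.0: the concentrated hour is far worse
    ```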

    There's also the fact that knowing many people can and have endured a particular pain seems to make it more endurable for others who are aware of that fact. As Spider Robinson says, "Shared joy is increased, shared pain is lessened" -- I don't know if that really "refutes entropy," but both of those clauses are true individually. That's part of the reason egalitarianism, as other commenters have pointed out, has positive utility value.

    Replies from: Kindly, None
    comment by Kindly · 2012-09-09T19:20:09.885Z · LW(p) · GW(p)

    If dust specks have a value of 0, then what's the smallest amount of discomfort that has a nonzero value instead? Use that as your replacement dust speck.

    And of course, the disutility of torture certainly increases in nonlinear ways with time. The 3^^^3 is there to make up for that. 50 years of torture for one person is probably not as bad as 25 years of torture for a trillion people. This in turn is probably not as bad as 12.5 years of torture for a trillion trillion (10^24) people (sorry, my large-number vocabulary is lacking). If we keep doing this (halving the torture length, multiplying the number of people by a trillion), then are we always going from bad to worse? And do we ever get to the point where each individual person tortured experiences about as much discomfort as our replacement dust speck?
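
    (To make the halving sequence concrete, a small sketch under the straightforward assumption that we keep halving the duration and multiplying the head-count by a trillion until each victim gets less than one second; the one-second stopping point is arbitrary.)

    ```python
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    duration_seconds = 50 * SECONDS_PER_YEAR   # start: 50 years for one person
    steps = 0
    while duration_seconds > 1:                # stop once each victim gets under one second
        duration_seconds /= 2                  # halve the duration...
        steps += 1                             # ...and multiply the victims by 10^12

    print(steps)                               # 31 steps
    print(f"{duration_seconds:.2f} s each")    # ~0.73 s: roughly dust-speck territory
    print(f"10^{12 * steps} people")           # 10^372 people -- still nothing next to 3^^^3
    ```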

    Replies from: mantis
    comment by mantis · 2012-09-10T17:08:40.007Z · LW(p) · GW(p)

    If dust specks have a value of 0, then what's the smallest amount of discomfort that has a nonzero value instead?

    I don't know exactly where I'd make the qualitative jump from the "discomfort" scale to the "pain" scale. There are so many different kinds of unpleasant stimuli, and it's difficult to compare them. For electric shock, say, there's probably a particular curve of voltage, amperage and duration below which the shock would qualify as discomfort, with a zero value on the pain scale, and above which it becomes pain (I'll even go so far as to say that for short periods of contact, the voltage and amperage values lie between those of a violet wand and those of a stun gun). For localized heat, I think it would have to be at least enough to cause a small first-degree burn; for localized cold, enough to cause the beginnings of frostbite (i.e. a few living cells lysed by the formation of ice crystals in their cytoplasm). For heat and cold over the whole body, it would have to be enough to overcome the body's natural thermostat, initiating hypothermia or heatstroke.

    It occurs to me that I've purposefully endured levels of discomfort I would probably regard as pain with a non-zero value on the torture scale if they were inflicted on me involuntarily, as a result of working out at the gym (which has an expected payoff in health and appearance, of course), and from wearing an IV for two 36-hour periods in a pharmacokinetic study for which I'd volunteered (it paid $500); I would certainly do so again, for the same inducements. Choice makes a big difference in our subjective experience of an unpleasant stimulus.

    50 years of torture for one person is probably not as bad as 25 years of torture for a trillion people.

    Of course not; by the scale I posited above, 50 years for one person isn't even as bad as 25 years for two people.

    If we keep doing this (halving the torture length, multiplying the number of people by a trillion) then are we always going from bad to worse?

    No, but the length has to get pretty tiny (probably somewhere between a millisecond and a microsecond) before we reverse the direction.

    And do we ever get to the point where each individual person tortured experiences about as much discomfort as our replacement dust speck?

    Yes, we do; in fact, we eventually get to a point where each person "tortured" experiences no discomfort at all, because the nervous system is neither infinitely fast nor infinitely sensitive. If you're using temperature for your torture, heat transfer happens at a finite speed; no matter how hot or cold the material that touches your skin, there's a possible time of contact short enough that it wouldn't change your skin temperature enough to cause any discomfort at all. Even an electric shock could be brief enough not to register.

    Replies from: Kindly, aspera
    comment by Kindly · 2012-09-10T20:34:43.613Z · LW(p) · GW(p)

    In other words, it follows that 1 person being tortured for 50 years is better than 3^^^3 people being tortured for a millisecond.

    You're well on your way to the dark side.

    Replies from: mantis
    comment by mantis · 2012-09-11T04:46:38.308Z · LW(p) · GW(p)

    I might have to bring it up to a minute or two before I'd give you that -- I perceive the exponential growth in disutility for extreme pain over time during the first few minutes/hours/days as very, very steep. Now, if we posit that the people involved are immortal, that would change the equation quite a bit, because fifty years isn't proportionally that much more than fifty seconds in a life that lasts for billions of years; but assuming the present human lifespan, fifty years is the bulk of a person's life. What duration of torture qualifies as a literal fate worse than (immediate) death, for a human with a life expectancy of eighty years? I'll posit that it's more than five years and less than fifty, but beyond that I wouldn't care to try to choose.

    Let's step away from outright torture and look at something different: solitary confinement. How long does a person have to be locked in a room against his or her will before it rises to a level that would have a non-zero disutility you could multiply by 3^^^3 to get a higher disutility than that of a single person (with a typical, present-day human lifespan) locked up that way for fifty years? I'm thinking, off the top of my head, that non-zero disutility on that scale would arise somewhere between 12 and 24 hours.

    comment by aspera · 2012-11-16T20:06:02.323Z · LW(p) · GW(p)

    The idea that the utility should be continuous is mathematically equivalent to the idea that an infinitesimal change on the discomfort/pain scale should give an infinitesimal change in utility. If you don't use that axiom to derive your utility function, you can have sharp jumps at arbitrary pain thresholds. That's perfectly OK - but then you have to choose where the jumps are.
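
    (For concreteness, a minimal sketch of a per-person disutility function with one such jump; the threshold and the jump size are arbitrary placeholders, which is exactly the point about having to choose them.)

    ```python
    def per_person_disutility(pain):
        """Disutility with a sharp jump at an arbitrarily chosen pain threshold.
        Below the threshold ('dust-speck territory') it is exactly zero; at the
        threshold it jumps to a finite value. Both constants are placeholders."""
        PAIN_THRESHOLD = 0.01      # arbitrary point on a notional [0, 1] pain scale
        JUMP = 0.001               # arbitrary size of the discontinuity
        if pain < PAIN_THRESHOLD:
            return 0.0
        return JUMP + pain         # any increasing function past the jump would do

    print(per_person_disutility(0.009), per_person_disutility(0.011))   # 0.0 vs ~0.012
    ```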

    Replies from: shminux, mantis
    comment by Shmi (shminux) · 2012-11-16T21:25:14.756Z · LW(p) · GW(p)

    then you have to choose where the jumps are

    It could be worse than that: there might not be a way to choose the jumps consistently, say, to include different kinds of discomfort, some related to physical pain and others not (tickling? itching? anguish? ennui?)

    comment by mantis · 2012-11-21T20:54:13.758Z · LW(p) · GW(p)

    I think that's probably more practical than trying to make it continuous, considering that our nervous systems are incapable of perceiving infinitesimal changes.

    Replies from: aspera
    comment by aspera · 2012-11-23T05:40:16.850Z · LW(p) · GW(p)

    Yes, we are running on corrupted hardware at about 100 Hz, and I agree that defining broad categories to make first-cut decisions is necessary.

    But if we were designing a morality program for a super-intelligent AI, we would want to be as mathematically consistent as possible. As shminux implies, we can construct pathological situations that exploit the particular choice of discontinuities to yield unwanted or inconsistent results.

    comment by [deleted] · 2012-09-09T19:27:01.281Z · LW(p) · GW(p)

    If getting hit by a dust speck has u = 0, then air pressure great enough to crush you has u = 0.

    Replies from: mantis
    comment by mantis · 2012-09-11T04:20:20.581Z · LW(p) · GW(p)

    Nope, that doesn't follow; multiplication isn't the only possible operation that can be applied to this scale.

    comment by mantis · 2012-09-09T19:18:28.707Z · LW(p) · GW(p)

    Incidentally, I think that if you pick "dust specks," you're asserting that you would walk away from Omelas; if you pick torture, you're asserting that you wouldn't.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2012-09-09T20:18:17.239Z · LW(p) · GW(p)

    The kind of person who chooses an individual suffering torture in order to spare a large enough number of other people lesser discomfort endorses Omelas. The kind of individual who doesn't endorse it not only walks away from Omelas, but wants it not to exist at all.

    Replies from: Kindly
    comment by Kindly · 2012-09-09T22:09:34.088Z · LW(p) · GW(p)

    This is exactly what bothered me about the story, actually. You can choose to help the child and possibly doom Omelas, or you can choose not to, for whatever reason. But walking away doesn't solve the problem!

    Replies from: NancyLebovitz, TheOtherDave, mantis
    comment by NancyLebovitz · 2012-09-09T22:28:06.090Z · LW(p) · GW(p)

    It certainly doesn't. However, it shows more moral perceptiveness than most people have.

    comment by TheOtherDave · 2012-09-09T23:25:12.438Z · LW(p) · GW(p)

    Well, it depends on the nature of the problem I've identified. If I endorse Omelas, but don't wish to partake of it myself, walking away solves that problem. (I endorse lots of relationships I don't want to participate in.)

    Replies from: Kindly
    comment by Kindly · 2012-09-09T23:33:56.849Z · LW(p) · GW(p)

    That's not a moral objection, that's a personal preference.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2012-09-09T23:38:39.706Z · LW(p) · GW(p)

    Yes, that's true. It's hard to have a moral objection to something I endorse.

    comment by mantis · 2012-09-10T17:23:34.480Z · LW(p) · GW(p)

    True. On reflection, it's patently obvious that the Less Wrong way to deal with Omelas is not to accept that the child's suffering is necessary to the city's welfare, and dedicate oneself to finding the third alternative. "Some of them understand why," so it's obviously possible to know what the connection is between the child and the city; knowing that, one can seek some other way of providing whatever factor the tormented child provides. That does mean allowing the suffering to go on until you find the solution, though -- if you free the child and ruin Omelas, it's likely too late at that point to achieve the goal of saving both.

    comment by aspera · 2012-11-07T22:41:19.364Z · LW(p) · GW(p)

    Bravo, Eliezer. Anyone who says the answer to this is obvious is either WAY smarter than I am, or isn't thinking through the implications.

    Suppose we want to define Utility as a function of pain/discomfort on the continuum of [dust speck, torture] and including the number of people afflicted. We can choose whatever desiderata we want (e.g. positive real valued, monotonic, commutative under addition).

    But what if we choose as one desideratum, "There is no number n large enough such that Utility(n dust specks) > Utility(50 yrs torture)"? What does that imply about the function? It can't be analytic in n (even if n were continuous). That rules out multiplicative functions trivially.

    Would it have singularities? If so, how would we combine utility functions at singular values? Take limits? How, exactly?

    Or must dust specks and torture live in different spaces, and is there no basis that can be used to map one to the other?

    The bottom line: is it possible to consistently define utility using the above desideratum? It seems like it must be so, since the answer is obvious. It seems like it must not be so, because of the implications for the utility function as the arguments change.

    Edit: After discussing with my local meetup, this is somewhat resolved. The above desiderata require the utility to be bounded in the number of people, n. For example, it could be a saturating exponential function. This is self-consistent, but inconsistent with the notion that because experiences are independent, utilities should add.

    Interestingly, it puts strict mathematical rules on how utility can scale with n.
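
    (A minimal sketch of such a saturating form, assuming for illustration that the torture is normalized to disutility 1 and that the speck aggregate saturates below it; the scale and cap parameters are arbitrary placeholders, not a proposed exchange rate.)

    ```python
    import math

    TORTURE_DISUTILITY = 1.0          # normalize the torture to disutility 1

    def total_speck_disutility(n, scale=1e9, cap=0.9):
        """Bounded ('saturating exponential') aggregate disutility of n dust specks.
        Because cap < 1, no finite n ever reaches the torture's disutility, yet each
        extra speck still adds a strictly positive amount. Parameters are placeholders."""
        return cap * (1 - math.exp(-n / scale))

    for n in (1, 10**6, 10**12, 10**100):
        print(total_speck_disutility(n) < TORTURE_DISUTILITY)   # True for every n
    ```

    (The price, as the edit notes, is that the marginal disutility of a speck now depends on how many other specks there already are, which is exactly the break with additivity.)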

    comment by [deleted] · 2012-12-18T23:24:05.529Z · LW(p) · GW(p)

    To me, this experiment shows that absolute utilitarianism does not make a good society. Conversely, a decision between, say, person A getting $100 and person B getting $1, or both of them getting $2, shows absolute egalitarianism isn't satisfactory either (assuming simple transfers are banned). Perhaps the inevitable realization is that some balance between the two is needed, and that a weighted sum (the sum reflecting utilitarianism) with more weight applied to those who have less (the weighting reflecting egalitarianism) can provide such a balance?

    Replies from: BerryPick6
    comment by BerryPick6 · 2012-12-18T23:42:23.572Z · LW(p) · GW(p)

    To me, this experiment shows that absolute utilitarianism does not make a good society.

    I don't see how you've arrived at that at all. Would you mind elaborating?

    Replies from: None
    comment by [deleted] · 2012-12-18T23:56:23.961Z · LW(p) · GW(p)

    To choose torture rather than dust specks is the utilitarian option, maximizing the total sum of subjective utility. This, however, inflicts extreme pain on one person merely to spare everyone else a negligible inconvenience. Anyone who picks dust specks is agreeing that utilitarianism is not always right (in fact, Eliezer says in his follow-up to this that in doing so, one rejects a certain kind of utilitarianism). If you chose torture though, I can see why you'd feel otherwise.

    Replies from: BerryPick6
    comment by BerryPick6 · 2012-12-19T00:00:22.686Z · LW(p) · GW(p)

    Where's your argument to the effect that absolute utilitarianism does not make a good society? Further, could you taboo "good society" while you're at it?

    Replies from: None
    comment by [deleted] · 2012-12-19T00:04:38.154Z · LW(p) · GW(p)

    Right, I should have said "is not optimal" rather than "does not make a good society". My basic point being that if we agree that dust specks are best (which I admit we're not in unanimity about), we reject utilitarianism as an optimal allocation rule. I do not discredit it as a whole (i.e. utilitarianism still has some merit as a guideline), but if we reject it even once, "absolute utilitarianism" (the belief that it is always optimal) cannot hold.

    Replies from: BerryPick6
    comment by BerryPick6 · 2012-12-19T00:09:25.660Z · LW(p) · GW(p)

    So your basic contention is: "If you agree that dust specks is the answer, you can't say that torture is the answer"?

    This sounds fairly obvious.

    Replies from: None
    comment by [deleted] · 2012-12-19T00:14:39.244Z · LW(p) · GW(p)

    Heh, no I'm not saying that if X holds then ~X fails to hold; I expect that to also be the case, but that's not what I'm saying. I'm saying that we (those of us who chose dust specks) have chosen to reject utilitarianism and are proposing an alternative, since we can't merely choose nonapples over apples.

    Replies from: BerryPick6
    comment by BerryPick6 · 2012-12-19T00:25:27.529Z · LW(p) · GW(p)

    Heh, no I'm not saying that if X holds then ~X fails to hold.

    I had a feeling you weren't. :)

    I'm saying that we (those of us who chose dust specks) have chosen to reject utilitarianism and are proposing an alternative, since we can't merely choose nonapples over apples.

    Yes, that's accurate. If you take utilitarianism to its logical conclusion, you reach things like Torture in T v. DS problems. This conversation reminds me a lot of the excellent book "The Limits of Morality."

    I'd be curious as to why anyone would choose to reject utilitarianism on the basis of this thought experiment, though.

    Replies from: None
    comment by [deleted] · 2012-12-19T00:33:48.616Z · LW(p) · GW(p)

    Then it seems we've reached an agreement, as Aumann's agreement theorem says we should. And yes, this is a thought experiment; it is unlikely that anyone will ever have to choose between such extremes (or that 3^^^3 people will ever exist, at once or even in total). However, whether real or not, if one rejects utilitarianism here, they can't simply say "Well it works in all real scenarios though". Eliezer could have just as easily mentioned a utility monster, but he felt like conveying the same thought experiment in a more original way.

    Replies from: BerryPick6, ArisKatsaris
    comment by BerryPick6 · 2012-12-19T00:39:09.894Z · LW(p) · GW(p)

    However, whether real or not, if one rejects utilitarianism here, they can't simply say "Well it works in all real scenarios though". Eliezer could have just as easily mentioned a utility monster, but he felt like conveying the same thought experiment in a more original way.

    Right. I'm just unclear as to why people (not you specifically, I just meant it generally in my previous comment) interpret these kinds of stories as criticisms of utilitarianism. They are simply taking the axioms to their logical extremes, not offering arguments against accepting those axioms in the first place.

    Replies from: None
    comment by [deleted] · 2012-12-19T00:46:43.299Z · LW(p) · GW(p)

    Ah, well if that's the point you're making then yes, you're indeed correct. Eliezer has by no means argued that utilitarianism is entirely wrong, just shown that its logical extreme is wrong (which may or may not have been his intention). If you're arguing that others are seeing this in a different way than we agreeably have, and have interpreted this article in a different way than is rational...well, you may also have a point there. It's not particularly surprising though, since there are dozens (perhaps hundreds) of ways to succumb to 1 or more fallacies and only 1 way to succumb to none.

    comment by ArisKatsaris · 2012-12-19T03:44:04.021Z · LW(p) · GW(p)

    First of all, I am for the torture - so are 22.1% of the people recently surveyed vs 36.8% who are for the dust specks -- the rest don't want to respond or are unsure.

    Secondly, the issue of small dispersed disutilities vs large concentrated ones is one we constantly encounter in the real world, and time after time society accepts that, for the purpose of e.g. the convenience of driving, we can tolerate the unavoidable tradeoff of occasional traffic accidents, and that we don't sacrifice every tiny little luxury just to gather resources to save a single extra life. If you had to break 7 billion legs to save a single man from being tortured, most people would not consider that tradeoff acceptable.

    Once this logic is in place, all that remains is the scope insensitivity where people can't really intuit the vast size of 3^^^3.

    comment by khriys · 2013-02-07T10:53:12.826Z · LW(p) · GW(p)

    I would suggest the answer is fairly obviously that one person be horribly tortured for 50 years, on the grounds that the idea "there exists 3^^^3 people" is incomprehensible cosmic horror even before you add in the mote of dust.

    Replies from: ygert
    comment by ygert · 2013-02-07T12:15:53.771Z · LW(p) · GW(p)

    I am not so sure the existence of 3^^^3 people is a bad thing, but even granting that, assume that the 3^^^3 people exist regardless, and the two choices you have are: a) one of them is tortured for 50 years, or b) each and every one of them gets a mote of dust in the eye.

    In general, if you find an objection to the premises of a question that does not directly impact the "point" of the question, you should find a variant of the premises that removes that objection, and answer the variant of the question with that as the premise. See The Least Convenient Possible World.

    Replies from: khriys
    comment by khriys · 2013-02-07T13:13:44.227Z · LW(p) · GW(p)

    Wait, does the original question simplify to:

    "[There exists 3^^^3 people] AND [of the set of all people there exists one that is tortured for 50 years OR of the set of all people, all get a mote of dust in the eye; which would you prefer]"?

    Because that would be quite different to:

    "[of the set of all people there exists one person who will be tortured for 50 years] OR [there exists 3^^^3 people AND each of them gets a mote of dust in the eye]; which would you prefer?"

    I answered the latter.

    Replies from: ArisKatsaris
    comment by ArisKatsaris · 2013-02-07T13:17:34.570Z · LW(p) · GW(p)

    The point of the question was to ask us to judge between the disutility of many people dust specked and a single person tortured, not to place a value on whether 3^^^3 existences is itself a bad or a good thing.

    So, kinda the former interpretation, except that the "3^^^3 people" part is merely the setting that enables the question, not really the point of the question...

    EDIT: Btw, since I'm an anti-specker, I tried to calculate an upper bound once, for the number of specks... It ended up being about 1.2 * 10^20 dust specks

    Replies from: khriys
    comment by khriys · 2013-02-07T13:36:39.657Z · LW(p) · GW(p)

    Surely the incomprehensibly large number is part of the point of the question, otherwise why not use the set of all existing people being dust specked? ~7 billion dustmoted vs. 1 tortured?

    3^^^3 people is more sentient mass than could physically fit in our universe.

    Edit: Here's how I imagined that playing out: 3^^^3 people are brought into existence, displacing all the matter of the universe. Each of them, while still momentarily conscious, gets a mote of this matter in their eye, causing minor discomfort. They then all immediately die, and in the following eternity their bodies and the remainder of the universe collapse to a single point.

    Replies from: ArisKatsaris
    comment by ArisKatsaris · 2013-02-07T14:13:12.647Z · LW(p) · GW(p)

    Surely the incomprehensibly large number is part of the point of the question, otherwise why not use the set of all existing people being dust specked? ~7 billion dustmoted vs. 1 tortured?

    Because 7 billion dust specks aren't enough. Obviously.

    The point of the question is an extremely large number of tiny disutilities compared to a single vast disutility. When you're imagining 3^^^3 deaths instead and the destruction of the universe, you're kinda missing the point.

    Replies from: khriys
    comment by khriys · 2013-02-07T14:19:57.135Z · LW(p) · GW(p)

    What about 7 billion stubbed toes?

    Replies from: ArisKatsaris
    comment by ArisKatsaris · 2013-02-07T20:02:14.139Z · LW(p) · GW(p)

    A few posts up, I've already linked to some calculations about various scenarios. You can look at them, if you are really genuinely interested - but why would you be? It's the principle of the thing that's interesting, not some inexact numbers one roughly calculates.

    comment by ekramer · 2013-02-09T13:26:38.673Z · LW(p) · GW(p)

    I have been reading Less Wrong for about 6 months, but this is my first post. I'm not an expert but an interested amateur. I read this post about 3 weeks ago and thought it was a joke. After working through replies and following links, I see it is a serious question with serious consequences in the world today. I don’t think my comments duplicate others already in the thread, so here goes…

    Let’s call this a “one blink” discomfort (it comes and goes in a blink) and let’s say that on average each person gets one every 10 minutes during their waking hours. In reality it is probably more, but you forget about such things almost as quickly as they happen. If it is “right” to send one person to 50 years of torture to save 3^^^3 people from a one-blink discomfort, it is right to send 6 people per hour to the same fate for each of the sixteen waking hours per day, for a total of 96 people per day and 35,040 people each year.

    And if it is right to send 35,040 to 50 years of torture to save 3^^^3 people for a single year from all one-blink discomforts, then it is right to send another 35,040 to the same fate in order to save 3^^^3 people from all two-blink discomforts, assuming each person on average gets a two-blink discomfort (it comes and goes in two blinks) thrice per hour. We have now saved the multiverse from all one- and two-blink discomforts for one year at the cost of sending 70,080 people to 50 years of torture.
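
    (A quick sketch checking the arithmetic above, using only the assumptions already stated in this comment.)

    ```python
    one_blink_per_hour = 6                   # one every 10 waking minutes
    waking_hours_per_day = 16
    tortures_per_day = one_blink_per_hour * waking_hours_per_day     # 96
    tortures_per_year = tortures_per_day * 365                       # 35,040
    with_two_blink = 2 * tortures_per_year                           # 70,080
    after_50_years = 50 * with_two_blink                             # 3,504,000 ("over 3.5 million" below)
    print(tortures_per_day, tortures_per_year, with_two_blink, after_50_years)
    ```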

    By the time the first person comes out of torture 50 years later, over 3.5 million people will have followed them into the torture chambers to save everyone else from discomforts lasting two blinks or less. If you follow the logic through from one blink up to each person getting one mild cold or some such per year that inflicts 2 days of discomfort, you are sending untold trillions to 50 years of torture every year. It seems to me a significant minority of the population would be in the torture chambers at any one time to save the majority from discomfort.

    My question to the torturers is how far are you willing to take your logic before you look at yourself in the mirror and see a monster?

    Replies from: BerryPick6, Kindly
    comment by BerryPick6 · 2013-02-09T14:00:54.786Z · LW(p) · GW(p)

    What makes you think that making the numbers bigger changes anything? Anyone who switches answers between the original question and yours is confused.

    Replies from: ekramer
    comment by ekramer · 2013-02-10T12:02:45.103Z · LW(p) · GW(p)

    So you would be willing to keep sending more and more people to torture, gaining only trivially less discomfort for the majority with each additional person tortured. At what point would you say enough is enough?

    Replies from: BerryPick6
    comment by BerryPick6 · 2013-02-10T14:45:26.905Z · LW(p) · GW(p)

    Once the positive consequences are outweighed by the negative consequences, obviously.

    comment by Kindly · 2013-02-09T15:30:18.835Z · LW(p) · GW(p)

    3^^^3 is very very very very large.

    If we're sending untold trillions of people to torture every year, out of 3^^^3 people total, that means that over the whole history of our universe, future and past, we have a vanishingly small chance of seeing a single person in our universe get taken away for torture. Two people is even more negligible. In the meantime, all discomforts up to the level of one mild cold get prevented for everyone.

    Heck, I'd be willing to absorb all the torture-probability our universe would receive for myself, just so I wouldn't have to suffer through the mild cold I'm having right now. I take a greater risk by walking down the stairs every day. Where do I sign up?

    Replies from: ekramer
    comment by ekramer · 2013-02-09T20:29:18.680Z · LW(p) · GW(p)

    OK, so you would accept less than one person per universe being tortured for 50 years in exchange for everyone avoiding occasional mild discomfort. But that doesn’t answer the question of how far you are willing to take this logic. We haven’t even begun to touch serious discomfort like half the population getting menstrual cramps every month, let alone prolonged pain and suffering. Would you send one person per planet for torture? One person per city? One person per family?

    The end result of this game is that a significant minority of people are being tortured at any one time so the majority can live lives free of discomfort, pain and suffering. So is your acceptable ratio 1:1,000,000, or 1:10?

    Replies from: Kindly
    comment by Kindly · 2013-02-09T22:12:52.123Z · LW(p) · GW(p)

    I'm pretty sure that right now more than 1 in 1,000,000 people around the world (that is, around 7000 people total) are experiencing suffering at least as bad as the hypothetical torture. Taking that into account, a ratio of 1:1,000,000 would be a strict improvement. Faced with a choice like that, I might selfishly refuse due to the chance that I would be one of the unlucky few, whereas right now I am doing pretty well compared to most people. But I would like to be the sort of person that wouldn't refuse.

    (I'm also not convinced that a life completely free of discomfort, pain, and suffering is possible or desirable; however, this objection doesn't reach the heart of the matter, so I'm willing to ignore it for the sake of argument.)

    The decision would be more difficult once we get to a ratio which does not strictly dominate our current situation. The terrible unfairness of a world where you're either free of all discomfort or being horribly tortured bothers me; for this reason, I think I wouldn't make the trade for any ratio where the total amount of suffering is roughly comparable to the status quo. I would have to do some research to give you a precise number.

    But now we are very far off from the original problem of dust specks vs. torture, in which the number 3^^^3 is specifically chosen to be sufficiently large that if you have an acceptable exchange rate at all, 1 : 3^^^3 will be acceptable to you.

    Replies from: ekramer
    comment by ekramer · 2013-02-10T11:49:56.587Z · LW(p) · GW(p)

    Don’t be bamboozled by big numbers; it is exactly the same problem: how far would you go in imposing pain on the minority in order to minimize it in the majority? As Eliezer argued so forcefully in the comments above, this problem exists on a continuum, and if you want to break the chain at any point you have to justify why that point and not another.

    Your argument for 1:1,000,000 does not go far enough in minimising pain for the majority. One person cannot take the pain of 1,000,000 people without dying or at least becoming unconscious. I suspect the maximum “other people’s pain” a person could endure without losing consciousness is broadly between 5 and 50; let’s say 25.

    So if you are willing to send one human being out of 3^^^3 people to be tortured for 50 years to remove a vanishingly small momentary discomfort for the majority, then you must also be willing to continually torture 1 in 25 people to eradicate all pain in the other 24. They are two ends of the same continuum; you cannot break the chain.

    Both instances are brutally unfair to the people tortured, but at least in the second instance the majority will lead better lives, while in the first instance not a single person is aware that they had one less blink of discomfort in their entire lifetime. So my question to the torturers remains: are you a monster for sending 1 in 25 people to be tortured?

    Replies from: Kindly
    comment by Kindly · 2013-02-10T15:35:10.376Z · LW(p) · GW(p)

    When did we start talking about someone "taking the pain of other people"? This is news to me; it wasn't part of the argument before.

    This, I understand, is the reason you're suggesting that I would torture 1 in 25 people. Well, I wouldn't torture 1 in 25 people. I have already stated that if the total amount of pain is conserved (there may be difficulties with measuring "total pain", but bear with me here) then I prefer it to be spread out evenly rather than piled onto one person.

    In the dust speck formulation, the 3^^^3 being dustspecked are, in aggregate, suffering much more than the one person being tortured. 3^^^3 is very large. For any continuum you could actually describe that ends in "torture 1 in X people so that the remainder live perfect lives", X will still be approximately 3^^^3. Possibly divided by some insignificant number like a googolplex that can be written down in mere scientific notation.

    At no point did anyone accept your 1:25 proposal.

    comment by Drolyt · 2013-03-20T06:43:20.102Z · LW(p) · GW(p)

    I definitely think it is obvious what Eliezer is going for: 3^^^3 people getting dust specks in their eyes being the favorable outcome. I understand his reasoning, but I'm not sure I agree with the simple Benthamite way of calculating utility. Popular among modern philosophers is preference utilitarianism, where the preferences of the people involved are what constitute utility. Now consider that each of those 3^^^3 people has a preference that people not be tortured. Assuming that the negative utility each individual computes for someone being tortured is larger in value than the negative utility of a speck of dust in their eyes, then even discounting the person being tortured (which of course you might as well, given the disparity in magnitude, which is more or less Eliezer's point) you would have higher utility with the flecks of dust.

    There are in fact numerous other ways to calculate the utility such that 3^^^3 people with dust flecks in their eyes is preferable to one person undergoing fifty years of torture while still preserving the essential consequentialist nature of the argument. John Stuart Mill might argue there is a qualitative difference between torture and dust flecks in your eyes that keeps you from adding them in this way, while an existentialist might argue that pain and pleasure aren't what the utility function should be computed over but something closer to "human flourishing" or "eudaimonia", and that in this calculation any number of dust flecks has zero utility while torture has a large negative utility. It all depends on how you define your utility function.

    comment by Howdy · 2013-04-29T08:30:04.267Z · LW(p) · GW(p)

    This question reminds me of the dilemma posed to medical students. It went something like this:

    if the opportunity presented itself to secretly, with no chance of being caught, 'accidentally' kill a healthy patient who is seen as wasting their life (smoking, drinking, not exercising, lack of goals, etc.) in order to harvest his/her organs and save 5 other patients, should you go ahead with it?

    From a utilitarian perspective, it makes perfect sense to commit the murder. The person who introduced me to the dilemma also presented the rationale for saying 'no'... Thankfully it wasn't "It's just wrong" or even "murder is wrong"... The answer suggested was "You wouldn't want to live in a world where doctors might regularly operate in such a manner nor would you want to be a patient in such a system... It would be terrifying".

    I suspect the key elements in the hospital and dust speck scenarios are a) someone having power over an aspect of other people's fates and b) the level of trust of those people. The net-sum calculation of overall 'good' might well suggest torture or organ harvesting as the solution, but how would you feel about nominating someone else to be the one who makes that decision... Would you want that person to favor the momentary dust-speck incident for 3^^^3 people or the 50-year torture of an individual?

    Replies from: Howdy
    comment by Howdy · 2013-04-29T14:03:38.679Z · LW(p) · GW(p)

    I think this is an important thing to consider if we intend to make benevolent AIs that are harmonious with our own sense of morality.

    Replies from: MugaSofer
    comment by MugaSofer · 2013-04-29T14:27:19.731Z · LW(p) · GW(p)

    Depends on whether we intend to use them as doctors or superintelligent gods, doesn't it?

    comment by Free_NRG · 2013-04-29T08:39:40.209Z · LW(p) · GW(p)

    I used to think that dust specks were the obvious answer. Then I realized that I was adding follow-on utility to torture (inability to do much else due to the pain) but not the dust specks (car crashes etc due to the distraction). It was also about then that I changed from two-boxing to one-boxing, and started thinking that wireheading wasn't so bad after all. Are opinions on these three usually correlated like this?

    Replies from: MugaSofer
    comment by MugaSofer · 2013-04-29T08:52:36.371Z · LW(p) · GW(p)

    Then I realized that I was adding follow-on utility to torture (inability to do much else due to the pain) but not the dust specks (car crashes etc due to the distraction).

    Perhaps a better analogy would be dust specks that are only slightly distracting, detracting from whatever you were doing but not enough to cause you to make tangible mistakes, versus torturing somebody who's flying a plane at the time.

    In other words, this "follow-on utility" should be separated from opportunity costs, shouldn't it?

    comment by Jiro · 2013-05-08T19:55:57.717Z · LW(p) · GW(p)

    I would suggest that torture has greater and greater disutility the larger the size of the society. So given a specific society of a specific size, the dust specks can never add up to more suffering than the torture; the greater the number of dust specks possible, the greater the disutility of the torture, and the torture will always come out worse.

    If you're comparing societies of different sizes, it may be that the society with the dust specks has as much disutility as the society with the torture, but this is no longer a choice between dust specks and torture; it's a choice between dust specks+A and torture+B, and it's not so counterintuitive that I might prefer torture+B.

    As for why I have such an odd utility function as "torture is worse in a larger society"? I'm trying to derive my utility function from my preferences and this is what I come up with--I'm not choosing a utility function as a starting point.

    Replies from: shminux
    comment by Shmi (shminux) · 2013-05-08T20:17:28.107Z · LW(p) · GW(p)

    I'm trying to derive my utility function from my preferences and this is what I come up with--I'm not choosing a utility function as a starting point.

    Any utility function runs into a repugnant conclusion of one type or another. I wonder if there is a theorem to this effect, following from transitivity + continuity. Yours is no exception.

    For example, in your case of the disutility of torture growing larger with the size of the society, doesn't the disutility of dust specks grow both with the number of people subjected to it and the society's size? If not, how about the intermediate disutilities, that of a stubbed toe, a one-minute-long agony, and up and up slowly until you get to the full-blown 50 years of torture? Where is this magic boundary between the society-size-independent disutility of specks and the scaling-up disutility of torture?

    Replies from: Jiro
    comment by Jiro · 2013-05-08T21:44:13.930Z · LW(p) · GW(p)

    As I noted, I'm trying to compute my utility function from my preferences, not the other way around. So in response to that I'd refine the utility function a bit: My new utility function has two terms, the main term and an inequality term. While my original statement that torture has a term based on the size of the society is still true, it is true because increasing the size of the society and still torturing 1 person means more inequality.

    doesn't the disutility of dust specks grow both with the number of people subjected to it and the society's size?

    The extra term applies to the dust specks as well, but I don't think this is a problem.

    In the original problem, everyone gets a dust speck, so there's no inequality term. The torture does have an inequality term and ends up always worse than the dust specks.

    If you want to move towards intermediate values by increasing the main term and keeping the inequality term constant, thus increasing the dust specks to stubbed toes and the like, you'll eventually come to some point where it exceeds the torture. But at that point they won't be dust specks--instead you'll decide that, for instance, many people suffering 1 day of torture will be worse than one person suffering 50 years of torture. I can live with that result.

    If you want to move towards intermediate values by increasing the inequality term and keeping the main term constant, you would "clump up" the dust specks, so one person receives many dust specks worth of disutility. If you keep doing this, you might eventually exceed the torture as well--but again, at the point where you exceed the torture, you won't have dust specks any more, you'll have larger clumps and you'll say that many clumps (equivalent to 1 day of torture each, for instance) can exceed one person getting 50 years. Again, I can live with that result.

    If you want to move towards intermediate values by increasing the inequality term and not bothering to keep the population constant, adding more people (in a way that is otherwise neutral if you ignore the inequality term) would increase the disutility. I haven't worked out if this requires being able to increase the disutility beyond that of torture, but as I noted above, that would be a case of dust specks+A compared to torture+B, and having either of those quantities be greater wouldn't surprise me.

    This is a type of variable value principle and avoids the Repugnant Conclusion itself, but may allow for a variety of Sadistic Conclusion, since adding some tortured people can be better than adding a larger number of well-off people. However, I would argue that despite the name "Sadistic", this should be okay: I am not claiming that adding tortured people is good, just that it is bad but less bad than the other choice. And the other choice is bad because the decrease in total utility from adding more people and increasing inequality overwhelms the increase in total utility from those new people living good lives.

    comment by selylindi · 2013-05-14T19:45:28.586Z · LW(p) · GW(p)

    There are many ways of approaching this question, and one that I think is valuable and which I can't find any mention of on this page of comments is the desirist approach.

    Desirism is an ethical theory also sometimes called desire utilitarianism. The desirist approach has many details for which you can Google, but in general it is a form of consequentialism in which the relevant consequences are desire-satisfaction and desire-thwarting.

    Fifty years of torture satisfies none and thwarts virtually all desires, especially the most intense desires, for fifty years of one individual's life, and for most of the subsequent years of life as well, due to extreme psychological damage. Barely noticeable dust specks neither satisfy nor thwart any desires, and so in a population of any finite size the minor pain is of no account whatever in desirist terms. So a desirist would prefer the dust specks.

    The Repetition Objection: If this choice were repeated, say, a billion times, then the lives of the 3^^^3 people would become unlivable due to constant dust specks, and so at some point it must be that an additional individual tortured becomes preferable to another dust speck in 3^^^3 eyes.

    The desirist response bites the bullet. Dust specks in eyes may increase linearly, but their effect on desire-satisfaction and desire-thwarting is highly nonlinear. It's probably the case that an additional torture becomes preferable as soon as the expected marginal utility of the next dust speck is a few million desires thwarted, and certainly the case when the expected marginal utility of the next dust speck is a few billion desires thwarted.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2013-05-14T20:48:19.313Z · LW(p) · GW(p)

    Can you clarify your grounds for claiming that barely noticeable dust specks neither satisfy nor thwart any desires?

    Replies from: selylindi
    comment by selylindi · 2013-05-14T22:31:16.144Z · LW(p) · GW(p)

    Ah, yeah, that could be a problematic assumption. The grounds for my claim were generalization from my own experience. I have no consciously accessible desires which are affected by barely noticeable dust specks.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2013-05-14T23:26:43.942Z · LW(p) · GW(p)

    Fair enough. I don't know what desirism has to say about consciously inaccessible desires, but leaving that aside for now... can you name an event that would thwart the most negligible desire to which you do have conscious access?

    Replies from: selylindi
    comment by selylindi · 2013-05-15T04:38:54.503Z · LW(p) · GW(p)

    I have a high tolerance for chaotic surroundings, but even so I occasionally experience a weak, fleeting desire to impose greater order on other people's belongings in my physical environment. It could be thwarted by an event like a fly buzzing around my head once, which though not painful at all would divert my attention long enough to ensure that the desire died without having been successfully acted on.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2013-05-15T12:24:32.151Z · LW(p) · GW(p)

    OK. So, if we assume for simplicity that a fly-buzzing event is the smallest measurable desire-thwarting event a human can experience, you can substitute "fly-buzz" for "dust speck" everywhere it appears here and translate the question into a desirist ethical reference frame.

    The question in those terms becomes: is there some number of people, each of whom is experiencing a single fly-buzz, where the aggregated desire-thwarting caused by that aggregate event is worse than a much greater desire-thwarting event (e.g. the canonical 50 years of torture) experienced by one person?

    And if not, why not?

    Replies from: selylindi
    comment by selylindi · 2013-05-16T15:52:46.752Z · LW(p) · GW(p)

    Well, yes, but then as stated earlier I think desirism bites the bullet on "dust speck", too, given more dust specks. For a quick Fermi estimate, if I suppose that the fly-buzz scenario takes about 5 seconds and is 1/1000th as strong (in some sense) as the desire not to be tortured for 5 seconds, then the number of people at which the fly-buzz scenarios outweigh the torture is about a half trillion.
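
    (The same Fermi estimate as a quick sketch; every input is an assumption stated just above.)

    ```python
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    torture_seconds = 50 * SECONDS_PER_YEAR       # 50 years of torture, in seconds
    buzz_seconds = 5                              # one fly-buzz scenario
    relative_strength = 1 / 1000                  # a buzz is 1/1000th as desire-thwarting

    n = (torture_seconds / buzz_seconds) / relative_strength
    print(f"{n:.2e}")                             # ~3.2e+11: same ballpark as 'about a half trillion'
    ```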

    Granted, for people who don't find desirism intuitive, this altered scenario changes nothing about the argument. I personally do find desirism intuitive, though unlikely to be a complete theory of ethics. So for me, given the dilemma between 50 years of torture of one individual and one dust-speck-in-eye or one fly-buzz-distraction for each of 3^^^3 people, I have a strong gut reaction of "Hell yes!" to preferring the specks and "Hell no!" to preferring the distractions.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2013-05-16T16:25:21.362Z · LW(p) · GW(p)

    Ah. I think I misunderstood you initially, then. Thanks for the clarification.

    comment by [deleted] · 2013-05-20T00:25:16.485Z · LW(p) · GW(p)

    Forgive me for posting on such an old topic, but I've spent the better part of the last few days thinking about this and had to get my thoughts together somewhere. But after some consideration, I must say that I side with the "speckers" as it were.

    Let us do away with "specks of dust" and "torture" notions in an attempt to avoid arguing the relative value one might place on either event (i.e. - "rounding to 0/infinity"), and instead focus on the real issue. Replace torture with "event A" as the single most horrific event that can happen to an individual. Replace dust motes with "event B" as the least inconvenience that can still be considered an inconvenience to an individual.

    Similarly, let us do away with the notion of reasoning about a googol, or 3^^^3, or 3^^^^3, as our brains treat each of these numbers as a featureless conglomeration, regardless of how well we pretend to understand the differences in magnitude. Instead, replace this with "n", an arbitrarily large number.

    The question then becomes: Is it better to subject a single individual to event A, or n individuals to event B?

    The utilitarian argument supposes that this question can be equivalently stated as such: Is the total disutility of subjecting n individuals to event B greater than subjecting a single individual to event A?

    This seems reasonable enough, given a sufficiently "good" definition of utility. Let us assume that these statements are equivalent and proceed from here.

    Let "x" be the disutility value of event A, and "y" be the disutility value of event B. How can we compare x and y? Intuitively, it is obvious that enough additions of "y" would at the very least approach "x". That is, if you were to subject a single individual to event B often enough and for long enough, this would approach being as "bad" as subjecting that same individual to event A. Let "k" be such a number of additions, however large it may be. Thus we have x ~= ky, or y ~= x/k.

    But how exactly do we measure x? Does it even have a fixed value? Is the definition of event A even consistent across all individuals (or even the definition of event B, for that matter)? Perhaps, perhaps not. But interestingly enough, we've already found a reasonable "fixed" definition for event B. It is simply the most trivial inconvenience to which an individual can be subjected which, if repeated enough times, would be approximately equivalent to subjecting them to event A.

    So let's choose a scale where event A has disutility 1 for a given individual. Now event B has disutility 1/k for that same individual. The scale may change from one individual to another, but let's make the assumption that this variance is massively dwarfed by the magnitude of "k", which again seems reasonable. In other words, the difference between the worst that could happen and the most trivial bad thing that can happen is so great that any variance in an individual's definition of the worst event is trivial in comparison. "Sacred" vs "mundane", if you will. At least now we're only working in one variable.

    We also want to compare the utility of both situations over a population. That is, is it better for a single individual to have a disutility value of 1, or for n individuals to have disutility 1/k? And this is where a second problem arises. How exactly does one distribute a utility value across a population? It might be tempting to assume it just divides evenly into the population and is additive across individuals. For instance, one person stubbing their toe twice in a day is approximately equivalent to two people each stubbing their toe once. This may hold for the small-scale scenarios we are used to dealing with, but I'm not certain it holds at larger scales.

    One questionable example is that of wealth distribution amongst a nation. This is a very complex and nuanced subject, but the underlying issues can be expressed in relatively simple terms. Assume utility here is directly proportional to wealth. If we want to maximize the average wealth of the nation, we could have a plethora of distributions where everyone is in poverty except for a small percentage, who have vast expanses of wealth. This is an entirely valid solution--if we are trying only to maximize average utility.

    But certainly, a good measure of utility should also take into account the status of each person with respect to the whole. Few would argue that a system where over half the population exists in poverty is better than a system with almost no poverty. But again, perhaps it is desirable to have some disparity in such a distribution, to entice people to work harder and to contribute more to society as a whole with the prospect of increasing their personal wealth. Perhaps this lends to a more sustainable system.

    It is for this reason that I believe a "good" function should not only attempt to maximize the average utility, but also minimize the (negative) deviation away from the group average for each individual--of course taking into account other constraints regarding sustainability, stability, etc.

    So let us now consider a "good" utility function that takes as parameters (1) the population size and (2) a list of the utility scores of each individual in that population. Since it's all the same in this example, we'll just represent (2) as a single number. Let us call this function F. We can restate the question entirely in mathematical terms.

    Is F(n, 1/k) > F(1, 1)?

    Perhaps. Perhaps not. It depends mostly on what utility function would be considered "good" in this instance. What no one would disagree on, however, is that:

    F(n, k/k) = F(n, 1) >> F(1, 1) for n > 1.

    Also, F(n, (k-1)/k) >> F(1, 1).

    And you can continue this pattern onwards. Consider the general equation:

    F(n, m/k) >? F(1, 1), for 1 <= m <= k.

    There is certainly a "breaking point" at which m is large enough for the general well-being to eclipse the individual.

    In other words, there is certainly a point where subjecting each individual in an arbitrarily large population to a massively excessive amount of trivial inconveniences is morally worse than subjecting a single individual to a horrific event. But where is this "breaking point?"

    My conclusion is that it depends on the size of the population, the extent to which each individual can reasonably bear excessive trivial burdens, and what criteria are used for the function mapping utility to a population.
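
    To make that dependence on F concrete, here is a minimal sketch with two deliberately arbitrary choices of F (the functional forms, k, and n are illustrative assumptions, not claims about the "right" aggregation):

    ```python
    import math

    def F_additive(n, d):
        # Plain sum of per-person disutilities.
        return n * d

    def F_worst_off(n, d):
        # Weights the worst-off individual heavily, with only a weak
        # (logarithmic) dependence on how many people suffer.
        return d * (1 + math.log10(n))

    k = 10**6     # assumed: a million event-Bs ~ one event A for a single person
    n = 10**30    # assumed population size (still nothing next to 3^^^3)

    def breaking_point(F):
        # Smallest m (scanning powers of ten, 1 <= m <= k) at which
        # F(n, m/k) exceeds F(1, 1).
        for exp in range(0, 7):
            m = 10**exp
            if F(n, m / k) > F(1, 1.0):
                return m
        return None

    print("additive F:", breaking_point(F_additive))    # 1: specks already worse
    print("worst-off F:", breaking_point(F_worst_off))  # 100000: needs far more
    ```

    With the plain additive F the breaking point sits at m = 1, while the worst-off-weighted F needs a vastly larger m, which is exactly the sense in which the answer hinges on the choice of aggregation.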

    I personally find it very hard to swallow that it would ever be a good idea to allow one individual to suffer immeasurably rather than to subject an immeasurably large population to trivial suffering. I would suspect the "breaking point" in the example given would be somewhere between having everyone in the population stub a toe, and having everyone in the population lose a toe.

    A relatable example would be distributing stress in a building. It would generally be a better design to allow for each individual piece in the building to be stressed trivially to compensate for one piece bearing a disproportionate load, than to allow for any given piece to break away as necessary to prevent the rest from bearing a trivial load. Certainly there is a point, however, that it becomes undesirable to unnecessarily compromise the overall structure (or perhaps just to introduce unacceptable risks) for the sake of a single piece. Pieces are ultimately replaceable. The whole structure, however, is not.

    Is this an instance of me being irrational due to some form of scale insensitivity? Possibly. But to err is human, and I would rather err on the side of compassion than on that of cold calculation. I would also say that some caution should be taken when working with large scales and with continuums. It may be just as irrational to disregard our intuitions in the face of the unknown as to cling blindly to them.

    Replies from: None, None
    comment by [deleted] · 2013-05-20T00:28:17.623Z · LW(p) · GW(p)

    To flip the question on its head:

    Would it be morally acceptable for an immeasurably large population of individuals to allow a single individual to be mercilessly tortured if it would spare the entire population some trivial inconvenience?

    Replies from: ArisKatsaris
    comment by ArisKatsaris · 2013-05-20T03:16:01.271Z · LW(p) · GW(p)

    I think that example triggers our "no, it would be immoral" intuition, because an immoral population would make the choice against the trivial inconvenience with even greater ease. So, their saying "yes, do please allow some individual to be mercilessly tortured" functions as Bayesian evidence in support of their immorality.

    But if you had a large population of people decide between a trivial inconvenience for a different large population of people vs a single individual selected from their own midst to be mercilessly tortured, I'm guessing that the moral intuition would be the exact opposite, and it would feel immoral for this population to condemn a different large population to such an inconvenience just to benefit one of their own.

    Replies from: None
    comment by [deleted] · 2013-05-20T06:06:58.700Z · LW(p) · GW(p)

    So you're saying it is potentially immoral if the group themselves decide to make the decision, but potentially moral if an outsider of the group makes the exact same decision?

    Replies from: ArisKatsaris
    comment by ArisKatsaris · 2013-05-20T09:24:46.834Z · LW(p) · GW(p)

    No, I'm not saying that. Don't start with the ill-defined concepts of "moral" and "immoral" -- start from the undisputed reality of the matter that people pass moral judgements on actions they hear about.

    So I'm saying that when Alice hears of
    X: group A choosing to sacrifice one of their own rather than inconvenience group B
    Alice is likely to pass a different moral judgement of that choice than if Alice hears of
    Y: group A choosing to sacrifice a member of group B rather than inconvenience themselves.

    Even though utilitarianism would argue that actions X and Y are equally moral taken by themselves, actions X and Y provide different evidence about whether group A is really acting on moral principles. So if the evolutionary purpose for our moral intuitions is to e.g. identify people as villains or not, action Y triggers our moral intuitions negatively and action X triggers our moral intuitions positively. Because at a deeper level the real purpose of judging the deed is to judge the doer.

    comment by [deleted] · 2013-10-22T08:55:35.970Z · LW(p) · GW(p)

    So after a lot of thought, and about 5 months spent reading articles on this site, I think I can see the big picture a little more clearly now. Imagine having a really large collection of grains of sand that are all suspended in the air in the shape of a flat disk. Imagine, too, that it takes energy to move any single grain in the collection upwards or downwards, but once a grain is moved, it stays put unless moved again.

    Just conceptually let grains of sand represent people and grain movement upwards/downwards represent utility/disutility.

    What Eliezer is arguing is that, assuming it takes the same amount of energy to move each individual grain of sand, then clearly it takes far less energy to move a single grain of sand very far downward than to move every grain of sand just slightly downward.

    What I initially objected to, and what I was trying to intuit through in my first post, is that perhaps it is the case that the energy required to move a single grain of sand is not constant. Perhaps it increases with distance from the disk. I still hold to this objection.

    Even so, it is still a valid conclusion that moving a single grain far enough downwards can require less energy than moving every grain slightly downwards; increasing the number of grains of sand certainly affects this. No matter what the growth factor may be on the nonlinear amount of energy required to move a single grain very far from its starting point, it is still finite, and you can add enough grains of sand that the multiplicative factor of moving everything slightly downwards dwarfs the nonlinear growth factor.

    Thus, given enough people (and I do stress, enough people), it may be morally worse to subject them all to having a single dust speck enter their eye for a brief moment than to subject a single individual to torture for 50 years.
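
    A toy version of the grain picture, with an assumed quadratic cost standing in for "nonlinear energy" (nothing hinges on the particular exponent):

    ```python
    # Sketch only: the cost function is an arbitrary nonlinear assumption.
    def energy_to_move_one_grain(depth):
        # Superlinear cost: moving a grain twice as far costs four times as much.
        return depth**2

    torture_depth = 10**6    # one grain moved very far (the "torture" grain)
    speck_depth = 1e-3       # every grain nudged a tiny bit (the "speck" grains)

    one_grain_far = energy_to_move_one_grain(torture_depth)    # 1e12
    per_grain_tiny = energy_to_move_one_grain(speck_depth)     # 1e-6

    grains_needed = one_grain_far / per_grain_tiny             # 1e18
    print(f"{grains_needed:.0e} nudged grains match one deeply moved grain")
    # 1e18 grains already suffice -- and 3^^^3 is unimaginably larger than that.
    ```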

    It's just that our intuition says that, for any scale our minds are even close to capable of reasoning about, exponential/super-exponential functions (even with a tiny starting value) greatly dwarf multiplicative scaling functions.

    But this intuition cannot be accurate for scales larger than our minds are capable of reasoning about.

    I understand now: "Shut up and multiply."

    Replies from: None
    comment by [deleted] · 2013-10-22T09:05:15.321Z · LW(p) · GW(p)

    comment by fractalman · 2013-05-27T02:57:10.040Z · LW(p) · GW(p)

    3^^^3 people? ...

    I can see what point you were trying to make....I think.

    But I happen to have a significant distrust of classic utilitarianism: if you sum up the happiness of a society with a finite chance of lasting forever, and subtract the sum of all the pain, you get infinity minus infinity -- at best a conditionally convergent sum whose value depends on how you order the terms. The simplest patch is to insert a very, very, VERY tiny factor, reducing the weight of future societal happiness in your computation... Any attempt to translate that to so many people places my intuition in charge of setting up the summation.
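
    For what it's worth, that "very, very, VERY tiny factor" is just a discount factor, and it does make the sum well-defined. A minimal sketch, with the per-period bound and the discount rate as arbitrary assumptions:

    ```python
    # If per-period net happiness is bounded and period t is weighted by gamma**t,
    # the series converges instead of giving an ill-defined infinity minus infinity.
    gamma = 1 - 1e-9     # assumed discount factor, very close to 1
    bound = 100.0        # assumed bound on |happiness - pain| in any one period

    # Geometric series: sum over t of bound * gamma**t = bound / (1 - gamma)
    worst_case_total = bound / (1 - gamma)
    print(f"{worst_case_total:.1e}")   # 1.0e+11 -- huge, but finite and order-independent
    ```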

    or else, y'know, "dust specks, because that happens to include the ability to get off this planet and produce more humans than atoms in the currently visible universe".

    comment by Richard_Ngo (ricraz) · 2013-07-14T16:29:20.283Z · LW(p) · GW(p)

    It seems to me that preference utilitarianism neatly reconciles the general intuitive view against torture with a mathematical utilitarian position. If a proportion p of those 3^^^3 people have a moral compunction against people being tortured, and the remainder are indifferent to torture but have a very slight preference against dust specks, then as long as p is not very small, the overall preference would be for dust specks (and if p was very small, then the moral intuitions of humanity in general have completely changed and we shouldn't be in a position to make any decisions anyway). Is there something I'm missing?

    Replies from: TheOtherDave
    comment by TheOtherDave · 2013-07-14T17:00:49.016Z · LW(p) · GW(p)

    I'm not sure I'm understanding your reasoning here. It seems like you're simply thinking about people's preferences for a dust speck in the eye, relative to their preferences for torture, without reference to how many dust specks and how much torture... is that right?

    If so, that doesn't seem to capture the general intuitive view. Intuitively, I strongly prefer losing a finger to losing an arm, but I prefer 1 person losing an arm to a million people losing a finger. (Or, put differently, I prefer a one-in-a-million chance of losing my arm to the certainty of losing a finger.) Quantity seems to matter.

    comment by gwern · 2013-07-26T17:54:21.085Z · LW(p) · GW(p)

    "...Some may think these trifling matters not worth minding or relating; but when they consider that tho' dust blown into the eyes of a single person, or into a single shop on a windy day, is but of small importance, yet the great number of the instances in a populous city, and its frequent repetitions give it weight and consequence, perhaps they will not censure very severely those who bestow some attention to affairs of this seemingly low nature. Human felicity is produc'd not so much by great pieces of good fortune that seldom happen, as by little advantages that occur every day."

    --Benjamin Franklin

    comment by christopherj · 2013-11-26T06:56:15.006Z · LW(p) · GW(p)

    Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

    I would prefer that 3^^^3 people get dust specks in their eyes, because that means that we either figured out how to escape the death of our universe, or expanded past our observable universe. [/cheating]

    Replies from: Sniffnoy
    comment by Sniffnoy · 2013-11-26T09:07:00.813Z · LW(p) · GW(p)

    s/cheating/EDT/ :)

    comment by Brian_Tomasik · 2014-03-10T10:44:26.989Z · LW(p) · GW(p)

    I have mixed feelings on this question. On the one hand, I agree that scope insensitivity should be avoided, and utility should count linearly over organisms. But at the same time, I'm not really sure the dust specks are even ... bad. If I could press a button to eliminate dust specks from the world, then (ignoring instrumental considerations, which would obviously dominate) I'm not sure whether I would bother.

    Maybe I'm not imagining the dust specks as being painful, whereas Eliezer had in mind more of a splinter that is slightly painful. Or we can imagine other annoying experiences like spilling your coffee or sitting on a cold toilet seat. Here again, I'm not sure if these experiences are even bad. They build character, and maybe they have a place even in paradise.

    There are many experiences that are actually bad, like severe depression, severe anxiety, breaking your leg, pain during a hospital operation, etc. These do not belong in paradise.

    If you imagine yourself signing up for 3^^^3 dust specks, that might fill you with despair, but in that case, your negative experience is more than a dust speck -- you're also imagining the drudgery of sitting through 3^^^3 of them. Just the dust specks by themselves may not be bad, if only one is experienced by any given individual, and no dust speck triggers more intense negative reactions.

    Replies from: TheOtherDave
    comment by TheOtherDave · 2014-03-10T14:04:54.112Z · LW(p) · GW(p)

    There's nothing important about the dust-specks here; they were chosen as a concrete illustration of the smallest unit of disutility. If thinking about dust specks in particular doesn't work for you (you're not alone in this), I recommend picking a different illustration and substituting as you read.

    comment by Davis_Goodman · 2014-05-01T15:56:28.060Z · LW(p) · GW(p)

    Would it change anything if the subjects were extremely cute puppies?

    comment by Davis_Goodman · 2014-05-01T15:57:20.117Z · LW(p) · GW(p)

    Would it change anything if the subjects were extremely cute puppies with eyes so wide and innocent that even the hardest lumberjack would swoon?

    comment by [deleted] · 2015-03-24T19:49:32.685Z · LW(p) · GW(p)

    If the dust specks could cause deaths I would refuse to choose either. If I somehow still did, I would pick the dust specks anyway, because I know that I myself would rather die in an accident caused by a dust particle than be tortured for even ten years.

    Replies from: Jiro
    comment by Jiro · 2015-03-24T21:39:22.305Z · LW(p) · GW(p)

    Would you also refuse to drive because there is some non-zero chance that you'll hit someone and cause them to suffer torturous pain?

    Replies from: None
    comment by [deleted] · 2015-03-25T13:00:05.494Z · LW(p) · GW(p)

    No, I would not. I am not sure what you are getting at, but my point is that the torture was a fact and the dust specks were extremely low probabilities, scattered over a big population. (Besides, I don't think it is possible for me to cause torturous pain to someone only by driving.)

    comment by TomStocker · 2015-03-26T12:55:40.879Z · LW(p) · GW(p)

    "The Lord Pilot shouted, fist held high and triumphant: "To live, and occasionally be unhappy!"" (three worlds collide) dust specks are just dust specks - in a way its helpful to sometimes have these things.

    But does the thing change if you distribute the dust specks not at 1 per person but at 10 per second per person?

    Replies from: Quill_McGee
    comment by Quill_McGee · 2015-03-26T14:33:36.572Z · LW(p) · GW(p)

    In the Least Convenient Possible World of this hypothetical, each and every dust speck causes a small constant amount of harm, with no knock-on effects (no increasing one's appreciation of the moments when one does not have dust in one's eye, no preventing a "boring painless existence," nothing of the sort). Now it may be argued whether this would occur with actual dust, but that is not really the question at hand. Dust was just chosen as a "seemingly trivial bad thing," and if you prefer some other trivial bad thing, just replace that in the problem and the question remains the same.

    comment by dragonfiremalus · 2015-12-11T18:10:20.148Z · LW(p) · GW(p)

    I think I've seen some other comments bring it up, but I'll say it again. I think people who go for the torture are working off a model of linear discomfort addition, in which case the torture would have to be at least as bad as 3^^^3 dust particles in the eye to justify taking the dust. However, I'd argue that it's not linear. Two specks of dust are worse than twice as bad as one speck. 3^^^3 people getting specks in their eyes is unimaginably less bad than one person getting 3^^^3 specks (a ridiculous image, considering that's throwing universes into a dude's eye). So the speck very well may be less than 1/(3^^^3) as bad as torture.

    Even so, I doubt it. So a purely utilitarian calculation probably does suggest torturing the one guy.

    Replies from: Jiro
    comment by Jiro · 2015-12-12T17:25:10.971Z · LW(p) · GW(p)

    However, I'd argue that it's not linear.

    It has to be more than just nonlinear for that to solve it; it has to be so nonlinear that no finite number of specks at all can add up to the torture, since otherwise we could just ask the same question using the new number instead of 3^^^3.

    If it's so nonlinear that no finite number of specks can add up to torture, then you can find the least upper bound on what any finite number of specks can add up to. Then there are two amounts, one slightly more than that bound and one slightly less, where one amount cannot be balanced by dust particles and the other can, which doesn't really make any sense.
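
    To make that concrete with a deliberately artificial example (the geometric weighting below is just an assumption chosen to be "so nonlinear" that the total stays bounded):

    ```python
    # If aggregate speck-badness is bounded -- say the N-th speck adds (1/2)**N --
    # then mathematically no number of specks ever totals more than 1.
    def total_speck_badness(N):
        return sum(0.5**i for i in range(1, N + 1))   # always strictly below 1

    print(total_speck_badness(50))   # 0.9999999999999991 -- just under 1
    # So a harm of badness 1.001 can never be outweighed by any number of specks,
    # while a harm of badness 0.999 eventually can -- even though the two harms
    # are nearly indistinguishable. That arbitrary cliff is the objection above.
    ```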

    comment by RaelwayScot · 2016-01-06T12:49:18.454Z · LW(p) · GW(p)

    I think the problem here is the way the utility function is chosen. Utilitarianism is essentially a formalization of reward signals in our heads. It is a heuristic way of quantifying what we expect a healthy human (one that can grow up and survive in a typical human environment and has an accurate model of reality) to want. All of this only converges roughly to a common utility because we have evolved to have the same needs, which are necessarily pro-life and pro-social (since otherwise our species wouldn't be present today).

    Utilitarianism crudely abstracts from the meanings in our heads that we recognize as common goals and assigns numbers to them. We have to be careful what we assign numbers to in order to get the results we want in all corner cases. I think hooking up the utility meter to neurons that detect minor inconveniences is not a smart way of achieving what we collectively want, because it might contradict our pro-life and pro-social needs. Only when the inconveniences accumulate individually, so that they condense into states of fear/anxiety or noticeably shorten human life, do they affect human goals, and only then does it make sense to include them in utility considerations (which, again, are only a crude approximation of what we have evolved to want).

    comment by Harbor · 2016-07-13T06:43:38.462Z · LW(p) · GW(p)

    I think the reason people are hesitant to choose the dust speck option is that they view the number 3^^^3 as being insurmountable. It's a combo chain that unleashes a seemingly infinite amount of points in the "Bad events I have personally caused" category on their scoreboard. And I get that. If the torture option is a thousand bad points, and the dust speck is 1/1000th of a point for each person, then the math clearly states that torture is the better option.

    But the thing is that you unleash that combo chain every day.

    Every time you burn a piece of coal, or eat a seed, or an apple, you are potentially causing mild inconvenience to a hypothetically infinitely higher number of people than 3^^^3. What if the piece of coal could warm someone else up? What if that seed's offspring would go on to spread and feed a massive number of people? The same applies to all meat and all fruit, and most vegetables. By gaining a slight benefit now, you are potentially robbing over 3^^^3 people of their own slight benefit. Now, is it likely that said animal or seed will go on to benefit so many? Maybe not, but the chance exists. Are you willing to take that chance with a number like 3^^^3?

    Well, you should be. Morality should not be based solely off of mathematical formulas and cost/benefit analysis. These can greatly help determine a moral course of action, but if that is your motivation for wanting to do the right thing then you have lost sight of what morality is about. The basis of morality is this: Do unto others as you would have done unto yourself. I, for one, would rather have a dust speck in my eye than be tortured for 50 years. And I wouldn't get 3^^^3 specks of dust in my eye, because none of them did either, they only got one. Even if I were assured of getting that many specks of dust in my eye (in deep space, of course, because the resulting explosion of dust specks would surely engulf the Earth and possibly most of the Milky Way), I would still do it. Because I choose to save the person in front of me, and fix any negative results afterwards. I choose to stop the wrongdoing directly in front of me, because if everyone did so then everyone would be saved. Do what you can right now. Worry about dust specks later. Help that guy who is getting tortured now.

    The Ones Who Walk Away From Omelas Should Turn The Fuck Around And Sprint Back To The City, Because Holy Shit That Poor Kid Is Being Tortured So Those Fucks Can Have Air Conditioning. -An improved title, in my opinion.

    A massive amount of beneficial outcomes being caused by one person's misfortune is not always justifiable. If they are innocent, it's not justifiable. If they were going to directly and consciously and maliciously cause the massive negative outcome that will result if you do not stop them, then it is arguably justifiable. However, there is only one situation, one context, in which one person's suffering for the benefit of countless others is wholly and totally justified.

    You see, someone figured out the answer to this dilemma about 2000 years ago. You've probably heard of Him.

    One person's suffering benefitting countless others is a beautiful thing when they choose to suffer of their own free will.

    You can choose the 50 years of torture if you wish.....

    But only if that person being tortured is you will it be anything other than total evil.

    Replies from: Good_Burning_Plastic, gjm, tetraspace-grouping
    comment by Good_Burning_Plastic · 2016-07-13T09:18:49.410Z · LW(p) · GW(p)

    Every time you burn a piece of coal, or eat a seed, or an apple, you are potentially causing mild inconvenience to a hypothetically infinitely higher number of people than 3^^^3.

    But you're also potentially causing a mild benefit to a hypothetically infinitely higher number of people than 3^^^3.

    comment by gjm · 2016-07-13T16:37:31.210Z · LW(p) · GW(p)

    a seemingly infinite amount of points in the "Bad events I have personally caused" category on their scoreboard

    Perhaps that is how some people who prefer TORTURE to DUST SPECKS are thinking, but I see no reason to think it's all of them, and I am pretty sure some of them have better reasons than the rather strawmanny one you are proposing. For instance, consider the following:

    • Which would you prefer: one person tortured for 50 years or a trillion people tortured for 50 years minus one microsecond?
      • I guess you prefer the first. So do I.
    • Which would you prefer: a trillion people tortured for 50 years minus one microsecond, or a trillion trillion people tortured for 50 years minus two microseconds?
      • I guess you prefer the first. So do I.
    • ... Now repeat this until we get to ...
    • Which would you prefer: N/10^12 people (note: N is very very large, but also vastly smaller than 3^^^3) tortured for one day plus one microsecond, or N people tortured for one day?
      • I am pretty sure you prefer the first option in every case up to here. Now perhaps microseconds are too large, so let's adjust a little:
    • Which would you prefer: N people tortured for one day, or 10^12*N people tortured for one day minus one nanosecond?
      • ... and continue iterating -- I bet you prefer the first option in every case -- until we get to ...
    • Which would you prefer, M/10^12 people tortured for ten seconds plus one nanosecond, or M people tortured for ten seconds? (M is much much larger than N -- but still vastly smaller than 3^^^3.)
      • I am pretty sure you still prefer the first case every time. Now, once the times get much shorter than this it may be difficult to say whether something is really torture exactly, so let's start adjusting the severity as well. Let's first of all replace the rather ill-defined "torture" with something less ambiguous.
    • Which would you prefer, M people tortured for ten seconds, or 10^12*M people tortured for 9 seconds and then kicked really hard on the kneecap but definitely not hard enough to cause permanent damage?
      • The intention is that the torture is bad enough that the latter option is less bad for each individual. I hope you still prefer the first case here.
    • Which would you prefer, 10^12*M people tortured for 9 seconds and then kicked really hard on the kneecap, or 10^24*M people tortured for 8 seconds and then kicked really hard on the kneecap, twice?
      • ... etc. ...
    • Which would you prefer, 10^108*M people tortured for one second and then kicked, or 10^120*M people just kicked 10 times?
      • OK. Now we can start cranking things down a bit.
    • Which would you prefer, 10^120*M people kicked really hard on the kneecap 10 times or 10^132*M people kicked really hard on the kneecap 9 times?
      • That's a much bigger drop in severity than I've used above; obviously we can make it more gradual if you like without making any difference to how this goes. Anyway, I hope you still much prefer the first case to the second. Anyway, continue until we get to one kick each and then let's try this:
    • Which would you prefer, 10^228*M people kicked really hard on the kneecap or 10^240*M people slapped hard in the face ten times?
      • You might want to adjust the mistreatment in the second case to make sure it's less bad than the kicking.

    ... Anyway, by this point I am probably belabouring things severely enough that it's obvious where it ends. After not all that many more steps we arrive at a choice whose second option is a very large number of people (but still much much much smaller than 3^^^3 people!) getting a dust speck in their eye. And every single step involves a really small decrease in the severity of what they suffer, and a trillionfold increase in the number of people suffering. But the chain begins with TORTURE and ends with DUST SPECKS, or more precisely with something strictly less bad than DUST SPECKS because the number of people involved is so much smaller.

    To consider TORTURE worse than DUST SPECKS is to consider that at least one of those steps is not making things worse: that at some point in the chain, having a trillion times more victims fails to outweigh a teeny-tiny decrease in the amount of suffering each one undergoes.

    I am a little skeptical, on general principles, of any argument concerning situations so far beyond any that either I or my ancestors have any experience of. So I will not go so far as to say that this makes TORTURE obviously less bad than DUST SPECKS. But I will say that the argument I have sketched above appears to me to deserve taking much more seriously than you are taking the TORTURE side of the argument, with your talk of scoreboards.
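
    As a sanity check on the bookkeeping: even with an absurdly generous number of steps, the total population multiplier the chain needs is still nothing next to 3^^^3. A rough sketch, with the step count invented purely for illustration:

    ```python
    # Each step in the chain trades a tiny decrease in severity for a
    # trillionfold (1e12) increase in the number of victims.
    steps = 10**9                    # invented over-estimate of how many steps are needed
    multiplier_log10 = 12 * steps    # total population factor is 10**(12 * steps)

    print(f"final population ~ 10^{multiplier_log10}")
    # 10^12000000000 is a 12-billion-digit number -- astronomically large, yet
    # utterly negligible next to 3^^^3, whose digit count could not even be
    # written down inside the observable universe.
    ```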

    Do unto others as you would have done to yourself

    This is a pretty good principle; there's a reason it and its near-equivalents have cropped up in religious and ethical systems over and over again since long before the particular instance I think you have in mind. But it doesn't deal well with cases where the "others" vary hugely in number. (It also has problems with cases where you and the others have very different preferences.)

    I, for one, would rather have a dust speck in my eye than be tortured for 50 years. And I wouldn't get 3^^^3 specks of dust in my eye, because none of them did either, they only got one.

    This reasoning would also suggest that if you have to choose between having $10 stolen from each of a million people and having $20 stolen from one person, you should choose the former. That seems obviously wrong to me; if you agree, you should reconsider.

    engulf the Earth and possibly most of the Milky Way

    You are vastly underestimating how big 3^^^3 is.

    I choose to save the person in front of me, and fix any negative results afterwards

    That sounds very nice, but if you are unable to fix the negative results this may sometimes be a really terrible policy. Also, in the usual version of the hypothetical the dust specks and the torture are not different in "remoteness", so I don't see how this heuristic actually helps resolve it.

    somebody figured out the answer to this dilemma about 2000 years ago

    It is not, in fact, the same dilemma. (E.g., because in that scenario it isn't "one person getting something very bad, versus vast numbers getting something that seems only trivially bad", it's "one person getting something very bad, versus quite large numbers getting something very bad".)

    If you would like a religious argument then I would suggest the Open Thread as a better venue for it.

    Anyway, I think your discussion of harming A in order to help B misses the point. Inflicting harm on other people is indeed horrible, but note that (1) in the TvDS scenario harm is being inflicted on other people either way, and if you just blithely assert that it's only in the TORTURE case that it's bad enough to be a problem then you're simply begging the original question; and (2) in the TvDS scenario no one is talking about inflicting harm on some people to prevent harm to others, or at least they needn't and probably shouldn't be. The question is simply "which of these is worse?", and you can and should answer that without treating one as the default and asking "should I bring about the other one to avoid this one?".

    comment by Tetraspace (tetraspace-grouping) · 2016-07-13T17:39:18.587Z · LW(p) · GW(p)

    New situation: 3^^^3 people being tortured for 50 years, or one person getting tortured for 50 years and getting a single speck of dust in their eye.

    By do unto others, I should, of course, torture the innumerably vast number of people, since I'd rather be tortured for 50 years than be tortured for 50 years and get dust in my eye.

    comment by siIver · 2016-09-17T11:56:02.549Z · LW(p) · GW(p)

    To me it is immediately obvious that torture is preferable. Judging by the comments, I'm in the minority.

    comment by Qustioner · 2019-04-28T06:56:29.163Z · LW(p) · GW(p)

    How can I avoid being the one your AI machine chooses to be tortured?

    comment by Mati_Roy (MathieuRoy) · 2020-03-21T04:24:40.712Z · LW(p) · GW(p)

    I think the best counterargument I have come across against this line of reasoning turns on the fact that there might not be 3^^^3 moral beings in mindspace.

    comment by Mati_Roy (MathieuRoy) · 2020-03-22T13:24:20.042Z · LW(p) · GW(p)

    There might not be 3^^^3 moral beings in mindspace, and instantiating someone more than once might not create additional value. So there's probably something here to consider. I still would choose torture with my current model of the world, but I'm still confused about that point.

    comment by EditedToAdd · 2020-04-24T19:44:31.510Z · LW(p) · GW(p)

    Fun Fact: the vast majority of those 3^^^3 people would have to be duplicates of each other because that many unique people could not possibly exist.

    comment by Kevin McKee (kevin-mckee) · 2021-01-07T20:31:34.421Z · LW(p) · GW(p)

    The answer is obvious once you do the math.  

    I think most people read the statement above as saying either "torture one person a lot" or "torture a lot of people very little". That is not what it says at all, because 3^^^3 -- an exponential tower of 3s that is 7,625,597,484,987 layers tall, vastly larger than 3^7625597484987 -- is closer to the idea of infinity than to the idea of "a lot".

    If you were to divide up those 3^^^3 dust particles and send them through the eyes of everything with eyes since the dawn of time, it would be no minor irritant. You wouldn't just be blinding everything ever. Nor would it be just like sandblasting everything in the eyes until they have holes through their skulls. These aren't even close to the right scale.

    Even after taking all that dust, dividing it among all the eyes every creature has ever had, and spreading it over their entire lifetimes, the number of dust particles that hit each eye per nanosecond is unimaginable. I don't mean the ordinary type of unimaginable; I mean physicists couldn't imagine what the physics of the situation means. If you are thinking that compressing that amount of dust into that small a space might cause a nuclear reaction, you are still thinking on the wrong scale. The creation of black holes isn't the right scale either. That amount of material hasn't been in one spot since the Big Bang, which also isn't on the right scale. Billions of billions of Big Bangs per nanosecond in every eye that has ever existed, for its entire life, isn't the right scale either. It would be so much worse than that; however, it's around here that my ability to describe how bad this would be tops out.

    So yeah torturing a single person for a mere 50 years is the obvious answer.

    p.s.

    I know the question can be read as 3^^^3 people each getting one speck of dust in the eye, but that number of people in a single universe would also cause a disaster even worse than the one I described above.

    comment by Portia (Making_Philosophy_Better) · 2021-02-01T14:14:31.945Z · LW(p) · GW(p)

    I've taught my philosophy students that "obvious" is a red flag in rational discourse.

    It often functions as "I am not giving a logical or empirical argument here, and am trying to convince you that none is needed" (Really, why?) and "If you disagree with me, you should maybe be concerned about being stupid or ignorant for not seeing something obvious; a disagreement with my unfounded claim needs careful reasoning and arguments on your part, so it may be better to be quiet, lest you be laughed at." It so often functions as a trick to get people to overlook an unjustified statement, or to get others to justify your statements for you, or to be doubtful of themselves when doubting your unfounded claim. (Which is the very effect you have produced, with commenters below going "I really don´t get it and it bothers me alot.", locating the fault in themselves for not understanding something that was never explained and is likely not true, and other commenters coming up with the arguments you did not supply.)

    If a statement is actually obvious - that is, universally and instantly convincing, with everyone capable of giving the argument for it easily and quickly - this does not need to be spelled out, and generally is not, as stating that it is obvious adds nothing to what everyone knows. If the statement is nearly obvious, but not quite - that is, it can be proven with ease in a few lines - then the proof might as well be given, right?

    Furthermore, I am unaware of a compelling rational argument for total utilitarianism. It is deeply controversial, for good reasons, whether morality as a whole is something that can be purely rationally derived (Hume has expanded on this quite well; it is one thing to rationally deduce how to reach a given moral goal, it is quite another to rationally generate a moral goal like "maximise average or total human happiness", and to also prove that it is the only worthwhile goal.). And attempts to derive a purely rational moral system are notably contrary to utilitarianism (e.g. Kant's attempt to construct a morality that consists solely of one's actions being logically non-contradictory comes to mind, and he explicitly excludes the utility of an action from its moral judgement).

    If you offer humans the chance to live in a world governed by traditional utilitarianism, many of them wish not to live there, and strongly consider the idea of constructing such a world to be a moral wrong.

    Many humans choose to know uncomfortable truths, to be free, to create and discover, to sacrifice themselves for others, to have authentic self-expression, to be connected to reality, to live in a world that is just, etc. etc. over pure happiness. If offered a hypothetical scenario of being inserted into a machine where they will always feel happy, eternally fed virtual chocolate and virtual blowjobs and an endless sequence of diverting content to scroll past, forgetting all the bad that happened to them, blind to the outer world, losing their capacity for boredom and their yearning for more... Many would choose to instead live in a world that is often painful, but real, a world where their actions have impact. There is a realisation that there are things more important than happiness.

    There is also often a strong feeling that there are evils that cannot be outweighed - and torturing an innocent non-consensually typically makes that list. Say we have a scenario where 20 men take a random woman, gangrape her, and kill her. They then argue that her one hour of suffering (dead now, she is suffering no more) is outweighed by the intense delight each of them feels, and will feel for decades - especially seeing as there are so many of them, and only one of her, and they really, really like raping. Heck, they've even taped it, so millions of men will be able to look at it and get off, so it is a virtue, really. If you look at that scenario and think "That is fucked up", you aren't being irrational, you are showing empathy, recognising value beyond mere averages of happiness. If you were that woman, or a member of any other oppressed group in such a system, being exploited for the "general good", it would be your right to fight such a system with everything you've got - and I fucking hope that many people would have your back, and not excuse this as obviously rational.

    comment by nim · 2021-04-06T00:54:27.882Z · LW(p) · GW(p)

    I drop the number into a numbers-to-words converter and get "seven trillion six hundred twenty-five billion five hundred ninety-seven million four hundred eighty-four thousand nine hundred eighty-seven". (I don't do it by hand, because a script that somebody tested is likely to make fewer errors than me). Google says there are roughly 7 billion people on earth at the moment. Does that mean that each person gets roughly 1089 dust specks, or that everyone who's born gets one dust speck until the 7 trillion and change speck quota has been met? I ask because if each person is allowed multiple specks, you could have one person getting all 7-odd trillion of them in their lifetime, and that sounds like an outcome that the sufferer might describe as "horrible torture for 50 years without hope or rest" while they were experiencing it.

    The answer that looks initially obvious to me is the specks, but that's because I calculate it with a societal system of morality in which it's absolutely not okay to perform certain experiments on humans which would be expected to make minor alleviations to the biological inconveniences experienced by all subsequent humans.

    This seems to hinge on an intuition that if you have to do something to people without their consent, it's do-unto-others-wise better to do a small thing to more people than a big thing to a few people. If the torture or specks would only be done to consenting parties, it'd be very different -- I think people could get to competing to see who could last to 50 years of torture, or who could take the most specks in the Dust Speck Challenge, because the human organism is a strange and beautiful thing.

    comment by TropicalFruit · 2021-11-17T01:27:49.635Z · LW(p) · GW(p)

    I'm fairly certain Eliezer ended with "the choice is obvious" to spark discussion, and not because it's actually obvious. But let me go ahead and justify that: this is not an obvious choice, even though there is a clear, correct answer (torture).

    There are a few very natural intuitions that we have to analyze and dispel in order to get off the dust specks.

    1.) The negative utility of a dust speck rounds down to 0. 

    If that's the case, 3^^^3*0 = 0, and the torture is worse. The issue with this is twofold.

    First, why does it have to be torture on the other side of the equation? If the dust speck rounds down to 0, then why can't it be someone spraining their ankle? Or even lightly slapping someone on the wrist? Once we're committed to rounding dust specks to 0, all of a sudden we are forced to pick the other side of the equation, regardless of how small it is.

    This, then, exposes our rounding fallacy. The dust speck's negative utility is not zero; it's just really, really small. But when we then apply that really, really small downside to a really, really large number of people, all of a sudden the fact that it is certainly nonzero becomes very relevant. It's just that we need to pick a counter-option that isn't so negative as to blind us with emotional response (like torture does) in order to realize that fact.

    The broader version of the mistake of rounding dust specks to 0 is that it implies there exists some threshold under which all things are of equal utility - they all round down to 0. Where is this threshold? Literally right at dust specks? They round to 0, but something slightly worse than a dust speck doesn't? Or is it a little higher than dust specks?

    Regardless, we need only analyze a problem like this on the border of our "rounds to zero" and "doesn't round to zero" utility in order to see the absurdity in this proposition.

    2.) What if it's not the absolute utility, but the gap, relative to torture, which causes specks to round to zero?

    This second attempt is more promising, but, again, on further analysis falls apart.

    One of the best ways to bypass the traditional pitfalls in single choice decisions is to pretend that you're making the choice many times, instead of just once. By doing so, we can close this subjective gap, and end up at an apples-to-apples (or in this case, torture-to-torture) comparison.

    We've already established that dust specks do not round down to zero, in an absolute sense, so all I need to do is ask you to make the choice enough times that the 3^^^3 people are essentially being tortured for 50 years.

    Specifically, this number of times:

    (torture's badness) / (dust specks' badness)

    Once you've made the choice that many times, guess what: 3^^^3 people are being tortured for 50 years by dust specks.

    If you'd picked the torture every time, (# of choices) people are being tortured for 50 years.

    Do you think that the torture is 3^^^3 times worse than the dust speck? (If so, there would be the same amount of people being tortured either way.) I can just change it to make it 40 years of torture instead, or 1 year of torture. Or I can make the dust speck a little less bad.

    The thing is, your desire to pick the dust specks doesn't come from rationally asserting that it's 3^^^3 less bad than torture. No matter what you think the factor is, I can always pick numbers that'll make you choose the torture, and your intuition is always going to hate it.
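
    Here's the same argument with made-up numbers plugged in (the badness ratio is purely an assumption; only the structure matters):

    ```python
    # Sketch of the repeated-choice argument. The ratio below is an arbitrary
    # assumption about how many specks add up to one 50-year torture.
    specks_per_torture = 10**12     # assumed badness ratio
    choices = specks_per_torture    # repeat the dilemma this many times

    # Choosing SPECKS every time: each of the 3^^^3 people accumulates this many
    # tortures' worth of specks...
    per_person_torture_equivalents = choices / specks_per_torture
    # ...so 3^^^3 people end up carrying one full torture-equivalent each.

    # Choosing TORTURE every time: this many distinct people get tortured.
    people_tortured = choices

    print(per_person_torture_equivalents)   # 1.0
    print(f"{people_tortured:.0e}")         # 1e+12
    # Unless torture is literally 3^^^3 times worse than a speck, some such ratio
    # exists, and then SPECKS-every-time is the far worse policy.
    ```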

    3.) You can't calculate utility with a linear combination. Even if the sum of the dust specks is more negative, the torture is still worse, because it's so much worse for that one person.

    Let me be a little more clear about what I mean here. Imagine this choice:

    1. One person tortured for 10 hours.
    2. 601 people tortured for 1 minute.

    It's very reasonable to pick option #2 here. Even though it's an extra minute of torture, you could argue that utility is not linear in this case - being tortured for 1 minute isn't really that bad, and you can get over it, but being tortured for 10 hours is likely to break a person.

    That's fine - your utility function doesn't have to be a strictly linear function with respect to the inputs, but, critically, it is still a function.

    You might be tempted to say something along the lines of "I evaluate utility based on avoiding the worst outcome for a single person, rather than based on total utility (therefore I can pick the dust specks)."

    The problem is, no matter how much proportionally less bad you think 1 minute of torture is than 10 hours, I can still always pick a number that causes you to pick the 10-hour option.

    What if I change it to 1000 people instead of 601? What about 10,000? What about 3^^^3?

    All of a sudden it's clear that choosing to avoid the worst possible outcome for a single person is an incoherent intuition - it would force you to torture 3^^^3 people for 9 hours instead of 1 person for 10 hours.
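
    A toy version of that point: pick any nonlinear badness function you like (the exponent below is an illustrative assumption), and the per-person gap can be huge while a large enough crowd still flips the comparison:

    ```python
    # Toy example: badness of t minutes of torture grows faster than linearly.
    def badness(minutes):
        return minutes ** 1.5          # assumed super-linear shape

    ten_hours = badness(600)           # one person, 10 hours
    one_minute = badness(1)            # per person, 1 minute

    print(ten_hours / one_minute)      # ~14,697: far more than 601x worse per person
    print(601 * one_minute < ten_hours)       # True: option 2 wins with 601 people...
    print(20_000 * one_minute > ten_hours)    # True: ...but enough people flips it.
    ```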

    The non-obvious answer is therefore torture.

    The dust specks do not round down to zero. The gap between the dust specks and the torture doesn't cause the dust specks to round down to zero, either. You must multiply, and account for the multiplicity of the dust specks, lest you be forced to choose saving one person from -100 over saving hundreds from -99. Even if you discount the multiplicity and don't linearly sum the costs, again, treating them as 0 leads to incoherence, so there is still some cumulative effect.

    Therefore, you've gotta go torture, as much as your intuition hates it.

    tl;dr - Shut up and multiply.

    comment by Richard McDaniel (richard-mcdaniel) · 2022-04-28T03:12:09.872Z · LW(p) · GW(p)

    Gee, tough choice. We either spread the suffering around so that it’s not too intense for anyone or we scapegoat a single unlucky person into oblivion. I think you’d have to be a psychopath to torture someone just because “numbers go BRRR.”

    comment by CarlJ · 2022-06-20T18:21:27.905Z · LW(p) · GW(p)

    The answer is obvious, and it is SPECKS.
    I would not pay one cent to stop 3^^^3 individuals from getting it into their eyes.

    Both answers assume this is an all-else-equal question. That is, we're comparing two kinds of pain against one another. (If we're trying to figure out what the consequences would be if the experiment happened in real life - for instance, how many will get a dust speck in their eye when driving a car - the answer is obviously different.)

    I'm not sure what my ultimate reason is for picking SPECKS. I don't believe there are any ethical theories that are watertight.

    But if I had to give a reason, I would say that if I were among the 3^^^3 individuals who might get a dust speck in their eye, I would of course pay that to save one innocent person from being tortured. And I can imagine that not just I but many others would do that, too. If we can imagine 3^^^^3 individuals, I believe we can imagine that many people agreeing to save one, for a very small cost to those experiencing it.¹

    If someone were then to show up and say: "Well, everyone's individual costs were negligible, but the total cost - when added up - is actually on the order of [3^^^3 / 10²⁹] years of torture. This is much higher, so obviously that is what we should care most about!" ... I would then ask why one should care about that total number. Is there someone who experiences all the pain in the world? If not, why should we care about some non-entity? Or, if the argument is that we should care about the multiversal bar of total utility for its own sake, how come?

    Another argument is that one needs to have a consistent utility function, otherwise you'll flip your preferences [LW · GW] - that is, by going step by step through different preference rankings until one inevitably ends up preferring the opposite of the position one started with. But I don't see how Yudkowsky achieves this. In this article, the most he proves is that someone who prefers one person being tortured for 50 years to a googol number of people being tortured for a bit less than 50 years would also prefer "a googolplex people getting a dust speck in their eye" as compared to "a googolplex/googol people getting two dust specks in their eye". How is the latter statement inconsistent with preferring SPECKS over TORTURE? Maybe that is valid for someone who has a benthamistic utility function, but I don't have that.

    Okay, but what if not everyone agrees to getting hit by a dust speck? Ah, yes. Those. Unfortunately there are quite a few of them - maybe 4 in the LW-community and then 10k-1M (?) elsewhere - so it is too expensive to bargain with them. Unfortunately, this means they will have to be a bit inconvenienced.

    So, yeah, it's not a perfect solution; one will not find such when all moral positions can be challenged by some hypothetical scenario. But for me, this means that SPECKS are obviously much more preferable than TORTURE.

    ¹ For me, I'd be willing to subject myself to some small amount of torture to help one individual not be tortured. Maybe 10 seconds, maybe 30 seconds, maybe half an hour. And if 3^^^3 more would be willing to submit themselves to that, and the one who would be tortured is not some truly radical benthamite (so they would prefer themselves being tortured to a much bigger amount of torture being produced in the universe), then I'd prefer that as well. I really don't see why it would be ethical to care about the great big utility meter - when it corresponds to no one actually feeling it. 

    comment by Portia (Making_Philosophy_Better) · 2023-04-29T13:43:23.893Z · LW(p) · GW(p)

    Strongly disagree.

    Utilitarianism did not fall from a well of truth, nor was it derived from perfect rationality.

    It is an attempt by humans, fallible humans, to clarify and spell out pre-existing, grounding ethical belief, and then turn this clarification into very simple arithmetic. All this arithmetic rests on the attempt to codify the actual ethics, and then see whether we got them right. Ideally, we would end up in a scenario that reproduces our ethical intuitions, but more precisely and quickly, where you look at the result and go “yes, that expresses what I wanted it to express, just even better than I expressed it before”. You would recognise your ethics in it. Divergences in assessment would be rare, and would dissolve upon closer assessment; if you thought and felt about them, you would conclude that a superficial thing had led your judgement astray, and the arithmetic had captured what your judgement should have been, and change to it.

    E.g. in trolley problem variations (pushing a button to reroute a train, vs. dragging a person onto the train tracks and tying them down to stop the train with their body), I will encounter the fact that I feel more reluctant to personally, closely and physically bring someone to death by hand than to press a button, even if the reasons for it are equally just or unjust; and then I will decide, upon consideration, that these things are actually identical (I am doing the same amount of hurt, and violating the same ethical principles), and that I am merely hiding the full horror of them from myself if I envision pressing a button, and that the solution I hence want is to visualise the full horror if I press the button, also, that I want to treat both scenarios the same (even if that makes me less inclined to press a button). In this case, math points out that I was irrationally biased against recognising harm if it is less viscerally close to me. The button pressing is just as bad, I am just shielded from having to see the badness, and clarity brings it forth.

    But this is *not* how I feel about classic utilitarian counter examples.

    If you apply utilitarianism to an ethical scenario, and the result runs massively counter to the ethical belief that spawned utilitarianism in the first place (e.g. telling me I should torture someone to avoid *a lot* of dust specks), not just to a degree that briefly feels confusing and uncomfortable until you assess it clearly, but to a degree that deeply repulses and horrifies me persistently, has me certain I would not wish to live in this world, and nor would my ethical mentors – then my conclusion is that my utilitarian model clearly failed to adequately capture the ethical belief in question, *and the utilitarian model needs to be overhauled*. The arithmetic may check out, but the far more important assumptions that spawned it and legitimised it clearly were not adequate depictions of my ethical beliefs.

     

    Most people do not want to live in Omelas. They do not want to see the organs of one person harvested to heal a dozen. They do not want themselves and an inconceivable number of humans spared a speck of dust in exchange for someone being tortured. They do not want to live in this universe, regardless of whether they are the poor sod being tortured, or the privileged supermajority spared an inconsequential inconvenience in exchange for another person being tortured. They don’t want to live in the artificial happy box that stimulates their nerves to be maximally happy all the time.

    It is ludicrous to tell them that they must convince themselves that they want this horrible world and live in it, when collectively, they very clearly do not: they do not prefer it, they feel bad about it, they do not feel their happiness outweighs this suffering, they feel that it is deeply wrong, they do not think that this is what they meant when they explained that the consequences of actions have moral relevance to them, or that promoting happiness and avoiding pain has moral relevance to them; they clearly feel something important was forgotten here, *so in what meaningful way is it better if none of the people involved judge it to be better or want it?*

     

    And it is okay that they do not want it. There is no rationalist obligation to find that world preferable; you do not need to talk yourself into it. An equation you invented in an attempt to capture your ethics should not hold power over you when it turns out that it has not captured them. It was meant as a tool, not a master. If this is not the world you want, the ethical code that spawned the arithmetic was clearly in error, or at least crucially incomplete.

     

    And that is *because* utilitarianism is crucially incomplete. People have been saying that, loudly, ever since the first person came up with it. There are entire books on the topic illustrating that this is broken. In every single philosophy course you run, many of your students will immediately speak up, saying this is not their ethics, and they are opposed to it, and pointing out problems. If you taught an ethics class and claimed this was the only rational ethical system, you’d get complaints to the dean for teaching an obvious falsehood. There are numerous alternative ethical systems in which happiness or consequences do not matter at all.

     

    Take the example of harvesting organs from living innocents to save multiple people: even if we edited all the memories of the people who loved the person being harvested so none of them feel grief and regret, even if we kept it secret so society is not destabilised by the knowledge that random innocents regularly get harvested, even if there are specifically no consequences beyond the harvested person being killed and a dozen other people being saved… I wouldn’t want to live in this world; I wouldn’t want it to be. It repulses me, in a way that cannot be outweighed. Killing sentient innocents to use their bodies is wrong, even if their bodies are very useful. If your ethics say this is fine, they have failed to capture something of supreme ethical importance to me, and we need to carry on looking in our attempt to codify ethics.

     

    Hence, to anyone reading this story and feeling that the answer is not obvious, or that they do not want to choose the torture option: this does not mean, in any way, that you are not a rational person.

    comment by dr_s · 2023-04-29T14:54:54.994Z · LW(p) · GW(p)

    Well, 3^^^3 dust specks in people's eyes imply that that order of magnitude of people exist, which is an... interesting world, and sounds like good news on its own. While 3^^^3 dust specks in the same person's eyes imply that they and the whole Earth get obliterated by a relativistic sand jet that promptly collapses into a black hole, so yeah.

    But way-too-literal interpretations aside, I would say this argument is why I don't think total sum utilitarianism is any good. I'd rather pick a version like "imagine you're born as a random sentient in this universe, would you take your chances?". High average, low standard deviation of the utility distribution. Which admittedly still allows the occasional Omelas, but that's far from the worst I can imagine.

    comment by Dave Scotese (dave-scotese) · 2024-02-05T04:03:10.613Z · LW(p) · GW(p)

    First, I wanted to suggest a revision to 3^^^3 people getting dust in their eyes: everyone alive today and everyone who is ever born henceforth, as well as all their pets, will get the speck.  That just makes it easier to conceive of.

    In any case, I would choose the speck, simply on behalf of the rando who would otherwise be tortured. I'd want to let everyone know that I had to choose, so that we all get a remarkably minor annoyance in order to avoid "one person" (assuming no one can know who it will be) getting tortured.  This would happen only if there were a strong motivation to stop.  The best option is not presented: collect more information.

    Replies from: Ericf
    comment by Ericf · 2024-02-05T05:45:53.089Z · LW(p) · GW(p)

    Note that you have reduced the raw quantity of dust specks by "a lot" with that framing. Heat death of the universe is in "only" 10^106 years, so that would be no more than 2^(10^106) people (if we somehow double every year), compared to 3^^^3 = 3^^(3^27), which is 3^(10^(a number too big to write down)).
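
    To make that comparison concrete, here is a small sketch of my own (not from Ericf's comment), which takes the 2^(10^106) heat-death population bound above as given and works in log10 space to count how many layers a power tower of 3s needs before it exceeds that bound. The answer is five, whereas 3^^^3 is a tower 7,625,597,484,987 layers tall, so the dust-speck count really does shrink by "a lot".

        import math

        # Knuth up-arrow notation: tower(1) = 3 and tower(n+1) = 3 ** tower(n);
        # 3^^^3 is such a tower with 3^^3 = 7,625,597,484,987 layers.
        LOG10_3 = math.log10(3)

        # Population bound from the comment above: at most ~2^(10^106) people
        # if the population doubled every year until heat death.
        log10_bound = (10 ** 106) * math.log10(2)        # ~3.0e105

        # The next layer 3 ** tower(n) exceeds the bound as soon as
        # log10(tower(n)) crosses this double-log threshold.
        threshold = math.log10(log10_bound / LOG10_3)    # ~105.8

        height = 1
        log10_layer = LOG10_3                            # log10 of tower(1) = 3
        while log10_layer <= log10_bound:
            if log10_layer > threshold:
                # The next layer is guaranteed to exceed the bound, and computing
                # its log10 directly would overflow a float, so stop here.
                height += 1
                break
            log10_layer = (10 ** log10_layer) * LOG10_3  # log10 of the next layer
            height += 1

        print(height)  # 5: a tower of just five 3s already exceeds 2^(10^106)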

    comment by tailcalled · 2024-10-10T10:36:04.831Z · LW(p) · GW(p)

    This thought experiment is unrealistic: there is not, and will never be, a population of 3^^^3 homogeneous agents to consider. In more realistic variants, the analysis ends up dominated by considerations like "who is being tortured?" and "what do the dust specks interfere with?".

    Replies from: gwern
    comment by gwern · 2024-10-10T15:26:37.292Z · LW(p) · GW(p)

    This thought experiment is unrealistic

    many such cases

    Replies from: sharmake-farah
    comment by Noosphere89 (sharmake-farah) · 2024-10-10T15:29:06.469Z · LW(p) · GW(p)

    They're not intended to be realistic; they're a way of disentangling different intuitions.

    This perspective is captured here:

    https://www.lesswrong.com/posts/s9hTXtAPn2ZEAWutr/please-don-t-fight-the-hypothetical [LW · GW]

    The point there is that you shouldn't try to fight the hypothetical's assumptions, but should instead actually answer the question.

    Replies from: tailcalled
    comment by tailcalled · 2024-10-10T18:00:26.306Z · LW(p) · GW(p)

    Hypothetical question for you: what if there is a sequence of wrong mental turns that leads towards a psychological attractor state that's bad, and accepting the hypothetical in the OP is one of these turns?

    Replies from: sharmake-farah
    comment by Noosphere89 (sharmake-farah) · 2024-10-10T18:03:17.357Z · LW(p) · GW(p)

    Then you shouldn't accept the hypothetical.

    I didn't mean for this to be a claim that you must accept all hypotheticals.