[SEQ RERUN] Torture vs. Dust Specks

post by MinibearRex · 2011-10-11T03:58:43.371Z · LW · GW · Legacy · 85 comments

Today's post, Torture vs. Dust Specks, was originally published on 30 October 2007. A summary (taken from the LW wiki):

 

If you had to choose between torturing one person horribly for 50 years, or putting a single dust speck into the eyes of 3^^^3 people, what would you do?


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Motivated Stopping and Motivated Continuation, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

85 comments

Comments sorted by top scores.

comment by Jack · 2011-10-11T06:35:03.261Z · LW(p) · GW(p)

I still have trouble seeing where people are coming from on this. My moral judgment software does not accept 3^^^3 dust specks as an input. And I don't have instructions to deal with such cases by assigning a dust speck a value of -1 util and torture a very low, but still greater than -3^^^3, util count. I recognize my brain is just not equipped to deal with such numbers and I am comfortable adjusting my empirical beliefs involving incomprehensibly large numbers in order to compensate for bias. But I am not comfortable adjusting my moral judgments in this way -- because while I have a model of an ideally rational agent I do not have a model of an ideally moral agent and I am deeply skeptical that one exists. In other words, I recognize my 'utility function' is buggy but my 'utility function' says I should keep the bugs since otherwise I might no longer act in the buggy way that constitutes ethical behavior.

The claim that the answer is "obvious" is troubling.

Replies from: None
comment by [deleted] · 2011-10-11T14:22:28.160Z · LW(p) · GW(p)

I originally thought the answer was more obvious if you follow Eliezer's suggestion, and start doing the math like it's a physics problem. However, when I continued following up, I found that interestingly, the problem has even more lessons and can be related back to other posts by Eliezer and Yvain.

To begin, consider the physical implications of generating 3^^^3 motes of dust. I'll quote part of my post from the previous thread:

However, 3^^^3 is incomprehensibly bigger than any of that.

You could turn every atom in the observable universe into a speck of dust. At Wikipedia's almost 10^80 atoms, that is still not enough dust. http://en.wikipedia.org/wiki/Observable_universe

You could turn every cubic Planck length in the observable universe into a speck of dust. At Answerbag's 2.5 x 10^184 cubic Planck lengths, that's still not enough dust. http://www.answerbag.com/q_view/33135

At this point, I thought maybe that another universe made of 10^80 computronium atoms is running universes like ours as simulations on individual atoms. That means 10^80 x 2.5 x 10^184 cubic Planck lengths of dust. But that's still not enough dust. Again, 2.5 x 10^264 specks of dust is still WAY less than 3^^^3.

At this point, I considered checking whether I could get enough dust specks if I literally converted everything in all Everett branches since the Big Bang into dust, but my math abilities fail me. I'll try coming back to this later.
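To make those magnitudes concrete, here's a rough Python sketch of my back-of-the-envelope check. It only assumes Knuth's up-arrow definition, under which 3^^^3 = 3^^(3^^3) is a power tower of 3s that is 3^^3 = 7,625,597,484,987 levels tall:

    import math

    def digits_of_tower(height):
        """Approximate decimal digit count of 3^^height (a power tower of `height` threes)."""
        if height == 1:
            return 1   # 3
        if height == 2:
            return 2   # 27
        if height == 3:
            return 13  # 3^27 = 7,625,597,484,987
        if height == 4:
            # digits(3^(3^27)) = floor(3^27 * log10(3)) + 1
            return math.floor(3**27 * math.log10(3)) + 1
        raise ValueError("towers of height 5+ have more digits than can be counted this way")

    print(digits_of_tower(3))  # 13 digits
    print(digits_of_tower(4))  # about 3.6 x 10^12 digits, versus only 265 digits in 2.5 x 10^264

Even the fourth rung of the tower dwarfs every count of atoms, Planck volumes, or simulated universes above, and 3^^^3 sits another 7.6 trillion rungs higher still.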

Essentially, if you treat it as a physics dilemma, and not an ethics dilemma, you realize very quickly that you are essentially converting multiple universes not even to disutility, but to dust to fuel disutility. Unless whatever inflicts the disutility is somehow epiphenomenal and does not obey physics, that many specks/speck equivalents seems to become a physics problem rather rapidly regardless of what you do.

If you then recast the problem as "I can either have one random person be tortured, or I can have countless universes, many containing life, turned into dust that will be used to hurt the residents of countless other universes?", then the ethics problem becomes much simpler. I have a prohibition against torture, but I have a substantially larger and more thorough prohibition against destroying inhabited planets, stars, or galaxies.

I understand there is a possible counter to this in "Well, what of the least convenient world, where the second law of thermodynamics does not apply, and generating this much dust WON'T destroy universes; it will merely inconvenience people."

But then there are other problems. If 1 out of every 1 googol people loses their life because of the momentary blindness from rubbing their eyes, then by choosing specks I'm suggesting to the guy that consigning countless people to death is a better choice than torturing one person. That's not something I'm willing to accept either.

You could attempt to FURTHER counter this by saying "Well, what of the least convenient world, where this dust ISN'T going to kill people, it will merely inconvenience people! That's it!"

But then there are other problems still. If 1 out of every 1 googol people simply receives a substantial injury because of the momentary blindness from rubbing their eyes, then...

"ERGH! No injuries! Just magical incovenience dust! This disutility has no further physical implications!"

As a side note, I think belief in belief may relate. I'm willing to accept a world which is inconvenient, to a point. But the person who you are arguing with is acting extremely similarly to someone who has an invisible dragon in his garage. So if I don't believe in the invisible dragon, why do you trust this person?

But now suppose that we say to the claimant, "Okay, we'll visit the garage and see if we can hear heavy breathing," and the claimant quickly says no, it's an inaudible dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, "The dragon is permeable to flour."

Now, it is further possible to say "Your possible skepticism is not an issue! This is the least convenient possible world, so before giving you the choice he hit you with a beam which generates complete trust in what he says. You are completely sure the threat is real."

But that doesn't help. It's easy to say "The threat of 3^^^3 dust specks is real." I was taking that as a given BEFORE I started considering the problem harder. It was only after considering the problem harder that I realized it is potentially possible that that part is fake, and that this is going to involve an iterated loop.

(As a brief explanation, to make the mistake worse on the small risk side, any scenario slightly similar to this almost always involves an iterated loop, where you have to torture another person, and then another person, and then another person, and then you consign the entire earth to a lifetime of torture against a threat that was never real in the first place.)

I really am going to have to give this problem more thought to try to lay out the implications. It's a really good problem that way.

Edit: Eliezer didn't write Least Convenient Possible World. Yvain did. Fixed.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-11T14:41:03.535Z · LW(p) · GW(p)

As a side note, I think belief in belief may relate. I'm willing to accept a world which is inconvenient, to a point. But the person who you are arguing with is acting extremely similarly to someone who has an invisible dragon in his garage.

No. Just no. A factual claim and a hypothetical thought-experiment are not the same thing. When you object to the details of the hypothetical thought experiment, and you nonetheless aren't convinced by any modifications correcting it either, then you're simply showing that this isn't your true objection.

So many people seem to be trying to find ways to dance around the simple plain issue of whether we should consider the multiplication of small disutilities to possibly be morally equivalent (or worse) to a single humongous disutility.

On my part I say simply: YES. Torturing a person for 50 years is morally better than inflicting the momentary annoyance of a single dust speck on each of 3^^^3 people. I don't see much sense in any arguments more complicated than a multiplication.

As simple and plain as that. As for people who've already written their bottom line differently, I am sure they can find whatever random excuses they want to explain it. But I just urge them to introspect for a sec and actually see whether that bottom line was actually affected at all by the argument they placed above it.

Replies from: NihilCredo, None
comment by NihilCredo · 2011-10-13T09:11:35.154Z · LW(p) · GW(p)

So many people seem to be trying to find ways to dance around the simple plain issue of whether we should consider the multiplication of small disutilities to possibly be morally equivalent (or worse) to a single humongous disutility.

On my part I say simply: YES. Torturing a person for 50 years is morally better than inflicting the momentary annoyance of a single dust speck on each of 3^^^3 people. I don't see much sense in any arguments more complicated than a multiplication.

I agree that this is the critical point, but you present this disagreement as if multiplying was the default approach, and the burden of proof fell entirely on any different evaluation method.

Myself, however, I've never heard a meaningful, persuasive argument in favour of naive utilitarian multiplication in the first place. I do believe that there is some humongous x_John above which it will be John's rational preference to take a 1/x_John chance of torture rather than suffer a dust speck. But I do not believe that a dust speck in Alice's eye is abstractly commensurable with a dust speck in Bob's eye, or Alice's torture with Bob's torture, and a fortiori I also do not believe that 3^^^3 dust specks are commensurable with one random torture.

If John has to make a choice between the two (assuming he isn't one of the affected people), he needs to consider the two possible worlds as a whole and decide which one he likes better, and he might have all sorts of reasons for favouring the dust speck world - for example, he might place some value on fairness.

comment by [deleted] · 2011-10-11T15:29:14.943Z · LW(p) · GW(p)

I already came to that conclusion (Torture) when I posted on October 7th in the previous thread. When I was thinking about the problem again on October 11th, I didn't want to just repeat the exact same cached thoughts again, so I tried to think "Is there anything else about the problem I'm not thinking about?"

And then I thought "Oh look, concepts similar to what other people mentioned on blog posts I've read. I'll use THEIR cached thoughts. Aren't I wise, parroting back the words of expert rationalists?" But that's a horrible method of thinking that won't get me anywhere.

Furthermore, I ended my post with "I'm going to have to give this more thought." Which is a stupid way to end a post. If it needs more thought, it needs more thought, so why did I post it?

So actually, I agree with your down vote. In retrospect, there are several reasons why that is actually a bad post, even though it seemed to make sense at the time.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-11T15:47:20.047Z · LW(p) · GW(p)

So actually, I agree with your down vote.

For clarity: I didn't downvote you.

Replies from: None
comment by [deleted] · 2011-10-11T17:15:06.991Z · LW(p) · GW(p)

Thank you for the clarification about that.

Either way I'll retract it but not blank it: Checking on other threads seems to indicate that blanking is inappropriate because it leaves some people wondering what was said, but it should be retracted.

comment by DanielLC · 2011-10-11T04:56:15.769Z · LW(p) · GW(p)

Here's a good way of looking at the problem.

Presumably, there's going to be some variation with how the people are feeling. Given 3^^^3 people, this will mean that I can pretty much find someone under any given amount of pleasure/pain.

Suppose I find someone, Bob, with the same baseline happiness as the girl we're suggesting torturing, Alice. I put a speck of dust in his eye. I then find someone with a nigh-infinitesimally worse baseline, Charlie, and do it again. I keep this up until I get to a guy, Zack, who, after putting the dust speck in his eye, is at the same happiness as Alice would be if she were tortured.

To put numbers on this:

Alice and Bob have a base pain of 0, Charlie has 1, Diane has 2, ... Zack has 999,999,999,999. I then add one unit of pain to every person except Alice. Now Alice has 0, Bob has 1, Charlie has 2, ... Yaana has 999,999,999,999, Zack has 1,000,000,000,000. I could instead torture one person. Alice has 1,000,000,000,000, Bob has 0, Charlie has 1, ... Zack has 999,999,999,999. In other words, Bob has 0, Charlie has 1, Diane has 2, ... Zack has 999,999,999,999, Alice has 1,000,000,000,000.

It's the same numbers both ways -- just different people. The only way you could decide which is better is if you care more or less than average about Alice.

Of course, this is just using 1,000,000,000,000 of 3^^^3 people. Add in another trillion, and now it's like torturing two people. Add in another trillion, and it's worse still. You get the idea.
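If it helps, here's the same bookkeeping as a small Python sketch, scaled down from a trillion people to 1,000 so it actually runs (the structure of the argument is unchanged):

    N = 1_000                  # stand-in for 1,000,000,000,000
    baseline = list(range(N))  # Bob, Charlie, ..., Zack at baseline pains 0, 1, ..., N-1
    alice = 0                  # Alice's baseline pain

    # Option 1: dust-speck everyone except Alice (+1 pain each)
    specks = sorted([alice] + [p + 1 for p in baseline])

    # Option 2: torture Alice up to pain N, leave everyone else alone
    torture = sorted([N] + baseline)

    print(specks == torture)   # True: the same multiset of pain levels, just on different people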

Replies from: None, MinibearRex, Xece, Incorrect
comment by [deleted] · 2011-10-11T10:44:32.541Z · LW(p) · GW(p)

Presumably, there's going to be some variation with how the people are feeling. Given 3^^^3 people, this will mean that I can pretty much find someone under any given amount of pleasure/pain.

...

It's the same numbers both ways -- just different people. The only way you could decide which is better is if you care more or less than average about Alice.

If Yudkowsky had set up his thought experiment in this way, I would agree with him. But I don't believe there's any reason to expect there to be a distribution of pain in the way that you describe - or in any case it seems like Yudkowsky's point should generalise, and I'm not sure that it does.

If all 3^^^3 + 1 people are on the pain level of 0, and then I have the choice of bringing them all up to pain level 1 or leaving 3^^^3 of them on pain level 0 and bringing one of them up to pain level 1,000,000,000,000 - I would choose the former.

I may have increased the number of pain units in existence, but my value computation doesn't work by adding up "pain units". I'm almost entirely unconcerned about 3^^^3 people experiencing pain level 1; they haven't reached my threshold for caring about the pain they are experiencing. On the other hand, the individual being tortured is way above this threshold and so I do care about him.

I don't know where the threshold(s) are, but I'm sure that if my brain was examined closely there would be some arbitrary points at which it decides that someone else's pain level has become intolerable. Since these jumps are arbitrary, this would seem to break the idea that "pain units" are additive.

Replies from: Multipartite, DanielLC
comment by Multipartite · 2011-10-11T19:30:10.526Z · LW(p) · GW(p)

Is the distribution necessary (other than as a thought experiment)?

Simplifying to a 0->3 case: If changing (in the entire universe, say) all 0->1, all 1->2, and all 2->3 is judged as worse than changing one person's 0->3 --for the reason that, for an even distribution, the 1s and 2s would stay the same number and the 3s would increase with the 0s decreasing-- then for what hypothetical distribution would it be even worse and for what hypothetical distribution would it be less bad? Is it worse if there are only 0s who all become 1s, or is it worse if there are only 2s who all become 3s? Is a dust speck classed as worse if you do it to someone being tortured than to someone in a normal life or vice versa, or is it just as bad no matter what the distribution, in which case the distribution is unimportant?

...then again, if one weighs matters solely on the magnitude of individual change, then that greater difference can appear and disappear like a mirage when one shifts back and forth between considering those involved collectively or reductionistically... hrm. | Intuitively speaking, it seems inconsistent to state that 4A, 4B and 4C are acceptable, but A+B+C is not acceptable (where A is N people 0->1, B is N 1->2, C is N 2->3).

...the aim of the even distribution example is perhaps to show that by the magnitude-difference measurement the outcome can be worse, then break it down to show that for uneven cases too the suffering inflicted is equivalent and so for consistency one must continue to view it as worse...

(Again, this time shifting it to a 0-1-2, why would it be {unacceptable for N people to be 1->2 if and only if N people were also 0->1, but not unacceptable for N people to be 1->2 if 2N more people were 1->2} /and also/ {unacceptable for N people to be 0->1 if and only if N people were also 1->2, but not unacceptable for N people to be 0->1 if 2N more people were 0->1}?)


The arbitrary points concept, rather than a smooth gradient, is also a reasonable point to consider. For a smooth gradient, the more pain another person is going through, the more objectionable it is. For an arbitrary threshold, one could find someone suffering greatly to not be an objectionable thing, yet find someone else suffering by a negligible amount more to be a significantly objectionable thing. Officially adopting such a cut-off point for sympathy--particularly one based on an arbitrarily-arrived-at brain structure rather than well-founded ethical/moral reasoning--would seem to be incompatible with true benevolence and desire for others' well-being, suggesting that even if such arbitrary thresholds exist we should aim to act as though they did not.

(In other words, if we know that we are liable to not scale our contribution depending on the scale of (the results of) what we're contributing towards, we should aim to take that into account and deliberately, manually, impose the scaling that otherwise would have been left out of our considerations. In this situation, if as a rule of thumb we tend to ignore low suffering and pay attention to high suffering, we should take care to acknowledge the unpleasantness of all suffering and act appropriately when considering decisions that could control such suffering.)

(Preferable to not look back in the future and realise that, because of overreliance on hardwired rules of thumb, one had taken actions which betrayed one's true system of values. If deliberately rewiring one's brain to eliminate the cut-off crutches, say, one would hopefully prefer to at that time not be horrified by one's previous actions, but rather be pleased at how much easier taking the same actions has become. Undesirable to resign oneself to being a slave of one's default behaviour.)

comment by DanielLC · 2011-10-11T19:03:48.753Z · LW(p) · GW(p)

Why would they all be at pain number zero? I'd expect them to be randomly distributed in all their traits unless specified otherwise. If I give them a mean pain of zero and a standard deviation of 1, there'd be no shortage of people with a pain level of 1,000,000,000,000. The same goes for any reasonable distribution.

If you play around with my paradox a bit more, you can work out that if you have 1,000,000,000,000 people at pain level n, and one person at pain level zero, there must be some n between 0 and 999,999,999,999 such that it's at least as bad to torture the one person as to give the rest dust specks.

Where is the marginal disutility like that? If you have 1,000,000,000,000 people at pain 999,999,999,999, and one at pain 0, would you rather torture the one, or give the 1,000,000,000,000 dust specks?

they haven't reached my threshold for caring about the pain they are experiencing

So, are you saying that there's a threshold x, such that any amount of pain less than x doesn't matter? This would mean that increasing it from x-1 to x for 3^^^3 people would do nothing, but increasing it from x to x+1 would be horrible? Put another way, you have 3^^^3 people at a pain level of x-1, and you give them all one dust speck. This doesn't matter. If you give them a second dust speck, now it's an unimaginable atrocity.

I would expect a cutoff like this would be an approximation. You'd actually think that the marginal disutility of pain starts out at zero, and steadily increases until it approaches one. If this were true, one dust speck would bring the pain to 1, which would make the marginal disutility slightly above zero, so that would have some tiny amount of badness. If you multiply it by 3^^^3, now it's unimaginable.

Replies from: None
comment by [deleted] · 2011-10-11T19:24:20.279Z · LW(p) · GW(p)

Why would they all be at pain number zero? I'd expect them to be randomly distributed in all their traits unless specified otherwise. If I give them a mean pain of zero and a standard deviation of 1, there'd be no shortage of people with a pain level of 1,000,000,000,000. The same goes for any reasonable distribution.

It's a thought experiment. The whole scenario is utterly far-fetched, so there's no use in arguing that this or that detail of the thought experiment is what we should "expect" to find.

As such, I choose the version of the thought experiment that best teases out the dilemma that Yudkowsky is trying to explore, which concerns the question of whether we should consider pain to be denominated all in the same units - i.e. 3^^^3 x miniscule pain > 1 x torture - in our moral calculations.

EDIT: in response to the rest of your comment, see my reply to "Unnamed".

comment by MinibearRex · 2011-10-11T21:26:32.475Z · LW(p) · GW(p)

To get Eliezer's point, make the world more inconvenient. 3^^^3 people all with equivalent pain tolerances to you getting dust specks in their eyes, or torture one person for 50 years.

comment by Xece · 2011-10-11T06:34:42.393Z · LW(p) · GW(p)

I believe the problem with this is that you have given actual values (pain units), and equated the two levels of "torture" outlined in the original thought experiment. Specifically, equating one trillion humans with a dust speck in the eye and Alice being tortured.

Replies from: DanielLC
comment by DanielLC · 2011-10-11T19:08:45.066Z · LW(p) · GW(p)

So, what's the problem? Is a dust speck incomparable to torture? A dust speck is comparable to something slightly worse than a dust speck, which is comparable to something slightly worse than that, etc. At some point, you'll compare dust specks to torture. You may not live long enough to follow that out explicitly, just like you could never start with one grain of sand and keep adding them one at a time to get a beach, but the comparison still exists.

Replies from: shminux
comment by shminux · 2011-10-11T19:51:01.742Z · LW(p) · GW(p)

No comparison exists if, as I mentioned in my other post, the fleeting discomfort is lost in the noise of other minor nuisances and has no lasting effect. One blink, and the whole thing is forgotten forever, quickly replaced by an itch in your bum, flickering fluorescent light overhead, your roommate coughing loudly, or an annoying comment on LW.

Replies from: DanielLC
comment by DanielLC · 2011-10-11T19:57:23.205Z · LW(p) · GW(p)

the fleeting discomfort is lost in the noise of other minor nuisances and has no lasting effect.

One speck of sand will be lost in a beach, but adding a speck of sand will still make it a bigger beach, and adding 3^^^3 specks of sand will make it a black hole.

has no lasting effect.

You notice it while it's happening. You forget about it eventually, but even if you were tortured for 3^^^3 years before finally dying, you'd forget it all the moment you die.

Replies from: shminux, see
comment by shminux · 2011-10-11T20:23:12.708Z · LW(p) · GW(p)

One speck of sand will be lost in a beach, but adding a speck of sand will still make it a bigger beach, and adding 3^^^3 specks of sand will make it a black hole.

I consider it a faulty analogy. Here is one I like better: if the said speck of dust disintegrates into nothing after an instant, there is no bigger beach and no black hole.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-12T11:37:38.884Z · LW(p) · GW(p)

If you consider the disutility of the dust speck zero, because the brief annoyance will be forgotten, then can the disutility of the torture also be made into zero, if we merely add the stipulation that the tortured person will then have the memory of this torture completely erased and the state of their mind reverted to what it had been before the torture?

Replies from: shminux
comment by shminux · 2011-10-12T17:53:53.013Z · LW(p) · GW(p)

This is an interesting question, but it seems to be in a different realm. For example, it could be reformulated as follows: is this 50-year torture option that bad if it is parceled into 1-second chunks, any memory of each one is erased immediately, and it has no lasting side effects?

For the purpose of this discussion, I assume that it is 50 dismal years with all the memories associated and accumulated all the way through and thereafter. In that sense it is qualitatively in a different category than a dust speck. This might not be yours (or EY's) interpretation.

comment by see · 2011-10-12T11:35:57.462Z · LW(p) · GW(p)

One speck of sand will be lost in a beach, but adding a speck of sand will still make it a bigger beach, and adding 3^^^3 specks of sand will make it a black hole.

6 × 10^30 kilograms of sand on one beach on one inhabited planet will collapse it into a black hole, which is a far, far smaller amount of mass than 3^^^3 molecules of silicon dioxide. But adding one molecule of silicon dioxide to each of 3^^^3 beaches on inhabited planets throughout as many universes as necessary seems to cause far less disutility than adding 6 × 10^30 kilograms of sand to one beach on one inhabited planet.

Is the problem that we're unable to do math? You can't possibly say one molecule of silicon dioxide is incomparable to 6 × 10^30 kilograms of sand, can you? They're indisputably the same substance, after all; 6 × 10^55 molecules of SiO2 is 6 × 10^30 kilograms of sand. Even if you make the disutility nonlinear, you have to do something really, really extreme to overcome 3^^^3 . . . and if you do that, why, let's substitute in 3^^^^3 or 3^^^^^3 instead.

Is the problem that we are failing to evaluate what happens if everybody else makes the same decision? If 6 × 10^55 people were given the decision and they all chose the molecule, 3^^^3 inhabited planets are converted into black holes, while if they all made the other choice, only 6 × 10^55 planets would be. So when faced with an option that seems to cause no disutility, must we annihilate seven billion people because, if enough other people made our decision, it would be far worse than if we and all of them made the other?
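For what it's worth, the molecule-to-mass conversion above checks out; a rough Python sketch, taking SiO2 at roughly 60 g/mol:

    AVOGADRO = 6.022e23
    MOLAR_MASS_SIO2_KG = 0.060   # kilograms per mole of SiO2 (approximate)

    molecules = 6e55
    mass_kg = molecules / AVOGADRO * MOLAR_MASS_SIO2_KG
    print(f"{mass_kg:.1e} kg")   # ~6.0e+30 kg, i.e. roughly three solar masses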

Replies from: DanielLC, wedrifid
comment by DanielLC · 2011-10-13T00:45:34.806Z · LW(p) · GW(p)

My point wasn't so much that it will cause a black hole, as that a tiny amount of disutility times 3^^^3 is going to be unimaginably horrible, regardless of how small the disutility is.

Is the problem that we are failing to evaluate what happens if everybody else makes the same decision?

That's not the problem at all. Thinking about that is a good sanity check. If it's good to make that decision once, it's better to make it 10^30 times. However, it's only a sanity check. Everybody isn't going to make the same decision as you, so there's no reason to assume they will.

comment by wedrifid · 2011-10-12T13:05:55.154Z · LW(p) · GW(p)

6 × 10^30 kilograms of sand on one beach on one inhabited planet will collapse it into a black hole, which is a far, far smaller amount of mass than 3^^^3 molecules of silicon dioxide. But adding one molecule of silicon dioxide to each of 3^^^3 beaches on inhabited planets throughout as many universes as necessary seems to cause far less disutility than adding 6 × 10^30 kilograms of sand to one beach on one inhabited planet.

The analogy does not fit. Dust specks have an approximately known small negative utility. The benefit or detriment of adding sand to the beaches is not specified one way or the other. If it were specified, then I'd be able to tell you whether it sounds better or worse than destroying a planet.

comment by Incorrect · 2011-10-11T06:05:01.020Z · LW(p) · GW(p)

The original thought experiment is used to provide a pure example of quantifying and comparing arbitrary levels of suffering as a test to see whether we support such a type of utilitarian consequentialism.

By comparing torture to torture, you are changing the scenario to test a slightly weaker version of the original type of utilitarian consequentialism: one in which you still quantify and compare arbitrary changes in suffering against arbitrary absolute levels of suffering, but without necessarily allowing the two absolute levels of suffering to be arbitrary with respect to each other.

If anyone could rewrite this comment to be comprehensible I would appreciate it.

comment by Unnamed · 2011-10-11T17:07:09.122Z · LW(p) · GW(p)

Another way to reach the conclusion that dust specks are worse is by transitivity. Consider something that is slightly worse than getting a dust speck in your eye. For instance, maybe hearing the annoying sound of static on television is just a bit worse, as long as it's relatively brief and low volume. Now,

1a. Which is worse: everyone on Earth gets a dust speck in their eye, or one person hears a second of the annoying sound of static on a television with the volume set at a fairly low level [presumably you think that the dust specks are worse]
1b. Which is worse: one person briefly hears static, or 7 billion people each get a dust speck [generalizing 1a, to not depend on population of Earth or fact that it's "everyone"]
1c. Which is worse: n people briefly hear static, or (7 billion) x n people get a dust speck [generalizing 1b, equivalent to repeating 1b n times]

Now, consider something that is slightly worse than the static (or whatever you picked). For instance, maybe someone lightly flicking their finger into the palm of your hand is a bit more unpleasant.

2a. Which is worse: everyone on Earth hears a second of the annoying sound of fairly low volume static, or one person gets lightly flicked in the palm, the sensation of which fades entirely within a few seconds
2b. Which is worse: one person gets lightly flicked in the palm, or 7 billion people each briefly hear static
2c. Which is worse: n people get lightly flicked in the palm, or (7 billion) x n people each briefly hear static
2d. Which is worse: n people get lightly flicked in the palm, or (7 billion)^2 x n people get a dust speck [transitivity, from 1c & 2c]

And keep gradually increasing the badness of the alternative, with dust specks remaining the worse option (by transitivity), until you get to:

10000a. Which is worse: everyone on Earth gets 25 years of torture, or one person gets 50 years of torture
10000b. Which is worse: one person gets 50 years of torture, or 7 billion people each get 25 years of torture
10000c. Which is worse: n people get 50 years of torture, or (7 billion) x n people get 25 years of torture
10000d. Which is worse: n people get 50 years of torture, or (7 billion)^10000 x n people get a dust speck

10000e. Which is worse: 1 person gets 50 years of torture, or 3^^^3 people get a dust speck [from 10000d, letting n=1 and drastically increasing the number of dust specks]

If you think that torture is worse than dust specks, at what step do you not go along with the reasoning?
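As a rough sense of scale (my arithmetic, not part of the argument): each step multiplies the number of speck recipients by 7 billion, so after 10,000 steps the dust-speck side involves about 10^98,450 people. Even that number is nothing next to 3^^^3, which is why the final jump in 10000e is the least demanding step of all.

    import math

    steps = 10_000
    # decimal digit count of (7 billion)^10000, via logarithms
    digits = math.floor(steps * math.log10(7e9)) + 1
    print(digits)  # 98451, i.e. (7e9)^10000 is about 10^98450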

Replies from: None
comment by [deleted] · 2011-10-11T19:55:50.137Z · LW(p) · GW(p)

If you think that torture is worse than dust specks, at what step do you not go along with the reasoning?

When I first read Eliezer's post on this subject, I was confused by this transitivity argument. It seems reasonable. But even at that point, I questioned the idea that if all of the steps as you outline them seem individually reasonable, but torture instead of dust specks seems unreasonable, it is "obvious" that I should privilege the former output of my value computation over the latter.

My position now is that in fact, thinking carefully about the steps of gradually increasing pain, there will be at least one that I object to (but it's easy to miss because the step isn't actually written down). There is a degree of pain that I experience that is tolerable. Ouch! That's painful. There is an infinitesimally greater degree of pain (although the precise point at which this occurs, in terms of physical causes, depends on my mood or brain state at that particular time) that is just too much. Curses to this pain! I cannot bear this pain!

This seems like a reasonable candidate for the step at which I stop you and say no, actually I would prefer any number of people to experience the former pain, rather than one having to bear the latter - that difference just barely exceeded my basic tolerance for pain. Of course we are talking about the same subjective level of pain in different people - not necessarily caused by the same severity of physical incident.

This doesn't seem ideal. However, it is more compatible with my value computation than the idea of torturing someone for the sake of 3^^^3 people with dust specks in their eyes.

Replies from: Multipartite
comment by Multipartite · 2011-10-15T22:35:29.312Z · LW(p) · GW(p)

I can somewhat sympathise, in that when removing a plaster I prefer to remove it slowly, for a longer bearable pain, than quickly for a brief unbearable pain. However, this can only be extended so far: there is a set (expected) length of continuing bearable pain over which one would choose to eliminate the entire thing with brief unbearable pain, as with tooth disease and (hypothetical) dentistry, or an unpleasant-but-survivable illness and (phobic) vaccination.

'prefer any number of people to experience the former pain, rather than one having to bear the latter': applying this across time as well as across numbers, one can reach the state of comparing {one person suffering brief unbearable pain} to {a world of pain, every person constantly existing just at the threshold at which it's possible to not go insane}. Somewhat selfishly casting oneself in the position of potential sufferer and chooser, should one look on such a world of pain and pronounce it to be acceptable as long as one does not have to undergo a moment of unbearable pain? Is the suffering one would undergo truly weightier than the suffering the civilisation would labor under?

The above question is arguably unfair both in that I've extended across time without checking acceptability, and also in that I've put the chooser in the position of a sacrificer. For the second part, hopefully it can be resolved by letting it be given that the chooser does not notably value another's suffering above or below the importance of the chooser's own. (Then again, maybe not.)

As for time, can an infinite number of different people suffering a certain thing for one second be determined to be at least no less than a single person suffering the same thing for five seconds? If so, then one can hopefully extend suffering in time as well as across numbers, and thus validly reach the 'world of pain versus moment of anguish' situation.

(In regard to privileging, note that dealing with large numbers is known to cause failure of degree appreciation due to the brain's limitations, whereas induction tends to be reliable.)

comment by shminux · 2011-10-11T16:36:28.462Z · LW(p) · GW(p)

Color me irrational, but in the problem as stated (a dust speck is a minor inconvenience, with zero chance of other consequences, unlike what some commenters suggest), there is no number of specks large enough to outweigh lasting torture (which ought to be properly defined, of course).

After digging through my inner utilities, the reason for my "obvious" choice is that everyone goes through minor annoyances all the time, and another speck of dust would be lost in the noise.

In a world where a speck of dust in the eye is a BIG DEAL, because life is otherwise so PERFECT, even one speck is noticed and not quickly forgotten, and such occurrences can be accumulated and compared with torture. However, this was not specified in the original problem, so I assume that people live through calamities of the speck-of-dust magnitude all the time, and adding one more changes nothing.

Replies from: Jack, ArisKatsaris, jhuffman
comment by Jack · 2011-10-12T03:15:42.237Z · LW(p) · GW(p)

Eliezer's question for you is "would you give one penny to prevent the 3^^^3 dust specks?"

comment by ArisKatsaris · 2011-10-11T20:07:20.715Z · LW(p) · GW(p)

And tell me, in a universe where a trillion agents individually decide that adding a speck of dust to the lives of 3^^^3 people is in your words "NOT A BIG DEAL", and the end result is that you personally end up with a trillion specks of dust (each of them individually NOT A BIG DEAL), which leave you (and entire multiverses of beings) effectively blind -- are they collectively still not a big deal then?

If it will be a big deal in such a scenario, then can you tell me which ones of the above trillion agents should have preferred to go with torturing a single person instead, and how they would be able to modify their decision theory to serve that purpose, if they individually must choose the specks but collectively must choose the torture (lest they leave entire multiverses and omniverses entirely blind)?

Replies from: Jack, shminux
comment by Jack · 2011-10-11T21:35:43.420Z · LW(p) · GW(p)

If you have reason to suspect a trillion people are making the same decision over the same set of people the calculation changes since dust specks in the same eye do not scale linearly.

comment by shminux · 2011-10-11T20:15:33.336Z · LW(p) · GW(p)

which leave you (and entire multiverses of beings) effectively blind

I stipulated "noticed and not quickly forgotten" would be my condition for considering the other choice. Certainly being buried under a mountain of sand would qualify as noticeable by the unfortunate recipient.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-11T20:30:24.794Z · LW(p) · GW(p)

But each individual dust speck wouldn't be noticeable, and that's all each individual agent decides to add - an individual dust speck to the life of each such victim.

So, again, what decision theory can somehow dismiss the individual effect as you would have it do, and yet take into account the collective effect?

Replies from: shminux
comment by shminux · 2011-10-11T21:18:36.606Z · LW(p) · GW(p)

My personal decision theory has no problems dismissing noise-level influences, because they do not matter.

You keep trying to replace the original problem with your own: "how many sand specks constitute a heap?" This is not at issue here, as no heap is ever formed for any single one of the 3^^^3 people.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-11T21:25:31.136Z · LW(p) · GW(p)

no heap is ever formed for anyone of the 3^^^3 people.

That's not one of the guarantees you're given, that a trillion other agents won't be given similar choices. You're not given the guarantee that your dilemma between minute disutility for astronomical numbers, and a single huge disutility will be the only such dilemma anyone will ever have in the history of the universe, and you don't have the guarantee that the decisions of a trillion different agents won't pile up.

Replies from: shminux
comment by shminux · 2011-10-11T21:37:10.438Z · LW(p) · GW(p)

Well, it looks like we found the root of our disagreement: I take the original problem literally, one blink and THAT'S IT, while you say "you don't have the guarantee that the decisions of a trillion different agents won't pile up".

My version has an obvious solution (no torture), while yours has to be analyzed in detail for every possible potential pile up, and the impact has to be carefully calculated based on its probability, the number of people involved, and any other conceivable and inconceivable (i.e. at the probability level of 1/3^^^3) factors.

Until and unless there is a compelling evidence of an inevitable pile-up, I pick the no-torture solution. Feel free to prove that in a large chunk (>50%?) of all the impossible possible worlds the pile-up happens, and I will be happy to reevaluate my answer.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-11T21:56:07.593Z · LW(p) · GW(p)

take the original problem literally, one blink and THAT'S IT

Every election is stolen one vote at a time.

My version has an obvious solution (no torture),

My version has also an obvious solution - choosing not to inflict disutility on 3^^^3 people.

and the impact has to be carefully calculated based on its probability,

That's the useful thing about having such an absurdly large number as 3^^^3. We don't really need to calculate it, "3^^^3" just wins. And if you feel it doesn't win, then 3^^^^3 would win. Or 3^^^^^3. Add as many carets as you feel are necessary.

while yours has to be analyzed in detail for every possible potential pile up,

Thinking about whether the world would be better or worse if everyone decided as you did is really one of the fundamental methods of ethics, not a random bizarre scenario I just concocted for this experiment.

Point is: If everyone decided as you would, it would pile up, and universes would be doomed to blindness. If everyone decided as I would, they would not pile up.

Replies from: shminux, shminux
comment by shminux · 2011-10-11T22:11:07.558Z · LW(p) · GW(p)

If everyone decided as you would, it would pile up

Prove it.

comment by shminux · 2011-10-11T22:16:23.370Z · LW(p) · GW(p)

That's the useful thing about having such an absurdly large number as 3^^^3. We don't really need to calculate it, "3^^^3" just wins.

At this level, so many different low-probability factors come into play (e.g. blinking could be good for you because it reduces incidence of eye problems in some cases), that "choosing not to inflict disutility" relies on an unproven assumption that utility of blinking is always negative, no exceptions.

I reject unproven assumptions as torture justifications.

Replies from: dlthomas, ArisKatsaris
comment by dlthomas · 2011-10-11T22:26:38.374Z · LW(p) · GW(p)

If the dust speck has a slight tendency to be bad, 3^^^3 wins.

If it does not have a slight tendency to be bad, it is not "the least bad bad thing that can happen to someone" - pick something worse for the thought experiment.

Replies from: shminux
comment by shminux · 2011-10-11T23:31:02.629Z · LW(p) · GW(p)

If the dust speck has a slight tendency to be bad, 3^^^3 wins.

Only if you agree to follow EY in consolidating many different utilities in every possible case into one all-encompassing number, something I am yet to be convinced of, but that is beside the point, I suppose.

If it does not have a slight tendency to be bad, it is not "the least bad bad thing that can happen to someone" - pick something worse for the thought experiment.

Sure, if you pick something with a guaranteed negative utility and you think that there should be one number to bind them all, I grant your point.

However, this is not how the problem appears to me. A single speck in the eye has such an insignificant utility, there is no way to estimate its effects without knowing a lot more about the problem.

Basically, I am uncomfortable with the following somewhat implicit assumptions, all of which are required to pick torture over nuisance:

  • a tiny utility can be reasonably well estimated, even up to a sign
  • zillions of those utilities can be combined into one single number using a monotonic function
  • these utilities do not interact in any way that would make their combination change sign
  • the resulting number is invariably useful for decision making

A breakdown in any of these assumptions would mean needless torture of a human being, and I do not have enough confidence in EY's theoretical work to stake my decision on it.

Replies from: dlthomas
comment by dlthomas · 2011-10-11T23:57:58.593Z · LW(p) · GW(p)

Only if you agree to follow EY in consolidating many different utilities in every possible case into one all-encompassing number, something I am yet to be convinced of, but that is beside the point, I suppose.

If you have a preference for some outcomes versus other outcomes, you are effectively assigning a single number to those outcomes. The method of combining these is certainly a viable topic for dispute - I raised that point myself quite recently.

Sure, if you pick something with a guaranteed negative utility and you think that there should be one number to bind them all, I grant your point.

However, this is not how the problem appears to me. A single speck in the eye has such an insignificant utility, there is no way to estimate its effects without knowing a lot more about the problem.

It was quite explicitly made a part of the original formulation of the problem.

Considering the assumptions you are unwilling to make:

  • tiny utility can be reasonably well estimated, even up to a sign

As I've been saying, there quite clearly seem to be things that fall in the realm of "I am confident this is typically a bad thing" and "it runs counter to my intuition that I would prefer torture to this, regardless of how many people it applied to".

  • the resulting number is invariably useful for decision making

I addressed this at the top of this post.

  • zillions of those utilities can be combined into one single number using a monotonic function
  • these utilities do not interact in any way that would make their combination change sign

I think it's clear that there must be some means of combining individual preferences into moral judgments, if there is a morality at all. I am not certain that it can be done with the utility numbers alone. I am reasonably certain that it is monotonic - I cannot conceive of a situation where we would prefer some people to be less happy just for the sake of them being less happy. What is needed here is more than just monotonicity, however - it is necessary that it be divergent with fixed utility across infinite people. I raise this point here, and at this point think this is the closest to a reasonable attack on Eliezer's argument.

On balance, I think Eliezer is likely to be correct; I do not have sufficient worry that I would stake some percent of 3^^^3 utilons on the contrary and would presently pick torture if I was truly confronted with this situation and didn't have more time to discuss, debate, and analyze. Given that there is insufficient stuff in the universe to make 3^^^3 dust specks, much less the eyes for them to fly into, I am supremely confident that I won't be confronted with this choice any time soon.

comment by ArisKatsaris · 2011-10-11T22:29:05.827Z · LW(p) · GW(p)

The point of "torture vs specks" is whether enough tiny disutilities can add up to something bigger than a single huge disutility. To argue that specks may on average have positive utility kinda misses the point, because the point we're debating isn't the value of a dust speck, or a sneeze, or a stubbed toe, or an itchy butt, or whatever -- we're just using dust speck as an example of the tiniest bit of disutility you can imagine, but which nonetheless we can agree is disutility.

If dust specks don't suit you for this purpose, find another bit of tiny disutility, as tiny as you can make it.

(As a sidenote, the point is missed in the opposite direction by those who say "well, say there's a one billionth chance of a dust speck causing a fatal accident, you would then be killing untold numbers of people if you inflicted 3^^^^3 specks." -- these people don't add up tiny disutilities, they add up tiny probabilities. They make the right decision in rejecting the specks, but it's not the actual point of the question.)

I reject unproven assumptions as torture justifications.

Well, I can reject your unproven assumptions as justifications for inflicting disutility on 3^^^3 people, the same way that I suppose spammers can excuse billions of spam emails by saying to themselves "it just takes a second to delete it, so it doesn't hurt anyone much", while not considering that, multiplied, this means they've wasted billions of seconds of people's lives...

comment by jhuffman · 2011-10-11T19:05:55.060Z · LW(p) · GW(p)

I think the purpose of this article is to point to some intuitive failures of a simple linear utility function. In other words, probably everyone who reads it agrees with you. The real challenge is in creating a utility function that wouldn't output the wrong answer on corner cases like this.

Replies from: Jack, MinibearRex, ArisKatsaris
comment by Jack · 2011-10-11T21:23:05.637Z · LW(p) · GW(p)

No. No, that is not the purpose of the article.

Replies from: jhuffman
comment by jhuffman · 2011-10-12T13:52:01.271Z · LW(p) · GW(p)

Sorry I've read that and still don't know what it is that I've got wrong. Does this article not indicate a problem with simple linear utility functions, or is that not its purpose?

comment by MinibearRex · 2011-10-11T21:31:22.375Z · LW(p) · GW(p)

Eliezer disagrees

Replies from: shminux
comment by shminux · 2011-10-11T21:50:36.145Z · LW(p) · GW(p)

His point of view is

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant.

whereas I and many others appeal to zero-aggregation, which indeed reduces any finite number (and hence the limit when this aggregation is taken to infinity) to zero.

The distinction is not that of rationality vs irrationality (e.g. scope insensitivity), but of the problem setup.

Replies from: MinibearRex
comment by MinibearRex · 2011-10-12T03:42:11.849Z · LW(p) · GW(p)

If you can explain zero aggregation in more detail, or point me to a reference, that would be appreciated, since I haven't seen any full discussion of it.

comment by ArisKatsaris · 2011-10-11T20:09:54.892Z · LW(p) · GW(p)

The wrong answer is preferring the specks, because that's the answer which, if a trillion people answered that way, would condemn whole universes to blindness (instead of a mere trillion beings to torture).

Replies from: Jack, see
comment by Jack · 2011-10-11T21:29:07.627Z · LW(p) · GW(p)

Adding multiple dust specks to the same people definitely removes the linear character of the dust speck harm-- if you take the number of dust specks necessary to make someone blind and spread them out to a lot more people you drastically reduce the total harm. So that is not an appropriate way of reformulating the question. You are correct that the specks are the "wrong answer" as far as the author is concerned.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-11T22:06:26.870Z · LW(p) · GW(p)

Did the people choosing "specks" ask whether the persons in question would have suffered other dust specks (or sneezes, hiccups, stubbed toes, etc.) immediately beforehand, from other agents potentially deciding as they did, when they chose "specks"?

Replies from: Jack
comment by Jack · 2011-10-11T22:25:59.865Z · LW(p) · GW(p)

Most people didn't, I suppose -- they were asked:

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

Which isn't the same as asking what people would do if they were given the power to choose one or the other. And even if people were asked the latter, it is plausible they would not assume the existence of a trillion other agents making the same decision over the same set of people. That's a rather non-obvious addition to a thought experiment which is already foreign to everyday experience.

In any case it's just not the point of the thought experiment. Take the least convenient possible world: do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-11T22:37:04.926Z · LW(p) · GW(p)

do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?

Yes. The consideration of what the world would look like if everyone chose the same as me is a useful intuition pumper, but it just illustrates the ethics of the situation, it doesn't truly modify them.

Any choice isn't really just about that particular choice, it's about the mechanism you use to arrive at that choice. If people believe that it doesn't matter how many people they each inflict tiny disutilities on, the world ends up worse off.

Replies from: Jack
comment by Jack · 2011-10-11T22:48:33.176Z · LW(p) · GW(p)

The point of the article is to illustrate scope insensitivity in the human utility function. Turning the problem into a collective action problem or an acausal decision theory problem by adding additional details to the hypothetical is not a useful intuition pump since it changes the entire character of the question.

For example, consider the following choice: You can give a gram of chocolate to 3^^^3 children who have never had chocolate before. Or you can torture someone for 50 years.

Easy. Everyone should have the same answer.

But wait! You forgot to consider that trillions of other people were being given the same choice! Now 3^^^3 children have diabetes.

This is exactly what you're doing with your intuition pump except the value of eating additional chocolate inverts at a certain point whereas dust specks in your eye get exponentially worse at a certain point. In both cases the utility function is not linear and thus distorts the problem.

comment by see · 2011-10-11T23:51:19.040Z · LW(p) · GW(p)

Only if you assume that the dust speck decisions must be made in utter ignorance of the (trillion-1) other decisions. If the ignorance is less than utter, a nonlinear utility function that accepts the one dust speck will stop making the decision in favor of dust specks before universes go blind.

For example, since I know how Texas will vote for President next year (it will give its Electoral College votes to the Republican), I can instead use my vote to signal which minor-party candidate strikes me as the most attractive, thus promoting his party relative to the others, without having to worry whether my vote will elect him or cost my preferred candidate the election. Obviously, if everyone else in Texas did the same, some minor party candidate would win, but that doesn't matter, because it isn't going to happen.

comment by Armok_GoB · 2011-11-10T21:17:25.904Z · LW(p) · GW(p)

Sorry I'm late. Anyway, this seems a good place to post my two (not quite) corollaries to the original post:

Corollary 1: You can choose either a or b: a) All currently alive humans, including you, will be tortured with superhuman proficiency for a billion years, with certainty. b) There is a 1-in-1,000,000 risk (otherwise nothing happens) that 3^^^3 animals get dust specks in their eyes. These animals have mental attributes that make them on average worth approximately 1/10^12 as much as a human. Further, the dust specks are so small only those with especially sensitive eyes (about 1 in a million) can even notice them.

Not-a-corollary 2: Choices are as follows: a) nothing happens b) 3^^^3 humans get tortured for 3^^^3 years, and there's a 1/3^^^3 chance a friendly AI is released into our universe and turns out to be able to travel to any number of other universes and persist in the multiverse creating Fun for eternity.

comment by fubarobfusco · 2011-10-12T06:42:48.737Z · LW(p) · GW(p)

Some considerations:

A dust speck takes a second to remove from your eye. But it is sufficiently painful, unpleasant, or distracting that you will take that second to remove it from your eye, forsaking all other actions or thoughts for that one second. If a typical human today can expect to live for 75 years, then one second is a one-in-2.3-billion part of a life. And that part of that life is indeed taken away from that person, since they surely are not pursuing any other end for the second it takes to remove that dust speck. If all moments of life were considered equal, then 2.3 billion dust specks would be equal to one life spent entirely dealing with constant — but instant, which is to say, memoryless — moments of unpleasant distraction.
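A quick check of that figure (a rough sketch, taking 75 years at 365.25 days per year):

    seconds_per_year = 365.25 * 24 * 3600
    life_seconds = 75 * seconds_per_year
    print(f"{life_seconds:.2e}")  # ~2.37e9 seconds, close to the 2.3 billion quoted above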


One of the things that is distracting about the word "torture" is that in our world, torture is something that is inflicted by some person. Someone in agonizing pain from, say, cancer, is not literally being tortured; that is, no agent chose to put that person in that situation. Human values consider the badness of an agent's intentional, malicious action to be worse than the equivalent consequence caused by non-agent phenomena. Torture implies a torturer.


It seems to me that one distinction between suffering and pain is that suffering includes a term for the knowledge that I am being diminished by what is happening to me: it is not merely negative utility, but undermines my ability to seek utility. Torture — actual torture — has further negative consequences after the torture itself is over: in diminution of the victim's physical and psychological health, alteration of their values and other aspects of their psyche. To ask me to envision "50 years of torture" followed by no further negative consequence is to ask me to envision something so contrary to fact as to become morally misleading in and of itself.


So rather than "torture vs. dust specks", if we say "fifty years of constant, memoryless, unpleasant distraction vs. 3^^^3 dust specks", then I would certainly favor DISTRACTION over SPECKS.

Replies from: Jack
comment by Jack · 2011-10-12T07:08:23.197Z · LW(p) · GW(p)

I think concentrating specks in one person over the course of her life increases the magnitude of the harm non-linearly.

Replies from: fubarobfusco
comment by fubarobfusco · 2011-10-12T18:19:18.963Z · LW(p) · GW(p)

Yes, it does. But not by anything like the ratio of 3^^^3 to 2.3 billion.

comment by Emile · 2011-10-12T09:50:16.272Z · LW(p) · GW(p)

Alternative phrasing of the problem: do you prefer a certain chance of having a dust speck in your eye, or a one-in-3^^^3 chance of being tortured for 50 years?

When you consider that we take action to avoid minor discomforts, but that we don't always take action to avoid small risks of violence or rape etc., we make choices like that pretty often, and with much higher chances of the bad thing happening.

Replies from: Jack
comment by Jack · 2011-10-12T09:54:51.537Z · LW(p) · GW(p)

Wait. Which side of the rephrasing corresponds to which side of the original?

Replies from: Emile
comment by Emile · 2011-10-12T10:01:50.214Z · LW(p) · GW(p)

Certain chance of dust speck = 3^^^3 people get dust specks;

One-in-3^^^3 chance of torture = one person gets tortured for 50 years.

(Just consider a population of 3^^^3, and choose between them all getting dust specks, or one of them getting tortured. If I was in that population, I'd vote for the torture.)

Replies from: jpulgarin
comment by jpulgarin · 2011-10-12T12:26:47.507Z · LW(p) · GW(p)

This alternate phrasing (considering a population of 3^^^3 and choosing all dust specks vs. one tortured) is actually quite a different problem. Since I care much more about my own utility than about the utility of a random person, I feel a stronger pull towards giving everyone an extra dust speck than I do in the original phrasing.

I think a more accurate rephrasing would be: You will live 3^^^3 consecutive lives (via reincarnation of course). You can choose to get an extra dust speck in your eye in each lifetime, or be tortured in a single random lifetime.

Replies from: Emile
comment by Emile · 2011-10-12T12:41:17.878Z · LW(p) · GW(p)

I'm not sure how the population-based phrasing changes things. Note that I didn't specify whether the decider is part of that population.

And I don't think it even matters whether "I" am part of the population: if I prefer A to B for myself, I should also prefer A to B for others, regardless of how differently I weight my welfare vs. their welfare.

Replies from: jpulgarin
comment by jpulgarin · 2011-10-12T13:04:31.364Z · LW(p) · GW(p)

You're right, for some reason I thought the decider was part of the population.

I've also updated towards choosing torture if I were part of that population.

comment by PuyaSharif · 2011-10-11T15:26:43.079Z · LW(p) · GW(p)

An interesting related question would be: What would people in a big population Q choose if given the alternatives: extreme pain with probability p = 1/Q, or tiny pain with probability p = 1? In the framework of expected utility theory you'd have to include not only the sizes of the pains and the size of the population but also the risk aversion of the person asked. So it's not only about adding up small utilities.
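A minimal sketch of how risk aversion enters that comparison (the pain magnitudes, the power-law curvature alpha, and Q below are all illustrative assumptions):

```python
# Expected-disutility comparison for one member of a population of size Q:
#   option A: extreme pain with probability 1/Q
#   option B: tiny pain with probability 1
def expected_disutility(pain, prob, alpha):
    # alpha > 1 makes large pains loom disproportionately large (risk aversion over losses)
    return prob * (pain ** alpha)

Q = 10**9
extreme, tiny = 1e6, 1.0   # "extreme" pain set at a million times the "tiny" pain

for alpha in (1.0, 1.5, 2.0):
    a = expected_disutility(extreme, 1 / Q, alpha)
    b = expected_disutility(tiny, 1.0, alpha)
    choice = "B (certain tiny pain)" if a > b else "A (risk the extreme pain)"
    print(f"alpha={alpha}: E[A]={a:.3g}, E[B]={b:.3g} -> prefer {choice}")
```

With alpha = 1 you'd accept the 1/Q gamble; with enough curvature the preference flips toward the certain tiny pain, which is the point that risk aversion matters and not just the sums.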

comment by [deleted] · 2012-12-24T23:20:26.505Z · LW(p) · GW(p)

Perhaps the answer is that there are multiple hierarchies of [dis]utility, for instance: n dust specks (where n is less than enough to permanently damage the eye or equate to a minimal pain unit) is hierarchy 1, a slap in the face is hierarchy 3, torture is hierarchy 50 (these numbers are just an arbitrary example) and the [dis]utility at hierarchy x+1 is infinitely worse than the [dis]utility at hierarchy x. Adding dust specks to more people won't increase the hierarchy, but adding more dust specks to the same person eventually will.
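One way to make "hierarchy x+1 is infinitely worse than hierarchy x" precise is a lexicographic comparison over tiers; here is a minimal sketch (the tier numbers are the comment's arbitrary examples, and this ignores the caveat about many specks to one person eventually climbing a tier):

```python
def worse_than(a, b):
    """True if outcome a is worse than outcome b.
    a, b: dicts mapping tier -> count of disutility events at that tier;
    any amount at a higher tier dominates any amount at lower tiers."""
    for tier in sorted(set(a) | set(b), reverse=True):
        if a.get(tier, 0) != b.get(tier, 0):
            return a.get(tier, 0) > b.get(tier, 0)
    return False  # equal outcomes

torture = {50: 1}      # one event at tier 50
specks = {1: 3**27}    # 3^^3 tier-1 events as a stand-in; 3^^^3 itself is far too large to write out
print(worse_than(torture, specks))   # True: no number of tier-1 events outweighs one tier-50 event
```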

comment by DevilMaster · 2012-12-20T18:00:29.411Z · LW(p) · GW(p)

I just noticed this argument, I hope I'm not too late in expressing my view.

Premise: I want to live in the universe with the least amount of pain.

And now for some calculations. For the sake of quantification, let's assume that the single tortured person will receive 1 whiplash per second, continuously, for 50 years. Let's also assume that the pain of 1 whiplash is equivalent to 1 "pain unit". Thus, if I chose to torture that person, I would add 3,600 "pain units" per hour to the total amount of pain in the universe. In 1 day, the amount of pain in the universe would increase by 3,600 × 24 = 86,400 pain units. In 1 year, by approximately 86,400 × 365 + 3,600 × 6 = 31,557,600 pain units. In 50 years, by approximately 31,557,600 × 50 = 1,577,880,000 pain units. And now, let's examine the specks. They were described as "barely enough to make you notice before you blink and wipe away the dust speck". In other words, while they can be felt, the sensation is insufficient to trigger the nociceptors. This means that each speck increases the level of pain in the universe by 0 pain units. So, if 3^^^3 people each received a dust speck in one of their eyes, the amount of pain in the universe would increase by exactly 0 × 3^^^3 = 0 pain units! This is why I would definitely choose SPECKS.
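The same bookkeeping as a quick sketch (the one-whiplash-per-second rate and the "a speck is below the pain threshold" premise are the comment's assumptions, not established facts):

```python
per_hour = 60 * 60                        # 3,600 pain units per hour
per_day = per_hour * 24                   # 86,400 per day
per_year = per_day * 365 + per_hour * 6   # 31,557,600 per year (adding ~6 leap hours)
torture_total = per_year * 50             # 1,577,880,000 pain units over 50 years

speck_total = 0                           # 0 pain units per speck, times any number of specks, is still 0
print(torture_total, speck_total)
```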

comment by DanielLC · 2011-10-13T00:50:47.408Z · LW(p) · GW(p)

One way to think about this is to focus on how small one person is compared to 3^^^3 people. You're unlikely to notice the dust speck each person feels, but you're much, much less likely to notice the one person being tortured against a background of 3^^^3 people. You could spend a trillion years searching at a rate of one galaxy per Planck time and you won't have any realistic chance of finding the person being tortured.
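A rough sense of scale for that search (the Planck time and galaxy figures are standard ballpark values; the only point is that the total is nowhere near 3^^^3):

```python
planck_time = 5.39e-44        # seconds
seconds_per_year = 3.156e7
years = 1e12                  # a trillion years of searching

galaxies_searched = years * seconds_per_year / planck_time
candidates = galaxies_searched * 1e11     # ~1e11 stars per galaxy, generously one person per star

print(f"galaxies searched: {galaxies_searched:.2e}")   # ~5.9e62
print(f"candidates examined: {candidates:.2e}")        # ~5.9e73 -- negligible next to 3^^^3
```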

Of course, you noticed the person being tortured because they were mentioned in only a few paragraphs of text. It makes them more noticeable. It doesn't make them more important. Every individual is important. All 3^^^3 of them.

comment by occlude · 2011-10-11T21:43:45.414Z · LW(p) · GW(p)

If Omega tells you that he will give either 1¢ each to 3^^^3 random people or $100,000,000,000.00 to the SIAI, and that you get to choose which course of action he should take, what would you do? That's a giant amount of distributed utility vs a (relatively) modest amount of concentrated utility.

I suspect that part of the exercise is not to outsmart yourself.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-12T09:04:43.324Z · LW(p) · GW(p)

Let me note for a sec some not-true-objections: (a) A single cent coin is more of a disutility for me, considering its value vs. the space it takes in my wallet. (b) Adding money to the economy doesn't automatically increase the value anyone can use. (c) Bad and stupid people having more money would actually be of negative utility, as they'd give the money to bad and stupid causes. (d) Perhaps FAI is the one scenario which truly outweighs even 3^^^3 utilons.

Now for the true reason: I'd choose the money going to SIAI, but that'd be strictly selfish/tribal thinking, because I live on the planet which SIAI has some chance of improving, and so the true calculation would be about 7 billion people getting a coin each, not 3^^^3 people getting a coin each. If my utility function were truly universal in scope, the 3^^^3 cents (barring the not-true objections noted above) would be the correct choice.

comment by lavalamp · 2011-10-11T13:58:23.029Z · LW(p) · GW(p)

My utility function says SPECKS. I thought it was because it was rounding the badness of a dust speck down to zero.

But if I modify the problem to be 3^^^3 specks split amongst a million people and delivered to their eyes at a rate of one per second for the rest of their lives, it says TORTURE.

If the badness of specks adds up when applied to a single person, then a single dust speck must have non-zero badness. Obviously, there's a bug in my utility function.

Replies from: VincentYu
comment by VincentYu · 2011-10-12T01:27:22.328Z · LW(p) · GW(p)

If the badness of specks adds up when applied to a single person, then a single dust speck must have non-zero badness.

If I drink 10 liters of water in an hour, I will die from water intoxication, which is bad. But this doesn't mean that drinking water is always bad - on the contrary, I think we'll agree that drinking some water every once in a while is good.

Utility functions don't have to be linear - or even monotonic - over repeated actions.
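A toy version of that point (the curve below is purely illustrative, not a physiological claim):

```python
# Non-monotonic utility over repeated units of water drunk in one hour.
def water_utility(liters):
    return liters * (4.0 - liters)   # rises to a peak, then turns sharply negative

for liters in (0.5, 1, 2, 4, 10):
    print(liters, water_utility(liters))
# 2 L is best here, 4 L is neutral, 10 L is strongly negative:
# more of a good thing isn't always better, and the curve need not be linear anywhere.
```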

With that said, I agree with your conclusion that a single dust speck has non-zero (in particular, positive) badness.

Replies from: lavalamp
comment by lavalamp · 2011-10-12T02:23:49.118Z · LW(p) · GW(p)

You know what? You are absolutely right.

If the background rate at which dust specks enter eyes is, say, once per day, then an additional dust speck is barely even noticeable. The 3^^^3 people probably wouldn't even be able to tell that they got an "extra" dust speck, even if they were keeping an Excel spreadsheet, making entries every time they got a dust speck in their eye, and running relevant statistics on it. I think I just switched back to SPECKS. If a person can't be sure that something even happened to them, my utility function is rounding it off to zero.
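A minimal sketch of why the extra speck would be statistically invisible (the once-a-day background rate is the assumption above; modelling the lifetime count as Poisson is mine):

```python
import math

days = 75 * 365                 # ~27,375 days in a 75-year life
mean = days * 1.0               # expected lifetime speck count at one per day
sd = math.sqrt(mean)            # Poisson standard deviation, ~165

print(f"lifetime specks: mean ~{mean:.0f}, sd ~{sd:.0f}")
print(f"one extra speck = {1/sd:.4f} standard deviations")   # ~0.006 sd: lost in the noise
```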

Replies from: AlexSchell, occlude
comment by AlexSchell · 2011-10-12T04:55:01.087Z · LW(p) · GW(p)

If a person can't be sure that something even happened to them, my utility function is rounding it off to zero.

This may be already obvious to you, but such a utility function is incoherent (as made vivid by examples like the self-torturer).

comment by occlude · 2011-10-12T03:07:51.350Z · LW(p) · GW(p)

I expect that more than one of my brain modules are trying to judge between incompatible conclusions, and selectively giving attention to the inputs of the problem.

My thinking was similar to yours -- it feels less like I'm applying scope insensitivity and more like I'm rounding the disutility of specks down due to their ubiquity, or their severity relative to torture, or the fact that the effects are so dispersed. If one situation goes unnoticed, lost in the background noise, while another irreparably damages someone's mind, then that should have some impact on the utility function. My intuition tells me that this justifies rounding the impact of a speck down to zero, that the difference is a difference of kind, not of degree, that I should treat these as fundamentally different. At the same time, like Vincent, I'm inclined to assign non-zero disutility value to a speck.

One brain, two modules, two incompatible judgements. I'm willing to entertain the possibility that this is a bug. But I'm not ready yet to declare one module the victor.

comment by PuyaSharif · 2011-10-11T13:04:21.491Z · LW(p) · GW(p)

By putting a single speck of dust in somebody's eye, you increase the probability of that person getting injured or dying in an accident. Lots of these people will be driving cars, crossing roads, etc. at that moment. Given the size of the number (3^^^3), that action would kill @#¤%¤&-illions of people. I would rather torture one.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-11T13:50:29.277Z · LW(p) · GW(p)

To consider it like this misses the point of the exercise, which is to treat each individual dust speck as the tiniest amount of disutility you can imagine, and to multiply those tiny disutilities. If you treat the dust specks differently, as representing a small probability of a huge disutility (death) instead, the equation becomes different in the minds of many, because it's now about adding small probabilities instead of adding small disutilities.

In short: you ought to consider the least convenient world, in which you are assured that the momentary inconvenience/annoyance of the dust speck is all the disutility these people will suffer if you choose "dust specks" -- they won't be involved in accidents of any kind, etc.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2011-10-18T16:37:42.466Z · LW(p) · GW(p)

Well, in that case, you're specifying that the additional stress is being applied to people who can take it.

Let's turn the whole danged thing around: Would you rather that 3^^^^3 people got one less dust speck in their eyes (in times when dust specks were not the limiting factor on much more important activities), or prevent one person from being horribly tortured for 50 years?

A related question would go: Would you volunteer not to have your dust speck count reduced, with it understood that A) if 3^^^3 people volunteer for this, someone will not be tortured, and B) there are well over 3^^^3 people being asked this, so it's not well beyond futile.

If you can find 3^^^3 volunteers, must they be irrational, or are they just willing to take that tiny hit?

Replies from: Normal_Anomaly, ArisKatsaris
comment by Normal_Anomaly · 2011-10-19T00:09:09.041Z · LW(p) · GW(p)

Question about your hypothetical: What happens if less than the necessary number of people volunteer for the dust speck? Do they A) get the dust speck to no purpose, or B) is their speck count reduced as though they hadn't volunteered?

I wouldn't volunteer either way, if you ignore irrational guilt trips like ArisKatsaris said. If there are 3^^^3 volunteers, they are irrational in situation B, and probably in situation A as well.

comment by ArisKatsaris · 2011-10-18T16:47:59.861Z · LW(p) · GW(p)

Would you rather that 3^^^^3 people got one less dust speck in their eyes (in times when dust specks were not the limiting factor on much more important activities), or prevent one person from being horribly tortured for 50 years?

That's the same question effectively, so the former.

A related but similar question would go: Would you volunteer not to have your dust speck count reduced, with it understood that A) if 3^^^3 people volunteer for this, someone will not be tortured, and B) there are well over 3^^^3 people being asked this, so it's not well beyond futile.

Yes, I would volunteer for this, but that's just because I can rationally anticipate that if I declined to volunteer I'd irrationally have guilt trips over it, which would be significantly higher disutility in the long term than a dust speck. In short, I'd be comparing a dust speck to the disutility of irrational guilt, not to the disutility of torture/3^^^3.

Replies from: Normal_Anomaly, Luke_A_Somers
comment by Normal_Anomaly · 2011-10-19T00:03:08.224Z · LW(p) · GW(p)

Question about your hypothetical: What happens if less than the necessary number of people volunteer for the dust speck? Do they get the dust speck to no purpose, or is their speck count reduced as though they hadn't volunteered?

comment by Luke_A_Somers · 2011-10-18T23:30:42.806Z · LW(p) · GW(p)

So if they said they'd wipe your memory, you wouldn't volunteer?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-10-18T23:42:11.150Z · LW(p) · GW(p)

I consider it a disutility to lose memories too, so switch that clause to "if you assume that you're magically not going to have any feelings you consider irrational regarding this decision of yours, or otherwise face some social penalty more severe than that specific dust speck", and I'll say "no, I wouldn't volunteer".