A response to "Torture vs. Dustspeck": The Ones Who Walk Away From Omelas

post by Logos01 · 2011-11-30T03:34:03.587Z · LW · GW · Legacy · 100 comments

For those not familiar with the topic, Torture vs. Dustspecks asks the question: "Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?"

Most of the discussion I have seen on the topic adopts one of two assumptions in deriving an answer to that question. I think of the first as the 'linear additive' answer, which says that torture is the proper choice for the utilitarian consequentialist, because a single person can only suffer so much over a fifty-year window, as compared to the incomprehensible number of individuals who each suffer only minutely; the other I think of as the 'logarithmically additive' answer, which inverts the answer on the grounds that forms of suffering are not equal, and cannot be added as simple 'units'.

What I have never yet seen is something akin to the notion expressed in Ursula K. Le Guin's The Ones Who Walk Away From Omelas. If you haven't read it, I won't spoil it for you.

I believe that any metric of consequence which takes into account only suffering when making the choice of "torture" vs. "dust specks" misses the point. There are consequences to such a choice that extend beyond the suffering inflicted: moral responsibility, the standards of behavior that either choice makes acceptable, and so on. Any solution to the question which ignores these elements might be useful in revealing one's views about the nature of cumulative suffering, but beyond that it is of no value in making practical decisions -- it cannot be, because 'consequence' extends beyond the mere instantiation of a given choice -- the exact pain inflicted by either scenario -- into the kind of society that such a choice would result in.

While I tend more towards the 'logarithmic' than the 'linear' additive view of suffering, even if I stipulate the linear additive view I still cannot agree with the conclusion of torture over the dust specks, for the same reason that I do not condone torture even in the "ticking time bomb" scenario: I cannot accept the culture/society that would permit such a torture to exist. To arbitrarily select out one individual for maximal suffering in order to spare others a negligible amount would require a legal or moral framework that accepted such choices, and this violates the principle of individual self-determination -- a principle I have seen Less Wrong's community spend a great deal of time trying to incorporate into Friendliness solutions for AGI. We as a society already implement something similar to this economically: we accept taxing everyone, even according to a graduated scheme. What we do not accept is enslaving 20% of the population to provide for the needs of the State.

If there is a flaw in my reasoning here, please enlighten me.

Comments sorted by top scores.

comment by prase · 2011-11-30T12:23:34.125Z · LW(p) · GW(p)
  1. As others have said, the scenario doesn't require linearity.
  2. You are doing a fairly standard job of rejecting a thought experiment by pointing out several side issues that are stipulated to be missing in the original. Although this is what people ordinarily do when confronted with a counterintuitive repugnant argument, it muddles the discussion and makes you the person who misses the point. If you want to say that the assumptions of the dust speck dilemma are unrealistic, you are free to do it (although such a statement is rather trivial; nobody believes that there are 3^^^3 humans in the world). If you, on the other hand, object to the utilitarian principles involved in the answer, then do it. But please don't mix these two types of objections together.
  3. There were already many people who espoused choosing "specks", rationalising it by all sorts of elaborate arguments (not a surprising thing to see, since "specks" is the intuitive answer). This is the easy part. But I haven't seen anybody propose a coherent general decision algorithm which returns "specks" for this dilemma and doesn't return repugnant or even paradoxical answers to different questions. This is the hard part, which, if you engaged with it, would be much more interesting.
Replies from: None, Logos01
comment by [deleted] · 2011-11-30T12:36:59.476Z · LW(p) · GW(p)

You are doing a fairly standard job of rejecting a thought experiment by pointing out several side issues that are stipulated to be missing in the original. Although this is what people ordinarily do when confronted with a counterintuitive repugnant argument, it muddles the discussion and makes you the person who misses the point.

This seems to be endemic in the discussion section, as of late.

comment by Logos01 · 2011-11-30T15:42:37.358Z · LW(p) · GW(p)

You are doing a fairly standard job of rejecting a thought experiment by pointing out several side issues that are stipulated to be missing in the original.

By what means do you justify this assertion? Actually, there are two. Please explain your reasoning for both:

  1. The notion that I am rejecting the thought experiment at all.

  2. That I do so by means of "issues that are stipulated to be missing in the original".

Insofar as I can determine, both of these are simply false.

Although this is what people ordinarily do when confronted with a counterintuitive repugnant argument, it muddles the discussion and makes you the person who misses the point.

What about my argument makes you believe that my rejections are based on finding things repugnant as opposed to rejections on purely utilitarian grounds?

If you, on the other hand, object to the utilitarian principles involved in the answer, then do it.

I am confused as to why you would believe that I was objecting to utilitarian principles when my argument depends upon consequential utilitarianism.

This is the hard part, which, if you engaged with it, would be much more interesting.

Examples?

Replies from: prase
comment by prase · 2011-11-30T16:54:41.404Z · LW(p) · GW(p)

By what means do you justify this assertion?

The original thought experiment presents you with a choice between X: one person will suffer horribly for 50 years, and Y: 3^^^3 people will experience minimal inconvenience for a second. The point clearly was to compare the utilities of X and Y, so it is assumed that all other things are equal.

You have said that you choose Y, because you "cannot accept the culture/society that would permit such a torture to exist". But the society would not be changed in the original experiment (assume, for example, that nobody except you would know about the tortured person). You have effectively added another effect Z: society would permit torture, and now you are comparing u(Y) against u(X and Z), not against u(X) alone.

So, to explicitly reply to your questions, (1) you reject the original problem whether u(Y) > u(X), because you answer a different question, namely whether u(Y) > u(X and Z), and (2) the issue missing in the original is Z.

What about my argument makes you believe that my rejections are based on finding things repugnant as opposed to rejections on purely utilitarian grounds?

Nothing. I have only said that you do the same thing that others do in similar situations.

(In order not to be evasive, I admit believing that you reject the "torture" conclusion intuitively and then rationalise it. But this belief is based purely on the fact that this is what most people do; there is nothing in your arguments (apart from them being unconvincing) that further supports this belief. Now, do you admit that the "torture" variant is repugnant to you?)

I am confused as to why you would believe that I was objecting to utilitarian principles when my argument depends upon consequential utilitarianism.

This is partly due to my bad formulation (I should have probably said "calculations" instead of "principles"), and partly due to the fact that it is not so clear from your post what your argument depends upon.

Examples?

Of what?

Replies from: Logos01
comment by Logos01 · 2011-11-30T17:10:50.575Z · LW(p) · GW(p)

The point clearly was to compare the utilities of X and Y, so it is assumed that all other things are equal.

You have said that you choose Y, [...] But the society would not be changed in the original experiment (assume, for example, that nobody except you would know about the tortured person).

This privileges the hypothesis. You're claiming that there will be no secondary consequences and therefore secondary consequences need not be considered. This is directly antithetical to the notion of treating these questions in an "all other things being equal" state: of course if you arbitrarily eliminate the potential results of decision X as compared to decision Y, that's going to affect the outcome of which decision is preferable. But that, then, isn't answering the question asked of us. THAT question is asked agnostic of the conditions in which it would be implemented. So we don't get to impose special conditions on how it would occur. Indeed, rather than me adding things the original hypothesis excludes, it seems to me that you are doing the exact opposite of this: you are excluding things the original hypothesis does not.

In other words; to my current understanding of that hypothetical, I am the one closest to answering it without imposed additional conditions.

You have effectively added another effect Z: society would permit torture, and now you are comparing u(Y) against u(X and Z), not against u(X) alone.

I see. There is an error in your reasoning here, but I can understand why it would be non-obvious. You are assuming that u(n) != n + Z(n) in my formulation. The reason why this would be non-obvious is because I listed no value for Z(Y). The reason why I did not list such a value is because I am not at this time aware that said value is non-zero. So the equation remains a question of whether u(Y) is greater or lesser than u(X). The point we disagree on is not the hypothesis itself -- the comparison of u(Y) to u(X) -- but rather the terms of the utility function.

In other words, exactly what I explicitly stated: I argue that the discussion on this topic thus far uses an insufficient definition of "utility", especially for consequentialistic utilitarianism, and therefore "misses the point".

(In order not to be evasive, I admit believing that you reject the "torture" conclusion intuitively and then rationalise it. But this belief is based purely on the fact that this is what most people do.

Fair enough. Thank you.

there is nothing in your arguments (apart from them being unconvincing) that further supports this belief.

I find no reason to accept the notion that my arguments are unconvincing. This, then, is the crux of the matter: What is your argument for supporting the notion that ONLY primary consequences are a valid form of consequences for a utilitarian to consider in making a decision?

Now, do you admit that the "torture" variant is repugnant to you?

Not at all. I have addressed this purely in terms of quantity. My argument is phrased in terms of utilon quantity. I reject condoning torture because of the utilitarian consequences of accepting it. (If it's any help, please be aware that I am a diagnosed autist, so my empathy towards others is primarily intellectual in nature. I am fully able to compartmentalize that trait when useful to dialogue.)

Examples?

Of what?

"But I haven't seen anybody propose a coherent general decision algorithm which returns "specks" for this dilemma and doesn't return repugnant or even paradoxical answers to different questions."

Replies from: prase
comment by prase · 2011-11-30T17:27:32.114Z · LW(p) · GW(p)

This privileges the hypothesis. You're claiming that there will be no secondary consequences and therefore secondary consequences need not be considered. This is directly antithetical to the notion of treating these questions in an "all other things being equal" state.

What? Which hypothesis do I privilege? How does assuming no secondary consequences of either variant contradict treating the other things as being equal?

There is an error in your reasoning here, but I can understand why it would be non-obvious. You are assuming that u(n) != n + Z(n) in my formulation. ...

If n refers to either X or Y, I certainly don't assume that u(n) != n + Z(n), because such a thing has no sensible interpretation ("u(X) = X" would read "utility of torture is equal to torture"). If n refers to number of people dust-specked or some other quantity, I still have no idea what you mean by Z(n). In my notation, Z was not a function, but a change of state of the world (namely, that society begins tolerating torture). So, maybe there is an error in my reasoning, but certainly you are not understanding my reasoning correctly.

As for your demanded examples, I am still not sure what you want me to write.

Edit: seems to me that I made the same reply as paper-machine, even accidentally using the same symbols X, Y and Z, but in his use these are already utilities, while in my use they are situations. So, paper-machine.X = prase.u(X).

Replies from: Logos01
comment by Logos01 · 2011-11-30T17:43:13.327Z · LW(p) · GW(p)

How does assuming no secondary consequences of either variant contradict treating the other things as being equal?

Because, in order to achieve that state, you must impose special conditions on the implementation of the hypothetical -- ones the hypothetical itself is agnostic to. The only way to eliminate secondary consequences from consideration, in other words, is to treat the hypotheticals unequally.

I also began by stating, if you'll recall, that if you do so isolate the query to first-consequences only, all that you practically achieve is a comparison of the net total quantity of suffering directly imposed by the two scenarios. And all that achieves is to suss out whether your view of suffering is linear or logarithmic in nature. To the logarithmic adherent, the torture scenario is effectively infinite suffering. I don't know if you've ever tortured or been tortured, but I can assure you that fifty years is far more than is necessary for a single person's psyche to be irrevocably demolished, reconstructed, and demolished repeatedly. Eliezer's original discussion of said torture evinced, quite clearly, that he adheres to the linear-additive perspective, as when he says that it "isn't the worst thing that could happen to a person".

If n refers to either X or Y, I certainly don't assume that u(n) != n + Z(n), because such a thing has no sensible interpretation ("u(X) = X" would read "utility of torture is equal to torture").

Alright, fine. u(n) = s(n) + Z(n), where u(n) is the total anti-utility of scenario n, s(n) is the suffering directly induced by scenario n, and Z(n) is the anti-utility of all secondary consequences of scenario n.

If n refers to number of people dust-specked or some other quantity, I still have no idea what you mean by Z(n). In my notation, Z was not a function, but a change of state of the world (namely, that society begins tolerating torture).

Z is the function for determining the secondary consequences of scenario n. It has a specific value depending on the scenario chosen.

but certainly you are not understanding my reasoning correctly.

Where am I mistaken? What am I mistaking you on?

As for your demanded examples, I am still not sure what you want me to write.

... Why would you declare a topic that you are unable to even describe interesting? You are the one who brought it up... provide examples of scenarios that fulfill your description.

If you want to discuss the topic, if you find it interesting -- discuss it! I opened the floor to it.

Replies from: prase
comment by prase · 2011-11-30T21:47:06.246Z · LW(p) · GW(p)

I will not reply to the first paragraph, because we clearly disagree about what "ceteris paribus" means, while this disagreement has little to no relevance to the original problem.

effectively infinite

If it is finite, the logic behind choosing torture works. If it is infinite, you have other problems. But you can't have it both ways.

Where am I mistaken? What am I mistaking you on?

You have said "[y]ou are assuming that u(n) != s(n) + Z(n) in my formulation"; I had been assuming no such thing.

Why would you declare a topic that you are unable to even describe interesting? You are the one who brought it up... provide examples of scenarios that fulfill your description.

Recall that you are probably reacting to this:

I haven't seen anybody propose a coherent general decision algorithm which returns "specks" for this dilemma and doesn't return repugnant or even paradoxical answers to different questions. This is the hard part, which, if you engaged with it, would be much more interesting.

No mention of any scenarios. If you want me to describe a consistent decision theory which returns "specks" and has no other obvious downsides, well, I can't, because I have none. Neither do I believe that such a theory exists. You believe that "specks" is the correct solution.

Replies from: Logos01
comment by Logos01 · 2011-11-30T22:40:43.008Z · LW(p) · GW(p)

I will not reply to the first paragraph, because we clearly disagree about what "ceteris paribus" means, while this disagreement has little to no relevance to the original problem.

If you are not stipulating the relevance of secondary consequences to the original hypothesis then this conversation is at an end, with this statement. Either they are relevant, as is my entire argument, or they are not. Claiming via fiat that they are not will earn you no esteem on my part, and will cause me to consider your position entirely without merit of any kind; it is the ultimate in dishonest argumentation tactics: "You are wrong because I say you are wrong."

If it is finite, the logic behind choosing torture works.

Rephrase this. As I currently read it, you are stating that "if torture is infinite suffering, then torture is the better thing to be chosen." That is contradictory.

If it is infinite, you have other problems. But you can't have it both ways.

Not at all. As I have stated iteratively, suffering is not the sole relevant form of utility. Determining how to properly weight the various forms of utility against one another is necessary to untangling this. It is not at all obvious that they even can be so weighted.

You have said "[y]ou are assuming that u(n) != s(n) + Z(n) in my formulation", I had been assuming no such thing.

If that were the case then you really shouldn't have said this: "You have effectively added another effect Z: society would permit torture, and now you are comparing u(Y) against u(X and Z), not against u(X) alone."

Because now we are left with two contradictory statements uttered by you. Either Z(n) is a part of the function of u(n), or it is not. These are mutually exclusive. You cannot have both.

So, which statement of yours, then, is the false one?

No mention of any scenarios.

"repugnant or even paradoxical answers to different questions." <-- A rose, sir, by any other name.

I do not know why you seem to find it necessary to insist that things you have said aren't in fact things you have said; I do not know why you seem to find it necessary to adhere to such rigid verbiage that synonymous terminology for things you have said is rejected as non-existent statements by yourself.

It is, however, a frustrating pattern, and is causing me to lose interest in this dialogue.

Replies from: prase
comment by prase · 2011-12-01T11:23:43.291Z · LW(p) · GW(p)

It is, however, a frustrating pattern, and is causing me to lose interest in this dialogue.

Ending the dialogue is probably the best option. I am only going to provide you with one example of the paradoxes you demanded, since it was probably my fault that I hadn't understood your request. (Next time I exhibit a similar lack of understanding, please tell me plainly and directly what you are asking for. Beware the illusion of transparency. I really have no dark motives to pretend misunderstanding when there is none.)

So, the most basic problem with choosing "specks" over "torture" is the one already described in the original post: torturing 1 person for 50 years (let's call that scenario X(0)) is clearly better than torturing 10 people for 50 years minus 1 second (X(1)); to deny that means that one is willing to subject 9 people to 50 years of agony just to spare 1 person one second of agony. X(1) is then better than torturing 100 people for 50 years minus 2 seconds (X(2)), and so on. There are about 1.5 billion seconds in 50 years, so let's define X(n) recursively as torturing ten times more people than in scenario X(n-1) for a time equal to 1,499,999,999/1,500,000,000 of the time used in scenario X(n-1).

Let's also decrease the pain slightly in each step. Since pain is difficult to measure, let's precisely define the way torture is done: by simulating the pain one feels when the skin is burned by hot iron on p percent of body surface; at X(0) we start with burning the whole surface, and p is decreased in each step by the same factor as the duration of torture. At approximately n = 3.8 * 10^10, X(n) means taking 10^(3.8*10^10) people and touching their skin with a hot needle for 1/100 of a second (the tip of the needle which comes into contact with the skin will have an area of 0.0001 square millimeters). Now this pain is so negligible that a dust speck in the eye is clearly worse.

So, we have X(3.8*10^10) which is better than dust specks with just 10^(3.8*10^10) people (a number much lower than 3^^^3), and you say that dust specks are better than X(0). Therefore there must be at least one n such that X(n) is strictly worse than X(n+1). Now this seems paradoxical, since going from X(n) to X(n+1) means reducing the amount of suffering of those who already suffer by a tiny amount, roughly one billionth, for the price of adding nine new sufferers for each existing one.

(Please note that this reasoning doesn't assume anything about utility functions - it uses only preference ordering - nor it assumes anything about direct or indirect consequences of torture.)
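To make the construction concrete, here is a minimal Python sketch of the X(n) parameters under the stipulations above; the constants and function names are illustrative, not part of the original argument:

```python
import math

SECONDS_50_YEARS = 1_500_000_000           # ~50 years, as stipulated above
STEP = 1_499_999_999 / 1_500_000_000       # per-step factor for duration and pain

def x_scenario(n):
    """Parameters of X(n): (log10 of people tortured, duration in seconds,
    percent of skin surface burned)."""
    log10_people = n                       # tenfold more people each step
    duration = SECONDS_50_YEARS * STEP ** n
    pain_percent = 100.0 * STEP ** n       # X(0) burns 100% of the surface
    return log10_people, duration, pain_percent

# Step at which the duration falls to ~1/100 of a second:
n = round(math.log(0.01 / SECONDS_50_YEARS) / math.log(STEP))
print(n)              # ~3.9 * 10^10, matching the estimate of 3.8 * 10^10 above
print(x_scenario(n))  # ~10^(3.9*10^10) people, ~0.01 s, ~7 * 10^-10 % of skin
```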

Replies from: TimS, Logos01
comment by TimS · 2011-12-01T14:07:31.791Z · LW(p) · GW(p)

That is counter-intuitive, but isn't the anti-torture answer something analogous to sets? That is:

R(0) is the set of all real numbers. We know that it is an uncountable infinity, and therefore larger than any countable infinity. Set R(n) is R(0) with n elements removed. As I understand it, so long as n is a countable infinity or smaller, R(n) is equal in size to R(0). [EDITED TO REMOVE INCORRECT MATH.]

To cash out the analogy, it might be that certain torture scenarios are preferable to other torture scenarios, but all non-torture scenarios are less bad than all torture scenarios. As you increment down the amount of suffering in your example, you eventually remove so much that the scenario is no longer torture. In notation somewhat like yours, Y(50 yr) is the badness of imposing pain as you describe to one person for 50 years. We all seem to agree that Y(50 yr) is torture. I assert something like Y(50 yr - A) is torture if Y(A) would not be torture.

I agree that you can't say that suffering is non-linear (that is, think that dust-specks is preferable to torture) without believing something like what I laid out.

Logos, those "secondary" effects you point to are the properties that make Y(A) torture (or not).

Replies from: prase, asr
comment by prase · 2011-12-01T16:04:55.433Z · LW(p) · GW(p)

This is consistent. But it induces further difficulties in the standard utilitarian decision process.

To express the idea that all non-torture scenarios are less bad than all torture scenarios by utility function, there must be some (negative) boundary B between the two sets of scenarios, such that u(any torture scenario) < B and u(any non-torture scenario) > B. Now either B is finite or it is infinite; this matters when probabilities come into play.

First consider the case of B finite. This is the logistic curve approach: it means that any number of slightly super-boundary inconveniences happening to different people is preferable to a single case of a slightly sub-boundary torture. I know of no natural physiological boundary of such a sort; if the severity of pain can change continuously, which seems to be the case, the sub-boundary and super-boundary experiences may be effectively indistinguishable. Are you willing to accept this?

Perhaps you are. Now this takes an interesting turn. Consider a couple of scenarios: X, which is slightly sub-boundary (thus "torture") with utility B - ε (ε positive), and Y, which is non-torture with u(Y) = B + ε. Now utilities may behave non-linearly with respect to the scenario-describing parameters, but expected utilities have to be pretty linear with respect to probabilities; anything else means throwing utilitarianism out of the window. A utility maximiser should therefore be indifferent between scenarios X' and Y', where X' = X with probability p and Y' = Y with probability p (B - ε) / (B + ε).

Let's say one of the boundary cases is, for the sake of concreteness, giving a person a 7.5-second-long electric shock of a given strength. So, you may prefer to give a billion people a 7.4999 s shock in order to avoid one person getting a 7.5001 s shock, but at the same time you would prefer, say, a 99.98% chance of one person getting a 7.5001 s shock to a 99.99% chance of one person getting a 7.4999 s shock. Thus, although the torture/non-torture boundary seems strict, it can be easily crossed when uncertainty is taken into account.
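A quick numerical check of this, with purely illustrative values (B = -100 and ε = 0.005 are assumptions chosen so that the two shocks straddle the boundary narrowly):

```python
B, eps = -100.0, 0.005
u_torture = B - eps        # the 7.5001 s shock: barely sub-boundary ("torture")
u_non_torture = B + eps    # the 7.4999 s shock: barely super-boundary

eu_torture_gamble = 0.9998 * u_torture          # 99.98% chance of the worse shock
eu_non_torture_gamble = 0.9999 * u_non_torture  # 99.99% chance of the milder shock

# The gamble involving "torture" comes out (very slightly) less bad, so the
# seemingly strict boundary is crossed once uncertainty enters:
print(eu_torture_gamble > eu_non_torture_gamble)   # True
```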

(This problem can be alleviated by postulating a gap in utilities between the worst non-torture scenario and the best torture scenario.)

If it still doesn't sound crazy enough, note the fact that if there already are people experiencing an almost-boundary (but still non-torturous) scenario, decisions over completely unrelated options get distorted, since your utility can't fall lower than B, where it already sits. Assume that one presently has utility near B (which must be achievable by adjusting the number of almost-tortured people and the severity of their inconvenience -- which is nevertheless still not torture; nobody is tortured as far as you know -- let's call this adjustment A). Consider now decisions about money. If W is one's total wealth, then u(W,A) must be convex with respect to W if its value is not much different from B, since no everywhere-concave function can be bounded from below. Now, this may invert the usual risk aversion due to diminishing marginal utilities! (Even assuming that you can do literally nothing to change A.)

(This isn't alleviated by a utility gap between torture and non-torture.)

Now, consider the second case, B = -∞. Then there is another problem: torture becomes the sole concern of one's decisions. Even if p(torture) = 1/3^^^3, the expected utility is negative infinity, and all non-torturous concerns become strictly irrelevant. One can formulate it mathematically as having a 2-dimensional vector (u1,u2) representing the utility. The first component u1 is the measure of utility from torture and u2 measures the other utility. Now, since you have decided to never trade torture for non-torture, you should choose the variant whose expected u1 is greater; only when u1(X) and u1(Y) are strictly equal does the question whether u2(X) > u2(Y) become important. Therefore you would find yourself asking questions like "if I buy this banana, would it increase the chance of people getting tortured?". I don't think you are striving to consistently apply this decision theory.

(This is related to distinction between sacred and unsacred values, which is a fairly standard source of inconsistencies in intuitive decisions.)
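A minimal sketch of that lexicographic rule, with illustrative numbers: Python's tuple comparison already orders (u1, u2) pairs this way, consulting u2 only on exact ties in u1.

```python
def prefer(a, b):
    """Each option is an expected-utility vector (u1, u2); tuple comparison
    is lexicographic, so torture-utility u1 dominates and u2 breaks ties."""
    return a if a >= b else b

buy_banana = (-1e-30, 5.0)   # any nonzero rise in p(torture), plus a tasty banana
skip_banana = (0.0, 0.0)     # no banana, no added torture risk

print(prefer(buy_banana, skip_banana))   # (0.0, 0.0): the banana is vetoed
```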

Replies from: TimS
comment by TimS · 2011-12-01T19:02:31.437Z · LW(p) · GW(p)

Your reference to sacred values reminded me of Spheres of Justice. In brief, Walzer argues that the best way of describing our morality is by noting which values may not be exchanged for which other values. For example, it is illicit to trade material wealth for political power over others (i.e. bribery is bad). Or trade lives for relief from suffering. But it is permissible to trade within a sphere (money for ice cream) or between some spheres (dowries might be a historical example, but I can't think of a modern one just this moment).

It seems like your post is a mathematical demonstration that I cannot believe the Spheres of Justice argument and also be a utilitarian. Hadn't thought about it that way before.

comment by asr · 2011-12-01T18:41:44.940Z · LW(p) · GW(p)

I hear your general point, and I don't dispute it.

But I think your set theory analogy isn't quite right. Consider the set R - [0,1]. That's all real numbers less than 0 or greater than 1. This is still uncountably infinite, and has equal cardinality to R, even though I removed the set [0,1], which is itself uncountably infinite.

Replies from: TimS
comment by TimS · 2011-12-01T18:50:26.079Z · LW(p) · GW(p)

Edited to remove improper math. Thanks.

comment by Logos01 · 2011-12-01T13:51:56.517Z · LW(p) · GW(p)

X(0) is a smaller value of anti-utility than X(1), absolutely. I do not, however, know that the decrease of one second is non-negligible for that measurement of anti-utility, under the definitions I have provided.

There are about 1.5 billion seconds in 50 years, so let's define X(n) recursively as torturing ten times more people than in scenario X(n-1) for a time equal to 1,499,999,999/1,500,000,000 of the time used in scenario X(n-1).

That math gets ugly to try to conceptualize (fractional values of fractional values), but I can appreciate the intention.

since pain is difficult to measure, let's precisely define the way torture is done

This is a non-trivial alteration to the argument, but I will stipulate it for the time being.

At approximately n = 3.8 * 10^10, X(n) means taking 10^(3.8*10^10) people and touching their skin with a hot needle for 1/100 of a second (the tip of the needle which comes into contact with the skin will have an area of 0.0001 square millimeters). Now this pain is so negligible that a dust speck in the eye is clearly worse.

"Clearly"? I suffer from opacity you apparently lack; I cannot distinguish between the two.

Now this seems paradoxical, since going from X(n) to X(n+1) means reducing the amount of suffering of those who already suffer by a tiny amount, roughly one billionth, for the price of adding nine new sufferers for each existing one.

The paradox exists only if suffering is quantified linearly. If it is quantified logarithmically, a one-billionth shift on some position of the logarithmic scale is going to overwhelm the signal of the linearly-multiplicative increasing population of individuals. (Please note that this quantification is on a per-individual basis, which can once quantified be simply added.)

This is far from being a paradox: it is a natural and expected consequence.

Replies from: prase
comment by prase · 2011-12-01T14:12:23.644Z · LW(p) · GW(p)

"Clearly"? I suffer from opacity you apparently lack; I cannot distinguish between the two.

Then substitute "worse or equal" for "worse"; the argument remains.

I do not, however, know that the decrease of one second is non-negligible for that measurement of anti-utility, under the definitions I have provided.

Same thing; it doesn't matter whether it is or it isn't. The only things which matter are that X(n) is preferable or equal to X(n+1), and that "specks" is worse than or equal to X(3.8 * 10^10). If "specks" is also preferable to X(0), we have circular preferences.

If it is quantified logarithmically, a one-billionth shift on some position of the logarithmic scale is going to overwhelm the signal of the linearly-multiplicative increasing population of individuals.

So, you are saying that there indeed is n such that X(n) is worse than X(n+1); it means that there are t and p such that burning p percent of one person's skin for t seconds is worse than 0.999999999 t seconds of burning 0.999999999 p percent of skins of ten people. Do I interpret it correctly?

Edited: "worse" substituted for "preferable" in the 2nd answer.

Replies from: Logos01
comment by Logos01 · 2011-12-01T15:50:20.322Z · LW(p) · GW(p)

So, you are saying that there indeed is n such that X(n) is worse than X(n+1); it means that there are t and p such that burning p percent of one person's skin for t seconds is worse than 0.999999999 t seconds of burning 0.999999999 p percent of skins of ten people. Do I interpret it correctly?

Yes.

comment by WrongBot · 2011-12-01T19:24:11.601Z · LW(p) · GW(p)

There's something cruel about ending a post with a request for people to point out errors in your reasoning and then arguing in circles with anyone who tries. Are you trolling, or do you just never admit to being wrong?

comment by Manfred · 2011-11-30T05:23:24.108Z · LW(p) · GW(p)

and this violates the principle of individual self-determination -

To select 3^^^3 people to get dust specks in their eyes also violates the "principle" of individual self-determination. And if 3^^^3 people are possible, 3^^^^3 people are probably possible too, so the idea of fairness doesn't apply - these people have all been picked out to have their individual self-determination violated.

In general you seem to be trying to wriggle out of the hypothetical as stated by bringing in extra stuff and then deciding based only on that extra stuff.

Replies from: TimS, FAWS, Logos01
comment by TimS · 2011-11-30T05:30:14.876Z · LW(p) · GW(p)

In general you seem to be trying to wriggle out of the hypothetical as stated by bringing in extra stuff and then deciding based only on that extra stuff.

And assuming that those who reach a different conclusion didn't include the "extra stuff" in their analysis.

comment by FAWS · 2011-11-30T18:23:35.969Z · LW(p) · GW(p)

And if 3^^^3 people are possible, 3^^^^3 people are probably possible too,

No, that doesn't follow at all. It's ridiculous to even compare the two numbers that way. I would agree that 3^^^4 people might seem somewhat plausible in that case, and 3^^^4 is already larger than 3^^^3 by a factor incomprehensibly greater than 3^^^3. Even 3^^^4 is probably already far more than what you need for your argument.

comment by Logos01 · 2011-11-30T05:56:58.784Z · LW(p) · GW(p)

To select 3^^^3 people to get dust specks in their eyes also violates the "principle" of individual self-determination.

True, but it does so significantly less-grossly. The impact of a negligible dust-specking on a person's self-determination is effectively immeasurable, compared to the lasting results of being tortured, even assuming the individual survives; such torture has consequences beyond the immediate suffering and impedes that person's ability to be who or what they wish to be even once the torture has ended.

In general you seem to be trying to wriggle out of the hypothetical as stated

Please explain. I wasn't aware that the hypothetical as stated was anything other than "Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?"

by bringing in extra stuff and then deciding based only on that extra stuff.

If by "extra stuff" you mean "other consequences" and "based only on that extra stuff" you mean "determining which set of consequences would be less-optimal" then... you're absolutely right in this part.

Replies from: Manfred
comment by Manfred · 2011-11-30T06:40:10.210Z · LW(p) · GW(p)

To select 3^^^3 people to get dust specks in their eyes also violates the "principle" of individual self-determination.

True, but it does so significantly less-grossly.

Would you say it does so... a factor of 3^^^3 less grossly?

by bringing in extra stuff and then deciding based only on that extra stuff.

If by "extra stuff" you mean "other consequences" and "based only on that extra stuff" you mean "determining which set of consequences would be less-optimal" then... you're absolutely right in this part.

There's a phrase: "all other things being equal." You can always give your answer and then point out that it's irrelevant to the real world, or ADBOC. But if you start making your own list of things you want the hypothetical to be about, you haven't given an answer at all.

Replies from: Logos01
comment by Logos01 · 2011-11-30T08:21:06.917Z · LW(p) · GW(p)

Would you say it does so... a factor of 3^^^3 less grossly?

Please rephrase. I understand every word and phrase you use, but the arrangement of them in context of this conversation is inscrutable.

There's a phrase: "all other things being equal."

Ceteris paribus, yes. I'm fully aware of the phrase, and its ordinary implications to a given dialogue. I'm not able to derive intelligible clues as to why you think bringing it up is relevant to the conversation. Do you believe that I have somehow violated this principle? If so, please explain why -- because I disagree with that notion.

You can always give your answer and then point out that it's irrelevant to the real world, or ADBOC. But if you start making your own list of things you want the hypothetical to be about, you haven't given an answer at all.

Ahh... the question was framed in the context of how a consequentialistic utilitarian ought to answer it. In pointing out that first-order consequences are insufficient to properly calculate which is preferable I have not altered the question.

Is the nature of your objection to my position the simple fact that I refuse to only consider the immediate suffering of the proposition? If so, then, simply put, my argument is that you are taking an unduly narrow view of the question. I.e., you are "making your own list of things you want the hypothetical to be about" -- or, rather, not about.

Whatever that thing is, it certainly doesn't properly address, as I argue, the hypothetical as given.

Tell me; on what grounds do you choose to exclude secondary consequences from the metric of deciding which of the two choices is preferable to a consequentialistic utilitarian? How, in addition, does this standard of excluding or including consequences from said calculus affect consequentialism?

Why are some consequences "fit" for consideration, whereas others are "unfit"?

Replies from: Manfred
comment by Manfred · 2011-11-30T08:58:54.149Z · LW(p) · GW(p)

Would you say it does so... a factor of 3^^^3 less grossly?

Please rephrase. I understand every word and phrase you use, but the arrangement of them in context of this conversation is inscrutable.

The upshot is that as soon as you allow things to be immoral (or violate rights, or whatever) to various degrees, not just black and white "immoral" and "not immoral," you have exactly the same problem, so talking about torture being a violation of rights doesn't bring anything new to the table unless you're prepared to bite some pretty bitter bullets.

Is the nature of your objection to my position the simple fact that I refuse to only consider the immediate suffering of the proposition?

Yeah, pretty much. If it were logically impossible for ceteris to be paribus, there would be every reason to reject the hypothetical. But it's not -- those worlds are perfectly possible; you are merely asked to say which you like better. To bring in "secondary factors" (i.e. look at worlds where ceteris isn't paribus) and then decide based on those factors alone isn't a correction to the original question, it's answering a completely different question.

Replies from: Logos01
comment by Logos01 · 2011-11-30T09:19:13.334Z · LW(p) · GW(p)

The upshot is that as soon as you allow things to be immoral (or violate rights, or whatever) to various degrees, not just black and white "immoral" and "not immoral," you have exactly the same problem,

Do I correctly understand you to believe that I was including the right of individual self-determination as an independent valuative norm, rather than for its utility? (Also, please note that I did not originally use the term "right" but rather "principle".)

so talking about torture being a violation of rights doesn't bring anything new to the table unless you're prepared to bite some pretty bitter bullets.

Incorrect. I can only assume that you are thinking that I'm concerned, here, about the violation of rights in general, as opposed to individual self-determination in specific. My point was to demonstrate that the direct impact of torture vs. dustspecks goes beyond merely suffering.

Really, this only serves to illustrate my belief in the erroneous nature of associating "utilons" with "happiness". We derive utility from things other than feeling pleasure; we experience disutility from things other than experiencing suffering. However, suffering, when present in sufficient quantities in a single person, can and does impact those other forms of disutility... such as the loss of capacity for self-determination.

Please note: self-determination as I am using it does NOT refer to getting to choose whether or not the event itself (speck vs. torture) happens to you. It refers to the ability, thereafter, to make competent decisions about your own life, or have the capacity to determine for yourself who you wish to be.

Is the nature of your objection to my position the simple fact that I refuse to only consider the immediate suffering of the proposition?

Yeah, pretty much. If it were logically impossible for ceteris to be paribus,

I see. You are under the misapprehension that I am not applying the principle of ceteris paribus to the argument. Rest assured that this is a misapprehension. I in fact am treating this as an "all other things being equal" scenario.

I simply have a more expansive view of the definition of "consequence" than "suffering alone".

To bring in "secondary factors" (i.e. look at worlds where ceteris isn't paribus) and then decide based on those factors alone isn't a correction to the original question, it's answering a completely different question.

  1. I never even intimated that "only the secondary consequences should be considered". Please discontinue the use of this strawman view of my argument.

  2. Considering secondary consequences of a choice is not "looking at worlds where ceteris isn't paribus". I am quite frankly at a total loss as to understanding why you should be possessed of such a belief in the first place.

  3. Where did I go wrong in getting you to understand that my argument is in alignment with the ceteris paribus principle?

  4. Why do you continue to feel it appropriate to decide that only some consequences "actually count" as consequences?

Replies from: Manfred
comment by Manfred · 2011-11-30T20:34:11.431Z · LW(p) · GW(p)

Do I correctly understand you to believe that I was including the right of individual self-determination as an independent valuative norm, rather than for its utility? (Also, please note that I did not originally use the term "right" but rather "principle".)

Just that you were applying it to torture while not applying it to dust specks - a qualitative difference.

I never even intimated that "only the secondary consequences should be considered". Please discontinue the use of this strawman view of my argument.

You never said it, and yet in your argument only the secondary factors mattered to your decision.

Considering secondary consequences of a choice is not "looking at worlds where ceteris isn't paribus". I am quite frankly at a total loss as to understanding why you should be possessed of such a belief in the first place.

If you don't think you can judge how much you'd like two worlds to exist independent of there being someone in those worlds to make a "choice," then you reject utilitarianism.

Where did I go wrong in getting you to understand that my argument is in alignment with the ceteris paribus principle?

My guess would be when your argument wasn't in alignment with the ceteris paribus principle.

Why do you continue to feel it appropriate to decide that only some consequences "actually count" as consequences?

Because the consequences that "actually count" were the ones that make up the original problem. "Secondary consequences" that are not logically equivalent (this does not mean causally related) to the original consequences merely mean that you're answering a different question than the one that was asked.

Replies from: Logos01
comment by Logos01 · 2011-11-30T20:54:38.766Z · LW(p) · GW(p)

Just that you were applying [the principle of self-determination] to torture while not applying it to dust specks - a qualitative difference.

Not even remotely. I applied it to both; it simply does not alter the anti-utility of dust specks. Receiving a dust-speck in one's eye does not alter in any measurable way the capacity for self-determination of an arbitrary individual.

You never said it, and yet in your argument only the secondary factors mattered to your decision.

False. The secondary consequences when added to the primary caused torture, even in the linear-additive condition, to be the worse option. They overwhelmed the primary.

If you don't think you can judge how much you'd like two worlds to exist independent of there being someone in those worlds to make a "choice," then you reject utilitarianism.

... what? At what point, exactly, did this become a valid thing for you to say to me? The hypothetical asked us which scenario was worse. That is a choice to be made.

Furthermore, exactly how does the notion of required agency abrogate utilitarianism? That doesn't even remotely compute. Of the manifold forms of utilitarianism of which I am aware, not a single one has such a standard. The very notion of a moral system which might require that it be applicable without an agent is self-contradicting.

My guess would be when your argument wasn't in alignment with the ceteris paribus principle.

Ahh. I must conclude that either you haven't been reading a single thing I've written, or else you are delusional, or else you are writing to me somehow from a parallel world. Or you are simply lying. These are the only available options, as your statement is not in accordance with the reality I can observe in this comment thread.

Because the consequences that "actually count" were the ones that make up the original problem.

This is not even remotely interesting as an argument. The consequences of the original problem are those consequences the original problem's choices would result in. Your failure to consider those consequences does not, under any rational circumstances, mean those consequences did not exist. It only means that you made an incomplete analysis.

And that, of course, was my original point: that the rejection of secondary consequences was a failure of analysis. They were there.

"Secondary consequences" that are not logically equivalent (this does not mean causally related) to the original consequences merely mean that you're answering a different question than the one that was asked.

A consequence of an action or history is a consequence of that action or history. When selecting from a given action or history against another, as a consequentialistic utilitarian, one must weigh the consequences of a given action or history against one another. That is essentially a tautological statement.

Now -- let's try this again: what makes one category of consequences "real" consequences, and others "unimportant", and why are you the arbiter of these things? Why are some forms of utility "countable" and others "ignorable"? Furthermore, on what grounds can you possibly justify making such an assertion and still call yourself a utilitarian?

Please, in responding, do note that the mere assertion that consequences are not logically equivalent to consequences will not fly. It is a non-argument. You'll simply have to provide an actual argument if you expect me to begin to be convinced of your position. Thus far, it seems inherently paradoxical. You expect me to believe that A != A. My brain is not wired to accept such paradoxes.

Replies from: Manfred
comment by Manfred · 2011-11-30T23:28:17.922Z · LW(p) · GW(p)

I'll start winding down my answers now. This looks like it may actually hit reverse returns.

Just that you were applying [the principle of self-determination] to torture while not applying it to dust specks - a qualitative difference.

Not even remotely. I applied it to both; it simply does not alter the anti-utility of dust specks. Receiving a dust-speck in one's eye does not alter in any measurable way the capacity for self-determination of an arbitrary individual.

And yet not long ago you said this:

To select 3^^^3 people to get dust specks in their eyes also violates the "principle" of individual self-determination.

True, but it does so significantly less-grossly.

Also remember that the dust speck causes them to blink, measurably.

-

If you don't think you can judge how much you'd like two worlds to exist independent of there being someone in those worlds to make a "choice," then you reject utilitarianism.

... what? At what point, exactly, did this become a valid thing for you to say to me? The hypothetical asked us which scenario was worse. That is a choice to be made.

Utilitarianism means that the preference ranking of possible worlds is determined only by the properties of those worlds. The hypothetical is asking for that preference ranking. The fact that you have to choose is not one of the properties of those worlds.

Replies from: Logos01
comment by Logos01 · 2011-12-01T08:08:47.267Z · LW(p) · GW(p)

And yet not long ago you said this.

Also remember that the dust speck causes them to blink, measurably.

That has no bearing on the question of self-determination. The sum total of a finitely-large-but-humanly-incomprehensible number of infinitesimal suffering events, in terms of their impact on self-determination, is arguably non-negligible, but it certainly isn't equivalent to the total, repeated ruination of said function.

The fact that you have to choose is not one of the properties of those worlds.

May or may not be a property of those worlds. The question is agnostic to how the worlds are implemented. This is not a trivial or irrelevant detail. In the absence of justification for removal of this property it must be considered.

Even if we do away with that consideration, however, on the balance the argument I've been making holds true.

comment by Zack_M_Davis · 2011-11-30T04:47:33.464Z · LW(p) · GW(p)

the other I think of as the 'logarithmically additive' answer, which inverts the answer on the grounds that forms of suffering are not equal, and cannot be added as simple 'units'.

This is a poor choice of terminology. Logarithmic functions grow slowly, but they're still unbounded: even if the badness of the dustspecks is a logarithmic function (say, the natural log) of the number of people specked, ln(3^^^3) is still so incomprehensibly large that the torture-favoring conclusion still follows. Perhaps what you mean is something more like logistic additivity: the badness of the dustspecks as a function of the number of people specked could approach an asymptote, such that more dust specks are always worse, but the total badness of any amount of dustspecks is below some finite bound. With this assumption, we can deny the counterintuitive torture-favoring conclusion, but only at the price of having to accept a different counterintuitive conclusion, described by Unknown in a 2008 comment: there must exist some bad event such that no number of ever-so-slightly-less-bad events can be as bad as a finite number of the bad events.
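To illustrate the distinction with assumed numbers (the badness values and the bound below are placeholders, not claims about real utilities):

```python
import math

SPECK = 1e-6    # assumed badness of one dust speck
BOUND = 1e3     # assumed asymptote for the logistic version

def log_badness(n):
    """Logarithmic: grows slowly but without bound, so for large enough n
    it still exceeds any fixed badness assigned to torture."""
    return SPECK * math.log(n)

def logistic_badness(n):
    """Logistic-style: strictly increasing in n, yet it never exceeds BOUND."""
    return BOUND * (1.0 - math.exp(-SPECK * n / BOUND))

for n in (1e30, 1e300):
    print(log_badness(n), logistic_badness(n))
# log_badness keeps growing (and is astronomical at n = 3^^^3);
# logistic_badness is already pinned at ~1000 and can go no higher.
```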

Replies from: Logos01
comment by Logos01 · 2011-11-30T05:20:26.123Z · LW(p) · GW(p)

This is a poor choice of terminology. [...] Perhaps what you mean is something more like logistic additivity: the badness of the dustspecks as a function of the number of people specked could approach an asymptote, such that more dust specks are always worse, but the total badness of any amount of dustspecks is below some finite bound.

It's a poorly-defined topic. I definitely disagree with the logistic, however. I was attempting to compare the quantitative nature of the torture against the specks. In other words, I was attempting to create an association between the scale of an individual's suffering and its quantitative expression, such that a "barely noticeable dust-speck in the eye" would be proportionate to a (nearly- or truly-)infinitesimal quantity of torturings.

there must exist some bad event such that no number of ever-so-slightly-less-bad events can be as bad as a finite number of the bad events.

What makes this counterintuitive? "Eternal And I Must Scream" is definitely incomparably worse than "Finite And I Must Scream", regardless of however many countably finite instances of FAIMS you provide.

Replies from: prase, Emile
comment by prase · 2011-11-30T11:49:26.611Z · LW(p) · GW(p)

What makes this counterintuitive?

You have probably misread. The counterintuitive fact is that there are two kinds of (finitely) bad events A and B, B only very slightly worse than A, such that there is a finite number of Bs which is worse than any (possibly infinite) number of As. Explained in detail in the Unknown's comment linked above.

Replies from: Logos01
comment by Logos01 · 2011-11-30T12:00:22.138Z · LW(p) · GW(p)

You have probably misread.

  1. "there must exist some bad event such that no number of ever-so-slightly-less-bad events can be as bad as a finite number of the bad events." is not an equivalent statement to "there are two kinds of (finitely) bad events A and B, B only very slightly worse than A, such that there is a finite number of Bs which is worse than any (possibly infinite) number of As." -- It removes the finiteness of the "original" bad event. That's very significant.

  2. Unknown's comment really doesn't bear relevance to my position, as I actively rejected the notion of the asymptotic limit for the dustspecks. (Hence my rejection of the "logistic" rather than "logarithmic".)

Replies from: prase
comment by prase · 2011-11-30T12:52:56.990Z · LW(p) · GW(p)

I have referred to Unknown's comment not because I thought you accepted an asymptotic limit of cumulative disutility, but because it provided the context in which the statement

there must exist some bad event such that no number of ever-so-slightly-less-bad events can be as bad as a finite number of the bad events

was made. The "original" bad event corresponds to a stubbed toe in the linked comment, something whose disutility is certainly finite. Infinite disutilities were mentioned nowhere in the debate, and the finite badness of the former event is also clear from the fact that the latter event (finitely bad) is said to be "ever-so-slightly-less-bad" than the former.

Replies from: Logos01
comment by Logos01 · 2011-11-30T13:56:10.085Z · LW(p) · GW(p)

The "original" bad event corresponds to a stubbed toe in the linked comment, something whose disutility is certainly finite.

If my view of logarithmic quantification for suffering is valid, then the stubbed-toe would be of vastly greater 'anti-utilon' quantity than the dust-speck in the eye; and torture that much more so.

the finite badness of the former event is also clear from the fact that the latter event (finitely bad) is said to be "ever-so-slightly-less-bad" than the former.

That holds true and relevant under Unknown's context and metrics, but not under mine; a stubbed-toe plus a dust-specking is "ever so slightly worse" than a stubbed-toe alone -- but because this has no asymptotic limit, Unknown's 'counterintuitive result' is irrelevant; it does not manifest.

Replies from: prase
comment by prase · 2011-11-30T16:13:53.092Z · LW(p) · GW(p)

If my view of logarithmic quantification for suffering is valid, then the stubbed-toe would be of vastly greater 'anti-utilon' quantity than the dust-speck in the eye; and torture that much more so.

Is it meant to be a refutation of my claim that disutility of a stubbed toe is finite? Else, I don't see the relevance.

Unknown's 'counterintuitive result' is irrelevant; it does not manifest

Remember that you originally wrote that it (more precisely, Zack_M_Davis's restatement of it) is not counterintuitive, not that it is irrelevant. Irrelevance was never disputed.

comment by Emile · 2011-11-30T11:23:44.786Z · LW(p) · GW(p)

It's a poorly-defined topic.

That doesn't justify sloppy use of technical terminology. It's better to use technical terminology correctly, or to not use it at all, than to make a nontechnical argument with the trappings of a technical one. The latter is borderline cargo-cult science.

Replies from: Logos01
comment by Logos01 · 2011-11-30T11:30:00.118Z · LW(p) · GW(p)

It's a poorly-defined topic.

That doesn't justify sloppy use of technical terminology.

I was opening into the explanation that I hadn't in fact made sloppy use of technical terminology, but rather said exactly what I meant to say. Did I not make this sufficiently clear?

Replies from: Emile
comment by Emile · 2011-11-30T12:53:42.070Z · LW(p) · GW(p)

I don't see anything in your explanation that justifies "logarithmically additive" - so no, I don't think you made it sufficiently clear.

Replies from: Logos01
comment by Logos01 · 2011-11-30T14:01:52.303Z · LW(p) · GW(p)

I see. I'll rephrase.

I was asserting that anti-utilons do not increase linearly with pain but logarithmically: the scale of difference between a dust-speck in the eye, a splinter in your thumb, a stubbed toe, a broken toe, a stomach-ache, and torture is such that it takes multiples of each 'smaller' event to accumulate the anti-utilon equivalent of the next-higher "unit" (i.e., orders of magnitude; a logarithmic scale).

This is what I meant when I said that 'logarithmically additive' refers to the quantitative scale of the various suffering events, such that a dust-speck would be equivalent to an infinitesimal fraction of a torture.
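To make the scale concrete, here is a minimal sketch in Python, assuming entirely invented intensity values -- the event list, the numbers, and the anti_utilons function are illustrative assumptions, not anything measured or taken from the thread:

    import math

    # Assumed relative pain intensities, with a dust-speck as the unit.
    # These particular numbers are invented purely for illustration.
    relative_pain = {
        "dust speck": 1.0,
        "splinter": 1e2,
        "stubbed toe": 1e4,
        "broken toe": 1e6,
        "stomach-ache": 1e8,
        "fifty years of torture": 1e30,
    }

    def anti_utilons(pain):
        # Logarithmic quantification: disutility tracks the order of
        # magnitude of the pain, so each higher "unit" requires
        # multiples of the lower one to match it.
        return math.log10(1.0 + pain)

    for event, pain in relative_pain.items():
        print(f"{event:>22}: {anti_utilons(pain):5.1f} anti-utilons")

On this toy mapping a speck registers a tiny positive disutility while the torture sits some thirty orders of magnitude up the scale.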

Replies from: Emile, None
comment by Emile · 2011-11-30T21:37:34.047Z · LW(p) · GW(p)

I was asserting that anti-utilons do not increase linearly with pain but logarithmically

Whether the increase is linear or logarithmic does not change anything - in both cases there could be a number N large enough that the disutility of N dust specks is larger than that of one torture. That is why Eliezer picked a mindfuckingly large number like 3^^^3 - to sidestep nitpicking over the exact shape of the utility function.

What would make a difference is if the disutility were a bounded function, hence Zack's suggestion of the logistic function.
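A rough numerical sketch of that distinction, with assumed constants (the per-speck and torture disutilities below are invented, and the saturating curve merely stands in for the spirit of the logistic suggestion):

    import math

    SPECK = 1e-9    # assumed per-speck disutility: tiny but nonzero
    TORTURE = 1e9   # assumed disutility of fifty years of torture

    def linear_total(n):
        # Unbounded: exceeds TORTURE once n > TORTURE / SPECK.
        return n * SPECK

    def log_total(n):
        # Slower but still unbounded: exceeds TORTURE for some huge
        # finite n (one still vastly smaller than 3^^^3).
        return math.log(1.0 + n * SPECK)

    def bounded_total(n, ceiling=1.0):
        # Saturates below its ceiling, so no number of specks,
        # however large, ever reaches TORTURE.
        return ceiling * (1.0 - math.exp(-n * SPECK / ceiling))

    for n in (1e12, 1e18, 1e24, 1e30):
        print(f"n={n:.0e}  linear={linear_total(n):.3g}  "
              f"log={log_total(n):.3g}  bounded={bounded_total(n):.3g}")

Only the bounded curve can keep any number of specks below the torture; both unbounded curves eventually cross it.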

(Many people have been trying to tell you this in this thread, including TimS who seems to agree with your conclusions. You may want to update.)

Replies from: Logos01
comment by Logos01 · 2011-11-30T22:30:59.045Z · LW(p) · GW(p)

Whether the increase is linear or logarithmic does not change anything - in both cases there could be a number N large enough that the disutility of N dust specks is larger than that of one torture.

This does not follow. One torture could, potentially, possess effectively infinite disutility. But that's beside the point, as I was simply expressing the notion of logarithmic scaling of pain -- especially since we're dealing with a nearly-negligible kind of pain on the lower end. A "mind-fuckingly large number" of nearly-zero-value instances of pain would not necessarily find its way back up the scale to the "mind-fuckingly large number" of anti-utilons induced by the torture-for-fifty-years.

An infinitesimal amount of pain, multiplied by a non-infinite "mind-fuckingly large number", would not be guaranteed to exceed "1", let alone achieve its own "mind-fuckingly large number" all over again.

That is the entire point of noting the logarithmic nature of pain -- I was pointing out that the disutility experienced by the torture victim itself, according to that metric, was also a "mind-fuckingly large number". I should have expected this to be obvious from the fact that logarithmic functions are unbounded.

That is why Eliezer picked a mindfuckingly large number like 3^^^3 - to sidestep nitpicking over the exact shape of the utility function.

And if disutility added linearly that would be a successful achievement on his part.

What would make a difference is if the disutility were a bounded function, hence Zack's suggestion of the logistic function.

Zack's suggestion was not appropriate to describing my position even slightly. I strongly disagree with the logistic function. My assertion is not that there is an upper bound to how much suffering can be received by dust-specking, but rather that there is no upper bound on the suffering of torture.

But that still only considers the primary consequences.

(Many people have been trying to tell you this in this thread, including TimS who seems to agree with your conclusions. You may want to update.)

Many people have been trying to tell me many things. Mostly that my premise is invalid on its face -- but not a single one of them has provided anything resembling a logically sound reason for dismissing my position.

I only update my beliefs when provided with legitimate arguments or with evidence. Nothing to this point has passed muster as non-contradictory.

There is a further reason for my maintaining this position, however: even when I specifically stipulated the linear-additive view -- that is, when I granted that the direct suffering of the torture victim was less than that of the dust-speckings -- introducing the secondary consequences and their impact STILL left me choosing the dust-speckings as the 'lesser of two evils'.

And that, in fact, was the real "core" of my argument: that we must not, if we are to consider all the consequences, limit ourselves solely to the immediate consequences when deciding which of the two sets of outcomes has the greater disutility. I further object to the notion that "suffering" vs. "pleasure" is the sole relevant metric for utility. And by that standard, the additional forms of disutility, when comparing the dust-speckings against the torture, weigh strongly against the torture being conducted; the dust-speckings, while a nuisance, simply do not register at all on those other metrics.

Replies from: Emile
comment by Emile · 2011-12-01T09:38:32.991Z · LW(p) · GW(p)

An infinitesimal amount of pain, multiplied by a non-infinite "mind-fuckingly large number", would not be guaranteed to exceed "1", let alone achieve its own "mind-fuckingly large number" all over again.

That is the entire point of noting the logarithmic nature of pain

The term "logarithmic" does not capture that meaning. Your concept of "infinitesimal" such that you can never get to 1 by multiplying it by a number no matter how large is not a part of "standard" mathematics; you can get something like that with transfinite numbers and some other weirdness, but none of those are particularly related to logarithms and orders of magnitude.

Your whole use of "really small numbers" and "really large numbers" in this thread (notably in the discussion with paper-machine) is inconsistent with the ways those concepts are usually used in maths.
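The standard fact at issue is the Archimedean property of the real numbers: no positive real is infinitesimal, so for any positive epsilon there is a finite N with N times epsilon greater than 1. A one-line check, with an assumed epsilon:

    eps = 1e-300            # assumed: as small as you like, but positive
    N = int(2.0 / eps)      # any integer above 1/eps will do
    assert N * eps > 1.0    # Archimedean property of the reals
    print(N * eps)          # prints 2.0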

Replies from: Logos01
comment by Logos01 · 2011-12-01T10:58:32.323Z · LW(p) · GW(p)

Your concept of "infinitesimal" such that you can never get to 1 by multiplying it by a number no matter how large is not a part of "standard" mathematics; [and is not] particularly related to logarithms and orders of magnitude.

  1. I'm pretty sure you meant to say finite number, here.

  2. Are we talking about the same concept of orders of magnitude? (It might help to consider the notion that both torture and dust-speckings are distant from the same approximate "zero-magnitude" event which is the 1 anti-utilon.)

Your whole use of "really small numbers" and "really large numbers" in this thread (notably in the discussion with paper-machine) is inconsistent with the ways those concepts are usually used in maths.

For any value of a (finite) "really large number" n there is an equivalent "really small number" that can be expressed as 1/n. The notion of the logarithmic quantification of pain bears on our discussion because we have declared the dust-specking the "smallest possible unit of suffering". This renders it nearly infinitesimal, and as such subject to the nature of infinitesimal values, which are essentially the exact inverses of "mind-fuckingly large" numbers.

It is furthermore worth noting that there is, since we're on the topic of numbers and quantification, a sort of verbal sleight-of-hand going on here: the 'priming' effect of associating 3^^^3 'dust-speckings' with a "mere" 50 'years of torture'. I have repeatedly been asked, "If 3^^^3 isn't enough, how about 3^^^^3 or 3^^^^^3?" -- or questions to that effect. When I note that this is privileging the hypothesis and attempt to invert it by asking what number of years of torture would be sufficient to overwhelm 3^^^3 dust-speckings in terms of disutility, a universal response is given, which I will quote exactly:

" "

This, I feel, is very telling, in terms of my current point regarding that "sleight-of-hand". Unlike units of measurement are being used here. I'll demonstrate by switching from measurement of pain to measurement of distance (note that I am NOT stating these are equivalent values; I'm demonstrating the principle I reference here, and do NOT assert it to be a correct analogy to the torture-vs-specking answers.)

"Which is the longer distance? 50 lightcone-diameters or 3^^^3 nanometers?"

comment by [deleted] · 2011-11-30T14:44:26.746Z · LW(p) · GW(p)

Are you saying that 3^^^3 is not sufficiently large? Then consider 3^^^^3.

Whatever epsilon you assign to dust specks, there's still a yet larger number such that this number of dust specks is worse than torture. Everything else is just accounting that we can't feasibly calculate anyway.

Replies from: TimS, Logos01
comment by TimS · 2011-11-30T15:18:57.242Z · LW(p) · GW(p)

there's still a yet larger number such that this number of dust specks is worse than torture.

He (and I) deny this statement is true. There is no sum of sufferings that adds up to torture. It is analogous to the fact that the union of countably many countably infinite sets is still countable, and hence not as large as the set of real numbers.
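In standard notation, the cardinality fact the analogy leans on (a textbook result, assuming countable choice) is:

    % A countable union of countable sets is countable:
    \Bigl|\, \bigcup_{n \in \mathbb{N}} A_n \,\Bigr| \le \aleph_0
    \quad \text{for countable } A_n,
    \qquad \text{whereas} \qquad
    |\mathbb{R}| = 2^{\aleph_0} > \aleph_0 .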

I don't know why Logos insists that logarithmic captures this idea.

comment by Logos01 · 2011-11-30T15:20:44.146Z · LW(p) · GW(p)

Are you saying that 3^^^3 is not sufficiently large? Then consider 3^^^^3.

I'm saying that dust-specks are practically infinitesimal comparatively, and that's in direct comparison. Ergo, a nearly (or potentially truly) infinite number of dust-speckings would be required to equal one torturing for fifty years, just in terms of direct suffering.

If we were to include the fact that such a torturing would result in the total personality destruction of several reconstructed psyches over the period of that 50 years, I don't necessarily know that such a torture couldn't rightly be called effectively infinite suffering for a single individual.

there's still a yet larger number such that this number of dust specks is worse than torture.

In terms solely of the immediate and direct suffering, certainly. In terms of that suffering and the other consequences -- individual self-determination, for example, or social productivity costs -- even an infinite number of dust-speckings becomes insufficient to equal a single fifty-year torture. How much those additional elements 'count' as compared to the suffering alone is a question not immediately available to simple calculation; we have no means of converting the various forms of utility into a single comparable unit.

Everything else is just accounting that we can't feasibly calculate anyway.

We're already comparing two unimaginably large numbers against one another. Take, for example, the adjustment of 3^^^3 to 3^^^^3: how would you decide on the torture vs. dust-speckings if we did the same to torture? What number of years of torture would be needed to "exceed" 3^^^3 dust-speckings? 51? 500? 50^50?

Replies from: None, TimS
comment by [deleted] · 2011-11-30T16:40:26.307Z · LW(p) · GW(p)

I'm saying that dust-specks are practically infinitesimal comparatively, and that's in direct comparison. Ergo, a nearly (or potentially truly) infinite number of dust-speckings would be required to equal one torturing for fifty years, just in terms of direct suffering.

I don't understand what you mean by "practically infinitesimal". Are you saying the negative utility incurred by a dust speck is zero? Also, what do you mean by "nearly... infinite"? Either a quantity is infinite or finite.

there's still a yet larger number such that this number of dust specks is worse than torture.

In terms solely of the immediate and direct suffering, certainly. In terms of that suffering and the other consequences -- individual self-determination, for example, or social productivity costs -- even an infinite number of dust-speckings becomes insufficient to equal a single fifty-year torture.

You've completely lost me. If X is the negative utility of N dust specks, and Y the negative utility of fifty years of torture, then the first sentence implies that X > Y. Then the second sentence defines a second kind of negative utility, Z, due to other consequences. It goes on to imply that X + Z < Y. All quantities involved are positive (i.e., the units involved are antiutilons), so there's a contradiction somewhere, unless I've misread something.

Replies from: Logos01
comment by Logos01 · 2011-11-30T16:50:35.512Z · LW(p) · GW(p)

Are you saying the negative utility incurred by a dust speck is zero?

Nearly zero. That's part of the hypothesis: that it be the smallest possible unit of suffering. If the logarithmic scale of quantification for forms of suffering holds true, then forms of suffering at the maximal end of the scale would be practically infinite by comparison.

Either a quantity is infinite or finite.

Correct, but a number that approaches infinity is not itself necessarily infinite; merely very large. 3^^^3 for example.

You've completely lost me. If X is the negative utility of N dust specks, and Y the negative utility of fifty years of torture, then the first sentence implies that X > Y.

The negative utility considered so far, yes. Also, keep in mind that we're at this point privileging the hypothesis of torture being chosen: we are allowing the number of speckings to be adjusted but leaving the torture fixed. (While it doesn't really change anything in the discussion, it bears noting when considering the final conclusion.)

Then the second sentence defines a second kind of negative utility, Z, due to other consequences. It goes on to imply that X + Z < Y.

No, it implies that Z(X) + X < Z(Y) + Y.

so there's a contradiction somewhere, unless I've misread something.

My argument rests on the notion that the Z-function value of X is effectively zero, and on my further assertion that the Z-function value of Y, when added to Y, overwhelms the value of X.
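As a toy rendering of that inequality, with every number an assumption invented for illustration (nothing here is measured or derived):

    X  = 1e9    # stipulated direct disutility of the 3^^^3 speckings
    Y  = 1e9    # stipulated direct disutility of the fifty-year torture
    ZX = 0.0    # claimed secondary disutility of the speck choice: ~zero
    ZY = 1e12   # claimed secondary disutility of the torture choice: huge

    # The shape of the argument: even with direct suffering stipulated
    # equal, the secondary term flips the comparison toward the specks.
    assert ZX + X < ZY + Y
    print("specks total:", ZX + X, "  torture total:", ZY + Y)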

Replies from: None
comment by [deleted] · 2011-11-30T17:02:38.523Z · LW(p) · GW(p)

Nearly zero. That's part of the hypothesis: that it be the smallest possible unit of suffering. If the logarithmic scale of quantification for forms of suffering holds true, then forms of suffering at the maximal end of the scale would be practically infinite by comparison.

As long as it's nonzero, then as I stated before, there exists some N such that N dust specks have greater negative utility than fifty years of torture. 3^^^3 and 50 are just proxies for whatever the true numbers are.

Correct, but a number that approaches infinity is not itself necessarily infinite; merely very large. 3^^^3 for example.

This is a category error. 3^^^3 does not approach infinity. It's a fixed number, it's not going anywhere.

The rest of your comment clarifies the offending inequality.

Replies from: Logos01
comment by Logos01 · 2011-11-30T17:20:06.730Z · LW(p) · GW(p)

This is a category error. 3^^^3 does not approach infinity. It's a fixed number, it's not going anywhere.

Can you intelligibly grasp it? Or is it "unimaginably large"? For purposes of human consideration, I do not feel it necessary to differentiate between a truly infinite number and one that is "pseudo-infinite" (where by pseudo-infinite I mean 'beyond our comprehension'). I admit this is an imperfect hack.

Replies from: None
comment by [deleted] · 2011-11-30T17:23:15.183Z · LW(p) · GW(p)

For purposes of human consideration, I do not feel it necessary to differentiate between a truly infinite number and one that is "pseudo-infinite" (where by pseudo-infinite I mean 'beyond our comprehension').

That way lies the madness of pre-Weierstrass analysis.

comment by TimS · 2011-11-30T15:28:45.837Z · LW(p) · GW(p)

In terms solely of the immediate and direct suffering, certainly.

Why do you concede this? All the suffering you list after this is just as direct.

Saying that torture-without-most-of-the-things-that-make-it-wrong is not so bad might be true, but it isn't useful.

Replies from: Logos01
comment by Logos01 · 2011-11-30T16:04:28.359Z · LW(p) · GW(p)

Why do you concede this? All the suffering you list after this is just as direct.

None of the things I listed afterwards were "suffering" at all. "Suffering" is not "the absence of pleasure" -- it is "the antithesis of pleasure". "Utility" is not synonymous with "enjoyment" or "pleasure". (Also, please do recall that hedonistic utilitarianism is far from the only form of utilitarianism in existence.)

Saying that torture-without-most-of-the-things-that-make-it-wrong is not so bad might be true,

What... ? Just... who are you reading in these threads? I am finding myself more and more convinced that you are responding to the writings of someone other than me. You seem to have a persistent habit of introducing notions to our discussions -- in a manner as though you were responding to something I had written -- that bear no relation whatsoever to anything I have written or implied.

Why is this?

Replies from: TimS
comment by TimS · 2011-11-30T16:12:55.414Z · LW(p) · GW(p)

I'll concede that suffering might not be the right word. But everything later in that sentence is an essential part of why torture is wrong. If torture didn't imply those things (i.e. wasn't torture), then it would be the right choice compared to dust-specks.

Replies from: Logos01
comment by Logos01 · 2011-11-30T16:33:09.269Z · LW(p) · GW(p)

But everything later in that sentence is an essential part of why torture is wrong.

Of course those things are essential parts of why torture is wrong. They would have to be, for my argument to be valid.

If torture didn't imply those things (i.e. wasn't torture), then it would be the right choice compared to dust-specks.

Are you simply unaware that the conventional wisdom here on Less Wrong is that the proper answer is to choose to torture one person for fifty years rather than dust-speck 3^^^3 people?

Replies from: TimS
comment by TimS · 2011-11-30T16:45:31.956Z · LW(p) · GW(p)

Are you simply unaware that the conventional wisdom here on Less Wrong is that the proper answer is to choose to torture one person for fifty years rather than dust-speck 3^^^3 people?

Yes, that is the conventional wisdom. I agree with you that it is wrong, because the features of torture you describe are why the particular badness of torture cannot be reached by summing huge amounts of a lesser badness.

You seem to think that someone could think dust-specks was the right answer without taking into account those essential parts of torture. Otherwise, why do you think that the secondary effects of allowing torture were not considered in the original debate?

Replies from: Logos01
comment by Logos01 · 2011-11-30T16:52:13.705Z · LW(p) · GW(p)

Otherwise, why do you think that the secondary effects of allowing torture were not considered in the original debate?

Because I read the original submission and its conversation thread.

Replies from: TimS
comment by TimS · 2011-11-30T18:40:27.945Z · LW(p) · GW(p)

This is a bit of Meta-Comment about commenting:

As you noted in your post, people in the original thread objected to choosing torture for reasons that basically reduce to the "non-additive badness" position. For me, that position is motivated by the badness of torture you described in your post. So I read the other commenters charitably, as including consideration of the sheer wrongness of torture. I simply can't see why one would pick dust-specks without that consideration.

Now you say I'm reading them too charitably. I've been told before that I do that too much. I'm not sure I agree.

comment by orthonormal · 2011-12-02T05:42:33.972Z · LW(p) · GW(p)

If you truly want to become stronger, note that several people whose intellects you respect have said that you're not processing their objections correctly. You really should consider the possibility that your mind is subconsciously shrinking away from a particular line of thought, which is notoriously difficult to see as it's happening, especially when perceived social status is at stake.

From my perspective, it looks like you're either rejecting consequentialism (which is a respectable philosophical position in most circles, but you don't admit outright this is what you're doing), or else you're importing an additional consequence for each of the 3^^^3 people, like living in a dystopia, which dodges the numerical issue by being worse for each of those 3^^^3 people than getting a dust speck would be. Thus you're dodging at least one real purpose of the thought experiment, how to balance massive consequences for a few against mild consequences for many, and you should at least construct for yourself a variant where that dilemma is really explored. We all agree that torture is bad (all things being equal), so if you change the consequences in this way you're reducing it to a trivial question.

comment by lessdazed · 2011-11-30T05:04:15.209Z · LW(p) · GW(p)

I do not believe...only...misses the point.

Am I reading that correctly?

I do not condone torture even in the "ticking time bomb" scenario: I cannot accept the culture/society that would permit such a torture to exist. To arbitrarily select out one individual for maximal suffering in order to spare others a negligible amount would require a legal or moral framework that accepted such choices

There are multiple questions here, and they don't necessarily have similar answers.

Some examples:

A person who campaigns to ban torture and make it illegal in all cases may be acting to shape the legal framework of the society while still endorsing the use of torture in some cases.

Society's banning of torture does not necessarily lower its incidence, so it is too convenient to say that "I still cannot agree with the conclusion of torture over the dust speck, [because] I cannot accept the culture/society that would permit such a torture to exist," as if there would never be a need to choose between torture's prevalence and endorsement by society. You have given no guidelines for choosing between a situation in which torture is illegal but common and a situation in which it is legal and less common. In truth, the original argument had nothing to do with legality.

A person might volunteer for the torture, making the dust speck option the only one that violates the principle of self determination.

The "conclusion by refusal to accept the alternative" is a weak principle because it implies lack of consideration for the consequences of the other choice. I cannot accept the world in which torture exists and cannot accept the world in which suffering is inflicted on so many people.

The focus on the arbitrariness of the choice doesn't get to the root of the issue, as some people seem to actually reject the torture vs. dust specks scenario for that reason. This is not a factor in many ticking time bomb scenarios.

In general, there are so many unrelated things tied together here that most simply cannot be true objections that always apply; other scenarios may pit these objections against each other.

the other I think of as the 'logarithmically additive' answer

As logarithmic functions don't have a finite limit, this seems like an odd way of labeling it.

Replies from: Logos01
comment by Logos01 · 2011-11-30T05:26:19.456Z · LW(p) · GW(p)

I do not believe...only...misses the point.

Am I reading that correctly?

Edited. ( s/do not// )

Society's banning of torture does not necessarily lower its incidence, so it is too convenient to say that "I still cannot agree with the conclusion of torture over the dust speck, [because] I cannot accept the culture/society that would permit such a torture to exist,"

The mere fact that immoral things occur is not a criticism of the moral framework itself but rather of our inability to adhere to it. Noting this flaw in our behaviors, I believe, does not provide grounds for argument against the framework.

In truth, the original argument had nothing to do with legality.

And I noted that it is useful for sussing out whether one holds to the linear or the logarithmic view of suffering, in that context.

As logarithmic functions don't have a finite limit, this seems like an odd way of labeling it.

I believe being tortured exquisitely for fifty years is (effectively, at least) infinitely worse than having a nearly-unnoticeable dust-speck in my eye.

Replies from: lessdazed
comment by lessdazed · 2011-11-30T05:36:29.975Z · LW(p) · GW(p)

The mere fact that immoral things occur

The sentences you quoted should be interpreted as fitting within their paragraph - they introduce the possibility that making something illegal might not reduce how often it occurs. In general, for almost no moral framework should one attempt to make the law match it.

Can I or can I not take you to mean that increased incidence of torture, likely or assured, is always worth the benefit of torture being illegal according to the law and/or common perception? Usually worth it? Never worth it? Perhaps the biggest consideration is the nature of the causal relationship between one's act and the decision to torture?

Replies from: Logos01
comment by Logos01 · 2011-11-30T05:45:28.625Z · LW(p) · GW(p)

In general, for almost no moral framework should one attempt to make the law match it.

Of course not. I was not especially aware that I had extended the discussion beyond the realm of the moral into the legal, however -- so I can't say I find anything relating to the comparison between the two to be especially relevant to the discussion at hand.

Can I or can I not take you to mean that increased incidence of torture, likely or assured, is always worth the benefit of torture being illegal according to the law and/or common perception? Usually worth it? Never worth it?

I am not, here, making any forays into the legal arena. I will say that I, agnostic to any other considerations, strongly prefer scenarios that result in less torture being committed as opposed to more.

Perhaps the biggest consideration is the nature of the causal relationship between one's act and the decision to torture?

My argument, here, does rest on the need to consider secondary consequences when making a properly "consequentialist" argument for which choice to make, yes. I'm not entirely sure that actually answers the question you're asking, however.

Replies from: lessdazed
comment by lessdazed · 2011-11-30T06:01:33.989Z · LW(p) · GW(p)

I am not, here, making any forays into the legal arena.

I see the following as an argument against legalizing or otherwise endorsing behavior, but not as an argument against an individual's performing the behavior:

I cannot accept the culture/society that would permit such a torture to exist.

On the balance, how did the gatherer affect the social taboo against "work" on the Sabbath?

Numbers 15:32-36 King James Version (KJV)

And while the children of Israel were in the wilderness, they found a man that gathered sticks upon the sabbath day.

And they that found him gathering sticks brought him unto Moses and Aaron, and unto all the congregation.

And they put him in ward, because it was not declared what should be done to him.

And the LORD said unto Moses, The man shall be surely put to death: all the congregation shall stone him with stones without the camp.

And all the congregation brought him without the camp, and stoned him with stones, and he died; as the LORD commanded Moses.

He provided an excuse to reinforce the prohibition. Someone knowing the outcome of the story couldn't have said his action made it less taboo, as it wasn't previously established that it was a capital offense. All the more so for the 1,000,001st person to torture someone, after the million who preceded him, so long as the 1,000,001st is punished severely.

Replies from: Logos01
comment by Logos01 · 2011-11-30T06:20:06.540Z · LW(p) · GW(p)

I see the following as an argument against legalizing or otherwise endorsing behavior, but not as an argument against an individual's performing the behavior:

As a general trend, if we accept one form of action as opposed to the other, we reduce the threshold to its being repeated. This is akin to the Broken Window Theory: what was permitted once may be argued more permissible in the future due to said permission. Individual instances of a behavior then become arguments for or against it. For example, I believe that the US's practice of condoning "enhanced interrogation techniques" contributed directly to the events at Abu Ghraib.

What I mean to say is: as a practical argument, deciding between the two must consider the impact of the decision on the likelihood for the type of behavior to recur, amongst other things. The key is in that "or otherwise endorsing behavior" -- graffiti in a neighborhood results in increased burglaries, littering, and other forms of crime. Increasing the instances of intentional/chosen torture increases the likelihood of acts of equivalent or lesser severity being committed.

On the balance, how did the gatherer effect the social taboo against "work" on the Sabbath? [...] All the more so for the 1,000,001st person to torture someone, after the million who preceded him, so long as the 1,000,001st is punished severely.

There is historical inertia to how individual actions accumulate to affect the actions society deems acceptable, yes. This is an element of my argument.

Replies from: lessdazed
comment by lessdazed · 2011-11-30T06:41:47.935Z · LW(p) · GW(p)

as a practical argument, deciding between the two must consider the impact of the decision on the likelihood for the type of behavior to recur, amongst other things.

Part of figuring out the impact of the decision on the likelihood for the type of behavior to recur is other people's responses to it, amongst other things.

The act of painting graffiti doesn't cause crime; certain responses to it do. Increasing the instances of graffiti only increases crime all else equal, but does not increase it irrespective of communal response.

In this post and several of its threads you seem to be violating the principle of least convenient possible world. One of your original criticisms of torture instead of specks was that it assumed very particular consequences of actions - that torturing wouldn't ever affect future choices to torture. However, you illegitimately assume that it will always affect future choices to torture by making it more likely. It seems almost parallel.

If anything, appealing to future cases makes the argument for specking stronger. At a certain number of future cases, at a certain quantity of specks, more people would be tortured for 50 years if one always chose specks than if one always chose torture!

Replies from: Logos01
comment by Logos01 · 2011-11-30T09:00:22.733Z · LW(p) · GW(p)

One of your original criticisms of torture instead of specks was that it assumed very particular consequences of actions - that torturing wouldn't ever affect future choices to torture.

... My original criticism depended on the idea that torturing would affect future choices to torture. That continues to be my criticism. From where do you derive this idea that I assert it does not?

However, you illegitimately assume that it will always affect future choices to torture by making it more likely.

Please explain why you find this to be an "illegitimate assumption". Especially in the face of the explanations I have thus far given as to why it would in fact occur.

At a certain number of future cases, at a certain quantity of specks, more people would be tortured for 50 years if one always chose specks than if one always chose torture!

I disagree. Strongly. Very strongly, in fact, for the reason I've already given: by the time 3^^^3 people were being tortured for fifty years as a result of dustspecks, the equivalent number of choices made for torture instead -- even if we assume that the torture scenario has only a quarter the total suffering of the 3^^^3 speckings -- would involve a sheer volume of tortures that would certainly invoke the Broken Window Theory. At a certain point human beings will -- from sheer necessity for psychological stability -- engage in the suspension of moral belief. "One person dying is a tragedy; a thousand is a statistic; a million is a number." Such immunization to the suffering of others, resulting from the sheer volume of that suffering, would -- unless some major alterations were made to human psychology -- lead to the institutionalization of such suffering. As a result, then -- again, all other things being equal -- there would be far more torture, rape, and sheer absence of compassion and aid. We still have societies of this nature today.

If nothing else, the expansion from a single instance to multiple makes this principle far more overtly obvious -- it allowed me to make what I personally feel is an absurd declaration (that 'terrific' torture for fifty years is equivalent, in terms of direct suffering, to 1/4th of 3^^^3 almost-unnoticeable, near-instant nuisance events -- when, as I said before, I feel that dustspecking is infinitesimal in comparison to said torture).

And that's only considering the immediate suffering, as opposed to other consequences -- such as the impact on the psychological well-being of those involved, their ability to contribute to society and the positive utility such individuals might then create, or the negative social utility of caring for those who have been exposed to such effects, etc. (For example, if we state that torture and dustspecks have exactly equal amounts of direct suffering, we should still obviously choose the specks. There is exactly zero social integration cost for recovery from a dust-speck -- even 3^^^3 of them; the same is not true of the torture victim.)

Replies from: lessdazed
comment by lessdazed · 2011-11-30T10:20:01.932Z · LW(p) · GW(p)

One of your original criticisms of the choice of torture instead of specks was that that choice assumed very particular consequences of actions - that torturing wouldn't ever affect future choices to torture. However, you assume that it would always affect future choices to torture by making it more likely. Both of these assumptions are too extreme for the real world, though fine for hypotheticals in which other questions - such as aggregation of utility - are the subject.

Arguing that something would usually or often happen doesn't undermine the original thought experiment in which that wasn't one of the variables. In practice, I'm happy to say that for some small amount of pain and some number of people, inflicting more pain per person on fewer people is preferable, but those numbers depend on other consequences of the choice. If in practice every choice made to cause more pain to fewer people when it is not the first week of December, GMT, causes a plague somewhere, that affects the calculus. Sometimes it will be the first week of December, and in any case "some number of people" is not fixed and can be different depending on the week, etc.

if we state that torture and dustspecks have exactly equal amounts of direct suffering, we should still obviously choose the specks.

If inflicting x pain on Q people for t1 time directly causes the same amount of suffering as inflicting y pain on R people for t2 time, and inflicting x pain on Q people for t1 time indirectly causes more suffering than inflicting y pain on R people for t2 time, we prefer the first option. That doesn't undermine any utilitarianism or make one question the coherence of aggregating suffering.

At a certain point human beings will -- from sheer necessity for psychological stability -- engage in the suspension of moral belief. "One person dying is a tragedy; a thousand is a statistic; a million is a number."

Teenage Mugger: [Dundee and Sue are approached by a black youth stepping out from the shadows, followed by some others] You got a light, buddy?
Michael J. "Crocodile" Dundee: Yeah, sure kid.
[reaches for lighter]
Teenage Mugger: [flicks open a switchmillion] And your wallet!
Sue Charlton: [guardedly] Mick, give him your wallet.
Michael J. "Crocodile" Dundee: [amused] What for?
Sue Charlton: [cautiously] He's got a large number.
Michael J. "Crocodile" Dundee: [chuckles] That's not a large number.
[he pulls out a large Bowie 3^^^^3]
Michael J. "Crocodile" Dundee: THAT's a large number.
[Dundee slashes the teen mugger's jacket and maintains eyeball to eyeball stare]
Teenage Mugger: Shit!

--"Crocodile" Dundee, alternate universe

Replies from: Logos01
comment by Logos01 · 2011-11-30T11:00:47.919Z · LW(p) · GW(p)

One of your original criticisms of the choice of torture instead of specks was that that choice assumed very particular consequences of actions - that torturing wouldn't ever affect future choices to torture.

This is the exact opposite of a true statement about my original criticisms.

However, you assume that it would always affect future choices to torture by making it more likely.

Ceteris paribus, yes. All other things being equal, consciously selecting torture and then carrying it out will, in fact, make future tortures more likely. Under the assertions of the empirical research associated with the Broken Window Theory, this is not merely an assumption, it's a fact. (In other words, my assumption is that the experiments on the topic allow for valid predictions in this question.)

Arguing that something would usually or often happen doesn't undermine the original thought experiment in which that wasn't one of the variables.

I'm sorry, consequentialism doesn't work that way. Consequences of a choice are consequences of a choice. This is a tautology. When comparing the utilitarian consequences of a given choice, all utility-affecting consequences must be considered.

Furthermore, I do not understand why you would phrase this in terms of "undermining the original thought experiment". Certainly, I'm undermining Eliezer's conclusion of the experiment -- and those who agree with him. But that's hardly equivalent to undermining the experiment itself.

I'm arguing you are wrong to choose "torture". Not that the experiment is invalid.

If inflicting x pain on Q people for t1 time directly causes the same amount of suffering as inflicting y pain on R people for t2 time, and inflicting x pain on Q people for t1 time indirectly causes more suffering than inflicting y pain on R people for t2 time, we prefer the first option.

Say the value of direct disutility is d(X). We here stipulate that d(torture) and d(speck) are equal. Say that the indirect disutility is i(X). We here stipulate that i(torture) > i(speck). We have also stipulated that we are using identical units for disutility. d(torture)+i(torture) > d(speck)+i(speck), yet we prefer torture? I am going to choose to believe that by "prefer" you mean to say that you prefer to say that torture is the worse outcome. I believe your skills as a rationalist exceed the possibility of you intentionally saying the opposite.

That doesn't undermine any utilitarianism or make one question the coherence of aggregating suffering.

I never even remotely suggested either of these things were notions worthy of consideration. Why bring them up?

-- "Crocodile" Dundee, alternate universe

I'm not quite sure what you were saying here, but I know it was funny as hell. :-)

comment by fubarobfusco · 2011-11-30T09:02:07.390Z · LW(p) · GW(p)

This whole discussion seems to hinge on the possibly misleading choice of the word "torture" in the original thought-experiment. Words can be wrong and one way is to sneak in connotations and misleading vividness — and I think that's what's going on here.

In our world, torture implies a torturer, but dust specks do not imply a sandman. "Torture" refers chiefly to great suffering inflicted on a victim intentionally by some person, as a continuous voluntary act on the torturer's part, and usually to serve some claimed social or moral purpose — often political or military today, but in the past frequently juridical or religious.

In the ordinary usage of the word "torture", you as a human can't torture someone you've never met; if you're torturing someone, you know you're doing it, and you probably have what feels to you like a good reason for it; and if you are being tortured, there is some human aware of the fact that they are torturing you, and continuously choosing to do so. All of these facts are morally relevant; they involve the ongoing choices of an agent. (As the "Omelas" story involves the ongoing complicity of an entire city of agents.)

Torture is personal. Dust specks aren't; they come in on the wind at random. We have, I suspect, good reasons to be vastly more concerned over the doings of agents than the doings of wind, since agents may go around optimizing things to their liking, and the wind does not. And that's what I think is going on here: it's not that 3^^^3 is a big number; it's that a torturer is an agent and the wind (exemplary deliverer of dust specks) is not.

Replies from: Logos01
comment by Logos01 · 2011-11-30T11:47:13.743Z · LW(p) · GW(p)

This whole discussion seems to hinge on the possibly misleading choice of the word "torture" in the original thought-experiment. Words can be wrong and one way is to sneak in connotations and misleading vividness — and I think that's what's going on here.

The point is that a choice between the two is made. How the choice is instantiated is entirely irrelevant, save that it be done in equivalent manners. (I.e.; if torture -> torturer, then speck -> specker && if torture !-> torturer; then speck !-> specker)

And that's what I think is going on here: it's not that 3^^^3 is a big number; it's that a torturer is an agent and the wind (exemplary deliverer of dust specks) is not.

That would invalidate equivalency between the two options, however. We needn't go that far. As I originally said: if the question is meant merely to derive whether a person views suffering as operating linearly for quantification purposes, as opposed to logarithmically, then restricting the topic to immediate suffering is sensible. However, the question was not phrased in that manner: it was instead asked to derive which of the two options is preferable to a consequentialistic utilitarian. And my argument, simply put, was that a culture that permits such tortures to occur -- either at the hand of an agent or otherwise -- faces significantly greater secondary consequences than are associated with 3^^^3 dust-speckings. Not the least of which is the ancillary suffering experienced by those cognizant of the suffering who can do nothing to prevent it; and the resulting increases in suffering in general caused by the presence of at least one individual suffering to that extremity -- or, rather, caused by the inurement to human suffering engendered in a non-zero percentage of individuals aware of that suffering. And then there's the question of self-determination; the tortured individual is bereft of all ability to achieve individual utility -- all forms of utility, whereas the 3^^^3 speckees receive only a barely noticeable disutility of displeasure and are otherwise almost entirely unaffected. (It's possible a non-zero portion of those individuals might have accidents or the like, but given how infrequently getting a dust-speck in your eye causes traffic accidents -- as in, I can find no record of such an incident -- that's negligible.)

I hope this clears up any confusion here as to the nature of my argument.

comment by Thomas · 2011-12-02T09:22:25.142Z · LW(p) · GW(p)

You have 50 years of horrible torture and then 50*3^^^3 years of a pleasant life with no dust specks.

OR

50*(3^^^3+1) years of a pleasant life with a dust speck every 50 years.

What would you take?

Replies from: TheOtherDave, ArisKatsaris
comment by TheOtherDave · 2011-12-02T16:52:48.042Z · LW(p) · GW(p)

I would almost certainly take the latter. So would everyone I've ever known. What does that demonstrate?

I mean, it's also almost certainly true that after a year of horrible torture, if you offered me a choice between another 49 years of horrible torture followed by 3^^^3 years of pleasant life, or death, I would choose death. But again... so what?

Replies from: Thomas
comment by Thomas · 2011-12-03T07:12:29.790Z · LW(p) · GW(p)

So, you'd opt for the worse option, according to this list?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-03T14:13:42.955Z · LW(p) · GW(p)

(nods) Likely, if I were somehow placed in a situation where I could make such a choice.

I mean, 50 years of horrible torture is scary as hell, and something I can just barely imagine. 3^^^3 years of pleasant life is so completely outside my experience that I can't even begin to imagine it. The odds that I would make any kind of sensible expected-utility calculation in that situation are basically zero... hell, I don't do all that well with real-life situations where I know that something mildly unpleasant now will bring me tangible benefits later.

Again: what does that demonstrate?

Replies from: Thomas
comment by Thomas · 2011-12-03T14:48:27.580Z · LW(p) · GW(p)

In a moment!

What about some other guy -- where would you put him?

What about the case where the 50 years of torture is in the middle? Or at the end?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-03T17:02:39.758Z · LW(p) · GW(p)

I expect I would choose the torture-free option in all these cases, if I were somehow faced with the choice, for basically the same reason: 50 years of torture is scary, and 3^^^3 years is basically inconceivable.

I would like you to get to a point some time soon.

comment by ArisKatsaris · 2011-12-03T07:36:22.766Z · LW(p) · GW(p)

This analogy doesn't work, because if I had to choose between:

  • 50 years of torture now, followed by 50*3^^^3 years of life
  • 49 * 3^^^3 years now, followed by 100 years of torture

I'd also end up choosing the latter, though there's less life, and more torture -- just because the years of torture are further away.

Replies from: Thomas
comment by Thomas · 2011-12-03T10:26:14.504Z · LW(p) · GW(p)

So, you say, we are incapable of choosing the better option for ourselves. 50 years of torture plus 50*(3^^^3-1) years of a good life with no dust specks is better than the second option, with a dust speck every 50 years and no torture, for 50 times 3^^^3 years?

We just can't/won't go for a better one?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-12-03T14:07:19.757Z · LW(p) · GW(p)

So, you say, we are incapable of choosing the better option for ourselves.

Am pretty sure I didn't say we are "incapable" of anything, and I have to warn you I don't appreciate a tactic of putting words in my mouth: it's a berserk button for me. So please be careful about this.

But if you want me to say something using the word "incapable" in it, currently we're pretty incapable of understanding the scope of 3^^^3.

comment by DanielLC · 2011-11-30T06:40:09.340Z · LW(p) · GW(p)

I believe that any metric of consequence which takes into account only suffering when making the choice of "torture" vs. "dust specks" misses the point.

No, that was the point alright. If you don't believe me, ask Eliezer.

moral responsibility,

If it's not happiness, I don't find it intrinsically important. Also, if you do consider moral responsibility to be intrinsically important, you end up with a self-referential moral system. I don't think that would end well.

standards of behavior that either choice makes acceptable,

A society that lives by utilitarian principles would be better than any possible society that doesn't. As such, wouldn't encouraging society to live by utilitarian principles be a good thing? If you don't choose torture over an unimaginably worse alternative, you're encouraging people to choose the unimaginably worse option.

for the same reason why I do not condone torture even in the "ticking time bomb" scenario: I cannot accept the culture/society that would permit such a torture to exist.

Out of curiosity, what about the "criminal" scenario? I understand that what they do to criminals isn't technically torture, because the suffering from imprisonment is slower or something to that effect, but that isn't morally relevant.

Replies from: CronoDAS, Logos01
comment by CronoDAS · 2011-11-30T09:51:48.410Z · LW(p) · GW(p)

A society that lives by utilitarian principles would be better than any possible society that doesn't.

That would depend a lot on how different people's utility is weighted. As Mel Brooks put it, "It's good to be the king."

comment by Logos01 · 2011-11-30T08:36:38.304Z · LW(p) · GW(p)

I believe that any metric of consequence which takes into account only suffering when making the choice of "torture" vs. "dust specks" misses the point.

No, that was the point alright. If you don't believe me, ask Eliezer.

If that's Eliezer's position then Eliezer is wrong. I have no choice but to treat him as such until such time as I am introduced to a persuasive argument for why some consequences are "fit" for consideration whereas others are "unfit". I cannot, of my own accord, derive an intelligible system for doing so.

moral responsibility

If it's not happiness, I don't find it intrinsically important.

1) I do not view "happiness" as intrinsically important, but I'm willing to stipulate that it is for this dialogue.

2) I made no argument of 'intrinsic value'/'significance' to moral responsibility. I said instead that how the choice would affect what we deem morally responsible would have consequences in terms of the utility of the resultant society.

A society that lives by utilitarian principles would be better than any possible society that doesn't. As such, wouldn't encouraging society to live by utilitarian principles be a good thing?

Yes, it would. But real utility trumps pseudo utility.

If you don't choose torture over an unimaginably worse alternative, you're encouraging people to choose the unimaginably worse option.

Certainly. Assuming that's what was done. The entire point of my argument was that the net impact of a given choice on utility should be what is considered. Even if we allow for the 3^^^3-dustspeck scenario to be "unimaginably worse" than the single torture, the primary and secondary consequences of the 3^^^3-dustspeck scenario are by no means clearly "unimaginably worse" than the primary and secondary consequences of the torture scenario.

Out of curiosity, what about the "criminal" scenario? I understand that what they do to criminals isn't technically torture, because the suffering from imprisonment is slower or something to that effect, but that isn't morally relevant.

Strike "technically". It isn't torture. Imprisonment (with the exception of extreme forms of solitary confinement) in no way compares to the systematic use of pain and extreme conditions to disrupt the underlying psychological wellbeing of another person. Furthermore, the torture-vs-dustspeck question is of a ceteris-paribus ("all other things being equal") nature. Regardless of which choice you wished to consider, if it was phrased in terms of the suffering being inflicted with cause, then the two are indistinguishable -- though I personally am unable to imagine any person being capable of deserving being "terrifically tortured" for fifty years (or a month, or a week, for that matter. I could see a day for a child rapist. But that's neither here nor there.)

comment by Prismattic · 2011-11-30T04:09:18.355Z · LW(p) · GW(p)

Interesting coincidence. I was just yesterday thinking of terming the torture position "Omelasian."

As an aside, the ones who walk away are also moral failures from the pro-specks standpoint. They should be fomenting revolution, not shrugging and leaving.

Replies from: CronoDAS, Logos01
comment by CronoDAS · 2011-11-30T10:02:01.713Z · LW(p) · GW(p)

As an aside, the ones who walk away are also moral failures from the pro-specks standpoint. They should be fomenting revolution, not shrugging and leaving.

Absolutely. Unless you're the last person left, your choosing to opt out of the benefits of living in Omelas doesn't actually accomplish anything. The problem lies, therefore, in getting everyone to leave, which is as classic a collective action problem as any, I suppose...

comment by Logos01 · 2011-11-30T04:11:45.861Z · LW(p) · GW(p)

They should be fomenting revolution, not shrugging and leaving.

I agree with this. I didn't mention it because I wasn't prepared to address how that modified the argument.

comment by TimS · 2011-11-30T03:45:25.178Z · LW(p) · GW(p)

Adding the issue of choice (i.e. moral responsibility) for the outcome seems to be fighting the hypo. Imagine Evil Omega forces you to choose between torture and dust-specks (by threatening to end humanity or something else totally unacceptable). You could respond that you are not morally competent to make the choice. This is true, but also irrelevant because it won't convince Evil Omega to let you go.

In short, the interesting question of the debate is "Which is worse: torture or dust specks?" At best, I think you've made an interesting case that "Should we switch the status quo from dust specks to torture (or vice versa)?" is a different question.

Replies from: Logos01
comment by Logos01 · 2011-11-30T04:22:53.160Z · LW(p) · GW(p)

Imagine Evil Omega forces you to choose between torture and dust-specks (by threatening to end humanity or something else totally unacceptable).

That doesn't modify, I believe, the argument/response as I have presented it. I was already stipulating that the choice was binary between the two options. (That is, I was already stipulating that the choice had to be made, and could not be avoided.) The point I was making was that the mere comparison of suffering is insufficient grounds to declare which outcome is preferable; there are other consequences that, I believe, ought to be included in the consequentialistic-utilitarian "weighting algorithm". Questions such as: "What kind of society would result from this choice?"

Adding the issue of choice (i.e. moral responsibility) for the outcome seems to be fighting the hypo.

Perhaps I didn't explain my meaning sufficiently? What I meant by "moral responsibility" in this case was that in comparing the two options, the "weighting" of the moral responsibility between the two choices needs to be included. (I'm curious; did you actually read the short story? Perhaps we merely took different things away from it. That is a problem of allegory.)

Replies from: TimS
comment by TimS · 2011-11-30T04:39:58.828Z · LW(p) · GW(p)

This "weighting of the moral responsibility" thing seems like double counting to me. It isn't something that would make a linear-additive change her mind. And a logarithmic-additive like me doesn't need additional reasoning not to torture.


Perhaps we merely took different things away from [the story]

From the Wikipedia summary, it looks a bit like the criticism of act utilitarianism embedded in the sheriff faced with a riotous mob, but that's a different question. Picking a different moral theory doesn't get you out of the torture v. dust speck issue, but it basically decides whether you stay or leave Omelas.

Replies from: Logos01
comment by Logos01 · 2011-11-30T04:59:40.001Z · LW(p) · GW(p)

This "weighting of the moral responsibility" thing seems like double counting to me.

Not at all. It's acknowledging that the consequences of a given decision extend beyond the immediate result of the decision to the historical inertia of having accepted said decision and how that transforms any society that results from said decision being made.

Example: We forbid the starving to steal bread not because we believe that the starving should starve -- nor even that the bread-sellers "deserve" the bread 'more' than the starving. We forbid it because of the impact that allowing such theft would have on our society. My argument rests entirely on the fact that we must acknowledge, in making the selection between the two, not just the immediate results of our decisions, but the ways in which our decisions will alter what is considered "morally responsible" thereafter.

It isn't something that would make a linear-additive change her mind.

If and only if we exclusively consider the immediate results, certainly. But my entire argument rests on the notion that solely considering the immediate results is insufficient for properly weighing the consequences of the decision.

Picking a different moral theory doesn't get you out of the torture v. dust speck issue,

I am confused. Why do you believe the notion of "getting out of the issue" is relevant to this discussion? I explicitly stated that the choice was stipulated to be unavoidable.

From the Wikipedia summary, it looks a bit like the criticism of act utilitarianism embedded in the sheriff faced with a riotous mob, but that's a different question.

I see. We did, in fact, take away different things from the story. I was referring to its depiction of the suffering of the many who know that their happiness -- or "lack of suffering" -- exists at the expense of another person. I.e., I was attempting to note that there were secondary consequences that bore consideration. I was not making inferences about act/rule utilitarianism, or criticisms thereof.

Replies from: TimS
comment by TimS · 2011-11-30T05:08:49.708Z · LW(p) · GW(p)

It's acknowledging that the consequences of a given decision extend beyond the immediate result of the decision to the historical inertia of having accepted said decision and how that transforms any society that results from said decision being made.

You think the torture-choosers aren't including this already? Because I assumed they were, and it didn't change their result.

Why do you believe the notion of "getting out of the issue" relevant to this discussion?

I was only trying to explain why I don't think the story of Omelas is relevant.

Replies from: Logos01
comment by Logos01 · 2011-11-30T05:36:59.167Z · LW(p) · GW(p)

You think the torture-choosers aren't including [consequences above and beyond direct suffering] already? Because I assumed they were, and it didn't change their result.

I so far haven't seen evidence that you are, either. All discussion I have previously seen on the topic compared the torture to the dust specks directly, and solely in terms of which amounted to the greater total suffering.

Why do you believe the notion of "getting out of the issue" relevant to this discussion?

I was only trying to explain why I don't think the story of Omelas is relevant.

I see. As I said, we have taken different things away from the story, because I did not take my reference to it as bearing on the topic of "getting out of the issue" at all.

Replies from: TimS
comment by TimS · 2011-11-30T13:36:23.200Z · LW(p) · GW(p)

I so far haven't seen evidence that you are, either.

As you said, ideas have momentum. I'm not sure if it's an expression of human cognitive bias or human moral plasticity. But it is the case that talking about whether to torture a person makes torturing someone more likely. Because evil is especially intractable when it is banal.

But those are reasons not to have the conversation. At all. If that's what you believe, you shouldn't have resurrected the topic, because it involves a secret that man is "not ready to know."

None of this is a reason to decide that suffering does not add linearly. And that's the only interesting question in the hypo. Because everyone already agrees that choosing either would be immoral if there were no forced choice. And everything you say about choosing to torture is just as true about choosing to dust-speck, except that it is totally impossible for us to dust-speck, given our current capacities.

In short, you seem to want to avoid the question of linear-suffering entirely. That's why the criticism of "getting out of the issue" could have any bite at all.

Replies from: Logos01
comment by Logos01 · 2011-11-30T15:52:03.485Z · LW(p) · GW(p)

But it is the case that talking about whether to torture a person makes torturing someone more likely.

Citation, please? I have seen evidence that this is true of actual instances of torture. I have also seen evidence that it is true of cases where a person has written "I will torture." I have never seen evidence that merely discussing the notion of torture causes a rise in the rate of torture incidents. (I am giving the benefit of the doubt and assuming you do not mean this in the 'magical thinking' sense.)

None of this is a reason to decide that suffering does not add linearly. And that's the only interesting question in the hypo.

None of that is relevant to the question of linear vs. logarithmic quantification of suffering, yet those are all questions raised by the hypothetical -- a direct falsification of your claim that that contrast is the "only interesting question" in it.
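
For concreteness, here is a minimal sketch of the two aggregation rules under discussion. All the specific numbers are illustrative assumptions of mine, and since 3^^^3 cannot be represented on any computer, a googol stands in for "astronomically many":

```python
import math

# Illustrative, assumed disutility values in arbitrary common units.
TORTURE_DISUTILITY = 1.0e7   # fifty years of torture for one person
SPECK_DISUTILITY = 1.0e-9    # one dust speck in one eye
N = 10 ** 100                # stand-in for 3^^^3, which is vastly larger

def linear_total(n, per_person):
    """'Linear additive': suffering sums across persons as simple units."""
    return n * per_person

def log_total(n, per_person):
    """One possible 'logarithmically additive' rule: each additional
    sufferer adds less, so the aggregate grows only like log(n)."""
    return per_person * math.log(n + 1)

# Linear rule: a googol specks already dwarfs the torture.
print(linear_total(N, SPECK_DISUTILITY) > TORTURE_DISUTILITY)  # True
# Log rule: a googol specks totals far less than the torture.
print(log_total(N, SPECK_DISUTILITY) > TORTURE_DISUTILITY)     # False
```

This is only an illustration: even log(n) eventually crosses any threshold, so a genuinely bounded rule would be needed to make the specks lose for every n. But the disagreement lives entirely in the choice of aggregation rule, and that choice -- however it is made -- is separate from the question of secondary consequences.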

And everything you say about choosing to torture is just as true about choosing to dust-speck

How do you figure? I am unable to conceive of a way for this statement to be valid. Enlighten me.

In short, you seem to want to avoid the question of linear-suffering entirely.

What? I want nothing of the sort. I even allowed the stipulation of linear-additive suffering precisely to demonstrate that it is beside the point here; my argument is that the refusal to acknowledge the non-immediate consequences of either option invalidates the answers given thus far.

That's why the criticism of "getting out of the issue" could have any bite at all.

Please stop using that phrase. It puts words in my mouth that are simply NOT applicable to me or my argument. It is patently dishonest of you to keep doing that.

Just stop.

Replies from: TimS
comment by TimS · 2011-11-30T16:08:20.479Z · LW(p) · GW(p)

Cites

Any solution to the question which ignores these elements . . . are of no value in making practical decisions -- they cannot be, as 'consequence' extends beyond the mere instantiation of a given choice -- the exact pain inflicted by either scenario -- into the kind of society that such a choice would result in.

As a general trend, if we accept one form of action as opposed to the other, we lower the threshold for its being repeated. This is akin to broken windows theory: what was permitted once may be argued more permissible in the future because of that permission.

See, e.g., the conventional wisdom that the show 24 made implementation of torture more politically feasible.


Look, people keep telling you that you are trying to fight the hypo. You admit the essential elements of this charge. That's fine with me. Some hypothetical questions are not worth engaging.

Replies from: Logos01
comment by Logos01 · 2011-11-30T16:30:09.780Z · LW(p) · GW(p)

Cites

You quote me, yes. I recall writing that. How in the world do you get from those words to a citation for the idea that merely discussing torture makes it more likely? You'll have to walk me through it slowly; the logic by which such a conclusion is reached escapes me entirely.

the conventional wisdom that the show 24 made implementation of torture more politically feasible.

Humans react to depictions of actual torture much as if the thing itself were real. Furthermore, the show was itself a positive argument for torture. So it's no surprise that it had that effect; positive arguments -- if accepted, and mere popularity is a form of acceptance -- do tend to cause the things they argue for to be treated as valid.

That's not even remotely similar to what we're doing here.

You admit the essential elements of this charge.

Look: I already asked you once to stop with the dishonest conversational tactics. What, exactly, made you believe it would be acceptable to go on from there to link to a comment of mine and claim that I said things in it that I absolutely did not say?

Why do you feel it necessary to do this? What is your purpose?

Look, people keep telling you that you are trying to fight the hypo.

Yes, and in doing so all of you are in fact doing exactly that to me. You reject, universally, the notion that secondary consequences are still consequences, and then claim that by pointing this out I am the one who is 'fighting the hypothesis.'

This is simply untrue. I have come to a conclusion that is not accepted here. I have justified and argued for that position extensively. No one has yet offered anything resembling a valid reason why all consequences should not be considered as consequences. I have repeatedly requested that this be done -- all such requests have gone unanswered.

I am, based on this, sufficiently justified in asserting that I am the one who is correct and all those with that reaction are the ones in error -- that's how evidence works, after all.

Replies from: TimS
comment by TimS · 2011-12-01T18:30:46.115Z · LW(p) · GW(p)

You reject, universally, the notion that secondary consequences are still consequences, and then claim that by pointing this out I am the one who is 'fighting the hypothesis.'

I reject that the consequences you listed are secondary consequences. They are direct consequences of torture.