Do the people behind the veil of ignorance vote for "specks"?

post by D227 · 2011-11-11T01:26:52.017Z · LW · GW · Legacy · 69 comments


The veil of ignorance, as Rawls put it: "...no one knows his place in society, his class position or social status; nor does he know his fortune in the distribution of natural assets and abilities, his intelligence and strength, and the like."

 

The device allows for certain issues like slavery and income distribution to be determined beforehand.  Would one vote for a society in which there is a chance of severe misfortune, but greater total utility?  E.g., a world where 1% earn $1 a day and 99% earn $1,000,000 a day, vs. a world where everyone earns $900,000 a day.  Assume that dollars are utilons and that they are linear (2 dollars indeed give twice as much utility).  What is the obvious answer?  Bob chooses $900,000 a day for everyone.
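For concreteness, here is a minimal sketch of the expected value each world offers to someone behind the veil, under the assumption above that dollars are linear utilons (the figures are the hypothetical ones from the example):

```python
# Expected dollars-per-day behind the veil of ignorance, treating dollars as linear utilons.
p_unlucky = 0.01                                             # chance of landing in the unlucky 1%
unequal_world = p_unlucky * 1 + (1 - p_unlucky) * 1_000_000  # expected value of the unequal world
equal_world = 900_000                                        # everyone gets the same amount

print(unequal_world)  # 990000.01 -- higher expected utilons, but a 1% chance of living on $1 a day
print(equal_world)    # 900000    -- lower expected utilons, with no risk at all
```

On these numbers the unequal world has the higher expectation, which is exactly why Bob's choice needs the justification he gives below.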

 

But Bob is clever, and he does not trust that his choice is the rational one, so he goes into a self-dialogue to investigate:

Q: "What is my preference, value or goal(PVG), such that, instrumental rationality may achieve it?"

A "I my preference/value/goal is for there to be a world in which total utility is less, but severe misfortune eliminated for everyone"

Q " As an agent are you maximizing your own utility by your actions of choosing a $900,000 a day world?

A " Yes, my actions are consistent with my preferences; I will maximize my utility by achieving my preference of limiting everyone's utility.  This preference takes precedence.

Q: "I will now attack your position with the transitivity argument.  At which point does your consistency change?  What if the choices where 1% earns $999,999 and 99% earn 1,000,000?"

A: "My preference,values and goals have already determined a threshold, in fact my threshold is my PVG.  Regardless the fact that my threshold may be different from everyone else's threshold, my threshold is my PVG.  And achieving my PVG is rational."

Q: "I will now attack your position one last time, with the "piling argument".  If every time you save one person from destitution, you must pile on the punishment on the others such that everyone will be suffering."

A: "If piling is allowed then it is to me a completely different question.  Altering what my PVG is.  I have one set of values for a non piling and piling scenario.  I am consistent because piling and not piling are two different problems."

 

In the insurance industry, purchasing insurance comes with a price: perhaps a premium of 1.5% of the cost of reimbursing you for a house that may burn down.  The actuaries have run the probabilities and determined that there is a 1% chance that your house will burn down.  Assume that all dollar amounts are utilons across all assets.  Bob, once again, is a rational man.  Every year Bob chooses to pay the 1.5% premium even though his average risk is technically only a 1% loss, because Bob is risk averse.  So risk averse that he prefers a world in which he has less wealth, with the extra 0.5% going to the insurance company's profit.  Once again Bob questions his rationality in purchasing insurance:
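A minimal sketch of the arithmetic behind Bob's choice, using the 1.5% premium and 1% loss probability above (the house value is an arbitrary placeholder; any value leaves the same 0.5% gap):

```python
# Bob's expected loss without insurance vs. his certain cost with it.
house_value = 500_000                      # hypothetical insured value
p_fire = 0.01                              # actuaries' annual probability of a total loss
premium_rate = 0.015                       # premium as a fraction of the insured value

expected_loss = p_fire * house_value       # 5000.0: average annual loss if uninsured
premium = premium_rate * house_value       # 7500.0: certain annual cost if insured

print(premium - expected_loss)             # 2500.0: the extra 0.5% Bob pays for certainty
```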

Q: "What is my preference?"

A: "I would prefer to sacrifice more than my share of losses( .5% more), for the safety-net of zero chance catastrophic loss."

Q "Are your actions achieving your values?"

A "Yes, I purchased insurance, maximizing my preference for safety."

Q "Shall I attack you with the transitivity argument?"

A "It wont work.  I have already set my PVG, it is a premium price at which I judge to make the costs prohibitive.  I will not pay 99% premium to protect my house , but I will pay 5%."

Q "Piling?"

A "This is a different problem now."

 

Eliezer's post on Torture vs. Dust Specks has generated lots of discussion, as well as what Eliezer describes as interesting ways of avoiding the question.  We will do no such thing in this post; we will answer the question as intended.  I will interpret the dust specks as cumulatively greater suffering than the suffering of the 50 years of torture.

My PVG tells me that I would rather have a speck in my eye, as well as in the eyes of 3^^^3 people, than risk having one person (perhaps me) suffer torture for 50 years, even though my own risk is only a 1/(3^^^3) chance of those 50 years (the veil of ignorance).  My PVG is what I will maximize, and doing so is the definition of instrumental rationality.
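To make the comparison concrete: 3^^^3 is far too large to compute with, so the sketch below uses the much smaller 3^^3 as a stand-in, and a made-up figure for the disutility of 50 years of torture; the real 3^^^3 would only make the torture lottery's expected harm smaller still.

```python
# Per-person expected harm behind the veil, with illustrative (not canonical) numbers.
speck_disutility = 1.0             # one dust speck, in arbitrary suffering units
torture_disutility = 1.0e12        # assumed: 50 years of torture, in the same units
N = 3 ** 3 ** 3                    # 3^^3 = 7,625,597,484,987 -- a tiny stand-in for 3^^^3

per_person_specks = speck_disutility          # under SPECKS, everyone gets exactly one speck
per_person_torture = torture_disutility / N   # under TORTURE, a 1/N lottery over 50 years

print(per_person_specks, per_person_torture)  # 1.0 vs ~0.13: the lottery is the smaller expected harm
```

Even so, my PVG picks the speck; that is the risk aversion the veil-of-ignorance framing is meant to capture.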

In short, the rational answer is not TORTURE or SPECKS; it depends on what your preferences, values, and goals are.  You may be one of those whose preference is to let that one person suffer torture for 50 years; as long as your actions steer the future toward outcomes ranked higher in your preferences, you are right too.

Correct me if I am wrong, but I thought rationality did not imply that there are absolute rational preferences, but rather rational ways to achieve your preferences...

 

I want to emphasize that in no way did I intend for this post to declare anything, and I want to thank everyone in advance for picking apart every single word I have written.  Being wrong is like winning the lottery.  I do not claim to know anything; the assertive manner in which I wrote this post was merely a way to convey my ideas, of which I am not sure.

 

 

 


comment by Desrtopa · 2011-11-11T03:03:18.454Z · LW(p) · GW(p)

The device allows for certain issues like slavery and income distribution to be determined beforehand.  Would one vote for a society in which there is a chance of severe misfortune, but greater total utility?  E.g., a world where 1% earn $1 a day and 99% earn $1,000,000 a day, vs. a world where everyone earns $900,000 a day.  Assume that dollars are utilons and that they are linear (2 dollars indeed give twice as much utility).  What is the obvious answer?  Bob chooses $900,000 a day for everyone.

This choice only makes sense if we assume that dollars aren't utility. The second choice looks obviously better to us because we know they're not, but if the model says the first scenario is better, then within the constraints of the model, we should choose the first scenario.

If a model assigns high utility to things that are not actually good, that's a problem with the model, not with pursuing maximum utility.

Replies from: Pfft
comment by Pfft · 2011-11-11T05:08:31.405Z · LW(p) · GW(p)

If a model assigns high utility to things that are not actually good, that's a problem with the model

But the question at hand is: which things are good and which are bad? Is one happy and one unhappy person better or worse than two persons going "meh"? Is one person being tortured better or worse than a large number of people suffering dustspecks?

And in particular, is there any way to argue that one set of answers to these questions is objectively less "rational" than another, or is it just a matter of preferences that you happen to have and have to take as axiomatic?

Replies from: Desrtopa, FAWS
comment by Desrtopa · 2011-11-11T13:15:28.892Z · LW(p) · GW(p)

But the question at hand is: which things are good and which are bad? Is one happy and one unhappy person better or worse than two persons going "meh"? Is one person being tortured better or worse than a large number of people suffering dustspecks?

Depends how happy and unhappy, and how much torture vs. how many dustspecks.

One set of answers is only objectively more or less rational according to a particular utility function, and we can only do our best to work out what our utility functions actually are. So I certainly can't say "everyone having $900,000 is objectively better according to all utility functions than 99% of people having $1,000,000 and everyone else having $1," but I can say objectively "this model describes a utility function in which it's better for 99% of people to have $1,000,000 than for everyone to have $900,000." And I can also objectively say "this model doesn't accurately describe the utility function of normal human beings."

comment by FAWS · 2011-11-11T10:18:54.895Z · LW(p) · GW(p)

All ways to split up utility between two people are exactly equally good as long as total utility is conserved. If this is not the case, we are not talking about utility. If you want to talk about some other measure of well-being tied to a particular person, and discuss how important equal distribution vs. the sum total of this measure is, please use another word, not the word utility. I suggest IWB, indexed well-being.

Replies from: Pfft
comment by Pfft · 2011-11-12T00:17:09.286Z · LW(p) · GW(p)

See my other comment here. Note that using the word utility to mean something like IWB has a long history, e.g.

every writer [...] who maintained the theory of utility, meant by it [...] pleasure itself, together with exemption from pain

Mill, Utilitarianism

comment by DanielLC · 2011-11-11T02:26:33.995Z · LW(p) · GW(p)

Assume that dollars are utilons and they are linear (2 dollars indeed gives twice as much utility).

If each dollar gives the same amount of utility, then one person with $0 and one person with $1,000,000 would be just as good as two people with $500,000. That's how utility is defined. If Bob doesn't consider these choices just as good, then they do not give the same utility according to his PVG.

If you are a prioritarian, you'd go for specks. That said, I think you'd be less prioritarian if you had less of a scope insensitivity. If you really understood how much torture 3^^^3 dust specks produces, and you really understood how unlikely a 1/3^^^3 chance is, you'd probably go with torture.

Replies from: Pfft, D227
comment by Pfft · 2011-11-11T04:55:13.320Z · LW(p) · GW(p)

If each dollar gives the same amount of utility, then one person with $0 and one person with $1,000,000 would be just as good as two people with $500,000. That's how utility is defined. If Bob doesn't consider these choices just as good, then they do not give the same utility according to his PVG.

I think this argument is unclear because there are two different senses of "utility" in play.

First, there is the sense from decision theory: your utility function encodes your preferences for different worlds. So if we were talking about Bob's utility function, he would indeed be indifferent between these states by definition.

The other sense is from (naive?) utilitarianism, which states something like: "In order to decide which state of the world I prefer, I should take into account the preferences/happiness/something of other beings. In particular, I prefer states that maximize the sum of the utilities of everyone involved" (because that best agrees with everyone's preferences?). This argument that we should prefer dustspecks in effect says that our utility functions should have this particular form.

But that is a rather strong statement! In particular, if you find Rawls's veil of ignorance appealing, your utility function does not have that form (it would seem to be the minimum rather than the sum of the other individuals' utilities). So many actual humans are not that kind of utilitarian.

Replies from: CarlShulman
comment by CarlShulman · 2011-11-11T05:56:11.705Z · LW(p) · GW(p)

your utility function does not have that form (it would seem to be the minimum rather than the sum of the other individuals' utilities).

The average, rather, if the people expect to get utility randomly sampled from the population distribution. The original position gives you total utilitarianism if the parties face the possibility of there "not being enough slots" for all.

comment by D227 · 2011-11-11T19:22:39.173Z · LW(p) · GW(p)

If you really understood how much torture 3^^^3 dust specks produces...

You make a valid point. I will not deny that you have a strong point. All I ask is that you remain consistent with your reasoning. I have reposted a thought experiment; please tell me what your answer is:

Omega has given you the choice to allow or disallow 10 rapists to rape someone. Why 10 rapists? Omega knows the absolute utility across all humans, and unfortunately, as terrible as it sounds, the suffering of the 10 rapists not being able to rape is greater than what the victim feels. What do you do? 10 is far fewer than 3^^^3 suffering rapists. So lucky you: Omega need not burden you with the suffering of 3^^^3 if you choose to have the rapists suffer. It is important that you not finagle your way out of the question. Please do not say that not being able to rape is not torture; Omega has already stated that there is indeed suffering for these rapists. It matters not whether you would suffer such a thing.

Disclaimer: I am searching for the truth through rationality. I do not care whether the answer is torture, specks, or rape, only that it is the truth. If the rational answer is rape, I can do nothing but accept that for I am only in search of truth and not truth that fits me.

There are implications to choosing rape as the right answer. It means that in a rational society we must allow bad things to happen if those bad things lead to less total suffering. We have to be consistent. Omega has given you a number of rapists far, far, far fewer than 3^^^3; surely you must allow the rape to occur.

Literally: DanielLC walks into a room with 10 rapists and a victim. The rapists tell him to "go away, and don't call the cops." Omega appears and says: you may stop it if you want to, but I am all-knowing and know that the suffering the rapists experience from being deprived of raping is indeed greater than the suffering of the victim. What does Daniel do?

If you really understood how much torture 3^^^3 deprived rapists (rather than dust specks) produces...

Replies from: ArisKatsaris, wedrifid, DanielLC
comment by ArisKatsaris · 2011-11-11T19:54:27.739Z · LW(p) · GW(p)

So is there an actual reason that you chose a topic as emotionally fraught (and thus mind-killing) as rape, and at the same time created a made-up scenario where we're asked to ignore anything we know about "rape" by being forced to use not our judgment but Omega's on what constitutes utility?

And anyway, I think people misunderstand the purpose of utility. Daniel acts according to his own utility function. That function isn't obliged to have a positive factor on the rapists' utility; it may very well be negative. If said factor is negative, then the more utility the rapists get out of their rape, the less he's inclined to let them commit it.

comment by wedrifid · 2011-11-13T14:42:29.071Z · LW(p) · GW(p)

Omega has given you the choice to allow or disallow 10 rapists to rape someone. Why 10 rapists? Omega knows the absolute utility across all humans, and unfortunately, as terrible as it sounds, the suffering of the 10 rapists not being able to rape is greater than what the victim feels.

Errr... I don't care? I'm not a utilitarian. Utilitarian morals are more or less abhorrent to me. (So in conclusion I'd tell the rapists that they can go fuck themselves. But under no circumstances can they do the same to anyone else without consent.)

comment by DanielLC · 2011-11-12T01:40:28.569Z · LW(p) · GW(p)

rape

Please don't use loaded words like that. It's not worthwhile to let ten people rape someone. By using that word, you're bringing in connotations that don't apply.

It means that in a rational society we must allow bad things to happen if that bad thing allows for total less suffering.

Bad things happening can't result in less suffering than no bad things happening, unless you allow negative suffering. In the example you gave, we can either choose for one person to suffer, or for ten. We must allow bad things to happen because they were the only options. There is no moral pattern or decision theory that can change that.

I'd go with allowing the "rape". This situation is no different than if there were one rapist, ten victims, and the happiness from the rapist was less than the sadness from the victims. I'd hurt the fewer to help the many.

comment by see · 2011-11-11T20:18:58.594Z · LW(p) · GW(p)

Hmm. I think, actually, that I'm figuring out the whole problem with Torture vs. Dust Specks.

The fundamental problem with the "shut up and do the math" argument is the assumption of the type of math underlying it. If accurate modeling of utility requires assigning a mere real number to accurately model the disutility of both dust and torture, "3^^^3 is really big" gives us lots of good guidance as to the actual results, because we know what happens if you put a real number through an equation where 3^^^3 increases things. And even if 3^^^3 isn't big enough, you can sub in 5^^^^^5 or 9^^^^^^^^^9 and change the result without changing the logic.

But, if accurate modeling of utility requires assigning a matrix (or hypercomplex number) to accurately model the disutility of both dust and torture, instead of a mere real number, and a nonlinear utility function is applied, well, exponentiation of matrices is difficult, and "3^^^3 is really big" does not give us good guidance as to the actual results.

Now, does utility need to be expressed as a matrix? It wouldn't if our sole value was, for example, maximizing inclusive genetic fitness. But since Thou Art Godshatter, we have multiple different internal systems voting on the utility of any particular action or state. Arrow's Impossibility theorem then proves we can't reliably turn the results of the voting into something that would fit on the real number line.

It seems likely that when you throw around a big enough number (3^^^3 or 9^^^^^^^^^9 or whatever) it will overwhelm the initial difference . . . but the difference has moved from "shut up and acknowledge the inevitable result of the math" to "dammit, we don't have enough computing resources for this, we have to guess."

Replies from: Dan_Moore
comment by Dan_Moore · 2011-11-14T16:45:31.191Z · LW(p) · GW(p)

Perhaps a related concept is this: Are utility functions necessarily scalar, or can they be vector functions? If they are scalar, then there is always a large enough value of dust specks to outweigh the disutility of one person being tortured. If utility can be a vector function, then it's possible that the disutility of torture outweighs the disutility of any number of dust specks.
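A quick sketch of the vector option: if disutility is a pair compared lexicographically, with torture in the first slot, then no number of specks ever outweighs a single torture, while specks still compare normally among themselves.

```python
def disutility(tortures, specks):
    # Lexicographic vector disutility: the torture component strictly dominates the speck component.
    return (tortures, specks)

# Python compares tuples lexicographically, so:
print(disutility(1, 0) > disutility(0, 10**80))  # True: one torture outweighs any number of specks
print(disutility(0, 2) > disutility(0, 1))       # True: more specks are still worse than fewer
```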

comment by Manfred · 2011-11-11T04:14:41.511Z · LW(p) · GW(p)

Are you familiar with prospect theory? You seem to be describing what you (an imperfectly rational agent) would choose, simply using "PVG" to label the stuff that makes you choose what you actually choose, and you end up taking probability into consideration in a way similar to prospect theory.

Are you also familiar with the reasons why prospect theory and similar probability-dependent values are pretty certainly irrational?

Replies from: D227
comment by D227 · 2011-11-11T04:54:51.929Z · LW(p) · GW(p)

Are you familiar with prospect theory?

No, but I will surely read up on that now.

You seem to be describing what you (an imperfectly rational agent) would choose, simply using "PVG" to label the stuff that makes you choose what you actually choose, and you end up taking probability into consideration in a way similar to prospect theory.

Absolutely. In fact I can see how a theist will simply say, "it is my PVG to believe in God, therefore it is rational for me to do so."

I do not have a response to that. I will need to learn more before I can work this out in my head. Thank you for the insightful comments.

comment by FAWS · 2011-11-11T10:45:38.484Z · LW(p) · GW(p)

You clearly don't understand how large 3^^^3 is. If the number of people dust-specked enters consideration at all it is large enough to change the answer.

Now that I think about it, the original problem doesn't even make sense. 3^^^3 is large enough that, if we allow a dust speck every millisecond in the eye of a particular person to count as torture and assume a base rate of dust specks of one per billion milliseconds, we get billions of additional people tortured for 50 years (who would otherwise have been tortured for 50 years minus 1 millisecond, and who are replaced by billions of people who would otherwise have been tortured for 50 years minus 2 milliseconds, and so on, down to people who would have gotten only one dust speck and now get two consecutive ones).

comment by spuckblase · 2011-11-15T18:46:15.200Z · LW(p) · GW(p)

For those who read German or can infer the meaning: the philosopher Christoph Fehige shows a way to embrace utilitarianism and dust specks.

comment by Peter Wildeford (peter_hurford) · 2011-11-11T17:10:05.080Z · LW(p) · GW(p)

No one ever understands just how friggin large 3^^^3 is.

One could safely argue that it is better for the entire current world population to suffer a dust speck each than for someone to get tortured for fifty years, but expand that to 3^^^3 people? Radically different story.

Replies from: CarlShulman, jimrandomh, Prismattic, beriukay, D227
comment by CarlShulman · 2011-11-13T04:23:29.548Z · LW(p) · GW(p)

One could safely argue that it is better for the entire current world population to suffer a dust speck each than for someone to get tortured for fifty years,

Would anyone challenge this? Doing the arithmetic, if dust specks cause one second of discomfort, having everyone on Earth get specked would be the equivalent of ~222 person-years of specking. If torture is at least 4.5 times as bad as specking on a per-second basis, even a straight total utilitarian calculation with constant badness per second would favor specks on that scale. That seems like a very, very easy case.
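The figures check out under the stated assumptions (a roughly 7 billion world population and one second of discomfort per speck):

```python
world_population = 7_000_000_000        # approximate 2011 figure
seconds_per_year = 365 * 24 * 3600      # 31,536,000

person_years_of_specking = world_population / seconds_per_year
print(person_years_of_specking)         # ~222 person-years of speck-level discomfort

print(person_years_of_specking / 50)    # ~4.4: torture need only be ~4.5x worse per second for specks to win
```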

comment by jimrandomh · 2011-11-13T17:18:06.906Z · LW(p) · GW(p)

I verbally pronounce ^^^ as "trip-up". I find this clarifies its meaning, and the practical effects of using it, considerably.

comment by Prismattic · 2011-11-11T21:03:56.993Z · LW(p) · GW(p)

For what N would you say that {N specks better-than torture better-than N+1 specks}? If small quantities of utility or disutility have perfectly additive properties across groups of people, it should be simple to provide a concrete answer.

(Sidenote -- there should be symbolic terminology for better-than and worse-than. ">" and "<" would just be confusing in this context.)

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2011-11-12T05:46:07.587Z · LW(p) · GW(p)

I don't know the precise utility values of torture vs. dust specks, but I would reason that...

Getting one dust speck is around 1000x more preferable than being tortured for a second. There are 1,576,800,000 seconds in 50 years.

Thus, I place N roughly around 1,576,800,000,000.
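Reproducing that estimate under the stated assumptions (one speck is roughly 1/1000 as bad as one second of torture):

```python
seconds_in_50_years = 50 * 365 * 24 * 3600     # 1,576,800,000
specks_per_torture_second = 1000               # the assumed 1000:1 preference ratio

N = seconds_in_50_years * specks_per_torture_second
print(N)                                       # 1,576,800,000,000
```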

Replies from: Prismattic
comment by Prismattic · 2011-11-12T05:58:08.719Z · LW(p) · GW(p)

Torture does not scale linearly with time. Indeed, I suspect even a simple exponential curve would understate the increase.

Replies from: wedrifid
comment by wedrifid · 2011-11-12T06:28:50.623Z · LW(p) · GW(p)

Torture does not scale linearly with time. Indeed, I suspect even a simple exponential curve would understate the increase.

Wow. I thought you were going the other way with that one. The fiftieth year of torture is not nearly as damaging as the first.

Replies from: Prismattic, peter_hurford
comment by Prismattic · 2011-11-12T06:35:44.685Z · LW(p) · GW(p)

That is interesting. But note that he was starting with a unit of 1 second of torture. 1 second of waterboarding is not 1/30th as distressing as 30 seconds of waterboarding. And 1 second of Chinese water torture, or the ice room, or simply isolation, is less disutility than a dust speck. Actually, the particular case of isolation is one where the 50th year probably is worse than the first year, assuming the victim has not already gone completely bonkers by that point.

Replies from: wedrifid
comment by wedrifid · 2011-11-12T07:02:19.177Z · LW(p) · GW(p)

assuming the victim has not already gone completely bonkers by that point.

If you have been torturing someone for 49 years and they are not already completely bonkers then you are probably doing something wrong!

comment by Peter Wildeford (peter_hurford) · 2011-11-12T07:26:56.850Z · LW(p) · GW(p)

I agree with this, but I have no idea how to accurately discount it, so I decided to go linear and overestimate.

Replies from: wedrifid
comment by wedrifid · 2011-11-12T08:00:36.019Z · LW(p) · GW(p)

I don't have a better idea. I really don't have enough knowledge about how to torture people at extreme levels over a long period given the possibility of sufficiently advanced technology.

comment by beriukay · 2011-11-12T10:29:53.783Z · LW(p) · GW(p)

I thought you were going somewhere else with the second sentence. My natural thought, after admitting that I can't possibly understand how big 3^^^3 is, was that if one prefers the torture of one person for 50 years, it should be true that one would also prefer to torture the entire current world population for 50 years over, as D227 called it, the Dust Holocaust.

Replies from: FAWS, peter_hurford
comment by FAWS · 2011-11-12T12:54:10.917Z · LW(p) · GW(p)

Yes, of course. Or even as many people as there are atoms in the universe. That's a minor difference when we are talking about numbers like 3^^^3.

Replies from: wedrifid
comment by wedrifid · 2011-11-12T22:15:25.739Z · LW(p) · GW(p)

Is it? 3^^^3 isn't all that much of a ridiculous number. Larger than the number of atoms in the universe, certainly, but not so much so that certain people's methods of non-linear valuations of disutility per speck couldn't make that kind of difference matter. (I tend to prefer at least 3^^^^3 for my stupid-large-numbers.)

Replies from: FAWS
comment by FAWS · 2011-11-13T01:49:11.330Z · LW(p) · GW(p)

Larger than the number of atoms in the universe, certainly,

That's quite a bit of an understatement. 3^^4 (~10^3638334640025) is already vastly larger than the number of atoms in the universe (~10^80), 3^^5 in turn is incomprehensibly larger again, and 3^^^3 = 3^^7625597484987.

80 orders of magnitude is an extremely narrow band for a balance point to fall into when one of the numbers involved is greater by so many orders of magnitude they can't even reasonably be expressed without a paradigm for writing big numbers about as strong as up arrow notation. Hitting the space between one human and 10^80 humans would take truly extraordinary precision. (Or making yourself money-pumpable by choosing different sides of the same deal when split in 10^80 sub-deals)
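Those magnitudes can be checked with logarithms, since the towers themselves cannot be held in memory:

```python
from math import floor, log10

tower_3_3 = 3 ** 3 ** 3                    # 3^^3 = 3^27 = 7,625,597,484,987
digits = floor(tower_3_3 * log10(3)) + 1   # decimal digits of 3^^4 = 3^(3^^3)
print(digits)                              # 3,638,334,640,025 digits, i.e. 3^^4 ~ 10^3638334640025
# The ~10^80 atoms in the observable universe need only 81 digits.
# 3^^^3 = 3^^7625597484987 is a tower of that many threes; no logarithm trick reaches it.
```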

Replies from: wedrifid
comment by wedrifid · 2011-11-13T04:35:18.777Z · LW(p) · GW(p)

80 orders of magnitude is an extremely narrow band for a balance point to fall into when one of the numbers involved is greater by so many orders of magnitude they can't even reasonably be expressed without a paradigm for writing big numbers about as strong as up arrow notation. Hitting the space between one human and 10^80 humans would take truly extraordinary precision.

Not necessarily. For a lot of people the limit of the disutility of the scenario as the number of dustspecks approaches infinity is not even infinite. In such cases it is perfectly plausible - and even likely - that it is considered worse than torturing one person but not as bad as torturing 10^80 people. (In which case the extra Knuth arrow obviously doesn't help either.)

Replies from: FAWS
comment by FAWS · 2011-11-13T13:38:23.470Z · LW(p) · GW(p)

See the comment in the parentheses. Choosing torture over 3^^^^3 dust specks, but not 3^^^3*10^-80 dust specks takes extraordinary precision. Choosing one torture over 3^^^3*10^-80 dust specks but not 10^80 tortures over 3^^^3 dust specks implies inconsistent preferences.

Replies from: wedrifid
comment by wedrifid · 2011-11-13T14:20:54.885Z · LW(p) · GW(p)

See the comment in the parentheses

Your comment in the parenthesis (if you were referring to the one when you were saying requires you to be money pumpable) was false but I was letting it pass. If you are telling me to see my own comment in parentheses that says about same thing as your second sentence then, well, yes we are mostly in agreement about that part, albeit not quite to the same degree.

Choosing one torture over 3^^^3*10^-80 dust specks but not 10^80 tortures over 3^^^3 dust specks implies inconsistent preferences.

Just not true. It implies preferences in which 10^80 tortures is not 10^80 times worse than 1 torture. There isn't anything inconsistent about valuing additional instances of the same thing differently from the previous instance; in fact, it is the usual case. It is also not exploitable - anything you can make an agent with those preferences do based on its own preferences will be something it agrees in hindsight is a good thing to do.

Replies from: FAWS
comment by FAWS · 2011-11-13T15:52:33.286Z · LW(p) · GW(p)

Can you explain how such a preference can be consistent? The total incidence of both torture and dust specks is unknown in either case. On what basis would an agent that trades one torture for avoiding 3^^^3*10^-80 dust specks refuse the same deal a second time? Or the 10^80th time? Given that 3^^^3*10^-80 people are involved, it seems astronomically unlikely that the rate of torture changed noticeably, even only assuming knowledge available to the agent. In any case, 10^80 separate instances of the agent with no knowledge of each other would make the same deal 10^80 times, and couldn't complain about being deceived, since no information about the incidence of torture was assumed. Even assuming the agent makes the deal only a single time, consistency would then require that the agent prefer trading 3^^^3 dust specks for avoiding 10^80 instances of torture over trading 3^^^3*(1+10^-80) dust specks for 10^80+1 instances of torture, which seems implausible.

Replies from: wedrifid
comment by wedrifid · 2011-11-13T16:17:46.424Z · LW(p) · GW(p)

The total incidence of both torture and dust specks is unknown in either case.

Where was this declared? (Not that it matters for the purpose of this point.) The agent has prior probabilities distributed over the possible incidences of torture and dustspecks. It is impossible not to. And after taking one such deal those priors will be different. Sure, restricting access to information about the currently tortured population will make it harder for an agent to implement preferences that are not linear with respect to additional units, but it doesn't make those preferences inconsistent, and it doesn't stop the agent doing its best to maximise utility despite the difficulty.

Replies from: FAWS
comment by FAWS · 2011-11-13T17:08:47.808Z · LW(p) · GW(p)

Where was this declared?

There is no information on the total incidence of either included in the problem statement (other than the numbers used), and I have seen no one answer conditionally based on the incidence of either.

The agent has prior probabilities distributed over the number of possible incidence of torture and dustspecks.

Yes, of course, I thought my previous comment clearly implied that?

And after taking one such deal those priors will be different.

Infinitesimally. I thought I addressed that? The problem implies the existence of an enormous number of people. Conditional on there actually being that many people the expected number of people tortured shifts by the tiniest fraction of the total. If the agent is sensitive to such a tiny shift we are back to requiring extraordinary precision.

comment by Peter Wildeford (peter_hurford) · 2011-11-12T18:55:57.513Z · LW(p) · GW(p)

I think that actually would be the case.

comment by D227 · 2011-11-11T18:32:37.794Z · LW(p) · GW(p)

That is the crux of the problem. Bob understands what 3^^^3 is just as much as you claim you do. Yet he chooses the "Dust Holocaust".

First, let me assume that you, peter_hurford, are a "Torturer", or rather, that you are from the camp that obviously chooses the 50 years. I have no doubt in my mind that you bring extremely rational and valid points to this discussion. You are poking holes in Bob's reasoning at its weakest points. This is a good thing.

I wholeheartedly concede that you have compelling points, made by poking holes in Bob's reasoning. But let's start poking around your reasoning now.

Omega has given you the choice to allow or disallow 10 rapists to rape someone. Why 10 rapists? Omega knows the absolute utility across all humans, and unfortunately, as terrible as it sounds, the suffering of the 10 rapists not being able to rape is greater than what the victim feels. What do you do? 10 is far fewer than 3^^^3 suffering rapists. So lucky you: Omega need not burden you with the suffering of 3^^^3 if you choose to have the rapists suffer. It is important that you not finagle your way out of the question. Please do not say that not being able to rape is not torture; Omega has already stated that there is indeed suffering for these rapists. It matters not whether you would suffer such a thing.

Disclaimer: I am searching for the truth through rationality. I do not care whether the answer is torture, specks, or rape, only that it is the truth. If the rational answer is rape, I can do nothing but accept that for I am only in search of truth and not truth that fits me.

There are implications to choosing rape as the right answer. It means that in a rational society we must allow bad things to happen if those bad things lead to less total suffering. We have to be consistent. Omega has given you a number of rapists far, far, far fewer than 3^^^3; surely you must allow the rape to occur.

Literally: peter_hurford walks into a room with 10 rapists and a victim. The rapists tell him to "go away, and don't call the cops." Omega appears and says: you may stop it if you want to, but I am all-knowing and know that the suffering the rapists experience from being deprived of raping is indeed greater than the suffering of the victim. What does Peter do?

Edit: Grammar

Replies from: Nornagest, peter_hurford
comment by Nornagest · 2011-11-11T19:31:44.192Z · LW(p) · GW(p)

You've essentially just constructed a Utility Monster. That's a rather different challenge to utilitarian ethics than Torture vs. Dust Specks, though; the latter is meant to be a straightforward scope insensitivity problem, while the former strikes at total-utility maximization by constructing an intuitively repugnant situation where the utility calculations come out positive. Unfortunately it looks like the lines between them have gotten a little blurry.

I'm really starting to hate thought experiments involving rape and torture, incidentally; the social need to signal "rape bad" and "torture bad" is so strong that it often overwhelms any insight they offer. Granted, there are perfectly good reasons to test theories on emotionally loaded subjects, but when that degenerates into judging ethical philosophy mostly by how intuitively benign it appears when applied to hideously deformed edge cases, it seems like something's gone wrong.

Replies from: D227
comment by D227 · 2011-11-11T19:53:26.749Z · LW(p) · GW(p)

Unfortunately it looks like the lines between them have gotten a little blurry.

I will consider this claim if you can show me how it is really different.

I have taken considerable care to construct a problem in which we are indeed dealing with trading suffering for potentially more suffering. It does not affect me one bit that the topic has now switched from specks to rape. In fact, if "detraction" happens, shouldn't it be the burden of the person who feels detracted to explain it? I merely ask for consistency.

In my mind I choose to affiliate with the "I do not know the answer" camp. There is no shame in that. I have not resolved the question yet. Yet there are people for whom it is obvious to choose torture, and who refuse to answer the rape question. I am consistent in that I claim not to know, or not to have resolved the question yet. May I ask for the same amount of consistency?

Replies from: Nornagest
comment by Nornagest · 2011-11-11T20:41:28.617Z · LW(p) · GW(p)

I will consider this claim, if you can show my how it is really different.

Torture vs. Dust Specks attempts to illustrate scope insensitivity in ethical thought by contrasting a large unitary disutility against a fantastically huge number of small disutilities. And I really do mean fantastically huge: if the experiences are ethically commensurate at all (as is implied by most utilitarian systems of ethics), it's large enough to swamp any reasonable discounting you might choose to perform for any reason. It also has the advantage of being relatively independent of questions of "right" or "deserving": aside from the bare fact of their suffering, there's nothing about either the dust-subjects or the torture-subject that might skew us one way or another. Most well-reasoned objections to TvDS boil down to finding ways to make the two options incommensurate.

Your Ten Very Committed Rapists example (still not happy about that choice of subject, by the way) throws out scope issues almost entirely. Ten subjects vs. one subject is an almost infinitely more tractable ratio than 3^3^3^3 vs. one, and that allows us to argue for one option or another by discounting one of the options for any number of reasons. On top of that, there's a strong normative component: we're naturally much less inclined to favor people who get their jollies from socially condemned action, even if we've got a quasi-omniscient being standing in front of us and saying that their suffering is large and genuine.

Long story short, about all these scenarios have in common is the idea of weighing suffering against a somehow greater suffering. Torture vs. Dust Specks was trying to throw light on a fairly specific subset of scenarios like that, of which your example isn't a member. Nozick's utility monster, by contrast, is doing something quite a lot like you are, i.e. leveraging an intuition pump based on a viscerally horrible utilitarian positive. I don't see the positive vs. negative utility distinction as terribly important in this context, but if it bothers you, you could easily construct a variant Utility Monster in which Utilizilla's terrible but nonfatal hunger is temporarily assuaged by each sentient victim or something.

Replies from: D227
comment by D227 · 2011-11-12T00:07:49.600Z · LW(p) · GW(p)

Torture vs. Dust Specks attempts to illustrate scope insensitivity in ethical thought by contrasting a large unitary disutility against a fantastically huge number of small disutilities

Your Ten Very Committed Rapists example (still not happy about that choice of subject, by the way) throws out scope issues almost entirely. Ten subjects vs. one subject is an almost infinitely more tractable ratio than 3^3^3^3 vs. one, and that allows us to argue for one option or another by discounting one of the options for any number of reasons.

I do sincerely apologize if you are offended, but rape is torture as well, and Eliezer's example can be equally reprehensible, if not more so.

It is simple why I chose 10: it is to highlight the paradox of those who choose to torture. I have made it easier for you. Let's say we increase 10 to 3^^^3 deprived rapists. The point is, if you surely would not let the victim be raped when there are 3^^^3 deprived rapists suffering, you surely would not allow it to happen if there were only 10 suffering rapists. So with that said, how is it different?

Replies from: Nornagest
comment by Nornagest · 2011-11-12T01:06:04.331Z · LW(p) · GW(p)

It is simple why I chose 10: it is to highlight the paradox of those who choose to torture. I have made it easier for you. Let's say we increase 10 to 3^^^3 deprived rapists. The point is, if you surely would not let the victim be raped when there are 3^^^3 deprived rapists suffering, you surely would not allow it to happen if there were only 10 suffering rapists. So with that said, how is it different?

I just went over how the scenarios differ from each other in considerable detail. I could repeat myself in grotesque detail, but I'm starting to think it wouldn't buy very much for me, for you, or for anyone who might be reading this exchange.

So let's try another angle. It sounds to me like you're trying to draw an ethical equivalence between dust-subjects in TvDS and rapists in TVCR: more than questionable in real life, but I'll grant that level of suffering to the latter for the sake of argument. Also misses the point of drawing attention to scope insensitivity, but that's only obvious if you're running a utilitarian framework already, so let's go ahead and drop it for now. That leaves us with the mathematics of the scenarios, which do have something close to the same form.

Specifically: in both cases we're depriving some single unlucky subject of N utility in exchange for not withholding N*K utility divided up among several subjects for some K > 1. At this level we can establish a mapping between both thought experiments, although the exact K, the number of subjects, and the normative overtones are vastly, sillily different between the two.

Fine so far, but you seem to be treating this as an open-and-shut argument on its own: "you surely would not let the victim [suffer]". Well, that's begging the question, isn't it? From a utilitarian perspective it doesn't matter how many people we divide up N*K among, be it ten or some Knuth up-arrow abomination, as long as the resulting suffering can register as suffering. The fewer slices we use, the more our flawed moral intuitions take notice of them and the more commensurate they look; actually, for small numbers of subjects it starts to look like a choice between letting one person suffer horribly and doing the same to multiple people, at which point the right answer is either trivially obvious or cognate to the trolley problem depending on how we cast it.

About the only way I can make sense of what you're saying is by treating the N case -- and not just for the sake of argument, but as an unquestioned base assumption -- as a special kind of evil, incommensurate with any lesser crime. Which, frankly, I don't. It all gets mapped to people's preferences in the end, no matter how squicky and emotionally loaded the words you choose to describe it are.

Replies from: D227
comment by D227 · 2011-11-12T02:51:09.477Z · LW(p) · GW(p)

From a utilitarian perspective it doesn't matter how many people we divide up N * K among, be it ten or some Knuth up-arrow abomination, as long as the resulting suffering can register as suffering.

I agree with this statement 100%. That was the point of my TvCR thought experiment. People who obviously picked T should again pick T. No one except one commenter actually conceded this point.

The fewer slices we use, the more our flawed moral intuitions take notice of them and the more commensurate they look; actually, for small numbers of subjects it starts to look like a choice between letting one person suffer horribly and doing the same to multiple people, at which point the right answer is either trivially obvious or cognate to the trolley problem depending on how we cast it.

Again, I feel as if you are making my argument for me. The problem is, as you say, either trivially obvious or cognate to the trolley problem depending on how we cast it.

You say my experiment is not really the same as Eliezer's. Fine. It doesn't matter, because we could just use your example. If utilitarians do not care how many people we divide N*K among, then these utilitarians should state that they would indeed allow T to happen no matter what the subject matter of the K is, as long as K > 1.

Replies from: None
comment by [deleted] · 2011-11-12T03:47:35.772Z · LW(p) · GW(p)

The thing is, thought experiments are supposed to illustrate something. Right now, your proposed thought experiment is illustrating "we have trouble articulating our thoughts about rape" which is (1) obvious and (2) does not need most of the machinery in the thought experiment.

comment by Peter Wildeford (peter_hurford) · 2011-11-12T05:49:32.256Z · LW(p) · GW(p)

My very first reaction would be to say that you've stated a counterfactual... rape will never directly produce more utility than disutility. So the only way it could be moral is if, somehow, unbeknownst to us, this rape will somehow prevent the next Hitler from rising to power in some butterfly-effect-y way that Omega knows of.

I have to trust Omega if he's by definition infallible. If he says the utility is higher, then we still maximize it. It's like you're asking "Do you do the best possible action, even if the best possible action sounds intuitively wrong?"

comment by Grognor · 2011-11-11T08:08:40.554Z · LW(p) · GW(p)

Having a utility function that allows an incomprehensibly greater total suffering is a failure of epistemic, not instrumental, rationality. By choosing the dust specks, you implicitly assert that more suffering than has ever been known and ever will be known on Earth, times a hundred million billion trillion, is superior to a single torture victim.

This is probably the most patronizing thing I'll ever say on this website, but: Think about that for a second.

I'm only pointing this out because no one else mentioned that instrumental rationality is independent of what your goals are, so that says nothing about whether your goals are simply incorrect. (You don't really want the Dust Holocaust, do you? to allow that much suffering in exchange for, comparatively, nothing?)

I haven't read ALL the discussion about this issue, but that's because it's so damned simple if you seek to feel fully the implications of that number, that number that transcends any attempt to even describe it.

Edited to add: here Pfft said to just take goals and make them axiomatic. That is literally impossible. You have to have a reason for every one of your goals, and all the better to have motivations based in fact, not fancy.

Replies from: moridinamael, Richard_Kennaway, D227
comment by moridinamael · 2011-11-11T17:36:30.379Z · LW(p) · GW(p)

I think that the Torture versus Dust Specks "paradox" was invented to show how utilitarianism (or whatever we're calling it) can lead to on-face preposterous conclusions whenever the utility numbers get big enough. And I think that the intent was for everybody to accept this, and shut up and calculate.

However, for me, and I suspect some others, Torture versus Dust Specks and also Pascal's Mugging have implied something rather different: that utilitarianism (or whatever we're calling it) doesn't work correctly when the numbers get too big.

The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not fundamental truth, it is just a naive extrapolation of our intuitions that should help guide our decisions.

Let's consider a Modified Torture versus Specks scenario: You are given the same choice as in the canonical problem, except you are also given the opportunity to collect polling data from every single one of the 3^^^3 individuals before you make your decision. You formulate the following queries:

"Would you rather experience the mild distraction of a dust speck in your eye, or allow someone else to be tortured for fifty years?"

"Would you rather be tortured for fifty years, or have someone else experience the mild discomfort of a dust speck in their eye?"

You do not mention, in either query, that you are being faced by the Torture versus Specks dilemma. You are only allowing the 3^^^3 to consider themselves and one hypothetical other.

You get the polling results back instantly. (Let's make things simple and assume we live in a universe without clinical psychopathy.) The vast majority of respondents have chosen the "obviously correct" option.

Now you have to make your decisions knowing that the entire universe totally wouldn't mind having dust specks in exchange for preventing suffering for one other person. If that doesn't change your decision ... something is wrong. I'm not saying something is wrong with the decision so much as something is wrong with your decision theory.

Replies from: falenas108, dlthomas, mkehrt, Grognor
comment by falenas108 · 2011-11-11T19:41:23.352Z · LW(p) · GW(p)

I don't think this works. Change it to:

"Would you rather be tortured for a week, or have someone else be tortured for 100 years?"

"Would you rather be tortured for 100 years, or have someone else be tortured for a week?"

The popular opinion would most likely be one week in both cases, which by this logic would lead to 3^^^3 people being tortured for a week. Utilitarianism definitely does not lead to this conclusion, so the query is not equivalent to the original question.

comment by dlthomas · 2011-11-11T18:03:33.757Z · LW(p) · GW(p)

But they're not taking a dust-speck to prevent torture - they're taking a dust-speck to prevent torture and cause the dust-speck holocaust. If you drop relevant information, of course you get different answers; I see no reason your representation here is more essentially accurate, and some reason it might be less.

Replies from: moridinamael
comment by moridinamael · 2011-11-11T19:25:03.753Z · LW(p) · GW(p)

Sure, and that is intentional. You wouldn't bother polling the universe to determine their answer to the same paradox you're solving.

You can look at it this way. Each person who responds to your poll is basically telling you: "Do not factor me, personally, into your utility calculation." It is equivalent to opting out of the equation. "Don't you dare torture someone on my behalf!" The "dust-speck holocaust" then disappears!

Imagine this: You send everyone a little message that says, "Warning! You are about to get an annoying dust speck in your eye. But, partially due to this sacrifice of your comfort, someone else will be spared horrible torture." Would/should they care that the degree to which they contributed to saving someone from torture is infinitesimal?

Let's go on to pretend that we asked Omega to calculate exactly how many humans with dust specks is equivalent to one person being tortured for fifty years. Let's pretend this number comes out to 1x10^14 people. Turns out it was much smaller than 3^^^3. Omega gives us all this information and then tells us he's only going to give dust specks to 1x10^14 minus one people. We breathe a huge sigh of relief - you don't have to torture anybody, because the math worked out in your favor by a vanishingly small fraction! Then Omega suddenly tells you he's changing the deal - he's going to be putting a dust speck in YOUR eye, as well.

Deciding, at this point, that you now have to torture somebody is equivalent to denying that you have the choice to say, "I can ignore this dust speck if it means torturing somebody." Bearing in mind that you are choosing to put dust specks in an exactly equal number of eyes as you were before, plus only your own.

My example above is merely extrapolating this case to the case where each individual can decide to opt out.

Replies from: dlthomas, None, Grognor
comment by dlthomas · 2011-11-11T21:11:44.194Z · LW(p) · GW(p)

But that's not what a vote that way means; consider polling 100 individuals who are so noble as to pick 9 hours of torture over someone else getting 10. How many of them would pick torturing 99 other conditionally willing people over torturing one unwilling person? It is simply not the same question.

comment by [deleted] · 2011-11-12T00:11:13.263Z · LW(p) · GW(p)

The correct response when Omega changes the deal is "Oh, come on! You're making me decide between two situations that are literally within a dust speck's worth of each other. Why bother me with such trivial questions?" Because that's what it is. You're not choosing between "dust speck in my eye" and "terrible thing happens". You're choosing between "terrible thing happens" and "infinitesimally less terrible thing happens, plus I have a dust speck in my eye."

comment by Grognor · 2011-11-11T22:34:33.896Z · LW(p) · GW(p)

The first paragraph of this comment is a nitpick, but I felt impelled to make it: there is no way that 10^14 dust specks is anywhere near enough to equal one torture victim. Maybe if you multiplied it by a googolplex, then by the number of atoms in the universe, you'd be within a few orders of magnitude.

And now for the meaty response.

You're making the whole case extremely arbitrary and ignoring utility metrics, which I will now attempt to demonstrate.

Eliezer chose the number 3^^^3 so that no calculation of the disutility of the torture could ever match it, even if you have deontological qualms about torture (which most humans do). It simply doesn't compare. Utilitarianism in the real world doesn't work on fringe cases because utility can't actually be measured. But if you could measure it, then you'd always pick the slightly higher value, every single time. In your example,

We breathe a huge sigh of relief - you don't have to torture anybody, because the math worked out in your favor by a vanishingly small fraction! Then Omega suddenly tells you he's changing the deal - he's going to be putting a dust speck in YOUR eye, as well.

you ignore the part of my utility function that includes selflessness. Sacrificing something that means little to me to spare someone else intense suffering leads to positive utility for me and, I'm assuming, for other people. (This interestingly also invalidates the example you gave earlier where you polled the 3^^^3 people asking what they wanted - you ignored altruism in the calculation.)

Your problems with the Torture vs. Dust Specks dilemma all boil down to "Here's how the decision changes if I change the parameters of the problem!" (and that doesn't even work in most of your examples).

Here's the real problem underlying the equation, and invulnerable to nitpicks:

Omega comes to you and says "I will create 3^^^3 units of disutility, or disutility equal or lesser to the destruction of a single galaxy full of sentient life. Which do you choose?"

As has been said before, I think the answer is obvious.

comment by mkehrt · 2011-11-12T04:44:23.246Z · LW(p) · GW(p)

I'm not entirely convinced by the rest of your argument, but

The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not fundamental truth, it is just a naive extrapolation of our intuitions that should help guide our decisions.

Is, far and away, the most intelligent thing I have ever seen anyone write on this damn paradox.

Come on, people. The fact that naive preference utilitarianism gives us torture rather than dust specks is not some result we have to live with, it's an indication that the decision theory is horribly, horribly wrong.

It is beyond me how people can look at dust specks and torture and draw the conclusion they do. In my mind, the most obvious, immediate objection is that utility does not aggregate additively across people in any reasonable ethical system. This is true no matter how big the numbers are. Instead it aggregates by minimum, or maybe multiplicatively (especially if we normalize everyone's utility function to [0,1]). A toy comparison of these aggregation rules is sketched below.

Sorry for all the emphasis, but I am sick and tired of supposed rationalists using math to reach the reprehensible conclusion and then claiming it must be right because math. It's the epitome of Spock "rationality".
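A toy comparison of the three aggregation rules, over per-person utilities normalized to [0, 1] (the numbers are purely illustrative):

```python
from math import prod

speck_world = [0.99] * 1000            # many people, each slightly worse off than 1.0
torture_world = [0.0] + [1.0] * 999    # one person at rock bottom, everyone else untouched

for name, aggregate in [("sum", sum), ("min", min), ("product", prod)]:
    better = "specks" if aggregate(speck_world) > aggregate(torture_world) else "torture"
    print(f"{name}: prefers {better}")
# sum: prefers torture    (enough small losses add up past one ruined life)
# min: prefers specks     (only the worst-off person counts)
# product: prefers specks (a single zero collapses the whole product)
```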

Replies from: wedrifid
comment by wedrifid · 2011-11-12T05:38:54.659Z · LW(p) · GW(p)

The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not fundamental truth, it is just a naive extrapolation of our intuitions that should help guide our decisions.

I would say, instead, that it gives a valid total-suffering value but that said value is not necessarily what is important. It is not how I extrapolate my intuitive aversion to suffering, for example.

Sorry for all the emphasis, but I am sick and tired of supposed rationalists using math to reach the reprehensible conclusion and then claiming it must be right because math. It's the epitome of Spock "rationality".

I would say the same but substitute 'torture' for 'reprehensible'. Using math in that way is essentially begging the question - the important decision is in which math to choose as a guess at our utility function, after all. But at the same time I don't consider choosing torture to be reprehensible. Because the fact that there are 3^^^3 dust specks really does matter.

comment by Grognor · 2011-11-11T22:13:09.087Z · LW(p) · GW(p)

"Would you rather experience the mild distraction of a dust speck in your eye, or allow someone else to be tortured for fifty years?" "Would you rather be tortured for fifty years, or have someone else experience the mild discomfort of a dust speck in their eye?"

Asking this question of (let's say) humans will cause them to believe that only one person is getting the dust speck in the eye. Of course they're going to come up with the wrong answer if they have incomplete information.

Now you have to make your decisions knowing that the entire universe totally wouldn't mind having dust specks in exchange for preventing suffering for one other person. If that doesn't change your decision ... something is wrong.

There are two problems with this. The first is that if you take a number of people as big as 3^^^3 and ask them all this question, an incomprehensibly huge number will prefer to torture the other guy. These people will be insane, demented, cruel, dreaming, or whatever, but according to your ethics they must be taken into account (and according to mine as well, actually). The number of people saying to torture the guy will be greater than the number of Planck lengths in the observable universe. That alone is enough disutility to say "Torture away!"
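As a rough check of the scale claim in the preceding paragraph, here is a back-of-the-envelope sketch. The physical figures (an observable-universe diameter of roughly 8.8x10^26 m and a Planck length of roughly 1.6x10^-35 m) are standard approximate values supplied for illustration, not numbers from the comment.

```python
# Back-of-the-envelope scale comparison; the constants below are standard
# approximate values, added here as assumptions for illustration.
from math import log10

observable_diameter_m = 8.8e26   # diameter of the observable universe, metres (approx.)
planck_length_m = 1.6e-35        # Planck length, metres (approx.)

planck_lengths_across = observable_diameter_m / planck_length_m
print(f"Planck lengths across the observable universe: ~10^{log10(planck_lengths_across):.0f}")

# 3^^^3 is a power tower of 3s whose height is 3^^3 = 3**27 (about 7.6 trillion).
# Even the height-four tower, 3^^4 = 3**(3**27), already has trillions of digits:
height_3 = 3 ** 27                          # 3^^3
digits_in_3_up_up_4 = height_3 * log10(3)   # log10 of 3^^4
print(f"3^^4 has roughly {digits_in_3_up_up_4:.2e} digits; 10^62 has only 63.")
```

So even if only a vanishingly small fraction of the 3^^^3 people preferred torture, their number would still dwarf the Planck-length count, which is the point being made.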

The other problem is your assertion that "something is wrong" when a wrong question, asked with incomplete information 3^^^3 times, does not change my decision. What is wrong? I can tell you that your intuition that "something must be wrong" is simply incorrect. Nothing is wrong with the decision. (And this paragraph is for the least convenient possible world (LCPW), where everyone answered selflessly to the question, which is of course not even remotely plausible.)

comment by Richard_Kennaway · 2011-11-11T16:53:56.380Z · LW(p) · GW(p)

You have to have a reason for every one of your goals

That leads to an infinite regress. What is a reason for a goal, but another goal?

comment by D227 · 2011-11-11T17:24:29.704Z · LW(p) · GW(p)

Excellent points. I will now test the consistency of your beliefs. Prepare yourself for a sick and twisted problem.

Omega has given you the choice to allow or disallow 10 rapists to rape someone. Why 10 rapists? Omega knows the absolute utility across all humans, and unfortunately, as terrible as it sounds, the suffering of 10 rapists who are not allowed to rape is greater than the suffering the victim would feel. What do you do? Ten suffering rapists is far fewer than 3^^^3. So lucky you: Omega need not burden you with the suffering of 3^^^3 if you choose to make the rapists suffer. It is important that you not finagle your way out of the question. Please do not say that not being able to rape is not torture. Omega has already stated that these rapists do indeed suffer. It does not matter whether you yourself would suffer such a thing.

Disclaimer: I am searching for the truth through rationality. I do not care whether the answer is torture, specks, or rape, only that it is the truth. If the rational answer is rape, I can do nothing but accept that, for I am only in search of truth, not a truth that suits me.

There are implications to choosing rape as the right answer. It means that in a rational society we must allow bad things to happen if those bad things lead to less total suffering. We have to be consistent. Omega has given you a number of rapists far, far fewer than 3^^^3; surely you must allow the rape to occur.

Literally, Grognor walks into a room with 10 rapists and a victim. The rapists tell him to "go away, and don't call the cops." Omega appears and says, "You may stop it if you want to, but I am all-knowing, and I know that the suffering the rapists would experience from being deprived of raping is indeed greater than the suffering of the victim." What does Grognor do?

Replies from: Richard_Kennaway, Grognor
comment by Richard_Kennaway · 2011-11-11T21:58:06.930Z · LW(p) · GW(p)

Excellent points. I will now test the consistency of your beliefs. Prepare yourself for a sick and twisted problem.

The problem with your problem is that it is wrong. You have Omega asserting something we have good reason to disbelieve. You might as well have Omega come in and announce that there is an entity somewhere who will suffer dreadfully if we don't start eating babies.

All you're saying is "suppose rape were actually good"? Well, suppose away. So what?

Do you see the difference between your Omega and the one who poses Newcomb's problem?

Replies from: D227
comment by D227 · 2011-11-11T23:55:09.527Z · LW(p) · GW(p)

Richard

I sincerely appreciate your reply. Why do we accept Omega in Eliezer's thought experiment and not in mine? In the original, some people claim to obviously pick torture, yet they are unwilling to pick rape. Why? Well, like you said, you refuse to believe that rapists suffer. That is fair. But if that is fair, then Bob might refuse to believe that people with specks in their eyes suffer as well...

You cannot assign rules for one and not the other.

All you're saying is "suppose rape were actually good"? Well, suppose away. So what?

Not true. I am saying that some people get utility from evil. Not me, not you, but why am I not allowed to use that as an example?

The bottom line is that I personally am unresolved, and I will remain unresolved, rationally, across all examples. I know what I would do: I would pick dust specks for the 3^^^3, and likewise choose to deprive even 3^^^3 rapists. But for strong "torturers" such as Grognor, depriving the rapists would be inconsistent with his beliefs.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-11-13T10:05:32.661Z · LW(p) · GW(p)

Well, like you said, you refuse to believe that rapists suffer.

I also "refuse" to believe that the Earth is flat -- or to put it more accurately, I assert that it is false.

That is fair. But if that is fair, then Bob might refuse to believe that people with specks in their eyes suffer as well...

The difference is that Bob would be wrong.

Not me, not you, but why am I not allowed to use that as an example?

Making random shit up and saying "what if this?", "what if that?" doesn't make for a useful discussion.

Then again, I am not a utilitarian, so I have no problem with saying that the more someone wants to do an evil thing, the more they should be prevented from doing it.

comment by Grognor · 2011-11-11T22:48:44.693Z · LW(p) · GW(p)

There are two major problems with your proposition.

One is that Omega appears to be lying in this problem, very simply. In the universe where he isn't lying, though...

I'm partly what you'd call a "negative utilitarian": minimize suffering first, then maximize joy. It does not appear to me that the suffering of a small number of hedonists (like, say, the number of rapists on the planet) at not being able to rape is greater than the suffering that would be inflicted if they had their way.

If you accept those premises I just put forward, then you understand that my choice is to stop the rapists, for utilitarian reasons and also because I don't want them to do this again.

So okay, least-convenient-possible-world time. Given that they won't cause any additional suffering after this incident, and given that their suffering from not being able to commit rape is greater than the victim's (why this would be true I have no idea), then sure, whatever, let them have their fun shortly before their logically ridiculous universe is destroyed, because the consequences of this incident, as interpreted by our universe, would not occur.

I hope this justifies my position from a utilitarian standpoint, though I do have deontological concerns about rape. It's one of those things that seems to Actually Be Unacceptable, but I hope I've put this intuition sufficiently aside to address your concerns.

One more thing... It kind of pisses me off that people still bring up the torture vs. dust specks thing. From where I stand, the debate is indisputably settled. But, ah, I guess you might call that "arrogance". But whatever.

Replies from: D227
comment by D227 · 2011-11-11T23:38:08.466Z · LW(p) · GW(p)

Then you are not consistent. In one example you are willing to allow suffering, because 50 years of torture is less than the 3^^^3 dust-speck holocaust. You claim that suffering is suffering. Yet a mere 10 deprived rapists already have you changing your mind.

I do not have an answer. If anything, I would consider myself a weak dust-specker. The only thing I claim is that I am not arrogant; I am consistent in my stance. I do not know the answer, but I am willing to explore both dilemmas: torture vs. specks, and rape vs. deprived rapists. Torture is rape, is it not? Yet I will allow torture for 50 years because you do not believe that deprived rapists are suffering. I am afraid that is not up to you to decide.

All I ask is to present tough questions. The downvotes, I believe, are hurting the discussion, as I have never declared anything controversial except to ask people to reconcile their beliefs for consistency. I am actually quite disappointed in how easily people are frustrated. I apologize if I have pissed you off.

Replies from: Grognor
comment by Grognor · 2011-11-11T23:44:49.620Z · LW(p) · GW(p)

You must have missed the part of my response where I say that given your premises, yes, I choose to let the fucking rapists commit the crime. The rest of my post just details how your premises are wrong. I am internally consistent.

Your comment was saying that "if you change your answer here, it shows that you are not consistent." I replied with reasons that this is not true, and you replied by continuing on the premise that it is true.

No! You do not get to decide whether I'm consistent!

See also this comment, which deserves a medal. Your problem is wrong, which is why you're coming to this incorrect conclusion that I am inconsistent.

Replies from: D227
comment by D227 · 2011-11-12T01:37:31.347Z · LW(p) · GW(p)

Grognor,

Thanks for your reply. You are right: you are consistent, since you did admit in your second scenario that you would let the sickos have their fun.

I would like to continue the discussion of why my problem is wrong in a friendly and respectful way, but the downvotes really are threatening my ability to post, which is quite unfortunate.