Torture vs Specks: Sadist version

post by andrew sauer (andrew-sauer) · 2021-07-31T23:33:42.224Z · LW · GW · 1 comment

This is a question post.

Contents

  Answers
    2 theme_arrow
    1 Tao Lin
1 comment

Suppose that instead of the classic version of torture vs specks where the choice is between specks in the eyes of 3^^^3 people or one person tortured for 50 years, there are no specks but rather there are 3^^^3 people who just want the one guy to be tortured. (No particular reason, this just happens to be part of their utility function, which is not up for grabs) The preference of each is mild but somewhat stronger than the preference to not get a speck in one's eye. Is torture the right decision?
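For concreteness, here is a minimal sketch (in Python, with purely illustrative stand-in numbers, since 3^^^3 cannot actually be represented) of the straight preference-utilitarian aggregation that makes the question bite: any fixed per-sadist preference strength, however mild, eventually swamps the victim's disutility.

```python
# Toy aggregation under straight preference utilitarianism.
# All numbers are illustrative stand-ins: 3^^^3 is far too large to represent,
# and the per-person utilities are made up for the sketch.

NUM_SADISTS = 10**100          # stand-in for 3^^^3
PER_SADIST_PREFERENCE = 1e-3   # mild preference, assumed a bit stronger than a dust speck
TORTURE_DISUTILITY = 1e9       # assumed disutility of 50 years of torture

def total_utility(torture_happens: bool) -> float:
    sadist_term = NUM_SADISTS * PER_SADIST_PREFERENCE if torture_happens else 0.0
    victim_term = -TORTURE_DISUTILITY if torture_happens else 0.0
    return sadist_term + victim_term

# Any fixed per-sadist preference, however mild, eventually swamps the victim's disutility.
print(total_utility(True) > total_utility(False))  # True
```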

I am especially interested in hearing from people who answer differently in this situation than in the original situation.

Answers

answer by theme_arrow · 2021-08-01T03:56:13.869Z · LW(p) · GW(p)

I think negative utilitarianism is the most common ethical framework that would cause someone to choose the torture in the specks vs. torture case and no torture in this case. That's because the specks vs. torture case involves people being harmed under either choice, whereas this case weighs people gaining positive utility against someone being harmed. Some formulations of negative utilitarianism, like the one advocated by Brian Tomasik, would say that avoiding extreme suffering is the most important moral principle and would therefore argue in favor of avoiding torture in both cases. But a very simple negative utilitarian calculus might favor torture in the first case but not in the second.
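A toy tally makes the contrast concrete (illustrative Python with made-up numbers, not anything from the post): if only suffering counts and unfulfilled positive preferences count as zero, the two cases come apart.

```python
# Toy "simple negative utilitarian" tally: only suffering counts,
# unfulfilled positive preferences count as zero. Numbers are illustrative.

SPECK_DISUTILITY = 1e-6
TORTURE_DISUTILITY = 1e9
NUM_PEOPLE = 10**100  # stand-in for 3^^^3

def nu_suffering(option: str, case: str) -> float:
    """Return total suffering (lower is better) under a naive NU tally."""
    if case == "specks":
        # Both options involve suffering, so NU compares them directly.
        return TORTURE_DISUTILITY if option == "torture" else NUM_PEOPLE * SPECK_DISUTILITY
    if case == "sadists":
        # The sadists merely miss out on pleasure, which this tally ignores.
        return TORTURE_DISUTILITY if option == "torture" else 0.0
    raise ValueError(case)

print(min(["torture", "no torture"], key=lambda o: nu_suffering(o, "specks")))   # torture
print(min(["torture", "no torture"], key=lambda o: nu_suffering(o, "sadists")))  # no torture
```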

I would guess that few people in the rationalist/EA community (and perhaps in the broader world as well) think that kind of simplistic negative utilitarian calculation is the morally correct one. My guess is that most people would think either that preventing extreme suffering is the most important consideration or that a more standard utilitarian calculus is correct. For a well-reasoned argument against the negative utilitarian formulation, Toby Ord has a discussion of his point of view that's worth checking out.

comment by andrew sauer (andrew-sauer) · 2021-08-01T15:08:41.256Z · LW(p) · GW(p)

I'm not sure how negative utilitarianism changes things. Positive and negative utilitarianism are equivalent whenever UFs are bounded and there are no births or deaths as a result of the decision.

Negative utilitarianism interprets this situation as the sadists suffering from boredom, which can be slightly alleviated by knowing that the guy they hate is suffering.

Replies from: theme_arrow
comment by theme_arrow · 2021-08-01T16:14:16.128Z · LW(p) · GW(p)

You might be correct, but I'm not convinced that all negative utilitarians would agree with you. I think that some formulations (e.g. potentially NHU as described here) would describe the person not getting tortured as merely a reduction in pleasure for the sadists, and thus ascribe no moral weight to the sadists' preferences going unfulfilled.

I'd be curious to read more about your comment that "Positive and negative utilitarianism are equivalent whenever UFs are bounded and there are no births or deaths as a result of the decision." Do you have some resources you could link for me to read? 

Replies from: andrew-sauer
comment by andrew sauer (andrew-sauer) · 2021-08-01T23:54:08.684Z · LW(p) · GW(p)

I'm referring to the fact that utility functions are equivalent under positive affine transformations: if you multiply a utility function by a positive constant and add a constant, it remains the same in the sense that it expresses the same preference in every situation.

Assuming we are computing the utility of an outcome by assigning a utility to each person and then summing them, adding a constant value to any person's utility doesn't change the comparison between outcomes, because the net effect is just to add a constant to the utility of each outcome (as long as the person we are adding a constant value to exists in every outcome).

Therefore, we can convert the situation to a negative utilitarian framing without functionally changing it, by subtracting each person's maximum utility from their utility function, ensuring that everyone's utility is at most zero in every outcome. We can similarly convert it to a positive utilitarian framing by subtracting each person's minimum.

This analysis assumes that there is a maximum and a minimum utility, and that every outcome has the same set of people in it, so if these assumptions break there may be relevant differences.
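A minimal numerical sketch of this invariance (toy utilities I've made up, not anything from the thread): shifting each person's utility by a per-person constant adds the same amount to every outcome's total, so the ranking of outcomes is unchanged.

```python
# Sketch of the shift-invariance argument (toy numbers, two outcomes, three people).
# Subtracting each person's maximum attainable utility (the negative utilitarian
# re-framing) changes every total by the same constant, so outcome rankings are
# preserved, provided the same people exist in every outcome and utilities are bounded.

outcomes = {
    "torture":    {"victim": -100.0, "sadist_a": 5.0, "sadist_b": 5.0},
    "no_torture": {"victim":    0.0, "sadist_a": 2.0, "sadist_b": 2.0},
}

def totals(utilities_by_outcome, shift_by_person):
    return {
        name: sum(u + shift_by_person[person] for person, u in people.items())
        for name, people in utilities_by_outcome.items()
    }

people = outcomes["torture"].keys()
no_shift = {p: 0.0 for p in people}
# Shift each person down by their maximum across outcomes, so all utilities are <= 0.
nu_shift = {p: -max(outcomes[o][p] for o in outcomes) for p in people}

for shift in (no_shift, nu_shift):
    t = totals(outcomes, shift)
    print(max(t, key=t.get))  # same winner under both framings: "no_torture"
```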

Replies from: theme_arrow
comment by theme_arrow · 2021-08-02T14:40:50.253Z · LW(p) · GW(p)

Okay, I see what you're saying here. But do you think that a substantial number of negative utilitarians would agree with that argument? I don't think they would, because I think the idea that there's a qualitative difference between suffering and a lack of pleasure is integral to many conceptions of negative utilitarianism.

Replies from: andrew-sauer
comment by andrew sauer (andrew-sauer) · 2021-08-02T23:10:10.946Z · LW(p) · GW(p)

Okay, maybe negative utilitarians wouldn't interpret the problem as I phrased it that way, but it can be slightly changed to reach a similar conclusion: say the sadists are mildly annoyed when the guy isn't being tortured, rather than wanting the torture for their pleasure.

comment by Neel Nanda (neel-nanda-1) · 2021-08-01T13:55:53.718Z · LW(p) · GW(p)

Do you mean negative utilitarianism would get them to choose torture, rather than dust specks? I would have considered both to be forms of suffering.

Replies from: theme_arrow
comment by theme_arrow · 2021-08-01T15:43:26.326Z · LW(p) · GW(p)

Ah you’re right, sorry. Edited.

answer by Tao Lin · 2021-08-01T00:46:25.217Z · LW(p) · GW(p)

If consequences are completely ignored, I lean towards the torture, but if consequences are considered I would choose no torture, out of hope that it accelerates moral progress (at least if they had never before seen someone who "ought to be tortured" get away, the first one might spark change, which might be good?). In the speck case, I choose torture.

comment by andrew sauer (andrew-sauer) · 2021-08-01T00:56:59.475Z · LW(p) · GW(p)

I should say we assume that we're deciding which option a stable, incorruptible AI should choose. I'm pretty sure any moral system which chose torture in situations like this would not lead to good outcomes if applied in practice, but that's not what I'm wondering about; I'm just trying to figure out which outcome is better. In short, I'm asking an axiological question, not a moral one. https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/

My intuition strongly says that the torture is worse here even though I choose torture in the original, but I don't have an argument for this because my normal axiological system, preference utilitarianism, seems to unavoidably say torture is better.

comment by andrew sauer (andrew-sauer) · 2021-08-01T01:00:29.705Z · LW(p) · GW(p)

Although under strict preference utilitarianism, wouldn't change in values/moral progress be considered bad, for the same reason a paperclip maximizer would consider it bad?

1 comment


comment by Shmi (shminux) · 2021-08-01T06:02:03.497Z · LW(p) · GW(p)

There was an interesting discussion in my old post on a related topic [LW · GW].