Embracing the "sadistic" conclusion

post by Stuart_Armstrong · 2014-02-13T10:30:46.492Z · LW · GW · Legacy · 41 comments

Contents

  Sadism versus repugnance
  Remove the connotations, then the argument
41 comments

This is not the post I was planning to write. Originally, it was going to be a heroic post where I showed my devotion to philosophical principles by reluctantly but fearlessly biting the bullet on the sadistic conclusion. Except... it turns out to be nothing like that, because the sadistic conclusion is practically void of content and embracing it is trivial.

Sadism versus repugnance

The sadistic conclusion can be found in Gustaf Arrhenius's papers such as "An Impossibility Theorem for Welfarist Axiologies." In it he demonstrated that - modulo a few technical assumptions - any system of population ethics has to embrace either the Repugnant Conclusion, the Anti-Egalitarian Conclusion or the Sadistic Conclusion. Astute readers of my blog posts may have noticed I'm not the repugnant conclusion's greatest fan, evah! The anti-egalitarian conclusion claims that you can make things better by keeping total happiness/welfare/preference satisfaction constant but redistributing it in a more unequal way. Few systems of ethics embrace this in theory (though many social systems seem to embrace it in practice).

That leaves the sadistic conclusion. A population ethics that accepts this is one where it is sometimes better to create someone whose life is not worth living (call them a "victim") rather than a group of people whose lives are worth living. It seems well named - can you not feel the top-hatted villain twirl his moustache as he gleefully creates lives condemned to pain and misery, laughing maniacally as he prevents the intrepid heroes from changing the settings on his incubator machine to "worth living"? How could that sadist be in the right, according to any decent system of ethics?

Remove the connotations, then the argument

But the argument is flawed, for two main reasons: one that strikes at the connotations of "sadistic", the other at the heart of the comparison itself.

The reason the sadistic aspect is a misnomer is that creating a victim is not actually a positive development. Almost all ethical systems would advocate improving the victim's life, if at all possible (or ending it, if appropriate). Indeed some ethical systems which have the "sadistic conclusion" (such as prioritarianism or egalitarianism) would think it more important to improve the victim's life than some ethical systems that don't have the conclusion (such as total utilitarianism). Only if such help is somehow impossible do you get the conclusion. So it's not a gleeful sadist inflicting pain, but a reluctant acceptance that "if the universe conspires to prevent us from helping this victim, then it still may be worth creating them as the least bad option" (see for instance this comment).

"The least bad option." For the sadistic conclusion is based on a trick, contrasting two bad options and making them seem related (see this comment). Consider for example whether it is good to create a large permanent underclass of people with much more limited and miserable lives than all others - but whose lives are nevertheless just above some complicated line of "worth living". You may or may not agree that this is bad, but many people and many systems of population ethics do feel it's a negative outcome.

Then, given that this underclass is a bad outcome (and given a few assumptions as to how outcomes are ranked), we can find other bad outcomes that are not quite as bad as this one. Such as... a single victim, a tiny bit below the line of "worth living". So the sadistic conclusion is not saying anything about the happiness level of a single created population. It's simply saying that (A) creating underclasses with slightly worthwhile lives can sometimes be bad, while (B) creating a victim can sometimes be less bad. But the victim isn't playing a useful role here: they're just an example of a bad outcome better than (A), only linked to (A) through superficial similarity and rhetoric.
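
To make the comparison concrete, here is a minimal sketch (not from the post) of critical-level utilitarianism, one standard axiology that yields exactly this pattern; the critical level, the function name and all the welfare numbers are illustrative assumptions, not anything Arrhenius or the post specifies:

```python
# Illustrative sketch only: critical-level utilitarianism ranks outcomes by the
# sum of each added person's welfare minus a fixed positive critical level. It
# is one standard example of an axiology that yields the sadistic conclusion.
# All numbers and names are made up for illustration.

def added_value(new_welfares, critical=1.0):
    """Change in value from adding these people to a fixed base population."""
    return sum(w - critical for w in new_welfares)

underclass = [0.1] * 10_000   # (A) 10,000 lives just above the "worth living" line
victim = [-0.5]               # (B) one life a tiny bit below that line

print(added_value(underclass))  # about -9000: judged very bad
print(added_value(victim))      # -1.5: also bad, but far less bad
```

Both additions come out negative under this assumed ranking; the victim is simply the less bad of two bad options, which is all the sadistic conclusion requires.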

For most systems of population ethics the sadistic conclusion can thus be reduced to "creating underclasses with slightly worthwhile lives can sometimes be bad." But this is the very point that population ethicists are disputing each other about! Wrapping that central point into a misleading "sadistic conclusion" is... well, the term "misleading" gave it away.

41 comments


comment by owencb · 2014-02-13T16:11:13.421Z · LW(p) · GW(p)

I do like this as an explanation of how you think the sadistic conclusion is reasonable.

I think the examples you use to motivate the intuitions are doing a poor job, however, because they involve too many other complicating factors.

Consider for example whether it is good to create a large permanent underclass of people with much more limited and miserable lives than all others - but whose lives are nevertheless just above some complicated line of "worth living".

The first problem here is that there are various detrimental effects on the quality of life of people living in such an underclass because they are in the underclass, compared to a population who have the same resources available to them but aren't part of a society that includes people doing better. These detrimental effects are supposed to be built into where the threshold for "worth living" is, but it's not clear that that's filtering through to our intuitions when we think about this.

Secondly, introducing such a population very likely has negative externalities, making the lives of those already living in the society worse (which is why I think you would prefer to live in the society without the underclass rather than the one with it, even if you knew you would not be in the underclass).

Because of this I don't think this is a good scenario to use to interrogate our intuitions: they will predictably give the answer that having the underclass is worse, but for reasons that have little to do with population axiologies (so we learn little).

A better question would be whether it's good to add a second population, who have much less good lives than the first (but still worth living), if the two populations never interact or know of each other's existence. I think this factors out the complications, so we should get better insight into what our intuitions are telling us.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-14T15:24:29.406Z · LW(p) · GW(p)

I think the examples you use to motivate the intuitions are doing a poor job, however, because they involve too many other complicating factors.

That's deliberate, for two reasons. The first is that I think that some non-individual preferences are relevant to population ethics. I don't see them playing a strong role, but I do see them existing - that, say, a fully equal population is at least somewhat better than one with the same total happiness and great inequality. Given that those are my preferences, I like to include elements which I feel may have such impacts in my descriptions. People who see only personal preferences should not be affected by the way I phrase the descriptions (just by the ultimate happiness/utility/preference satisfaction). And it would be useful for others to see a more complicated and evocative image of how things go, which may trigger new intuitions.

But the main reason I try and do it is because I feel that the debate has gotten too clinical - that we throw around terms like "barely worth living" without really connecting to what they mean. "Barely worth living" is, by modern western standards, really awful - a young girl born into a rural village, fell in love with X at 14 but was raped by Y that year and made to marry him. Working on the land every day with her fading strength and eyesight, she found some pleasure in her (very few) surviving children, but they all ultimately predeceased her, her favourite son being gutted before her by invaders, who then raped her again and broke her legs. Unable to work much after that, she took to begging in a new village, and found some solace in religious belief and in playing the role of the madwoman, but ended up dying terrified, alone and in great pain.

Or something along those lines, depending on where exactly you put the zero. I think that people who are in favour of the repugnant conclusion should be willing to face exactly what it means to have a life just above zero - if you inflicted that fate on most people today, you'd get deservedly reviled and imprisoned. I can see the worth of being dispassionate in making decisions, but not in choosing your values.

Secondly, introducing such a population very likely has negative externalities, making the lives of those already living in the society worse

A very valid point, though my gut feeling was that it would have positive externalities as well (people like to feel superior). You can reduce the negative externalities by making the new population docile and unwilling to revolt, and reducing the cross-population empathy. Problem solved! (contrast this with my first point above).

Replies from: owencb, owencb, ThisSpaceAvailable
comment by owencb · 2014-02-14T16:57:18.436Z · LW(p) · GW(p)

"Barely worth living" is, by modern western standards, really awful. [...]

I am very uncertain about where the "barely worth living" threshold is. The best self-introspecting question I can think of to try to get a handle on it is:

Would you rather vividly dream (including all physical pleasures and pains) a day at random in the person's life, or have a dreamless night of sleep?

From your description I guess that this life lies substantially below the threshold (indeed many lives today may be, even in affluent societies). Note that a life which is "not worth living" in terms of its intrinsic value could easily be worth living for the instrumental value of helping others and leaving behind a better world than there would be without that life.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-15T06:58:46.027Z · LW(p) · GW(p)

The debate about the zero level is interesting - Anders told me his zero is lower than I described, that you can get worse and still have a life worth living.

Your idea has merit, but the hedonic treadmill is a problem - people would not want to dream a life much worse than their own, whatever its absolute value.

comment by owencb · 2014-02-14T16:47:50.498Z · LW(p) · GW(p)

The first is that I think that some non-individual preferences are relevant to population ethics.

If this means what it appears to, that takes you outside the realm of welfarist axiologies -- in which case Arrhenius' impossibility results don't even apply.

I don't see them playing a strong role, but I do see them existing - that, say, a fully equal population is at least somewhat better than one with the same total happiness and great inequality.

However, this view can lie well within the realm of welfarist axiologies. Only personal preferences need be counted (the welfarist assumption makes no requirement about how they are aggregated).

So I'm not sure what you're saying. I was trying to provide an example where you took the same population and separated the societies, while holding everyone's welfare levels constant. If you accept the welfarist assumption, then this should be exactly as good as the one where they were part of the same society. Is that what you think?

I think trying to be clinical is useful in terms of slicing up the space of interacting factors and discovering which are actually driving our intuitions, and which merely get swept up along the way and tainted by association because of the examples we first thought of.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-15T07:41:47.124Z · LW(p) · GW(p)

If this means what it appears to, that takes you outside the realm of welfarist axiologies -- in which case Arrhenius' impossibility results don't even apply.

My understanding is that they would apply even more strongly - the more extraneous factors you add, the more likely you are to get one or more of the impossibility results (eg you can get the sadistic conclusion in "total utilitarianism + intrinsic value for art").

However, this view can lie well within the realm of welfarist axiologies. Only personal preferences need be counted (the welfarist assumption makes no requirement about how they are aggregated).

When I said "non-individual preferences", I did not put the cut directly at the welfarist-non-welfarist boundary. But anyway, here are some things I suspect would make it into my final population ethics:

  • Some value to more equal systems (also between groups).
  • Some value to diversity of existing beings.
  • In contrast, some very broad set of criteria that encompass "the human condition" and possibly reduced value to entities outside that.
  • Some small value to cultural entities such as cultures and tradition groups (though this may be contained within diversity).
  • Some small value to the continued existence of certain human practices that seem worthwhile (eg making stories, valuing truth to some extent).
  • Some intrinsic value to the "individual liberties" of today.
  • Possibly some value to continued societal progress.
  • Avoidance of the repugnant conclusion.
  • Asymmetry between birth and death.

As you can see, it's far from a well-formed whole yet! Most of these (apart from the birth-death asymmetry, avoiding the repugnant conclusion, and possibly diversity) I don't hold particularly strongly, and would just prefer some accommodation. ie if most people are part of the same huge, happy monoculture, a much smaller number of people in alternate cultures (with the possibility of others joining them) would be fine.

I think trying to be clinical is useful in terms of slicing up the space of interacting factors and discovering which are actually driving our intuitions, and which merely get swept up along the way and tainted by association because of the examples we first thought of.

There's definitely a need for clinical detachment, but I think there's also a time for emotional engagement. Decomposing our intuitions may sometimes just destroy them for no good reason (eg how bureaucracies can commit atrocities because responsibility is diffused, and no individual component is doing anything strongly wrong).

Replies from: owencb, private_messaging
comment by owencb · 2014-02-17T10:40:15.532Z · LW(p) · GW(p)

My understanding is that they would apply even more strongly - the more extraneous factors you add, the more likely you are to get one or more of the impossibility results (eg you can get the sadistic conclusion in "total utilitarianism + intrinsic value for art").

I think you're confused here. The impossibility result is the theorem that says you get one of these apparently undesirable conclusions. It's a theorem about the class of axiologies which are welfarist (so depend only on personal welfare levels). If you look at a wider class of axiologies, the theorem doesn't apply. Of course some of them may additionally get one of these conclusions as well. It's also possible that we could extend the theorem to a slightly larger class.

Here's another, equivalent, statement of the theorem:

Any system of population ethics does at least one of the following:

  1. Embraces the repugnant conclusion;
  2. Embraces the sadistic conclusion;
  3. Embraces the anti-egalitarian conclusion;
  4. Is not welfarist.
Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-17T11:28:10.542Z · LW(p) · GW(p)

Mixed systems (welfarist+other stuff) still fall prey to the argument. You can see this by holding "other stuff" constant, and considering the choices between populations that differ only in welfare. Or you can allow "other stuff" to vary, which makes it easy to violate more of the six conditions (the Dominance principle, the Addition principle, the Minimal Non-Extreme Priority principle, the Repugnant conclusion, the Sadistic conclusion, and the Anti-Egalitarian conclusion), maybe violating them all.

Replies from: owencb
comment by owencb · 2014-02-17T11:35:38.313Z · LW(p) · GW(p)

It doesn't seem clear that it is always possible to keep "other stuff" constant when varying welfare?

I guess I don't see how you're defining mixed systems. My first version makes any axiology at all "mixed", since you can just take the reliance on welfare to be trivial (which is a trivial example of a welfarist system).

If you have broader theorems about violating this set of conditions, they would be very interesting to know about.

Replies from: owencb
comment by owencb · 2014-02-17T12:33:50.080Z · LW(p) · GW(p)

Actually I'm not sure the anti-egalitarian conclusion is even well-formed for non-welfarist systems. You can look at welfare levels (if you think those exist) to get what looks like a form of the conclusion, but then we might say that the outcome which looks anti-egalitarian is better not because of the less equal arrangement of welfare, but for some other, non-welfare, reasons. Which doesn't seem necessarily pathological (if you are happy with non-welfare reasons entering in).

comment by private_messaging · 2014-02-15T14:49:58.770Z · LW(p) · GW(p)

It seems to me that the most defective assumption is that the well-being of the whole is some linear combination of individual preferences, even for very large numbers and in very atypical circumstances (e.g. involving copies).

You can't expect that you could divide the universe into arbitrarily small cubes (10cm? 1cm? 1mm? 1nm?) and then measure each cube's preferences or quality of life and sum those.

So, on purely logical grounds, summing cannot be a general rule that you can apply to all sorts of living things, including the living thing that is the 3cm by 3cm by 3cm cube within your head.

I thus don't see any good reason to keep privileging this assumption.

If you are a philosopher, and you want your paper to look scientific, then you need mathematical symbols and you need to do some algebra so it looks like something worthwhile is going on. In which case, by all means, go on and assume summation, this will help write a paper.

But if you are interested in studying actual human ethics, it is clear that we evaluate the whole in non-linear ways - we have to do that just to recognize the ethical value of a human while not recognizing the ethical value of a quark - we don't think that any individual quarks within a human feel pain, but we think that the whole does.

Conversely, however large the number of pedophiles, we recognize that each individual pedophile wants to watch child porn, but we do not sacrifice a child. And that is no more incoherent than recognizing that an individual person has preferences while their elementary particles do not.

comment by ThisSpaceAvailable · 2014-02-18T06:52:53.720Z · LW(p) · GW(p)

if you inflicted that fate on most people today, you'd get deservedly reviled and imprisoned.

I find that a rather odd statement. Isn't "zero welfare", by definition, the amount of welfare such that any life with greater welfare is a life which it is good to create? It seems like there's a paradox in the conception of "zero welfare". We wish to define such an absolute minimum amount of acceptable welfare, but the idea of subjecting someone to the bare minimum of welfare is repugnant.

It's like those "structuring" laws, where it's illegal not to report a transaction with more than some threshold amount of money ... but it's also illegal to arrange transactions with the intent of having them be under that threshold. If it's illegal to transfer $10,000 without reporting it, then someone who is transferring $9,999 is obviously up to no good. But then someone transferring $9,998 is obviously trying to avoid the suspicion that $9,999 would generate, so $9,998 should also be suspicious. If this logic is followed long enough, there would be no amount that wouldn't be suspicious. Similarly, if creating a just-barely-worth-living life is repugnant, then creating any life is repugnant.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-19T13:19:49.708Z · LW(p) · GW(p)

Several different points:

First of all, the general point is that reducing a (standard, western) human to "zero welfare" would involve inflicting great pain and suffering upon them, which would get you reviled and imprisoned, with judges unlikely to be impressed by your philosophical justification.

I find that a rather odd statement. Isn't "zero welfare", by definition, the amount of welfare such that any life with greater welfare is a life which it is good to create?

That's a bit question begging, as some ethical systems have zero welfare, but don't generally advocate creating people at that level. My favourite definition, for what it's worth, would be that "lives below zero welfare are not worth creating in any circumstances (unless as instrumental goals for something else)".

Similarly, if creating a just-barely-worth-living life is repugnant, then creating any life is repugnant.

Most ethical systems that reject the repugnant conclusion also reject that argument. How do they do it? Generally by making the decision on the creation of lives dependent on the existence of other lives. Average utilitarianism would advocate against creating lives below the average, and for creating lives above the average (indeed average utilitarianism, uniquely among population ethics as far as I can tell, does not need a "zero" level). Egalitarianism would advocate creating any life above zero that didn't decrease equality, etc...

So all these systems would have some situation in which creating a life just above zero would be good, and (many) situations in which it would be bad.
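
As a concrete illustration of the average-utilitarian case, here is a minimal sketch with purely made-up welfare numbers (nothing here is from the comment itself):

```python
# Minimal sketch with made-up numbers: average utilitarianism compares the
# population average before and after a life is added.

def average_welfare(welfares):
    return sum(welfares) / len(welfares)

existing = [5.0, 6.0, 7.0]        # hypothetical existing population, average 6.0

for new_life in (0.1, 9.0):       # a life barely above zero, and one above the average
    before = average_welfare(existing)
    after = average_welfare(existing + [new_life])
    verdict = "good" if after > before else "bad"
    print(f"welfare {new_life}: average {before:.2f} -> {after:.2f} ({verdict})")
```

With a sufficiently low existing average, the same just-above-zero life would raise the average and count as good, matching the situation-dependence described above.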

comment by DefectiveAlgorithm · 2014-02-13T11:32:14.541Z · LW(p) · GW(p)

This is (one of the reasons) why I'm not a total utilitarian (of any brand). For future versions of myself, my preferences align pretty well with average utilitarianism (albeit with some caveats), but I haven't yet found or devised a formalization which captures the complexities of my moral intuitions when applied to others.

Replies from: Stuart_Armstrong, roystgnr
comment by Stuart_Armstrong · 2014-02-13T12:13:48.099Z · LW(p) · GW(p)

A proper theory of population ethics should be complex, as our population intuitions are complex...

Replies from: NoSuchPlace
comment by NoSuchPlace · 2014-02-13T13:30:40.801Z · LW(p) · GW(p)

our population intuitions are complex...

Are they? They certainly look complex, but that could be because we haven't found the proper way to describe them. For example the Mandelbrot set looks complex, but it can be defined in a single line.

Also "complex" leads to ambiguity, perhaps it needs to be defined. I used it in the sense that something is complex if it cannot be quickly defined for a smart and reasonably knowledgeable (in the relevant domain) human, since this seems to be the relevant sense here.

Replies from: Vulture, Stuart_Armstrong
comment by Vulture · 2014-02-13T19:49:32.044Z · LW(p) · GW(p)

There's no particular reason why we should expect highly abstract aspects of our random-walk psychological presets to be elegant or simply defined. As such, it's practically guaranteed that they won't be.

Replies from: NoSuchPlace
comment by NoSuchPlace · 2014-02-13T21:37:53.689Z · LW(p) · GW(p)

I'm not saying that our population intuitions are simple, I'm saying that we can't rule out the possibility. For example, a priori I wouldn't have expected physics to turn out to be simple; however (at least to the level that I took it) physics seems to be remarkably simple (particularly in comparison to the universe it describes). This leads me to conclude that there is some mechanism by which things turn out to be simpler than I would expect.

To give an example, my best guess (besides "something I haven't thought of") for this mechanism is that mathematical expressions are fairly evenly distributed over patterns which occur in reality, and that one should hence expect there to be a fairly simple piece of mathematics which comes very close to describing physics; a similar thing might happen with our population intuitions.

There's no particular reason why we should expect highly abstract aspects of our random-walk psychological presets to be elegant or simply defined.

Wouldn't highly abstract aspects of our psychology be more recent and as such simpler?

As such, it's practically guaranteed that they won't be.

This depends on your priors. If you assign comparable probabilities to simple and complex hypotheses, this follows. If you assign higher probabilities to simple hypotheses than complex ones, it doesn't.

Replies from: Vulture
comment by Vulture · 2014-02-13T23:03:47.110Z · LW(p) · GW(p)

This depends on your priors. If you assign comparable probabilities to simple and complex hypotheses, this follows. If you assign higher probabilities to simple hypotheses than complex ones, it doesn't.

If you flip 1000 fair coins, the resulting output is more likely to be a mishmash of meaningless clumps than it is to be something like "HHTTHHTTHHTTHHTT..." or another very simple repeating pattern. Similarly, a chaotic[1] process like the evolution of our ethical intuitions is more likely to produce an arbitrary mishmash of conflicting emotional drives than it is to produce some coherent system which can easily be extrapolated into an elegant theory of population ethics. All of this is perfectly consistent with any reasonable formalization of Occam's Razor.

EDIT: The new definition of "complex" that you added above is a reasonable one in general, but in this case it might lead to some dangerous circularity - it seems okay right now, but defining complexity in terms of human intuition while we're discussing the complexity of human intuition seems like a risky maneuver.

Wouldn't highly abstract aspects of our psychology be more recent and as such simpler?

The abstract aspects in question are abstractions and extrapolations of much older empathy patterns, or are trying to be. So, no.


  1. In the colloquial sense of "lots and lots and lots of difficult-to-untangle significant contributing factors"
comment by Stuart_Armstrong · 2014-02-13T14:57:27.722Z · LW(p) · GW(p)

Maybe a better phrasing would be that we don't a priori expect them to be simple...

comment by roystgnr · 2014-02-13T17:05:50.467Z · LW(p) · GW(p)

For future versions of myself, my preferences align pretty well with average utilitarianism (albeit with some caveats)

Could you explain? Those sound like awfully big caveats. If I consider the population of "future versions of myself" as unchangeable, then average utilitarianism and total utilitarianism are equivalent. If I consider that population as changeable, then average utilitarianism seems to suggest changing it by removing the ones with lowest utility: e.g. putting my retirement savings on the roulette wheel and finding some means of painless suicide if I lose.

Replies from: blacktrance
comment by blacktrance · 2014-02-13T17:16:44.326Z · LW(p) · GW(p)

Death is a major source of negative utility even if one accepts average utilitarianism.

Replies from: roystgnr, Stuart_Armstrong
comment by roystgnr · 2014-02-16T21:32:11.623Z · LW(p) · GW(p)

Yes, but this is the "consider my population unchangeable" case I mentioned, wherein "average" and "total" cease being distinct. Certainly if we calculate average utility by summing 1 won-at-roulette future with 37 killed-myself futures and dividing by 38, then we get a lousy result, but it (as well as the result of any other hypothetical future plans) is isomorphic to what we'd have gotten if we'd obtained total utility by summing those futures and then not dividing. To distinguish average utility from total utility we have to be able to make plans which affect the denominator of that average.
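
A rough sketch of that isomorphism, with made-up utility numbers for the two plans (the figures are illustrative, not anything roystgnr specified):

```python
# Rough sketch with made-up numbers: when every plan leaves the number of
# (future) people fixed, dividing by that fixed count never changes which plan
# ranks highest, so "average" and "total" give the same ordering.

plans = {
    "roulette, suicide if I lose": [100.0] + [0.0] * 37,   # 38 equally weighted futures
    "keep my savings":             [20.0] * 38,
}

for name, futures in plans.items():
    total = sum(futures)
    average = total / len(futures)      # len(futures) is 38 for every plan
    print(f"{name}: total={total:.0f}, average={average:.2f}")

# Both rankings prefer "keep my savings"; the orderings can only come apart
# once a plan is allowed to change the number of people in the denominator.
```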

comment by Stuart_Armstrong · 2014-02-15T07:19:03.838Z · LW(p) · GW(p)

Not for hedonistic utilitarianism - there, only the fear of death is bad (or the death of people who don't get replaced by others of equivalent or equal happiness).

comment by casebash · 2015-12-22T13:16:54.339Z · LW(p) · GW(p)

I think that this was a good attempt, but it is still flawed. As pointed out by owencb, our intuition that it would be bad for a large underclass with lives barely worth living to exist comes from us postulating lives barely worth living, then adding in the harms of being in an underclass, which would push them into lives not worth living. If we accept that these lives are still positive, then there is no benefit from replacing these people with a single victim. Additionally, we can get the sadistic conclusion in a universe where most people's lives are not worth living by choosing to add another person whose life is not worth living, but whose life is less bad on average.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2015-12-30T10:41:21.352Z · LW(p) · GW(p)

Additionally, we can get the sadistic conclusion in a universe where most people's lives are not worth living by choosing to add another person whose life is not worth living, but whose life is less bad on average.

That's only true for some systems (eg average utilitarianism). All sensible non-total utilitarian population ethics have some "sadistic" conclusions, but that doesn't mean that they have all sadistic conclusions.

comment by torekp · 2014-02-15T02:11:14.862Z · LW(p) · GW(p)

For most systems of population ethics the sadistic conclusion can thus be reduced to "creating underclasses with slightly worthwhile lives can sometimes be bad."

To flesh it out further though, it says "creating many slightly worthwhile lives can sometimes be worse than creating one slightly worse-than-nothing life". Which may not really deserve the label "sadistic", but still strikes me as highly counter-intuitive.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-16T10:29:44.736Z · LW(p) · GW(p)

That is generally a simple consequence of "category A contains bad stuff, and scales (you can make it better or worse)" and "category B contains bad stuff, and scales": it's then not surprising that you can find something in A worse than something in B (except if you use aberrations like lexicographical ordering). It's like the 3^^^3 dust specks/stubbed toes debate all over again...

Replies from: torekp
comment by torekp · 2014-02-17T17:50:58.725Z · LW(p) · GW(p)

Good point. One way out of the counter-intuitiveness - which hasn't gone away for me, despite your explanation - is to deny that "it scales." I.e., deny that the badness of creating a just-barely-worthwhile life, into a world containing many good lives, scales with the number of just-barely-worthwhile lives. Something approaching a maximin view - the idea that in population ethics, there's a fundamental component of value that depends on how the worst-off person fares - while I wouldn't agree with it, doesn't seem so implausible. And I think it would get you many of the conclusions that you're after.

comment by NancyLebovitz · 2014-02-14T16:13:39.427Z · LW(p) · GW(p)

My first thought wasn't the mustache-twirling villain, it was Le Guin's "The Ones Who Walk Away from Omelas". It sets up the sadistic conclusion (targeted at one severely neglected child in a utopia), but doesn't give a mechanism for why it would work that way.

In general, do the people who believe in the sadistic conclusion give a mechanism?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-15T07:15:23.802Z · LW(p) · GW(p)

No mechanisms, generally - just as there's no explanation why people keep on getting tied to tracks, often in groups of five, in front of hurtling trolleys...

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-02-15T14:28:57.040Z · LW(p) · GW(p)

Trolley problems are explicitly presented as thought experiments.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-16T10:22:02.604Z · LW(p) · GW(p)

So is the sadistic conclusion - that specific issue hasn't come up in reality, though analogues may have (similar to the trolley problem).

comment by VAuroch · 2014-02-13T22:14:49.083Z · LW(p) · GW(p)

This seems like a Main post to me. And if this isn't, a more detailed version is.

comment by Lalartu · 2014-02-13T14:34:51.587Z · LW(p) · GW(p)

What's wrong with the Anti-Egalitarian Conclusion? People are not equal, cannot be equal and should not be equal. And so should not be treated equally. If it contradicts some moral intuitions, then it is this intuition that is the problem to be solved.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-13T14:56:42.540Z · LW(p) · GW(p)

Anti-egalitarianism is stronger than that: it says that things get better if they become less equal, holding total happiness constant. Make the happier person happier, make the sadder person sadder by the same amount, and things have gotten better overall.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-02-13T17:59:05.569Z · LW(p) · GW(p)

IOW, it is not "anti-egalitarian" in the sense of "not caring about maximizing equality", but rather in the sense of "caring about maximizing inequality".

Replies from: Lalartu
comment by Lalartu · 2014-02-14T12:54:03.911Z · LW(p) · GW(p)

Yes, disrupting perfect equality is a good thing in itself. I think that moral systems are not in any way laws of nature, but social constructs, and should be evaluated not by some higher principles, but by their effect on society. In practice equality means stagnation, therefore any moral system that holds perfect equality as an ideal is flawed. There is an optimal level of inequality for any given circumstances, and it is never zero.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-14T15:01:42.555Z · LW(p) · GW(p)

But that inequality is an instrumental rather than a terminal value. You only value it because it prevents stagnation, not because it's intrinsically a good thing.

Replies from: Lalartu
comment by Lalartu · 2014-02-15T13:31:18.876Z · LW(p) · GW(p)

Yes, it is instrumental.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-02-16T10:23:19.222Z · LW(p) · GW(p)

So it's not anti-egalitarian in the sense used here.