Integral versus differential ethics
post by Stuart_Armstrong · 2014-12-01T18:04:56.293Z · LW · GW · Legacy · 44 comments
In population ethics...
Most people start out believing that the following are true:
- That adding more happy lives is a net positive.
- That redistributing happiness more fairly is not a net negative.
- That the repugnant conclusion is indeed repugnant.
Some will baulk at the first statement on equality grounds, but most people should accept those three statements as presented. Then they find out about the mere addition paradox.
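To make the steps concrete, here is a minimal toy sketch of the iterated process (the specific numbers and the small "redistribution bonus" are arbitrary illustrative choices, not part of the paradox's standard formulation):

```python
# Toy walkthrough of repeated "mere addition" steps: start with a small,
# very happy population, then repeatedly (1) add many people whose lives are
# barely worth living and (2) redistribute welfare equally, with a tiny net
# gain so that the redistribution step is not a loss.

def mere_addition(steps=5, size=10, welfare=100.0,
                  added=1000, added_welfare=1.0, bonus=1.01):
    total = size * welfare
    for i in range(1, steps + 1):
        size += added                   # statement 1: adding happy lives
        total += added * added_welfare
        total *= bonus                  # statement 2: fairer redistribution, no net loss
        print(f"step {i}: population={size}, total welfare={total:.0f}, "
              f"average={total / size:.2f}")

mere_addition()
```

Total welfare never falls at any step, yet the average slides towards "barely worth living" - the repugnant conclusion in miniature.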
Someone who then accepts the repugnant conclusion could reason something like this:
Adding happy people and redistributing happiness fairly, if done many, many times, in the way described above, will result in a repugnant conclusion. Each step along the way seems solid, but the conclusion seems wrong. Therefore I will accept the repugnant conclusion, not on its own merits, but because each step is clearly intuitively correct.
Call this the "differential" (or local) way or reasoning about population ethics. As long as each small change seems intuitively an improvement, then the global change must also be.
Adding happy people and redistributing happiness fairly, if done many, many times, in the way described above, will result in a repugnant conclusion. Each step along the way seems solid, but the conclusion seems wrong. Therefore I will reject (at least) one step, not on its own merits, but because the conclusion is clearly intuitively incorrect.
Call this the "integral" (or global) way of reasoning about population ethics. As long as the overall change seems intuitively a deterioration, then some of the small changes along the way must also be.
In general...
Now, I personally tend towards integral rather than differential reasoning on this particular topic. However, I want to make a more general point: philosophy may be over-dedicated to differential reasoning, mainly because it's easy: you can take things apart, simplify them, abstract details away, and appeal to simple principles - and avoid many potential biases along the way.
But it's also a very destructive tool to use in areas where concepts are unclear and cannot easily be made clear. Take the statement "human life is valuable". This can be taken apart quite easily and critiqued from all directions, its lack of easily described meaning being its weakness. Nevertheless, integral reasoning is almost always applied: something called "human life" is taken to be "valuable", and many caveats and subdefinitions can be added to these terms without changing the fundamental (integral) acceptance of the statement. If we followed the differential approach, we might end up with a definition of "human life" as "energy exchange across a neurone cell membrane", or something equally ridiculous but much more rigorous.
Now, that example is a parody... but only because no-one sensible does that: we know that we'd lose too much value from that kind of definition. We want to build an extensive/integral definition of life, using our analysis to add clarity rather than to simplify it down to a few core underlying concepts. But in population ethics and many other cases, we do feel free to use differential ethics, replacing vague overarching concepts with clear, simplified versions that plainly throw away a lot of the initial concept.
Maybe we do it too much. To pick an example I disagree with (always a good habit), maybe there is such a thing as "society", for instance, not simply the sum of individuals and their interactions. You can already use pretty crude consequentialist arguments with "societies" as agents subject to predictable actions and reactions (social science does it all the time), but what if we tried to build a rigorous definition of society as something morally valuable, rather than focusing on the individual?
Anyway, we should be aware of when, in arguments, we are keeping the broad goal fixed and making the small steps and definitions conform to it, and when we are focusing on the small steps and definitions and following them wherever they lead.
44 comments
Comments sorted by top scores.
comment by Gunnar_Zarncke · 2014-12-02T00:05:47.768Z · LW(p) · GW(p)
I think the seeming contradiction can be broken, and located at a more or less precisely defined point between the extremes. It goes as follows:
Most humans don't value all lives the same. The ancestral environment has made us value people near us more than distant people. People more distant than the closest 150 (Dunbar's number) receive significantly less attention (and in some hunter-gatherer communities the value may actually be zero, or possibly even negative). Our modern society has allowed us to apply ethics and empathize with all beings, and this flattens out the valuation of human lives. But only in the idealized case does it value all beings equally. Humans intuitively don't.
So if you add people and redistribute, most people, even if they consciously accept equal valuation, will be caught by a conclusion repugnant to their intuition. They don't really value all those many beings equally after all. If you applied some transitional valuation function, the math would work out a bit differently, as each add-and-redistribute step adds a little bit less value - because the added being is more distant from you after all, even if only by a very small bit.
I think this model is more honest to what humans really feel. Society's opinion may differ, though.
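A minimal sketch of the kind of distance-discounted valuation being described (the exponential discount and the specific constants are arbitrary illustrative choices):

```python
import math

# Welfare total as seen from one person's point of view: full weight for the
# ~150 socially closest people (Dunbar's number), exponentially decaying
# weight for everyone beyond that.
def felt_total(welfares, dunbar=150, decay=1e-3):
    # welfares is assumed to be ordered from socially closest to most distant
    return sum(w * (1.0 if i < dunbar else math.exp(-decay * (i - dunbar)))
               for i, w in enumerate(welfares))

for n in (1_000, 10_000, 100_000):
    barely_happy = [1.0] * n
    print(n, sum(barely_happy), round(felt_total(barely_happy)))
```

The plain total grows without bound as people are added, but the discounted total saturates (here at roughly 150 + 1/decay), so each further add-and-redistribute step buys less and less felt value - the "transitional valuation" effect described above.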
comment by solipsist · 2014-12-01T18:51:25.522Z · LW(p) · GW(p)
One man's modus ponens is another man's modus tollens...
Replies from: torekp
↑ comment by torekp · 2014-12-02T14:25:48.202Z · LW(p) · GW(p)
The usual way to handle such cases is "reflective equilibrium". In a simple variant of the Mere Addition Paradox we have three intuitive premises: A+ is not worse than A, B- is not worse than A+, B is not worse than B-; one intuitive inference rule (transitivity of not-worse-than); and one counterintuitive conclusion. For those who feel the pull of that typical pattern of intuitions, you just have to decide which (at least) one of those intuitions to reject. (Trying to explain each one away often helps.) Other things being equal, if we wind up rejecting exactly one intuition, we will be "integral thinkers" N/(N+1) of the time, where N is the number of premises+rules that leads to the counterintuitive conclusion.
The "integral vs differential" framework is useful for identifying cognitive habits or biases. But once we lay it all out as a choice between rejecting statements or inference rules, the terrain looks different.
comment by Unknowns · 2014-12-01T19:22:43.392Z · LW(p) · GW(p)
I accept the "repugnant" conclusion, on its own merits, namely because it seems obviously true to me.
Replies from: Fluttershy, Dagon, shminux
↑ comment by Fluttershy · 2014-12-01T19:45:19.920Z · LW(p) · GW(p)
I feel the term "repugnant conclusion" is unfair, and biases people against accepting said "repugnant conclusion", regardless of whether or not the "repugnant conclusion" is, in fact, something one might want to accept. Maybe you could instead call the "repugnant conclusion" something like "the happiness-sharing principle", if you wanted more people to accept it.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-12-01T20:22:50.085Z · LW(p) · GW(p)
I feel that the fact that people are presented with the repugnant conclusion only in the context of the mere addition paradox (whereas adding happy people and redistributing utility are often presented on their own) is unfair ^_^
↑ comment by Dagon · 2014-12-01T19:29:31.349Z · LW(p) · GW(p)
I find it repugnant. It doesn't come up much for me, though - I reject #2. Redistributing happiness can be net negative.
Replies from: Unknowns
↑ comment by Unknowns · 2014-12-01T19:41:07.231Z · LW(p) · GW(p)
Yes, I'm pretty skeptical of #2 as well. At least it doesn't seem obviously true.
Replies from: Jiro
↑ comment by Jiro · 2014-12-01T19:48:46.933Z · LW(p) · GW(p)
I am more inclined to reject the premise about adding very happy lives increasing overall happiness. If inequality is bad, adding very happy lives to a larger population produces more inequality than adding them to a smaller population. If the population is large enough, adding the happy lives can decrease overall utility.
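One hedged way to cash this out numerically (the inequality penalty and all numbers below are arbitrary illustrative choices):

```python
# Value a population as total welfare minus a penalty proportional to
# inequality (here: the sum of absolute deviations from the mean). With a
# fixed penalty weight, adding the same batch of very happy people helps a
# small population but hurts a sufficiently large one.

def value(welfares, penalty=0.7):
    mean = sum(welfares) / len(welfares)
    inequality = sum(abs(w - mean) for w in welfares)
    return sum(welfares) - penalty * inequality

happy_batch = [100.0] * 100
for n in (100, 100_000):
    background = [1.0] * n   # a barely-happy background population
    gain = value(background + happy_batch) - value(background)
    print(f"background population {n}: change in value = {gain:+.0f}")
```

With these numbers, the batch of very happy lives is a net gain against a background of 100 people but a net loss against 100,000, because the inequality it introduces grows with the size of the background population.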
Replies from: Luke_A_Somers, Dagon, Stuart_Armstrong
↑ comment by Luke_A_Somers · 2014-12-02T18:17:21.872Z · LW(p) · GW(p)
That isn't one of the premises - it didn't say very happy, it just said happy - at least minimally happy.
So, your rejection is significantly stronger than needed in order to dodge the repugnant conclusion. It's also more dubious.
↑ comment by Dagon · 2014-12-01T20:44:58.471Z · LW(p) · GW(p)
Freaky. Adding very happy lives at no cost to others seems like an unqualified win to me. I do reject the idea that inequality is always bad, which probably explains our disagreement on this.
Visible, salient inequality can reduce both gross and net happiness in some populations, but that's not a necessary part of moral reasoning, just an observed situation of current humanity.
↑ comment by Stuart_Armstrong · 2014-12-01T20:20:47.460Z · LW(p) · GW(p)
That's more my approach. You can get this by, eg, assuming a little bit of average utilitarianism alongside your other values.
↑ comment by Shmi (shminux) · 2014-12-01T21:36:24.823Z · LW(p) · GW(p)
I don't think you do; you are just saying that you do.
Replies from: None, Unknowns
↑ comment by [deleted] · 2014-12-01T22:22:44.767Z · LW(p) · GW(p)
I don't see how you can distinguish the two. For an ethical belief that has direct practical implications (e.g. "eating animals is bad"), you can accuse someone of being hypocritical by pointing out that they don't actually act that way (e.g. they eat a lot of meat). But the repugnant conclusion isn't directly applicable to practical decisions - nothing an individual can do will change the world enough to bring it about, and the repugnant conclusion doesn't directly imply anything about the marginal value of quality vs. quantity of lives.
↑ comment by Unknowns · 2014-12-02T01:28:56.363Z · LW(p) · GW(p)
If you mean you don't believe I would take practical steps that favored the supposedly repugnant conclusion if I had a choice, then you are mistaken. For example, I am opposed to the use of contraception because it keeps population lower, and I would maintain this opposition even if it were clearly shown that with the increasing population, the standard of living would continue to decline until everyone's lives were barely worth living. This affects me in practical ways such as what I try to persuade other people to do, what I would vote for, what I would do myself in regard to a family, and so on.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-12-02T02:07:02.386Z · LW(p) · GW(p)
I am opposed to the use of contraception because it keeps population lower
How many women have you impregnated so far?
Replies from: Unknowns
↑ comment by Unknowns · 2014-12-02T02:19:32.363Z · LW(p) · GW(p)
First of all, this is rude, especially if I am a woman myself.
Second, I am interested in increasing the population, but I am also interested in other things. (And this does not mean that I want these other things and a low population instead of a high population; I expect a high population to happen whether or not I am involved in bringing it about, so I can achieve both goals by working mainly for the other goals.)
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-12-02T07:18:22.858Z · LW(p) · GW(p)
Eh, I concede you have a point there re actively working toward spreading utility as thin as possible. I still have trouble believing that you truly mean what you say. But hey, I've been wrong before.
comment by Shmi (shminux) · 2014-12-01T18:47:43.693Z · LW(p) · GW(p)
I agree with what you call integral reasoning (if you end up with an "obviously" bad outcome, you should not accept it based on your individual steps, no matter how solid they seem or even are). But to me it is not a symptom of a flaw in your premises or logical steps. Rather, it is an indication that human ethics is a complicated mess of heuristics and contradictions, and any attempt to formalize it inevitably hits one of these contradictions when extrapolated outside its domain of applicability.
Re your other points, it seems like a typical Sorites emergence scenario. Human life/agency/society are all emergent concepts which make no sense outside the domain of the models describing them. For example, of course there is such a thing as "society". It's a good statistical model of how thousands of interacting humans act on aggregate. No, there is no society of one or two humans. Though I don't understand what "build a rigorous definition of society as something morally valuable, rather than focusing on individual" is supposed to mean.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-12-01T20:29:01.820Z · LW(p) · GW(p)
Though I don't understand what "build a rigorous definition of society as something morally valuable, rather than focusing on individual" is supposed to mean.
I was thinking aloud (or a-writing, or whatever the term is). Some people value, say, the existence of "French civilization" (insert your own preferred example here). What if we took this seriously? What if we actually valued civilizations/cultures/nations/societies to the extent that we would make decisions that were in their "interests" (most likely the interest of continuing to exist), to some extent, above and beyond the interests of the people who value them?
As I said, I don't agree with that, but I thought it would be interesting to explore integral reasoning in an area I disagreed with...
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-12-01T21:46:39.154Z · LW(p) · GW(p)
What if we actually valued civilizations/cultures/nations/societies to the extent that we would make decisions that were in their "interests" (most likely the interest of continuing to exist), to some extent, above and beyond the interests of the people who value them?
Yeah, we know how this approach invariably turns out, but I see what you mean now. I am not sure it is a good example of differential vs integral, because in a thriving society individual people tend to thrive, too, even if you do not necessarily construct it by satisfying interests of the individual. It's more of a local vs global maximum thing.
comment by Kindly · 2014-12-03T19:59:39.884Z · LW(p) · GW(p)
Why are the small steps necessary? Axiom 1 is sufficient to add an arbitrarily large population of minimally-happy people. If you think that this is repugnant, then you should not assume Axiom 1.
(You then need Axiom 2 to get from the state of "tiny-but-happy A, plus huge and minimally happy B" to a state of uniform minimal happiness. But I don't think that changes much, except that if you're imagining yourself as living in A you might selfishly prefer to remain very happy.)
I apologize for engaging with the example rather than the thesis.
comment by Fluttershy · 2014-12-01T19:33:17.973Z · LW(p) · GW(p)
People who don't object to the way in which the problem leading to the "repugnant conclusion" is set up, and who accept that B is preferable or equal to B-, which is preferable or equal to A+, which is preferable or equal to A, as defined here, must either accept that B is preferable or equal to A, or reject transitivity.
Most decision theories which would be viewed as sane around LW accept transitivity. I prefer A+ to B-, though, so I don't have to accept all forms of the "repugnant conclusion", even though I accept transitivity, and prefer A+ to A. I do agree that "For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living", though-- I don't value the lives of people with high standards of living infinitely more than the lives of people with lives barely worth living.
Also see dust specks vs. torture.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-12-01T20:15:47.444Z · LW(p) · GW(p)
must either accept that B is preferable or equal to A
B versus A is a whole class of B versus a whole class of A. You need only reject that once (while accepting it in most cases), and the mere addition paradox no longer goes through.
comment by IlyaShpitser · 2014-12-01T18:31:41.525Z · LW(p) · GW(p)
I agree with this point, and I am glad you articulated it. Do you think differential reasoning is an analytic thing, though?
comment by SilentCal · 2014-12-02T23:22:47.007Z · LW(p) · GW(p)
I've thought about the same kind of distinction in politics, with the terms 'bottom-up' and 'top-down'. 'Bottom-up' view: "Does it make sense for citizens to pay police officers to fine them money if they drive without seat belts?" 'Top-down' view: "Do seat belt laws reduce fatalities?"
This example may not be the most impartial, but I think it's an important principle that if you did both methods perfectly, they'd agree; in the absence of such perfection, it's worth looking at both. So too for ethics.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-12-03T16:04:03.988Z · LW(p) · GW(p)
This example may not be the most impartial
Indeed. But I can't actually figure out which way it's partial :-)
if you did both methods perfectly, they'd agree
Yes... and no. What's happening in your example is that the phrasings are triggering different ethical values. So while they would agree as descriptions of reality, people's (malleable) values would be triggered in different ways by the two descriptions. But why do I bring up this somewhat unrelated point?
So too for ethics.
Because in ethics, we are choosing our values (among incompatible or at least in-tension values) so there's no reason to suspect that the two approaches would reach the same outcome.
Replies from: SilentCal
↑ comment by SilentCal · 2014-12-03T23:15:40.837Z · LW(p) · GW(p)
Indeed. But I can't actually figure out which way it's partial :-)
I'm happy to hear that--I think :)
Yes... and no. What's happening in your example is that the phrasings are triggering different ethical values.
I don't think that's the entire story--I think there are facts that are more apparent to some views than to others. A perfect bottom-up reasoner would be able to consider an individual and conclude (if it were true) that paying police to fine them for not wearing seat belts really would make them wear seat belts more, with net positive effect. A perfect top-down reasoner would see the aggregate effects of seat belt laws (if there were any) on autonomy, annoyance, and any other things that we don't have numbers on in real life. That is, either view will trigger all of the relevant values if it's done well enough.
In the ethics case, I'm similarly hopeful that there is a coherent answer--that is, that if the repugnant conclusion really is wrong, a perfect differential reasoner would immediately spot the flawed step without having to consider the integral effect, and if the repugnant conclusion is correct, a perfect integral reasoner would see that without having to construct a series of mere addition steps.
Why do I think there's a coherent answer? Maybe just optimism... but the post was suggesting that we should use integral ethics more. The 'should' in the previous sentence suggests that the choice between the obvious differential answer and the obvious integral answer is at least not arbitrary. Also, maybe I'm taking the mathematical terminology too literally, but to a logically perfect reasoner the differential and integral forms should imply each other.
comment by NancyLebovitz · 2014-12-02T15:52:25.427Z · LW(p) · GW(p)
Speaking of the world probably changing quite a bit, what's the likelihood of a future that won't wake you up because they think something about you makes it not worthwhile to have you around? It could be something you think is completely normal, and it's not necessarily on the current lists of what the future won't like about us.
If you're lucky, they leave you frozen because a society that wants you might appear sooner or later. If you're not lucky, they're very sure they're right, so they destroy your body.
comment by Salemicus · 2014-12-02T14:02:00.911Z · LW(p) · GW(p)
I think this logic applies to a whole class of problems, not just ethics. Specifically, if we are unsure of both our premises and our conclusion, then we might want to apply a method of reflective equilibrium, which combines what you call the "integral" and "differential" perspectives.
comment by Dagon · 2014-12-01T19:33:00.926Z · LW(p) · GW(p)
If your differentials don't add up to the integral, pick a better epsilon. Seriously - inconsistent implies untrue, but reversing it by changing one of the premises arbitrarily doesn't help get to truth either.
Replies from: roystgnr, shminux
↑ comment by roystgnr · 2014-12-02T02:35:33.859Z · LW(p) · GW(p)
If your differentials have an associated error term that is O(epsilon) or worse, then no choice of epsilon will get them to add up correctly.
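Spelling out the calculus analogy (a gloss, not anything from the original comment): covering a fixed interval of length $L$ with steps of size $\varepsilon$ takes $N = L/\varepsilon$ steps, so a per-step error of size $c\,\varepsilon$ accumulates to

$$N \cdot c\,\varepsilon = \frac{L}{\varepsilon} \cdot c\,\varepsilon = cL,$$

which does not shrink as $\varepsilon \to 0$. Only per-step errors of order $\varepsilon^2$ or smaller wash out in the limit.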
Replies from: Dagon
↑ comment by Dagon · 2014-12-02T12:39:05.975Z · LW(p) · GW(p)
That's a really good point. The solution there would be to acknowledge the uncertainty, rather than changing either the sum or the components.
Another bullet for me to bite: I'm fairly confident in #1. I'm highly uncertain on #2. I'm middling uncertain on #3. I don't intuitively like it, but I'm not actually sure how firmly I can defend that position (to myself).
Thanks! It adds up again!
↑ comment by Shmi (shminux) · 2014-12-01T21:40:15.353Z · LW(p) · GW(p)
Existing human ethics is not math; it is inconsistent, but it works within its limits, so in this sense it is true. What is untrue is any formalization of it that relies on high-school math.
comment by Alejandro1 · 2014-12-01T18:51:18.282Z · LW(p) · GW(p)
Another dilemma where the same dichotomy applies is torture vs. dust specks. One might reason that torturing one person for 50 years is better than torturing 100 people infinitesimally less painfully for 50 years minus one second, and that this is better than torturing 10,000 people very slightly less painfully for 50 years minus two seconds... and at the end of this process accept the unintuitive conclusion that torturing someone for 50 years is better than having a huge number of people suffer a tiny pain for a second (differential thinking). Or one might refuse to accept the conclusion and decide that one of these apparently unproblematic differential comparisons is in fact wrong (integral thinking).
Replies from: TheOtherDave, Stuart_Armstrong
↑ comment by TheOtherDave · 2014-12-01T19:17:26.086Z · LW(p) · GW(p)
(nods) That said, "integral thinking" is difficult to apply consistently to thought-experimental systems as completely divorced from anything like my actual life as TvDS.
I find in practice that when I try, I mostly just end up ignoring the posited constraints of the thought-experimental system -- what is sometimes called "fighting the hypothetical" around here.
For example, when I try to apply "integral thinking" to TvDS to reject the unintuitive conclusion, I end up applying intuitions developed from life in a world with a conceivable number of humans, where my confidence that the suffering I induce will alleviate a greater amount of suffering elsewhere is pretty low, to a thought-experimental world with an inconceivable number of humans where my confidence is extremely high.
↑ comment by Stuart_Armstrong · 2014-12-01T20:17:36.255Z · LW(p) · GW(p)
Torture vs dust specks has other features - in particular, the fact that "torture" is clearly the right option under aggregation (if you expect to face the same problem 3^^^3 times).
Replies from: Alejandro1, shminux
↑ comment by Alejandro1 · 2014-12-01T20:44:59.019Z · LW(p) · GW(p)
The "clearly" is not at all clear to me, could you explain?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-12-02T10:27:11.297Z · LW(p) · GW(p)
Yes, I did underspecify my answer. Let's assume that a billion dust specks will completely shred one person.
Then, if you have a specific population (key assumption) of 3^^^3 people and face the same decision a billion times, you have the choice between a billion tortures and 3^^^3 deaths.
If you want to avoid comparing different negatives, figure out how many dust speck impacts (and at what rate) are equivalent to 50 years of torture, pain-wise, and apply a similar argument.
Replies from: Alejandro1
↑ comment by Alejandro1 · 2014-12-02T16:31:30.201Z · LW(p) · GW(p)
I think that violates the spirit of the thought experiment. The point of the dust speck is that it is a fleeting, momentary discomfort with no consequences beyond itself. So if you multiply the choice by a billion, I would say that the billion dust specks should be aggregated in a way where they don't pile up and "completely shred one person" - e.g., each person gets one dust speck per week. This doesn't help solve the dilemma, at least for me.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-12-04T12:57:52.554Z · LW(p) · GW(p)
Ok, then it doesn't solve torture vs dust specks. But it does solve many analogous problems, like 0.5 seconds of torture for many people vs 50 years for one person, for example.
I touched on the idea here: http://lesswrong.com/lw/1d5/expected_utility_without_the_independence_axiom/
But it's important to note that there is no analogue to that in population ethics. I think I'll make a brief post on that.
↑ comment by Shmi (shminux) · 2014-12-01T21:43:22.366Z · LW(p) · GW(p)
I thought it was an excellent example of differential vs integral and the Sorites paradox.