Integral vs differential ethics, continued
post by Stuart_Armstrong · 2015-08-03T13:25:08.053Z · LW · GW · Legacy · 18 comments
I've talked earlier about integral and differential ethics, in the context of population ethics. The idea is that the argument for the repugnant conclusion (and its associate, the very repugnant conclusion) depends on a series of trillions of steps, each of which is intuitively acceptable (adding happy people, making happiness more equal), but which reach a conclusion that is intuitively bad - namely, that we can improve the world by creating trillions of people in torturous and unremitting agony, as long as we balance it out by creating enough happy people as well.
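To make the arithmetic behind that chain concrete, here is a minimal total-utilitarian sketch in Python. The population sizes and welfare levels are my own illustrative assumptions, not figures from the argument itself:

```python
# A minimal total-utilitarian arithmetic sketch (illustrative numbers only).

def total_welfare(groups):
    """Sum welfare over groups of (population size, welfare per person)."""
    return sum(n * w for n, w in groups)

world_a = [(10**9, 100.0)]                        # a billion very happy people
world_z = [(10**15, 0.001)]                       # vastly more people, lives barely worth living
world_z_plus = [(10**15, 0.001), (10**9, -50.0)]  # plus a billion lives of unremitting agony

print(total_welfare(world_a))       # 1e11
print(total_welfare(world_z))       # 1e12: higher total, so "better" on this metric
print(total_welfare(world_z_plus))  # 9.5e11: still higher than world A, despite the agony
```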
Differential reasoning accepts each step, and concludes that the repugnant conclusions are actually acceptable, because each step is sound. Integral reasoning accepts that the repugnant conclusion is repugnant, and concludes that some step along the way must therefore be rejected.
Notice that key word, "therefore". Some intermediate step is rejected, not for intrinsic reasons, but purely because of the consequence. There is nothing special about the step that is rejected; it's just a relatively arbitrary barrier to stop the process (compare with the paradox of the heap).
Indeed, things can go awry when people attempt to fix the repugnant conclusion (a conclusion they rejected through integral reasoning) using differential methods. Things like the "person-affecting view" have their own absurdities and paradoxes (it's OK to bring a baby into the world if it will have a miserable life; we don't need to care about future generations if we randomise conceptions, etc.), and I would posit that this is because they are trying to fix global/integral issues using local/differential tools.
The relevance of this? It seems that integral tools might be better suited to dealing with the problem of bad AI convergence. We could set up plausibly intuitive differential criteria (such as self-consistency), but institute integral criteria that can override these if they go too far. I think there may be some interesting ideas in that area. The cost is that integral ideas are generally seen as less elegant, or harder to justify.
18 comments
comment by CronoDAS · 2015-08-03T23:34:03.415Z · LW(p) · GW(p)
My own resolution to the "repugnant conclusion" is that the goodness of a population isn't a state function: you can't know if a population is better than another simply by looking at the well-being of each currently existing person right now. Instead, you have to know the history of the population as well as its current state.
↑ comment by Stuart_Armstrong · 2015-08-04T09:02:23.695Z · LW(p) · GW(p)
Very integral reasoning ^_^
↑ comment by [deleted] · 2015-08-10T03:49:33.540Z · LW(p) · GW(p)
Observation: seemingly, consequentialists should be using "integral reasoning", while deontologists use "differential reasoning". If what you really care about is the final outcome, then you shouldn't assign much weight to what your intuitions say about each individual step.
↑ comment by [deleted] · 2015-08-10T03:52:07.942Z · LW(p) · GW(p)
Well, when you're dealing with real people, their memories in the present record all kinds of past events, or at least information about them, so any coherent, real-life state-function for the value of people must either genuinely take past events into account (which sounds like the right thing) or take memories into account (which sounds like a very, very hacky and bad idea).
comment by Dagon · 2015-08-03T15:29:40.386Z · LW(p) · GW(p)
I suspect much of the problem is that humans aren't very good at consistency or calculation. Scope insensitivity (and other errors) causes us to accept steps that lead to incorrect results once aggregated. If you can actually define your units and measurements, I strongly expect that the sum of the steps will equal the conclusion, and you will be able to identify the steps that are unacceptable (or accept the conclusion).
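A toy version of that telescoping point, with made-up numbers of my own, just to show what "the sum of the steps equals the conclusion" means once everything is quantified:

```python
# If each step's change in value is actually quantified, the per-step deltas
# must sum to exactly the aggregate change, so the steps and the conclusion
# cannot genuinely disagree.

values = [10, 12, 15, 11, 9]   # hypothetical value of each world in the chain
step_deltas = [b - a for a, b in zip(values, values[1:])]

print(step_deltas)              # [2, 3, -4, -2]: how each step looks in isolation
print(sum(step_deltas))         # -1
print(values[-1] - values[0])   # -1: identical to the sum, by telescoping
```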
I'd advise against the motivated reasoning of "if you don't like the conclusion, you have to find a step to reject", and instead suggest "I notice I'm confused that I have different evaluations of the steps and the aggregate, so I've probably miscalculated."
And if this is the case (the mismatch is caused by compounded rounding errors rather than a fundamental disconnect), then it seems unlikely to be a useful solution to AI problems, unless we fix the problems in our calculation rather than just reusing a method we've shown doesn't work.
↑ comment by Stuart_Armstrong · 2015-08-04T09:01:41.211Z · LW(p) · GW(p)
"I notice I'm confused that I have different evaluations of the steps and the aggregate, so I've probably miscalculated."
But then you have to choose either to correct the steps to get in line with the aggregate (integral reasoning), or the aggregate to get in line with the steps (differential reasoning).
↑ comment by Dagon · 2015-08-04T16:02:19.730Z · LW(p) · GW(p)
Inconsistency shows that there is at least one error; it does not imply that either calculation is correct (in fact, it provides some evidence against that). You can't choose which one to adjust to fit the other; you have to correct all errors. Remember, consistency isn't a goal in itself, it's just a bit of evidence for correctness.
For the specific case in point, the error is likely around not being numerical in the individual steps - how much better is the universe with one additional low-but-positive life added? How much (if any) is the universe improved by a specific redistribution of life-quality? Without that, you can't know if any of the steps are valid and you can't know if the conclusion is valid.
↑ comment by Stuart_Armstrong · 2015-08-05T08:27:28.418Z · LW(p) · GW(p)
These are values we're talking about - the proof is a proof of inconsistency between two value sets, and you have to choose which parts of your values to give up, and how. Your choice of how to be numerical in each step determines which values you're keeping.
↑ comment by Dagon · 2015-08-05T18:12:25.325Z · LW(p) · GW(p)
I think we agree on the basics - the specificity of calculation allows you to identify exactly what you're considering, and find out what the mismatch is (missing a step, making an incorrect step, and/or mis-stating the summation). This is true for values as well as factual beliefs.
It is only after this that you understand your proposed values well enough to know whether they are different value-sets, or just a calculation mistake in one or both. Once you know that, then you can decide which, if either, apply to you.
I guess you should also separately decide whether it's good and important for you to think of yourself as a unitary individual, versus a series of semi-connected experiences. Do you (singular you) want to have a single consistent set of values, or are all the future you-components content to behave somewhat randomly over time and context? This is mostly assumed in this kind of discussion, but probably worth stating if you're questioning what (if anything) you learn from an inconsistency.
↑ comment by [deleted] · 2015-08-14T02:12:41.746Z · LW(p) · GW(p)
Hey, a thought occurred. I was random-browsing The Intuitions Behind Utilitarianism and saw the following:
You can say anything, but Graham's number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.
It occurs to me that this sounds a lot like the problem with the linear scaling used by "utilitarianism": "paradox" of the heap or not, things can have very different effects when they come in large numbers or very small numbers. You really should not have a utility function that rates the "disutility of an air molecule slamming into your eye" and then scales up linearly with the number of molecules, precisely because one molecule has no measurable effect on you, while an immense number (a tornado, for instance) can and will kill you.
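Here is a toy contrast, my own illustration rather than anything from the quoted post, of linear aggregation of per-molecule "disutility" versus a nonlinear model where harm only appears past a physical threshold (all numbers are made up):

```python
def linear_disutility(n_molecules, per_molecule=1e-30):
    # The linear-utilons axiom: total harm scales directly with the count.
    return n_molecules * per_molecule

def threshold_disutility(n_molecules, lethal_count=1e27):
    # A crude nonlinear model: negligible below the threshold, fatal above it.
    return 0.0 if n_molecules < lethal_count else 1.0

for n in (1, 1e20, 1e28):
    print(n, linear_disutility(n), threshold_disutility(n))
# The linear model assigns tiny-but-growing harm throughout; the threshold
# model says nothing happens until suddenly everything does.
```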
When you assume linear scaling of utility as an axiom (that "utilons" are an in-model real scalar), you are actually throwing out the causal interactions involving the chosen variable (eg: real-world object embodying a "utilon") that scale in nonlinear ways. The axiom is actually telling you to ignore part of the way reality works just to get a simpler "normative" model.
So in a typical, intuitive case, we assume that "maximizing happiness" means some actually-existing agent actually experiences the additional happiness. But when you instead have a Utilitarian AI that adds happiness by adding not-quite-clinically-depressed people, the map of "utility maximizing" as "making the individual experiences more enjoyable" has ceased to match the territory of "increase the number of individuals until the carrying capacity of the environment is reached". A nonlinear scaling effect happened - you created so many people that they can't be individually very happy - but the "normativity" of the linear-utilons axiom told your agent to ignore it.
I think a strong criterion for a True Ethical System should be precisely that it doesn't "force" you to ignore the causal joints of reality.
comment by Username · 2015-08-17T10:35:18.026Z · LW(p) · GW(p)
The Puzzle of the Self-torturer talks about transitive and intransitive preferences.
↑ comment by Stuart_Armstrong · 2015-08-17T12:04:13.245Z · LW(p) · GW(p)
Thanks for the link.
comment by IffThen · 2015-08-06T22:37:22.547Z · LW(p) · GW(p)
I would re-frame the issue slightly; the process that philosophy/ethics goes through is something more like this:
If given A, B, and C we get D, and if not-A is unacceptable and not-B is unacceptable and not-C is unacceptable and D is unacceptable, then we do not fully understand the question. So let's play around with all the possibilities and see what interesting results pop up!
Playing around should involve frequent revisits to the differential and integral inspections of the argument; if you are doing just one type of inspection, you are doing it wrong.
But in the end you might come to a solution, and/or you might write a very convincing-sounding paper on a solution, but the assumption isn't that you have now solved everything because your integrating or differentiating is nicely explicable. It is highly debatable whether any argument in ethics is more than an intuition pump used to convince people of your own point of view. After all, you cannot prove even the basic fact that happy people are good; we simply happen to accept that as a warrant... or, in some cases, make it our definition of 'good'.
It is worth considering the possibility that all these ethical arguments do is try to make us comfortable with the fact that the world does not work in our favor... and the correct solution is to accept as a working hypothesis that there is no absolutely correct solution, only solutions that we should avoid because they feel bad or lead to disaster. (To clarify how this would work in the case of the Repugnant Conclusion, the correct-enough solution might involve each population setting its own limits on population size and happiness ranges, and those who disagree having to make their own way to a better population; alternatively, we might define an acceptable range and stick to it, despite political pundits criticizing our decision at every turn, and many people maintaining roiling angst at our temerity.)
In the end, the primary and perhaps only reason we continue to engage in philosophy is because it is foolish to stop thinking about questions simply because we do not have a solution (or an experimental process) that applies.
comment by [deleted] · 2015-08-10T03:46:47.359Z · LW(p) · GW(p)
Do these apply to non-utilitarian theories of ethics?
comment by DataPacRat · 2015-08-04T00:59:25.934Z · LW(p) · GW(p)
What are the forms of math called where you can compare numbers, such as to say that 3 is bigger than 2, but can't necessarily add numbers - that is, 2+2 may or may not be bigger than 3?
↑ comment by gjm · 2015-08-04T14:37:14.334Z · LW(p) · GW(p)
There are mathematical structures that allow for comparison but not arithmetic. For instance, a total order on a set is a relation < such that (1) for any x,y we have exactly one of x<y, x=y, y<x and (2) for any x,y,z if x<y<z then x<z. (We say it's trichotomous and transitive.)
The usual ordering on (say) the real numbers is a total order (and it's "compatible with arithmetic" in a useful sense), but there are totally ordered sets that don't look much like any system of numbers.
There are weaker notions of ordering (e.g., a partial order is the same except that condition 1 says "at most one" instead of "exactly one", and allows for some things to be incomparable with others; a preorder allows things to compare equal without being the same object) and stronger ones (e.g., a well-order is a total order with the sometimes-useful property that there's no infinite "descending chain" a1 > a2 > a3 > ... -- this is the property you need for mathematical induction to work).
The ordinals, which some people have mentioned, are a generalization of the non-negative integers, and comparison of ordinals is a well-order, but arithmetic (albeit slightly strange arithmetic) is possible on the ordinals and if you're thinking about preferences then the ordinals aren't likely to be the sort of structure you want.
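To make the "compare but don't add" idea concrete, here is a small sketch (my own illustration, not from gjm's comment) of a partial order on multi-dimensional welfare profiles via Pareto dominance: some pairs can be compared, many are incomparable, and no addition is defined anywhere:

```python
def dominates(a, b):
    """True if a strictly Pareto-dominates b: at least as good in every
    dimension and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

x = (3, 5, 2)
y = (1, 4, 2)
z = (4, 1, 9)

print(dominates(x, y))                    # True: x is better than y
print(dominates(x, z), dominates(z, x))   # False False: x and z are incomparable
```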
↑ comment by Stuart_Armstrong · 2015-08-04T09:06:03.233Z · LW(p) · GW(p)
As banx said, take ordinal numbers (and remove ordinal addition). Classical ordinal numbers can be added, but they can't be scaled - you can't generally have x% of an ordinal number.
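A short illustration of that point (my own addition, not from the comment): ordinal addition exists but is sensitive to order, and there is no way to take a fraction of an ordinal:

```latex
\begin{align*}
1 + \omega &= \omega \;\neq\; \omega + 1
  && \text{(addition exists, but is not commutative)} \\
\nexists\,\alpha :\ \alpha + \alpha &= \omega
  && \text{(no ordinal is ``half'' of } \omega\text{, so scaling by a percentage is undefined)}
\end{align*}
```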