Why Do We Engage in Moral Simplification?
post by Wei Dai (Wei_Dai) · 2011-02-14T01:16:40.615Z · LW · GW · Legacy · 36 comments
It appears to me that much of human moral philosophical reasoning consists of trying to find a small set of principles that fit one’s strongest moral intuitions, and then explaining away or ignoring the intuitions that do not fit those principles. For those who find such moral systems attractive, the systems seem to have the power of actually reducing the strength of, or totally eliminating, the conflicting intuitions.
In Fake Utility Functions, Eliezer described an extreme version of this, the One Great Moral Principle, or Amazingly Simple Utility Function, and suggested that he was partly responsible for this phenomenon by using the word “supergoal” while describing Friendly AI. But it seems to me this kind of simplification-as-moral-philosophy has a history much older than FAI.
For example, hedonism holds that morality consists of maximizing pleasure and minimizing pain, utilitarianism holds that everyone should have equal weight in one’s morality, and egoism holds that morality consists of satisfying one’s self-interest. None of these fits all of my moral intuitions, but each does explain many of them. The puzzle this post presents is: why do we have a tendency to accept moral philosophies that do not fit all of our existing values? Why do we find it natural or attractive to simplify our moral intuitions?
Here’s my idea: we have a heuristic that in effect says, if many related beliefs or intuitions all fit a certain pattern or logical structure, but a few don’t, the ones that don’t fit are probably caused by cognitive errors and should be dropped and regenerated from the underlying pattern or structure.
As an example where this heuristic is working as intended, consider that your intuitive estimates of the relative sizes of various geometric figures probably roughly fit the mathematical concept of “area”, in the sense that if one figure has a greater area than another, you’re likely to intuitively judge that it’s bigger than the other. If someone points out this structure in your intuitions, and then you notice that in a few cases your intuitions differ from the math, you’re likely to find that a good reason to change those intuitions.
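To make the heuristic concrete, here is a minimal Python sketch of the area example (a toy illustration only; the areas, the intuitive estimates, and the three-times-median cutoff are all made up): fit the simple "size tracks area" pattern to a set of intuitive judgments, flag any judgment that deviates far more than the rest, and regenerate it from the fitted pattern.

```python
# Toy illustration of the "regenerate outliers from the fitted pattern"
# heuristic, using the area example. The areas, the "intuitive" estimates,
# and the three-times-median cutoff are all invented for this example.
import numpy as np

true_area = np.array([2.0, 6.0, 9.0, 12.0, 20.0, 30.0])   # actual areas of six figures
intuition = np.array([2.2, 5.5, 9.4, 30.0, 19.0, 31.0])   # intuitive size judgments; one is way off

# Fit the simple pattern "intuitive size is roughly proportional to area".
slope = np.sum(true_area * intuition) / np.sum(true_area ** 2)
residual = np.abs(intuition - slope * true_area)

# Judgments that deviate far more than the rest get treated as cognitive
# errors and are regenerated from the underlying pattern.
threshold = 3 * np.median(residual)
corrected = np.where(residual > threshold, slope * true_area, intuition)
print(corrected)   # only the fourth judgment is far enough off to be replaced
```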
I think this idea can explain why different people end up believing in different moral philosophies. For example, many members of this community are divided along utilitarian/egoist lines. Why should that be the case? The theory I proposed suggests two possible answers:
- They started off with somewhat different intuitions (or the same intuitions with different relative strengths), so a moral system that fits one person’s intuitions relatively well might fit another’s relatively badly.
- They had the same intuitions to start with, but encountered the moral philosophies in different orders. If each person accepts the first moral system that fits their intuitions “well enough”, and more than one fits, then whichever they encounter first gets adopted; adopting it changes their intuitions, causing the rest to be rejected (see the toy sketch after this list).
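As a toy illustration of the second answer (the vectors, the similarity measure, the “well enough” threshold, and the pull strength below are all arbitrary assumptions, not anything from the post), the following sketch gives two agents identical starting intuitions and has each adopt the first moral system whose fit clears the threshold; adoption pulls the intuitions toward the adopted system, so encounter order alone produces divergence.

```python
# Toy sketch of order-dependent adoption of moral systems. Intuitions and
# systems are arbitrary vectors; "fit" is cosine similarity; the 0.8 threshold
# and 0.6 pull strength are made-up parameters.
import numpy as np

def fit(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def adopt_first_good_enough(intuitions, systems, threshold=0.8, pull=0.6):
    """Accept the first system that fits 'well enough'; adoption pulls the
    intuitions toward that system, so the encounter order matters."""
    for name, system in systems:
        if fit(intuitions, system) >= threshold:
            return name, (1 - pull) * intuitions + pull * system
    return None, intuitions

start = np.array([0.7, 0.7, 0.3])                       # shared starting intuitions
utilitarianism = ("utilitarianism", np.array([1.0, 0.2, 0.2]))
egoism         = ("egoism",         np.array([0.2, 1.0, 0.2]))

# Same starting intuitions, different encounter order, different end state.
print(adopt_first_good_enough(start, [utilitarianism, egoism]))
print(adopt_first_good_enough(start, [egoism, utilitarianism]))
```

With these particular numbers, the first call adopts utilitarianism and the second adopts egoism, even though both runs start from identical intuitions.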
I think it’s likely that both of these are factors that contribute to the apparent divergence in human moral reasoning. This seems to be another piece of bad news for the prospect of CEV, unless there are stronger converging influences in human moral reasoning that (in the limit of reflective equilibrium) can counteract these diverging tendencies.
36 comments
Comments sorted by top scores.
comment by cousin_it · 2011-02-14T02:08:34.022Z · LW(p) · GW(p)
Why do we find it natural or attractive to simplify our moral intuitions?
I'll go with the Hansonian answer: we keep our old and complex systems of reasons-for-actions, but verbally endorse simple moral frameworks because they make it easier to argue against enemies or make allies. I don't believe people who profess to have adopted some moral system in earnest, because all simple moral systems recommend very extreme behavior when followed to their logical conclusions.
I like Nesov's idea of ditching the abstract phlogiston of "rightness" that doesn't have much causal or explanatory power anyway, and thinking only about concrete and varied reasons-for-actions instead. Accepting this view (ignoring abstract moral intuitions that have no motive power) might even make things easier for CEV.
Replies from: Wei_Dai, paulfchristiano
↑ comment by Wei Dai (Wei_Dai) · 2011-02-14T02:19:39.629Z · LW(p) · GW(p)
It seems to me that when someone adopts a moral system, they may not follow all of its conclusions, but their moral intuitions as well as actual actions do shift toward the recommendations of that system. Do you disagree?
Replies from: cousin_it, sabaton
↑ comment by cousin_it · 2011-02-14T03:39:20.404Z · LW(p) · GW(p)
I agree with that, but feel that they don't shift by very much. And when they do shift, the causality might well run in the other direction: sometimes we change our professed morality to justify our preferred actions. And most of our actions are caused by reasons other than our current professed morality anyway, so it's not likely to play a large role in the preferences that CEV will infer from us.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-02-14T04:17:46.664Z · LW(p) · GW(p)
If we consider a human as a group of agents with different values, we could say that the conscious self's values are greatly shifted when adopting a moral system, but its power is limited, because most of the human's actions are not under its direct control. For example, someone might eat too much and gain weight as a result, even if that is against their conscious desires. Depending on technological advances, that power balance could be changed, say if someone came up with a pill that lets you control your appetite.
FAI essentially lets the conscious self have total dominance, if it chooses to. Why should CEV weigh its values according to the balance of power as of 2011?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-02-14T10:47:56.634Z · LW(p) · GW(p)
If we consider a human as a group of agents with different values
Things like this are why it looks like a good idea to me to taboo "values". A human includes many heuristics that together add up to what counts as an "agent". Separate aspects/parts of a human include fewer heuristics, which makes these parts less like agents, and "values" for these parts become even less defined than for the whole.
So "human as group of agents with different values" translates as "human as a collection of parts with different structure", which sounds far less explanatory (as it should).
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-02-14T19:34:14.126Z · LW(p) · GW(p)
I agree that sometimes it can be useful to taboo "values". But I'm not sure why it would be helpful to taboo it here. I could rephrase my comment as saying that the subset of heuristics that corresponds to the conscious self, after adopting a new moral system, would cause a large shift in actions if it could (i.e., was given tools to overpower other conflicting heuristics), so it's not clear that adopting new moral systems should or would have little effect on CEV. Does tabooing "values" bring any new insights to this discussion?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-02-14T20:03:55.834Z · LW(p) · GW(p)
Does tabooing "values" bring any new insights to this discussion?
Probably not, but it lifts the illusion of understanding, which tabooing is all about. It's good practice to avoid unnecessary imprecision or harmless equivocation.
(Also, I'd include all the heuristics into "conscious self", not just some of them. They all have a hand in forming conscious decisions, and inability to know or precisely alter the workings of particular heuristics similarly applies to all of them. At least, the same criteria that exclude some of the heuristics from your conscious self should allow including external tools in it.)
↑ comment by sabaton · 2011-02-14T13:23:39.765Z · LW(p) · GW(p)
When someone verbally endorses a given framework, I understand it as saying "This is the framework that best fits my intuitions", but understand there are likely some points that diverge.
But maybe I am wrong and most people have actually realigned all their intuitions/behavior once they have picked a system?
↑ comment by paulfchristiano · 2011-02-14T16:35:01.889Z · LW(p) · GW(p)
I believe that abstract moral intuitions do have motive power.
For example, I have never been a very good utilitarian because I am selfish, lazy, etc. However, if one year ago you had offered me the option of becoming a very good preference utilitarian, in an abstract context where my reflexes didn't kick in, I would have accepted it. If you had given me the option to implement an AI which was a preference utilitarian (in some as-yet-never-made-sufficiently-concrete sense which seemed reasonable to me) I would have taken it.
I also am not sure what particular extreme behavior preference utilitarianism endorses when followed to its logical conclusion. I'm not aware of any extreme consequences which I hadn't accepted (of course, I was protected from unwanted extreme consequences by the wiggle room of a grossly under-determined theory).
comment by NancyLebovitz · 2011-02-14T03:55:38.550Z · LW(p) · GW(p)
I nominate wishful thinking. Life would be so much simpler and easier if we knew what we were doing.
comment by NihilCredo · 2011-02-14T13:02:08.840Z · LW(p) · GW(p)
why do we have a tendency to accept moral philosophies that do not fit all of our existing values?
Do most humans actually accept them, or do most humans find themselves conflicted between their complex pre-accumulated values and their attraction [see question 2] towards the elegance of a simple moral system, and then spend the rest of their lives trying to bend the two so that they coincide?
Why do we find it natural or attractive to simplify our moral intuitions?
I'm going to offer an hypothesis that I don't quite have the competence to properly defend, but which I find plausible off of weak evidence:
It is a defining feature of Western culture - stretching all the way back to classical Greece, and I'm thinking in particular of pre-Socratic philosophers as well as Euclides - to see the strength of intellectual edifices as inversely proportional to their Kolmogorov complexity, whatever the subject they treat. This heuristic permeates the overwhelming majority of our theoretical education, and it is practically instinctual for people like us to see systems consisting of fewer and simpler axioms as more likely to be "correct", even in a field such as ethics where it is unclear if there are strong reasons to apply said heuristic. People who were not educated in Western culture, or Westerners who have undergone only minimal schooling, are nowhere near as uncomfortable with the idea that their moral system cannot be reduced to a brief set of fundamental principles.
comment by syllogism · 2011-02-18T02:55:30.455Z · LW(p) · GW(p)
I think the line of research being conducted by Haidt and others on the psychology of morality is relevant here. His TED talk provides a simple summary: http://www.ted.com/talks/jonathan_haidt_on_the_moral_mind.html
The basic idea is that there are "primary colours" of our moral intuitions that include fairness, harm reduction, loyalty, purity, respect for authority. These considerations give you a web of intuitions about goodness and propriety that are often brought into conflict. You might have a respect for authority, but also recognise that this can cause grave harm. You might have a respect for purity that gives you a knee jerk aversion to promiscuity or homosexuality, but also see that this is unfair to people who have immutable preferences and who aren't harming anyone.
There's no way to reconcile these conflicting intuitions, so we tend to focus on a "prime directive" like "make the world a better place", or "be a fine upstanding individual in your own life and try not to judge other people too harshly".
I think the former, "make the world a better place", is particularly good for producing moral systems that are internally consistent. It does ask you to question and reject a lot of knee jerk reactions. Counter-factuals are useful in doing this. Someone who has trouble letting go of a moral distaste for promiscuity or gluttony should keep in mind that the same instinct could under different circumstances make you repulsed by the idea of a woman on her period preparing food. Someone who thinks there's a moral failure in refusing to stick together with your in-group under adversity should remember that this instinct could have put you on the wrong side of the Holocaust.
Replies from: Multiheaded
↑ comment by Multiheaded · 2011-07-08T19:15:39.030Z · LW(p) · GW(p)
Someone who thinks there's a moral failure in refusing to stick together with your in-group under adversity should remember that this instinct could have put you on the wrong side of the Holocaust
From a strictly utilitarian standpoint, if one had a strong commitment to the common good, but uncommonly little knee-jerk reaction or natural empathy, would it have made more sense to passively tolerate the Holocaust/offer only safe resistance, and live to affect the post-war world, where there could be more one could do for oneself/humanity?
Or make a stand and give your all to saving as many as possible, feeling plenty of moral gratification, and trying to go out in a blaze of glory when They came for you, which could also make you an inspiring example in, say, half a century?
I used to hold the former completely unacceptable after being strongly influenced by Hannah Arendt's Origins of Totalitarianism (a great read, highly recommended) and her notion of "Radical evil" (somewhat deontologically loaded), but I have yet to attempt a rationalist evaluation of what I've read.
comment by lukeprog · 2011-02-14T06:26:06.383Z · LW(p) · GW(p)
It's worth noting that most moral systems are not as simplistic as utilitarianisms and Kantianisms. Many religious moral systems, for example, project the complexity of human values onto the magical psychology of deities. Virtue theories and contract theories can't easily be reduced to a single moral principle, either. Recent studies in experimental philosophy suggest that moral relativism is more prevalent than once thought, too.
comment by endoself · 2011-02-14T05:32:26.164Z · LW(p) · GW(p)
When I believed in simple morality, I did so because I thought that morality had to be "fundamental" in some way. Complex value seemed as though it was chosen arbitrarily, while simple value could be built into the structure of the universe. If morality were regarded the same way by all sufficiently intelligent beings, it would have to be something simple, based on some small set of self-evident principles.
comment by Perplexed · 2011-02-14T05:05:22.533Z · LW(p) · GW(p)
I agree with much of your analysis - particularly the analogy to geometry, where, as you point out, the heuristic works. That heuristic is also useful in science. And in other branches of philosophy besides ethics.
The puzzle this post presents is: why do we have a tendency to accept moral philosophies that do not fit all of our existing values? Why do we find it natural or attractive to simplify our moral intuitions?
Here’s my idea: we have a heuristic that in effect says, if many related beliefs or intuitions all fit a certain pattern or logical structure, but a few don’t, the ones that don’t fit are probably caused by cognitive errors and should be dropped and regenerated from the underlying pattern or structure.
Given that the heuristic is so useful in other areas, I have to ask, "How can you be sure it is wrong to use it in ethics?"
Also, I think that the intuitions that don't fit into the logical structure or system are not usually discarded or replaced. Instead, I think they are often simply reclassified - if they can no longer be justified as intrinsic values, a just-so story is constructed to explain them as instrumental values, or perhaps as evo-psych artifacts of an ancestral environment which no longer applies.
Replies from: endoself, Wei_Dai
↑ comment by endoself · 2011-02-14T05:44:38.863Z · LW(p) · GW(p)
There definitely are some intuitions that are wrong on reflection, like scope insensitivity.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-02-17T22:26:25.748Z · LW(p) · GW(p)
I'm not so sure about that. See Shut Up and Divide?
Replies from: endoself
↑ comment by endoself · 2011-02-18T01:59:54.876Z · LW(p) · GW(p)
You still agree that it is wrong for morals not to depend on numbers of people, you just propose a different alternate decision system. Eliezer's example of willingness to donate to sick children in Israel could be included in some extremely convoluted but consistent decision process but, upon reflection, we find that this is not what we think is right.
↑ comment by Wei Dai (Wei_Dai) · 2011-02-14T19:48:58.418Z · LW(p) · GW(p)
Given that the heuristic is so useful in other areas, I have to ask, "How can you be sure it is wrong to use it in ethics?"
In fact, I'm not sure, but it does seem suspicious that different people, applying the same heuristic, can reach conclusions as different as utilitarianism and egoism. I guess I have another heuristic which says that in situations like this, don't do anything irreversible (which adopting a moral system can be if it permanently changes your moral intuitions) until I have a better idea of what is going on.
Replies from: Perplexed
↑ comment by Perplexed · 2011-02-14T21:13:20.121Z · LW(p) · GW(p)
I'm not sure there is all that much difference at the behavioral level between a utilitarian (who sometimes fails to act in accordance with his proclaimed values) and an egoist (who sometimes fails to notice that he could have 'gotten away with it').
I think your second heuristic is a good one, though I don't personally know anyone who is so closed minded that they would be unable to undo the adoption of a moral system and the changed intuitions that came with it.
Replies from: Peterdjones
↑ comment by Peterdjones · 2011-07-08T20:06:10.158Z · LW(p) · GW(p)
I think the more sophisticated versions of egoism and utilitarianism have a tendency to meet in the middle anyway. Good egoists aren't supposed to Prudently Predate (because of the effect it has on society--rather utilitarianish, that). Realistic utilitarians needn't indulge in relentless self-sacrifice.
comment by atucker · 2011-02-14T03:22:35.191Z · LW(p) · GW(p)
I think it’s likely that both of these are factors that contribute to the apparent divergence in human moral reasoning. This seems to be another piece of bad news for the prospect of CEV, unless there are stronger converging influences in human moral reasoning that (in the limit of reflective equilibrium) can counteract these diverging tendencies.
If my morality had changed simply because I heard about a moral simplification and threw out what didn't fit, and I only picked that simplification because I heard of it first, and my morality diverged from others' because of it, I would want to discard the simplification and go back to an earlier, more convergent morality.
Is that a sufficient converging influence?
comment by Eugine_Nier · 2011-02-14T19:50:20.432Z · LW(p) · GW(p)
I get the feeling simplified moral systems are largely a recent western phenomenon. This doesn't seem to be limited to moral systems, e.g., simple theories of history. I suspect this is caused by people seeing the success that fundamentally simple theories have had in the hard sciences, and trying to apply the same methods to other fields of endeavor.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-07-08T20:08:19.190Z · LW(p) · GW(p)
The Ten Commandments are pretty simple.
Replies from: asr
comment by Dr_Manhattan · 2011-02-14T18:11:28.876Z · LW(p) · GW(p)
Why do we find it natural or attractive to simplify our moral intuitions?
Pragmatically, there is some necessary lossy compression of moral intuitions, for these reasons:
1) Meta-value of being consistent in different emotional states. E.g. I do not like to kill, but that guy in a BMW who cut me off when it was my right of way... Or not doing things while being drunk that you will regret.
2) Providing instructions to agents acting on your behalf.
3) Compromise of one's preferences in some collective action context (corporate cultures)
Incidentally Iain Banks cleverly deals with the communication channel problem by having persons send each other mindstate extracts.
comment by paulfchristiano · 2011-02-14T16:22:10.305Z · LW(p) · GW(p)
In addition to being simple, the moral systems you describe also appear to be "obviously" consistent (although upon reflection they aren't). The more complicated systems you might adopt are not only more complicated, but also generally either don't look very consistent or (more frequently) are manifestly inconsistent. Complicated systems seem unlikely to be consistent unless they are extremely carefully designed.
If the point of your moral philosophy is to evaluate moral arguments, and if your moral philosophy is the only thing you use to evaluate moral arguments, having an inconsistent philosophy is dangerous for obvious reasons. So either you need to adopt a consistent moral philosophy, you need to be ready to change your philosophy when you are presented with an unpalatable argument (appealing to your real decision making process, which is your inconsistent intuition), or you should never engage in a moral argument. Changing your framework whenever you encounter an uncomfortable argument (or never engaging in an argument) is an easy way to avoid ever coming to an uncomfortable conclusion, which worries me.
I have never encountered a good simple moral framework, but the benefits of consistency do seem considerable. Of course a complicated consistent framework would be just as good, but I've never seen one of those either.
Replies from: prase
↑ comment by prase · 2011-02-14T17:23:48.816Z · LW(p) · GW(p)
I have never encountered a good simple moral framework, but the benefits of consistency do seem considerable. Of course a complicated consistent framework would be just as good, but I've never seen one of those either.
What's the point of having a moral philosophy at all, when all moral philosophies are bad?
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-02-14T20:40:41.522Z · LW(p) · GW(p)
I don't have a simple framework in which to evaluate moral arguments, but I would like one. I probably don't have a consistent framework in which to evaluate moral arguments (I suspect a sufficiently good arguer could convince me of anything) but I would like one.
Replies from: prase
↑ comment by prase · 2011-02-15T09:08:45.774Z · LW(p) · GW(p)
You also don't have a simple or consistent framework to evaluate aesthetic arguments, but I suppose you don't worry about it too much.
(I suspect a sufficiently good arguer could convince me of anything) but I would like one
I don't expect any arguer to convince you that e.g. torturing babies for sadistic pleasure is morally acceptable any more than a good arguer could convince you that 5+6=88. I mean, it may be possible to construct a really clever deceiving argument about anything and perhaps nobody is completely immune against some extremely well-designed sophistry, but that is not specific to moral arguments. And as for being persuaded to commit atrocities, I am far more afraid of people who have adopted some moral philosophy than those who judge instinctively. The moral theory can be exploited to convince the former, while little can be done with the latter.
Of course there are a lot of moral situations, like the trolley problems, where the right answer is far from obvious and it takes very small effort to change most people's minds (slightly modifying the formulation is often enough). This is undesirable if one aims to construct a consistent theory of morality, but why should we want to construct a consistent theory? Most of the historical attempts were based on the assumption that people who embraced the moral theory would behave better than their morally naïve peers. You will probably agree that no moral theory has been visibly successful so far, and there is even some general experimental evidence that it doesn't work. Shouldn't we conclude that creating neat consistent prescriptive moral theories is futile?
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-02-15T22:24:23.694Z · LW(p) · GW(p)
Why should we want to construct a consistent theory?
There are domains where my moral intuition fails me completely. I don't get to abstain from moral questions because I don't like them, so I need to develop some way of making decisions. I would like to be able to argue with people about what the correct decision is, even and especially on hard problems which aren't instantly solved by my intuitions. To handle this situation it looks like I either need to develop a consistent morality, to develop a logical system which can handle inconsistency, to give up on moral arguments, or to somehow enlarge my moral intuitions so they can act as a sanity check on arguments in all domains.
You also don't have a simple or consistent framework to evaluate aesthetic arguments, but I suppose you don't worry about it too much.
What is an aesthetic argument? I have occasionally engaged in arguments about what a certain audience will appreciate, or what I will appreciate in the future, but those are questions of fact which I evaluate in the same way as other questions of fact. I have occasionally gotten into arguments about what artistic sensibilities should be encouraged, but those are generally moral arguments that happen to deal with artwork (and in this case I do worry about it as much as I worry about morality at all).
Shouldn't we conclude that creating neat consistent prescriptive moral theories is futile?
No theory of everything has been visibly successful so far. I'm not too optimistic about finding a simple moral theory which agrees with my intuition in the domain where my intuition is defined, but not because of past failures.
Replies from: prase
↑ comment by prase · 2011-02-16T10:45:21.316Z · LW(p) · GW(p)
I have occasionally gotten into arguments about what artistic sensibilities should be encouraged, but those are generally moral arguments that happen to deal with artwork (and in this case I do worry about it as much as I worry about morality at all).
Yes, I have had in mind such arguments, and even more simply arguments over what's beautiful. I wouldn't classify them under the morality label, but if you do, and are consistently worried about them in the same way as about other moral arguments, fine with me. I apologise for expecting that you are less consistent than you really are.
I would like to be able to argue with people about what the correct decision is, even and especially on hard problems which aren't instantly solved by my intuitions.
The problem I have with this is: why should I care about situations where my moral intuition doesn't give answers? For me, morality is sort of defined by strong feelings associated with certain behaviors, and desires to reward or punish the actors. In the absence of these feelings I simply consider the question morally neutral. That's why I have mentioned the aesthetics. I have no consistent idea of what is beautiful. Although it is important for me to live in a beautiful environment, clever arguers can probably change my perception of beauty a bit. Now, if I had a logical system built partially on my aesthetic intuition, but consistent and stable, I could profit as much as you in case of morality. I could algorithmically decide whether something is beautiful or not. But if I imagine encountering something new, unexpected and ugly, and at the same time calling it beautiful because that was the output of my algorithm - well, that seems absurd. (I like to think this was what went wrong with many of the modern artistic styles, like brutalism. They created a norm and later decided what is beautiful using the norm.)
comment by Vladimir_Nesov · 2011-02-14T01:58:34.635Z · LW(p) · GW(p)
There're probably no simple reasons for people engaging in moral simplification.