Most-Moral-Minority Morality
post by byrnema · 2011-06-27T16:28:04.925Z · LW · GW · Legacy · 35 comments
In this post, I discuss a theoretical strategy for reaching a morally optimal world, one in which, regardless of my intrinsic moral preferences, I fold to the preferences of the most moral minority.
I don't personally find X to be intrinsically immoral. I know that if some people knew this about me, they might feel shocked, sad and disgusted. I can understand how they would feel because I feel that Y is immoral and not everyone does, even though they should.
These are unpleasant feelings, and combined with the fear that immoral events will happen more frequently due to apathy, they make me willing to fold X into my category of things that shouldn't happen. Not because of X itself, but because I know it makes people feel bad.
This is more than the game-theoretic strategy of "I'll be anti-X if they'll be anti-Y." It reflects the view that the most moral world is one in which people's moral preferences are maximally satisfied, so that no one needs to feel that their morality is marginalized and suffer feelings of disgust and sadness.
Ideal Application: Nested Morality Model
The sentiment and strategy just described are ideal in the case of a nested model of moralities, in which preferences can be roughly universally ranked from most immoral to least immoral: X1, X2, X3, X4, ... . Everyone has a threshold beyond which they no longer care. For example, all humans consider the first few elements to be immoral, but only the most morally sensitive humans care about the elements after the first few thousand. In a world where this model was accurate, it would be ideal to fold to the morality of the most morally sensitive. Not only would you be satisfying the morality of everyone, you could be certain that you were also satisfying the morality of your most moral future selves, especially by extending the fold a little further out.
Figure: Hierarchy of Moral Preferences in the Nested Morality Model
Note that in this model it doesn't actually matter if individual humans would rank the preferences differently: since every preference up to the fold is satisfied, the ordering doesn't matter. Folding to the most moral minority should solve all moral conflicts that result from varying sensitivity to a moral issue, regardless of differences in relative rankings. For example, by such a strategy I should become a vegetarian (although I'm not).
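As a minimal sketch of why this works, assuming the nested model holds (the labels, agents, and threshold numbers below are hypothetical, invented only for illustration):

```python
# Minimal sketch of the nested morality model.
# The labels X1..X10 and the agents/thresholds below are hypothetical,
# chosen only for illustration.

# Moral preferences, roughly universally ranked from most to least immoral.
preferences = [f"X{i}" for i in range(1, 11)]

# Each agent considers everything up to their own threshold immoral
# and is indifferent to everything beyond it.
thresholds = {"typical human": 3, "sensitive human": 6, "most sensitive human": 10}

# "Folding to the most moral minority": everyone agrees to avoid everything
# up to the largest threshold present in the population.
fold_point = max(thresholds.values())
avoided = set(preferences[:fold_point])

# Because the preference sets are nested, the largest set contains every
# smaller one, so every agent's moral preferences are satisfied.
for agent, t in thresholds.items():
    assert set(preferences[:t]) <= avoided, f"{agent} is not satisfied"
print(f"Everyone is satisfied by avoiding X1 through X{fold_point}")
```

The point of the sketch is simply that, under nesting, the most sensitive agent's preference set contains everyone else's, so a single fold satisfies them all at once.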
Real Life Application: Very Limited
However, in reality, moral preferences aren't neatly nested by sensitivity; they conflict. Someone may have a moral preference for Y, while someone else may have a clear preference for ~Y. Such conflicts are not uncommon and may represent the majority of moral conflicts in the world.
Secondly, even if a person is indifferent about the moral value of Y and ~Y, they may value the freedom or the diversity of having both Y and ~Y in the world.
When it comes to the latter conflicts, I think that the world would be a happier place if freedom and diversity suffered a little bit for very strong (albeit minority) moral preferences. However, freedom and diversity should not suffer too much for preferences that are very weak or held by very few people. In such a trade-off situation, an optimum cannot be found, since I don't expect to be able to place relative weights on 'freedom', 'diversity', and an individual's moral preference in the general case.
For now, I think I will simply resolve to (consider) folding to the moral preference Z of a fellow human in the simplest case where I am apathetic about Z and also indifferent to the freedom and diversity of Z and ~Z.
35 comments
Comments sorted by top scores.
comment by byrnema · 2011-06-28T04:38:05.116Z · LW(p) · GW(p)
Oops. Reading through the comments, I think my post was misunderstood as some kind of prescription for moral progress. In retrospect, the term 'most moral minority' was too loaded, and when I wrote 'the world would be better' if I heeded the morality of minority positions, I only meant my own preferences would be better satisfied if more people's preferences were satisfied -- not that people's preferences would be best satisfied if they listened to the minority. Very different things, but reading through my post I can see now that I was never careful to distance myself from the second stance.
By 'most moral minority' I meant the people with a greater number of moral opinions (e.g., opinions where other people were silent), not that their opinions were necessarily more moral.
I had meant to say something very simple: if someone should care where I do not care, I would try to fold their interests into my own. I'm sorry for the confusion.
comment by CaveJohnson · 2011-06-28T09:39:37.094Z · LW(p) · GW(p)
This is a great idea!
I'm ultra-sensitive about all actions that benefit my reproduction or utility function but that others don't care about.
What do you mean I'm not honest? That is an outrageously absurd proposition. Well, ok, maybe I fall like a tiny bit short of my ideal. Don't worry, due to the process of natural selection my successors will be optimised for the former, and even if we turn that off I can self-modify for the latter! Free utility is awesome! Problem?
comment by orthonormal · 2011-06-27T17:59:57.979Z · LW(p) · GW(p)
There's a significant difference between agreeing not to do X when I'm indifferent to it and others find it highly immoral, and agreeing to start finding X immoral; I'm much less willing to do the latter out of deference. Which are you talking about?
Replies from: byrnema
↑ comment by byrnema · 2011-06-27T18:21:57.034Z · LW(p) · GW(p)
I would agree that starting to find X immoral in and of itself would be overdoing it, especially as there could be a conflict later with people who object to ~X. I suppose I am probing a middling position where, even if you don't find X intrinsically immoral, you associate it with the suffering it would cause those people, and thus it acquires the immorality of being associated with that amount of suffering.
Back to the vegetarian example -- which I continue to find politically 'safe' -- the unhappiness I may be causing animal activists has not caused me to go as far as considering eating meat immoral, but I'm beginning to pause whenever I eat meat in deference to these minds, and I wonder whether I should continue to develop this second-person moral sensitivity. Arguably, the world could be better if my fellows didn't have to worry about the slaughtering of animals. And then, why should I continue to eat meat if the world would be better without it?
Replies from: Nisan
↑ comment by Nisan · 2011-06-27T21:16:21.591Z · LW(p) · GW(p)
If this were merely about a concern for the affective state of supporters of animal rights, you could just eat meat and then lie about it.
What I got out of your post was a game theory strategy, a sort of special case of the Golden Rule, by which you might decide to not eat meat in deference to supporters of animal rights — even when no one is looking — because there are certain behaviors you would like others to adopt reciprocally. Maybe you're a supporter of, um, turnips' rights, and you want others to refrain from eating turnips, at least where doing so would not be an inconvenience. So we have a Prisoner's Dilemma where you can eat animals or not, and the other player can eat turnips or not, and the best outcome is if everyone abstains from animals and turnips.
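A rough sketch of the game described above, with hypothetical payoff numbers chosen only to exhibit the Prisoner's Dilemma structure (each player is individually tempted to eat whatever the other does, yet both prefer mutual abstention to mutual indulgence):

```python
# Hypothetical payoffs for the two-player game sketched above.
# Player A chooses whether to abstain from eating animals; player B chooses
# whether to abstain from eating turnips. The numbers are invented purely
# to illustrate the Prisoner's Dilemma structure.
payoffs = {
    # (A abstains, B abstains): (A's utility, B's utility)
    (True,  True):  (3, 3),   # both abstain: best joint outcome
    (True,  False): (0, 4),   # A abstains, B eats turnips
    (False, True):  (4, 0),   # A eats meat, B abstains
    (False, False): (1, 1),   # both eat: mutual defection
}

# Whatever the other player does, each player gains by eating (defecting)...
for b_abstains in (True, False):
    assert payoffs[(False, b_abstains)][0] > payoffs[(True, b_abstains)][0]
for a_abstains in (True, False):
    assert payoffs[(a_abstains, False)][1] > payoffs[(a_abstains, True)][1]

# ...yet both players prefer mutual abstention to mutual eating.
assert payoffs[(True, True)][0] > payoffs[(False, False)][0]
assert payoffs[(True, True)][1] > payoffs[(False, False)][1]
```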
Replies from: byrnema
↑ comment by byrnema · 2011-06-27T21:40:42.804Z · LW(p) · GW(p)
There's the game theory consideration, certainly, but also I directly prefer a world in which people's preferences are satisfied. Though this preference isn't strong, I'm wondering if it could be strengthened through reflection and what the effects would be.
comment by [deleted] · 2011-06-27T17:04:30.780Z · LW(p) · GW(p)
These are unpleasant feelings, and combined with the fear that immoral events will happen more frequently due to apathy, they make me willing to fold X into my category of things that shouldn't happen. Not because of X itself, but because I know it makes people feel bad.
The point of this post, and this paragraph in particular, implies that other people's disgust is strong enough to outweigh your desire to do X. But I'm not sure that this really applies to a lot of cases--to use some admittedly-politically-charged examples, substitute "gay marriage" or "freedom of expression" for X and tell me what ~Y would have to be in order for it to be a fair trade.
Also, this system sounds like it impedes moral progress because it disincentivizes society from changing its values over time (because a trade-off would presumably satiate activists who would otherwise try to enact social reform). On that note, how would you deal with values changing over time?
Nonetheless, very interesting post. Upvoted.
Replies from: byrnema
↑ comment by byrnema · 2011-06-27T17:43:15.058Z · LW(p) · GW(p)
Also, this system sounds like it impedes moral progress because it disincentivizes society from changing its values over time
I guessed such a strategy would hasten moral progress. I think of moral progress as the morality of a more sensitive minority impressing itself, over time, on the general population. Can you think of examples that don't fit this pattern? For example, most people I know aren't vegetarian, but I think meat eaters could agree that vegetarians have the moral high ground if it were treated as a moral issue. Meat eaters folding to become vegetarians would accelerate moral progress, if this is what a future moral society would choose.
Replies from: saturn, None, None, Peterdjones
↑ comment by saturn · 2011-06-27T23:57:31.349Z · LW(p) · GW(p)
I think of moral progress as the morality of a more sensitive minority impressing itself, over time, on the general population.
Homosexuality seems to be a case where a less sensitive minority is currently impressing itself on the general population. Do you consider acceptance of homosexuality to be against moral progress?
Replies from: byrnema
↑ comment by byrnema · 2011-06-28T04:16:33.590Z · LW(p) · GW(p)
I would consider tolerance of homosexuality a case where a more sensitive morality is beginning to prevail over the majority. First, tolerance is the minority view (or was until recently), because non-tolerance was the status quo. Second, tolerance is the more sensitive moral position because it requires empathy with the group that has weaker social influence.
I would stand by what I wrote, that moral progress would be the more sensitive minority impressing itself on the general population, since it's sort of tautologically true, but I'd be careful to emphasize that my post wouldn't prescribe anything about moral conflicts.
↑ comment by [deleted] · 2011-06-28T09:24:53.218Z · LW(p) · GW(p)
I guessed such a strategy would hasten moral progress. I think of moral progress as the morality of a more sensitive minority impressing itself, over time, on the general population.
When I look at history I don't see much moral progress. What I see is moral change.
↑ comment by [deleted] · 2011-06-27T19:50:17.576Z · LW(p) · GW(p)
Imagine that there is some behavior called "snarf," which 10% of the population thinks is morally acceptable but 90% thinks is abhorrent. The desire of the 10% to perform snarf is going to be outweighed by the desire of the 90% to ban snarf. Thus, society will not move in the direction of legalizing snarf. If that's still too abstract, substitute "gay marriage" for "snarf."
Similarly, meat eaters would probably not agree that vegetarians hold the high ground.
Replies from: Peterdjones, byrnema, Alicorn
↑ comment by Peterdjones · 2011-06-27T22:21:01.010Z · LW(p) · GW(p)
I don't think that's a good analogy. I've never heard of a carnivore who thought meat eating was morally better. Their argument is that meat eating is not so bad that it becomes an ethical no-no; rather, it is an ethically neutral lifestyle choice. (Morally level ground.)
People can even carry on doing something they think is morally wrong on the excuse of akrasia.
And gay marriage is becoming slowly accepted.
Replies from: Nornagest, Jayson_Virissimo, anon895, MBlume
↑ comment by Nornagest · 2011-06-28T00:48:04.528Z · LW(p) · GW(p)
I've never heard of a carnivore who thought meat eating was morally better
By sheerest coincidence, I just tabbed over from precisely that argument offsite. The arguments in favor of meat-eating struck me as rather confused (an odd quasi-Nietzschean will-to-power thing mixed with biological determinism, as best I can tell), but they were moral arguments and they were in favor of carnivory.
I'd expect that sort of thing to be rather rare, though. The mainstream position does seem to be that it simply isn't a moral issue.
↑ comment by Jayson_Virissimo · 2012-02-23T11:29:44.906Z · LW(p) · GW(p)
I don't think that's a good analogy. I've never heard of a carnivore who thought meat eating was morally better. Their argument is that meat eating is not so bad that it becomes an ethical no-no; rather, it is an ethically neutral lifestyle choice. (Morally level ground.)
I have. The argument went something like this:
- For humans, an action that is natural for humans is more moral than an act that is not natural for humans, all else equal.
- For humans, eating (some) meat is natural.
- Therefore, for humans, eating (some) meat is more moral than not eating (some) meat, all else equal.
↑ comment by Peterdjones · 2012-08-15T16:16:00.914Z · LW(p) · GW(p)
Presumably they hunt their own meat...going to the supermarket is pretty unnatural.
↑ comment by anon895 · 2011-06-28T00:37:14.049Z · LW(p) · GW(p)
I've never heard of a carnivore who thought meat eating was morally better.
I suspect that you either haven't looked very hard or very long.
Replies from: Peterdjones
↑ comment by MBlume · 2012-03-25T20:58:54.559Z · LW(p) · GW(p)
I've never heard of a carnivore who thought meat eating was morally better.
Katja Grace claimed to me that being a total utilitarian led her to prefer eating meat, since eating animals creates a reason for the animals to exist in the first place, and she imagines they'd prefer to exist for a while and then be slaughtered rather than not exist at all.
I tend to hang out in the average utilitarian camp, so that one didn't move me much. On the other hand:
Oh, you want utilitarian logic? One serving of utilitarian logic coming up: Even in the unlikely chance that some moron did manage to confer sentience on chickens, it's your research that stands the best chance of discovering the fact and doing something about it. If you can complete your work even slightly faster by not messing around with your diet, then, counterintuitive as it may seem, the best thing you can do to save the greatest number of possibly-sentient who-knows-whats is not wasting time on wild guesses about what might be intelligent. It's not like the house elves haven't prepared the food already, regardless of what you take onto your plate.
Harry considered this for a moment. It was a rather seductive line of reasoning -
Good! said Slytherin. I'm glad you see now that the most moral thing to do is to sacrifice the lives of sentient beings for your own convenience, to feed your dreadful appetites, for the sick pleasure of ripping them apart with your teeth -
What? Harry thought indignantly. Which side are you on here?
His inner Slytherin's mental voice was grim. You too will someday embrace the doctrine... that the end justifies the meats. This was followed by some mental snickering.
I'm pretty sure that the maximally healthy diet for me contains meat, that I can be maximally effective in my chosen goals when maximally healthy, and that my likely moral impact on the world makes sacrifices on the order of a cow per year (note that cows are big and hamburgers are small) look like a rounding error.
↑ comment by byrnema · 2011-06-27T20:28:48.047Z · LW(p) · GW(p)
Yes, your example is outside my model. If snarf is morally acceptable to some and abhorrent to others, this represents a moral conflict. My offered 'strategy' only applies to cases where one party is neutral and the other party cares; then you may have a chance of considering the caring party the more morally sensitive one.
Similarly, meat eaters would probably not agree that vegetarians hold the high ground.
Really? What if it were suddenly possible to harvest livestock without neural systems? If this were possible (and for the least convenient world, assume that such meat was understood to be just as healthy for consumption), do you predict that many people (even current meat eaters) would prefer farming the non-sapient livestock? If so, this would show that many people believe that the suffering of livestock has some moral weight -- but perhaps currently offset by other considerations.
Replies from: None
↑ comment by [deleted] · 2011-06-27T20:44:34.905Z · LW(p) · GW(p)
In the situation you describe, some meat eaters might still want to eat sapient livestock because some people consider "natural" behaviors to be moral. But there's no reason to belabor the point--what I was trying to convey above was this: most-moral-minority sometimes oppresses the minority (e.g. the snarf example above) and impedes moral progress. There are certainly cases where it does work, but there are also borderline cases like this one and negative cases like snarf.
↑ comment by Peterdjones · 2011-06-27T22:26:27.499Z · LW(p) · GW(p)
I think moral progress is about a more sensitive, and more (or at least averagely) persuasive, morality impressing itself over time. Suffering in silence never changed anything. But if at least one aspect of progress is leading toward reason and rationality, moral progress can be built on top of that, because minorities can then make a reasoned case in a way that doesn't depend on force of numbers or any other kind of force.
Replies from: byrnema
↑ comment by byrnema · 2011-06-27T23:37:03.766Z · LW(p) · GW(p)
OK, this is fair in the case of a moral issue where persuasion is possible (for example, with instrumental rather than terminal values, when the terminal values are the same). But when moral values are just different, there is no persuading that can be done, aside from persuasion along the lines I made in my post.
Replies from: Peterdjones
↑ comment by Peterdjones · 2011-06-28T00:46:37.871Z · LW(p) · GW(p)
I think moral norms can be rearranged on the basis of rational norms.
Replies from: byrnema
↑ comment by byrnema · 2011-06-28T02:31:30.675Z · LW(p) · GW(p)
What do you mean? (I'm not sure what is meant by 'rearranged' or 'rational norm'.)
Replies from: Peterdjones
↑ comment by Peterdjones · 2011-06-28T12:31:34.913Z · LW(p) · GW(p)
"if you are in favour of X, then to be consistent [rational norm], you should be in favour of Y"
Replies from: byrnema