Moral intuitions are surprisingly variable
post by Bob Baker · 2020-12-26T19:50:41.721Z · LW · GW
When building moral frameworks, moral intuitions are “our first and only source of data”. Quoting further from the Consequentialism FAQ:
Searching for moral rules means searching for principles that correctly describe and justify enough of our existing moral intuition that we feel confident applying them to decide edge cases.
The results of these searches are often proposed as universal rules. The implicit justification for this is the claimed near-universality of the underlying moral intuitions:
There are many moral situations where nearly everyone agrees on the correct answer, even though we're not exactly sure why.
I suggest that more skepticism is warranted for such claims, because research shows a surprising diversity of moral intuitions once a wider range of cultures is sampled.
For example, “everybody” probably believes that intentions are important. Murder is worse than manslaughter. Sabotage is worse than clumsiness.
Barrett et al. presented one story from a matched pair to members of a diverse set of cultures. The harm inflicted was identical in both stories, but in one it was an accident and in the other it was intentional. In Los Angeles, as we would expect, people judged intentional harm much more severely. However, one of the nine other cultures tested, in Fiji, judged exclusively on consequences; in another, the difference was below the significance threshold.
Any moral framework that gives weight to intention and claims universality should at least pause and consider that result.
Even the degree to which people respect impartial rules varies. Hampden-Turner and Trompenaars conducted extensive research on cultural differences, including asking whether it was acceptable to lie under oath to protect a friend who had injured someone through reckless driving. (Also described in this open publication.) English-speaking countries fall into the most impartial-rule-abiding set on this measure; countries like Russia and China score much lower.
Modern Western moral frameworks are nearly always impartial, ignoring special considerations for people nearby in social or family networks. But we don't usually get to see Chinese arguments about deontology. What would moral rules look like in a culture where honoring ancestors was a high virtue? What about cultures, such as Japan, where morality may be judged based on fidelity to social norms rather than an individual's actions in isolation?
To avoid these objections, moral frameworks could relax their claims to universality. But it's not as simple as warning that the rules may not apply in Fiji (or China): these are not binary differences; they vary continuously across cultures.
Moral intuitions must also be balanced against one another, because they may conflict:
we must reach a reflective equilibrium among our various moral intuitions, which may end up assigning some intuitions more or less weight than others, and debunking some of them entirely.
But if the weights vary across the world then so, presumably, would the resulting rules.
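As a toy illustration of that last step (mine, with invented weights and numbers, not taken from any of the studies above): treat a culture's judgment of an act as a weighted combination of an "intent matters" intuition and an "outcome matters" intuition, and the same pair of stories comes out differently under different weight vectors.

```python
# Toy model: judged severity of a harmful act as a weighted
# combination of two intuitions. All weights and numbers are
# invented for illustration; nothing here is from Barrett et al.

def judged_severity(weights, intentional, harm):
    """Weighted severity score for an act.

    weights: dict with 'intent' and 'outcome' weights.
    intentional: 1.0 if the harm was deliberate, 0.0 if accidental.
    harm: magnitude of the harm (identical in both stories).
    """
    return weights["intent"] * intentional * harm + weights["outcome"] * harm

los_angeles = {"intent": 0.7, "outcome": 0.3}   # intent weighs heavily
fiji_sample = {"intent": 0.0, "outcome": 1.0}   # consequences only

harm = 10.0
for name, w in [("Los Angeles", los_angeles), ("Fiji sample", fiji_sample)]:
    accidental = judged_severity(w, intentional=0.0, harm=harm)
    deliberate = judged_severity(w, intentional=1.0, harm=harm)
    print(f"{name}: accidental={accidental:.1f}, intentional={deliberate:.1f}")

# Los Angeles: accidental=3.0, intentional=10.0 -> intent changes the verdict
# Fiji sample: accidental=10.0, intentional=10.0 -> identical judgments
```

Same case, same harm, different weights, different verdicts; a rule fit to one weight vector won't reproduce the other culture's judgments.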
One could also point, though hopefully with some hesitation, at cultural evolution. Western cultures have come to dominate the world, and Steven Pinker marshals a body of data to argue that this has been a positive thing. Perhaps the cultures that inculcate the moral intuitions that inform certain moral frameworks have proven themselves by delivering historically exceptional peace and material wealth. But that is a very different argument than the claim that “everybody” agrees about certain ground truths.
For a survey of all these results, plus much more, see Henrich 2020. The Consequentialism FAQ is linked via archive.org because the original domain has lapsed.
5 comments
comment by MSRayne · 2020-12-26T23:54:22.596Z · LW(p) · GW(p)
Even the idea that variations in moral intuition matter is probably nowhere near universal itself. After all, most cultures think their moral values are the True ones and don't care about any others. I'm not sure what to do with that fact, but it's something I noticed.
comment by Charlie Steiner · 2021-01-08T07:13:32.722Z · LW(p) · GW(p)
This is definitely an interesting topic (I recently listened to an interview with Joseph Henrich, author of The WEIRDest People in the World). It's a serious problem if you're trying to find a unique True Morality and ground it in human intuitions, but if we've managed to move past that then there's still an interesting point here; the philosophical problem just gets turned inwards.
Of course different humans are allowed to prefer different things, and even to have different preferences about preference aggregation. So if you're trying to build some decision-making procedure that aggregates human preferences, rather than being able to delegate the choice of how to do that to some unique True Meta-Morality, you have to do the philosophical work of figuring out what you want out of this aggregation process.
comment by adamShimi · 2020-12-27T22:06:48.655Z · LW(p) · GW(p)
Two things come to mind after reading this post:
- Your point probably depends on the thinker's take on moral realism, because in a world where some things are inherently more moral, there is a way to choose between competing intuitions.
- I guess moral uncertainty could be a way to deal with the problem you point to: applying it to intuitions, maybe, to get a sort of criterion that mixes multiple ones.
↑ comment by Charlie Steiner · 2021-01-08T07:45:16.247Z · LW(p) · GW(p)
I really need to get around to writing more anti-moral-uncertainty posts :P
What it functionally amounts to here is aggregating different people's preferences via linear combination. And this is fine! But it's not totally unobjectionable: some humans may in fact object to it (not just to the particular weights, which are of course subjective). So moral uncertainty isn't a solution to meta-moral disagreement; it's a framework you can use only after you've already resolved it to your own satisfaction and decided that you want to aggregate linearly.
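Concretely, linear aggregation means something like this (toy numbers; the people, options, utilities, and weights are all invented for illustration):

```python
# Sketch of preference aggregation via linear combination: each
# person assigns a utility to each option; the aggregate utility is
# a weighted sum, and the procedure picks the option maximizing it.

options = ["policy_a", "policy_b"]

# Per-person utilities over the options (invented).
utilities = {
    "alice": {"policy_a": 1.0, "policy_b": 0.0},
    "bob":   {"policy_a": 0.0, "policy_b": 1.0},
}

def aggregate(weights):
    """Pick the option with the highest weighted sum of utilities."""
    def total(option):
        return sum(w * utilities[person][option] for person, w in weights.items())
    return max(options, key=total)

print(aggregate({"alice": 0.6, "bob": 0.4}))  # policy_a
print(aggregate({"alice": 0.4, "bob": 0.6}))  # policy_b

# The aggregation rule itself (linear, with these weights) is a choice
# the designer makes; someone can coherently object to the rule, not
# just to the weights.
```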
↑ comment by Bob Baker · 2020-12-28T03:23:28.241Z · LW(p) · GW(p)
I think that personal choices about morality are unaffected by the fact that significantly different cultures exist. Perhaps they call for a soupçon more humility, but your moral intuitions remain axiomatic for you.
Rather, I think the adjustment needed in some cases is a greater weight on the idea that your moral intuitions are significantly shaped by the culture you found yourself in, and that the scope of possibilities is wide.
Perhaps this has little practical impact because, though your axioms might be more arbitrary than supposed, you have little choice but to use them. But there will exist people shaped by very different cultures, who formed different rules, and it's not clear that there's necessarily any ground for debate; the desire for universal morality might be hopeless.
(Or perhaps communications technology will cause Earth to tend towards a singular culture, giving grounds for a morality universal to all, at least until aliens or disconnected space colonies.)