Two arguments for not thinking about ethics (too much)
post by Kaj_Sotala · 2014-03-27T14:15:58.209Z
I used to spend a lot of time thinking about formal ethics, trying to figure out whether I was leaning more towards positive or negative utilitarianism, about the best courses of action in light of the ethical theories that I currently considered the most correct, and so on. From the discussions that I've seen on this site, I expect that a lot of others have been doing the same, or at least something similar.
I now think that doing this has been more harmful than useful, for two reasons: there's no strong evidence that it will give us much insight into our preferred ethical theories, and, more importantly, thinking in those terms easily leads to akrasia.
1: Little expected insight
This seems like a relatively straightforward inference from all the discussion we've had about complexity of value and the limits of introspection, so I'll be brief. I think that attempting to come up with a verbal formalization of our underlying logic and then doing what that formalization dictates is akin to "playing baseball with verbal probabilities". Any introspective access we have into our minds is very limited, and at best, we can achieve an accurate characterization of the ethics endorsed by the most verbal/linguistic parts of our minds. (At least at the moment; future progress in moral psychology or neuroscience may eventually change this.) Because our morals are also derived from parts of our brains to which we don't have such access, our theories will unavoidably be incomplete. We are also prone to excessive rationalization when it comes to thinking about morality: see Joshua Greene and others for evidence suggesting that much of our verbal reasoning is actually just post-hoc rationalization of underlying moral intuitions.
One could try to make the argument from Dutch Books and consistency, and claim that if we don't explicitly formulate our ethics and work out possible contradictions, we may end up doing things that work at cross-purposes. E.g. maybe my morality says that X is good, but I don't realize this and therefore end up doing things that go against X. This is probably true to some extent, but I think that evaluating the effectiveness of various instrumental approaches (e.g. the kind of work that GiveWell is doing) is much more valuable for people who have at least a rough idea of what they want, and that the kinds of details that formal ethics focuses on (including many of the discussions on this site, such as this post of mine) are akin to trying to calculate something to the 6th digit of precision when our instruments only measure things at 3 digits of precision.
To summarize this point, I've increasingly come to think that living one's life according to the judgments of any formal ethical system gets it backwards: any such system is just a crude attempt at formalizing our various intuitions and desires, and such systems are mostly useless in determining what we should actually do. To the extent that the things I do resemble the recommendations of utilitarianism (say), it's because my natural desires happen to align with utilitarianism's recommended courses of action; if I say that I lean towards utilitarianism, it just means that utilitarianism produces the fewest recommendations that conflict with what I would want to do anyway.
2: Leads to akrasia
Trying to follow the formal theories can be actively harmful to pretty much any of the goals we have, because the theories and formalizations that the verbal parts of our minds find intellectually compelling are different from the ones that actually motivate us to action.
For example, Carl Shulman comments on why one shouldn't try to follow utilitarianism to the letter:
As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.
Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.
Even if one avoided that particular failure mode, there remains the more general problem that very few people find it easy to be generally motivated by things like "what does this abstract ethical theory say I should do next". Rather, they are motivated by e.g. a sense of empathy and a desire to prevent others from suffering. But if we focus too much on constructing elaborate ethical theories, it becomes much too easy to start thinking excessively in terms of "what would this theory say I should do" and forget entirely about the original motivation that led us to formulate the theory in the first place. Then, because an abstract theory isn't intrinsically compelling in the same way that an empathetic concern over suffering is, we end up with a feeling of obligation that we should do something (e.g. some concrete action that would reduce the suffering of others), but not an actual intrinsic desire to really do it. This leads to actions that optimize towards the goal of making that feeling of obligation go away, rather than towards the actual goal, which can manifest as things such as excessive procrastination. (See also this discussion of how "have-to" goals require willpower to accomplish, whereas "want-to" goals are done effortlessly.)
The following is an excerpt from Trying Not To Try by Edward Slingerland that makes the same point, discussing the example of an ancient king who thought himself selfish because he didn't care about his subjects, but who did care about his family, and who spared the life of an ox when he couldn't bear to see its distress as it was about to be slaughtered:
Mencius also suggests trying to expand the circle of concern by beginning with familial feelings. Focus on the respect you have for the elders in your family, he tells the king, and the desire you have to protect and care for your children. Strengthen these feelings by both reflecting on them and putting them into practice. Compassion starts at home. Then, once you’re good at this, try expanding this feeling to the old and young people in other families. We have to imagine the king is meant to start with the families of his closest peers, who are presumably easier to empathize with, and then work his way out to more and more distant people, until he finally finds himself able to respect and care for the commoners. “One who is able to extend his kindness in this way will be able to care for everyone in the world,” Mencius concludes, “while one who cannot will find himself unable to care for even his own wife and children. That in which the ancients greatly surpassed others was none other than this: they were good at extending their behavior, that is all.”
Mencian wu-wei cultivation is about feeling and imagination, not abstract reason or rational arguments, and he gets a lot of support on this from contemporary science. The fact that imaginative extension is more effective than abstract reasoning when it comes to changing people’s behavior is a direct consequence of the action-based nature of our embodied mind. There is a growing consensus, for instance, that human thought is grounded in, and structured by, our sensorimotor experience of the world. In other words, we think in images. This is not to say that we necessarily think in pictures. An “image” in this sense could be the feeling of what it’s like to lift a heavy object or to slog in a pair of boots through some thick mud. [...]
Here again, Mencius seems prescient. The Mohists, like their modern utilitarian cousins, think that good behavior is the result of digital thinking. Your disembodied mind reduces the goods in the world to numerical values, does the math, and then imposes the results onto the body, which itself contributes nothing to the process. Mencius, on the contrary, is arguing that changing your behavior is an analog process: education needs to be holistic, drawing upon your embodied experience, your emotions and perceptions, and employing imagistic reflection and extension as its main tools. Simply telling King Xuan of Qi that he ought to feel compassion for the common people doesn’t get you very far. It would be similarly ineffective to ask him to reason abstractly about the illogical nature of caring for an ox while neglecting real live humans who are suffering as a result of his misrule. The only way to change his behavior—to nudge his wu-wei tendencies in the right direction—is to lead him through some guided exercises. We are analog beings living in an analog world. We think in images, which means that both learning and teaching depend fundamentally on the power of our imagination.
In his popular work on cultivating happiness, Jonathan Haidt draws on the metaphor of a rider (the conscious mind) trying to work together with and tame an elephant (the embodied unconscious). The problem with purely rational models of moral education, he notes, is that they try to “take the rider off the elephant and train him to solve problems on his own,” through classroom instruction and abstract principles. They take the digital route, and the results are predictable: “The class ends, the rider gets back on the elephant, and nothing changes at recess.” True moral education needs to be analog. Haidt brings this point home by noting that, as a philosophy major in college, he was rationally convinced by Peter Singer’s arguments for the moral superiority of vegetarianism. This cold conviction, however, had no impact on his actual behavior. What convinced Haidt to become a vegetarian (at least temporarily) was seeing a video of a slaughterhouse in action—his wu-wei tendencies could be shifted only by a powerful image, not by an irrefutable argument.
My personal experience of late has also been that thinking in terms of "what does utilitarianism dictate I should do" produces recommendations that feel like external obligations, "shoulds" that are unlikely to get done; whereas thinking about e.g. the feelings of empathy that motivated me to become utilitarian in the first place produces motivations that feel like internal "wants". I was very close to (yet another) burnout and serious depression some weeks back: a large part of what allowed me to avoid it was that I stopped asking the question of what I should do entirely, and began to focus entirely on what I want to do, including the question of which of my currently existing wants are ones that I'd wish to cultivate further. (Of course there are some things, like doing my tax returns, that I do have to do despite not wanting to, but that's a question of necessity, not ethics.) It's too early to say whether this will actually lead to increased productivity in the long term, but it feels great for my mental health, at least for the time being.
Edited to add in April 2017: A brief update on the consequences of this shift in thought, three years later.
Comments
comment by Qiaochu_Yuan · 2014-03-27T16:38:31.129Z
Agreed. In general, I think a lot of the discussion of ethics on LW conflates ethics-for-AI with ethics-for-humans, which are two very different subjects and which should be approached very differently (e.g. I think virtue ethics is great for humans but I don't even know what it would mean to make an AI a virtue ethicist).
↑ comment by Lukas_Gloor · 2014-03-31T09:49:03.966Z
If I paraphrased your position as "In order to act well according to some consequentialist goals, it makes sense for humans to follow a virtue ethical decision-procedure?", would you agree?
↑ comment by Qiaochu_Yuan · 2014-05-05T03:58:38.274Z
Sure.
comment by NancyLebovitz · 2014-03-27T15:53:48.099Z
It seems to me that people who do a lot of good in the world tend to work on a specific cause which is close to their heart (Martin Luther King, Gandhi) or do professional work that helps a lot of people solve a particular problem (Borlaug and the Green Revolution).
In addition to your point about motivation (which I agree with), getting too abstract discourages people from getting the advantages of specialization.
comment by Squark · 2014-03-27T19:38:34.437Z
Maybe the best approach is separating time scales. On a short time scale, one should make ethical choices emotionally, since emotions are powerful motivators. On a long time scale, one should gradually calibrate one's emotional responses based on more abstract "cold" reasoning. For example, if your abstract reasoning led you to conclude that eating animals is wrong, you can look for ways to develop a negative emotional response to it. Essentially, it's a dark arts technique. Of course, the "cold" reasoning still has to be grounded in emotional intuition: after all, there's nothing else to ground it in.
comment by whales · 2014-03-28T05:13:18.525Z
I think that attempting to come up with a verbal formalization of our underlying logic and then doing what that formalization dictates is akin to "playing baseball with verbal probabilities"...
I wonder if the extent to which one thinks in words is anti-correlated with sharing that intuition.
I'm a mostly non-verbal thinker and strongly in favor of your arguments. On the other hand, I once dismissed the idea of emotional vocabulary, feeling that it was superfluous at best, and more likely caused problems via reductive, cookie-cutter introspection. Why use someone else's fixed terminology for my emotional states, when I have perfectly good nonverbal handles on them? I figured out later that some people have trouble distinguishing between various versions of "feeling bad" (for example), and that linguistic handles can be really helpful for them in understanding and responding to those states. (That also moved me favorably towards supplementing my own introspection with verbal labels.)
I don't think that kind of difference really bears on your arguments here, but I wouldn't be surprised if there were a typical-mind thing going on in the distribution of underlying intuitions.
comment by buybuydandavis · 2014-03-28T02:24:15.844Z
I stopped entirely asking the question of what I should do, and began to focus entirely on what I want to do
The utilitarian comes to Stirner's egoism at last! Welcome, brother! All hail Saint Max!
When one starts fussing over "what shoulds should I have?", one has gone tail-biting insane.
Some of your wants are moral wants. They're not that much more mysterious than your "yummy wants". Morality is another set of preferences you have. Once they're no longer "rational truths" for you to calculate, but some of the many preferences you have, you can get down to satisfying them just as you satisfy your yummy preferences.
I think rationalist moral tail-biting leads to akrasia because calculating isn't how one satisfies or even experiences moral preferences. It largely displaces focus on your moral preferences. The proof of the pudding lies in the tasting, not in some rationalist calculation of how good it is going to taste.
comment by blacktrance · 2014-03-27T16:31:03.304Z
I can personally attest that thinking about ethics has significantly affected my life and has given me a lot of insight.
My personal experience of late has also been that thinking in terms of "what does utilitarianism dictate I should do" produces recommendations that feel like external obligations
This is a problem only if you assume that morality is external, as utilitarianism, Kantianism, and similar ethical systems are. If you take an internal approach to morality, as in contractarianism, virtue ethics, and egoism, this isn't a problem.
↑ comment by scientism · 2014-03-27T22:22:14.827Z
Yes, when I gave up consequentialism for virtue ethics it was both a huge source of personal insight and led to insights into politics, economics, management, etc. I'm of the belief that the central problem in modern society is that we inherited a bad moral philosophy and applied it to politics, management, the economy, personal relationships, etc.
↑ comment by mwengler · 2014-03-30T15:00:51.597Z
I'm of the belief that the central problem in modern society is that we inherited a bad moral philosophy
So you gave up consequentialism because virtue ethics had better consequences?
↑ comment by Lukas_Gloor · 2014-03-31T09:52:59.122Z
Exactly, the way people talk about this on LW confuses me. I think I agree with everything, but it is framed in a weird way.
↑ comment by blacktrance · 2014-03-27T23:50:17.998Z
I don't think that "modern society" has anything coherent enough to be called a moral philosophy. Scattered moral intuitions, perhaps, but not a philosophy.
Also, virtue ethics and consequentialism are orthogonal. I'm a virtue ethicist and a consequentialist.
↑ comment by mwengler · 2014-03-30T15:06:02.505Z
anti-slavery, equal pay for equal work, laws should not limit freedom unnecessarily, laws should not apply to groups based on irrelevant differences like skin color, religion, gender, the poor should be helped...
All of these seem to me to be components of modern society's moral philosophy. I think it could be said that there is not universal sign-on or agreement to the components of society's moral philosophy, but I don't think that negates the rich content. Plus there are many things where sign-on seems nearly universal, anti-slavery for instance.
↑ comment by blacktrance · 2014-03-31T17:47:00.797Z
Those are outputs of a moral philosophy, not components of one. Or they're freely floating moral intuitions, which is usually the case.
↑ comment by mwengler · 2014-04-01T15:31:21.628Z
The presence of modern outputs of a moral philosophy would seem to suggest the existence of a modern moral philosophy.
Or if they are free-floating intuitions, it is remarkable how they seem consistent with a fairly complicated view in which individual humans have great value and significant rights against the collective or other random strangers. And how different this modern view of the individual is from that of hundreds of years ago or more, when moral theories revolved around various authoritarian institutions or supernatural beings rather than individuals.
↑ comment by blacktrance · 2014-04-01T17:34:34.172Z
The presence of modern outputs of a moral philosophy would seem to suggest the existence of a modern moral philosophy.
The presence of outputs of a moral philosophy need not mean that the moral philosophy is still present, or even that there's one moral philosophy. For example, imagine a world in which for centuries, the dominant moral philosophy was that of fundamentalist Christianity, and popular moral ideas are those derived from it. Then there is a rise in secularism and Christianity retreats, but many of its ideas remain, disconnected from their original source. Temporarily, Christianity is replaced by some kind of egalitarianism, which produces some of its own moral ideas, but then it retreats too, and then there isn't any dominant moral philosophy. You would find that in such a society, there would be various scattered ideas that can be traced back to Christianity or egalitarianism, even though it's possible that no one would be a Christian or an egalitarian.
if they are free floating intuitions, it is remarkable how they seem consistent with a fairly complicated view where the individual humans have great value and significant rights against the collective or other random strangers
If you put it that broadly, it's not concrete enough to be called a moral philosophy. Utilitarians, Kantians, and others would all agree that individuals have great value, but they're very different moral philosophies.
↑ comment by Shmi (shminux) · 2014-03-27T23:05:23.439Z
when I gave up consequentialism for virtue ethics
It's not either/or and no, you haven't, not completely.
↑ comment by Lukas_Gloor · 2014-03-31T09:51:14.109Z
Agreed, but I'd like to point out that this is a false dichotomy: Utilitarianism can be the conclusion when following an internal approach. And seen that way, it doesn't feel like you need to pressure yourself to follow some external standard. You simply need to pressure yourself to follow your own standard, i.e. make the best of akrasia, addictions, and the sub-utility functions of your non-rational self that you would choose to get rid of if you had a magic pill that could do so.
↑ comment by blacktrance · 2014-03-31T17:45:24.425Z
By "utilitarianism" I meant classical normative utilitarianism, i.e. utilitarianism is correct even if you don't like it, even if you hate following it, and regardless of what a moral agent wants or likes, they should maximize world utility. Then utilitarianism has to be an external standard. The LW usage of the term is at odds with standard usage.
comment by moridinamael · 2014-03-27T14:51:12.065Z
I knew I was in trouble a year or so ago when I found that I was internally using consequentialism to justify not doing things to help other people on the grounds that it would cost too much of my time and energy. The problem with this is that the lens of your mood, energy level, what you ate for lunch, how much sleep you got, etc., completely transforms the calculation of whether the task being asked of you is arduous and burdensome or reasonable and net-positive-utility to execute.
It was very disorienting to realize that I can't be trusted to do consequentialism in my personal life. I think the community-endorsed alternative is virtue ethics, but that is a vastly less clear prescription than consequentialism. Pick your poison.
comment by torekp · 2014-03-27T17:10:06.549Z
Mill and Sidgwick both responded to something like your point 2 by agreeing that motivations don't correspond to utilitarian math. But they didn't think that was a fatal problem. They recommended that we start where we actually are, motivationally speaking, and move in the recommended direction. In modern terms they were "indirect utilitarians", e.g. Mill is sometimes viewed as a rule-utilitarian. Not that I want to defend utilitarianism, but you may be overlooking more steelmanned versions.
I basically agree with your point 1, at least when it comes to normative ethical system-building.
comment by Giles · 2014-03-27T21:34:44.984Z
I got the same piece of advice - to think about things in terms of "wants" rather than "shoulds" or "have tos" - from someone outside the LW bubble, but in the context of things like doing my tax returns.
↑ comment by Kaj_Sotala · 2014-03-28T08:08:54.781Z
Yeah, as a matter of fact I've been applying the "think in terms of wants" to things like tax returns as well, but I didn't want to get into that since it'd have been a different topic.
↑ comment by Sabiola (bbleeker) · 2014-03-28T20:26:43.353Z
Please make a post about that too!
↑ comment by Kaj_Sotala · 2014-03-29T11:08:51.990Z
It's a little hard, since a lot of what I've learned about it is in the form of knowledge that's not very easy to communicate verbally. But I'll try, at least assuming that the positive effects keep up for long enough. (I have a rule of waiting for at least a month to see whether such effects really do persist, or whether it's just the initial excitement of finding a new thing that does it.)
comment by mwengler · 2014-03-30T15:24:59.381Z
there's no strong evidence that it will give us much insight into our preferred ethical theories,
It seems generally that you have decided that formal, logical, philosophical ethics is a map, and that the territory is our feelings. Would you agree?
I do think that all of philosophical ethics is an attempt to make a map and that the territory is our intuitions. Moral philosophies are deductive systems with the rules deduced from moral statements which are only "true" to the extent that they represent how we feel about something.
A major problem with mapping the territory of our feelings is that our feelings don't have the kind of consistency that lends itself easily to map-making. If one day I feel it is wrong to steal a candy bar, but the next day I feel it is wrong to deny me a candy bar, well, that is how the feelings go. A "map" of that territory might try to tell me that one or the other of those feelings is wrong, but to the extent it does that, it will not actually be a very good map.
We might certainly get caught in discussions of which of various imperfect mapping schemes is the best, but by what good reason can we deny we are mapping intuitions and that at various times my intuitions are at odds?
We are driven to be very obviously moral; according to many studies on the matter, it is more important to be seen to be moral than to actually be moral. And that drive is not something we are generally conscious of. It is hard for me to see a commitment to any kind of moral philosophy as much more than this: taking a public posture that clearly sets you up as appearing to be gigantically moral. To be so moral that you pursue moral rules to the most ethereal logical absurdities and then very publicly try to live that way.
I suppose one might say that my study of the maps of moral systems has led me to conclude that none of them are very good, and that I don't really need one, because the intuitions I have with such a map are as much at odds with each other as the intuitions I have with no map at all. So my moral theory is to do what feels right at any given moment and not waste too much effort on looking for something deeper.
comment by [deleted] · 2016-03-09T14:57:21.480Z
I'm officially renouncing Effective Altruism for Ethical Egoism. But I'm also renouncing ethical egoism for this non-partisan approach to meta-ethics :)
comment by [deleted] · 2014-06-13T06:20:14.453Z
Distress? What distress?
Theory: "ABSTRACT ABSTRACT. Studies of empathy and empathy-related responding show that while some people respond to observing the suffering of another with a prosocial concern and urge to help the suffering person, others have an aversive, avoidant response that is primarily self-focused and aimed toward relieving their own distress rather than helping the other person. This self-focused response, labeled personal distress, is associated with various social and psychological problems. This article discusses the concept of personal distress and describes a study of licensed clinical social workers (n = 171) that examines the relationship of personal distress and three other aspects of the empathy construct with compassion fatigue, burnout, and compassion satisfaction. Results of ordinary least squares multiple regression analyses indicate that the model of empathy components and control variables explain 20% to 23% of the variance in the dependent variables. Personal distress is the only component of the empathy construct with significant associations with the dependent variables. Higher personal distress is associated with higher compassion fatigue and burnout and lower compassion satisfaction among clinical social workers. Implications for future research and for social work education are discussed."
Application: Try to focus on the other person's pain instead of your own to feel better.