SBF's comments on ethics are no surprise to virtue ethicists
post by c.trout (ctrout) · 2022-12-01T04:18:25.877Z · LW · GW · 30 comments
EDIT: Replaced the term "moral schizophrenia" with "internal moral disharmony" since the latter is more accurate and just.
DISCLAIMER: Although this is a criticism of the LW/EA community, I offer it in good faith. I don't mean to "take down" the community in any way. You can read this as a hypothesis for at least one cause of what some have called EA's emotions problem. [EA · GW] I also offer suggestions on how to address it [LW · GW]. Relatedly, I should clarify that the ideals I express (regarding how much one should feel vs how much one should be doing cold reasoning in certain situations) are just that: ideals. They are simplified, generalized recommendations for the average person. Case by case recommendations are beyond the scope of this post. (Nor am I qualified to give any!) But obviously, for example, those who are neurodivergent (e.g. have Asperger's) shouldn't be demeaned for not conforming to the ideals expressed here. Likewise though, it would be harmful to encourage those who are neurotypical to try to conform to an ideal better suited for someone who is neurodivergent: I do still worry we have "an emotions problem" in this community.
In case you missed it, amid the fallout from FTX's collapse, its former CEO and major EA donor Sam Bankman-Fried (SBF) admitted that his talk of ethics was "mostly a front," describing it as "this dumb game we woke Westerners play where we say all the right shibboleths and everyone likes us," a game in which the winners decide what gets invested in and what doesn't. He has since claimed that this was exaggerated venting intended for a friend, not the wider public. But still... yikes.
He also maintains that he did not know Alameda Research (the crypto hedge fund heavily tied to FTX and owned by SBF) was over-leveraged, and that he had no intention of doing anything sketchy like investing customers' deposits. In an interview yesterday, he generally admitted to negligence [? · GW] but nothing more. Regarding his ignorance and his intentions, he might be telling the truth. Suppose he is: suppose he never condoned doing sketchy things as a means he could justify by some expected greater good. Where then is the borderline moral nihilism coming from? Note that what he described as a mere means to an end was saying "all the right shibboleths," not doing sketchy things.
In what follows I will speculate about what might have been going on in SBF's head, in order to make a much higher confidence comment about the LW/EA community in general. Please don't read too much into the armchair psychological diagnosis from a complete amateur – that isn't the point. The point, to lay my cards on the table, is this: virtue ethicists would not be surprised if LW/EA people suffer (in varying degrees) from an internal disharmony between their reasons and their motives at higher rates than the general population. This is a form of cognitive dissonance that can manifest itself in a number of ways, including (I submit) flirtations with Machiavellian attitudes towards ethics. And this is not good. To explain this, I first need to lay some groundwork about normative ethics.
Virtue Ethics vs. Deontology vs. Consequentialism
Yet Another Absurdly Brief Introduction to Normative Ethics (YAABINE)
The LW/EA forums are littered with introductions, of varying quality and detail, to the three major families of normative ethical theories. Here [LW · GW] is one from only two weeks ago. As such, my rendition of YAABINE will be even briefer than usual, focusing only on theories of right action. (I encourage checking out the real deal though: here are SEP's entries on virtue ethics, deontology and consequentialism).
Virtue Ethics (VE): the rightness and wrongness of actions are judged by the character traits at the source of the action. If an action "flows" from a virtue, it is right; from a vice, wrong. The psychological setup (e.g. motivation) of the agent is thus critical for assessing right and wrong. Notably, the correct psychological setup often involves not excessively reasoning: VE is not necessarily inclined towards rationalism. (Much more on this below).
Deontological Ethics (DE): the rightness and wrongness of actions are judged by their accordance with the duties/rules/imperatives that apply to the agent. The most well known form of DE is Kantian Ethics (KE). Something I have yet to see mentioned on LW is that, for Kant, it's not enough to act merely in accordance with moral imperatives: one's actions must also result from sound moral reasoning about the imperatives that apply. KE, unlike VE, is very much a rationalist ethics.
Consequentialism: the rightness and wrongness of actions are judged solely by their consequences – their net effect on the amount of value in the world. What that value is, where it is, whether we're talking about expected or actual effects, direct or indirect – these are all choice points for theorists. As very well put by Alicorn [LW · GW]:
"Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act consequentialism".
Finally, a quick word on theories of intrinsic value (goodness) and how they relate to theories of right action (rightness): conceptually speaking, much recombination is possible. For example, you could explain both goodness and rightness in terms of the virtues, forming a sort of Fundamentalist VE. Or you could explain goodness in terms of human flourishing (eudaimonia), which you in turn use to explain a virtue ethical theory of rightness – by arguing that human excellence (virtue) is partially constitutive of human flourishing. That would form a Eudaimonic VE (a.k.a. Neo-Aristotelian VE). Note that under this theory, a world with maximal human flourishing is judged to be maximally good, but the rightness and wrongness of our actions are not judged based on whether they maximize human flourishing!
Those are standard combinations but, prima facie, there is nothing conceptually incoherent about unorthodox recombinations like a Hedonistic VE (goodness = pleasure, and having virtues is necessary for/constitutive of pleasure), or Eudaimonic Consequentialism (goodness = eudaimonia, and rightness = actions that maximize general eudaimonia). The number of possible positions further balloons as you distinguish more phenomena and try to relate them together. There are, for example, many different readings of "good" and different categories of judgement (e.g. judging whole lives vs states of affairs at given moments in time; judging public/corporate policies vs character traits of individuals; judging any event vs specifically human actions). The normative universe is vast, and things can get complicated fast.
Here I hope to keep things contained to a discussion of right action, but just remember: this only scratches the surface!
LW and EA, echo chambers for Consequentialism
Why bother with YAABINE?
Something nags at me about previous introductions on the LW/EA forums: VE and DE are nearly always reinterpreted to fit within a consequentialist's worldview. This is unsurprising of course: both LW and EA were founded by consequentialists and have retained their imprint. But that also means these forums are turning into something of an echo chamber on the topic (or so I fear). With this post, I explicitly intend to challenge my consequentialist readers. I'm going to try to do for VE what Alicorn does for DE [LW · GW]: demonstrate how virtue ethicists would actually think through a case.
What does that consequentialist co-opting look like? A number of people have remarked that, on consequentialist grounds, it is generally right to operate as if [LW · GW] VE were true [LW · GW] (i.e. develop virtuous character traits) or operate as if DE (if not KE) were true [LW · GW] (i.e. beware means-ends reasoning, respect more general rules and duties) or a mix [LW(p) · GW(p)] of both [EA · GW]. In fact the second suggestion has a long heritage: it is basically just Rule Consequentialism.
On the assumption that Consequentialism is true, I generally agree with these remarks. But let's get something straight: you shouldn't read these as charitable interpretations of DE and VE. There are very real differences and disagreements between the three major families of theory, and it's an open question who is right. FWIW, VE currently has a slim plurality among philosophers, with DE as the runner-up. Among ethicists (applied, normative, meta, feminist), it seems DE consistently has the plurality, with VE as runner-up. Consequentialism is consistently in third place.
One way to think of the disagreement between the three families is in what facts they take to be explanatorily fundamental – the facts that form the basis for their unifying account of a whole range of normative judgments. Regarding judgments of actions, for DE the fundamental facts are about imperatives; for Consequentialism, consequences; for VE, the character of the agent. Every theory will have something to say about each of these terms – but different theories will take different terms to be fundamental. If it helps, you can roughly categorize these facts based on their location in the causal stream:
DE ultimately judges actions based on facts causally up-stream from the action (e.g. what promises were made?), along with, perhaps, some acausal facts (e.g. what imperatives are a priori possible/impossible for a mind to will coherently/rationally?);
VE ultimately judges actions based on facts immediately up-stream (e.g. what psychological facts about the agent explain how they reacted to the situation at hand?);
Consequentialism ultimately judges actions based on down-stream facts (e.g. what was the net effect on utility?).
This is an excessively simplistic and imperfect categorization, but hopefully it gets across the deeper disagreement between the families. Yes, it's true, they tend to prescribe the same course of action in many scenarios, but they very much disagree on why and how we should pursue said course. And that matters. Such is the disagreement at the heart of this post.
The problem of thoughts too many
Bernard Williams, 20th century philosopher and long-time critic of utilitarianism, proposed the following thought experiment. Suppose you come across two people drowning. As you approach you notice: one is a stranger; the other, your spouse! You only have time to save one of them: who do you save? Repressing any gut impulse they might have, the well-trained utilitarian will at this point calculate (or recall past calculations of) the net effect on utility for each choice, based on their preferred form of utilitarianism and... they will have already failed to live up to the moment. According to Williams, someone who seeks a theoretical justification for the impulse to save the life of a loved one has had "one thought too many." (Cf. this story about saving two children from an oncoming train [? · GW]: EY is very much giving a calculated justification for an impulse that only an over-thinking consequentialist would question).
Virtue ethicist Michael Stocker[1] develops a similar story, asking us to imagine visiting a sick friend at the hospital. If our motivation for visiting our sick friend is that we think doing so will maximize the general good, (or best obeys the rules most conducive to the general good, or best respects our duties), then we are morally ugly in some way. If the roles were reversed, it would likely hurt to find out our friend came to visit us not because they care about us (because they felt a pit in their stomach when they heard we were hospitalized) but because they believe they are morally obligated (they consulted moral theory, reasoned about the facts, and determined this was so). Here, as before, there seems to be too much thinking getting in the way of (or even replacing) the correct motivation for acting as one should.
Note how anti-rationalist this is: part of the point here is that the thinking itself can be ugly. According to VE, in both these stories there should be little to no "slow thinking" going on at all – it is right for your "fast thinking," your heuristics, to take the reins. Many virtue ethicists liken becoming virtuous to training one's moral vision – learning to perceive an action as right, not to reason that it is right. Of course cold calculated reasoning has its place, and many situations call for it. But there are many more in which being calculating is wrong.
(If your heuristic is a consciously invoked utilitarian/deontological rule that you've passionately pledged yourself to, then the ugliness comes from the fact that your affect is misplaced – you care about the rule, when you should be caring about your friend. Just like cold reasoning, impassioned respect for procedure and duty can be appropriate at times [LW(p) · GW(p)]; most times it amounts to rule-worship.)
Internal Moral Disharmony
In Stocker's terms, a theory brings on "moral schizophrenia" when it produces disharmony between our reasons/justifications for acting and our motivations to act. Since this term is outdated and misleading, let's call this malady of the spirit "internal moral disharmony." As Stocker describes it (p454):
An extreme form of [this disharmony] is characterized, on the one hand, by being moved to do what one believes bad, harmful, ugly, abasing; on the other, by being disgusted, horrified, dismayed by what one wants to do. Perhaps such cases are rare. But a more modest [disharmony] between reason and motive is not, as can be seen in many examples of weakness of the will, indecisiveness, guilt, shame, self-deception, rationalization, and annoyance with oneself.
When our reasons (or love of rules) fully displace the right motivations to act, the disharmony is resolved but we get the aforementioned ugliness (in consequentialist terms: we do our friend/spouse harm by not actually caring about them). We become walking utility calculators (or rule-worshipers). Most of us, I would guess, are not so far gone, but instead struggle with this disharmony. It manifests itself as a sort of cognitive dissonance: we initially have the correct motivations to act, but thoughts too many get in the way, thoughts we would prefer not to have. Stocker's claim is that Consequentialism is prone to producing this disharmony. Consequentialism has us get too accustomed to ethical analysis, to the point of it running counter to our first (and good) impulses, causing us to engage in slow thinking automatically even when we would rather not. Resolving this dissonance is difficult – like trying to stop thinking about pink elephants. The fact that we have this dissonance in our head makes us less than a paragon of virtue, but better than the walking utility calculator/rule-worshiper.
Besides being a flaw in our moral integrity, this dissonance is also harmful to ourselves. (Which seems to lead hedonistic consequentialists to conclude we should be the walking utility calculators/rule-worshipers![2]) Too much thinking about a choice – analyzing the options along more dimensions, weighing more considerations for and against each, increasing the number of options considered – will dampen one's emotional attachment to the option chosen. Most of us have felt this before: too much back and forth on what to order at a restaurant leaves you less satisfied with whatever you eventually choose. Too much discussion about where to go, what to do, leaves everyone less satisfied with whatever is finally chosen. A number of publications in psychology confirm and elaborate on this readily apparent phenomenon (most famously Schwartz's The Paradox of Choice).[3][4][5][6][7][8][9] (Credit for this list of references goes to Eva Illouz, who finds evidence of this phenomenon in the way we choose our romantic partners today, especially among men).[10]
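For the programmers here, a minimal sketch of the contrast (my own toy illustration – the menu, scores and threshold are all invented): the maximizer scores every option along every dimension before committing, the kind of exhaustive deliberation Schwartz associates with post-choice regret, while the satisficer, roughly following Simon's bounded-rationality strategy, stops at the first option that clears a "good enough" bar.

```python
# A toy contrast between maximizing and satisficing (all scores invented).
OPTIONS = {
    "ramen":     {"taste": 7, "price": 8, "novelty": 4},
    "curry":     {"taste": 8, "price": 6, "novelty": 6},
    "salad":     {"taste": 5, "price": 7, "novelty": 3},
    "dumplings": {"taste": 8, "price": 5, "novelty": 7},
}

def total(name: str) -> int:
    return sum(OPTIONS[name].values())

def maximize() -> str:
    """Score every option on every dimension, then take the best.
    Deliberation cost grows with options x dimensions: the 'thoughts
    too many' profile."""
    return max(OPTIONS, key=total)

def satisfice(threshold: int = 18) -> str:
    """Take the first option that is good enough, then stop deliberating
    (roughly Herbert Simon's bounded-rationality strategy)."""
    for name in OPTIONS:
        if total(name) >= threshold:
            return name
    return maximize()  # nothing clears the bar: fall back to full search

print(maximize())   # 'curry' (first of the options tied at 20)
print(satisfice())  # 'ramen' (19 >= 18), and dinner is settled
```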
Regularly applying ethical analysis to every little thing (which consequentialists are prone to do!) can be especially bad and dangerous. When ethical considerations and choices start to leave you cold, you will struggle to find the motivation to do what you judge is right, making you weak-willed (a "less effective consequentialist" if you prefer).[11] Or you might continue to go through the right motions, but it will be mechanical, without joy or passion or fulfillment. This is harm in itself, to oneself. But moreover, it leaves you vulnerable: this coldness is a short distance from the frozen wastes of cynicism and nihilism. When ethics looks like "just an optimization problem" to you, it can quickly start to look like "just a game." Making careful analysis your first instinct means learning to repress your gut sense of what is right and wrong; once you do that, right and wrong might start to feel less important, at which point it becomes harder to hang onto the normative reality they structure. You might continue to nominally recognize what is right and wrong, but feel no strong allegiance to rightness.
Given his penchant for consequentialist reasoning (and given that being colder is associated with being less risk-averse, making one a riskier gambler and more successful investor),[12] it would not surprise me to learn that SBF has slipped into that coldness at times. This profile piece suggests Will MacAskill has felt its touch. J.S. Mill, notable consequentialist, definitely suffered it. There are symptoms of it all over this post [LW · GW] from Wei Dai and the ensuing comment thread (see my pushback here [LW(p) · GW(p)]). In effect, much of EY's sequence on morality [? · GW] encourages one to suppress affect and become a walking utility calculator or rule-worshiper – exactly what leads to this coldness. In short, I fear it is widespread in this community.
EDIT: The term "widespread" is vague – I should have been clearer. I do not suspect this coldness afflicts the majority of LW/EA people. Something more in the neighborhood of 5~20%. Since it's not easy to measure this coldness, I have given a more concrete falsifiable prediction here [LW(p) · GW(p)]. None of this is to say that, on net, the LW/EA community has a negative impact on people's moral character. On the contrary, on net, I'm sure it's positive. But if there is a problematic trend in the community (and if it had any role to play in the attitudes of certain high profile EAs towards ethics), I would hope the community takes steps to curb that trend.
The danger of overthinking things is of course general, with those who are "brainier" being especially susceptible. Given that this is a rationalist community – a community that encourages braininess – it would be no surprise to find it here at higher rates than the general population. However, I am surprised and disappointed that being brainy hasn't been more visibly flagged as a risk factor! Let it be known: braininess comes with its own hazards (e.g. rationalization [? · GW]). This coldness is another one of them. LW should come with a warning label on it!
A problem for everybody...
If overthinking things is a very general problem, that suggests thoughts too many (roughly, "the tendency to overthink things in ethical situations") is also general and not specific to Consequentialism. And indeed, VE can suffer it. In its simplest articulation, VE tells us to "do as the virtuous agent would do," but telling your sick friend that you came to visit "because this is what the virtuous agent would do" is no better than the consequentialist's response! You should visit your friend because you're worried for them, full stop. Similarly, if someone was truly brave and truly loved their spouse, they would dive in to save them from drowning (instead of the stranger) without a second thought.
Roughly, a theory is said to be self-effacing when the justification it provides for the rightness of an action is also recognized by the theory as being the wrong motivation for taking that action. Arguably, theories can avoid causing internal disharmony at the cost of being self-effacing. When Stocker first exposed self-effacement in Consequentialism and DE, it was viewed as something of a bug. But in some sense, it might actually be a feature: if there is no situation in which your theory recommends you stop consulting theory, then there is something wrong with that theory – it is not accounting for the realities of human psychology and the wrongness of thoughts too many. It's unsurprising that self-effacement should show up in nearly every plausible theory of normative ethics – because theory tends to involve a lot of thinking.
...but especially (casual) consequentialists.
All that said, consequentialists should be especially wary of developing thoughts too many, for a few reasons:
- Culture: the culture surrounding Consequentialism is very much one that encourages adopting the mindset of a maximizer, an optimizing number cruncher, someone who applies decision theory to every aspect of one's life. Consequentialism and rationalism share a close history after all. In all things morality-related, I advise rationalists to tone down these attitudes (or at least flag them as hazardous), especially around less sophisticated, more casual audiences.
- The theory's core message: even though most consequentialist philosophers advise against using an act-consequentialist decision procedure, Act Consequentialism ("the right action = the action which results in the most good") is still the slogan. Analyzing, calculating, optimizing and maximizing appear front and center in the theory. It seems to encourage the culture mentioned above from the outset. It's only many observations later that sophisticated consequentialists will note that the best way for humans to actually maximize utility is to operate as if VE [LW · GW] or DE were true [LW · GW] (i.e. by developing character traits or respecting rules that tend to maximize utility). Fewer still notice the ugliness of thoughts too many (and rule-worship). Advocates of Consequentialism should do two things to guard their converts against, if nothing else, the cognitive dissonance of internal moral disharmony (a toy sketch of this criterion-vs.-decision-procedure split follows this list):
- At a minimum, converts should be made aware of the facts about human psychology (see §2.1 above) at the heart of this dissonance: these facts should be highlighted aggressively. And early, lest the dissonance set in before the reader acquires the needed sophistication.
- Assuming you embrace self-effacement, connect the dots for your readers: highlight how Consequentialism self-effaces – where and how often it recommends that one stop considering Consequentialism's theoretical justifications for one's actions.
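Here is that sketch (my own illustration – the situation, outcomes and utility numbers are all invented, not anyone's canonical formulation): the consequentialist criterion of rightness gets consulted offline, when choosing which dispositions to cultivate, but the trained agent's runtime decision procedure never invokes it. That is self-effacement, in code form.

```python
# Criterion of rightness: a consequentialist value function, consulted only
# "in the philosophy room" (outcomes and numbers invented for illustration).
def utility(outcome: str) -> float:
    return {"friend comforted": 10.0, "friend feels like a project": 2.0}[outcome]

# Candidate dispositions and the outcome each tends to produce.
DISPOSITIONS = {
    "visit because you are worried about them": "friend comforted",
    "visit after calculating that it maximizes the good": "friend feels like a project",
}

# Offline: use the criterion once, to decide which disposition to cultivate.
BEST_HABIT = max(DISPOSITIONS, key=lambda h: utility(DISPOSITIONS[h]))

def decide(situation: str) -> str:
    """Runtime decision procedure: act from the trained habit.
    Note that utility() is never called here; the theory recommends
    against consulting itself in the moment."""
    if situation == "friend in hospital":
        return BEST_HABIT  # fast, S1-style response
    return "stop and deliberate"  # novel cases may still warrant slow thinking

print(decide("friend in hospital"))  # 'visit because you are worried about them'
```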
Virtue ethicists, for their part, are known for forestalling self-effacement by just not giving a theory in the first place – by resisting the demand to give a unifying account of a broad range of normative judgments about actions. They tend to prefer taking things case by case, insistently pointing to specific details in specific examples and just saying that's what was key in that situation. They prefer studying the specific actions of virtuous exemplars and vicious foils. The formulation "the right action is the one the virtuous agent would take" is always reluctantly given, as more of a sketch of a theory than something we should think too hard about. This can make them frustrating theorists, but responsible writers (protecting you from developing thoughts too many), and decent moral guides. Excellent moral guides do less lecturing on moral theory, and more leading by example. Virtue ethicists like to sit somewhere in-between: they like lecturing on examples.
Note that to proscribe consulting theory is not to prescribe pretense. Pretending to care (answering your friend "because I was worried!" when in fact your motivation was to maximize the general good) is just as ugly and will exacerbate the self-harm. That said, theories can recognize that, under certain circumstances, "fake it 'til you make it" is the best policy available to an agent. Such might be the case for someone who was not fortunate enough to have good role models in their impressionable youth, and whose friends cannot/will not help them curb a serious vice. Conscious impersonation of the virtuous in an attempt to reshape their character might sadly be this person's best option for turning their life around. But note that, even when this is the case, success is never fully achieved if the pretense doesn't eventually stop being pretense – if the pretender doesn't eventually win themselves over, displacing the motivation for pretense with the right motivation to act (e.g. a direct concern for the sick friend).
Prevention and Cure
Adopting VE vs. sophisticating Consequentialism
Aware of thoughts too many, what should one do?
Well, you could embrace VE. Draw on the practical wisdom encoded in the rich vocabulary of virtue and vice, emphatically ending your reason-giving on specific details that are morally salient to the case at hand (e.g. "Because ignoring Sal would have been callous! Haven't you seen how lonely they've been lately?"). Don't fuss too much with integrating and justifying each instance of normative judgment with an over-arching system of principles: morally speaking, you're not really on the line for it, and it's a hazardous task. If you are really so inclined, go ahead, but be sure to hang up those theoretical justifications when you leave the philosophy room, or the conscious self-improvement room. Sure, ignoring this sort of theoretical integration might[13] make you less morally consistent, but consistency is just one virtue: part of the lesson here is that, in practice, when humans consciously optimize very hard for moral consistency they typically end up making unacceptable trade-offs in other virtues. Rationalists seem especially prone to over-emphasize consistency.
Alternatively, you could further sophisticate your Consequentialism. With some contortion, the lessons here can be folded in. One could read the above as just more reason, by consequentialist lights, to operate as if VE were true: adopt the virtuous agent's decision procedure in order to avoid the harms resulting from thoughts too many and internal moral disharmony. But note how thorough this co-opting strategy must now be: consequentialists won't avoid those harms with mere impersonation of the virtuous agent. Adopting the virtuous agent's decision procedure means completely winning yourself over, not just consciously reading off the script. Again, pretense that remains pretense not only fails to avoid thoughts too many but probably worsens the cognitive dissonance!
If you succeed in truly winning yourself over though, how much of your Consequentialism will remain? If you still engage in it, you will keep your consequentialist reasoning in check. Maybe you'll reserve it for moments of self-reflection, insofar as self-reflection is still needed to regulate and maintain virtue. Or you might engage in it in the philosophy room (wary that spending too much time in there is hazardous, engendering thoughts too many). At some point though, you might find it was your Consequentialism that got co-opted by VE: if you are very successful, it will look more like your poor past self had to use consequentialist reasoning as a pretext for acquiring virtue, something you now regard as having intrinsic value, something worth pursuing for its own sake... Seems more honest to me if you give up the game now and endorse VE. Probably more effective too.
Anyway, if you go in for the co-opt, don't forget, part of the lesson is to be mindful of the facts of human psychology. Invented virtues like the virtue of always-doing-what-the-best-consequentialist-would-do [LW · GW], besides being ad hoc and convenient for coddling one's Consequentialism, are circular and completely miss the point. Trying to learn such a virtue just reduces to trying to become "the most consequential consequentialist." But the question for consequentialists is precisely that: what character traits does the most consequential consequentialist tend to have? The minimum takeaway of this post: they don't engage in consequentialist reasoning all the time!
Consequentialists might consider reading the traditional list of virtues [LW · GW] as a time-tested catalog of the most valuable character traits (by consequentialist lights) that are attainable for humans. (Though see "objection h" here, for some complications on that picture).
Becoming virtuous
Whether we go with VE or Consequentialism, it seems we need to tap into whatever self-discipline (and self-disciplining tools) we have and begin a virtuous cycle of good habit formation. Just remember that chanting creeds to yourself and faking it 'til you make it aren't your only options! Encourage your friends to call out your vices. (In turn, steer your friends away from vice and try to be a good role model for the impressionable). Engage with good books, movies, plays etc. Virtue ethicists note that art has a great potential for exercising and training moral awareness, for providing us role models to take inspiration from, flawed characters to learn from, villains to revile. It's critical to see what honesty (and dishonesty), compassion (and callousness), courage (and cowardice) etc. look like in detailed, complex situations. Just knowing their dictionary definitions and repeating to yourself that you will (or won't) be those things won't get you very far. To really get familiar with them, you need to encounter many examples of them, within varied normative contexts. Again, the aim is to train a sort of moral perception – the ability to recognize, in the heat of the moment, right from wrong (and to a limited extent, why it is so), and react accordingly. In that sense, VE sees developing one's moral character as very similar to (even intertwined with) developing one's aesthetic taste. Many of the virtues are gut reactions after all – the good ones.
- ^
M. Stocker, The schizophrenia of modern ethical theories. Journal of Philosophy 73 (14) (1976), 453-466.
- ^
If we take Hedonistic Consequentialism (HC) literally, the morally ideal agent is one which outwardly pretends perfectly to care (when interacting with agents that care about being cared about) but inwardly always optimizes as rationally as possible to maximize hedons, either by directly trying to calculate the hedon-maximizing action sequence (assuming the agent's compute is much less constrained) or by invoking the rules that tend to maximize hedons (assuming the compute available to the agent is highly constrained [LW · GW]). In other words, according to HC the ideal agent seems to be a sociopathic con artist obsessed with maximizing hedons (or obsessed with obeying the rules that tend to maximize hedons). No doubt advocates of HC have something clever to say in response, but my point stands: taking HC too literally (as SBF may have?) will turn you into a hedon monster.
- ^
G. Klein, Sources of Power: How People Make Decisions (Cambridge, MA: MIT Press, 1999).
- ^
T.D. Wilson and J.W. Schooler, “Thinking Too Much: Introspection Can Reduce the Quality of Preferences and Decisions,” Journal of Personality and Social Psychology 60(2) (1991), 181–92.
- ^
C. Ofir and I. Simonson, “In Search of Negative Customer Feedback: The Effect of Expecting to Evaluate on Satisfaction Evaluations,” Journal of Marketing Research, 38(2) (2001), 170–82.
- ^
R. Dhar, “Consumer Preference for a No-Choice Option,” Journal of Consumer Research, 24(2) (1997), 215–31.
- ^
D. Kuksov and M. Villas-Boas, “When More Alternatives Lead to Less Choice,” Marketing Science, 29(3) (2010), 507–24.
- ^
H. Simon, “Bounded Rationality in Social Science: Today and Tomorrow,” Mind & Society, 1(1) (2000), 25–39.
- ^
B. Schwartz, The Paradox of Choice: Why More Is Less (New York: Harper Collins, 2005).
- ^
E. Illouz, Why Love Hurts: A Sociological Explanation (2012), ch. 3.
- ^
We know from psychology that humans struggle with indecision when they lack emotions to help motivate a choice. See A. R. Damasio, Descartes' Error: Emotion, Reason, and the Human Brain (1994).
- ^
B. Shiv, G. Loewenstein, A. Bechara, H. Damasio and A. R. Damasio, Investment Behavior and the Negative Side of Emotion. Psychological Science, 16(6) (2005), 435–439. http://www.jstor.org/stable/40064245
- ^
This is an empirical question for psychologists: in practice, does the exercise of integrating your actions and judgments into a unifying theoretical account actually correlate with being more morally consistent (e.g. in the way you treat others)? Not sure. Insofar as brainier people are, despite any rationalist convictions they might have, particularly prone to engage in certain forms of irrational behaviour (e.g. rationalization [? · GW]), I'm mildly doubtful.
30 comments
comment by gwern · 2022-12-01T21:39:03.959Z · LW(p) · GW(p)
Out of curiosity, what scandals over the past year have been a surprise to virtue ethicists?
Replies from: ctrout
↑ comment by c.trout (ctrout) · 2022-12-02T02:32:18.570Z · LW(p) · GW(p)
Great question! Since I'm not a professional ethicist, I can't say: I don't follow this stuff closely enough. But if you want a concrete falsifiable claim from me, I proposed this to a commenter on the EA forum:
I claim that one's level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer's mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one's decision (or the object at the center of their decision) in such scenarios.
More specifically I predict that, above a certain threshold of engagement with the community, increased engagement with the LW/EA community correlates with an increase in the maximizer's mindset, increase in cognitive dissonance, and decrease in positive affective attachment in the aforementioned scenarios.
The hypothesis for why I think this correlation exists is mostly here [LW · GW] and here [LW · GW].
comment by Lone Pine (conor-sullivan) · 2022-12-01T11:09:59.293Z · LW(p) · GW(p)
Sure, ignoring this sort of theoretical integration might[13] make you less morally consistent, but consistency is just one virtue
I've been thinking that consistency is overrated around these parts. Inconsistency supposedly makes you vulnerable to certain kinds of scams, but in practice humans just notice that they are being scammed and adapt. Really, the ability to be inconsistent is part of adaptation and exploration. If every decision I made in my life had to be perfectly consistent with every previous decision, I'd never get anywhere!
comment by Jan_Kulveit · 2022-12-02T15:45:50.169Z · LW(p) · GW(p)
While I have a lot of sympathy for the view expressed here, it seems confused in a similar way to straw consequentialism, just in an opposite direction.
Using the terminology from Limits to Legibility [LW · GW], we can roughly split the way we do morality into two types of thinking:
- implicit / S1 / neural-net type / intuitive
- explicit / S2 / legible
What I agree with:
In my view, the explicit S2 type processing basically does not have the representation capacity to hold "human values", and the non-legible S1 neural-net boxes are necessary for being moral.
Attempts to fully replace the S1 boxes are stupid and lead to bad outcomes.
Training the S1 boxes to be better is often a better strategy than "more thoughts".
What I don't agree with:
You should rely just on the NN S1 processing. (Described in phenomenological terms: get "moral perception – the ability to recognize, in the heat of the moment, right from wrong" + rely on this)
In my view, the neural-net type of processing has different strengths and weaknesses from explicit reasoning, and they are often complementary.
- both systems provide some layer of reflectivity
- NNs tend to suffer from various biases; often, it is possible to abstractly understand where to expect the bias
- NNs represent what's in the training data; often, explicit models lead to better generalization
- explicit legible models are more communicable
"moral perception" or "virtues" ...is not magic, bit also just a computation running on brains.
Also: I think the usual philosophical discussion about what's explanatorily fundamental is somewhat stupid. Why? Consider an example from physics, where you can describe some mechanical phenomena using the classical terminology of forces, or using Hamiltonian mechanics, or Lagrangian mechanics. If we were as confused about physics as about moral philosophies, there would likely be some discussion about what is fundamental. As we are less confused, we understand the relations and isomorphisms.
↑ comment by c.trout (ctrout) · 2022-12-02T21:15:56.883Z · LW(p) · GW(p)
In my view, the neural-net type of processing has different strengths and weaknesses from explicit reasoning, and they are often complementary.
Agreed. As I say in the post:
Of course cold calculated reasoning has its place, and many situations call for it. But there are many more in which being calculating is wrong.
I also mention that faking it til you make it (which relies on explicit S2 type processing) is also justified sometimes, but something one ideally dispenses with.
"moral perception" or "virtues" ...is not magic, bit also just a computation running on brains.
Of course. But I want to highlight something you might have missed: part of the lesson of the "one thought too many" story is that sometimes explicit S2 type processing is intrinsically the wrong sort of processing for that situation: all else being equal, you would be a better person if you relied on S1 in that situation. Using S2 in that situation counted against your moral standing. Now of course, if your S1 processing is so flawed that it would have resulted in you taking a drastically worse action, then relying on S2 was overall the better thing for you to do in that moment. But, zooming out, the corollary claim here (to frame things another way) is that even if your S2 process were developed to arbitrarily high levels of accuracy in identifying and taking the right action, there would still be value left on the table because you didn't develop your S1 process. There are a few ways to cash out this idea, but the most common is to say this: developing one's character (one's disposition to feel and react a certain way when confronted with a given situation – your S1 process) in a certain way (gaining the virtues) is constitutive of human flourishing – a life without such character development is lacking. Developing one's moral reasoning (your S2 process) is also important (maybe even necessary), but not sufficient for human flourishing.
Regarding explanatory fundamentality:
I don't think your analogy is very good. When you describe mechanical phenomena using the different frameworks you mention, there is no disagreement between them about the facts. Different moral theories disagree. They posit different assumptions and get different results. There is certainly much confusion about the moral facts, but saying theorists are confused about whether they disagree with each other is to make a caricature of them. Sure, they occasionally realize they were talking past each other, but that's the exception not the rule.
We're not going to resolve those disagreements soon, and you may not care about them, which is fine – but don't think that they don't exist. A closer analogy might be different interpretations of QM: just like most moral theorists agree on ~90% of all common sense moral judgments, QM theorists agree on the facts we can currently verify but disagree about more esoteric claims that we can't yet verify (e.g. existence of other possible worlds). I feel like I need to remind EA people (which you may or may not be) that the EA movement is unorthodox, it is radical (in some ways – not all). That sprinkle of radicalism is a consequence of unwaveringly following very specific philosophical positions to their logical limits. I'm not saying here that being unorthodox automatically means you're bad. I'm just saying: tread carefully and be prepared to course-correct.
comment by Lao Mein (derpherpize) · 2022-12-01T10:37:41.520Z · LW(p) · GW(p)
The problem is that people are really really good at self-deception, something that often requires a lot of reflection to uncover. Ultimately, the passion vs reason debate comes down to which one has served us the best personally.
I think you have a really good history with following your moral and social intuitions. I'm guessing that, all else equal, following your heart led to better social and personal outcomes than following your head?
If I followed my heart, I'd probably be Twitter-stalking and crying over my college ex-gf and playing video games while unemployed right now. Reflection > gut instinct for many. Actually, violating my gut instinct has mostly led to positive outcomes when it came to my social life and career whenever it has come in conflict with reason, so I have a high level of contempt for intuitivist anything.
Replies from: Viliam
↑ comment by Viliam · 2022-12-01T15:45:10.067Z · LW(p) · GW(p)
Consequentialism only works if you can predict the consequences. I think many "failures of consequentialist thinking" could be summarized as "these people predicted that doing X will result in Y, and they turned out to be horribly wrong".
So the question is whether your reason or emotion is a better predictor of the future. Which probably depends on the type of question asked (emotions will be better for situations similar to those that existed in the ancient jungles, e.g. human relations; reason will be better for situations involving math, e.g. investing), but neither is infallible. Which means we cannot go fully consequentialist, because that would mean being fully overconfident.
Replies from: ctrout
↑ comment by c.trout (ctrout) · 2022-12-01T20:30:58.981Z · LW(p) · GW(p)
I agree with both of you that the question for consequentialists is to determine when and where an act-consequentialist decision procedure (reasoning about consequences), a deontological decision procedure (reasoning about standing duties/rules), or the decision procedure of the virtuous agent (guided by both emotions and reasoning) are better outcome producers.
But you're missing part of the overall point here: according to many philosophers (including sophisticated consequentialists) there is something wrong/ugly/harmful about relying too much on reasoning (whether about rules or consequences). Someone who needs to reason their way to the conclusion that they should visit their sick friend in order to motivate themselves to go, is not as good a friend as the person who just feels worried and goes to visit their friend.
I am certainly not an exemplar of virtue: I regularly struggle with overthinking things. But this is something one can work on. See the last section of my post.
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-12-02T00:41:31.126Z · LW(p) · GW(p)
Also, let’s remember that the deontologists and virtue ethicists share plenty of blame for “one thought too many.” I’ve spent hours fielding one objection after another to the simple and obvious rightness of permitting carefully regulated kidney sales, from virtue ethicists who go on for hours concocting ad hoc ethical objections to the practice. I’m not sure why consequentialism is being singled out here as being unusually provocative of excessive moral perseveration.
Replies from: ctrout
↑ comment by c.trout (ctrout) · 2022-12-02T02:50:24.756Z · LW(p) · GW(p)
I agree that, among ethicists, being of one school or another probably isn't predictive of engaging more or less in "one thought too many." Ethicists are generally not moral paragons in that department. Overthinking ethical stuff is kind of their job though – maybe be thankful you don't have to do it?
That said, I do find that (at least in writing) virtue ethicists do a better job of highlighting this as something to avoid: they are better moral guides in this respect. I also think that they tend to muster a more coherent theoretical response to the problem of self-effacement: they more or less embrace it, while consequentialists try to dance around it.
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2022-12-02T03:06:21.881Z · LW(p) · GW(p)
It sounds like you're arguing not so much for everybody doing less moral calculation, and more for delegating our moral calculus to experts.
I think we meet even stronger limitations to moral deference than we do for epistemic deference: experts disagree, people pose as experts when they aren't, people ignore expertise where it exists, laypeople pick arguments with each other even when they'd both do better to defer, experts engage in interior moral disharmony, etc. When you can do it, I agree that deference is an attractive choice, as I feel I am able to do in the case of several EA institutions.
I strongly dislike characterizations of consequentialism as "dancing around" various abstract things. It is a strange dance floor populated with strange abstractions and I think it behooves critics to say exactly what they mean, so that consequentialists can make specific objections to those criticisms. Alternatively, we consequentialists can volley the same critiques back at the virtue ethicists: the Catholic church seems to do plenty of dancing around its own seedy history of global-scale conquest, theft, and abuse, while asking for unlimited deference to a moral hierarchy it claims is not only wise, but infallible. I don't want to be a cold-hearted calculator, but I also don't want to defer to, say, a church with a recent history of playing the ultimate pedophiliac shell game. If I have to accept a little extra dancing to vet my experts and fill in where ready expertise is lacking, I am happy for the exercise.
Replies from: ctrout
↑ comment by c.trout (ctrout) · 2022-12-02T05:21:24.222Z · LW(p) · GW(p)
Regarding moral deference:
I agree that moral deference as it currently stands is highly unreliable. But even if it were reliable, I actually don't think a world in which agents did a lot of moral deference would be ideal. The virtuous agent doesn't tell their friend "I deferred to the moral experts and they told me I should come see you."
I do emphasize the importance of having good moral authorities/exemplars help shape your character, especially when we're young and impressionable. That's not something we have much control over – when we're older, we can somewhat control who we hang around and who we look up to, but that's about it. This does emphasize the importance of being a good role model for those around us who are impressionable though!
I'm not sure if you would call it deference, but I also emphasize (following Martha Nussbaum and Susan Feagin) that engaging with good books, plays, movies, etc. is critical for practicing moral perception, with all the appropriate affect, in a safe environment. And indeed, it was a book (Marmontel's Mémoires) that helped J.S. Mill get out of his internal moral disharmony. If there are any experts here, it's the creators of these works. And if they have claim to moral expertise it is an appropriately humble folk expertise which, imho, is just about as good as our current state-of-the-art ethicists' expertise. Where creators successfully minimize any implicit or explicit judgment of their characters/situations, they don't even offer moral folk expertise so much as give us complex detailed scenarios to grapple with and test our intuitions (I would hold up Lolita as an example of this). That exercise in grappling with the moral details is itself healthy (something no toy "thought experiment" can replace).
Moral reasoning can of course be helpful when trying to become a better person. But it is not the only tool we have, and over-relying on it has harmful side-effects.
Regarding my critique of consequentialism:
Something I seem to be failing to do is make clear when I'm talking about theorists who develop and defend a form of Consequentialism and people who have, directly or indirectly, been convinced to operate on consequentialist principles by those theorists. Call the first "consequentialist theorists" and the latter "consequentialist followers." I'm not saying followers dance around the problem of self-effacement – I don't even expect many to know what that is. It's a problem for the theorists. It's not something that's going to get resolved in a forum comment thread. I only mentioned it to explain why I was singling out Consequentialism in my post: because I happen to know consequentialist theorists struggle with this more than VE theorists. (As far as I know DE theorists struggle with it to, and I tried to make that clear throughout the post, but I assume most of my readers are consequentialist followers and so don't really care). I also mentioned it because I think it's important for people to remember their "camp" is far from theoretically airtight.
Ultimately I encourage all of us to be pluralists about ethics – I am extremely skeptical that any one theorist has gotten it all correct. And even if they did, we wouldn't be able to tell with any certainty they did. At the moment, all we can do is try and heed the various lessons from the various camps/theorists. All I was just trying to do was pass on a lesson one hears quite loudly in the VE camp and that I suspect many in the Consequentialism camp haven't heard very often or paid much attention to.
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2022-12-02T06:14:14.497Z · LW(p) · GW(p)
It sounds like what you really care about is promoting the experience of empathy and fellow-feeling. You don’t particularly care about moral calculation or deference, except insofar as they interfere with, or make room for, this psychological state.
I understand the idea that moral deference can make room for positive affect, and what I remain skeptical of is the idea that moral calculation mostly interferes with fellow-feeling. It’s a hypothesis one could test, but it needs data.
Replies from: ctrout, ctrout
↑ comment by c.trout (ctrout) · 2022-12-07T05:21:28.052Z · LW(p) · GW(p)
Sorry, my first reply to your comment wasn't very on point. Yes, you're getting at one of the central claims of my post.
what I remain skeptical of is the idea that moral calculation mostly interferes with fellow-feeling
First, I wouldn't say "mostly." I think in excessive amounts it interferes. Regarding your skepticism: we already know that calculation (a maximizer's mindset) in other contexts interferes with affective attachment and positive evaluations towards the choices made by said calculation (see references to the psych lit). Why shouldn't we expect the same thing to occur in moral situations, with the relevant "moral" affects? (In fact, depending on what you count as "moral," the research already provides evidence of this).
If your skepticism is about the sheer possibility of calculation interfering with empathy/fellow-feeling etc, then any anecdotal evidence should do. See e.g. Mill's autobiography. But also, you've never ever been in a situation where you were conflicted between doing two different things with two different people/groups, and too much back and forth made you kinda feel numb to both options in the end, just shrugging and saying "whatever, I don't care anymore, either one"? That would be an example of calculation interfering with fellow-feeling.
Some amount of this is normal and unavoidable. But one can make it worse. Whether the LW/EA community does so or not is the question in need of data – we can agree on that! See my comment below [LW(p) · GW(p)] for more details.
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2022-12-07T07:17:36.817Z · LW(p) · GW(p)
First, I wouldn't say "mostly." I think in excessive amounts it interferes.
We've all sat around with thoughts whirling around in our heads, perseverating about ethics. Sometimes, a little ethical thinking helps us make a big decision. Other times, it's not much different from having an annoying song stuck in your head. When we're itchy, have the sun in our eyes, or, yes, can't stop thinking about ethics, that discomfort shows in our face, in our bearing, and in our voice, and it makes it harder to connect with other people.
You and I both see that, just like a great song can still be incredibly annoying when it's stuck in your head, a great ethical system can likewise give us a terrible headache when we can't stop perseverating about it.
So, for a person who streams a lot of consequentialism on their moral Spotify, it seems like you're telling them that if they'd just start listening to some nice virtue ethics instead and give up that nasty noise, they'd find themselves in a much more pleasant state of mind after a while. Personally, as a person who's conversant with all three ethical systems, interfaces with many moral communities, and has fielded a lot of complex ethical conversations with a lot of people, I don't see any more basis for thinking consequentialism is unusually bad as a "moral earworm" than (as a musician) I think any particular genre of music is more prone to distressing earworms.
To me, perseveration/earworms feels more like a disorder of the audio loop, in which it latches on to thoughts, words, or sounds and cycles from one to another in a way you just can't control. It doesn't feel particularly governed by the content of those thoughts. Even if it is enhanced by specific types of mental content, it seems like it would require psychological methodologies that do not actually exist in order to reliably detect an effect of that kind. We'd have to see the thoughts in people's heads, find out how often they perseverate, and try and detect a causal association. I think it's unlikely that convincing evidence exists in the literature, and I find it dubious that we could achieve confidence in our beliefs in this matter without such a careful scientific study.
↑ comment by c.trout (ctrout) · 2022-12-02T06:47:29.573Z · LW(p) · GW(p)
Here is my prediction:
I claim that one's level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer's mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one's decision (or the object at the center of their decision) in such scenarios.
More specifically I predict that, above a certain threshold of engagement with the community, increased engagement with the LW/EA community correlates with an increase in the maximizer's mindset, increase in cognitive dissonance, and decrease in positive affective attachment in the aforementioned scenarios.
The hypothesis for why that correlation will be there is mostly in this section [LW · GW] and at the end of this section [LW · GW].
On net, I have no doubt the LW/EA community is having a positive impact on people's moral character. That does not mean there can't exist harmful side-effects the LW/EA community produces, identifiable as weak trends among community-goers that are not present among other groups. Where such side-effects exist, shouldn't they be curbed?
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-12-02T00:37:37.631Z · LW(p) · GW(p)
Thinking more about the “moral ugliness” case, I find that ethical thought engenders feelings of genuine caring that would otherwise be absent. If it weren’t for EA-style consequentialism, I would hardly give a thought to malaria, for example. As it is, moral reason has instilled in me a visceral feeling of caring about these topics, as well as genuine anger at injustice when small-potatoes political symbolism distracts from these larger issues.
Likewise, when a friend is down, I am in my native state cold and egocentric. But by reminding myself intellectually about our friendship, the nature of their distress, the importance of maintaining close connections and fellow feeling, I spark actual emotion inside of myself.
↑ comment by c.trout (ctrout) · 2022-12-02T03:45:35.736Z · LW(p) · GW(p)
Regarding feelings about disease far away:
I'm glad you have become concerned about these topics! I'm not sure virtue ethicists couldn't also motivate those concerns though. Random side-note: I absolutely think consequentialism is the way to go when judging public/corporate/non-profit policy. It makes no sense to judge the policy of those entities the same way we judge the actions of individual humans. The world would be a much better place if state departments, when determining where to send foreign aid, used consequentialist reasoning.
Regarding feelings toward your friend:
I'm glad to hear that moral reasoning has helped you there too! There is certainly nothing wrong with using moral reasoning to cultivate or maintain one's care for another. And some days we just don't have the energy to muster an emotional response; the best we can do is follow the rules and do what we know is expected of us, even with no heart in it. But isn't it better when we do have our heart in it? When we can dispense with the reasoning, or the rule consulting?
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2022-12-02T05:11:11.482Z · LW(p) · GW(p)
It’s better when we have our heart in it, and my point is that moral reasoning can help us do that. From my point of view, almost all the moral gains that really matter come from action on the level of global initiatives and careers directed at steering outcomes on that level. There, as you say, consequentialism is the way to go. For the everyday human acts that make up our day to day lives, I don’t particularly care which moral system people use - whatever keeps us relating well with others and happy seems fine to me. I’d be fine with all three ethical systems advertising themselves and competing in the marketplace of ideas, as long as we can still come to a consensus that we should fund bed nets and find a way not to unleash a technological apocalypse on ourselves.
↑ comment by c.trout (ctrout) · 2022-12-02T06:41:14.125Z · LW(p) · GW(p)
It’s better when we have our heart in it, and my point is that moral reasoning can help us do that.
My bad, I should have been clearer. I meant to say "isn't it better when we have our heart in it, and we can dispense with the reasoning or the rule consulting?"
I should note, you would be in good company if you answered "no." Kant believed that an action has no moral worth if it was not motivated by duty, a motivation that results from correctly reasoning about one's moral imperatives. He really did seem to think we should be reasoning about our duties all the time. I think he was mistaken.
comment by quetzal_rainbow · 2022-12-07T07:57:03.042Z · LW(p) · GW(p)
It feels to me that this is a search for the answer in the wrong place. If your problem is overthinking, you don't try to find an ethical theory that justifies less thinking; you cure overthinking by developing the skills that fall under the general label "cognitive awareness". At some level, you can just stop thinking harmful thoughts.
comment by npostavs · 2022-12-01T20:58:24.985Z · LW(p) · GW(p)
imagine visiting a sick friend at the hospital. If our motivation for visiting our sick friend is that we think doing so will maximize the general good (or best obeys the rules most conducive to the general good, or best respects our duties), then we are morally ugly in some way.
If our motivation is just to make our friend feel better, is that okay? Because it seems like that is perfectly compatible with consequentialism, but doesn't send the "I don't really care about you" message to our friend like the other motivations do.
Or does the fact that the main problem I see with the "morally ugly" motivations is that they would make the friend feel bad show that I'm still too stuck in the consequentialist mindset and completely missing the point?
↑ comment by c.trout (ctrout) · 2022-12-02T17:03:21.620Z · LW(p) · GW(p)
If our motivation is just to make our friend feel better, is that okay?
Absolutely. Generally being mindful of the consequences of one's actions is not the issue: ethicists of every stripe regularly reference consequences when judging an action. Consequentialism differentiates itself by taking the evaluation of consequences to be explanatorily fundamental – that which forms the underlying principle of its unifying account of all (or a broad range of) normative judgments. The point that Stocker is trying to make there is (roughly) that being motivated purely by intensely principled ethical reasoning (for lack of a better description) is ugly. Ethical principles are so general, so far removed, that they misplace our affect. Here is how Stocker describes the situation (NB: his target is both DE and Consequentialism):
But now, suppose you are in a hospital, recovering from a long illness. You are very bored and restless and at loose ends when Smith comes in once again. [...] You are so effusive with your praise and thanks that he protests that he always tries to do what he thinks is his duty, what he thinks will be best. You at first think he is engaging in a polite form of self-deprecation [...]. But the more you two speak, the more clear it becomes that he was telling the literal truth: that it is not essentially because of you that he came to see you, not because you are friends, but because he thought it his duty, perhaps as a fellow Christian or Communist or whatever, or simply because he knows of no one more in need of cheering up and no one easier to cheer up.
I should make clear (as I hope I did in the post): this is not an insurmountable problem. It leads to varying degrees of self-effacement [LW · GW]. I think some theorists handle it better than others, and I think VE handles it most coherently, but it's certainly not a fatal blow for Consequentialism or DE. It does however present a pitfall (internal moral disharmony) for casual readers/followers of Consequentialism. Raising awareness of that pitfall was the principal aim of my post.
Orthogonal point:
The problem is certainly not just that the sick friend feels bad. As I mention:
Pretending to care (answering your friend "because I was worried!" when in fact your motivation was to maximize the general good) is just as ugly and will exacerbate the self-harm.
But many consequentialists can account for this. They just need a theory of value that accounts for harms done that aren't known to the one harmed. Eudaimonic Consequentialism (EC) could do this easily: the friend is harmed in that they are tricked into thinking they have a true, caring friend when they don't. Having true, caring friends is a good they are being deprived of. Hedonistic Consequentialism (HC) on the other hand will have a much harder time accounting for this harm. See footnote 2 [LW(p) · GW(p)].
I say this is orthogonal because both EC and HC need a way to handle internal moral disharmony – a misalignment between the reasons/justifications for an action being right and the appropriate motivation for taking that action. Prima facie HC bites the bullet, doesn't self-efface, but recommends we become walking utility calculators/rule-worshipers. EC seems to self-efface: it judges that visiting the friend is right because it maximizes general human flourishing, but warns that this justification is the wrong motivation for visiting the friend (because having such a motivation would fail to maximize general human flourishing). In other words, it tells you to stop consulting EC – forget about it for a moment – and it hopes that you have developed the right motivation prior to this situation and will draw on that instead.
↑ comment by npostavs · 2022-12-18T19:07:03.485Z · LW(p) · GW(p)
Okay, I think my main confusion is that all the examples have both the motivation-by-ethical-reasoning and lack-of-personal-caring/empathy on the moral disharmony/ugliness side. I'll try to modify the examples a bit to tease them apart:
- Visiting a stranger in the hospital in order to increase the sum of global utility is morally ugly
- Visiting a stranger in the hospital because you've successfully internalized compassion toward them via loving kindness meditation (or something like that) is morally good(?)
That is, the important part is the internalized motivation vs reasoning out what to do from ethical principles.
(although I notice my intuition has a hard time believing the premise in the 2nd case)
↑ comment by c.trout (ctrout) · 2022-12-20T04:25:32.320Z · LW(p) · GW(p)
That sounds about right. The person in the second case is less morally ugly than the first. This is spot on:
the important part is the internalized motivation vs reasoning out what to do from ethical principles.
What do you mean by this though?:
(although I notice my intuition has a hard time believing the premise in the 2nd case)
You find it hard to believe someone could internalize the trait of compassion through "loving kindness meditation"? (This last, I assume, is a placeholder for whatever works to make oneself more virtuous.) Also, any reason you swapped the friend for a stranger? That changes the situation somewhat – in degree at least, but maybe in kind too.
I would note that, according to (my simplified) VE, it's the compassion that makes the action of visiting the stranger/friend morally right. How the compassion was acquired is another question, to be evaluated on different merits [LW · GW].
I'm not sure I understand your confusion, but if you want examples of when it is right to be motivated by careful principled ethical reasoning or rule-worship, here are some examples:
- for a judge, acting in their capacity as judge, it is often appropriate that they be motivated by a love of consistently respecting rules and principles
- for policymakers, acting in their capacity as policymakers (far-removed from "the action"), it is often appropriate for them to devise and implement their policies motivated by impersonal calculations of general welfare
These are just a couple, but I'm sure there are countless others. The broader point though: engaging in this kind of principled ethical reasoning/rule-worship very often, making it a reflex, will likely result in you engaging in it when you shouldn't. When you do so involuntarily, despite preferring that you wouldn't: that's internal moral disharmony. (Therefore, ethicists of all stripes probably tend to suffer internal moral disharmony more than the average person!)
↑ comment by npostavs · 2022-12-21T04:53:58.294Z · LW(p) · GW(p)
Also, any reason you swapped the friend for a stranger? That changes the situation somewhat – in degree at least, but maybe in kind too.
Yes, the other examples seemed to be about caring about people you are close to more than strangers, but I wanted to focus on the ethical reasoning vs internal motivation part.
examples of when it is right to be motivated by careful principled ethical reasoning or rule-worship
Thanks, that's helpful.
↑ comment by JBlack · 2022-12-02T00:14:59.320Z · LW(p) · GW(p)
Yes, consequentialism judges the act of visiting a friend in hospital to be (almost certainly) good since the outcome is (almost certainly) better than not doing it. That's it. No other considerations need apply. What their motivation was and whether there exist other possible acts that were also good are irrelevant.
If someone visits their sick friend only because it is a moral duty to do so, then I would doubt that they are actually a friend. If there is any ugliness, it lies in the wider implications of deceiving their "friend" about actually being a friend. Even then, consequentialism in itself does not imply any duty to perform any specific good act, so it still doesn't really fit. That sounds more like some strict form of utilitarianism, except that a strict utilitarian probably won't be visiting a sick friend, since there is so much more marginal utility in addressing the much more serious unmet needs of larger numbers of people.
If they visit their sick friend because they personally care about their friend's welfare, and their moral framework also judges it a good act to visit them, then where's the ugliness?
↑ comment by c.trout (ctrout) · 2022-12-02T17:30:23.876Z · LW(p) · GW(p)
... consequentialism judges the act of visiting a friend in hospital to be (almost certainly) good since the outcome is (almost certainly) better than not doing it. That's it. No other considerations need apply. [...] whether there exist other possible acts that were also good are irrelevant.
I don't know of any consequentialist theory that looks like that. What is the general consequentialist principle you are deploying here? Your reasoning seems very one-off. Which is fine! That's exactly what I'm advocating for! But then I think we're talking past each other. I'm criticizing Consequentialism, not just any old moral reasoning that happens to reference the consequences of one's actions (see my response to npostavs [LW(p) · GW(p)]).
comment by Rana Dexsin · 2022-12-01T11:25:49.718Z · LW(p) · GW(p)
Little language note: “take the reins” (instead of “reigns”), please. (Interacts interestingly with “elephant in the brain” imagery, too.)
↑ comment by c.trout (ctrout) · 2022-12-01T15:01:40.383Z · LW(p) · GW(p)
lol. Fixed, thanks!