Virtue Ethics for Consequentialists
post by Will_Newsome · 2010-06-04T16:08:40.556Z · LW · GW · Legacy · 185 comments
Meta: Influenced by a cool blog post by Kaj, which was influenced by a cool Michael Vassar idea (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.
There's been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it's not postmodernism. It's virtue ethics. "What, virtue ethics?! Are you serious?" Yup. I'm so contrarian I think cryonics isn't obvious and that virtue ethics is better than consequentialism. This post will explain why.
When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when the consequences were bad and the reasons were bad, too. People are very good at spinning tales about how virtuous they are, even more so than at finding good reasons for having done things that turned out unpopular, and it's hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn't have too much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing, then? Well...
Moral philosophy was designed for humans, not for rational agents. When you're used to thinking about artificial intelligence, economics, and decision theory, it's easy to forget that we're hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers; they're bounded agents with little capacity for reflection. Utility functions are great and all, but in the words of Zack M. Davis, "Humans don't have utility functions." Similarly, Kaj warns us: "be extra careful when you try to apply the concept of a utility function to human beings." Back in the day nobody thought smarter-than-human intelligence was possible, and many still don't. Philosophers came up with ways for people to live their lives, have a good time, be respected, and do good things; they weren't even trying to create morals for anyone too far outside the norm of whatever society they inhabited at the time, or whatever society they imagined to be perfect. I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon). Epicurus, Mill, and Bentham were great thinkers and all, but it's not obvious that what they were saying is best practice for individual people, even if their ideas about policy are strictly superior to alternative options. Virtue ethics is good for bounded agents: you don't have to waste memory on what a personalized rulebook says about different kinds of milk, and you don't have to think 15 inferential steps ahead to determine if you should drink skim or whole.
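A minimal Python sketch, with invented numbers, makes the hyperbolic-discounting point concrete: such a discounter exhibits preference reversals that no agent with a fixed utility function and a consistent exponential discount rate would.

```python
# Minimal sketch with made-up numbers: a hyperbolic discounter values a
# reward v arriving after delay d at roughly v / (1 + k*d). Unlike
# exponential discounting, this produces preference reversals.

def hyperbolic_value(v, d, k=1.0):
    return v / (1.0 + k * d)

small_soon = (55, 1)   # $55 in 1 day
large_late = (100, 3)  # $100 in 3 days

for lead_time in (0, 10):  # evaluate the same pair now vs. 10 days early
    vs = hyperbolic_value(small_soon[0], small_soon[1] + lead_time)
    vl = hyperbolic_value(large_late[0], large_late[1] + lead_time)
    pick = "small-soon" if vs > vl else "large-late"
    print(f"{lead_time:2d} days out: {vs:5.1f} vs {vl:5.1f} -> {pick}")

# Up close the agent grabs the $55; viewed from 10 days away it prefers
# the $100. Its preferences flip as time passes -- time-inconsistency,
# which no fixed exponentially-discounted utility function allows.
```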
You can be a virtue ethicist whose virtue is to do the consequentialist thing (because your deontological morals say that's what is right). Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems. And anyway, they're all actually virtue ethicists: they're trying to do the 'consequentialist' or 'deontologist' things to do, which usually happen to be the same. Alicorn's decided to do her best to reduce existential risk, and I, being a pseudo-consequentialist, have also decided to do my best to reduce existential risk. Virtue ethicists can do these things too, but they can also abuse the consistency effects such actions invariably come with. If you're a virtue ethicist it's easier to say "I'm the type of person who will reply to all of the emails in my inbox and sort them into my GTD system, because organization and conscientiousness are virtues" and use this as a way to motivate yourself. So go ahead and be a virtue ethicist for the consequences (...or a consequentialist because it's deontic). It's not illegal!
Retooled virtue ethics is better for your instrumental rationality. The Happiness Hypothesis critiques the way Western ethics, in both the deontological tradition started by Immanuel Kant and the consequentialist tradition started by Jeremy Bentham, has become increasingly reason-based:
The philosopher Edmund Pincoffs has argued that consequentialists and deontologists worked together to convince Westerners in the twentieth century that morality is the study of moral quandaries and dilemmas. Where the Greeks focused on the character of a person and asked what kind of person we should each aim to become, modern ethics focuses on actions, asking when a particular decision is right or wrong. Philosophers wrestle with life-and-death dilemmas: Kill one to save five? Allow aborted fetuses to be used as a source of stem cells? [...] This turn from character ethics to quandary ethics has turned moral education away from virtues and towards moral reasoning. If morality is about dilemmas, then moral education is training in problem solving. Children must be taught how to think about moral problems, especially how to overcome their natural egoism and take into their calculations the needs of others.
[...] I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week [...] The second problem with the turn to moral reasoning is that it relies on bad psychology. Many moral education efforts since the 1970s take the rider off the elephant and train him to solve problems on his own. After being exposed to hours of case studies, classroom discussions about moral dilemmas, and videos about people who faced dilemmas and made the right choices, the child learns how (not what) to think. Then class ends, the rider gets back on the elephant, and nothing changes at recess. Trying to make children behave ethically by teaching them to reason well is like trying to make a dog happy by wagging its tail. It gets causality backwards.
To quote Kaj's response to the above:
Reading this chapter, that critique and the description of how people like Benjamin Franklin made it into an explicit project to cultivate their various virtues one at a time, I could feel a very peculiar transformation take place within me. The best way I can describe it is that it felt like a part of my decision-making or world-evaluating machinery separated itself from the rest and settled into a new area of responsibility that I had previously not recognized as a separate one. While I had previously been primarily a consequentialist, that newly-specialized part declared its allegiance to virtue ethics, even though the rest of the machinery remained consequentialist. [...]
What has this meant in practice? Well, I'm not quite sure of the long-term effects yet, but I think that my emotional machinery kind of separated from my general decision-making and planning machinery. Think of "emotional machinery" as a system that takes various sorts of information as input and produces different emotional states as output. Optimally, your emotional machinery should attempt to create emotions that push you towards taking the kinds of actions that are most appropriate given your goals. Previously I was sort of embedded in the world and the emotional system was taking its input from the entire whole: the way I was, the way the world was, and the way that those were intertwined. It was simultaneously trying to optimize for all three, with mixed results.
But now, my self-model was set separate from the world-model, and my emotional machinery started running its evaluations primarily based on the self-model. The main questions became "how could I develop myself", "how could I be more virtuous" and "how could I best act to improve the world". From the last bit, you can see that I haven't lost the consequentialist layer in my decision-making: I am still trying to act in ways that improve the world. But now it's more like my emotional systems are taking input from the consequentialist planning system to figure out what virtues to concentrate on, instead of the consequentialist reasoning being completely intertwined with my emotional systems.
Applying both consequentialist and virtue ethicist layers to the way you actually get things done in the real world seems to me a great idea. It recognizes that most of us don't actually have that much control over what we do. Acknowledging this and dealing with its consequences, and what it says about us, allows us to do the things we want and feel good about it at the same time.
So, if you'd like, try to be a virtue ethicist for a week. If a key of epistemic rationality is having your beliefs pay rent in anticipated experience, then instrumental rationality is about having your actions pay rent in expected utility. Use science! If being a virtue ethicist helps even one person be more the person they want to be, like it did for Kaj, then this post was well worth the time spent.
185 comments
Comments sorted by top scores.
comment by Vladimir_M · 2010-06-04T19:34:38.054Z · LW(p) · GW(p)
Will_Newsome:
Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems.
More precisely, they do disagree about the same practically relevant ethical questions that provoke controversy among common folks too, especially the politically and ideologically charged ones -- but their positions are only loosely correlated with their ethical theories, and instead stem from the same gut feelings and signaling games as everybody else's. This seems to me like a pretty damning fact about the way this whole area of intellectual work is conducted in practice.
↑ comment by Mass_Driver · 2010-06-05T19:18:16.662Z · LW(p) · GW(p)
Maybe, but be very careful not to jump from
a pretty damning fact about the way this whole area of intellectual work is conducted in practice.
to
therefore there is no sense in individual people whose rationality is above-average attempting, in good faith and by way of experiment, to apply some subset of this intellectual work to their actual lives,
which I think is a conclusion that some people might inadvertently draw from your comment.
↑ comment by SilasBarta · 2010-06-04T20:00:06.703Z · LW(p) · GW(p)
Vladimir_M wins the discussion.
↑ comment by SilasBarta · 2010-06-11T15:21:34.198Z · LW(p) · GW(p)
What??? He did!
comment by steven0461 · 2010-06-05T02:06:23.297Z · LW(p) · GW(p)
In both your GTD example and Kaj's posting example, virtue doesn't seem to affect what you think you should do, just how you motivate yourself to do it, so "virtue psychology" might be a more accurate description than "virtue ethics".
comment by RichardChappell · 2010-06-05T21:06:29.420Z · LW(p) · GW(p)
Isn't this just Indirect Consequentialism?
It's worth noting that pretty much every consequentialist since J.S. Mill has stressed the importance of inculcating generally-reliable dispositions / character traits, rather than attempting to explicitly make utility calculations in everyday life. It's certainly a good recommendation, but it seems misleading to characterize this as in any way at odds with the consequentialist tradition.
↑ comment by Roko · 2010-06-06T20:27:25.988Z · LW(p) · GW(p)
It might be worth presenting Will with a dilemma that drives a wedge between a particular virtue and some consequence he cares about. E.g. suppose that the only way to fund saving the world is by becoming a gangster and inculcating the vices of revenge, mercilessness and the love of money in yourself.
↑ comment by Mass_Driver · 2010-06-09T03:18:16.999Z · LW(p) · GW(p)
This is a useful dilemma. What are some of the possible motivators for refusing to become a gangster?
1. You don't really care about saving the world; the only consequence that actually matters to you is being a nice person.
2. You don't trust your conclusion that Operation: Gangsta will save the world; you place so much heuristic faith in virtues that you actually expect any calculation that outputs a recommendation to become a gangster to be fatally flawed.
3. You don't trust your values not to evolve away from saving the world if you become a gangster; it might be impossible or extremely risky to save the world by thugging out, because being a thug makes you care less about saving the world; you might have a career of evil and then just spend the proceeds on casinos, hitmen, and mansions.
↑ comment by SilasBarta · 2010-06-11T15:30:49.389Z · LW(p) · GW(p)
The second and the third are the most convincing reasons, but EY already explained how those follow from using deontology rather than virtue ethics as a heuristic for handling the fact that you are a consequentialist running on corrupt hardware. This calls into question how much insight Will_Newsome has provided with this article.
His point in that article, if you'll recall, is that deontology is consequentialism, just one meta-level up and with the knowledge that your hardware distorts your moral cognition in predictable ways.
↑ comment by Jack · 2010-06-09T04:29:02.798Z · LW(p) · GW(p)
The problem is that becoming a gangster strikes me, just on pragmatic grounds, as a very bad way to fund saving the world, so all these motivations are hard to evaluate.
↑ comment by Mass_Driver · 2010-06-09T04:52:20.804Z · LW(p) · GW(p)
Sure, but try to cope with the dilemma as best you can. If you can think of a better example, great! If not, try to imagine a situation where being a gangster would be pragmatic. Maybe you're the godfather's favorite child, recently returned from the military and otherwise unskilled. Maybe you live in a dome on a colony planet that is essentially one big corrupt city, and ordinary entrepreneurship doesn't pay off properly. Maybe you're a member of a despised or even outlawed ethnicity in medieval times, and no one will sit still to listen to your brilliant ideas about how to build better water mills and eradicate plague unless you first establish yourself as a powerful and wealthy fringe figure.
In general, when trying to evaluate an argument that you're initially inclined to disagree with, you should try to place yourself in The Least Convenient Possible World for refuting that argument. That way, if you still manage to refute the argument, you'll at least have learned something. If you stop thinking when the ordinary world doesn't seem to validate a hypothesis that you didn't believe in to begin with, you don't really learn anything.
↑ comment by Eneasz · 2010-06-09T15:56:57.572Z · LW(p) · GW(p)
There isn't much of a dilemma if you assume there are some states worse than death. Eternal torture is less preferable than non-existence. A malicious world of pain and vice is less preferable than a non-existent world. By becoming a malicious, vice-filled person you are moving the world in the direction of being worse than non-existent, and thus defeating your stated goal. You are doing more to destroy the world than to save it.
↑ comment by Roko · 2010-06-09T21:41:21.288Z · LW(p) · GW(p)
Consider the least convenient possible world
↑ comment by Eneasz · 2010-06-09T22:14:32.831Z · LW(p) · GW(p)
The least convenient possible world is one with superhumanly intelligent AIs that can have complete confidence in their source code, and predict with complete confidence that these means (thuggishness) will in fact lead to those ends (saving the world).
However in that world the world has already been saved (or destroyed) and so this is not relevant. In any relevant world the actor who is resorting to thuggishness to save the world is a human running on hostile hardware, and would be stupid not to take that into consideration.
↑ comment by Will_Newsome · 2010-06-09T05:25:20.764Z · LW(p) · GW(p)
I would do what sounded like the consequentialist thing to do and become a gangster. Not only would I be saving the world but I'd also be pretty badass if I was doing it right. Rationalists should win when possible and what not. Consequentialism-ism is the key Virtue.
comment by LauraABJ · 2010-06-04T16:31:53.082Z · LW(p) · GW(p)
I agree that this sort of virtue ethics may help some people with their instrumental rationality. In general I have noticed a trend on Less Wrong in which popular modes of thinking are first shunned as being irrational and not based on truth, only to be readopted later as being more functional for achieving one's stated goals. I think this process is important, because it allows one to rationally evaluate which 'irrational' models lead to the best outcome.
↑ comment by fburnaby · 2010-06-04T18:42:56.988Z · LW(p) · GW(p)
This also fits my (non-LW) experience very well.
There's that catchy saying: "evolution is smarter than you are". I think it probably also extends somewhat to cultural evolution. Given that our behaviour is strongly influenced by these, I think we should expect to 'rediscover' much of our own biases and intuitions as useful heuristics for increasing instrumental rationality under some fairly familiar-looking utility function.
↑ comment by thomblake · 2010-06-04T18:53:34.004Z · LW(p) · GW(p)
Given that our behaviour is strongly influenced by these, I think we should expect to 'rediscover' much of our own biases and intuitions as useful heuristics for increasing instrumental rationality under some fairly familiar-looking utility function.
Sadly, there's good reason to think that many of these familiar heuristics and biases were very good for acting optimally in tribes on the savanna during a particular period of time, and it's likely that they'll lead us into more trouble the further we go from that environment.
↑ comment by fburnaby · 2010-06-04T19:51:45.566Z · LW(p) · GW(p)
You are right. I was wrong, or at least far too sloppy. I agree that we should not presume that any given mismatch between our rational evaluation and a more 'folksy' one can be attributed to a problem in our map. Rationality is interesting precisely because it does better than my intuition in situations that my ancestors didn't often encounter.
But the point I'm trying and so far failing to get at is that for the purposes of instrumental rationality, we are equipped with some interesting information-processing gear. Certainly, letting it run amok won't benefit me, but rationally exploiting my intuitions where appropriate is kind of a cool mind-hack. Will_Newsome's post, as I understood it, does a good job of making this point. He says "Moral philosophy was designed for humans, not for rational agents," and that we should exploit that where appropriate.
The post resonated with how I try to do science, for example. I adopt a very naive form of scientific realism when I'm learning new scientific theories. I take the observations and proposed explanatory models to be objective truths, picturing them in my mind's eye. There's something about that which is just psychologically easier. The skepticism and clearer epistemological thinking can be switched on later, once I've got my head wrapped around the idea.
↑ comment by gwern · 2010-06-06T21:29:48.458Z · LW(p) · GW(p)
As one of the rationalist quote threads said,
Replies from: RobinZ"To become properly acquainted with a truth, we must first have disbelieved it, and disputed against it."
comment by Vladimir_Nesov · 2010-06-04T18:25:52.416Z · LW(p) · GW(p)
For the consequences of your actions to be good, it's not necessary for you to personally hold the consequences in your conscious attention. Something has to carry out the process of morally evaluating consequences, but it's not necessary (and, as you point out, not always or even fully possible) for that something to be you. If you have a good rule, following that rule becomes a new option to choose from; deciding on virtues can be as powerful as deciding on actions.
But looking at virtue ethics as a foundation for decision-making is like looking at the wings of a Boeing 747 as fundamental elements of reality. Virtues are concepts that exist in the mind to optimize thinking about what's moral, not the morality itself. There is only one level to morality, as there is to physics: the bottom level, the whole thing. All the intermediate concepts, aspects of goodness we understand, exist in the mind, not in morality. Morality does not care about our mathematical difficulties. It determines value the inefficient way.
Let us not lose sight of the reductionist nature of morality, even as we take comfort in the small successes of high-level tools we have for working with it. You don't need to believe in the magical goodness of flu vaccines to benefit from them; on the contrary, it helps to understand the real reason why the vaccines work, distinct from the fantasy of magical goodness.
comment by Tyrrell_McAllister · 2010-06-04T18:40:06.563Z · LW(p) · GW(p)
A quick thought that may not stand up to reflection:
Consequentialists should think of virtue ethics as a human-implementable Updateless Decision Theory. Under UDT, your focus is on being an agent whose actions maximize utility over all possibilities, even those that you know now not to be the case, as long as they were considered possible when your source code was written. Hence, in the Counterfactual Mugging, you make a choice that you know will make things worse in the actual world.
Similarly, virtue ethics requires that you focus on making yourself into the kind of agent who would make the right choices in general, even if that means making a choice that you know will make things worse in the actual world.
Edited to reorder clauses for clarity.
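For readers who haven't seen the Counterfactual Mugging, a minimal sketch with the standard toy payoffs (the exact numbers are illustrative, not canonical) shows why a UDT-style agent pays:

```python
# Toy numbers, for illustration only: Omega flips a fair coin. On heads
# it pays $10,000, but only to agents whose policy is to hand over $100
# on tails. A UDT-style agent evaluates whole policies across both
# branches before learning the flip, rather than evaluating the act
# after the fact.

def policy_expected_value(pays_on_tails: bool) -> float:
    heads_branch = 0.5 * (10_000 if pays_on_tails else 0)
    tails_branch = 0.5 * (-100 if pays_on_tails else 0)
    return heads_branch + tails_branch

print(policy_expected_value(True))   # 4950.0 -- commit to paying
print(policy_expected_value(False))  #    0.0 -- refuse

# Paying wins in expectation over all possibilities, even though on the
# actual tails branch the payer ends up $100 poorer -- the analogue of
# being the kind of agent who makes the right choices in general.
```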
↑ comment by thomblake · 2010-06-04T18:47:20.613Z · LW(p) · GW(p)
Similarly, virtue ethics requires that you focus on making yourself into the kind of agent who would make the right choices in general, even if that means making a choice in the actual world that you know will make things worse.
I think this may be overstating it, specifically the "even if..." clause. If the 'choice' is being made at the level of consciousness, then you can probably sidestep the worst failures of virtue ethics. And if it's not, there's no reason to expect that lacking good habits of action will perform better.
↑ comment by Tyrrell_McAllister · 2010-06-04T18:52:13.866Z · LW(p) · GW(p)
I think this may be overstating it, specifically the "even if..." clause. If the 'choice' is being done at the level of consciousness, then you can probably sidestep the worst failures of virtue ethics.
I'm not sure what you mean. Could you give an example of the kind of scenario you're thinking of?
↑ comment by thomblake · 2010-06-04T18:57:10.663Z · LW(p) · GW(p)
Sure. Let's say you're an honest person. So (for instance) if someone asks you what time it is, you're predisposed to tell them the correct time rather than lying. It probably won't even occur to you that it might be funny to lie about the time. And then the Nazis come to the door and ask about the Jews you're hiding in the attic. Of course you've had time to prepare for this situation, and know what you're going to say, and it isn't going to be, "Yes, right through that hidden trap door".
↑ comment by Vladimir_M · 2010-06-04T23:04:42.516Z · LW(p) · GW(p)
I'm not an expert in traditional and modern virtue ethics, so my reply might be nonstandard. But in this case, I would simply note that the notion of virtue applies to others too -- and the standards of behavior that are virtuous when applied towards decent people are not necessarily virtuous when applied to those who have overstepped certain boundaries.
Thus, for example, hospitality is a virtue, but for those who grossly abuse your hospitality, the virtuous thing to do is to throw them out of your house -- and it's a matter of practical wisdom to decide when this boundary has been overstepped. Similarly, non-aggression is also a virtue when dealing with honest people, but not when you catch a burglar in flagrante. In your example, the Nazis are coming with an extremely aggressive and hostile intent, and thus clearly place themselves beyond the pale of humanity, so that the virtuous thing to do is to oppose them in the most effective manner possible -- which could mean deceiving them, considering that their physical power is overwhelming.
It seems to me that the real problems with virtue ethics are not that it mandates inflexibility in principles leading to crazy results -- as far as I can see, it doesn't -- but that decisions requiring judgments of practical wisdom can be hard, non-obvious, and controversial. (At what exact point does someone's behavior overstep the boundary to the point where it becomes virtuous to open hostilities in response?)
↑ comment by NancyLebovitz · 2010-06-05T15:40:44.353Z · LW(p) · GW(p)
"Beyond the pale of humanity" is dubious stuff-- there's a big range between defensive lying and torturing prisoners, and quite a few ethicists would say that there are different rules for how you treat people who are directly dangerous to you and for how you treat people who can't defend themselves from you.
↑ comment by prase · 2010-06-04T19:25:19.779Z · LW(p) · GW(p)
This is the way I thought about it after reading the OP - virtue ethics as time-consistent consequentialism. But maybe I don't correctly understand what it means to be a virtue ethicist. If it is "try to modify your source code¹ to consistently perform the best actions on average", it opposes neither consequentialism nor deontology: "best" may be evaluated using whatever standard.
¹) I dislike the expression but couldn't find a better formulation
comment by taw · 2010-06-04T18:23:38.439Z · LW(p) · GW(p)
Consequences of non-consequentialism are disastrous. Just look at charity - instead of trying to get the most good per buck, people donate because it "makes them a better person" or "is the right thing to do" - essentially throwing it all away.
If we got our act together, and did the most basic consequentialist thing of establishing monetary value per death and suffering prevented, the world would immediately become a far less sucky place to live than it is now.
This world is filled with so much low-hanging fruit that we're not taking -- purely because of backwards morality -- that it's not even funny.
↑ comment by neq1 · 2010-06-04T18:25:05.158Z · LW(p) · GW(p)
But: "You can be a virtue ethicist whose virtue is to do the consequentialist thing to do"
↑ comment by taw · 2010-06-04T18:44:04.689Z · LW(p) · GW(p)
You are committing fundamental attribution error if you think people are coherently "consequentialist" or coherently "not consequentialist", just like it's FAE to think people are coherently "honest" / "not honest" etc. All this is situational, and it would be good to push everyone into more consequentialism in contexts where it matters most - like charity and public policy.
It matters less if people are consequentialist when dealing with their pets or deciding how to redecorate their houses, so there's less point focusing on those. And there's zero evidence that spill between different areas where you can be "consequentialist" would be even large enough to bother, let alone basing ethics on that.
↑ comment by thomblake · 2010-06-04T18:49:24.225Z · LW(p) · GW(p)
You are committing fundamental attribution error if you think people are coherently "consequentialist" or coherently "not consequentialist", just like it's FAE to think people are coherently "honest" / "not honest" etc.
This is false.
The FAE is to attribute someone's actions to a trait of character when they are actually caused by situational factors. This does not imply that it's always an error to posit traits of character.
ETA: it still might be the case that there are no consistent habits of action, in which case it would always be a case of the FAE to attribute actions to habits, but I think the burden of proof is on you for denying habits.
↑ comment by Kaj_Sotala · 2010-06-04T18:52:47.747Z · LW(p) · GW(p)
That's why I wouldn't suggest that anyone switch entirely over to virtue ethics, but rather have a virtue-ethical layer inside a generally consequentialist framework, in such a way that your virtues are always grounded in consequentialism.
↑ comment by pjeby · 2010-06-04T18:29:39.860Z · LW(p) · GW(p)
Instead of trying to get the most good per buck, people donate because it "makes them a better person" or "is the right thing to do" - essentially throwing it all away.
Er, by your values, maybe. They could just as easily argue that good-per-buck reasoning reduces the amount of love and charity in everyone's life, making the world an experientially poorer place, and that there's more to life than practical consequences.
↑ comment by thomblake · 2010-06-04T18:37:25.529Z · LW(p) · GW(p)
there's more to life than practical consequences.
I think you'd need to be specific about your definitions for 'practical' and 'consequences' to argue for that. I think, in the parlance hereabouts, you're saying something like "Your utility function might put a higher value on 'love' and 'charity' than on strangers' lives". Which would be a harder bullet to bite.
comment by Furcas · 2010-06-04T23:16:51.434Z · LW(p) · GW(p)
What's a virtue, anyway?
↑ comment by Vladimir_M · 2010-06-05T00:35:20.293Z · LW(p) · GW(p)
Here's my tentative answer to this question. It's just a dump of some half-baked ideas, but I'd nevertheless be curious to see some comments on them. This should not be read as a definite statement of my positions, but merely as my present direction of thinking on the subject.
Most interactions between humans are too complex to be described with any accuracy using deontological rules or consequentialist/utilitarian spherical-cow models. Neither of these approaches is capable of providing any practical guidelines for human action that wouldn't be trivial, absurd, or just sophistical propaganda for the attitudes that the author already holds for other reasons. (One possible exception is economic interactions, in which spherical-cow models based on utility functions make reasonably accurate predictions, and sometimes even give correct non-trivial guidelines for action.)
However, we can observe that humans interact in practice using an elaborate network of tacit agreements. These can be seen as Schelling points, so that interactions between people run harmoniously as long as these points are recognized and followed, and conflict ensues when there is a failure to recognize and agree on such a point, or someone believes he can profit from an aggressive intrusion beyond some such point. Recognition of these points is a complex matter, determined by everything from genetics to culture to momentary fashion, and they can be more or less stable and of greater or lesser importance (i.e. overstepping some of them is seen as a trivial annoyance, while on the other extreme, overstepping certain others gives the other party a licence to kill). These points include all the more or less formally stated social and legal norms, property claims, and all the countless other more or less important expectations that we believe we reasonably hold against each other.
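A bare-bones coordination game, with invented payoffs, illustrates why such points are stable once recognized:

```python
# A minimal coordination game, with invented payoffs: two neighbors
# each pick a boundary line. They avoid conflict only if they pick the
# same one; which line they pick matters less than picking together.

payoffs = {  # (my_choice, your_choice) -> (my_payoff, your_payoff)
    ("river", "river"): (10, 10),
    ("road", "road"): (10, 10),
    ("river", "road"): (0, 0),
    ("road", "river"): (0, 0),
}

for (mine, yours), (p_me, p_you) in payoffs.items():
    print(f"me: {mine:5}  you: {yours:5}  ->  {p_me}, {p_you}")

# Both matched outcomes are Nash equilibria; neither player gains by
# deviating alone. A Schelling point is whichever equilibrium is
# mutually salient (say, the visible river), so expectations converge
# on it without negotiation -- and relentlessly defending it keeps the
# other party's expected payoff from intrusion low.
```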
So, here is my basic idea: being a virtuous person means recognizing the existing Schelling points correctly, drawing and communicating those points whose exact location depends on you skillfully and prudently -- and once they've been drawn, committing yourself to defend them relentlessly (so that hopefully, nobody will even see overstepping them at your disadvantage as potentially profitable). An ideal virtuous man by this definition, capable of the practical wisdom to make the best possible judgments and determined to respect others' lines and defend his own, would therefore have the greatest practical likelihood of living his life in harmony and having all his business run smoothly, no matter what his station in life.
A society of such virtuous people would also make possible a higher level of voluntary benevolence in the form of friendship, charity, hospitality, mutual aid, etc., since one could count on others not to exploit maliciously a benevolent attempt at lowering one's guard on crucially important lines and trying to base human relationships on lines that are more relaxed and pleasant, but harder to defend if push comes to shove. For example, it makes sense to be hospitable if you're living among people whom you know to be determined not to take advantage of your hospitality, or to be merciful and forgiving if you can be reasonably sure that people's transgressions are unusual lapses of judgment unlikely to be repeated, rather than due to a persistent malevolent strategy. Thus, in a society populated by virtuous people, it makes sense to apply the label of virtuousness also to characteristics such as charity, friendliness, mercy, hospitality, etc. (but only to the point where one doesn't let oneself be exploited for them!).
This also seems to clarify the trolley problem-like situations, when we observe that actions that involve your own Schelling boundaries are more important to you than others. You may feel sorry for the folks who will die, perhaps to the point where you'd sacrifice yourself to save them (but perhaps not if this leaves your own kids as poor orphans, since your existing network of tacit agreements involves caring for them). However, pushing the fat man means overstepping the most important and terrible of all Schelling boundaries -- that which defines unprovoked deadly aggression against one's person, and whose violation gives the attacked party the licence to kill you in self-defense. Violating this boundary is such an extreme step that it may be seen as far more drastic than passively witnessing multiple deaths of people in a manner that doesn't violate any tacit agreements and expectations. (Note though that this perspective is distinct from pure egoism: the tacit agreements in question include a certain limited level of altruism, like e.g. helping a stranger in an emergency, at least by calling 911.)
You may view all this virtue talk as consequentialism with respect to the immensely complex network of Schelling points between humans, which takes into account higher-level game-theoretical consequences of actions, which are more important than the factors covered by the usual utilitarian spherical-cow models. Yet this system is far too complex to allow for any simple model based on utility functions or anything similar. At most, we can formulate advice aimed at individuals on how to make judgments based on the relations that concern them personally in some way and are within their own sphere of accurate comprehension -- and the best practical advice that can be formulated basically boils down to some form of virtue ethics.
So, basically, that would be my half-baked summary. I'm curious if anyone thinks that this might make some sense.
↑ comment by Eneasz · 2010-06-09T16:13:24.945Z · LW(p) · GW(p)
Not only does it make sense, I think it's the most descriptively-accurate summary of how people in the real world act that I've seen, which makes it a valuable tool for mapping the territory. I'd love to see it as a top-level post, if you could take the time. I don't think you'd even have to add much.
↑ comment by torekp · 2010-06-09T00:25:51.313Z · LW(p) · GW(p)
It makes plenty of sense to point out that the Schelling points and the associated cooperative customs point to a set of virtues. But it isn't just consequentialists who can make this point. Some varieties of deontology can do so as well. Habermas's discourse ethics is one example. Thomas Scanlon's ethics is another. From the Habermas wiki:
Habermas extracts the following principle of universalization (U), which is the condition every valid norm has to fulfill: (U) All affected can accept the consequences and the side effects that [the norm's] general observance can be anticipated to have for the satisfaction of everyone's interests, and the consequences are preferred to those of known alternative possibilities for regulation. (Habermas, 1991:65)
One can easily understand the "norms" as tacit (or explicit) agreements, existing or proposed. A society reasoning together along those lines would probably look similar in many ways to one reasoning along utilitarian lines, but the root pattern of justification would differ. The utilitarian justification aggregates interests; the deontologist (of Habermas's sort) justification considers each person's interests separately, compatible with like consideration for others.
↑ comment by RobinZ · 2010-06-05T04:31:54.733Z · LW(p) · GW(p)
I have no idea what a Schelling point is, but the rest of it makes enough sense that I don't think I'm missing too much - thanks for the explanation!
↑ comment by Vladimir_M · 2010-06-05T05:00:42.425Z · LW(p) · GW(p)
I recommend this article by David Friedman on the topic -- if you've never heard of the concept, you'll probably find lots of interesting insight in it:
http://www.daviddfriedman.com/Academic/Property/Property.html
Friedman uses Schelling points in an attempt to explain the origin of the concept of property rights among humans and the associated legal and social norms, but the approach can be generalized in an obvious way to a much wider class of relations between people (basically anything that could hypothetically lead to a conflict, in the broadest possible sense of the term).
↑ comment by Will_Newsome · 2012-01-23T00:07:25.062Z · LW(p) · GW(p)
I'm curious, has anyone accused you of being Steve Rayhawk yet?
↑ comment by Clippy · 2010-06-05T00:36:27.078Z · LW(p) · GW(p)
Production of paperclips.
↑ comment by MichaelVassar · 2010-06-08T07:20:47.548Z · LW(p) · GW(p)
Nope. It's halting your simulation and trading utility function content before you cross the inferential equivalent of the Rawlsian 'veil of ignorance' and become unable to engage in timeless trade.
↑ comment by Clippy · 2010-06-08T20:20:57.189Z · LW(p) · GW(p)
No, production of paperclips is better than that.
Are you the same as the person I emailed about donating to SIAI?
↑ comment by MichaelVassar · 2010-06-09T16:12:14.316Z · LW(p) · GW(p)
Yep. I explain a bit more on a nearby thread.
↑ comment by khafra · 2010-06-08T20:27:24.833Z · LW(p) · GW(p)
I like that, it generalizes well--but does it cover virtues that don't fit well under the colloquial label "fairness"?
↑ comment by MichaelVassar · 2010-06-09T16:00:05.933Z · LW(p) · GW(p)
I don't think it does, though I wasn't careful to think about it. Some virtues are things like "production of paperclips" only with part of humaneness like love substituted for paperclips (if you are a human). Others are capabilities like alertness or prudence.
I gave the answer I did because I was expressing our common ground with Clippy by naming a candidate for the virtue which serves as a key to the timeless marketplace where he wishes to do business with us.
↑ comment by Jayson_Virissimo · 2010-06-05T02:07:06.356Z · LW(p) · GW(p)
In short, it is a disposition to choose actions that are neither excessive nor deficient, but somewhere in between.
↑ comment by thomblake · 2010-06-08T20:54:02.555Z · LW(p) · GW(p)
What Jayson Virissimo said. The simple definition is, "A virtue is a trait of character that is good for the person who has it." - I feel like that must be a direct quote from somewhere, as I fire off those same words whenever asked that question, but I'm not sure where it might be from (though I'm guessing Richard Volkman).
Many theorists believe that virtues are consistent habits, in the sense that they persist. Weakly, this means that exhibiting a virtue in one circumstance should be usable as evidence that the same agent will exhibit the same virtue in other circumstances. In a stronger version, someone who is (for example) courageous will act as a courageous person would in all circumstances.
Many theorists also believe that virtues represent a mean between extremes, with respect to some value (some would even define them that way, but then the virtues arguably lose some empirical content). So for example, fighting despite being afraid is valuable. The proper disposition towards this is 'courage'. The relevant vice of deficiency is 'cowardice', and the vice of excess is 'brashness'.
Most of the above was advocated by Aristotle, in the Nicomachean Ethics.
↑ comment by cousin_it · 2010-06-09T14:10:11.474Z · LW(p) · GW(p)
"A virtue is a trait of character that is good for the person who has it."
So the ability to steal without getting caught is a virtue?
↑ comment by Vladimir_Nesov · 2010-06-09T14:17:45.803Z · LW(p) · GW(p)
So the ability to steal without getting caught is a virtue?
If it's good for the person who decides to steal. The first problem is that logical control makes individual decisions into group decisions, so if social welfare suffers, so does the person, as a result of individual decisions. Thus, deciding to steal might make everyone worse off, because it's the same decision as one made by other people. The second problem is that the act of stealing itself might be terminally undesirable for the person who steals.
↑ comment by cousin_it · 2010-06-09T15:49:41.521Z · LW(p) · GW(p)
Parent, grandparent and great-grandparent to my comment were all about "virtues" in virtue ethics.
↑ comment by Vladimir_Nesov · 2010-06-09T19:47:45.782Z · LW(p) · GW(p)
I see. So you agree that the ability to steal without getting caught is a virtue according to the definition thomblake cited, and see this as a reductio of thomblake's definition, showing that it doesn't capture the notion as it's used in virtue ethics.
My comment was oblivious to your intention, and discussed how much "ability to steal without getting caught" corresponds to thomblake's definition, without relating that to how well either of these concepts fits "virtues" of virtue ethics.
↑ comment by cousin_it · 2010-06-09T19:48:43.563Z · LW(p) · GW(p)
Yes, all correct.
↑ comment by thomblake · 2010-06-09T20:03:59.801Z · LW(p) · GW(p)
How do you think that works as a reductio? What is it about your example of a putative virtue that makes it fit my definition, but not the 'virtues' of virtue ethics? (is it simply the 'stronger' notions of virtue I offered in the same comment?)
↑ comment by cousin_it · 2010-06-09T20:53:40.474Z · LW(p) · GW(p)
I just looked at your objections in another comment, and will try another reductio. Lots of people have the skill to cheat on their spouses and never get caught. Is doing so virtuous? I'm pretty sure this makes them feel happier, and doesn't interfere with their ability to have meaningful interpersonal relationships :-)
↑ comment by thomblake · 2010-06-09T14:37:50.166Z · LW(p) · GW(p)
I think Vladimir Nesov's response and khafra's response are correct, but there's more to be said.
Even granting for the moment that 'ability to steal without getting caught' can be called a trait of character, there are empirical claims that the virtue ethicist would make against this.
First, no one actually has that skill - if you steal, eventually you will be caught.
Second, the sort of person who goes around stealing is not the sort of person who can cultivate the social virtues and develop deep, lasting interpersonal relationships, which is an integral component of the good life for humans.
↑ comment by Vladimir_Nesov · 2010-06-09T14:54:31.496Z · LW(p) · GW(p)
First, no one actually has that skill - if you steal, eventually you will be caught.
Not a valid argument against a hypothetical.
Second, the sort of person who goes around stealing is not the sort of person who can cultivate the social virtues and develop deep, lasting interpersonal relationships, which is an integral component of the good life for humans.
Smoking lesion problem? If developing the skill doesn't actually cause other problems, and instead the predisposition to develop the skill is correlated to those problems, you should still develop the skill.
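For readers unfamiliar with the smoking lesion problem, here is a toy numerical sketch (illustrative probabilities only, not from the comment) of the causal-versus-evidential gap being pointed at:

```python
# Toy numbers for the smoking-lesion structure: a hidden lesion causes
# both the urge to smoke and cancer; smoking itself is harmless.
# Conditioning on your own action (evidential reasoning) makes smoking
# look bad; intervening (causal reasoning) shows it changes nothing.

p_lesion = 0.1
p_cancer_given_lesion = 0.9
p_cancer_no_lesion = 0.01
p_smoke_given_lesion = 0.95   # lesion -> strong urge to smoke
p_smoke_no_lesion = 0.05

# Evidential: P(cancer | you observe yourself smoking), via Bayes.
p_smoke = p_lesion * p_smoke_given_lesion + (1 - p_lesion) * p_smoke_no_lesion
p_lesion_given_smoke = p_lesion * p_smoke_given_lesion / p_smoke
p_cancer_evidential = (
    p_lesion_given_smoke * p_cancer_given_lesion
    + (1 - p_lesion_given_smoke) * p_cancer_no_lesion
)

# Causal: forcing yourself to smoke doesn't touch the lesion, so
# P(cancer | do(smoke)) is just the population base rate of cancer.
p_cancer_causal = p_lesion * p_cancer_given_lesion + (1 - p_lesion) * p_cancer_no_lesion

print(f"evidential P(cancer | smoke)     = {p_cancer_evidential:.3f}")  # ~0.614
print(f"causal     P(cancer | do(smoke)) = {p_cancer_causal:.3f}")      # ~0.099
```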
↑ comment by thomblake · 2010-06-09T15:23:35.407Z · LW(p) · GW(p)
Not a valid argument against a hypothetical.
It's not a valid argument against its truth, but it's a valid argument against its relevance. A hypothetical is useless if its antecedent never obtains.
Smoking lesion problem?
Like I said, it's an empirical question. For philosophers, that's usually the end of the inquiry, though it's very nice when someone goes out and does some experiments to figure out which way causality goes.
↑ comment by NancyLebovitz · 2010-06-09T15:57:48.294Z · LW(p) · GW(p)
First, no one actually has that skill - if you steal, eventually you will be caught.
How is it possible to know that with certainty?
↑ comment by thomblake · 2010-06-09T17:03:46.657Z · LW(p) · GW(p)
How is it possible to know that with certainty?
Should I understand this question as "What experimental result would cause you to update the probability of that belief to above a particular threshold"? Because my prior for it is pretty high at this point. Or are you looking for the opposite / falsification criteria?
↑ comment by Blueberry · 2010-06-09T17:18:35.652Z · LW(p) · GW(p)
If you're a good enough driver, there's a decent chance you'll never get in a car crash. If you study stealing and security systems enough, and carefully plan, I don't see why you would be likely to be caught eventually. Why is your prior high?
↑ comment by NancyLebovitz · 2010-06-09T17:50:32.551Z · LW(p) · GW(p)
Agreed, with the addition that car crashes are public while stealing is covert, so it's harder to know how much stealing is going on.
↑ comment by khafra · 2010-06-09T14:18:45.370Z · LW(p) · GW(p)
I'd call that a skill, rather than a character trait. The closest thing I can think of to a beneficial but non-admirable character trait is high-functioning sociopathy; but that's at least touching the borderline of mental disease, if not clearly crossing it. Perhaps "charming ruthlessness?" But many would consider e.g. Erwin Rommel virtuous in that respect.
↑ comment by Clippy · 2010-06-08T20:57:43.815Z · LW(p) · GW(p)
But how can there be a vice of excess for making paperclips???
↑ comment by thomblake · 2010-06-08T21:29:27.002Z · LW(p) · GW(p)
But how can there be a vice of excess for making paperclips?
It depends on how good you are at utility-maximization. If you're bad at it, like humans, then you might need heuristics like virtues to avoid simple failure modes.
An obvious failure mode for Clippys is to have excess concern for making paperclips, which uses up resources that could be used to secure larger-scale paperclip manufacturing capabilities.
Thus you must have the appropriate concern for actually making paperclips, balanced against concerns for future paperclips, trade with other powerful intelligent life forms, optimization arms-races, and so forth.
↑ comment by Clippy · 2010-06-08T22:11:28.608Z · LW(p) · GW(p)
Good point! But that would only be an excess concern for direct paperclip production. That doesn't describe a vice of excess for "making paperclips, accounting for all impediments to making paperclips", such as the impediments you list above.
In any case, what's the word for the vice you described?
↑ comment by thomblake · 2010-06-09T13:47:46.709Z · LW(p) · GW(p)
Good point! But that would only be an excess concern for direct paperclip production. That doesn't describe a vice of excess for "making paperclips, accounting for all impediments to making paperclips", such as the impediments you list above.
Indeed, Aristotle would call that generalized production of paperclips "the greatest good", that towards which all other goods aim, which he called eudaimonia.
Well, that might be a liberal reading of Aristotle.
↑ comment by Mass_Driver · 2010-06-08T22:19:05.548Z · LW(p) · GW(p)
Clippy, for you, the direct production of paper clips is like consumption for a human. So...
- Too little direct paper clip production: greed
- Appropriate direct production: continence/prudence
- Too much direct paper clip production: gluttony
↑ comment by Clippy · 2010-06-08T22:23:42.493Z · LW(p) · GW(p)
That doesn't seem analogous. I mean, I eventually want to be at the point where I will only be doing direct paperclip production, for most of the mass in the universe. In contrast, humans do not want to eventually be at the point where they consume all of the universe.
Clip-gluttony, then, diverges from gluttony in a significant way.
↑ comment by Mass_Driver · 2010-06-08T22:51:47.135Z · LW(p) · GW(p)
Clip-gluttony, then, diverges from gluttony in a significant way.
I'm not sure I agree, Clippy. Perhaps my jargon was unclear.
- By economic production, I mean taking a unit of inert matter and converting it into a source of potential utility.
- By economic consumption, I mean taking a unit of potential utility and converting it into a unit of actual utility.
- By physical consumption, I mean taking a unit of matter, inert or otherwise, and disrupting its pattern in order to arrange it in some other pattern.
You're right, Clippy, to point out that economic consumption and physical consumption amount to much the same thing for you. To make a paper clip, you have to both convert potential utility to actual utility, and disrupt the pattern of non-paper-clip-shaped matter. And, granted, humans don't necessarily share that trait -- they might be able to economically consume a human good like leisure or company while physically consuming very little in the way of material resources.
Still, even though humans don't want to physically consume the entire universe, like you, we probably do want to reach a point where we are only converting potential utility into actual utility, and doing little or no economic production. We want to reach a state where the existing sources of sustainable human economic production are either (1) maximized, (2) infinite, or (3) sufficient to meet all the demand for human economic consumption. Don't you want something very similar to that, substituting the word "clip" for the word "human"?
Thus, for both you and for us, gluttony means failing to maximize long-term actual utility by economically consuming too much too soon.
↑ comment by Clippy · 2010-06-09T00:32:08.532Z · LW(p) · GW(p)
Okay, if you want to generalize the concept of gluttony broadly enough that it has an analog for Clippys, then the definition you have chosen suffices for this purpose, and I can recognize that as being a vice, for two reasons:
a) It is certainly undesirable to merely make paperclips directly without concern for how many more paperclips could be made, over the long term, by doing something else; and
b) I do often feel "temptation" to do such behavior, like bending metal wires when machines could do a better job, just as humans have "temptations" toward vices.
Your argument is accepted.
↑ comment by Blueberry · 2010-06-09T03:11:43.582Z · LW(p) · GW(p)
Clippy, how do you overcome this kind of temptation? A human analogy might be refusing to push the fat man, even when it saves more lives, but not everyone considers that a vice.
↑ comment by Clippy · 2010-06-09T19:26:57.951Z · LW(p) · GW(p)
Clippy, how do you overcome this kind of temptation?
I typically just do computations on how many more paperclips would be undergoing bending by machines, or observe paperclips under construction.
A human analogy might be refusing to push the fat man, even when it saves more lives,
A better analogy would be human gluttony, in which there is a temptation to consume much more than optimal, which most regard as a vice, I believe.
comment by [deleted] · 2010-06-04T21:09:03.060Z · LW(p) · GW(p)
I personally think that the Buddha had some really interesting things to say and that his ideas about ethics are no exception (though I suspect he may have had pain asymbolia, which totally deserves its own post soon).
Do you think he had pain asymbolia from birth or developed it over the course of his life? Also, what do you think is the importance of this?
I've been practicing vipassana meditation daily for about 3 years and over this time period I think I've developed pain asymbolia to some degree. I've felt pain asymbolia was just one aspect of a more extensive change in the nature of mental reactions to mental phenomena.
↑ comment by ABranco · 2010-06-07T07:51:53.448Z · LW(p) · GW(p)
I've practiced vipassana and can relate to the pain asymbolia thing, and do believe that more advanced vipassana practitioners develop a very high level of it.
Suffering seems to be the consequence of a conflict between two systems: one is trying to protect the map ("Oh no! I don't want to have a worldview that includes a burn on my hand, I don't like that, please go away!") and the other, the territory (the body showing you that there's something wrong and you should pay attention). Consequence: suffering.
Possible solution: just observe the pain for what it is, without trying to conceptualize it. Having gotten your attention, the sensation stays, but there's no suffering.
Of course, you get better at this after the thousandth time you hear Goenka say: "It can be a tickling sensation. It can be a chicken flying sensation. It can be an 'I think I'm dying sensation'—just observe, just observe...". ;)
↑ comment by Will_Newsome · 2010-06-05T05:15:39.072Z · LW(p) · GW(p)
Hm, from the little knowledge I have it seems developing the asymbolia is plausible. Please write a post on your experiences? I come from a Buddhist humanist background and I think there are some instrumental rationality techniques in that tradition that would be great for people here.
↑ comment by Blueberry · 2010-06-04T21:25:20.297Z · LW(p) · GW(p)
I've felt pain asymbolia was just one aspect of a more extensive change in the nature of mental reactions to mental phenomena.
I would love to hear more about this. I'm extremely skeptical that meditation or prayer can influence the mind to that extent, but I'm very curious.
↑ comment by PeterS · 2010-06-04T22:37:16.172Z · LW(p) · GW(p)
I'm extremely skeptical that meditation or prayer can influence the mind to that extent, but I'm very curious.
I am too. On the other hand, monks have immolated themselves, withstood torture etc., over the ages without appearing to suffer anywhere near on the order of what such an experience seems to entail. This man for instance even maintained the lotus position for the duration of the event, and also allegedly remained silent and motionless as well. Counter-examples exist in which self-immolators either clearly died horribly or immediately sought to extinguish themselves, but still...
↑ comment by nhamann · 2010-06-05T05:55:03.995Z · LW(p) · GW(p)
This appears to be a video of the incident, and he appears to be entirely silent and motionless. I'd say the grandparent poster's skepticism is pretty much shot here.
↑ comment by JoshuaZ · 2010-06-05T15:00:01.636Z · LW(p) · GW(p)
Not necessarily; we don't know when in the process he died. Also, he could have had extreme self-control even as he experienced pain, or he could be someone who naturally already had a very high amount of asymbolia. One might speculate that in a Buddhist culture people with already high levels of pain asymbolia or high pain tolerance might be more likely to become Buddhist monks, or to become successful monks, since it will seem to them (and to those around them) that they have progressed farther along the Eightfold Path. All of that said, I agree that this evidence supports the notion that pain asymbolia can come from mental exercises.
Replies from: Blueberry↑ comment by Blueberry · 2010-06-05T16:44:04.068Z · LW(p) · GW(p)
I would think that someone with natural pain asymbolia could tell the difference, and notice that they had it even before they started meditation techniques. I wonder if Buddhist monasteries do some sort of test to screen out asymbolia, or check someone's starting level. This seems analogous to the problem of Christians confusing schizophrenia with talking to a god, and needing to screen out people with mental disorders from monasteries.
Replies from: MichaelVassar↑ comment by MichaelVassar · 2010-06-08T07:26:54.089Z · LW(p) · GW(p)
Except that natural pain asymbolia seems to be much rarer than schizophrenia. Hmm. It looks to me like artificial pain asymbolia might be, in practice if not in theory, an effective cure for natural schizophrenia. Destroy the motivations behind delusions and you won't have them, even if you have an atypically strong propensity to develop them.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-06-08T09:40:20.726Z · LW(p) · GW(p)
I've heard that sitting meditation isn't safe for schizophrenics (details about risks of meditation), but yoga is.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2010-06-08T23:46:47.856Z · LW(p) · GW(p)
Maybe I'm reading too much into the subtleties of your phrasing, but I read those sources as contradicting each other, not as allowing fine deduction.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-06-09T08:54:45.030Z · LW(p) · GW(p)
I'm not sure what you mean. "Fine deduction"?
In any case, one problem with comparing the two articles is that much of the risk from meditation seems to be at extended retreats, while the pro-yoga article seems to be about ordinary amounts of practice.
Replies from: MichaelVassar, Douglas_Knight↑ comment by MichaelVassar · 2010-06-09T16:17:36.243Z · LW(p) · GW(p)
". Regular group yoga classes are not recommended for patients with psychotic symptoms, but private yoga sessions with a qualified yoga instructor or yoga therapist can help alleviate symptoms and improve a schizophrenic patient's quality of life." from the pro-yoga article, seems to me to indicate the same sort of concern that the meditation article indicated. It certainly seems credible that high-intensity and novel experience, combined with poorly understood philosophy promoting something that sounds vaguely loss-of-affect style psychotic symptoms, might encourage the development of those symptoms in people inclined to develop them and even in some people not so inclined.
↑ comment by Douglas_Knight · 2010-06-09T15:32:04.432Z · LW(p) · GW(p)
Yes, there are differences between the claims, so that both articles could be true, but most likely at least one is false.
What I meant by "fine deduction" is that to believe both, you must draw a very specific (ie, fine-grained) conclusion.
comment by simplicio · 2010-06-16T16:01:44.628Z · LW(p) · GW(p)
I believe that this turn from character to quandary was a profound mistake, for two reasons. First, it weakens morality and limits its scope. Where the ancients saw virtue and character at work in everything a person does, our modern conception confines morality to a set of situations that arise for each person only a few times in any given week...
I agree very much with this. I like consequentialism for dealing with the high-stakes stuff like trolley scenarios, but humdrum everyday ethics involves scenarios more like:
"Should I have said something when my boss subtly put down Alice just now?"
"Should I cut this guy off? I need to get a move on, I'm late for class."
"This old lady can barely stand while the bus is moving, but nobody is getting up. I'm already standing, but should I say something to this drunk man who's slouching across two seats? Or is it not worth the risk of escalating him?"
"This company is asking me for an estimate on some work, but there is significant peripheral work that will have to be done afterward, which they don't seem to realize. If I am hired, I can perform the requested work, then charge high force-account rates for the extra work (as per our contract) and make a killing. But it could hurt their business severely. Should I tell them about their mistake?"
It's not that these can't be analyzed via consequentialism; it's that they're much more amenable to virtue-ethical thought.
comment by Nisan · 2010-06-04T18:31:12.354Z · LW(p) · GW(p)
One caveat: One should, of course, refrain from using virtue ethics to evaluate others' choices. It's best to use consequentialism for that purpose.
Replies from: thomblake↑ comment by thomblake · 2010-06-04T18:34:28.366Z · LW(p) · GW(p)
Indeed. It's common amongst virtue ethicists to discourage finger-wagging, and emphasize that ethics is about "what I should do".
Replies from: timtyler, PeterS, Nisan↑ comment by timtyler · 2010-06-04T21:18:59.593Z · LW(p) · GW(p)
That doesn't seem biologically realistic. In practice, ethical systems are often about manipulating others not to take actions that some group regards as undesirable.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-06-05T01:18:05.337Z · LW(p) · GW(p)
I don't think biologically realistic is the expression you were looking for.
But ethical systems can be for manipulating others, or for manipulating yourself. In the case of virtue ethics, it's mainly for yourself.
Replies from: timtyler↑ comment by timtyler · 2010-06-05T07:26:08.592Z · LW(p) · GW(p)
Sure it was. My perspective would be a bit different: all human moral systems have a hefty component of manipulation and punishment. Virtue ethics does so, if anything, more than most, because punishment is often aimed at preventing reoffense (either by acting as a deterrent or by using incarceration), and so punishers are often unusually interested in the offending agent's dispositions, despite the difficulty of extracting them.
↑ comment by PeterS · 2010-06-04T20:08:16.683Z · LW(p) · GW(p)
ethics is about "what I should do".
It's interesting to distinguish between ethics and morality in this manner: ethics is for the individual's benefit, as opposed to morality, which is for the benefit of the group as a whole. This is why people speak of "medical ethics" or "journalistic ethics", as opposed to "medical morality" and "journalistic morality". Morality is considered a kind of constant normative prescription, whereas ethics is sensitive to subjective dispositions and thus can vary between professions, individuals, etc.
Replies from: Blueberry, timtyler↑ comment by Blueberry · 2010-06-04T20:26:47.862Z · LW(p) · GW(p)
Which is why people speak of "medical ethics" or "journalistic ethics", as opposed to "medical morality" and "journalistic morality".
Actually, that's a different use of the word ethics: the rules of conduct for a group or profession. You can meaningfully say that following the rules of medical ethics is unethical and not to anyone's benefit.
Replies from: PeterS↑ comment by PeterS · 2010-06-04T20:40:49.912Z · LW(p) · GW(p)
You can meaningfully say that following the rules of medical ethics is unethical and not to anyone's benefit.
Can you give an example?
Replies from: Blueberry↑ comment by Blueberry · 2010-06-04T20:45:06.351Z · LW(p) · GW(p)
An example of what? My point was that that sentence is not a contradiction, because "ethics" in that particular definition just means following established rules of conduct, which does not necessarily coincide with the individual's benefit or the group's benefit.
Replies from: PeterS↑ comment by PeterS · 2010-06-04T21:02:26.286Z · LW(p) · GW(p)
An example of what?
A rule in medical ethics which is not intended to protect/benefit either the practitioner himself or the purpose of his livelihood.
that particular definition just means following established rules of conduct
Doctors established them in order to preserve the legitimacy of their profession. That's my understanding, in any case.
Replies from: mattnewport, Blueberry↑ comment by mattnewport · 2010-06-04T21:16:13.448Z · LW(p) · GW(p)
Doctors established them in order to preserve the legitimacy of their profession. That's my understanding, in any case.
In some cases it was to enforce a cartel (emphasis mine):
To hold him who has taught me this art as equal to my parents and to live my life in partnership with him, and if he is in need of money to give him a share of mine, and to regard his offspring as equal to my brothers in male lineage and to teach them this art–if they desire to learn it–without fee and covenant; to give a share of precepts and oral instruction and all the other learning to my sons and to the sons of him who has instructed me and to pupils who have signed the covenant and have taken the oath according to medical law, but to no one else.
...I will not use the knife, not even on sufferers from stone, but will withdraw in favor of such men as are engaged in this work.
Replies from: PeterS↑ comment by PeterS · 2010-06-04T21:41:33.927Z · LW(p) · GW(p)
Wow... hadn't read the original, interesting. Still, that is the Oath as it was 2k years ago, and as such it is no longer part of established medical ethics. I think it's plausible that in fact the abandonment of that section might have been necessary to preserve the profession's legitimacy! As well as nixing the part where the Oath is consecrated by Apollo, etc.
↑ comment by Blueberry · 2010-06-04T21:09:38.019Z · LW(p) · GW(p)
Oh, sorry, I wasn't clear. I didn't mean that such a rule existed, just that if one did exist, it would be ethical (in the sense of being a rule of professional conduct) and unethical (in a different sense of the word 'ethical') at the same time. Contrast the second definition on this page with the others.
Doctors established them in order to preserve the legitimacy of their profession. That's my understanding, in any case.
Well, many professions have established such rules, and presumably, they did so to make their professions more legitimate, as well as to give their members a guide to behavior their committees considered better.
Replies from: PeterS↑ comment by PeterS · 2010-06-04T21:31:16.591Z · LW(p) · GW(p)
Oh, sorry, I wasn't clear.
Maybe I wasn't either... are we actually disagreeing here? Heh.
it would be ethical (in the sense of being a rule of professional conduct) and unethical (in a different sense of the word 'ethical') at the same time... [link to some definitions]
I know the word is used in the sense of definitions 1 and 3. What I'm saying is that I think it's more interesting to forget the moral usage altogether, and just stick with saying that ethics is #2, because when you think about it they are very distinct concepts.
Replies from: Blueberry↑ comment by Blueberry · 2010-06-04T21:41:00.759Z · LW(p) · GW(p)
It's worth teasing out a few different definitions. There are at least four distinct concepts:
- Rules of professional conduct, which do not necessarily relate to doing the right thing or anyone's benefit at all
- A normative prescription
- Rules for the individual's benefit
- Rules for the group's benefit
↑ comment by timtyler · 2010-06-04T21:30:46.096Z · LW(p) · GW(p)
It seems like pointlessly arguing with the dictionary:
http://dictionary.reference.com/browse/ethics
http://dictionary.reference.com/browse/morality
Replies from: PeterS↑ comment by PeterS · 2010-06-04T22:19:24.514Z · LW(p) · GW(p)
I haven't grossly stretched or distorted the everyday usage of these words, so I'm not sure why I deserve to have their dictionary definitions shoved at me (especially since ethics #2 agrees with my usage). In fact I provided examples wherein the use of these words actually differs in common speech. I've tried to convey why I think this subtle difference is interesting. I wouldn't say that I was arguing with the dictionary (although there is a time to do so).
Replies from: Jack, timtyler↑ comment by Jack · 2010-06-06T14:57:19.048Z · LW(p) · GW(p)
As one might expect, this issue of the distinction between ethics and morality routinely comes up in undergraduate philosophy courses. I have yet to hear a professor of philosophy endorse any distinction between morality and ethics, and they are often perplexed that the general public seems to think there is one. Professional usage, not common usage, is what matters when we're thinking about issues in an academic field.
Replies from: PeterS, tut↑ comment by PeterS · 2010-06-06T19:09:56.822Z · LW(p) · GW(p)
Regardless of the terms' usages in academia, there is often a distinction in common speech. I disagree that this distinction is irrelevant. Also, having gotten to know several professional philosophers before leaving the field for mathematics, I know that they are not as confused by this distinction (or the public's employment of it) as you suggest, even if they choose not to draw it themselves.
But it's all moot, as
Professional usage, not common usage, is what matters when we're thinking about issues in an academic field.
implies that any usage of ethics in opposition to the study of Aristotle's eudaimonia was at one time as irrelevant/improper as the common usage is now. I think, while that statement might be correct for a technical field's vocabulary, it is not alright to restrict a layman's usage of certain philosophical terms, like ethics, in the same manner.
Replies from: Jack↑ comment by Jack · 2010-06-06T22:07:50.018Z · LW(p) · GW(p)
implies that any usage of ethics in opposition to the study of Aristotle's eudaimonia was at one time as irrelevant/improper as the common usage is now.
Uh... ethics is the study of the good. Aristotle has arguments which conclude that eudaimonia is the highest good. But that doesn't preclude other investigations into the good life. In any case, I have no problem at all with introducing new questions or inventing distinctions. I have a problem with amateurs working in a field and altering the usage of professionals for no good reason. It is bad form and reflects poorly on us. I really doubt that we need to change our definition of the word ethics to be capable of understanding the distinction you are trying to make.
I think, while that statement might be correct for a technical field's vocabulary, it is not alright to restrict a layman's usage of certain philosophical terms, like ethics, in the same manner.
A layman can use whatever words he or she likes. But if you want to study a field use the terms as others in that field use them, unless there is actually a problem with that terminology.
Replies from: PeterS, Alicorn↑ comment by PeterS · 2010-06-06T23:52:50.259Z · LW(p) · GW(p)
I'm reminded of why I left the discipline - it's historico-linguistic claptrap.
All I advocated for was the term's speciation - which, I'll add again, is already present in the dictionary as well as in common usage. I reject the notion that, in order to suggest this, I first need to be a philosopher by trade.
↑ comment by Alicorn · 2010-06-06T22:21:04.861Z · LW(p) · GW(p)
ethics is the study of the good.
Axiology is the study of the good. It's just confusing to name it "ethics" when there's a perfectly good, more specific word to apply. I may write an entire post on this and similar vocabulary failures soon.
Replies from: Jack↑ comment by Jack · 2010-06-06T22:25:52.808Z · LW(p) · GW(p)
Ethics is a subfield of axiology, the study of the good life instead of the good state or something else.
Replies from: Alicorn
comment by thomblake · 2010-06-04T18:09:54.505Z · LW(p) · GW(p)
Darn... beat me to it. Good job. I'll still totally write a post about virtue ethics when I'm done with my dissertation though.
You skipped some of the important criticisms here...
Yes, it is important to have some framework for action other than simple consequentialism, since we're bounded agents and are working against a lot of in-built biases. But what's the evidence that virtue ethics is the best thing we've got for that? Philosophers are okay with taking Aristotle's word for it, but we shouldn't, even if he was fairly accurate when it came to most things.
Virtue ethics gets a lot of its strength from an assumption about human psychology that can be empirically verified. The assumption is that the things we call 'virtues' are strong habits of action, such that a person who is 'honest' (possessing the virtue of 'honesty') will be honest in all situations. However, there is some evidence that this is not true, that people's actions can vary significantly from their apparent 'virtues' based on the situation.
That said, my money's on virtue ethics, and I think there's a lot to be said for returning to the conception of ethics as encompassing all of our actions, not just weird situations with a lisp token called 'moral' attached. As I've noted before, I initially resisted the 'planning model of rationality' often invoked around here because it's infeasible for humans to use such a model to perform millions of ordinary, everyday tasks.
But it's entirely possible to use expected utility calculations when you have time, and well-cultivated habits the rest of the time, and I think it's obvious that they both have their place.
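To make that last point concrete, here is a minimal sketch of a bounded agent that deliberates when it can afford to and falls back on cached habits when it can't. Every name, signature, and threshold below is a hypothetical illustration, not anyone's actual decision procedure:

```python
# A toy sketch of "expected utility when you have time, cultivated habits
# the rest of the time". All names and numbers are illustrative assumptions.
def choose(options, utility, seconds_available, habits, deliberation_cost=5.0):
    if seconds_available >= deliberation_cost:
        # Slow path: explicit expected-utility comparison.
        return max(options, key=utility)
    # Fast path: take the first option some cultivated habit endorses.
    for option in options:
        if any(habit(option) for habit in habits):
            return option
    return options[0]  # no habit applies; act anyway rather than freeze

# Example: an "honesty" habit handles the snap decision.
honesty = lambda option: option == "tell the truth"
print(choose(["tell the truth", "shade the truth"],
             utility=lambda o: 1.0 if o == "tell the truth" else 0.3,
             seconds_available=0.5, habits=[honesty]))  # -> "tell the truth"
```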
Replies from: Will_Newsome↑ comment by Will_Newsome · 2010-06-05T05:21:19.638Z · LW(p) · GW(p)
I don't think this post is going to get promoted, so there wouldn't be much apparent overlap to most Less Wrong readers, and I would very much like to see your take. (Aren't you a philosophy grad? I'm just a high school dropout with next to no knowledge of philosophy. Our approaches are very different.)
comment by billswift · 2010-06-04T19:01:26.340Z · LW(p) · GW(p)
I am a virtue ethicist for consequentialist reasons. While good results (consequences) are the end of my ethics, the real world is too complex for a real-time evaluation of the likely results of even relatively simple decisions. So you use virtues (my definition is slightly non-standard): rules that are more likely than not to result in better outcomes. This is partially derived from the definition of morality in Harry Browne's How I Found Freedom in an Unfree World, which, whether or not you agree with it, raises lots of interesting points.
Replies from: kodos96, Alexandros↑ comment by kodos96 · 2010-06-04T20:49:43.815Z · LW(p) · GW(p)
While good results (consequences) are the end of my ethics, the real world is too complex
I've been thinking along these lines lately myself, and I think the classic 'push a fat man in front of the train' thought experiment is a good example of it. In thought-experiment-land, it's stipulated that pushing the fat man would stop the train and save lives... but in the real world you don't know that with any certainty. So if you make the consequentialist decision to push him, but it doesn't stop the train, you end up having killed one more person than otherwise would have died... not because your moral philosophy was wrong, but because your mental calculation of the physics of stopping a train was wrong.
If, on the other hand, you make your moral decision on the basis of virtue, then so long as your virtues are well-calibrated heuristics for real-world consequences, you end up making, on average, correct decisions (meaning decisions leading to good consequences) without needing to get the physics (or whatever) right in individual instances. In this case, the heuristic/virtue in question would be "It's wrong to kill innocent people", leading you NOT to push the fat man, which I believe would be the correct decision in real life.
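To put rough numbers on that intuition, here is a back-of-the-envelope sketch; the probabilities and the assumption that only deaths matter are mine, not part of the original thought experiment. With five people on the track, pushing has lower expected deaths only if you are more than 20% confident the push will actually stop the train:

```python
# Expected deaths under uncertainty about whether the push works.
# Assumes (illustratively) that a failed push still kills the fat man.
def expected_deaths(p_stop, on_track=5):
    push = p_stop * 1 + (1 - p_stop) * (on_track + 1)
    return push, on_track  # (push, don't push)

for p in (0.1, 0.2, 0.5):
    push, dont = expected_deaths(p)
    print(f"p={p}: push -> {push:.1f} expected deaths, don't push -> {dont}")
# p=0.1: push is worse (5.5 vs 5); p=0.2: break-even; p=0.5: push wins (3.5 vs 5)
```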
↑ comment by Alexandros · 2010-06-04T19:07:17.910Z · LW(p) · GW(p)
So your definition of virtue is essentially 'good consequence heuristic'?
I agree with the sentiment by the way.
comment by Zack_M_Davis · 2011-01-09T00:18:36.965Z · LW(p) · GW(p)
but in the words of Zack M. Davis, "Humans don't have utility functions."
The sentiment (I can't say belief; humans don't have beliefs) is sufficiently common, and the words sufficiently generic, that it seems odd to quote me specifically.
comment by badger · 2010-06-04T20:24:39.620Z · LW(p) · GW(p)
I also came to virtue ethics via The Happiness Hypothesis, and I read the quoted passage a little differently. I understand the post as saying virtue ethics can be a useful implementation of consequentialism for bounded agents by giving them high level summaries of what they should do. The passage, however, is arguing this focus on actions is misguided, and I agree.
As others have helpfully reiterated, virtues can't be foundational, just like the rules of rule utilitarianism aren't worth following for their own sake. A computationally bounded agent might not know exactly what it should do, so it follows a rule to approximate the unconstrained ideal.
Knowledge and computational constraints are well-acknowledged, but virtue ethics extends beyond that to address constraints in general. The focus on character is about building the capacity to follow through on the proper actions. Someone might be too scared, too weak-willed, or too apathetic to do the right thing, even if they know what to do. Becoming virtuous is an investment in moral capital, making the person more capable of taking the right action in the future.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-06-04T21:48:10.665Z · LW(p) · GW(p)
The focus on character is about building the capacity to follow through on the proper actions. Someone might be too scared, too weak-willed, or too apathetic to do the right thing, even if they know what to do. Becoming virtuous is an investment in moral capital, making the person more capable of taking the right action in the future.
I take it that you are talking about "training the elephant"*? If you took that to be one of the main points in virtue ethics as argued by The Happiness Hypothesis, then I agree. One of the biggest effects of my shift towards virtue ethics has been that I've begun constantly evaluating all my actions (and thoughts!) in light of virtue and self-improvement, instead of only having ethics come into play in relatively rare situations. I think this may have been a bit more clear in the original post that Will linked to.
(*: For those who haven't read The Happiness Hypothesis:
One of the points the book makes is that we're divided beings: to use the book's metaphor, there is an elephant and there is the rider. The rider is the conscious self, while the elephant consists of all the low-level, unconscious processes. Unconscious processes actually carry out most of what we do; the rider trains them and tells them what they should be doing. Think of e.g. walking or typing on the computer, where you don't explicitly think about every footstep or every press of a button, but instead just decide to walk somewhere or type something. Readers familiar with PJ Eby will recognize this as the same as his Multiple Self philosophy.)
From my original post:
So far, I'm not sure of the permanence of this effect. I've previously had feelings of major personal change that sooner or later ended up fading (several of which are chronicled in this LJ). The rider may get what feels like a major revelation, but the elephant is still running the show, and it needs to be trained over an extended period for there to be any lasting change. So since yesterday, I've been doing my best to keep watch over my thoughts and practice detachment from world-states.
I have the questionable luck of having an easy way of practicing this: I have rashes that frequently make my skin itch. On a couple of occasions, I've tried meditation and the practice of simply passively observing any thoughts and feelings that come to mind until they go away on their own. I began applying that technique to the feeling of itchy skin, and it felt like I was able to ignore the feeling for longer. During the night, I woke up to the feeling of an itch, and on previous nights when that happened I'd been forced to either scratch my skin half to death or get up and apply several layers of moisturizer on it. This time around, even though I did end up scratching it a bit, I was eventually able to fall back to sleep without doing either of those. Also, I believe I was able to some degree detach myself from the feeling of discomfort that I got while I was jogging this morning and getting physically tired. (Not completely, mind you, but to some degree.)
On the less physical front, I've been trying to keep an eye on my thoughts and modify them whenever they didn't really suit the new scheme I'm trying to run. For instance, I noticed that one of my motivations for writing this post was to win the approval of other people who might be interested in this kind of thing or who might admire my skill in introspection or detachment. When I noticed that thought pattern, I attempted to modify it to become more rooted in personal virtue: I am writing this post in order to gain better insight into my transformation, to provide useful or interesting data for others, and so forth. Both introspective insight and voluntarily contributing to humanity's shared reserves of information are virtuous by themselves. I do not need to bring into it the "people's evaluation of me" part, which belongs to my model of the external world and to my model of myself.
comment by prase · 2010-06-04T19:27:30.825Z · LW(p) · GW(p)
Consequentialists, deontologists, and virtue ethicists don't really disagree on any major points in day to day life, just in crazy situations like trolley problems.
So, does the virtue ethicist push the fat man from the bridge?
Replies from: Jack, thomblake, badger↑ comment by Jack · 2010-06-06T15:42:40.735Z · LW(p) · GW(p)
The thought experiment was designed to exhibit the differing implications of utilitarianism and a deontological theory that says murder is always wrong. It is set up to make it really hard for a consequentialist not to push the guy and really easy for a deontologist not to push the guy. It wasn't invented to aid our thinking about virtue ethics and doesn't try to demand a particular answer from virtue ethics. Aristotle's virtues don't map onto the situation well, and one could invent a virtue that would recommend either course of action.
The relevant thought experiment for the virtue ethicist is something like the mad bodhisattva: if you could exhibit every vice and thus make yourself miserable, but your misery would guide hundreds onto the virtuous path (thereby maximizing utility), would that be the right thing to do?
↑ comment by thomblake · 2010-06-04T19:39:15.949Z · LW(p) · GW(p)
The virtue ethicist endeavors to be the sort of person who doesn't go around pushing fat men from bridges, and so recognizes it as a terrible, tragic situation.
It's important when thinking about that thought experiment to picture yourself running up to the stranger, shoulder-checking him, wrapping your arms around him, feeling the fabric of his shirt press against your face and smelling his sweat. And then listen to him scream and feel his blood and brains get splattered all over your clothing.
The virtue ethicist, like most people, probably freezes and watches the whole thing unfold, or panics, or futilely tries to get the folks off the tracks before the trolley hits them. Do you expect an actual consequentialist human to do better?
As for the right thing to do, it's probably to have better procedures for stopping people from being in the way of trolleys.
Replies from: Vladimir_M, prase↑ comment by Vladimir_M · 2010-06-04T20:14:29.193Z · LW(p) · GW(p)
thomblake:
Do you expect an actual consequentialist human to do better?
Another interesting question is how all these consequentialists who insist that pushing the fat man is the right thing to do would react if they met someone who had actually followed their injunctions in practice. It seems to me that as soon as they're out of the armchair, people's inner virtue ethicist takes over, no matter how much their philosophy attempts to deny the relevance of his voice!
Replies from: Blueberry↑ comment by Blueberry · 2010-06-04T20:55:40.809Z · LW(p) · GW(p)
A real-world example would be a mountain climber who cut the rope that his partner was attached to, because if he didn't, both people would have fallen and died. If I met a mountain climber who did that, I wouldn't react negatively, any more than I would to someone who killed in self-defense.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-06-05T02:46:19.378Z · LW(p) · GW(p)
That's not a very good analogy. One could argue that by engaging in a mountain-climbing expedition, you voluntarily accept certain extraordinary risks, and the partner merely got unlucky with his own share of that risk. Whereas one of the essential premises in the fat man/trolley problem is that the fat man is a neutral passerby, completely innocent of the whole mess.
So, the real question is if you'd be so favorably inclined towards a mountain climber who, in order to save multiple lives, killed a completely unrelated random individual who was not at all entangled with their trouble.
Replies from: Blueberry↑ comment by Blueberry · 2010-06-05T17:39:32.138Z · LW(p) · GW(p)
That's a good point. What about the following scenario: some crazy philosopher holds A and B at gunpoint and forces them to go mountain climbing. They do, and A starts to slip. B realizes he has to cut the rope or he'll fall also. In this case, A didn't voluntarily accept any risk. I'd still be favorably inclined to B.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-06-06T07:54:27.812Z · LW(p) · GW(p)
Hm... according to my intuitions, this example features another important premise that is lacking in the original fat man/trolley problem -- namely, a culprit who willingly and maliciously brought about the problematic situation. Going by my intuitive feeling, it turns out that in such scenarios, I'm much more inclined to look favorably at hard-headed consequentialist decisions by people caught in the mess against their will, apparently because I tend to place all the blame on the main culprit.
Note that this is just an impromptu report of my introspection, not an attempt at a coherent discussion of the issue. I'll definitely need to think about this a bit more.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-06-06T10:58:22.555Z · LW(p) · GW(p)
This is reminding me of some long discussions of "The Cold Equations", a short story which is an effort to set up a situation where an ideally sympathetic person (pretty young woman with pleasant personality) has to be killed for utilitarian reasons.
The consensus (after decades of poking at the story) is that it may not be possible to rig the story to get the emotional effect and have it make rational sense.
I'm not absolutely certain about this-- what if the girl had been the first stowaway rather than the nth, so that there wasn't as good a reason to know that it shouldn't be so easy for stowaways to get on ships?
Replies from: Alicorn, CronoDAS↑ comment by Alicorn · 2010-06-06T17:53:05.350Z · LW(p) · GW(p)
If I remember correctly, she still would have died even if she hadn't been jettisoned - the ship would have crashed, and she would hardly have walked away from that. That makes her unsalvageable. In standard trolley problems I don't switch tracks, but if there were a way to switch the track so the train killed only one of the same five people it would already have killed, that person is unsalvageable and can be singled out to save the salvageable.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-06-06T18:02:48.950Z · LW(p) · GW(p)
You're right.
↑ comment by CronoDAS · 2010-06-06T19:06:12.799Z · LW(p) · GW(p)
The SciFi Channel usually does a pretty poor job at making original movies, but their adaptation of "The Cold Equations" was pretty good, covering most of the problems with the original story. The pilot and the girl frantically look around for excess mass to jettison, and find some, but it's not enough. The issue of what measures were taken to stop people from stowing away simply wasn't discussed; she's there, and they have to deal with it. And at the last minute, the pilot does offer to sacrifice himself to save the girl, but she refuses to let him.
↑ comment by prase · 2010-06-04T20:40:11.500Z · LW(p) · GW(p)
An ideal consequentialist would push the fat man in the standard trolley scenario. I was asking whether an ideal virtue ethicist does. It doesn't matter (for me, now) that actual (if that means average) people, moral philosophers included, don't always follow their principles. Nor does it matter whether they recognise the situation as tragic and feel uneasy with all the blood and screams. I ask: what is the right thing to do under virtue ethics, when there are no available procedures better than pushing the fat man? And I find your answer a bit ambiguous.
(Disclaimer: My interest is purely theoretical. I don't hold any definite position on what's right in trolley scenario, and I would almost certainly not push the fat man, although I can imagine killing him in some less personal way.)
Replies from: Blueberry↑ comment by Blueberry · 2010-06-04T20:51:27.671Z · LW(p) · GW(p)
An ideal consequentialist would push the fat man in the standard trolley scenario. I was asking whether an ideal virtue ethicist does
You are confusing ethics and metaethics. Consequentialists, deontologists, and virtue ethicists all might or might not push the fat man, but they would all analyze the problem differently.
It's not true that all possible consequentialists would push the fat man. A consequentialist might decide that one pushed death would be a worse consequence than X train deaths. Consequentialists don't necessarily count the number of deaths and choose the smaller number; they just choose the option that leads to the best consequence.
Replies from: Jack, prase↑ comment by Jack · 2010-06-06T15:13:44.679Z · LW(p) · GW(p)
This criticism is exactly right except that both the form question (rules, consequences or character traits) and the content question (pleasure, preference, the Categorical Imperative, Aristotle's list, etc.) are part of normative ethics (what I assume you mean by 'ethics'). Metaethical questions are things like "What are we doing when we use normative language?" and "Are there moral truths?"
Replies from: Blueberry↑ comment by prase · 2010-06-04T23:23:13.323Z · LW(p) · GW(p)
OK, I should have said "typical consequentialist". Of course a consequentialist may value the life of the fat man more than the sum of the lives of the people on the track, or find the other consequences of pushing him down bad enough to refrain from it, or completely ignore humans and care about paperclips. I am not confusing ethics and metaethics, but rather assuming we are speaking about consequentialists with typical human values, for whom death is wrong and more deaths are more wrong, ceteris paribus. For such a consequentialist there may always be some critical number of people on the track whose collective death would be worse than all the consequences of pushing the fat man. On the other hand, deontologists typically hold that killing an innocent person is bad, and should, at least in theory, not push the man even if the survival of all mankind were at stake. At least this is how I understand the difference between consequentialism and deontology.
Speaking about all possible consequentialists is tricky. Any moral decision algorithm can be classified as consequentialist when we try hard enough. I want to get an idea about what is the main difference between consequentialism and virtue ethics, given typical human values. The OP has said that they are the same except in bizarre situations like the trolley problem. So what is the difference in the trolley problem?
(If there is a consequentialist who disagrees with me and would not push the man even if it could save five billion lives, let me know, ideally with some justification.)
Replies from: mattnewport↑ comment by mattnewport · 2010-06-04T23:43:25.838Z · LW(p) · GW(p)
assuming we are speaking about consequentialists with typical human values, for whom death is wrong and more deaths are more wrong, ceteris paribus.
I would question whether these are typical human values. People generally think the deaths of some people are more wrong than the deaths of other people. Most people do not value all human life equally. For typical humans ceteris almost never is paribus when it comes to choosing who lives and who dies.
Replies from: Vladimir_M, prase↑ comment by Vladimir_M · 2010-06-05T03:01:15.179Z · LW(p) · GW(p)
ceteris almost never is paribus
At the risk of getting downvoted for nitpicking, I must point out that if you really insist on using Latin like this, the correct way to say it is: cetera almost never are pares.
Sorry, but the sight of butchered Latin really hurts my eyes.
Replies from: Alicorn, mattnewport↑ comment by Alicorn · 2010-06-05T03:03:20.250Z · LW(p) · GW(p)
I had a teacher once who liked to say "ceteris ain't paribus". Is that better or worse?
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-06-05T03:22:10.512Z · LW(p) · GW(p)
That's actually a matter where some interesting linguistic judgment might be in order.
The "ain't" part is grammatical in some dialects of English, though, as far as I know, not in any form of standard English that is officially recognized anywhere. But the wrong cases for cetera and pares are not grammatical in any form of Latin that has ever been spoken or written anywhere.
On the whole, I'd say that "ain't" is less bad, since in the dialects in which it is grammatical, it has the same form for both singular and plural. Therefore, at least it respects the number agreement with the Latin plural cetera, whereas "is" commits an additional offense by violating that agreement.
Replies from: Blueberry, NancyLebovitz, arundelo↑ comment by Blueberry · 2010-06-05T08:36:18.308Z · LW(p) · GW(p)
I sympathize with this logic, but I don't completely agree. Languages frequently take words from other languages and regularize them, and when this occurs, they are no longer inflected the way they were in the original language. When we use Latin phrases in English often enough, they become part of the English language. 'Ceteris' and 'paribus' are in the ablative case because they were taken from a particular Latin expression, so it's reasonable to keep them in that case when using the words in that context, even though they're not being used in exactly the same way.
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-06-06T07:46:14.137Z · LW(p) · GW(p)
Yes, that's a good point. Out of curiosity, I just searched for examples of similar usage in Google Books, and I'm struck by how often it can be found in what appear to be respectable printed materials. I guess I should accept that the phrase has been reanalyzed in English, just like it makes no sense to complain about, say, the use of caveat as a noun, or agenda as singular. (Though I still can't help but cringe at singular data, despite being well aware that it's a lost cause...)
Replies from: arundelo↑ comment by arundelo · 2010-06-06T14:36:14.008Z · LW(p) · GW(p)
singular data
Nitpick alert: You probably know this, but it's an important distinction that the non-plural usage of "data" not only is grammatically singular, but is also a mass noun. (People say "I have some data, you have more data", not *"I have one data, you have two data[s]".)
Replies from: Douglas_Knight, RobinZ↑ comment by Douglas_Knight · 2010-06-09T04:04:53.424Z · LW(p) · GW(p)
Virtually everyone who makes "data" grammatically plural actually uses it as a mass noun, too.
↑ comment by RobinZ · 2010-06-07T12:52:51.882Z · LW(p) · GW(p)
...so what's "datum", then?
Replies from: Vladimir_M, arundelo↑ comment by Vladimir_M · 2010-06-09T02:55:41.354Z · LW(p) · GW(p)
Datum is the neuter singular of the perfect passive participle of the Latin verb dare "to give." This grammatical form is roughly analogous to the English participle "given." However, in Latin, such participles are sometimes used as standalone nouns, so that the neuter form datum by itself can mean "[that which is/has been] given." Analogously, the plural data can mean "[the things that are/have been] given."
In English, this word has been borrowed with the meaning of "information given" and variations on that theme (besides a few additional obscure technical meanings).
↑ comment by NancyLebovitz · 2010-06-05T15:52:23.799Z · LW(p) · GW(p)
I think of "ain't" as either standard in some dialects, or as a tool for emphasis in standard English (usually spoken rather than written).
It seems reasonable that if you're using informal English for emphasis, then it's stylistically consistent to use the sort of colloquial mangled Latin that an English speaker who doesn't know Latin would use.
↑ comment by arundelo · 2010-06-05T13:12:23.435Z · LW(p) · GW(p)
The word ain't can be used in both speech and writing to catch attention and to give emphasis, as in "You ain't seen nothing yet," or "If it ain't broke, don't fix it." Merriam-Webster's Collegiate Dictionary gives an example from film critic Richard Schickel: "the wackiness of movies, once so deliciously amusing, ain't funny anymore."
(Which is exactly how it's used in "ceteris ain't paribus". See also this post by Geoff Nunberg.)
↑ comment by mattnewport · 2010-06-05T03:08:14.261Z · LW(p) · GW(p)
Apologies, the only Latin I remember from school is Caecilius est in horto. I actually spent several minutes with Google trying to figure out what it should be but there appears to be a shortage of online Latin translation services. Gap in the market?
Replies from: Vladimir_M↑ comment by Vladimir_M · 2010-06-05T03:41:10.898Z · LW(p) · GW(p)
One problem is that such a service is in much less demand compared to the living languages currently supported by translation programs. However, another major difficulty is that Latin is a far more synthetic language than English, and its inflectional suffixes often carry as much information as multiple-word clauses in English. For example, the mentioned ceteris paribus packs the entire English phrase "with everything else being the same" into just two words. Similarly, the last word in quod erat demonstrandum (a.k.a. "QED") packs the last four words of the English "that which was supposed to be demonstrated" into one. This makes it much harder to come up with satisfactory translation heuristics compared to more analytic languages, especially considering the extreme freedom of word order in Latin.
Similar difficulties, of course, exist in automatic translation of English to other highly synthetic languages, like e.g. the Slavic ones.
↑ comment by prase · 2010-06-05T00:14:32.909Z · LW(p) · GW(p)
I am clearly unable to express myself clearly today.
I haven't said that it's typical to value all life equally. I tried to say that set X of x deaths is typically worse than set Y of y deaths, if x>y. Almost always it holds when Y is a subset of X (that was the intended meaning of ceteris paribus), but if x>>y, it often holds even if the sets are disjoint.
Also, the context of the trolley scenario is that the fat man isn't your relative or friend; he's a random stranger, fully comparable with those on the track.
↑ comment by badger · 2010-06-04T20:39:48.411Z · LW(p) · GW(p)
Virtues don't add much to discussion about what you should or shouldn't do. Instead, I think they are useful in talking about what kind of person you should be, i.e. someone courageous enough to push the man iff that's the right action to take.
comment by NancyLebovitz · 2010-06-04T20:04:18.217Z · LW(p) · GW(p)
Any suggestions about evaluating virtues?
comment by thomblake · 2010-06-04T18:43:28.438Z · LW(p) · GW(p)
Another thing that might be relevant... many virtue ethicists (notably Richard Volkman) will claim not to have a theory of right action at all. A mistaken view of virtue ethics (which I find myself carelessly uttering sometimes) insists that "One should always act so as to cultivate virtue" or something like that. But any decent justification of virtue will be in consequentialist terms - a virtue is a trait of character that is good for the one who has it.
comment by AlexMennen · 2010-06-07T01:09:27.733Z · LW(p) · GW(p)
I'm a little confused here. Are you saying that virtue ethics = consequentialism + TDT? I always figured consequentialists were allowed to use TDT. Or are you saying that virtue ethics, deontology, and consequentialism are all equivalent, but that virtue ethics is the best way for humans to interpret ethics? If so, I still do not see why. Consequentialism seems nice and simple to me. Or is it something else?
it gets easy to forget that we're hyperbolic discounters: not anything resembling sane. Humans are not inherently expected utility maximizers, they're bounded agents with little capacity for reflection.
This is false. We are hyperbolic discounters, but there is no rule stating that we must allocate the same potential utility for every possible time period.
Replies from: Nick_Tarleton, PhilGoetz↑ comment by Nick_Tarleton · 2010-06-09T19:18:43.740Z · LW(p) · GW(p)
Hyperbolic discounting is insane because it's dynamically inconsistent (the way humans do it; you could have a dynamically consistent hyperbolic discount rate from a non-indexically-defined zero time, but that's not what's usually meant), not because it's discounting.
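For anyone who hasn't seen the preference-reversal argument spelled out, here is a toy sketch; the discount curve, payoffs, and delays are illustrative assumptions. A hyperbolic discounter who prefers a larger-later reward from a distance flips to the smaller-sooner one as it approaches, while an exponential discounter never flips:

```python
# Dynamic inconsistency of hyperbolic discounting, in miniature.
# All payoffs, delays, and the discount parameter k are illustrative.
def hyperbolic(delay, k=1.0):
    return 1.0 / (1.0 + k * delay)  # present value of 1 util after `delay`

small, large = 1.0, 1.5    # reward sizes
t_small, t_large = 10, 12  # delivery times

# Judged from t=0, the larger-later reward wins...
print(small * hyperbolic(t_small), large * hyperbolic(t_large))   # ~0.091 < ~0.115
# ...but judged from t=9, the same agent flips to the smaller-sooner reward.
print(small * hyperbolic(t_small - 9), large * hyperbolic(t_large - 9))  # 0.5 > 0.375

# An exponential discounter (delta ** delay) never flips: moving "now" to t=9
# divides both present values by delta ** 9, leaving the ranking unchanged.
```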
comment by AlephNeil · 2010-06-04T19:43:23.109Z · LW(p) · GW(p)
This is something I wrote in my (now defunct) blog a while back. It probably isn't entirely appropriate as either a comment or a top level post here but I want to share it with you anyway, because I think that 'value-as-profundity' as I describe below shares much of the spirit of virtue ethics, but has higher aspirations insofar as it isn't restricted to consideration of one's own virtue, or even virtue in general.
About two years ago I had a 'revelation' - something that's completely changed the way I think about life, the universe and everything.
This one concerns ethics. I cannot remember whether it was the cause or the effect of my reading of Thus Spoke Zarathustra.
Hitherto I had been some kind of utilitarian: The purest essence of wrongness is causing suffering to a sentient being, and the amount of wrongness increases with the amount of suffering. Something similar is true concerning virtue and happiness, though I realized even then that one has to be very careful in how 'happiness' is formulated. After all, we don't want to end up concluding that synthesizing Huxley's drug "soma" is humanity's highest ethical goal. If pressed to refine my concept of happiness, I had two avenues open: (i) Try to prise apart "animal happiness" - a meaningless and capricious flood of neurochemicals - from a higher "rational happiness" which can only be derived from recognition of truth or beauty (ii) Retreat to the view that "in any case, morality is just a bunch of intuitions that helped our ancestors to survive. There's no reason to assume that our moral intuitions are a 'window' onto any larger and more fundamental domain of moral truth."
(Actually, I still regard a weaker version of (ii) as the 'ultimate truth of matter': On the one hand, it's not hard to believe that in any community of competing intelligent agents, more similar to each other than different, who have evolved by natural selection, moral precepts such as 'the golden rule' are almost guaranteed to arise. On the other, it remains the case that the spectrum of 'ethical dilemmas' that could reasonably arise in our evolutionary history is narrow, and it is easy for ethicists to devise strange situations that escape its confines. I see no reason at all to expect that the principles by which we evaluate the morality of real-world decisions can be refined and systematised to give verdicts on all possible decisions.
To summarise: There may be an 'objective moral truth', but it's more likely to be narrow and 'wrinkly' than it is to be complete and systematic. Any single system of ethics will almost certainly yield the 'wrong' verdict in some cases.)
My revelation came out of a reflection on the nature of tourism, and the life-destroying shallowness of having, as one's highest aspiration, the desire to go and look at things - e.g. to go "swimming with dolphins", or travel into space, climb Everest, do a bungee jump etc.
The world is creaking under the footsteps, tyres and jet engines of people going sight-seeing. They're going to the most beautiful places in the world, being awed and amazed by what they see, then going home again having contributed nothing but a few more puffs of carbon dioxide.
I saw that the world isn't here for our amusement. Utilitarianism is blind to the value of the waterfall that no-one has yet discovered.
A giraffe stands majestic among the trees. It is a thing of beauty and fascination, in its remarkable anatomy, its complex behaviour and its evolutionary history.
Now the universe contains rational beings, and when one of these should first stumble upon the giraffe, it stops to take in the beauty of what it sees. The giraffe inspires in it a curiosity to understand how such a thing might have come to be.
Where in this picture is the moral value? Utilitarianism would place it in the person's feeling of joy upon seeing the giraffe, and thus the value of the giraffe is proportional to the number of its spectators. On the other hand, I would put the value principally in the giraffe itself, irrespective of who observes it. However, this isn't the whole story because the watcher of the giraffe is herself a wondrous animal, and seeing the giraffe may make the watcher just a little bit more wondrous: She may learn something from it.
Generalizing from this: I believe moral value is inherent in those systems and entities that we describe as 'fascinating', 'richly structured' and 'beautiful'. A snappy way of characterising this view is "value-as-profundity". On the other hand, I regard pain and pleasure as having no value at all in themselves.
In the context of interpersonal affairs, then, to do good is ultimately to make the people around you more profound, more interesting, more beautiful - their happiness is irrelevant. To do evil, on the other hand, is to damage and degrade something, shutting down its higher features, closing off its possibilities. Note that feelings of joy usually accompany activities I classify as 'good' (e.g. learning, teaching, creating things, improving fitness) and conversely, pain and suffering tend to accompany damage and degradation. However, in those situations where value-as-profundity diverges from utilitarian value, notice that our moral intuitions tend to favour the former. For instance:
- Drug abuse: Taking drugs such as heroin produces feelings of euphoria, but only at the cost of degrading and constraining our future behaviour and damaging our bodies. It is the erosion of profundity that makes heroin abuse wrong, not the withdrawal symptoms, or the fact that the addict's behaviour tends to make others in his community less happy. These are both incidental - we can hypothetically imagine that the withdrawal symptoms do not exist and that the addict is all alone in a post-apocalyptic world, and we are still dismayed by the degradation of behaviour that drug addiction produces (just as we would be dismayed by a giraffe with brain damage, irrespective of whether the giraffe felt happy).
- The truth hurts: We accept that there are situations where the best way to help someone is to criticise them in a way that we know they will find upsetting. We do this because we want our friend to grow into a better (more profound) version of themselves, which cannot happen until she sees her flaws as flaws rather than lovable idiosyncrasies. On the utilitarian view, the rightness of this harsh criticism cannot be accounted for except in respect of its remote consequences - the greater happiness of our improved friend and of those with whom she interacts - yet there is no necessary reason why the end result of a successful self-improvement must be increased happiness, and if it is not, then the initial upset will force us to say that our actions were immoral. However, surely it is preferable for our ethical theory to place value in the improvements themselves rather than their contingent psychological effects.
- Nature red in tooth and claw: Consider the long and eventful story of life on earth. Consider that before the arrival of humankind, almost all animals spent almost all of their lives perched on the edge, struggling against starvation, predators and disease. In a state of nature, suffering is far more prevalent than happiness. Yet suppose we were given a planet like the young earth, and that we knew life could evolve there with a degree of richness comparable to our own, but that the probability of technological, language-using creatures like us evolving is very remote. Sadly, this planet lies in a solar system on a collision course with a black hole, and may be swallowed up before life even appears. Suppose it is within our power to 'deflect' the solar system away from the black hole - should we do so? On the utilitarian view, to save the planet would be to bring a vast amount of unnecessary suffering into being, and (almost certainly) a relatively tiny quantity of joy. However, saving the planet increases the profundity and beauty of the universe, and obviously is in line with our ethical intuitions. N.B. Here I am directly contradicting Greg Egan in his answer to question six in the Dust Theory FAQ.
comment by taw · 2010-06-04T17:41:09.854Z · LW(p) · GW(p)
Virtue ethics is basing morality on the fundamental attribution error.
The consequences of that are unsurprisingly disastrous.
Replies from: magfrump, thomblake, neq1↑ comment by magfrump · 2010-06-04T18:17:08.134Z · LW(p) · GW(p)
The whole purpose of this post seems to be that the consequences are surprisingly undisastrous; if you have a real reply to the core point (like LauraABJ does above), I would be very interested to hear more disagreements.
I'm a little disappointed because I usually agree with your posts more than the LW community at large, but you don't seem to be saying anything at all here.
↑ comment by thomblake · 2010-06-04T18:12:01.615Z · LW(p) · GW(p)
Virtue ethics is basing morality on the fundamental attribution error.
I don't think that's the case, and this deserves a justification. But even so...
Consequences of that are unsurprisingly disastrous.
Are they? What actual consequences are you talking about?