Ethical frameworks are isomorphic
post by lavalamp · 2014-08-13T22:39:50.343Z · LW · GW · Legacy · 44 comments
I have previously been saying things like "consequentialism is obviously correct". But this morning it occurred to me that this was gibberish.
I maintain that, for any consequentialist goal, you can construct a set of deontological rules which will achieve approximately the same outcome. The more fidelity you require, the more rules you'll have to make (so of course it's only isomorphic in the limit).
Similarly, for any given deontological system, one can construct a set of virtues which will cause the same behavior (e.g., "don't murder" becomes "it is virtuous to be the sort of person who doesn't murder").
The opposite is also true. Given a virtue ethics system, one can construct deontological rules which will cause the same things to happen. And given deontological rules, it's easy to get a consequentialist system by predicting what the rules will cause to happen and then calling that your desired outcome.
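To make these translations concrete, here is a minimal toy sketch; the particular rules, the virtue test, and the utility function below are all made up purely for illustration:

```python
# A toy sketch of the claimed translations; every rule and name here is made up.

# Deontological form: rules are predicates over individual actions.
RULES = [
    lambda action: action != "murder",  # "don't murder"
    lambda action: action != "lie",     # "don't lie"
]

def permitted(action):
    """Deontological verdict: an action is permitted iff it violates no rule."""
    return all(rule(action) for rule in RULES)

# Virtue-ethics form: "it is virtuous to be the sort of person who follows the rules."
def virtuous(agent_history):
    """An agent is virtuous iff every action they have taken was permitted."""
    return all(permitted(action) for action in agent_history)

# Consequentialist form: score a world-history by how well it matches the
# outcome the rules would produce (here: count of rule-conforming actions).
def utility(world_history):
    return sum(1 for _agent, action in world_history if permitted(action))

world = [("alice", "help"), ("bob", "lie"), ("alice", "trade")]
print(permitted("lie"))             # False
print(virtuous(["help", "trade"]))  # True
print(utility(world))               # 2
```

The fidelity point shows up here too: tracking a richer consequentialist goal would take many more rules than the two above.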
Given that you can phrase your desired (outcome, virtues, rules) in any system, it's really silly to argue about which system is the "correct" one.
Instead, recognize that some ethical systems are better for some tasks. Want to compute actions given limited computation? Better use deontological rules or maybe virtue ethics. Want to plan a society that makes everyone "happy" for some value of "happy"? Better use consequentialist reasoning.
Last thought: none of the three frameworks actually gives any insight into morality. Deontology leaves open the question "what rules?", virtue ethics leaves open the question "what virtues?", and consequentialism leaves open the question "what outcome?". The hard part of ethics is answering those questions.
(ducks before accusations of misusing "isomorphic")
44 comments
comment by DanielLC · 2014-08-13T23:51:45.740Z · LW(p) · GW(p)
In principle, you can construct a utility function that represents a deontologist who abhors murder: you give a large negative value to the deontologist committing murder. But it's kludgy. If a consequentialist talks about murder being bad, they mean that it's bad if anybody does it.
It is technically true that all of these ethical systems are equivalent, but saying which ethical system you use nonetheless carries a lot of meaning.
Instead, recognize that some ethical systems are better for some tasks.
If you choose your ethical system based on how it fulfils a task, you are already a consequentialist. Deontology and virtue ethics don't care about getting things done.
Replies from: Xachariah, lavalamp
↑ comment by lavalamp · 2014-08-18T18:32:57.315Z · LW(p) · GW(p)
(Sorry for slow response. Super busy IRL.)
If a consequentialist talks about murder being bad, they mean that it's bad if anybody does it.
Not necessarily. I'm not saying it makes much sense, but it's possible to construct a utility function that values agent X not having performed action Y, but doesn't care if agent Z performs the same action.
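For concreteness, a minimal sketch of such an agent-relative utility function (the agent names and the action here are placeholders):

```python
# A toy agent-relative utility function (names hypothetical): it penalizes
# agent X performing action Y, but is indifferent to agent Z doing the same.
def agent_relative_utility(world_history, x="X", y="murder"):
    """world_history is a list of (agent, action) pairs."""
    return -sum(1 for agent, action in world_history if agent == x and action == y)

print(agent_relative_utility([("X", "murder")]))  # -1: X performing Y is penalized
print(agent_relative_utility([("Z", "murder")]))  #  0: the same action by Z is ignored
```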
It is technically true that all of these ethical systems are equivalent, but saying which ethical system you use nonetheless carries a lot of meaning.
a) After reading Luke's link below, I'm still not certain if what I've said about them being (approximately) isomorphic is correct... b) Assuming my isomorphism claim is true enough, I'd claim that the "meaning" carried by your preferred ethical framework is just framing.
That is, (a) imagine that there's a fixed moral landscape. (b) Imagine there are three transcriptions of it, one in each framework. (c) Imagine agents would all agree on the moral landscape, but (d) in practice differ on the transcription they prefer. We can then pessimistically ascribe this difference to the agents preferring to make certain classes of moral problems difficult to think about (i.e., shoving them under the rug).
Deontology and virtue ethics don't care about getting things done.
I maintain that this is incorrect. The framework of virtue ethics could easily have the item "it is virtuous to be the sort of person who gets things done." And "Make things happen, or else" could be a deontological rule. (Just because most examples of these moral frameworks are lame doesn't mean that it's a problem with the framework as opposed to the implementation.)
comment by Viliam_Bur · 2014-08-15T07:58:55.535Z · LW(p) · GW(p)
This reminds me of a part of the Zombie sequence, specifically the Giant Lookup Table. Yes, you can approximate consequentialism by a sufficiently complex set of deontological rules, but the question is: Where did those rules come from? What process generated them?
If we somehow didn't have any consequentialist intuitions, what is the probability that we would invent a "don't murder" deontological rule, instead of all the possible alternatives? Actually, why would we even feel a need to have any rules?
Deontological rules seem analogous to a lookup table. They are precomputed answers to ethical questions. Yes, they may be correct. Yes, using them is probably much faster than trying to compute them from scratch. But the reason why we have these deontological rules instead of some other deontological rules is partly consequentialism and partly historical accidents.
Replies from: Lumifer, pragmatist, Azathoth123
↑ comment by pragmatist · 2014-08-15T15:26:12.128Z · LW(p) · GW(p)
But the reason why we have these deontological rules instead of some other deontological rules is partly consequentialism and partly historical accidents.
Why is it partly consequentialism? In what sense did consequentialism have any causal role to play in the development of deontological ethical systems? I highly doubt that the people who developed and promulgated them were closet consequentialists who chose the rules based on their consequences.
↑ comment by Azathoth123 · 2014-08-16T04:38:34.712Z · LW(p) · GW(p)
Where did those rules come from? What process generated them?
Where did your utility function come from? What process generated it?
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-08-16T16:27:23.765Z · LW(p) · GW(p)
Evolution, of course.
We could classify reflexes and aversions as deontological rules. Some of them would even sound moral-ish, such as "don't hit a person stronger than you" or "don't eat disgusting food". Not completely unlike what some moral systems say. I guess more convincing examples could be found.
But if the rule is more complex, if it requires some thinking and modelling of the situation and other people... then consequences are involved. Maybe imaginary consequences (if we don't offer sacrifices to the gods, they will be angry and harm us). Though this could be considered merely a rationalization of a rule created by memetic evolution.
comment by lukeprog · 2014-08-14T00:05:45.958Z · LW(p) · GW(p)
Replies from: Gunnar_Zarncke, lavalamp
↑ comment by Gunnar_Zarncke · 2014-08-14T08:51:27.027Z · LW(p) · GW(p)
That one differs in not giving the reverse formulation: it only reduces virtue ethics and deontology to consequentialism, not vice versa. Nonetheless, a relevant link.
comment by NancyLebovitz · 2014-08-14T01:20:35.156Z · LW(p) · GW(p)
Can deontology and/or virtue ethics include "keep track of the effects of your actions, and if the results are going badly wrong, rethink your rules"?
Replies from: lavalamp
comment by jimrandomh · 2014-08-15T15:54:27.331Z · LW(p) · GW(p)
Sort of! But not exactly. This is a topic I've been meaning to write a long post on for ages, and have given a few short impromptu presentations about.
Consequentialism, deontology, and virtue ethics are classifiers over world-histories, actions, and agents, respectively. They're mutually reducible, in that you can take a value system or a value-system fragment in any one of the three forms, and use it to generate a value system or value-system fragment in either of the other two forms. But value-system fragments are not equally naturally expressed in different forms; if you take a value from one and try to reframe it in the others, you sometimes get an explosion of complexity, particularly if you want to reduce value-system fragments which have weights and scaling properties, and have those weights and scaling properties carry through.
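As a rough illustration of that framing, here's a toy sketch (the types and the weighting scheme are made-up assumptions, not a worked-out reduction), showing one direction: turning an action classifier into a world-history scorer:

```python
from typing import Callable, List, Tuple

# Hypothetical toy types for the three forms:
Agent = str
Action = str
WorldHistory = List[Tuple[Agent, Action]]

ConsequentialistValue = Callable[[WorldHistory], float]  # scores world-histories
DeontologicalRule = Callable[[Agent, Action], bool]      # classifies actions
VirtueJudgment = Callable[[Agent, WorldHistory], bool]   # classifies agents

def rule_to_value(rule: DeontologicalRule, weight: float = 1.0) -> ConsequentialistValue:
    """One direction of the reduction: a rule becomes a world-history scorer
    that counts (weighted) violations. Choosing weights that interact sensibly
    across many such fragments is where the complexity explosion shows up."""
    def value(history: WorldHistory) -> float:
        return -weight * sum(1 for agent, action in history if not rule(agent, action))
    return value

no_murder: DeontologicalRule = lambda agent, action: action != "murder"
score = rule_to_value(no_murder)
print(score([("alice", "murder"), ("bob", "help")]))  # -1.0
```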
comment by Darklight · 2014-08-14T16:35:55.415Z · LW(p) · GW(p)
Henry Sidgwick in "The Methods of Ethics" actually makes the argument that Utilitarianism can be thought of as having a single predominant rule, namely the Greatest Happiness Principle, and that all other correct moral rules could follow from it, if you just looked closely enough at what a given rule was really saying. He noted that, when properly expanded, a moral rule is essentially an injunction to act a certain way in a particular circumstance, one that applies universally to any person in identical circumstances. He also had some interesting things to say about the relationships between virtues and Utilitarianism, and more or less tried to show that the various commonly valued virtues could be inferred from a Utilitarian perspective.
Of course Sidgwick was arguing in a time before the clear-cut delineation of moral systems into "Deontological", "Consequentialist", and "Virtue Ethics". But I thought it would be useful to point out that early classical Utilitarian thinkers did not see these clear-cut delineations and instead often made use of the language of rules and virtues to further their case for Utilitarianism as a comprehensive and inclusive moral theory.
comment by [deleted] · 2014-08-14T15:54:51.341Z · LW(p) · GW(p)
I agree. In principle, you could construct a total order over all possible states of the world. All else is merely a pretty compression scheme. That being said, the scheme is quite necessary.
comment by Anders_H · 2014-08-14T01:26:28.974Z · LW(p) · GW(p)
I've been wondering if it makes sense to think of ethical philosophies as different classes of modeling assumptions:
Any moral statement can be expressed using the language of consequentialism, deontology or virtue ethics. The statements can therefore be translated from one framework to another. In that sense, the frameworks are equivalent. However, some statements are much easier to express in a given language.
Sometimes, we make models of ethics to explore the underlying rules that make an ethical statement "true". We try to predict whether an ethical statement is true using information from other, closely related ethical statements. However, ethical statements are multidimensional and therefore vary across many different axes. Two ethical statements can be closely related on one axis, and completely different from each other on another axis. In order to learn about the underlying rules, we have to specify which axis we are going to make modeling assumptions on. The choice will determine whether you call yourself a "consequentialist" or a "deontologist".
Problems such as "the repugnant conclusion" and "being so honest that you tell a murderer where your children are" occur when we extrapolate too far along this axis, and end up way beyond the range of problems that the model is fit to.
comment by blacktrance · 2014-08-14T21:10:43.927Z · LW(p) · GW(p)
You can take a set of object-level answers and construct a variety of ethical systems that produce those answers, but it still matters which ethical system you use because your justification for those answers would be different, and because while the systems may agree on those answers, they may diverge on answers outside the initial set.
Replies from: lavalamp
comment by diegocaleiro · 2014-08-24T22:53:13.515Z · LW(p) · GW(p)
Isn't that a necessary step for the claims Derek Parfit makes about convergence in his "On What Matters"?
comment by joaolkf · 2014-08-23T03:23:05.070Z · LW(p) · GW(p)
I have been saying this for quite some time. I regret not posting it first. It would be nice to have a more formal proof of all of this with utility functions, deontics and whatnot. If you are up for it, let me know. I could help, give feedback, or we could work together. Perhaps someone else has done it already. It has always struck me as pretty obvious, but this is the first time I've seen it stated like this.
Replies from: lavalamp
↑ comment by lavalamp · 2014-08-24T00:01:05.763Z · LW(p) · GW(p)
Check out the previous discussion Luke linked to: http://lesswrong.com/lw/c45/almost_every_moral_theory_can_be_represented_by_a/
It seems there's some question about whether you can phrase deontological rules consequentially; that needs to be settled to make this more formal. My first thought is that the formal version of this would say something along the lines of "you can achieve an outcome that differs by only X%, with a translation function that takes rules and spits out a utility function which is only polynomially larger." It's not clear to me how to define a domain in such a way as to allow you to compute that X%.
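One way to write that down (just a sketch; picking the outcome map O, the translation T, and the distance measure d is exactly the part that's unclear):

$$\forall R \;\exists\, U = T(R)\ \text{with}\ |U| \le \operatorname{poly}(|R|)\ \text{such that}\ d\big(O(R),\, O(U)\big) \le \varepsilon,$$

where O(·) is the outcome an agent produces by following the rules R (or by maximizing U), and ε plays the role of the X% above.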
...unfortunately, as much as I would like to see people discuss the moral landscape instead of the best way to describe it, I have very little time lately. :/
Replies from: joaolkf
comment by [deleted] · 2014-08-18T00:35:11.897Z · LW(p) · GW(p)
I think the choice of systems is less important than their specifics, and that giving a choice but no specifics is being obtuse.
comment by Shmi (shminux) · 2014-08-13T23:16:01.798Z · LW(p) · GW(p)
On isomorphism: every version of utilitarianism I know of leads to a repugnant conclusion of one kind or another, or even multiple ones. I don't think that deontology and virtue ethics are nearly as susceptible. In other words, you cannot construct a utilitarian equivalent of an ethical system which is against suffering (without explicitly minimizing some negative utility) but does not value torture over dust specks.
EDIT: see the link in lukeprog's comment for the limits of consequentialization.
Replies from: buybuydandavis, DanielLC, lavalamp
↑ comment by buybuydandavis · 2014-08-14T01:54:30.021Z · LW(p) · GW(p)
I agree with your point on utilitarianism, but utilitarianism is only one form of consequentialism, not the entire class. Consequentialism doesn't need to lead to a repugnant conclusion.
↑ comment by DanielLC · 2014-08-13T23:57:32.070Z · LW(p) · GW(p)
If you translate Kantian ethics into consequentialism, you get a utility function with a large negative value for you torturing someone, lying, and doing several other things. Suppose a sadist comes up to you and asks where your children are, so that he can torture them. Lying to him has huge negative utility. Telling him where your children are does not. He'll torture them, but since it's not you that's torturing them, it doesn't matter. It's his problem, or rather it would be if he were a Kantian.
Does a utility function that only prohibits torture if a specific person does it really count as being against suffering?
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2014-08-14T18:35:25.852Z · LW(p) · GW(p)
"He'll torture them, but since it's not you that's torturing them, it doesn't matter."
That doesn't remotely follow. Kantians are supposed to abstain from lying, etc., because of the knock-on effects, because they would not wish lying to become a general law. So "it's not me who's doing it" is the antithesis of Kantianism.
Replies from: DanielLC
↑ comment by DanielLC · 2014-08-14T20:39:54.103Z · LW(p) · GW(p)
If you're unwilling to lie to prevent torture, then it seems pretty clear that you're more okay with you lying than the other guy torturing.
Under deontological ethics, you are not responsible for everything. If someone is going to kill someone, and it doesn't fall under your responsibility, you have no ethical imperative to stop them. In what sense can you be considered to care about things you are not responsible for?
Replies from: None
↑ comment by [deleted] · 2014-08-15T00:50:50.310Z · LW(p) · GW(p)
If someone is going to kill someone, and it doesn't fall under your responsibility, you have no ethical imperative to stop them.
This isn't true, at any rate, for Kant. Kant would say that you have a duty to help people in need when it doesn't require self-destructive or evil behavior on your part. It's permissible, perhaps, to help people in need self-destructively, and it's prohibited to help them by doing something evil (like lying). You are responsible for the deaths or torture of the children in the sense that you're required to do what you can to prevent such things, but you're not responsible for the actions of other people, and you can't be required (or permitted) to do forbidden things (this is true of any consistent ethical theory).
And of course, Kant thinks we can and do care about lots of things we aren't morally responsible for. Morality is not about achieving happiness, but becoming worthy of happiness. Actually being happy will require us to care about all sorts of things.
Replies from: DanielLC
↑ comment by DanielLC · 2014-08-15T01:44:40.128Z · LW(p) · GW(p)
This isn't true, at any rate, for Kant. Kant would say that you have a duty to help people in need when it doesn't require self-destructive or evil behavior on your part.
In other words, if it costs you nothing. You consider having no self-destructive or evil behavior on your part to be infinitely more valuable.
this is true of any consistent ethical theory
It is true by definition. That's what "forbidden" means.
And of course, Kant thinks we can and do care about lots of things we aren't morally responsible for.
We are not using the same definition of "care". I mean whatever motivates you to action. If you see no need to take action, you don't care.
Replies from: None
↑ comment by [deleted] · 2014-08-15T04:30:44.154Z · LW(p) · GW(p)
In other words, if it costs you nothing. You consider having no self-destructive or evil behavior on your part to be infinitely more valuable.
No, there's a lot of room between 'costs you nothing' and 'self-destructive'. The question is whether or not a whole species or society could exist under universal obedience to a duty, and a duty that requires self-destruction for the sake of others would make life impossible. But obviously, helping others at some cost to you doesn't.
Also, I was pretty careful to say that you can't have a DUTY to help others self-destructively. But it's certainly permissible to do so (so long as it's not aimed at self-destruction). You are, however, prohibited from acting wrongly for the sake of others, or yourself. And that's just Kant saying "morality is the most important thing in the universe." That's not so weird a thought.
"We are not using the same definition of "care". I mean whatever motivates you to action. If you see no need to take action, you don't care."
No, we're using the same definition. So again, Kant thinks we can and do care about, for example, the moral behavior of others. We're not morally responsible for their behavior (but then, no ethical theory I know of asserts this), but we can certainly care about it. You ought to prevent the murderer at the door from finding the victim. You should do everything in your power, and it's permissible to die trying if that's necessary. You just can't do evil. Because that would be to place something above the moral law, and that's irrational.
It's not plausible to think that if someone doesn't act, they don't care. If someone insults me, I generally won't strike them or even respond, but that doesn't mean I'm not pissed off. I just think obeying the law and being civil is more important than my feelings being hurt.
But I'm just channeling Kant here; I'm not saying I agree with this stuff. But give him credit... there are very few ethical ideas as compelling and powerful and influential as his.
Replies from: DanielLC
↑ comment by DanielLC · 2014-08-15T05:25:41.622Z · LW(p) · GW(p)
No, there's a lot of room between 'costs you nothing' and 'self-destructive'.
I got the impression that you aren't allowed any self-harm or evil acts. If you won't stop something for epsilon evil, then you care about it less than epsilon evil. If this is true for all epsilon, you only care an infinitesimal amount.
I don't mean "costs nothing" as in "no self-harm". I mean that a Kantian cares about not directly harming others, so directly harming others would be a cost to something. You could measure how much they care about something by how much they're willing to harm others for it. If they're only willing to harm others by zero, they care zero about it.
Also, I was pretty careful to say that you can't have a DUTY to help others self-destructively. But it's certainly permissible to do so (so long as its not aimed at self-destruction).
It's also permissible under nihilist ethics. I'm not going to say that nihilism is anti-suffering just because nihilism allows you to prevent it.
I judge an ethical system based on what someone holding to it must do, not what they can.
You are however prohibited from acting wrongly for the sake of others, or yourself. And that's just Kant saying "morality is the most important thing in the universe."
If you are prohibited from acting wrongly under any circumstances, then the most important thing is that you, personally, are moral. Everyone else acting immoral is an infinitely distant second.
No, we're using the same definition.
If someone insults me, I generally won't strike them or even respond, but that doesn't mean I'm not pissed off.
We are not using the same definition. When I say that someone following an ethical framework should care about suffering, I don't mean that it should make them feel bad. I mean that it should make them try to stop the suffering.
Although my exact words were "In what sense can you be considered to care about things you are not responsible for?", so technically the answer would be "In the sense that you feel bad about it."
Replies from: None
↑ comment by [deleted] · 2014-08-15T19:31:48.487Z · LW(p) · GW(p)
I got the impression that you aren't allowed any self-harm or evil acts. If you won't stop something for epsilon evil, then you care about it less than epsilon evil. If this is true for all epsilon, you only care an infinitesimal amount.
This sounds right to me, so long as 'self-harm' is taken pretty restrictively, and not so as to include things like costing me $20.
In his discussion of the 'murderer at the door' case Kant takes pains to distinguish between 'harm' and 'wrong'. So while we should never wrong anyone, there's nothing intrinsically wrong with harming people (he grants that you're harming, but not wronging, the victim by telling the truth to the murderer). So in this sense, I think you're right that Kantian deontology isn't worried about suffering in any direct sense. Kant will agree that suffering is generally morally significant, and that we all have an interest in minimizing it, but he'll say that it's not immediately a moral issue. (I think he's right about that). So this isn't to say that a Kantian shouldn't care about suffering, just that it's as subordinate to morality as is pleasure, wealth, etc.
I judge an ethical system based on what someone holding to it must do, not what they can.
It seems to me arbitrary to limit your investigation of ethics in this way. The space of permissibility is interesting, not least because there's a debate about whether or not that space is empty.
then the most important thing is that you, personally, are moral. Everyone else acting immoral is an infinitely distant second.
Agreed, though everything is an infinitely distant second, including your own happiness. But no one would say that you aren't therefore passionately attached to your own happiness, or that you're somehow irrational or evil for being so attached.
↑ comment by lavalamp · 2014-08-13T23:35:04.355Z · LW(p) · GW(p)
Are you saying that some consequentialist systems don't even have deontological approximations?
It seems like you can have rules of the form "Don't torture... unless by doing the torture you can prevent an even worse thing", which provides a checklist for comparing badness... so I'm not convinced?
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-08-13T23:52:04.728Z · LW(p) · GW(p)
Are you saying that some consequentialist systems don't even have deontological approximations?
Actually, this one is trivially true, with the rule being "maximize the relevant utility". I am saying the converse need not be true.
comment by LizzardWizzard · 2014-08-14T07:53:53.941Z · LW(p) · GW(p)
All three theories are obviously wrong, moss-grown and vulnerable.
As consequentalist u can think that raping and killing is ok if torturer receives more amount of joy then amount of pain received by the victim. It seems even more obvious in the case with group of assaulters and one victim.
As a deontologist you can always justify almost any deed by some trivial "for the greater good" rule.
And virtue ethics can be misleading in many ways; consider the Halo effect and what the consequences can be: Dr. Evil will take over the world in seconds.
Please prove me wrong; I just pointed out weak points of each movement which seemed obvious to me.
Replies from: cousin_it, Luke_A_Somers
↑ comment by cousin_it · 2014-08-14T10:07:27.088Z · LW(p) · GW(p)
As consequentalist u can think that raping and killing is ok if torturer receives more amount of joy then amount of pain received by the victim. It seems even more obvious in the case with group of assaulters and one victim.
Well, a consequentialist's utility function doesn't have to allow that. There are some utility functions that don't justify that kind of behavior. But I agree that the question of how to aggregate individual utilities is a weak spot for consequentialism. The repugnant conclusion is pretty much about that.
As a deontologist you can always justify almost any deed by some trivial "for the greater good" rule.
As far as I understand, a deontologist's rule set doesn't have to allow that. There are some rule sets that don't justify all deeds.
And virtue ethics can be misleading in many ways; consider the Halo effect and what the consequences can be: Dr. Evil will take over the world in seconds.
Yeah, I guess that's why virtue ethicists say that recognizing virtue is an important skill. Not sure if they have a more detailed answer, though.
↑ comment by Luke_A_Somers · 2014-08-14T14:15:26.667Z · LW(p) · GW(p)
As consequentalist u can think that raping and killing is ok if torturer receives more amount of joy then amount of pain received by the victim. It seems even more obvious in the case with group of assaulters and one victim.
... if 'u' never ever admits the broad and nigh-inevitable aftereffects of such an event, severely underestimates the harm of being raped, and doesn't consider the effects of such a policy in general, yes, u could.
In other words, the consequentialist faces the problem that they can do bad things if they don't think things through.
The deontologist is generally fairly stable against doing horrible things for the greater good, but more often faces the problem that they are barred from doing something that really IS for a much greater good. Ethical tradeoffs are something deontology is ill-suited to dealing with.