Evaluating Moral Theories
post by ArisC · 2017-01-23T05:04:07.146Z · LW · GW · Legacy · 40 comments
I would like to use my first post to expand on a framework for evaluating moral theories that I introduced in the Welcome thread, and to request your feedback.
This thesis rests on the premise that a moral theory is a tool for helping us make choices. Starting from this premise, I believe that a moral theory needs to meet three criteria to be acceptable:
a) Its constituent principles must be non-contradictory. I think this is pretty self-evident: if a theory consists of a number of principles that contradict each other, there will be situations where the theory will suggest contradictory actions - hence failing its purpose as a tool to enable choice making.
b) Its constituent principles must be as non-arbitrary as possible. What I mean by this is that the principles must be derived logically from facts on which everyone agrees. Otherwise, if a moral theory rests on an arbitrary and subjective principle, the theory's advocates will never be able to convince people who do not share that principle of their theory's validity.
c) If the principles of the moral theory are taken to their logical conclusion, they must not lead to a society that the theory's proponents themselves would consider dystopian.
Note that my premise (i.e. that a moral theory is supposed to help us make choices) requires that the theory not be vague. So saying that a utilitarian system, using some magical measurement of utility, is a good moral theory is pointless in my view.
However, I want to draw a distinction between morality at the social level and morality at the personal level. The former refers to a moral system that its proponents believe should apply to the whole world; the latter, to the principles by which people live their private lives. The three criteria I listed should only be used to evaluate morality at the social level: if you want to impose your principles on every single human, you'd better make sure they are non-contradictory, acceptable to everyone and won't mess up the world.
Morality at the personal level is different: if you are using your principles to determine your actions only, it's fine if these principles are arbitrary. If lying makes you feel uncomfortable, I think it's fair enough for you to value honesty as a principle, even if you cannot provide a very rational justification.
One final comment: I believe there are some moral issues that cause disagreement because of the fundamental inability of our language to define certain concepts. For instance, the whole debate on abortion comes down to the definition of life - and since we lack one, I don't think we can ever rationally settle that debate.
------------------------------------------------------------
Now I also have a question for whoever is reading this: the only theory I can think of that meets all three criteria is libertarianism:
a) It only has one principle - do not do anything that infringes on other people's liberty - so it's inherently consistent.
b) The fact on which everyone has to agree: we have no proof of any sort of moral authority, hence any moral command is arbitrary. In the absence of such moral authority, no one has the right to impose their own morality on others.
c) Though libertarianism may lead to meanness - e.g. an inability to condemn people for lack of kindness or charity - it's not dystopian in my view.
My question is - are there other theories that would meet all three criteria? (I think total anarchy, including the lack of condemnation of violence, could meet the first two criteria, but I think few would argue it meets the third one).
40 comments
comment by TiffanyAching · 2017-01-23T07:45:10.163Z · LW(p) · GW(p)
Hi ArisC! Gratz on your first post. A few thoughts:
I can't agree with your b) criterion - non-arbitrary. The fundamental principle has to be arbitrary or you end up in a turtles-all-the-way-down situation where each principle rests upon another. "The fundamental principle is to not infringe on the liberty of others". Why not? "Because everyone agrees there's no way to prove moral authority". No they don't. Billions don't. "Well they should, because it's true." Well so what if it is? "That means you have no right to impose moral authority on anyone" What's this "no right" of which you speak - what does that mean?
This "no-one has the right" statement surely implies the existence of another principle - "it is right to be just/fair, it is wrong to be unjust/unfair". Having the right to something means having it fairly. If "don't infringe on personal liberty" is not based upon any other principle, then it is itself arbitrary. If it is based upon an ideal of "don't do unjust things, (such as assuming moral authority)" then you've got yourself another, even deeper principle. And that could cause some issues with your a) criterion, consistency, because it's possible to imagine scenarios where "injustice is wrong" and "interfering with personal liberty is wrong" are in conflict - in fact we deal with those scenarios every day in the real world. And speaking of the consistency criterion:
> if a theory consists of a number of principles that contradict each other, there will be situations where the theory will suggest contradictory actions - hence failing its purpose as a tool to enable choice making.
Surely a moral system fails in its purpose as "a tool for choice-making" if its constituent principles - or principle, in the libertarianism case - won't actually cover a whole range of moral-choice scenarios? To pick an example at random, imagine an honesty-based payment system for an online product. The site says "please pay whatever you think this is worth". You happen to know that the site needs $5 per customer to make the business profitable. You actually believe the value of the product to be $10. How much do you pay? Or take the old Trolley Problem, where you have a choice between allowing five kids to die by inaction vs. killing one through your own act. I don't see how "do not infringe on other people's liberty" is a useful tool for making either of those choices without really stretching the definitions of "infringe" and "liberty" to breaking point. "Don't infringe on people's liberty" can only inform choices where someone's liberty is at stake - to re-frame all moral decisions as centering on someone's "liberty" would, again, seem to me to require torturing the definition of liberty.
Now I know this isn't answering your question about moral systems that meet your criteria but all I can say to that is that I don't accept your first two criteria at all. The first I've discussed. As for the second, I think that the basic idea of authority - the designation of certain individuals as rule-makers and rule-enforcers by group consensus - is justifiable. It's part of my moral system.
My bedrock principle is "survival of the human species". It is arbitrary - why care about the survival of humanity? - but it is also based in reality. We have basic biological urges to survive, to procreate (most of us) and to nurture our offspring so that they also survive and procreate. Most of us want the species to keep going. I do. So that's where I start. We have to live with each other as individuals to survive as a species. That's the second level, and I think that's also clearly based in fact. And from there a whole slew of tertiary principles arise based on what makes it possible for us to live and co-operate with each other. Justice, honesty, value for life, mutual tolerance and yes, personal liberty too. They are not "consistent" in the way I understand you to use the word, because they have to be balanced against each other in any given situation to achieve the goal - survival of the species. They do, as far as I can see, lead when taken to their logical extent to a society that is not dystopian - not perfect, but pretty functional. Optimal balancing is something we've been arguing about for millennia but we've done well enough so far that we are still here, talking about morality on the internet.
Replies from: TheAncientGeek, ArisC
↑ comment by TheAncientGeek · 2017-01-23T17:45:06.374Z · LW(p) · GW(p)
You need to distinguish between arbitrary foundations and unfounded foundations. By definition, the most basic foundations of a theory are not going to be rendered non-arbitrary by deeper foundations, but that does not mean they are arbitrary... arbitrariness may be removed by other means, such as by choosing your axioms to lead to the results you want.
↑ comment by ArisC · 2017-01-23T08:39:15.621Z · LW(p) · GW(p)
Thanks for your response!
First, re the suitability of (b) as a general criterion: if your theory rests on arbitrary principles, then you admit that it's nothing more than a subjective guide... so then what's the point of trying to argue for it? If at the end of the day it all comes down to personal preference, you might as well give up on the discussion, no?
With regards to liberty meeting that criterion, it is at least a fact on which everyone can agree that not everyone agrees on an absolute moral authority. So starting from this fact, we can derive the principle that nothing gives you the right to infringe on other people's liberty. This doesn't exactly presuppose a "fairness" principle - it's sort of like bootstrapping: it just presupposes the absence of a right to harm others. I am not saying that not being violent is right; I am saying that being violent isn't.
On your point that this theory leaves a lot of moral dilemmas uncovered: you are right. Sadly, I don't have an answer to that. Perhaps I could add a 4th criterion, to do with completeness, but I suspect that no moral theory would meet all of the criteria. But to be clear here - you are not rejecting criterion (a) as far as I can tell; you are just saying it's not sufficient, right?
As for your personal principle - I cannot say whether it meets criteria (a) and (c) because you have not provided enough details, e.g. how do you balance justice vs honesty vs liberty? If what you are saying is "it all comes down to the particular situation", then you are not describing a moral theory but personal judgement.
But I appreciate the critique - my arguing back isn't me blindly rejecting any counter-arguments!
Replies from: TiffanyAching, Viliam
↑ comment by TiffanyAching · 2017-01-23T21:29:42.412Z · LW(p) · GW(p)
Hey, I appreciate your ability to engage constructively with a critique of your views! Rare gift, that.
> if your theory rests on arbitrary principles, then you admit that it's nothing more than a subjective guide
As other people have pointed out, maybe we should consider here what we mean by "arbitrary". In your initial statement you said that non-arbitrary was that which was derived logically from facts on which everyone agrees. So to avoid ambiguity maybe we should just say that criterion (b) is "the principle(s) of the moral system must be derived logically from facts on which everyone agrees".
Now, there are no facts, as fact relates to this discussion, on which everyone agrees, and there never will be. There are, of course, facts, but among the seven-odd billion human inhabitants of the planet you will always find people to disagree with any of them. There are literally still people who think the sun revolves around the earth. I swear that's not hyperbole - google "modern geocentrism".
(By the way, you also said "if a moral theory rests on an arbitrary and subjective principle, the theory's advocates will never be able to convince people who do not share that principle of their theory's validity" - but millions of religious converts give the lie to that. Subjectivity is demonstrably no barrier to persuasion - not saying that's a good thing but it's a real thing.)
So say we cut (b) down to "logically derived from facts". I think that's useful. Facts are truly, objectively real, total consensus isn't. But upon which facts, then, do we start to build our moral system? You state that your chosen basis is the fact that not everyone agrees about moral authority. As gjm pointed out, there seems to be a gap between "we humans can't agree on what constitutes moral authority" and "nobody should impose their morality on any other person in a way that limits their freedom". After all, despite our differing views on what is or is not moral, most people do believe in the basic idea that it's justifiable to constrain the freedom of others in at least some situations.
But I'll leave that bit aside for now to go back to the issue of fact as a basis for a moral system. Your fact isn't the only fact. It's also a fact that some people are physically stronger and smarter than others. Some people base their moral system on that fact, and say that might is right, the strong have an absolute right to rule the weak, will to power and so on and on. Douchetarians, basically. There are many facts upon which one could build a moral system. How do I pick one, with some defensible basis for my choice among many?
I take as my founding fact that fact which appears to be the most fundamental, the most basically applicable to humanity, the most basically applicable to life - that it wants to keep being alive. Find me a fact about humanity more bedrock-basic than that and I swear I'll rethink my moral system.
This brings me back to criterion (a), consistency.
> how do you balance justice vs honesty vs liberty? If what you are saying is "it all comes down to the particular situation", then you are not describing a moral theory but personal judgement.
The principle - there is only one - is "what serves the species". That is, what allows us to keep living with each other and co-operating with each other, because that's necessary to our continued existence. Every other moral principle is a branch on that trunk. Honesty, justice, personal liberty, civic responsibility, mercy, compassion - we came up with those concepts, evolved them, because they can all be applied to meet the goal. So the non-subjective answer to "how do you balance principles in any given situation" is "what balance best serves the goal of keeping society ticking?". Now that's difficult to decide but there's a major difference between an objectively correct answer that's difficult to find and there being no objectively correct answer.
So do I reject criterion (a)? Not exactly. What I think is that by starting with the moral principles as a tool for moral choice-making you're skipping a step. Why worry about making moral choices at all unless there's some reason to do so? The first step is to define the goal to which making moral choices must tend. Once you define that, you can have multiple principles which may seem to be sometimes in conflict with each other - the consistency comes from the goal. The principles are to be applied in a way which is always consistent with meeting the goal. Now, some people say the goal is "maximize happiness". You might say your goal falls somewhere in that band - or you might go all out and say the goal is "maximize freedom", period. I say we can be neither happy nor free if we're not here and if we're not able to successfully live together we won't be here. I say start at the start - keep ourselves existing, and then work in as much happiness and freedom as we can manage.
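To make the balancing concrete, here's a toy sketch (Python; the principle names, weights and numbers are invented purely for illustration, not anyone's actual theory) of "many principles, one goal":

```python
# Toy sketch only: invented names, weights, and numbers.
# Many principles, one goal: conflicts between principles are resolved
# by whichever balance best serves the single underlying goal.

def goal_score(action):
    """How well the action serves the goal (keeping society functioning)."""
    weights = {"honesty": 1.0, "justice": 1.0, "liberty": 1.0}
    return sum(weights[p] * action["effects"][p] for p in weights)

def choose(actions):
    """Pick the action whose overall balance best serves the goal."""
    return max(actions, key=goal_score)

# An action that trades a little honesty for a lot of justice can beat
# one that preserves honesty but wrecks justice.
a = {"name": "white_lie", "effects": {"honesty": -0.2, "justice": 0.8, "liberty": 0.0}}
b = {"name": "brutal_truth", "effects": {"honesty": 0.5, "justice": -0.9, "liberty": 0.0}}
print(choose([a, b])["name"])  # -> white_lie
```

The principles themselves never need to be mutually consistent; the single goal function is what adjudicates between them.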
And just to be totally clear, I am saying that sometimes "maintaining personal liberty inviolate" is not the way to meet the goal "keep humanity existing". "Disregard personal liberty and afford it no value" is also not the way to meet the goal. But "personal freedom entirely unrestricted" is simply not a survival strategy. Forget humans - chimps punish or prevent behaviors that endanger the group. Every social animal I'm aware of does. And for all our wonderful evolved brains and tools and self-awareness and power of language, that's still what we are - social animals.
Replies from: ArisC
↑ comment by ArisC · 2017-01-24T00:57:45.187Z · LW(p) · GW(p)
Thanks for the continuing dialogue!
I am fine to tweak the definition of (b) to be facts-based as you say. And you are right to say that there may be many facts to choose from - I never said libertarianism is definitely the only possible theory to meet all criteria, just the only one I could come up with. So, yes, Douchetarians, as you call them, could also claim that their theory meets (b), but I'd argue it fails to meet (c).
The problem with your moral theory, as I see it, is that it also fails to meet (c), because there could be many plausible, but horrific in my view, arguments you could make: e.g. that eugenics would improve the species' odds of survival, as would assigning jobs to people based on how good they would be at them vs letting them choose for themselves &c.
Replies from: TiffanyAching
↑ comment by TiffanyAching · 2017-01-24T23:13:40.065Z · LW(p) · GW(p)
> The problem with your moral theory, as I see it, is that it also fails to meet (c), because there could be many plausible, but horrific in my view, arguments you could make [...]
I was expecting this response either from you or someone else, but didn't want to make my previous comment too long (a habit of mine) by preempting it. It's a totally valid next question, and I've considered it before.
Criterion (c) is that the principles of my moral system must not lead when taken to their logical extent to a society that I, the proponent of the system, would consider dystopian. The crux of my counter-argument is that most of what you'd consider horrific, I would also probably consider horrific, as would most people - and humans don't do well in societies that horrify them. Taking any path that leads to a "dystopia" is inconsistent with the goal.
(I'm trying to prevent this comment from turning into a prohibitively massive essay so I'll try to restrain myself and keep this broad - please feel free to request further detail about anything I say.)
Eugenics, first of all, doesn't work. (I take you to mean "negative eugenics" - killing or sterilizing those you consider undesirable, rather than "positive eugenics" - tinkering with DNA to produce kids with traits you find desirable, which hasn't really been tried and only very recently became a real possibility to consider.) We suck at guessing whether a given individual's progeny will be "good humans" or not. Too many factors, too many ways a human can be valuable, and even then all you have is a baby with a good genetic start in life - there's still all the "nurture" to come. It's like herding cats with a blindfold on. I could go on for pages about all the ways negative eugenics doesn't work - but say we were capable of making useful judgments about which humans would produce "bad" offspring. You'd then have to make the case that the principle "negative eugenics is fine to do" furthers the goal (helping humanity to survive) to such an extent that it outweighs the necessary hits taken by other goal-furthering principles like "don't murder people", "don't maim people", "don't give too much power to too few people" and, on an even more basic level, "don't suppress empathy".
Do you and I consider negative eugenics "horrific" because we think we (or at least our genitals) would be on the chopping block? Probably not, though we might fear it a bit. It horrifies us because we feel empathy for those who would suffer it. Empathy is hard-wired in most people. Measure your brain activity while you watch me getting hit with a hammer and your pain centers will show activity. You can feel for me (though measurably less if we're not the same race - these brains evolved in little tribes and are playing catch-up with the very recent states of national and global inter-dependence). Giving weight - a lot of weight - to principles protective or supportive of empathy is consistent with the goal because empathy helps us survive as a species. Numb or suppress it too much and we're screwed. Run counter to it too much without successfully suppressing it and you've got a society full of horrified, outraged people. Not great for social co-operation.
Which brings me to your other example, assigning jobs based on ability without regard to choice. Again, won't work. Gives you a society full of miserable resentful people who don't give their forced-jobs the full passion or creativity of which they are capable, or actively direct their energies towards trying to get re-assigned to the job they want. Would go further into this but this is already too long!
I know those two were only examples on your part but my point is that the question "does this help humanity to survive" is always a case of trying to balance "does it help in this way to an extent that outweighs how it harms in these other ways". That has to be taken into account when considering a "horrible scenario". People having empathy - caring for and helping each other - helps us to survive. People being physically and mentally healthy ("happy" is a big part of both, by the way) helps. People having personal freedom to create and invent and try things helps. People being ambitious and competing and seeking to become better helps. We need principles that take all that value into account - and sometimes those principles are going to be up against each other and we have to look for the least-worst answer. It's never simple, we get it wrong all the time, but we must deal with it. If morality was easy we wouldn't have spent the last ten thousand years arguing about it.
Now, I noticed that elsewhere you said it was bothering you that people were going off on tangents to your main issues, so I'll try to circle back to your original point. You're trying to devise a framework for evaluating a moral system, and I do think your criteria raise some useful lines of inquiry, but I don't see how it's possible to "evaluate" something without expressing or defining what it is you want it to do. My evaluation of my hairdryer depends totally on whether I want it to dry my hair or tell me amusing anecdotes. Evaluation comes up "pretty good" on the former and "totally crap" on the latter. Now "figuring out a way to evaluate a moral system" is something I'm all for, and the best help I have to give with that is to suggest that you define what it is you want a moral system to do first - a base on which to build your evaluation framework.
[Edited to add: I got through two paragraphs on eugenics without bringing up the you-know-whozis! Where should I pick up my medal?]
↑ comment by Viliam · 2017-01-23T16:22:10.880Z · LW(p) · GW(p)
By the way, what exactly do you mean by "arbitrary" and "non-arbitrary"? I am asking because the homo sapiens species itself is in some sense "arbitrary" -- do we want the result to be equally attractive for humans and space spiders, or is it okay if it is merely attractive for humans?
My opinion is that while it would be a nice coincidence if my moral system also happened to be attractive for the space spiders, I actually care about humans. Not even all humans equally; for example, I wouldn't care too much if psychopaths decided that my moral system seems too arbitrary for them.
But then, considering the space spiders (and human spiders), there seem to be two different questions here: how would I define "morality" for agents more or less like myself, and how would I define "optimal rules for peaceful coexistence" for agents completely unlike myself.
I refuse to use the word "morality" for the latter, because depending on the nature of the space spiders, the optimal outcome could still be something quite horrifying for the average human. But in some sense it would be less "arbitrary".
comment by Dagon · 2017-01-23T19:49:58.251Z · LW(p) · GW(p)
One additional piece of advice: split the discussion. Talking about desiderata of moral theories really should be separated from discussion of any specific theory, unless you're using multiple theories as examples of points your meta-theory makes.
The mixing makes it come across as "here's a metatheory I came up with, and oh, hey, look! my preferred object-level theory is the only thing that fits it." It's not clear if you're supporting your metatheory with the fact that libertarianism is "correct", or supporting libertarianism with your metatheory, but it IS clear that you're not being rigorous with the levels.
Replies from: gjm, ArisC
↑ comment by gjm · 2017-01-23T21:34:42.016Z · LW(p) · GW(p)
On the other hand, doing it this way at least provides a bit more transparency. If ArisC had first of all posted a lengthy and apparently abstract discussion at the meta-level, and then later turned out to be a partisan of the one object-level theory that (allegedly) fits the meta-criteria, then I think some readers would have concluded that the meta-discussion was not being conducted in good faith.
comment by Dagon · 2017-01-23T14:43:11.233Z · LW(p) · GW(p)
I applaud the attempt, but I don't actually fully agree with any of your main points. The missing component is the acknowledgement that we are all running on corrupt hardware, and don't have infinite compute power.
a) This is a fine goal, but completely impossible if you expect simple, human-operable principles. Instead, a good human moral structure acknowledges contradictions and tensions among its tenets.
b) I can't tell if you're arguing moral realism (moral structure is a fact), or just that you think it should be acceptable to people with certain beliefs. Since there are no facts with which literally everyone agrees, this is probably doomed.
c) The simplified version of a moral theory which can fit in a person's head, or be communicated across individuals in a lifetime cannot be so perfect that it applies in all cases. Blind adherence to any small set of strictures without impossible calculation levels can always go wrong.
I'm also not sure I get your distinction between personal and societal. I think you mean "applied to yourself" and "enforced on others". This is a fine point to keep in mind, and to note that extra humility is needed when interacting with others. That said, a moral theory which doesn't specify when you can (and should/must) coerce or punish others is pretty useless.
Replies from: ArisC, gjm
↑ comment by ArisC · 2017-01-24T00:52:23.834Z · LW(p) · GW(p)
You are right that we have limited computational power, but this is a theoretical tool. I do not grant that a) is impossible - it can be achieved by either having a system that relies on a single principle (e.g. libertarianism) or one that relies on ordered principles, so that if two conflict in a particular scenario, you go with the highest ranking one.
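As a minimal sketch of the second option (Python; the principles and scenario fields here are invented examples, not a worked-out theory), ordered principles amount to a simple decision procedure - each principle either speaks or stays silent, and the highest-ranking one that speaks wins:

```python
# Illustrative sketch only: a lexically ordered principle system.
# Each principle maps a scenario to a verdict ("permit", "forbid")
# or None (the principle is silent on this scenario).
def no_violence(scenario):
    return "forbid" if scenario.get("involves_violence") else None

def keep_promises(scenario):
    return "forbid" if scenario.get("breaks_promise") else None

# Ranked highest-priority first.
ORDERED_PRINCIPLES = [no_violence, keep_promises]

def evaluate(scenario):
    """Return the verdict of the highest-ranking principle that speaks."""
    for principle in ORDERED_PRINCIPLES:
        verdict = principle(scenario)
        if verdict is not None:
            return verdict          # higher rank overrides everything below
    return "permit"                 # no principle objects

# An act that breaks a promise but involves no violence is forbidden by
# the second principle, since the first stays silent.
print(evaluate({"involves_violence": False, "breaks_promise": True}))  # forbid
```

Because rank settles every conflict, such a system can never issue two incompatible verdicts on the same scenario, which is all that criterion (a) demands.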
On (b), insane people & postmodernists aside, I do think there are facts on which everyone agrees... and re the latter, I do not know how seriously I take their disagreements with objective facts given that I have yet to witness one jump out of Sokal's window!
Yes, that's what I mean about societal vs personal. A societal theory should be coercive, which is exactly why it must meet these criteria: if it doesn't meet (a), there will be situations where the theory will coerce you to perform two mutually exclusive actions; if it doesn't meet (b), you won't get people to agree to a covenant that allows for coercion; and if it doesn't meet (c), once you start coercively applying its principles, you will end up with a dystopia.
Replies from: TheAncientGeek, Dagon
↑ comment by TheAncientGeek · 2017-01-24T06:29:54.837Z · LW(p) · GW(p)
Is liberty a fact?
Consider a steelman of the postmodernist position: "Every question of major concern contains some element of evaluation, and therefore cannot be settled as a matter of objective fact."
Replies from: ArisC
↑ comment by ArisC · 2017-01-24T06:40:58.842Z · LW(p) · GW(p)
As I said though, I will start taking postmodernists seriously when they put their money where their mouths are and give a public display of how gravity isn't necessarily a thing!
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2017-01-24T07:29:01.734Z · LW(p) · GW(p)
That's a bad example twice over. In the first place, it is not a particularly realistic or steelmanned example of something pomos say. For another, it's somewhat defensible scientifically.
Any scientific theory is subject to refutation, hence the "not necessarily". In particular, gravity is less of an independent entity in relativity than it is in Newtonian physics.
ETA
How about responding to my steelman?
Replies from: ArisC
↑ comment by ArisC · 2017-01-24T10:12:02.951Z · LW(p) · GW(p)
OK, serious response: if you don't want to admit the existence of facts, then the whole conversation is pointless - morality comes down to personal preference. That's fine as a conclusion - but then I don't want to see anyone who holds it calling other people immoral.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2017-01-24T11:24:27.759Z · LW(p) · GW(p)
I didn't say anything amounting to "there are no facts"... and furthermore I wasn't even citing my own views, but those of postmodernists... and furthermore I wasn't attributing wholesale rejection of facts to them either. You seem to have rounded off my comment to "yay pomo". Please read more carefully in the future.
Replies from: ArisC
↑ comment by ArisC · 2017-01-24T11:46:26.419Z · LW(p) · GW(p)
First, you wrote "Every question of major concern contains some element of evaluation, and therefore cannot be settled as a matter of objective fact" - if this does not mean to say "there are no facts", I am not sure what it is trying to say.
Second, this whole thing pertains to the second criterion. My point is that rejecting this criterion, for whatever reason, is saying that you are willing to admit arbitrary principles - but these are by definition subjective, random, not grounded in anything. So you are then saying that it's okay for a moral theory to be based on what is, at the end of the day, personal preference.
Third, if this isn't your view, why bring it up? I don't think it's conducive to a discussion to say "well, I don't think so, but some people say..." If we all agree that this position is not valid, why bother with it? If you do think it's valid, then saying "it's not my view" is confusing.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2017-01-24T20:16:46.611Z · LW(p) · GW(p)
> First, you wrote "Every question of major concern contains some element of evaluation, and therefore cannot be settled as a matter of objective fact" - if this does not mean to say "there are no facts", I am not sure what it is trying to say.
It starts "Every question of major concern" so,straight off, it allows facts of minor concern. But concern to whom? Postmodernists do not, I contend, deny the existence of basic physical facts as regard them as rather uninteresting. When Derrida is sitting in a rive gauche cafe stirring his coffee, he does not dispute the existence of the coffee, the cafe or the spoon, but he is not going to write a book about them either.
Postmodernists are, I think, more interested in questions of wide societal and political concern (perhaps you are too, if your comment "everything else pertains to politics, and is kind of pointless if not" is anything to go by). And those complex questions have evaluative components (in the sense of the fact/value divide). Which is compatible with the existence of factual components as well - which is another way in which I am not denying the existence of facts.
But what I am proposing is a kind of one-drop rule, by which a question that is partly evaluative cannot be settled on a straightforward factual basis. For instance, there are facts to the effect that a fetus that is so many weeks old is capable of independent existence, but they don't tell you whether abortion is right or wrong by themselves.
↑ comment by Dagon · 2017-01-24T05:43:51.435Z · LW(p) · GW(p)
If you believe that some moral theories are better than others (and it wasn't clear that you do, but I suspect it is so), why would you ever accept a personal theory that's not good enough to be coerced on others?
Replies from: ArisC, TheAncientGeek
↑ comment by TheAncientGeek · 2017-01-24T07:06:44.031Z · LW(p) · GW(p)
If the point of a non-personal, universal or group-level morality is to satisfy group-level values, such as equality and justice, then the justification for coercion is that coordination is necessary to achieve them, and voluntary coordination is not sufficient.
If the point of personal morality is to achieve personal values, there is no justification for one person to impose it on another with different values.
It's not about how good theories are but what they are supposed to do.
↑ comment by gjm · 2017-01-23T21:39:12.976Z · LW(p) · GW(p)
> distinction between personal and societal
Regardless, I think there's a further distinction there that ArisC isn't making. Actual systems of social enforcement don't generally consist of enforcing all and only the tenets of a particular moral theory. No one (so far as I know) thinks it's a fundamental moral principle that we should drive on the left or, as it may be, the right side of the road, or that we should pay 25% income tax rather than 20% or 30%, but this sort of thing is enforced, often quite vigorously. Presumably there's some sort of moral system underlying the choice of things to enforce (though, given how laws are actually made, it will be a weird mishmash of the moral systems and non-moral preferences and pecuniary interests and so forth of the law-makers, and it certainly won't satisfy any nice neat meta-level axioms except by good luck), but the things enforced will not be the same as the moral values leading to their selection.
Replies from: Dagon
↑ comment by Dagon · 2017-01-23T23:39:09.771Z · LW(p) · GW(p)
Well, since it's not part of everyone's moral theory, clearly we can't impinge on people's liberty to drive in their preferred lane.
Ok, sorry - please ignore that.
I suspect this is a good place for another level - how does your moral theory prescribe enforcement of morally-irrelevant actions? And does your meta-theory prefer theories with more enforcement of cohesion or more room for diversity?
comment by gjm · 2017-01-23T14:45:32.274Z · LW(p) · GW(p)
Libertarianism is not the only system that meets your three criteria, because the null system with no moral principles at all also meets them.
That system has the obvious drawback of providing absolutely no moral guidance in any situation, but likewise a system that truly has no principles besides "do not do anything that infringes on other people's liberty" leaves many moral questions entirely unanswered. (Unless it is interpreted extremely strictly, so that every tiny knock-on effect of an action needs to be considered and even tiny infringements of liberty are forbidden -- in which case I rather suspect it reduces to "do not do anything".)
Still, perhaps (some variety of) libertarianism is the unique non-null system meeting those criteria? I don't think so; I don't think libertarianism actually obeys (b). I don't see any valid way to get from "we have no proof of any moral authority" to "therefore no one must do anything that infringes on another's liberty". At most it might justify "the fact that you consider something immoral isn't enough to justify infringing someone else's liberty", but that's a much weaker proposition; and it seems to me that if you accept this as a justification for libertarianism then a similar argument ending "In the absence of such moral authority, no one has the right to inflict suffering on others in the name of their own morality" equally justifies hedonistic utilitarianism.
I don't see anything here, for instance, that rules out situations like these: (1) You do something that infringes someone else's liberty, because it's the only way to secure greater liberty for someone else. (2) You do something that infringes someone else's liberty, because it happens that you want to. (In the absence of a moral authority, no one has the right to stop you...)
Also, of course, not everyone does agree that there is no sort of moral authority; for instance, some people believe that there is a god who plays this role, or that some sort of combination of everyone's preferences constitutes a moral authority. And your conclusion seems to depend on "there is no moral authority" rather than "not everyone agrees on a moral authority".
I think TiffanyAching is right that if you have a moral system based on fundamental principles then those principles have to be arbitrary, at least in the sense of not deriving from other moral principles; that's what "fundamental" means. But I'm not sure "arbitrary" is quite the right term. Suppose e.g. someone somehow manages to prove that in a particular society, acceptance of some set of moral principles will reliably lead to maximizing total happiness within that society. (This is of course not at all likely to happen.) Then adopting those principles would be "arbitrary" in the sense that it wouldn't be overtly based on any particular moral principle, but I'd see it as very non-arbitrary.
Replies from: ArisC
↑ comment by ArisC · 2017-01-24T00:46:47.829Z · LW(p) · GW(p)
When you write utilitarianism, I assume it's a typo?
On your first point: if by null system you mean no moral guidance whatsoever, so that you allow violence, I think this fails criterion (c) - it's pretty dystopian in my view. Of course, that criterion specifies that the judge of whether the resulting society is dystopian is the theory's proponent, so if you think that's an acceptable society, fair enough, and you are right.
I do think libertarianism meets (b) - I think your proposition (no one has the right to inflict suffering on others...) is exactly what I am saying; I don't think it's a weak statement... why do you think so?
I think there may be some confusion though on how we define liberty - I use the term literally, so I do not accept that a rich person is more free than a poor person, for instance. So there can be no situation where you infringe on A's liberty to increase B's - unless A has already broken the moral code by physically harming B. For (2), this goes back to bootstrapping: because you have no right to harm others, people have a right to prevent you from doing so.
Re moral authority: actually both statements work - unless you can convince me of your moral authority's existence, I will not accept it as a basis for morality, and so the point is moot. We need to ground our morality on facts that are accepted as facts by everyone sane (I know the definition of sane invites a lot of debate, but I am being a bit practical here!)
Replies from: gjm
↑ comment by gjm · 2017-01-24T02:28:51.307Z · LW(p) · GW(p)
> When you write utilitarianism, I assume it's a typo?
I don't think so. What am I missing?
> it's pretty dystopian
Hmm, actually you might be right. I was thinking that taking the principles of the null system to their logical conclusion yields no principles and therefore tells you nothing about how to run a society, so that "the sort of society you get by taking the principles to their logical conclusion" could be any sort of society at all; but on reflection I think that's not consistent with your treatment of libertarianism and I should instead have taken the logical conclusion to be "a society with no rules at all".
> I don't think it's a weak statement... why do you think so?
Because the principle (call it "L0") "your moral principles don't entitle you to infringe on others' liberty" doesn't say anything about infringements of liberty with other motivations. If I infringe on your liberty for my financial gain or for fun or because I think the gods have, for inscrutable reasons of their own, told me to, then I am not doing it for the sake of my moral principles and L0 has nothing to say about it.
To forbid those you need a stronger principle, something like L1: "nothing entitles you to infringe on others' liberty". But you can't get that just from the nonexistence of universally agreed moral standards.
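To spell out the gap in symbols (my notation, added for illustration; $r$ ranges over possible reasons for acting):

$$L_0:\quad \forall r.\ \mathrm{MoralReason}(r) \rightarrow \neg\,\mathrm{Entitles}(r, \mathrm{infringe})$$

$$L_1:\quad \forall r.\ \neg\,\mathrm{Entitles}(r, \mathrm{infringe})$$

L1 entails L0 (just drop the now-unused antecedent), but not conversely: infringing for fun or for profit involves no moral reason, so it satisfies L0 vacuously while violating L1.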
> I use the term literally, so I do not accept that a rich person is more free than a poor person, for instance.
I am not sure exactly what notion of liberty you're espousing here, but if you define it too narrowly then I am going to claim that having no principles but that of liberty does mean a dystopia just as surely as having no principles at all, and for the same reasons: it leaves lots of terrible things un-obstructed. To me, it seems obvious that freedom admits of degrees, that having more scope of action means having more freedom, and therefore that ceteris paribus a richer person is more free than a poorer person. Would you like to say more about what, for you, falls under the heading of infringing someone else's freedom, and convince me that your definition doesn't fail the arbitrariness criterion that you proposed?
(If freedom admits of degrees at all then it is in principle possible that an action takes some freedom from A in order to give more extra freedom to B. Is your notion of freedom binary, black-and-white, or does it have degrees?)
I'm afraid I'm not sure what you mean by "both statements work". What statements? As for grounding our morality on facts that everyone (sane) accepts, I think Hume was right that you can never validly derive an "ought" from an "is", and I don't believe there is any case in which an inference from facts to values is accepted by everyone sane. So I think the position you're taking leads in the end to the null system (which I think we are agreed is likely to lead to dystopia if taken as the whole social system of a society).
Replies from: ArisC
↑ comment by ArisC · 2017-01-24T04:09:37.294Z · LW(p) · GW(p)
Because we were discussing libertarianism, not sure how utilitarianism got dragged into the picture!
I see your point re L0. I go for L1, and I think you do get that from the agreed moral standard that you cannot find any good reason to do so - at least, not one that adheres to criteria (a) and (c) too.
Can you give me examples of horrible things a narrow definition would leave un-obstructed? My notion of freedom is binary: it refers to physical violence.
As for criterion (b), which seems to be the most controversial, my concern is that if we don't accept it, if we say that there are no facts, or at least no facts on which everyone agrees, then what is the point of moral philosophy anyway?
Replies from: gjm
↑ comment by gjm · 2017-01-24T11:41:49.903Z · LW(p) · GW(p)
> not sure how utilitarianism got dragged into the picture!
Ah, I left too much implicit. My argument was this: The argument you were making for a general principle "no infringing on others' liberty" can be modified a little, in a way that doesn't seem to me to make it less valid, so that instead it supports a different principle, namely "no making other people suffer". If I'm right about that, then the argument can't be a valid justification for a moral system that says "no infringing on others' liberty, but it's OK to make them suffer".
> I go for L1, and I think you do get that from the agreed moral standard that you cannot find any good reason to do so
I'm not sure I understand. Can you sketch in a little more detail how you get to L1 from the absence of universally-agreed values?
> Can you give me examples of horrible things a narrow definition would leave un-obstructed?
If the only rule is "no infringing on others' liberty by physical violence" then this leaves no objection to
- Stealing all another person's possessions.
- Conducting a large-scale defamation campaign, with the result that the person loses their job and their friends, and dies alone of starvation.
- Society-wide prejudice that says that (say) blue-eyed people cannot get any job, are refused entry to shops, etc. (So they all starve to death too.)
- Sabotaging someone's house just badly enough that in a few years' time it's likely to collapse and leave them homeless (and possibly kill their family).
> if we say that there are no facts, or at least no facts on which everyone agrees, then what is the point of moral philosophy anyway?
Some people hold that even though not everyone agrees about values, there are objectively right values that can be discovered (and some people have just failed to do so). For them, moral philosophy is about figuring out what those values are or could be, and the fact that not everyone agrees indicates only that people are fallible.
Some people hold that there are no objectively right values, but still want values to live by. For them, moral philosophy is about figuring out what value-systems produce what sorts of result; about what value-systems are most coherent internally; about making sense of the values they find built into their brains; about how one can proceed when people with different values interact.
(I am in the latter camp, for what it's worth.)
Replies from: ArisC
↑ comment by ArisC · 2017-01-24T11:58:38.738Z · LW(p) · GW(p)
Question - how do you do this thing with the blue line indicating my quote?
For L1: well, I am not sure how to say this - if we agree there are no universal values, by definition there is no value that permits you to infringe on me, right?
On your examples...
1 ==> okay, here you have discovered a major flaw in my theory which I had just taken for granted: property rights. I just (arbitrarily!) assumed their existence, and that to infringe on my property rights is to commit violence. This will take some thinking on my behalf.
2 ==> I am genuinely ambivalent about this. Don't get me wrong, if someone defamed me in real life, I would take action against them... but in principle, I cannot really come up with a reason why this would be immoral (at least, not a reason that wouldn't have other bad consequences if taken to its logical conclusion - i.e. criterion (c)!)
3 ==> here I am actually quite definitive: while I personally hate discrimination, I don't think it should be illegal. I think people should have the right to hire whomever they please for whatever reason they please. Again, I think the principle behind making discrimination illegal is very hard to justify - and to limit to the workforce.
4 ==> I would call that violence.
As for facts & values: the question for the people in the first camp you mention is, how do we determine what are the objectively right values? That's what I am trying to do through my three criteria. I don't think it's good philosophy to both say "there ARE right values but there is NO way of determining what they are".
Let me say again that when it comes to how I live my personal life, I also have values that do not necessarily meet my criteria, especially criterion (b). Sometimes I try to rationalise them by saying, like you, that they will lead me to the best outcomes. But really, they are probably just the result of my particular upbringing.
Replies from: gjm
↑ comment by gjm · 2017-01-24T12:29:55.768Z · LW(p) · GW(p)
> this thing with the blue line
Greater-than sign at the start of the paragraph. (When you're composing a comment, clicking the button that says "Show help" will tell you about some of these things. It won't throw away the comment you're editing.)
> assumed [...] that to infringe on my property rights is to commit violence.
I did wonder :-). For what it's worth, I think that's pretty much an indefensible position, but I know it's popular in libertarian circles and maybe there are ways to defend it that haven't occurred to me.
> I really cannot come up with a reason why [a massive defamation campaign] would be immoral
I will gently suggest that you should maybe see this as a deficiency in the ethical framework you're working in...
> while I personally hate discrimination, I don't think it should be illegal
That was what I expected. But if you do that, there are possible scenarios where people literally starve to death because of it. Of course nothing forces you to care more about that than you do about the evils of government coercion, but I want it to be clear what the tradeoffs actually are here. (And I suggest that starving to death is as clear a loss of liberty as any.)
> I would call that violence
OK, but see where we've now ended up. An action involving no direct violence is being classified as "violence" because, over a period of years, it is statistically likely to cause physical harm. But this same description covers an enormous number of other things that I bet you don't want to class as violence or infringement of liberty. One example: If a factory emits a lot of pollution, it injures the health of people around it; some of them will die.
> how do we determine what are the objectively right values?
Yup, that's a really tough problem, and its toughness is one reason why many people (including me) are inclined to think that in fact there aren't any objectively right values. Some believers in objectively right values hold that they can be found in revelations from a god or gods. Some believe that they can be found by careful consideration of what it could mean for humans to flourish. Etc. Personally, I'm pessimistic about the prospects of all these approaches. Including, I'm afraid, yours :-).
Replies from: ArisC
↑ comment by ArisC · 2017-01-24T14:08:04.260Z · LW(p) · GW(p)
> I will gently suggest that you should maybe see this as a deficiency in the ethical framework you're working in...
All this does is weaken my argument for libertarianism, not my model for evaluating moral theories! Let's not conflate the two.
> the evils of government coercion / starving to death...
To be clear - it's not exactly the government coercion that bothers me. It's that criminalising discrimination is... just a bit random. As an employer, I can show preference for thousands of characteristics, and rationalise them (e.g. for extroverts - "I want people who can close sales") but not gender/race/age? It's a bit bizarre.
> statistically likely to cause physical harm
This is the subject of another post I want to write, and will do when I have time - I think the important thing here is the intent. But let's discuss this in more detail in another post!
> pollution
This is tricky, as many negative externalities are. To be honest, I'd say this falls into the category of "issues we cannot deal with because the tools at our disposal, such as language, are not precise enough", much like abortion. I think no moral theory would ever give you solid guidance on such matters.
> there aren't any objective values
Fair enough. My approach is predicated on the existence of values. If you want to say there is no such thing, absolutely fine by me - as long as you (and by you here I mean "one" - based on this conversation, I don't think this applies to you specifically!) are not sanctimonious about your own morals.
(but note that you can still use my framework to rank theories - even if no theory is actually the correct one, you can have degrees of failure - so a theory that's not even internally consistent is inferior to others that are).
Replies from: gjm
↑ comment by gjm · 2017-01-24T14:56:04.543Z · LW(p) · GW(p)
> All this does is weaken my argument for libertarianism
Really? Then maybe I misunderstood what you said before, because I thought you were saying that you can't find any grounds for moral disapproval of massive defamation campaigns. That seems to me like a defect not in some particular argument but in what counts for you as grounds for moral disapproval.
[Meta-note: If you want to quote already-quoted material, you can use two ">" characters.]
> criminalising discrimination is ... just a bit random.
I understand, but I think it's less random than you may think, in two ways. (1) What picks out gender, race, age, and other things that put people in "protected classes" (as I think the terminology in some jurisdictions has it) is that they are things that have been widely used for unfair discrimination. History does produce effects that in isolation look random: you get laws saying "don't do X" but no laws saying "don't do Y" even though X and Y are about equally bad, because X is a thing that actually happened and Y isn't. It looks random but I'm not sure it's actually a problem. (2) There is, I think, a more general and less random principle underlying this: When hiring (or whatever), don't discriminate on the basis of characteristics that are not actually relevant to how well someone will do the job. If you're employing a chemistry teacher, someone with blue eyes won't on that account teach any worse; so don't refuse to employ blue-eyed people as chemistry teachers. (Artificial example because real examples might be too distracting.) What makes this a little more difficult is that in some cases the "irrelevant" attributes may correlate with relevant ones; e.g., suppose blue-eyed people are shorter than brown-eyed people on average and you're putting together a basketball team, then you will mostly not choose blue-eyed people. But in this case you should measure their height rather than looking at their eyes, and so I think it goes for other characteristics that correlate with things that matter.
> statistically likely to cause physical harm [...] the important thing here is the intent [...] let's discuss this in more detail in another post
OK, but I do want to emphasize that (though I'm prepared to be convinced otherwise) this looks to me like a really serious problem for libertarianish positions that say that only liberty matters and therefore we have no business erecting legal obstacles to anything other than violent freedom-infringement.
> negative externalities [...] issues we cannot deal with [...] much like abortion
I may be being insufficiently charitable, but this feels like a cop-out. There's nothing about this that obviously indicates to me that negative externalities are too subtle to be addressed by the mental tools at our disposal. Are you quite sure you aren't just saying this because it's something that doesn't fit with the position you're committed to?
> My approach is predicated on the existence of values.
OK. But if you hold that there's a way of finding out what these values are, then doesn't that call into question the impossibility of getting everyone to agree about them? (Which is a key step in your argument.) It seems as if the argument depends on its own failure!
> degrees of failure
Yes, I agree. (But cautiously; if someone concocts a perfectly consistent moral theory that aims at maximizing human misery, I am not convinced that that would be better than a theory that matches better with widespread intuitions about values, but has some inconsistencies in handling edge cases.)
Replies from: ArisC
↑ comment by ArisC · 2017-01-24T15:04:59.057Z · LW(p) · GW(p)
> because I thought you were saying that you can't find any grounds for moral disapproval of massive defamation campaigns
Yes, I meant I couldn't find grounds for disapproval of defamation under a libertarian system.
On discrimination, your argument is very risky. For example, in a racist society, a person's race will impact how well they do at their job. Besides, on a practical level, it's very hard to determine what characteristics actually correlate with performance.
> Are you quite sure you aren't just saying this because it's something that doesn't fit with the position you're committed to?
That's a bit unfair - I readily admitted the weakness in my whole theory re property rights. The problem with externalities like pollution is that it is difficult to say at what point something hurts someone to a significant extent, because "hurting someone" is not particularly well defined. Similarly for non-physical violence (e.g. bullying), and to an extent, this applies to defamation too.
> OK. But if you hold that there's a way of finding out what these values are, then doesn't that call into question the impossibility of getting everyone to agree about them? (Which is a key step in your argument.) It seems as if the argument depends on its own failure!
Not clear on what you mean here... could you paraphrase please?
Replies from: gjm
↑ comment by gjm · 2017-01-24T16:12:08.137Z · LW(p) · GW(p)
> I meant I couldn't find grounds for disapproval of defamation under a libertarian system.
Ah, OK. Then what I want to suggest is that you should probably see this as a reason to be dissatisfied with libertarianism. (Though of course it might turn out that actually there's nothing you can do to stop massive defamation campaigns that wouldn't have worse adverse consequences in practice. I doubt that, though.)
> in a racist society, a person's race will impact how well they do at their job.
It might. I think there are two sorts of mechanism. The first is that a racist society might mess up some people's education and other opportunities, leading them to end up worse at things than if they belonged to a favoured rather than a disfavoured group. The second is that some jobs (most, in fact) involve interacting with other people, and if the others are racist then members of disfavoured groups might be less effective because others won't cooperate with them.
Both of these mean that the principle "don't discriminate on the basis of things that don't make an actual difference" isn't enough on its own to prevent all harmful-seeming discrimination, so appealing only to that principle probably justifies less anti-discrimination law than actually exists in many places. I'm OK with that; you're saying that there shouldn't be any anti-discrimination law because of its arbitrariness, and I'm pointing out that at least some has pretty good and non-arbitrary justification. I'm not trying to convince you that all the anti-discrimination measures currently in existence are good; only that some might be :-).
> it's very hard to determine what characteristics actually correlate with performance.
I agree, but my argument was that "this characteristic correlates with performance" generally isn't good grounds for discrimination in hiring etc.
> That's a bit unfair - I readily admitted the weakness in my whole theory re property rights.
You did (for which, well done!) but someone can be epistemically virtuous on one occasion but not another :-). And I did admit that maybe I was being uncharitable. But I really don't see how it's plausible that negative externalities are just Too Much for the human race's mental tools to cope with. You say the problem is that it's difficult to draw boundary lines (if I'm understanding you right); yeah, it is, but that's a problem with pretty much everything including, I suggest, infringement of liberty. The real world comes in shades of grey; our institutions sometimes need to be defined in black and white; the best we can do is to draw the boundaries in reasonable places, and I don't think it makes sense to throw up our hands and despair merely because practical considerations sometimes require slightly arbitrary decisions to be made.
(It is only a matter of practical considerations. We could organize our laws and whatnot to acknowledge that most things vary continuously, and e.g. instead of having an offence of "murder" that one either has or hasn't committed say sometimes that someone has committed 0.1 of a murder, etc. But it would be far more complicated and the gain would probably not be worth the cost.)
> could you paraphrase please?
You argued: There is no universally agreed moral system or moral authority, nor any prospect of their being one. Therefore, it can never be right to force your moral system on someone else. Therefore, we should be libertarians. (As I've said, every step in this seems dubious to me; my apologies if as a result I have presented it badly.) And this is how you derive libertarianism. Now, you say that libertarianism is the one true objectively right moral system, which you know to be right by means of this argument. And here's the thing: if this is really a good argument, then others ought to be persuadable by it too, in which case ultimately everyone should end up libertarian. But then it would no longer be true that there's no universally agreed moral system! But that was an essential premise of the argument. So it's self-undermining. If it's a good argument, then it provides a universal moral system, whose nonexistence was a premise of the argument.
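The self-undermining structure here is simple enough to formalize. A minimal sketch (Lean; `Sound` and `Agreed` are my own labels for "the argument is good" and "everyone comes to agree on one moral system" - an illustration of the shape of the argument, not gjm's wording):

```lean
variable (Sound Agreed : Prop)

example
    (persuades : Sound → Agreed)   -- a good argument would eventually persuade everyone
    (premise   : Sound → ¬Agreed)  -- but its own premise denies universal agreement
    : ¬Sound :=
  fun h => premise h (persuades h)
```

If both hypotheses hold, soundness yields a contradiction, so the argument refutes itself.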
comment by UmamiSalami · 2017-01-26T05:30:30.196Z · LW(p) · GW(p)
Kantian ethics: do not violate the categorical imperative. It's derived logically from the status of humans as rational autonomous moral agents. It leads to a society where people's rights and interests are respected.
Utilitarianism: maximize utility. It's derived logically from the goodness of pleasure and the badness of pain. It leads to a society where people suffer little and are very happy.
Virtue ethics: be a virtuous person. It's derived logically from the nature of the human being. It leads to a society where people act in accordance with moral ideals.
Etc.
comment by niceguyanon · 2017-01-23T21:44:30.486Z · LW(p) · GW(p)
What are your own thoughts about the problem of monopolies - are they even a problem at all? The standard answer is that they either would not occur or would be a beneficial thing.
Replies from: ArisC
↑ comment by ArisC · 2017-01-24T00:38:44.937Z · LW(p) · GW(p)
You mean under libertarianism? Well, economically I think they are a bad thing - but in theory, I don't see how they can be avoided without coercion.
Of course, if I were a president or prime minister, I would have to be a bit more pragmatic - I don't think pure libertarianism would ever work!
comment by TheAncientGeek · 2017-01-23T18:32:05.330Z · LW(p) · GW(p)
This is to some extent a post I wanted to see, since I have been saying for ages that metaethics should be approached with an explicit set of criteria for what an ethical theory is, what it should do, and what makes it right, as opposed to glancing at something and noticing a signal of approval from System 1.
> c) If the principles of the moral theory are taken to their logical conclusion, they must not lead to a society that the theory's proponents themselves would consider dystopian.
That's rather weak. Why not have a criterion that selects for good outcomes, rather than avoiding bad outcomes?
One objection might be that the revised c) begs the question in favour of consequentialism... but the original c) already had a toehold in consequentialism.
> What I mean by this is that the principles must be derived logically from facts on which everyone agrees.
You are going to need to do something about the is-ought divide. On the other hand, your example is liberty, which is clearly a value.