Questions for Moral Realists
post by Peter Wildeford (peter_hurford) · 2013-02-13T05:44:44.948Z · LW · GW · Legacy · 110 comments
My meta-ethics is basically that of Luke's Pluralistic Moral Reductionism. (UPDATE: Elaborated in my Meta-ethics FAQ.)
However, I was curious as to whether this "Pluralistic Moral Reductionism" counts as moral realism or anti-realism. Luke's essay says it depends on what I mean by "moral realism". I see moral realism as broken down into three separate axes:
There's success theory, the part that I accept, which states that moral statements like "murder is wrong" do successfully refer to something real (in this case, a particular moral standard, like utilitarianism -- "murder is wrong" refers to "murder does not maximize happiness").
There's unitary theory, which I reject, that states there is only one "true" moral standard rather than hundreds of possible ones.
And then there's absolutism theory, which I reject, that states that the one true morality is rationally binding.
I don't know how many moral realists are on LessWrong, but I have a few questions for people who accept moral realism, especially unitary theory or absolutism theory. These are "generally seeking understanding of opposing points of view" kinds of questions, not stumper questions designed to disprove anything. While I'm doing some more reading on the topic, if you're into moral realism, you could help me out by sharing your perspective.
~
Why is there only one particular morality?
This goes right to the core of unitary theory -- that there is only one true theory of morality. But I must admit I'm dumbfounded at how any one particular theory of morality could be "the one true one", except in so far as someone personally chooses that theory over others based on preferences and desires.
So why is there only one particular morality? And what is the one true theory of morality? What makes this theory the one true one rather than others? How do we know there is only one particular theory? What's inadequate about all the other candidates?
~
Where does morality come from?
This gets me a bit more background knowledge, but what is the ontology of morality? Some concepts of moral realism have an idea of a "moral realm", while others reject this as needlessly queer and spooky. But essentially, what is grounding morality? Are moral facts contingent; could morality have been different? Is it possible to make it different in the future?
~
Why should we care about (your) morality?
I see rationality as talking about what best satisfies your pre-existing desires. But it's entirely possible that morality isn't desirable by someone at all. While I hope that society is prepared to coerce them into moral behavior (either through social or legal force), I don't think that their immoral behavior is necessarily irrational. And on some accounts, morality is independent of desire but still has rational force.
How does morality get its ability to be rationally binding? If the very definition of "rationality" includes being moral, is that mere wordplay? Why should we accept this definition of rationality and not a different one?
I look forward to engaging in dialogue with some moral realists. Same with moral anti-realists, I guess. After all, if moral realism is true, I want to know.
110 comments
Comments sorted by top scores.
comment by lukeprog · 2013-02-13T07:58:55.367Z · LW(p) · GW(p)
How does morality get its ability to be rationally binding? If the very definition of "rationality" includes being moral, is that mere wordplay? Why should we accept this definition of rationality and not a different one?
Also see Peter Singer's The Triviality of the Debate Over 'Is-Ought' and the Definition of 'Moral'. In short, a justification of moral prescriptions comes back to an explanation of why you should care about those moral prescriptions. Or on LW: The Moral Void.
A killer quote from Singer's paper:
Disputes over the definition of morality and over the "is-ought" problem are disputes over words which raise no really significant issues. [Of course,] lack of clarity about the meaning of words is an important source of error, both in philosophy and in practical argument… My complaint is that what should be regarded as something to be got out of the way in the introduction to a work of moral philosophy has become the subject matter of almost the whole of moral philosophy...
Now that I think about it, I'm going to make that the new epigraph for Pluralistic Moral Reductionism.
Replies from: danieldewey↑ comment by danieldewey · 2013-02-13T13:15:39.888Z · LW(p) · GW(p)
Thanks for linking that paper, I hadn't encountered it and it seems useful.
comment by Alicorn · 2013-02-13T07:01:30.524Z · LW(p) · GW(p)
By the definitions above, I'm a unitary but not an absolutism theorist. I would describe rationally binding constraints as those that govern prudence, not morality; one can be perfectly prudent without being moral (indeed, if one does not have morality among one's priorities, perfect prudence could require immorality). A brief sketch of my moral theory can be found here.
Why is there only one particular morality?
What would it mean for there to be several? I think morality drops out of personhood. It's possible that other things drop out of personhood, too, or that categories other than persons produce their own special results (although I don't know what any of that might look like), but I wouldn't refer to such things by the same name; that would just be confusing. If there were several moralities it's unclear which would bind actors or how they'd interact. Of course people have all kinds of preferences, but these govern what it's prudent for those actors to do and what axiology is likely to inform their attempts at world-steering, not what is moral.
Where does morality come from?
People. Only people are morally obliged to do or not do things. Only people have rights that make it particularly moral or immoral to do or not do things with them. (I have a secondary feature to my system that still only constrains people but doesn't refer so specially to acting on them, of which I am less confident; it's a patch for incompleteness, not a grounding principle.) Rights and the obligation to respect them are just a thing that happens when something complicated and persony exists.
Are moral facts contingent; could morality have been different?
Only cosmetically. There could have failed to be any people, or there could be only one person in the world who could find it a practical impossibility to violate their own rights, or such far-flung people that they couldn't interact in any potentially immoral way. But given the existence of people who can interact with each other, I think morality is a necessity.
Is it possible to make it different in the future?
Only cosmetically. If there were no people - or if everyone's preferences changed so they always waived all their rights - or something, then morality could cease to be an interesting feature of the world, but it would still be there.
Why should we care about (your) morality?
Caring is not even morally obligatory (although compliance is), let alone rationally required.
Replies from: Kaj_Sotala, christina↑ comment by Kaj_Sotala · 2013-02-13T07:34:44.244Z · LW(p) · GW(p)
What would it mean for there to be several?
That there are many possible moral intuitions or axioms that one could base one's morality on, with no objective criteria for saying which set of intuitions or axioms is the best one? Your basic axioms say that (to simplify a lot) personhood grants rights and morality is about respecting those rights, while a utilitarian could say that suffering is bad and pleasure is good and morality is about how to best minimize suffering and maximize pleasure. Since all morality ultimately reduces to some kinds of axioms that just have to be taken as given, I am in turn confused about what it would even mean to say that there is only one correct set of them. (There obviously is some set of axioms that is the only correct one for me, but moral realism seems to imply some set that would be the only correct one for everybody.)
Replies from: Alicorn, ygert↑ comment by Alicorn · 2013-02-13T17:35:15.269Z · LW(p) · GW(p)
That there are many possible moral intuitions or axioms that one could base one's morality on, with no objective criteria for saying which set of intuitions or axioms is the best one?
Well, yes, I suppose this is literally what that would mean, but I don't see much reason to call any particular thing chosen out of a grab bag "morality" instead of "prudence" or "that thing that Joe does" or "a popular action-tree-pruning algorithm".
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-02-15T21:10:59.441Z · LW(p) · GW(p)
Your theory of morality is certainly complex and well thought out, but I think it is based on an assertion ("persons have rights, which it is wrong to violate") that isn't established in any sort of traditionally realist way. Indeed, I think you agree with me that since absolutism theory is false, only those who prefer to recognize rights (or, alternatively, are caught in some regulatory scheme that enforces those rights) have a reason to recognize rights.
Alternatively, as Kaj mentioned, there are other systems of morality, like utilitarianism, that also capture a lot of what is meant by morality, and there aren't any grounds to dismiss them as inferior. In an essay I wrote, "Too Many Moralities", the place I choose to carve reality around the word "morality" is whether the "end" holds as its goal acting not with regard to only the self, but rather with regard to the direct or indirect benefit of others. If it does, it counts as "morality", and if it doesn't, it does not. I don't personally yet see any reason why a particular theory deserves the special treatment of being singled out as the "one, true theory of morality".
I'd appreciate your thoughts on the matter because it could help me understand (and perhaps even sympathize with) the unitary perspective a lot more.
Replies from: Alicorn↑ comment by Alicorn · 2013-02-15T21:30:59.793Z · LW(p) · GW(p)
Hmm. I'm not sure I understand your perspective. I'm happy to call all sorts of incorrect moralities "things based on moral intuition", even if I think the extrapolation is wrong. Does that help?
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-02-16T19:25:10.947Z · LW(p) · GW(p)
Why do you think their extrapolation is wrong? And what does "wrong" mean in that context?
Replies from: Alicorn↑ comment by Alicorn · 2013-02-17T06:24:40.994Z · LW(p) · GW(p)
I'm not sure I know what you mean by the first question. Regarding the second, it means that they have not arrived at the (one true unitary) morality, at least as far as I know. If someone looks at an optical illusion like, say, the Muller-Lyer, they base their conclusions about the lengths of the lines they're looking at on their vision, but reach incorrect conclusions. I don't think deriving moral theory from moral intuition is that straightforward or that it's fooled in any particularly analogous way, but that's about what I mean by someone extrapolating incorrectly from moral intuitions.
Replies from: Kaj_Sotala, peter_hurford↑ comment by Kaj_Sotala · 2013-02-21T15:43:30.852Z · LW(p) · GW(p)
I'm not sure I know what you mean by the first question.
I think that he meant something like:
- You seem to be saying that while different people can have different moralities, many (most?) of the moralities that people can have are wrong.
- You also seem to be implying that you consider your morality to be more correct than that of many others.
- Since you believe that there are moralities which are wrong, and that you have a morality which is, if not completely correct then at least more correct than the moralities of many others, that means that you need to have some sort of a rule for deciding what kind of a morality is right and what kind of morality is wrong.
- So what is the rule that makes you consider your morality more correct than e.g. consequentialism? What are some of the specific mistakes that e.g. consequentialism makes, and how do you know that they are mistakes?
↑ comment by Peter Wildeford (peter_hurford) · 2013-03-22T17:22:57.189Z · LW(p) · GW(p)
Sorry for the long gap between this response and the previous one, but I'm still interested. With the Muller-Lyer illusion, you can demonstrate it's an illusion by using a ruler. Following your analogy, how would you demonstrate that an incorrect moral extrapolation was similarly in error? Is there a moral "ruler"?
Replies from: Alicorn↑ comment by Alicorn · 2013-03-22T18:14:12.257Z · LW(p) · GW(p)
Not one that you can buy at an office supply store, at any rate, but you can triangulate a little using other people and of course checking for consistency is important.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-03-26T18:25:21.260Z · LW(p) · GW(p)
So what is moral is what is the most popular among all internally consistent possibilities?
Replies from: Alicorn↑ comment by Alicorn · 2013-03-27T07:01:48.575Z · LW(p) · GW(p)
No, morality is not contingent on popularity.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-03-27T14:51:56.349Z · LW(p) · GW(p)
I'm confused. Can you explain how you triangulate morality using other people?
Replies from: Alicorn↑ comment by Alicorn · 2013-03-27T18:23:44.057Z · LW(p) · GW(p)
Mostly, they're helpful for locating hypotheses.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-03-28T02:44:26.262Z · LW(p) · GW(p)
I'm still confused, sorry. How do you arrive at a moral principle and how do you know it's not a moral illusion?
Replies from: Alicorn↑ comment by Alicorn · 2013-03-28T05:56:25.830Z · LW(p) · GW(p)
You can't be certain it's not a moral illusion, I hope I never implied that.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-03-28T18:34:57.007Z · LW(p) · GW(p)
You're right; you haven't. Do you put any probability estimate on whether a certain moral principle is not an illusion? If so, how?
Replies from: Alicorn↑ comment by Alicorn · 2013-03-28T18:41:16.457Z · LW(p) · GW(p)
I don't naturally think in numbers and decline to forcibly attach any. I could probably order a list of statements from more to less confident.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-03-28T19:56:07.065Z · LW(p) · GW(p)
I could probably order a list of statements from more to less confident.
By what basis do you make that ordering?
Replies from: Alicorn↑ comment by Alicorn · 2013-03-28T23:41:06.049Z · LW(p) · GW(p)
I'm not sure what you mean by this question.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-03-29T15:02:41.857Z · LW(p) · GW(p)
You say you can order a list of statements from more to less confident. Say you are more confident in Moral Principle A than in Moral Principle B. But how do you know that? Why aren't you more confident in Moral Principle B than in Moral Principle A? I imagine you have some criteria for determining your confidence in moral principles in order to rank them, but I don't know what those criteria are.
Replies from: Alicorn↑ comment by Alicorn · 2013-03-29T16:52:26.230Z · LW(p) · GW(p)
Someone has taken a dislike to this thread, so I'm going to tap out now.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-03-30T15:52:08.322Z · LW(p) · GW(p)
Thanks for the conversation.
↑ comment by ygert · 2013-02-13T13:16:21.346Z · LW(p) · GW(p)
(There obviously is some set of axioms that is the only correct one for me, but moral realism seems to imply some set that would be the only correct one for everybody.)
Note: What I think Alicorn is saying (and I think it makes a lot of sense) is that those "axioms" can be derived from the notion of "personhood" or "humanity". That is, given that humans are the way they are, from that we can derive some rules about how to behave. These rules are not truly universal, as aliens would not have them, or be in any way obliged to come up with them. (Of course, they would have their own separate system, but calling that system a form of morality would be distorting the meaning of the word.)
Replies from: Alicorn↑ comment by Alicorn · 2013-02-13T17:33:46.331Z · LW(p) · GW(p)
No. Personhood ≠ humanity. If we find persony aliens I will apply the same moral system to them. Your interpretation seems to cross the cosmetic features of what I'm saying with some of the deeper principles of what Eliezer tends to say.
Replies from: ygert↑ comment by christina · 2013-03-17T00:40:11.424Z · LW(p) · GW(p)
Interesting thoughts. Definitely agree that morality comes from people, and specifically their interactions with each other. Although I would additionally clarify that in my case I consider that morality (as opposed to a simple action decided by personal gain or benefit) comes from the interaction between sentients where one or more can act on another based on knowledge not only of their own state but also the state of that other. This is because I consider any sentient to have some nonzero moral value to me, but am not sure if I would consider all of them persons. I am comfortable thinking of an ape or a dolphin as a person, but I think I do not give a mouse the same status. Nevertheless, I would feel some amount of moral wrongness involved in causing unnecessary pain to the mouse, since I believe such creatures to be sentient and therefore capable of suffering.
I'm not sure how the rest of my morality compares to yours, though. I don't think there is any one morality, or indeed that moral facts exist at all. Now, this does not mean that I subscribe to multiple moralities, especially those whose actions and consequences directly contradict each other. I simply believe that if one of my highest goals is the protection of sapient life, and someone else's highest goal is the destruction of it, I cannot necessarily expect that I can ever show them, with any facts about the world, that their morality is wrong. I could only say that it was a fact about the world that their morality is in direct contradiction with mine.
Now I don't believe that anything I've said above about morality (which was mostly metaethics anyway) precludes my existence or anyone else's existence as a moral actor. In fact, all people, by their capability to make decisions based on their knowledge of the present state of others, and their ability to extrapolate that state into the future based on their actions, are automatically moral actors in my view of things. I just don't necessarily think they always act in accordance with their own morals or have morals mutually compatible with my morals.
Nevertheless, I think that facts are very useful in discussing morality, because sometimes people are not actually in disagreement with each other's highest moral goals--they simply have a disagreement about facts and if that can be resolved, they can agree on a mutually compatible course of action.
comment by Qiaochu_Yuan · 2013-02-13T06:29:17.240Z · LW(p) · GW(p)
Why is there only one particular morality?
I think the standard LW argument for there being only one morality is based on the psychological unity of mankind. Human minds do not occupy an arbitrary or even a particularly large region of mindspace: the region they occupy is quite small for good reasons. Likewise, the moral theories that human minds adopt occupy quite a small region of moralityspace. The arguments around CEV suggest that these moral theories ought to converge if we extrapolate enough. I am not sure if this exact argument is defended in a LW post.
But essentially, what is grounding morality? Are moral facts contingent; could morality have been different? Is it possible to make it different in the future?
See this comment.
Why should we care about (your) morality?
Because you and I are not arbitrary minds in mindspace. Viewed against the entirety of mindspace we are practically identical, and we have minds that care about that.
Replies from: buybuydandavis, Kaj_Sotala, MrMind, Vladimir_Nesov, Jack, Manfred↑ comment by buybuydandavis · 2013-02-13T06:54:00.285Z · LW(p) · GW(p)
the region they occupy is quite small for good reasons.
The region is exactly as large as it is. The fact that it has size, and is not a single point, tells you that our moralities are different. In some things, the difference will not matter, and in some it will. It seems we don't have any problem finding things to fight over. However small you want to say that the differences are, there's a lot of conflict over them.
The more I look around, the more I see people with fundamentally different ways of thinking and valuing. Now I suppose they have more commonality with each other than with banana slugs, and likely they would band together should the banana slugs rise up and launch a sneak attack. But these different kinds of people with different values often don't seem to want to live in the same world.
Hitchens writes in Newsweek magazine: “Winston Churchill ... found it intolerable even to breathe the same air, or share the same continent or planet, as the Nazis.”
(By the way, if anyone can find the original source from Churchill, I'd appreciate it.)
I'd also note that even having contextually identical moralities doesn't imply a lack of conflict. We could all be psychopaths. Some percentage of us are already there.
Viewed against the entirety of mindspace we are practically identical, and we have minds that care about that.
Seems like our minds care quite a lot about the differences, however small you think they are. The differences aren't small, by the measure of how much we care about them.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-25T21:23:53.778Z · LW(p) · GW(p)
No amount of difference or disagreement makes the slightest impact on realism. Realists accept that some, many, or all people are wrong.
Replies from: buybuydandavis↑ comment by buybuydandavis · 2013-11-26T03:44:23.138Z · LW(p) · GW(p)
Of course.
No amount of reality need have the slightest impact on moral realists.
Is there any experiment that could be run that would refute moral realism?
Maybe Clippy is right, we should all be clippists, and we're just all "wrong" to think otherwise. Clippism - the true objective morality. Clippy seems to think so. I don't, and I don't care what Clippy thinks in this regard.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-26T09:25:20.942Z · LW(p) · GW(p)
1. Realism does not equate to empiricism.
2. It also doesn't equate to non-empiricism. E.g. "Fish do not feel pain, so angling is not cruel".
3. If you are like a Clippy -- an entity that only uses rationality to fulfil arbitrary aims -- you won't be convinced. Guess what? That has no impact on realism whatsoever. A compelling argument is an argument capable of compelling an agent capable of understanding it, and with a commitment to rationality as an end.
Replies from: buybuydandavis↑ comment by buybuydandavis · 2013-11-26T20:37:24.503Z · LW(p) · GW(p)
Are your aims arbitrary? If not, why are Clippy's aims arbitrary, and yours not arbitrary?
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-27T14:11:32.575Z · LW(p) · GW(p)
Clippy doesn't care about having a coherent set of aims, or about revising and improving its aims.
Replies from: buybuydandavis↑ comment by buybuydandavis · 2013-11-27T20:11:16.947Z · LW(p) · GW(p)
That doesn't answer my question.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-27T20:25:29.455Z · LW(p) · GW(p)
You asked two questions. My reply was meant to indicate that arbitrariness depends on coherence and extrapolation (revision, reflection), both of which Clippy has rather less of than I do.
↑ comment by Kaj_Sotala · 2013-02-13T07:46:14.026Z · LW(p) · GW(p)
I think the standard LW argument for there being only one morality is based on the psychological unity of mankind.
I would very much doubt such an argument: most humans also share the same mechanisms for language learning, but still end up speaking quite different languages. (Yes, you can translate between them and learn new languages, but that doesn't mean that all languages are the same: there are things that just don't translate well from one language to another.) Global structure, local content.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-13T08:00:21.122Z · LW(p) · GW(p)
I don't think the analogy to languages holds water. Substituting a word for a different word doesn't have the kind of impact on what people do that substituting a moral rule for a different moral rule does. Put another way, there are selection pressures constraining what human moralities look like that don't constrain what human languages look like.
Replies from: twanvl↑ comment by twanvl · 2013-02-13T17:47:27.948Z · LW(p) · GW(p)
Substituting a word for a different word doesn't have the kind of impact on what people do that substituting a moral rule for a different moral rule does.
This sounds like a strawman argument to me. It doesn't refute the argument that part of morality is cultural but based on a shared morality-learning mechanism.
there are selection pressures constraining what human moralities look like that don't constrain what human languages look like.
There are also selection pressures constraining what human languages look like that don't constrain what human moralities look like. Or to give another example: there are selection pressures that constrain what dogs look like that don't constrain what catfish look like, and vice versa. That doesn't mean that they don't also have similarities.
↑ comment by MrMind · 2013-02-13T11:13:43.199Z · LW(p) · GW(p)
I think the standard LW argument for there being only one morality is based on the psychological unity of mankind.
Having read the Meta-Ethics sequence, this is my belief too. Indeed, Eliezer cares to index human-evaluation and pebblesorter-evaluation algorithms, by calling the first "morality" and the second "pebblesorting", but he is careful to avoid talking about Eliezer-morality and MrMind-morality, or even Eliezer-yesterday-morality and Eliezer-today-morality.
Of course his aims were different, and compared to differently evolved aliens (or AI's) our morality is truly one of a kind.
But if we magnify our view on morality-space, I think it's impossible not to recognize that there are differences!
I think that this state of affairs can be explained in this way: while there's a psychological unity of mankind, it concerns only very primitive aspects of our lives: the existence of joy, sadness, the importance of sex, etc.
But our innermost and basic evaluation algorithm doesn't cover every aspect of our lives, mainly because our culture poses problems too new for a genetic solution to have been spread to the whole population.
Thus ad-hoc solutions, derived from culture and circumstances, step in: justice, fairness, laws, and so on. Those solutions may very well vary in time and space, and our brains being what they are, sometimes they overwrite what should have been the most primitive output.
When we talk about morality, we are usually already assuming the most primitive basic facts about the human evaluation algorithm, and we try to argue about the finer points not covered by the genetic wiring of our brains, as for example whether murder is always wrong.
In comparison with pebble-sorters or clipping AIs, humanity exhibits a very narrow way of evaluating reality, to the point that you can talk about a single human algorithm and call it "morality". But if you zoom in, it is clear that the bedrock of morality doesn't cover every problem that cultures naturally throw at people, and that's why you need to invent "patches" or "add-ons" to the original algorithm, in the form of moral concepts like justice, fairness, the sanctity of life, etc. Obviously, different groups of people will come up with different patches. But there are add-ons that were invented a long time ago, and they are now so widespread and ingrained in certain groups' education that they feel as if they are part of the original primitive morality, while in fact they are not. There are also new problems that require the (sometimes urgent) invention of new patches (e.g. nuclear proliferation, genetic manipulation, birth control), and they are even more problematic and still in a state of transition nowadays.
Is this view unitary, or even realist? In my opinion, philosophical distinctions are too crude and simplistic to correctly categorize the view of morality as "algorithm + local patches". Maybe it needs a whole new category of its own, something like "algorithmic theories of morality" (although the category of "synthetic ethical naturalism" comes close to capturing the concept).
↑ comment by Vladimir_Nesov · 2013-02-13T14:29:18.054Z · LW(p) · GW(p)
The arguments around CEV suggest that these moral theories ought to converge.
In the practical sense, only something in particular can be done with the world, so if "morality" is taken to refer to the goal given to a world-optimizing AI, it should be something specific by construction. If we take "morality" as given by the data of individual people, we can define personal moralities for each of them that would almost certainly be somewhat different from each other. Given the task of arriving at a single goal for the world, it might prove useful to exploit the similarities between personal moralities, or to sidestep this concept altogether, but eventual "convergence" is more of a design criterion than a prediction. In a world that had both humans and pebblesorters in it, arriving at a single goal would still be an important problem, even though we wouldn't expect these goals to "naturally" converge under reflection.
↑ comment by Jack · 2013-02-13T14:36:52.174Z · LW(p) · GW(p)
I think the standard LW argument for there being only one morality is based on the psychological unity of mankind. Human minds do not occupy an arbitrary or even a particularly large region of mindspace: the region they occupy is quite small for good reasons. Likewise, the moral theories that human minds adopt occupy quite a small region of moralityspace. The arguments around CEV suggest that these moral theories ought to converge if we extrapolate enough. I am not sure if this exact argument is defended in a LW post.
This sounds like ethical subjectivism (that ethical sentences are propositions about the attitudes of people). I'm quite amenable to ethical subjectivism but it's an anti-realist position.
Replies from: Qiaochu_Yuan, byrnema↑ comment by Qiaochu_Yuan · 2013-02-13T17:41:19.796Z · LW(p) · GW(p)
See this comment. If Omega changed the attitudes of all people, that would change what those people mean when they say morality-in-our-world, but it would not change what I mean (here, in the real world rather than the counterfactual world) when I say morality-in-the-counterfactual-world, in the same way that if Omega changed the brains of all people so that the meanings of "red" and "yellow" were switched, that would change what those people mean when they say red, but it would not change what I mean when I say red-in-the-counterfactual-world.
Replies from: Jack↑ comment by Jack · 2013-02-13T18:22:20.192Z · LW(p) · GW(p)
I deal with exactly this issue in a post I made a while back (admittedly it is too long). It's an issue of levels of recursion in our process of modelling reality (or a counterfactual reality). Your moral judgments aren't dependent on the attitudes of people (including yourself) that you are modeling (in this world or in a counterfactual world): they're dependent on the cognitive algorithms in your actual brain.
In other words, the subjectivist account of morality doesn't say that people look at the attitudes of people in the world and then conclude from that what morality says. We don't map attitudes and then conclude from those attitudes what is and isn't moral. Rather, we map the world and then our brains react emotionally to facts about that world and project our attitudes onto them. So morality doesn't change in a world where people's attitudes change, because you're using the same brain to make moral judgments about the counterfactual world as you use to make moral judgments about this world.
The post I linked to has some diagrams that make this clearer.
As for the linked comment, I am unsure there is a single, distinct, and unchanging logical object to define-- but if there is one I agree with the comment and think that defining the algorithm that produces human attitudes is a crucial project. But clearly an anti-realist one.
Edit: rewrote for clarity.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-13T19:13:25.315Z · LW(p) · GW(p)
Right, but that is strong evidence that morality isn't an externally existing object.
I'm not sure what you mean by this.
Real objects are subject to counterfactual alterations.
Yes, but logical objects aren't.
...if there is one I agree with the comment and think that defining the algorithm that produces human attitudes is a crucial project. But clearly an anti-realist one.
If I said "when we talk about Peano arithmetic, we are referring to a logical object. If counterfactually Peano had proposed a completely different set of axioms, that would change what people in the counterfactual world mean by Peano arithmetic, but it wouldn't change what I mean by Peano-arithmetic-in-the-counterfactual-world," would that imply that I'm not a mathematical Platonist?
Replies from: Jack↑ comment by Jack · 2013-02-13T19:20:49.741Z · LW(p) · GW(p)
I literally just edited my comment for clarity. It might make more sense now. I will edit this comment with a response to your point here.
Edit:
If I said "when we talk about Peano arithmetic, we are referring to a logical object. If counterfactually Peano had proposed a completely different set of axioms, that would change what people in the counterfactual world mean by Peano arithmetic, but it wouldn't change what I mean by Peano-arithmetic-in-the-counterfactual-world," would that imply that I'm not a mathematical Platonist?
Any value system is a logical object. For that matter, any model of anything is a logical object. Any false theory of physics is a logical object. Theories of morality and of physics (logical objects both) are interesting because they purport to describe something in the world. The question before us is do normative theories purport to describe an object that is mind-independent or an object that is subjective?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-13T19:46:26.211Z · LW(p) · GW(p)
Okay. I don't think we actually disagree about anything. I just don't know what you mean by "realist."
So morality doesn't change in a world where people's attitudes change because you're using the same brain to make moral judgments about the counterfactual world as you use to make moral judgments about this world.
Yes, that sounds right.
↑ comment by byrnema · 2013-02-13T16:59:52.914Z · LW(p) · GW(p)
This sounds like ethical subjectivism (that ethical sentences are propositions about the attitudes of people). I'm quite amenable to ethical subjectivism but it's an anti-realist position.
OK, suppose that this is an anti-realism position. People's attitudes exist, but this isn't what we mean by morality existing. Is that how it follows as an anti-realist position?
I was intrigued by a comment you made some time ago that you are not a realist, so you wonder what it is that everyone is arguing about. What is your position on ethical subjectivism?
Replies from: Jack↑ comment by Jack · 2013-02-13T17:55:57.653Z · LW(p) · GW(p)
OK, suppose that this is an anti-realism position. People's attitudes exist, but this isn't what we mean by morality existing. Is that how it follows as an anti-realist position?
So here is a generic definition of realism (in general, not for morality in particular):
a, b, and c and so on exist, and the fact that they exist and have properties such as F-ness, G-ness, and H-ness is (apart from mundane empirical dependencies of the sort sometimes encountered in everyday life) independent of anyone's beliefs, linguistic practices, conceptual schemes, and so on.
E.g. A realist position on ghosts doesn't include the position that "ghost" is a kind of hallucination people have even though there is something that exists there.
What is your position on ethical subjectivism?
I think it is less wrong than every variety of moral realism but I am unsure if moral claims are reports of subjective attitudes (subjectivism) or expressions of subjective attitudes (non-cognitivism). But I don't think that distinction matters very much.
Luckily, I live in a world populated by entities who mostly concur with my attitudes regarding how the universe should be. This lets us cooperate and formalize procedures for determining outcomes that are convivial to our attitudes. But these attitudes are the result of a cognitive structure determined by natural selection and cultural transmission, altered by reason and language. As such, they contain all manner of kludgey artifacts and heuristics that respond oddly to novel circumstances. So I find it weird that anyone thinks they can be described by something like preference utilitarianism or Kantian deontology. Those are the kind of parsimonious, elegant theories that we expect to find governing natural laws, not culturally and biologically evolved structures. In fact, Kant was emulating Newton.
Attitudes produced by human brains are going to be contextually inconsistent, subject to framing effects, unable to process most novel inputs, cluttered etc. What's more, since our attitudes aren't produced by a single, universal utility function but a cluster of heuristics, most moral disagreements are going to be the result of certain heuristics being more dominant in some people than others. That makes these grand theories about these attitudes silly to argue about: positions aren't determined by things in the universe or by logic. They're determined by the cognitive styles of individuals and the cultural conditioning they receive. Most of Less Wrong is robustly consequentialist because most people here share a particular cognitive style-- we don't have any grand insights into reality when it comes to normative theory.
Replies from: byrnema↑ comment by byrnema · 2013-02-13T22:09:51.424Z · LW(p) · GW(p)
E.g. A realist position on ghosts doesn't include the position that "ghost" is a kind of hallucination people have even though there is something that exists there.
I see, thanks for that distinction! I now need to reread parts of the metaethics sequence since I believe I came away with the thesis that morality is real in this sense... That is, that morality is real because we have bits of code (evolutionary, mental, etc) that output positive or negative feelings about different states of the universe and this code is "real" even if the positive and negative doesn't exist external to that code.
So I find it weird that anyone thinks they can be described by something like preference utilitarianism or Kantian deontology. Those are the kind of parsimonious, elegant theories that we expect to find governing natural laws, not culturally and biologically evolved structures.
I agree...
That makes these grand theories about these attitudes silly to argue about: positions aren't determined by things in the universe or by logic. They're determined by the cognitive styles of individuals and the cultural conditioning they receive.
and I don't disagree with this. I do hope/half expect that there should be some patterns to our attitudes, not as simplistic as natural laws but perhaps guessable to someone who thought about it the right way.
Thanks for describing your positions in more detail.
↑ comment by Manfred · 2013-02-13T19:19:18.599Z · LW(p) · GW(p)
I think the standard LW argument for there being only one morality is based on the psychological unity of mankind.
I think you're mixing up CEV with morality. CEV is an instance of the strategy "cooperate with humans" in some sort of AI-building prisoner's dilemma. It gives the AI some preferences, and the only guarantee that those preferences will be good is that humans are similar.
There is "only one" "morality" (kinda) because when I say "this is right" I am executing a function, and functions are unique-ish. But Me.right can be different from You.right. You just happen to be wrong sometimes, because You.right isn't right, because when I say right I mean Me.right.
So that "good" from the first paragraph would be Me.good, not CEV.good.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-13T19:43:07.587Z · LW(p) · GW(p)
You don't think morality should just be CEV?
Replies from: Manfred, RomeoStevens↑ comment by RomeoStevens · 2013-02-14T05:01:06.543Z · LW(p) · GW(p)
A "winning" CEV should result in people with wildly divergent moralities all being deliriously happy.
comment by JonatasMueller · 2013-02-13T06:01:37.753Z · LW(p) · GW(p)
I will answer by explaining my view of morally realist ethics.
Conscious experiences and their content are physical occurrences and real. They can vary from the world they represent, but they are still real occurrences. Their reality can be known with the highest possible certainty, above all else, including physics, because they are immediately and directly accessible, while the external world is accessible indirectly.
Unlike the physical world, it seems that physical conscious perceptions can theoretically be anything. The content of conscious perceptions could, with the right technology, be controlled, as in a virtual world, and made to be anything, even things that differ from the external physical world. While the physical world has no ethical value except from conscious perceptions, conscious perceptions can be ethical value, and only by being good or bad conscious perceptions, or feelings. This seems to be so by definition, because ethical value is being good or bad.
That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty. This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings, because it is logical that this will make the universe better. This is acting ethically. Not acting accordingly is irrational and mistaken. Ethics is about realizing valuable states.
Human beings have primitive emotional and instinctive motivations that are not guided by intelligence and rationality. These primitive motivations can take control of human minds and make them act in irrational and unintelligent ways. Although human beings may consider it good to act according to their primitive motivations in cases in which they conflict with acting ethically, this would be an irrational and mistaken decision.
When primitive motivations conflict with human intelligent reason, these two could be thought of as two different agents inside one mind, with differing motivations. Intelligent reason does not always prevail, because primitive motivations have strong control of behavior. However, it would be rational and intelligent for intelligent reason to always take the ultimate control of behavior if it could somehow suppress the power of primitive motivations. This might be done by somehow strengthening human intelligent reason and its control of motivations.
Actions which foster good conscious feelings and prevent bad conscious feelings need not do so in the short-term. Many effective actions tend to do so only in the long-term. Likewise, such actions need not do so directly; many effective actions only do so indirectly. Often it is rational to act if it is probable that it will be ethically positive eventually.
That people have personal identities is false; they are mere parts of the universe. This is clear upon advanced philosophical analysis, but can be hard to understand for those who haven't thought much about it. An objective and impersonal perspective is called for. For this reason it is rational for all beings to 'act ethically' not only for themselves but also for all other beings in the same universe. For an explanation of why personal identities don't exist, which is relevant to the question of why one should act ethically in a collective rather than selfish sense, see this brief essay:
https://www.facebook.com/notes/jonatas-müller/universal-identity/10151189314697917
Replies from: falenas108, twanvl, Stuart_Armstrong, peter_hurford, Jabberslythe↑ comment by falenas108 · 2013-02-13T06:50:53.411Z · LW(p) · GW(p)
That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty. This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings, because it is logical that this will make the universe better. This is acting ethically. Not acting accordingly is irrational and mistaken. Ethics is about realizing valuable states.
Why is fostering good conscious feelings and prevent bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system?
You say
Ethics is about realizing valuable states.
But valuable to who? If there were a person who valued others being in pain, why would this person's views matter less?
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-02-21T00:19:34.631Z · LW(p) · GW(p)
"Why is fostering good conscious feelings and prevent bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system?"
If we agree that good and bad feelings are good and bad, and that only conscious experiences produce direct ethical value, which lies in their good or bad quality, then theories that contradict this should not be correct, or they would need to justify their points, but it seems that they have trouble in that area.
"But valuable to who? If there were a person who valued others being in pain, why would this person's views matter less?"
:) That's a beauty of personal identities not existing. It doesn't matter who it is. In the case of valuing others being in pain, would it be generating pleasure from it? In that case, lots of things have to be considered, among which: the net balance of good and bad feelings caused from the actions; the societal effects of legalizing or not certain actions...
↑ comment by twanvl · 2013-02-13T17:02:36.195Z · LW(p) · GW(p)
Unlike the physical world, it seems that physical conscious perceptions can theoretically be anything.
Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics.
While the physical world has no ethical value except from conscious perceptions, conscious perceptions can be ethical value, and only by being good or bad conscious perceptions, or feelings. This seems to be so by definition, because ethical value is being good or bad.
Should I interpret this as you defining ethics as good and bad feelings?
That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty. This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings
So, do you endorse wireheading?
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-02-21T00:04:31.009Z · LW(p) · GW(p)
"Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics."
Indeed, conscious experience may be bound by the size and complexity of brains or similar machinery, of humans, other animals, and cyborgs. Theoretically, conscious perceptions may be able to be anything (or nearly), as we could theorize about brains the size of Jupiter or much larger. You get the point.
"Should I interpret this as you defining ethics as good and bad feelings?"
Almost. Not ethics, but ethical value in a direct, ultimate sense. There is also indirect value, which is things that can lead to direct value, which are myriad, and ethics is much more than defining value: it comprises laws, decision theory, heuristics, empirical research, and many theoretical considerations. I'm aware that Eliezer has written a post on Less Wrong saying that ethical value does not rest on happiness alone. Although happiness alone is not my proposition, I find his post on the topic quite poorly developed, and really not an advisable read.
"So, do you endorse wireheading?"
This depends very much on the context. All else being equal, wireheading could be good for some people, depending on the implications of it. However, all else seems hardly equal in this case. People seem to have a diverse spectrum of good feelings that may not be covered by the wireheading (such as love, some types of physical pleasure, good smell and taste, and many others), and the wireheading might prevent people from being functional and acting in order to increase ethical value in the long-term, so as to possibly deny its benefits. I see wireheading, in the sense of artificial paradise simulations, as a possibly desirable condition in a rather distant future of ideal development and post-scarcity, though.
↑ comment by Stuart_Armstrong · 2013-03-11T11:59:50.669Z · LW(p) · GW(p)
While the physical world has no ethical value except from conscious perceptions, conscious perceptions can be ethical value, and only by being good or bad conscious perceptions, or feelings. This seems to be so by definition, because ethical value is being good or bad.
That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty.
A bit unclear, but I'm assuming you mean something like "we have good or bad (technically, pleasant or unpleasant) conscious experiences, and we know this with great certainty". That seems fine.
This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings, because it is logical that this will make the universe better.
Why? This is the whole core of the disagreement, and you're zooming over it way too fast. Even for ourselves, our wanting systems and our liking systems are not well aligned - we want things we don't like, and vice-versa. A preference utilitarian would say our wants are the most important; you seem to disagree, focusing on the good/bad aspect instead. But what logical reason would there be to follow one or the other?
You seem to get words to do too much of the work. We have innate senses of positivity and negativity for certain experiences; we also have an innate sense that morality exists. But those together do not make positive experiences good "by definition" (nor does calling them "good" rather than "positive").
But those are relatively minor points - if there was a single consciousness in the universe, then maybe your argument could get off the ground. But we have many current and potential consciousnesses, with competing values and conscious experiences. You seem to be saying that we should logically be altruists, because we have conscious experiences. I agree we should be altruists; but that's a personal preference, and there's no logic to it. Following your argument (consciousness before physics) one could perfectly well become a solipsist, believing only one's own mind exists, and ignoring others. Or you could be a racist altruist, preferring certain individuals or conscious experiences. Or you could put all experiences together on an infinite number of comparative scales (there is no intrinsic measure to compare the quality of two positive experiences in different people).
But in a way, that's entirely a moot point. Your claim is that a certain ethics logically follows from our conscious reality. There I must ask you to prove it. State your assumptions, show your claims, present the deductions. You'll need to do that, before we can start critiquing your position properly.
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-03-11T12:45:43.534Z · LW(p) · GW(p)
Hi Stuart,
Why? This is the whole core of the disagreement, and you're zooming over it way too fast. Even for ourselves, our wanting systems and our liking systems are not well aligned - we want things we don't like, and vice-versa. A preference utilitarian would say our wants are the most important; you seem to disagree, focusing on the good/bad aspect instead. But what logical reason would there be to follow one or the other?
Indeed, wanting and liking do not always correspond, also from a neurological perspective. Wanting involves planning and planning often involves error. We often want things mistakenly, be it by evolutionary selected reasons, cultural reasons, or just bad planning. Liking is what matters, because it can be immediately and directly determined to be good, with the highest certainty. This is an empirical confirmation of its value, while wanting is like an empty promise.
We have good and bad feelings associated with some evolutionarily or culturally determined things. Theoretically, the result of good and bad feelings could be associated with any inputs. The inputs don't matter, nor does wanting necessarily matter, nor innate intuitions of morality. The only thing that has direct value, which is empirically confirmed, is good and bad feelings.
if there was a single consciousness in the universe, them maybe your argument could get off the ground. But we have many current and potential consciousnesses, with competing values and conscious experiences.
Well noticed. That comment was not well elaborated and is not a complete explanation. It is also necessary for that point you mentioned to consider the philosophy of personal identities, which is a point that I examine in my more complete essay on Less Wrong, and also in my essay Universal Identity.
But in a way, that's entirely a moot point. Your claim is that a certain ethics logically follows from our conscious reality. There I must ask you to prove it. State your assumptions, show your claims, present the deductions. You'll need to do that, before we can start critiquing your position properly.
I have a small essay written on ethics, but it's a detailed topic, and my article may be too concise, assuming much previous reading on the subject. It is here. I propose that we instead focus on questions as they come up.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-03-11T13:41:16.334Z · LW(p) · GW(p)
Liking is what matters, because it can be immediately and directly determined to be good, with the highest certainty.
That is your opinion. Others believe wanting is fundamental and rational, something that can be checked and explained and shared - while liking is a misleading emotional response (one that probably shows much less consistency, too).
How would you resolve the difference? They say something is more important, you say something else is. Neither of you disagrees about the facts of the world, just about what is important and what isn't. What can you point to that makes this into a logical disagreement?
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-03-11T13:53:31.331Z · LW(p) · GW(p)
One argument is that from empiricism or verification. Wanting can be and often is wrong. Simple examples can show this, but I assume that they won't be needed because you understand. Liking can be misleading in terms of motivation or in terms of the external object which is liked, but it cannot be misleading or wrong in itself, in that it is a good feeling. For instance, a person could like to use cocaine, and this might be misleading in terms of being a wrong motivation, that in the long-term would prove destructive and dislikeable. However, immediately, in terms of the sensation of liking itself, and all else being equal, then it is certainly good, and this is directly verifiable by consciousness.
Taking this into account, some would argue for wanting values X, Y, or Z, but not values A, B, or C. That is another matter. I'm arguing that good and bad feelings are the direct values that have validity and should be wanted. Other valid values are those that are instrumentally reducible to these, and they are very many - most of what we do.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-03-11T14:14:29.791Z · LW(p) · GW(p)
Liking can be misleading in terms of motivation or in terms of the external object which is liked, but it cannot be misleading or wrong in itself, in that it is a good feeling.
"Wanting can be misleading in terms of the long term or in terms of the internal emotional state with which it is connected, but it cannot be misleading or wrong in itself, in that it is a clear preference."
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-03-11T14:28:39.654Z · LW(p) · GW(p)
Indeed, but what separates wanting and liking is that preferences can be wrong - they require no empirical basis - while liking in itself cannot be wrong, and it does have an empirical basis.
When something is rightfully wanted, that wanting has a justification. Liking, understood as good feelings, is one such justification; avoiding bad feelings is another; and these can be causally extended to include instrumental actions that bring them about in indirect ways.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-03-11T15:28:53.996Z · LW(p) · GW(p)
Then how can wants be wrong? They're there, they're conscious preferences (you can introspect and access them, just as with liking), and they have as much empirical basis as liking.
And wants can be seen as more fundamental - they are your preferences, and they inform your actions (along with your world model), whereas using liking to guide action involves having a (potentially flawed) mental model of what will increase your good experiences and diminish your bad ones.
The game can be continued endlessly - what you're saying is that your moral system revolves around liking, and that the arguments that this should be so are convincing to you. But you can't convince wanters with the same argument - their convictions are different, and neither set of arguments is "logical". It becomes a taste-based debate.
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-03-11T18:32:41.522Z · LW(p) · GW(p)
Sorry, I thought you already understood why wanting can be wrong.
Example 1: imagine a person named Eliezer walks to an ice cream stand and picks a new flavor X. Eliezer wants to try flavor X. Eliezer buys it and eats it. The taste is awful and Eliezer vomits it up. Eliezer concludes that wanting can be wrong, and that in this sense it is different from liking.
Example 2: imagine Eliezer watches a movie in which some homophobic gangsters go about killing homosexuals. Eliezer gets inspired and wants to kill homosexuals too, so he picks up a knife, finds a nice-looking young man, and prepares to torture and kill him. Eliezer looks at the young man's muscular body, starts to feel homosexual urges and desires, and instead makes love with him. Eliezer concludes that he wanted something wrong, and that he had been a bigot and a homosexual all along - liking men, but not wanting to kill them.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-03-11T18:51:13.512Z · LW(p) · GW(p)
I understand why those examples are wrong. Because I have certain beliefs (broadly, but not universally, shared). But I don't see how any of those beliefs can be logically deduced.
Quite a lot follows from "positive conscious experiences are intrinsically valuable", but that axiom won't be accepted unless you already partially agree with it anyway.
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-03-11T19:01:36.506Z · LW(p) · GW(p)
I don't think that someone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?
Because I have certain beliefs (broadly, but not universally, shared). But I don't see how any of those beliefs can be logically deduced.
Can you elaborate? I don't understand... Many valid wants or beliefs can ultimately be reduced to good and bad feelings, in the present or future, for oneself or for others, as instrumental values - such as peace, learning, curiosity, love, security, longevity, health, science...
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-03-12T11:18:03.312Z · LW(p) · GW(p)
I don't think that someone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?
I do disagree with it! :-) Here is what I agree with:
- That humans have positive and negative conscious experiences.
- That humans have an innate sense that morality exists: that good and bad mean something.
- That humans have preferences.
I'll also agree that preferences often (but not always) track the positive or negative conscious experiences of that human, and that human impressions of good and bad sometimes (but not always) track the positive or negative conscious experiences of humans in general, at least approximately.
But I don't see any grounds for saying "positive conscious experiences are intrinsically (or logically) good". That seems to be putting in far too many extra connotations, and moving far beyond the facts we know.
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-03-12T17:39:19.300Z · LW(p) · GW(p)
I agree with what you agree with.
Did you read my article Arguments against the Orthogonality Thesis?
I think that the argument for the intrinsic value (goodness or badness) of conscious feelings goes like this:
Conscious experiences are real, and they are the most certain data about the world, because they are directly accessible and don't depend on inference, unlike the external world as we perceive it. It would not be possible to dismiss conscious experiences as unreal by inferring that they are not part of the external world, since they are more certain than the external world is. The external world could be an illusion; we could be living inside a simulated virtual world, in an underlying universe that is alien and has different physical laws.
Even though conscious experiences are representations (sometimes of external physical states, sometimes of abstract internal states), apart from what they represent they do exist in themselves as real phenomena (likely physical).
Conscious experiences can be felt as intrinsically neutral, good, or bad in value, sometimes intensely so. For example, having deep surgery without anesthesia is felt as intrinsically and intensely bad, and this badness is a real occurrence in the world. Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.
Ethical value is, by definition, what is good and what is bad. We have directly accessible data of occurrences of intrinsic goodness and badness. They are ethical value.
↑ comment by Stuart_Armstrong · 2013-03-12T18:35:35.059Z · LW(p) · GW(p)
Did you read my article Arguments against the Orthogonality Thesis?
Of course!
Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.
-> Likewise, an experience of extreme success or pleasure is often intrinsically felt as good, and this feeling of goodness is a real occurrence in the world.
And that renders the 4th point moot - your extra axiom (the one that goes from "is" to "ought") is "feelings of goodness are actually goodness". I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it's a logical transition.
Replies from: JonatasMueller, JonatasMueller↑ comment by JonatasMueller · 2013-03-12T23:51:34.916Z · LW(p) · GW(p)
This is a relevant discussion in another thread, by the way:
http://lesswrong.com/lw/gu1/decision_theory_faq/8lt9?context=3
↑ comment by JonatasMueller · 2013-03-12T20:43:59.526Z · LW(p) · GW(p)
I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it's a logical transition.
Could you explain more at length for me?
The feeling of badness is something bad (imagine yourself or someone else being tortured and tell me it's not bad), and it is a real occurrence, because conscious contents are real occurrences. It is then a bad occurrence. A bad occurrence must be a bad ethical value. All of this is data, since conscious perceptions have a directly accessible nature; they are the "is", and the "ought" is part of the definition of ethical value: what is good ought to be promoted, and what is bad ought to be avoided.
This does not mean that we should seek direct good and avoid direct bad only in the immediate present, such as partying without end; it means that we should seek it in the present and the future, pursuing indirect values such as working, learning, and promoting peace and equality, so that the future, even in the longest term, will have direct value.
(To the anonymous users who down-voted this: do me the favor of posting a comment saying why you disagree, if you are sure that you are right and I am wrong; otherwise it's just rudeness. The down-vote should be used as a censoring mechanism for inappropriate posts rather than to express disagreement with a reasonable point of view. I'm using my time to freely explain this as a favor to whoever is reading, and it's a bit insulting and bad-mannered to down-vote it.)
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-03-13T12:53:48.944Z · LW(p) · GW(p)
A bad occurrence must be a bad ethical value.
Why? That's an assertion - it won't convince anyone who doesn't already agree with you. And you're using two meanings of the word "bad" - an unpleasant subjective experience, and badness according to a moral system. Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.
I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it's a logical transition.
Could you explain more at length for me?
I have a personal moral system that isn't too far removed from the one you're espousing (a bit more emphasis on preference). However, I do not assume that this moral system can be deduced from universal or logical principles, for the reasons stated above. Most humans will have moral systems not too far removed from ours (in the sense of Kolmogorov complexity - there are many human cultural universals, and our moral instincts are generally similar), but this isn't a logical argument for the correctness of anything.
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-03-13T15:24:48.929Z · LW(p) · GW(p)
A bad occurrence must be a bad ethical value.
Why? That's an assertion - it won't convince anyone who doesn't already agree with you. And you're using two meanings of the word "bad" - an unpleasant subjective experience, and badness according to a moral system.
If it is a bad occurrence, then the definition of ethics, at least as I see it (or this dictionary, although meaning is not authoritative), is defining what is good and bad (values), as normative ethics, and bringing about good and avoiding bad, as applied ethics. It seems to be a matter of what is included in a verbal definition, so it seems correct, and moral realism would follow. That is not undesirable but helpful, since anti-realism implies that our values are not really valuable, but merely fictional.
Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.
I agree; this would be a special case of incomplete knowledge about conscious animals. It would be possible, for instance, in some artificial intelligences, but they might learn about it indirectly by observing animals and humans and by coming into contact with human culture in various forms. Otherwise, they might become moral anti-realists.
I have a personal moral system that isn't too far removed from the one you're espousing (a bit more emphasis on preference).
Could you explain a bit this emphasis on preference?
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-03-13T15:46:04.321Z · LW(p) · GW(p)
If it is a bad occurrence, then the definition of ethics, at least as I see it (or this dictionary, although meaning is not authoritative), is defining what is good and bad (values), as normative ethics, and bringing about good and avoiding bad, as applied ethics.
Which is exactly why I critiqued using the word "bad" for the conscious experiences, preferring "negative" or "unpleasant" - words which describe the conscious experience in a similar way without sneaking in normative claims.
I have a personal moral system that isn't too far removed from the one you're espousing (a bit more emphasis on preference).
Could you explain a bit this emphasis on preference?
Er, nothing complex - in my ethics, there are cases where preferences trump feelings (eg experience machines) and cases where feelings trump preferences (eg drug users who are very unhappy). That's all I'm saying.
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-03-13T23:36:37.832Z · LW(p) · GW(p)
Bad, negative, and unpleasant all possess partial semantic correspondence, which justifies their being a value.
The normative claims in this case need not be definitive and overruling. Perhaps that is where your resistance to accepting them comes from. In moral realism, a justified preference or instrumental/indirect value that weighs more can overpower a direct feeling as well. This justified preference will ultimately be reducible to direct feelings in the present or in the future, for oneself or for others, though.
Could you give me examples of any reasonable preferences that could not be reduced to good and bad feelings in that sense?
Anyway, there is also the argument from personal identity, which calls for an equalization of values that takes into account all subjects (valued equally, ceteris paribus) and their reasoning, if contextually equivalent. This could in itself be a partial refutation of the orthogonality thesis - a refutation in theory and for autonomous, free, generally superintelligent agents, but not necessarily for imprisoned or tampered-with ones.
Replies from: Stuart_Armstrong, JonatasMueller↑ comment by Stuart_Armstrong · 2013-03-15T08:55:01.569Z · LW(p) · GW(p)
Bad, negative, and unpleasant all possess partial semantic correspondence, which justifies their being a value.
Then they are no longer purely descriptive, and I can't agree that they are logically or empirically true.
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-03-16T00:48:59.009Z · LW(p) · GW(p)
Apart from that, what do you think of the other points? If you wish, we could continue a conversation on another online medium.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-03-18T10:54:16.718Z · LW(p) · GW(p)
Certainly, but I don't have much time for the next few weeks :-(
Send me a message in mid-April if you're still interested!
↑ comment by JonatasMueller · 2013-03-14T13:22:18.181Z · LW(p) · GW(p)
I think this is an important point: the previously argued normative badness of directly accessible bad conscious experiences is not absolute and definitive, nor does it by itself justify actions. It should weigh on the scale with all the other factors involved, even indirect and instrumental ones that could affect intrinsic goodness or badness only in a distant and unclear way.
↑ comment by Peter Wildeford (peter_hurford) · 2013-02-15T21:15:19.512Z · LW(p) · GW(p)
That people have personal identities is false; they are mere parts of the universe. This is clear upon advanced philosophical analysis, but can be hard to understand for those who haven't thought much about it. An objective and impersonal perspective is called for. For this reason it is rational for all beings to 'act ethically' not only for themselves but also for all other beings in the same universe. For an explanation of why personal identities don't exist, which is relevant to the question of why one should act ethically in a collective rather than selfish sense, see this brief essay: https://www.facebook.com/notes/jonatas-müller/universal-identity/10151189314697917
Right now I see this as perhaps the most challenging and serious form of moral realism, so I definitely intend to take time and care to study it. I'll have to get back to you, as I think I said I would before.
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-02-21T00:23:35.348Z · LW(p) · GW(p)
I think that it is a worthy use of time, and I applaud your rational attitude of trying to refute your own theories. I also like to do that, in order to evolve them and discard the wrong parts.
Don't hesitate to bring up specific parts for debate.
↑ comment by Jabberslythe · 2013-02-13T07:38:49.723Z · LW(p) · GW(p)
I don't directly apprehend anything as being "good" or "bad" in the moral realist sense, and I don't count other people's accounts of directly apprehending such things as evidence (especially since schizophrenics and theists exist).
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-02-21T00:11:43.938Z · LW(p) · GW(p)
Conscious perceptions are quite direct and simple. Do you feel, for example, a bad feeling like intense pain as being a bad occurrence (which, like all occurrences in the universe, is physical), and likewise, for example, a good feeling like a delicious taste as being a good occurrence?
I argue that these are perceived with the highest degree of certainty of all things and are the only things that can be ultimately linked to direct good and bad value.
Replies from: Jabberslythe↑ comment by Jabberslythe · 2013-02-21T02:35:06.535Z · LW(p) · GW(p)
No, though I admit it has felt like that for me at some points in my life. Even if I did, there are a bunch of reasons why I would not trust that intuition.
I like certain things and dislike certain things, and in a certain sense I would be mistaken if I were doing things that reliably caused me pain. That certain sense is that if I were better informed I would not take that action. If, however, I liked pain, I would still take that action, and so I would not be mistaken. I could go through the same process to explain why a sadist is not mistaken.
I do not know what else to say except that this is just an appeal to intuition, and that specific intuitions are worthless unless they are proven to reliably point towards the truth.
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-02-22T02:43:07.878Z · LW(p) · GW(p)
Liking pain seems impossible, as pain is an aversive feeling. However, for some people, certain types of pain or self-harm distract from underlying emotional pain, which is felt as good or relieving, or they may provide some thrill. In these cases, it seems that it is always pain plus some associated good feeling, or some relief of an underlying bad feeling, and it is for the good feeling or the relief that they want the pain, rather than for the pain itself.
Conscious perceptions in themselves seem to be what is most certain in terms of truth. The things they represent, such as the physical world, may be illusions, but one cannot doubt feeling the illusions themselves.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-22T03:05:08.270Z · LW(p) · GW(p)
Let's play the Monday-Tuesday game. On Monday I like pain. On Tuesday I like some associated good feeling that pain provides. What's the difference between Monday and Tuesday?
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-02-22T05:58:34.524Z · LW(p) · GW(p)
The idea that one can like pain in itself is not substantiated by evidence. Masochists and self-harmers seek the pleasure or relief they get from pain or humiliation, not pain for itself. They won't stick their hands in a pot of boiling water.
http://en.wikipedia.org/wiki/Sadomasochism http://en.wikipedia.org/wiki/Self-harm
To follow that line of reasoning, please provide evidence that there exists anyone that enjoys pain in itself. I find that unbelievable, as pain is aversive by nature.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-22T06:20:42.334Z · LW(p) · GW(p)
This is not how you play the Monday-Tuesday game! Also, a request to play the Monday-Tuesday game isn't an argument, it's a request for clarification. Specifically, I'm asking you to clarify what the difference between two statements is. Maybe we should try a simpler example:
On Monday I like ice cream. On Tuesday I like some associated good feeling that ice cream provides. What's the difference between Monday and Tuesday?
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-02-22T22:17:16.268Z · LW(p) · GW(p)
Who cares about that silly game? Whether to play it or not is my choice.
You can only validly like ice cream by way of feelings, because all that you have direct access to in this universe is consciousness. The difference between Monday and Tuesday in your example is only in the nature of the feelings involved. In the pain example, pain is liked by virtue of its association with other good feelings, not in itself. If a person somehow loses the associated good feelings, certain painful stimuli cease to be desirable.
Replies from: skepsci, Qiaochu_Yuan↑ comment by skepsci · 2013-02-23T03:09:59.614Z · LW(p) · GW(p)
If a person somehow loses the associated good feelings, ice cream also ceases to be desirable. I still don't see the difference between Monday and Tuesday.
I think I might have some idea what you mean about masochists not liking pain. Let me tell a different story, and you can tell me whether you agree...
Masochists like pain, but only in very specific environments, such as roleplaying fantasies. Within that environment, masochists like pain because of how it affects the overall experience of the fantasy. Outside that environment, masochists are just as pain-averse as the rest of the world.
Does that story jibe with your understanding?
Replies from: JonatasMueller↑ comment by JonatasMueller · 2013-03-04T21:44:10.542Z · LW(p) · GW(p)
Yes, that is correct. I'm glad a Less Wronger finally understood.
↑ comment by Qiaochu_Yuan · 2013-02-23T03:34:16.615Z · LW(p) · GW(p)
Whether to play it or not is my choice.
Yes, in the same way that explaining your ideas well or poorly is your choice, but I don't see what this has to do with explaining the difference between liking X and liking associated good feelings that X provides.
comment by TheAncientGeek · 2013-11-25T21:18:49.799Z · LW(p) · GW(p)
This goes right to the core of unitary theory -- that there is only one true theory of morality. But I must admit I'm dumbfounded at how any one particular theory of morality could be "the one true one", except in so far as someone personally chooses that theory over others based on preferences and desires.
Think of morality not as solipsistically fulfilling personal desires, but as a means of resolving conflicts between desires (within groups). Why would it then be impossible for there to be an optimal way (amongst groups) of doing so?
but I must admit I'm dumbfounded at how any one particular theory of morality could be "the one true one", except in so far as someone personally chooses that theory over others based on preferences and desires.
If N people choose differing One True Moralities, then there are N True Moralities, so that doesn't work at all.
This gets me a bit more background knowledge, but what is the ontology of morality?
Thinking of morality as conflict resolution, it is then something like group decision theory or economics... those things do not require a special ontological realm.
How does morality get it's ability to be rationally binding?
One answer is that it gets it the same way maths does. If an agent can see that there is a good argument for X AND it has a desire to believe rationally demonstrable things in general, THEN it will be bound by what can be proven to it, or what it can prove to itself. To sidestep this argument, you have to assume not ONLY that rationality is always in the service of desires, but ALSO that desires cannot possibly include desires to be maximally rational, to believe truth for its own sake, etc. (in contravention of the Orthogonality Thesis!)
If the very definition of "rationality" includes being moral, is that mere wordplay?
Who says it does? There are ways of inferring conclusions other than pulling them straight out of definitions.
Why should we accept this definition of rationality and not a different one?
You don't get a free choice. Why would any old definition count as a definition of morality? I could define "dog" as "middle C played on an oboe"... but I would be talking nonsense.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-11-26T20:31:35.185Z · LW(p) · GW(p)
Thanks for this reply. I lack the time to consider it at the moment, but look forward to circling back and engaging with it in the near future.
comment by Manfred · 2013-02-13T19:59:12.121Z · LW(p) · GW(p)
I'm dumbfounded at how any one particular theory of morality could be "the one true one", except in so far as someone personally chooses that theory over others based on preferences and desires.
Great, we agree, let's choose based on preferences and desires :P
Are moral facts contingent; could morality have been different?
What people say and do could have been different, so when using "morality" descriptively, as in "people could have different moralities," then sure. But "morality the referent," the algorithm that takes in a situation like "punching puppies" and returns "punching puppies is right" or "punching puppies is wrong," wouldn't change - people can refer to a different algorithm when they say "morality," but they can't change the thing I refer to. Articulated in this post.
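(A minimal sketch of that point, if the "algorithm" talk is taken literally; the function names and toy verdicts below are purely illustrative, not anything defined in this thread:)

```python
# Toy illustration: two fixed mappings from situations to verdicts.
# Which one a speaker labels "morality" is a fact about the speaker's
# word use; it does not change either mapping itself.

def hedonic_standard(situation: str) -> str:
    """Judges acts by the suffering they cause (illustrative only)."""
    return "wrong" if situation == "punching puppies" else "right"

def contrarian_standard(situation: str) -> str:
    """A different fixed standard that happens to endorse the same act."""
    return "right" if situation == "punching puppies" else "wrong"

# Speaker A's word "morality" refers to the first algorithm,
# speaker B's to the second. The word is ambiguous between referents,
# but each referent returns the same verdict no matter what it is called.
morality_as_used_by_A = hedonic_standard
morality_as_used_by_B = contrarian_standard

print(morality_as_used_by_A("punching puppies"))  # -> wrong
print(morality_as_used_by_B("punching puppies"))  # -> right
```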
How does morality get it's ability to be rationally binding?
Because humans have some abstract things that influence our actions, and morality, as written about by humans, is used to describe human actions. I'm reminded of Douglas Adams' puddle: "Imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn't it?'"
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-02-15T21:16:41.947Z · LW(p) · GW(p)
But "morality the referent," the algorithm that takes in a situation like "punching puppies" and returns "punching puppies is right" or "punching puppies is wrong," wouldn't change - people can refer to a different algorithm when they said "morality," but they can't change the thing I refer to. Articulated in this post.
I don't disagree with that. But I think it's a mistake for someone to leap from talking about "my morality" to "morality (in general)". Perhaps this is what projectivists get at?
~
Because humans have some abstract things that influence our actions, and morality, as written about by humans, is used to describe human actions. I'm reminded of Douglas Adams' puddle: "Imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn't it?'"
I don't quite understand your use of the analogy.
Replies from: Manfred↑ comment by Manfred · 2013-02-16T00:44:35.541Z · LW(p) · GW(p)
I think it's a mistake for someone to leap from talking about "my morality" to "morality (in general)". Perhaps this is what projectivists get at?
I have no clue if it's what projectivists get at, so you may want to elaborate :P
Because humans have some abstract things that influence our actions, and morality, as written about by humans, is used to describe human actions. I'm reminded of Douglas Adams' puddle: "Imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn't it?'"
I don't quite understand your use of the analogy.
I'm implying that people first noticed what influenced them, and then decided to call parts of it "morality." Thus making it no great mystery that morality influences people. The puddle was shaped to fit the hole, so it has no right to be surprised when it finds itself in a hole that fits it.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-02-17T20:34:45.730Z · LW(p) · GW(p)
I have no clue if it's what projectivists get at, so you may want to elaborate :P
Projectivism is, I suppose, a part psychological and part meta-ethical theory that suggests people talk about their own desires about how the world should be as if they are objective, mind-independent moral truths. Hence "my morality" -> "morality".
I'm implying that people first noticed what influenced them, and then decided to call parts of it "morality." Thus making it no great mystery that morality influences people. The puddle was shaped to fit the hole, so it has no right to be surprised when it finds itself in a hole that fits it.
That makes sense. But that implies a desires-based theory of moral motivation, which isn't usually considered moral realism.
Replies from: Manfred↑ comment by Manfred · 2013-02-17T21:36:24.370Z · LW(p) · GW(p)
which isn't usually considered moral realism.
Yeah, agreed - it's only moral realism in the sense that "I'm right, you're wrong" can be a true thing to say.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-02-18T02:17:29.526Z · LW(p) · GW(p)
Yeah, agreed - it's only moral realism in the sense that "I'm right, you're wrong" can be a true thing to say.
I call that "success theory" and I agree with it.
comment by torekp · 2013-02-14T02:28:04.701Z · LW(p) · GW(p)
I lean strongly toward unitary theory, with two caveats. First, not all specific moral statements need be true or false; some can have the middle truth-value (or no truth-value if you prefer to say that). Second, unitary-ness is not a logical truth - if true, it's a consequence of the attitudes people actually have and the circumstances in which we find ourselves.
Why is there only one particular morality? Because people keep insisting on talking about it. We keep finding that, like Churchill, we prefer "jaw, jaw" to "war, war". We keep trying to justify our ways to each other and getting meaningful responses back, in which people try to find, and generally succeed in finding, similar motivations in each other with which to assess proposed rules, virtues, and goods. We can usually tell the difference between a moral argument, offered in the spirit of reasoning together about what to do, and an exercise in the dark arts of persuasion. And enough of us prefer the former to perpetuate the search for mutually agreeable goals, rules, and virtues.
What is the one true morality? It's too early to tell. But why think that there is one? Because of the great commonalities between human beings. Granted, we have different preferences, but in many cases it seems possible to step back from them and find a general principle like "I get to use my skills to create more of what I want, and you get to use yours to create more of what you want" - a principle that abstracts from individual differences, in a fair way, and leads to an outcome acceptable to both parties. In some cases it may not be possible - which is why I allow that some moral statements may be neither true nor false. (A theory should be no more precise than its subject matter.)
Where does morality come from? From our natures as both rational beings who communicate, and social beings who need to cooperate and derive enormous utility from that. This also answers the "why care" question - without implying what you call absolutism.
Mainstream philosophy comparison: Habermas and Scanlon hold roughly the same views on the points I've mentioned here.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-02-15T21:13:07.130Z · LW(p) · GW(p)
This seems to me to be an ad populum fallacy. Just because everyone acts as if there is one true morality doesn't make it so. Or am I missing something?
The great commonalities between humans might make many of our moralities very similar, but I think there are some differences. For one example, I think eating (factory farmed) meat is a great moral wrong, but I don't think there's anything, even in principle, I could say to convince some other people to share my view.
Replies from: torekp↑ comment by torekp · 2013-02-16T19:18:11.355Z · LW(p) · GW(p)
Good points to raise, thanks.
There are two steps to my reasoning. First, if everyone acts as if agreement is possible, that tends to make it much more likely that agreement is, in fact, possible. The second step is a meta-ethical analysis which says that, if everyone freely and rationally agrees on a set of norms, virtues, etc., that is morality. (Or at least, an important part of it.) Of course, the second step is open to debate too, and popular opinion is irrelevant there. But a near-universal and deeply ingrained drive to reason-together-what-to-do is relevant to the first step.
If there are behavior codes you live by but you don't think you could convince others to live by, you could call that "morality" if you insist. In that case, the rules that we all can rationally agree to cooperate under aren't all of morality, but just a part of it - "justice", maybe.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-02-17T20:16:59.984Z · LW(p) · GW(p)
First, if everyone acts as if agreement is possible, that tends to make it much more likely that agreement is, in fact, possible. The second step is a meta-ethical analysis which says that, if everyone freely and rationally agrees on a set of norms, virtues, etc., that is morality.
I agree. But I think that people act as if agreement is possible not because they are thinking of morality in the consensus sense, but rather because they are thinking of morality as an actual law external to people. For many of them, perhaps, even the law of God.
I wonder if there's any experimental philosophy on lay people's thoughts on meta-ethics. It would be interesting to see, for sure.
If there are behavior codes you live by but you don't think you could convince others to live by, you could call that "morality" if you insist. In that case, the rules that we all can rationally agree to cooperate under aren't all of morality, but just a part of it - "justice", maybe.
You're right that perhaps we're debating definitions and not substance. But I think you'd be hard-pressed to come up with what behaviors are actually in the "consensus morality". And it's going to end up a lot like cultural relativism.
What I'd advocate is more of an ends relativism -- actually making things clearer by specifically stating which normative/moral ends we're dealing with. For example, we could contrast the claim "One ought to be a vegetarian (relative to the end of utilitarianism)" versus the claim "One is ethically permitted to eat meat (relative to the common moral beliefs of our culture)".
Replies from: torekp↑ comment by torekp · 2013-02-20T01:17:00.251Z · LW(p) · GW(p)
I doubt that much hangs on what people's meta-ethical intuitions are. Compare, for example, the issue of what people's theory of the nature of "gold" is. If people generally think that gold is what it is because the gods have mixed their aura into it, still, what gold actually is depends only on what explains the features whereby we recognize it. That explanans still comes down to its having atomic number 79 - gods be damned. And, to boot, the religious theorists of Aurum might claim to high heaven that without its divine connection, gold would be worthless. But I doubt you'd find much jewelry in the trash can after they switch theories. Similarly, I don't think a switch from divine-command to rational-agreement metaethics will result in trashing morality.