What is Metaethics?
post by lukeprog · 2011-04-25 · 562 comments
When I say I think I can solve (some of) metaethics, what exactly is it that I think I can solve?
First, we must distinguish the study of ethics or morality from the anthropology of moral belief and practice. The first one asks: "What is right?" The second one asks: "What do people think is right?" Of course, one can inform the other, but it's important not to confuse the two. One can correctly say that different cultures have different 'morals' in that they have different moral beliefs and practices, but this may not answer the question of whether they are behaving in morally right ways.
My focus is metaethics, so I'll discuss the anthropology of moral belief and practice only when it is relevant for making points about metaethics.
So what is metaethics? Many people break the field of ethics into three sub-fields: applied ethics, normative ethics, and metaethics.
Applied ethics: Is abortion morally right? How should we treat animals? What political and economic systems are most moral? What are the moral responsibilities of businesses? How should doctors respond to complex and uncertain situations? When is lying acceptable? What kinds of sex are right or wrong? Is euthanasia acceptable?
Normative ethics: What moral principles should we use in order to decide how to treat animals, when lying is acceptable, and so on? Is morality decided by what produces the greatest good for the greatest number? Is it decided by a list of unbreakable rules? Is it decided by a list of character virtues? Is it decided by a hypothetical social contract drafted under ideal circumstances?
Metaethics: What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?
Others prefer to combine applied ethics and normative ethics so that the breakdown becomes: normative ethics vs. metaethics, or 'first order' moral questions (normative ethics) vs. 'second order' questions (metaethics).
Mainstream views in metaethics
To illustrate how people can give different answers to the questions of metaethics, let me summarize some of the mainstream philosophical positions in metaethics.
Cognitivism vs. non-cognitivism: This is a debate about what is happening when people engage in moral discourse. When someone says "Murder is wrong," are they trying to state a fact about murder, that it has the property of being wrong? Or are they merely expressing a negative emotion toward murder, as if they had gasped aloud and said "Murder!" with a disapproving tone?
Another way of saying this is that cognitivists think moral discourse is 'truth-apt' - that is, moral statements are the kinds of things that can be true or false. Some cognitivists think that all moral claims are in fact false (error theory), just as the atheist thinks that claims about gods are usually meant to be fact-stating but in fact are all false because gods don't exist.1 Other cognitivists think that at least some moral claims are true. Naturalism holds that moral judgments are true or false because of natural facts,2 while non-naturalism holds that moral judgments are true or false because of non-natural facts.3 Weak cognitivism holds that moral judgments can be true or false not because they agree with certain (natural or non-natural) opinion-independent facts, but because our considered opinions determine the moral facts.4
Non-cognitivists, in contrast, tend to think that moral discourse is not truth-apt. Ayer (1936) held that moral sentences express our emotions ("Murder? Yuck!") about certain actions. This is called emotivism or expressivism. Another theory is prescriptivism, the idea that moral sentences express commands ("Don't murder!").5 Or perhaps moral judgments express our acceptance of certain norms (norm expressivism).6 Or maybe our moral judgments express our dispositions to form sentiments of approval or disapproval (quasi-realism).7
Moral psychology: One major debate in moral psychology concerns whether moral judgments require some (defeasible) motivation to adhere to the moral judgment (motivational internalism), or whether one can make a moral judgment without being motivated to adhere to it (motivational externalism). Another debate concerns whether motivation depends on both beliefs and desires (the Humean theory of motivation), or whether some beliefs are by themselves intrinsically motivating (non-Humean theories of motivation).
More recently, researchers have run a number of experiments to test the mechanisms by which people make moral judgments. I will list a few of the most surprising and famous results:
- Whether we judge an action as 'intentional' or not often depends on the judged goodness or badness of the action, not the internal states of the agent.8
- Our moral judgments are significantly affected by whether we are in the presence of freshly baked bread or a low concentration of fart spray that only the subconscious mind can detect.9
- Our moral judgments are greatly affected by pointing magnets at the part of our brain that processes theory of mind.10
- People tend to insist that certain things are right or wrong even when a hypothetical situation is constructed such that they admit they can give no reason for their judgment.11
- We use our recently-evolved neocortex to make utilitarian judgments, and deontological judgments tend to come from evolutionarily older parts of our brains.12
- People give harsher moral judgments when they feel clean.13
Moral epistemology: Different views on cognitivism vs. non-cognitivism and moral psychology suggest different views of moral epistemology. How can we know moral facts? Non-cognitivists and error theorists think there are no moral facts to be known. Those who believe moral facts answer to non-natural facts tend to think that moral knowledge comes from intuition, which somehow has access to non-natural facts. Moral naturalists tend to think that moral facts can be accessed simply by doing science.
Tying it all together
I will not be trying very hard to fit my pluralistic moral reductionism into these categories. I'll be arguing about the substance, not the symbols. But it still helps to get a sense of the subject matter by way of such examples.
Maybe mainstream metaethics will make more sense in flowchart form. Here's a flowchart I adapted from Miller (2003). If you don't understand the bottom-most branching, read chapter 9 of Miller's book or else just don't worry about it.
Next post: Conceptual Analysis and Moral Theory
Previous post: Heading Toward: No-Nonsense Metaethics
Notes
1 This is not quite correct. The error theorist can hold that a statement like "Murder is not wrong" is true, for he thinks that murder is neither wrong nor right. Rather, the error theorist claims that all moral statements which presuppose the existence of a moral property are false, because no such moral properties exist. See Joyce (2001). Mackie (1977) is the classic statement of error theory.
2 Sturgeon (1988); Boyd (1988); Brink (1989); Brandt (1979); Railton (1986); Jackson (1998). I have written introductions to the three major versions of moral naturalism: Cornell realism, Railton's moral reductionism (1, 2), and Jackson's moral functionalism.
3 Moore (1903); McDowell (1998); Wiggins (1987).
4 For an overview of such theories, see Miller (2003), chapter 7.
5 See Carnap (1937), pp. 23-25; Hare (1952).
6 Gibbard (1990).
7 Blackburn (1984).
8 The Knobe Effect. See Knobe (2003).
9 Schnall et al. (2008); Baron & Thomley (1994).
10 Young et al. (2010). I interviewed the author of this study here.
11 This is moral dumbfounding. See Haidt (2001).
12 Greene (2007).
13 Zhong et al. (2010).
References
Baron & Thomley (1994). A Whiff of Reality: Positive Affect as a Potential Mediator of the Effects of Pleasant Fragrances on Task Performance and Helping. Environment and Behavior, 26(6): 766-784.
Blackburn (1984). Spreading the Word. Oxford University Press.
Boyd (1988). How to be a Moral Realist. In Sayre-McCord (ed.), Essays on Moral Realism (pp. 181-228). Cornell University Press.
Brandt (1979). A Theory of the Good and the Right. Oxford University Press.
Brink (1989). Moral Realism and the Foundations of Ethics. Cambridge University Press.
Carnap (1937). Philosophy and Logical Syntax. Kegan Paul, Trench, Trubner & Co.
Gibbard (1990). Wise Choices, Apt Feelings. Clarendon Press.
Greene (2007). The secret joke of Kant's soul. In Sinnott-Armstrong (ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development. MIT Press.
Haidt (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108: 814-834.
Hare (1952). The Language of Morals. Oxford University Press.
Jackson (1998). From Metaphysics to Ethics. Oxford University Press.
Joyce (2001). The Myth of Morality. Cambridge University Press.
Knobe (2003). Intentional Action and Side Effects in Ordinary Language. Analysis, 63: 190-193.
Mackie (1977). Ethics: Inventing Right and Wrong. Penguin.
McDowell (1998). Mind, Value, and Reality. Harvard University Press.
Miller (2003). An Introduction to Contemporary Metaethics. Polity.
Moore (1903). Principia Ethica. Cambridge University Press.
Schnall, Haidt, Clore, & Jordan (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34(8): 1096-1109.
Railton (1986). Moral realism. Philosophical Review, 95: 163-207.
Sturgeon (1988). Moral explanations. In Sayre-McCord (ed.), Essays on Moral Realism (pp. 229-255). Cornell University Press.
Wiggins (1987). A sensible subjectivism. In Needs, Values, Truth (pp. 185-214). Blackwell.
Young, Camprodon, Hauser, Pascual-Leone, & Saxe (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. Proceedings of the National Academy of Sciences, 107: 6753-6758.
Zhong, Strejcek, & Sivanathan (2010). A clean self can render harsh moral judgment. Journal of Experimental Social Psychology, 46(5): 859-862.
Comments (sorted by top scores)
comment by Vladimir_Nesov · 2011-04-25
Hm. What is this post for? It doesn't explain the ideas it refers to in enough detail to convey what they mean, and from what it does tell, the ideas seem pretty crazy/simplistic, paying attention to strange categories, like that PhilPapers survey. (The part before the "Mainstream views in metaethics" section does seem to address the topic of the post, but the rest is pretty bizarre. If that was the point, it should've been made explicitly, I think, but it probably wasn't.)
comment by lukeprog · 2011-04-26
My posts are now going to feel naked to me whenever they lack a comment from you complaining that the post isn't book-length, covering every detail of a given topic. :)
Like I said, I don't have much interest in fitting my views into the established categories, but I wanted to give people an overview of how metaethics is usually done so they at least have some illustrations of what the subject matter is.
And if you find mainstream metaethics bizarre, well... welcome to a diseased discipline.
comment by Amanojack · 2011-04-27
Since you understand how diseased the discipline of ethics is, I'm hoping the next post in the series will focus heavily on clearing up the semantic issues that have made it so diseased. I don't think any real sense can be made of metaethics until the very nature of what someone is doing when they utter an ethical statement is covered.
We use language to do a lot of things: express emotions, make other people do stuff, signal, intimidate, get our thoughts into other people's minds, parrot what someone else said - and often more than one of these at a time. Since we presumably are trying to get at the speaker's intention, we really can't know the "meaning" without asking the speaker, yet various metaethical theorists call themselves emotivists, error theorists, prescriptivists, and so on. It seems to me the choice of a meta-ethical theory boils down to a choice of what the theorist wants to presume people are trying to do when they use the word ought.
Surely no one can deny that sometimes some people do indeed intend "You ought not steal" as a command, or as a way of expressing disgust at the notion of theft, or simply as a means of intimidation. My meta-meta-ethical theory is that it all depends on what the person uttering the statement intends to accomplish by saying it. A debate between these meta-ethical theories sounds very likely to revolve around whose definition of ought is "correct".
In short, I think the main reason ethics is so diseased as a discipline is that the theorists are trying to argue whose definition is better, rather than acknowledging that it is pretty hard for anyone to know what each person intends by their moralistic language.
comment by Bongo · 2011-04-28
Maybe the preoccupation with "statements" is part of the disease. After all, there would probably be ethics even without language or with a very different language. And after all, when investigating x, you should investigate x, not statements about x.
Replies from: CuSithBell, Amanojack↑ comment by CuSithBell · 2011-04-28T02:28:18.540Z · LW(p) · GW(p)
But first you need to identify x. Which is a question about the meaning of a word.
comment by Amanojack · 2011-04-28
Though Bongo is surely right there would be moral sentiments even without language, now we are dealing with something identified: specific emotions like empathy, sense of justice, disgust, indignation, pity. Yeah those would exist without language. And yes, language has made things much more complicated, and the preoccupation with analyzing sentences makes it even worse.
If people can realize all that without looking at the very nature of communication, that would be great, but in my experience most people feel hesitant about scrapping so many centuries of philosophy and need to see how the language makes such a mess of things before they can truly feel comfortable with it. If Bongo is ready to scrap language analysis now and drop all the silly -isms, I'm preaching to the choir.
comment by Amanojack · 2011-04-28
Ethics is unique, at least to me, in that I still have no idea what the heck people are even referring to most of the time when they use moralistic language. I can't investigate X until I know what X is even supposed to be about. Most of the time there is a fundamental failure to communicate, even regarding the definition of the field itself. And whenever there isn't such a failure, the problem disappears and all discussants agree as if nothing had been in dispute.
comment by Gray · 2011-04-27
This pretty much sums up a very large reason why I think metaethics itself is a diseased discipline. I don't even know why this site likes to talk about "metaethics" whenever it wants to moralize, other than, perhaps, that the prefix "meta" makes it sound more technical and "rational", when it is really just another layer of obscurity.
I think, just like politics, this site should avoid the topic of ethics as much as possible. Most of the "science" of ethics is just post-Christian nonsense. Seriously, read Nietzsche. I don't trust any of this talk about ethics by someone who hasn't read, and understood, Nietzsche.
comment by wedrifid · 2011-04-27
> I think, just like politics, this site should avoid the topic of ethics as much as possible. Most of the "science" of ethics is just post-Christian nonsense. Seriously, read Nietzsche. I don't trust any of this talk about ethics by someone who hasn't read, and understood, Nietzsche.
I reject your appeal to authority or sophistication. I also suggest you are confused about what discussion of metaethics entails.
The 'meta' implies that the discussions of ethics can be separated entirely from normative moralizing and be engaged with as a purely epistemic challenge. This is not to say that people don't throw their own moralizing into the conversation incessantly but that is a mix of confusion and bias on the part of the individual and not intrinsic to the subject.
It is useful to be able to describe precisely what people mean when they make ethical judgments and even what the associated words mean and how they relate to intuitions.
comment by lukeprog · 2011-05-02
I should add that nobody who has read and understood the sequences should be surprised by what I'll describe as 'pluralistic moral reductionism.' I'm writing this sequence because I think this basic view on standard metaethical questions hasn't yet been articulated clearly enough for my satisfaction. And then, I want to make a bit of progress on the hard questions of 'metaethics' (it depends where you draw the boundary around 'metaethics') - but only after I've swept away the easy questions of metaethics.
comment by Scott Alexander (Yvain) · 2011-04-25
This post covered at least as much material as my old college moral philosophy classes did in a month. It also left me feeling more confident that I understood all the terms involved than that month of classes did. Thank you for being able to explain difficult things clearly and concisely.
comment by Scott Alexander (Yvain) · 2011-04-26
I request an explanation of why my comment telling Luke he did a good job is more highly upvoted than the post Luke did a good job on. If you agree with me that Luke did a good job strongly enough to upvote the statement, why not upvote Luke?
comment by zaph · 2011-04-26
Couldn't that just be due to a higher number of total votes (both up and down) for the OP? I would assume fewer people read each comment, and downvoters may have decided to only weigh in on the OP. A hypothetical controversial post could have a karma of 8, with 10 downvotes negating 10 of its upvotes, and a supportive comment could have 9 upvotes because half of the post's upvoters also gave it their vote. The comment has higher karma, but lower volatility, so to speak.
comment by prase · 2011-04-26
I have upvoted your comment because it gives feedback to the author, which should be encouraged (negative feedback leads to improvement, but surely we don't want to read only disapproval, do we?). I don't always agree with the content of a comment I upvote.
comment by TheOtherDave · 2011-04-26
Oddly, the comment is now less upvoted than the post, but your request for an explanation is being downvoted. I'm kinda curious as to the underlying thought processes now, myself.
comment by NancyLebovitz · 2011-04-26
This is making me wonder if karma can cause people to model LW as having a group mind, and if people generally think of social groups which are too large to model each individual as being group minds.
comment by TheOtherDave · 2011-04-26
I'm not sure if it's related to what you're wondering, but if it helps clarify anything I'll add that I don't exactly know what a group mind is, or what exactly it means to model a group as one, but that when I ask questions of a forum (or, as in this case, mention to a forum that I'm curious about something) I expect that a large number of individuals will read the question, decide individually whether they have a useful answer and whether they feel like providing it, and act accordingly.
In this case, more specifically, I figured that the people whose voting patterns matched the group-level behavior -- e.g., the ones who upvoted Yvain but not Luke at first, or who downvoted Yvain's request for explanation -- might address my curiosity with personal anecdotes... and potentially that various other people would weigh in with theories.
comment by NancyLebovitz · 2011-04-26
What I was thinking of with the "group mind" is that it can be tempting if one is flamed by a few people in a group, to feel as though the whole group is on the attack.
comment by wedrifid · 2011-04-26
> This is making me wonder if karma can cause people to model LW as having a group mind, and if people generally think of social groups which are too large to model each individual as being group minds.
For my part I model karma interactions and group thinking processes here via subgroups (which are not necessarily mutually exclusive). There are also a few who get their own model - which is either a compliment, insult or in some cases both.
comment by Vladimir_Nesov · 2011-04-26
Tolerate tolerance? For example, I downvoted the post, but not your comment.
comment by Emile · 2011-04-26
WrongBot said something similar, but I found it a bit hard to follow, especially since I'm unfamiliar with some of the terminology like "natural facts", and also because keeping track of a lot of newly-introduced terminology describing the various positions is not easy.
comment by PlaidX · 2011-04-25
I still have a hard time seeing how any of this is going to go somewhere useful.
comment by thomblake · 2011-04-26
Here is my understanding:
Ethics is the study of what one has most reason to do or want. On that definition, it is directly relevant to instrumental rationality. And if we want to discover the facts about ethics, we should determine what sort of things those facts would be, so that we might recognize them when we've found them - this, on one view, is the role of metaethics. This post is an intro to current thought on metaethics, which should at least make more clear the scope of the problem to any who would like to pursue it.
comment by Oscar_Cunningham · 2011-04-25
What does "natural fact" mean?
comment by lukeprog · 2011-04-25
It means different things to different people. Moore (1903) wrote:
> By 'nature', then, I do mean and have meant that which is the subject matter of the natural sciences and also of psychology.
Alternatively, Baldwin (1993) suggests:
> For a property to be natural is for it to be causal, that is, to be such that its presence, in suitable conditions, brings about certain effects.
Warnock's (1960) interpretation of Moore was:
> [Moore] was willing to accept a criterion for 'non-natural' which suggested that a non-natural property was one which could not be discerned by the senses.
Miller (2003) concludes:
> I will simply take natural properties to be those which are either causal or detectable by the senses.
comment by Will_Newsome · 2011-04-26
> I will simply take natural properties to be those which are either causal or detectable by the senses.
If you plan on using the word 'naturalistic' to describe your meta-ethics at some point, I hope you give a better definition than these philosophers have given. "Naturalistic" often seems to be a way of saying "there is no magic involved!", but it's not like metaphysical phenomena are necessarily magical. Using logical properties of symmetric decision algorithms to solve timeless coordination problems, for instance, doesn't fit into Miller's definition of natural properties, but it's probably somewhat tied up into some facets of meta-ethics (or morality at the very least, but that line is easily blurred and probably basically shouldn't exist in a correct technical solution).
I'm really just trying to keep a relevant distinction between "naturalistic" and "metaphysical" which are both interesting and valid, instead of having two categories "naturalistic" and "magical" where you get points for pointing out how non-magical and naturalistic a proposed solution is.
This stems from a general fear of causal / timeful / reductionist explanations that could miss important points about teleology / timelessness / pattern attractors / emergence, e.g. the distinction between timeless and causal decision theory or between timeless and causal validity semantics (if there is one), which have great bearing on reflective/temporal consistency and seem very central to meta-ethics.
I don't think you're heading there with your solution to meta-ethics, but as an aside I'm still confused about what it is you're trying to solve if you're not addressing any of these questions that seem very central.
Your past selves' utility functions are just evidence. Meta-ethics should tell you how to treat that evidence, just as it should tell you how future selves should treat your present utility function as evidence. Figuring out what my past selves' or others' utility functions are in some sense is of course a necessary step, but even after you have that data you still need to figure out what they mean. Asking "you know what your values are: what else is there?" is like asking "you know what your beliefs are: what else is there?". The way I see it this is a key part of Creating Friendly AI that seems to have been flat out lost over the years and I'm not sure why. We might be able to save causal validity semantics if we thought about it with the tools we now have available.
The above are considerations, not assertions that should be treated as if I would bet heavily on them.
comment by torekp · 2011-04-26
A better answer than any that Luke cited would start with the network of causal laws paradigmatically considered "natural," such as those of physics and chemistry, then work toward properties, relations, objects and facts. There might (as a matter of logical possibility) have been other clusters of causal laws, such as supernatural or non-natural laws, but these would be widely separated from the natural laws with little interaction (pineal gland only?) or very non-harmonious interaction (gods defying physics).
We had a discussion about this earlier. I will try to dig up a link.
comment by XiXiDu · 2011-04-26
I am increasingly getting the perception that morality/ethics is useless hogwash. I already believed that to be the case before Less Wrong and I am not sure why I ever bothered to take it seriously again. I guess I was impressed that people who are concerned with 'refining the art of rationality' talk about it and concluded that after all there must be something to it. But I have yet to come across a single argument that would warrant the use of any terminology related to moral philosophy.
The article Say Not "Complexity" should have been about morality. Say not "morality"...
Consider the following questions:
- Do moral judgements express beliefs?
- Do judgements express beliefs?
- How do we evaluate evidence in the making of a decision?
All three questions ask the same thing, yet each is less vague than the previous one.
It is as obvious as it can get that there is no single argument against deliberately building a paperclip maximizer if I want to build one and am aware of the consequences. It is not a question about morality but solely a question about wants.
The whole talk about morality seems to be nothing more than a signaling game.
The only reasons we care about other people is either to survive, i.e. get what we want, or because it is part of our preferences to see other people being happy. Accordingly, trying to maximize happiness for everybody can be framed in the language of volition rather than morality.
Once we get rid of the moral garbage, thought experiments like the trolley problem are no more than a question about one's preferences.
comment by [deleted] · 2011-04-26
> But I have yet to come across a single argument that would warrant the use of any terminology related to moral philosophy.
I would argue that the problem is not with morality, but with how it is being approached here.
> The only reasons we care about other people is either to survive
This is a starting point for understanding morality.
> trying to maximize happiness for everybody
is utilitarianism, which seems to be the house approach to morality - the very approach which you find unpersuasive.
> thought experiments like the trolley problem are no more than a question about one's preferences.
Not quite. It's possible to wish a person dead, while being reluctant to kill him yourself, and even while considering anyone who does kill him a murderer who needs to be caught and brought to justice. Morality derives from preferences in a way, but it is indirect. An analogous phenomenon is the market price. The market price of a good derives from the preferences of everyone participating in the market, but the derivation is indirect. The price of a good isn't merely what you would prefer to pay for it, because that's always zero. Nor is it merely what the seller would prefer to be paid for it, because there is no upper limit on what he would charge if he could. Rather, the market price is set by supply and demand, and supply and demand depend in large part on preferences. So price derives from preferences, but the derivation is indirect, and it is mediated by interaction between people. Morality, I think, is similar. It derives from preferences indirectly, by way of interaction. This leaves open the possibility that morality is as variable as prices, but I think that because of the preferences that it rests on, it is much, much less variable, though not invariable. Natural selection holds these preferences largely in check. For example, if some genetic line of people were to develop a preference for being slaughtered, they would quickly die out.
comment by XiXiDu · 2011-04-27
> It's possible to wish a person dead, while being reluctant to kill him yourself, and even while considering anyone who does kill him a murderer who needs to be caught and brought to justice.
This just shows that human wants are inconsistent, that humans hold conflicting ideas simultaneously. Why invoke 'morality' in this context?
> So price derives from preferences, but the derivation is indirect, and it is mediated by interaction between people.
People or road blockades, what's the difference? I just don't see why one would talk about morality here. The preferences of other people are simply more complex road blockades on the way towards your goal. Some of those blockades are artistically appealing, so you try to be careful in removing them... why invoke 'morality' in this context?
comment by [deleted] · 2011-04-27
> This just shows that human wants are inconsistent
But these two desires are not inconsistent, because for someone to die by, say, natural causes, is not the same thing as for him to die by your own hand.
> People or road blockades, what's the difference? I just don't see why one would talk about morality here. The preferences of other people are simply more complex road blockades on the way towards your goal. Some of those blockades are artistically appealing, so you try to be careful in removing them... why invoke 'morality' in this context?
You could say the same thing about socks. E.g., "I just don't see why one would talk about socks here. Socks are simply complex arrangements of molecules. Why invoke "sock" in this context?"
What are you going to do instead of invoking "sock"? Are you going to describe the socks molecule by molecule as a way of avoiding using the word "sock"? That would be cumbersome, to say the least. Nor would it be any more true. Socks are real. They aren't imaginary. That they're made out of molecules does not stop them from being real.
All this can be said about morality. What are you going to do instead of invoking "morality"? Are you going to describe people's reactions as a way of avoiding using the word "morality"? That would be cumbersome, to say the least. Nor would it be any more true. Morality is real. It isn't imaginary. That it's made out of people's reactions doesn't stop it from being real.
Denying the reality of morality simply because it is made out of people's reactions, is like denying the reality of socks simply because they're made out of molecules.
comment by XiXiDu · 2011-04-27
Consider the trolley problem. Naively you kill the fat guy if you care about other people and also if you only care about yourself, because you want others to kill the fat guy as well, since you are more likely to be one of the many people tied to the rails than the fat guy.
Of course there is the question about how killing one fat guy to save more people and similar decisions could erode society. Yet it is solely a question about wants, about the preferences of the agents involved. I don't see how it could be helpful to add terminology derived from moral philosophy here or elsewhere.
comment by Peterdjones · 2011-04-27
It is meaningful wherever it is meaningful to discuss whether there are wants people should and shouldn't have.
comment by XiXiDu · 2011-04-27
> What are you going to do instead of invoking "morality"? Are you going to describe people's reactions as a way of avoiding using the word "morality"? That would be cumbersome, to say the least.
I am going to use moral terminology in the appropriate cultural context. But why would one use it on a site that supposedly tries to dissolve problems using reductionism as a general heuristic? I am also using the term "free will" because people model their decisions according to that vague and ultimately futile concept. But if possible (if I am not too lazy) I avoid using any of those bogus memes.
> Morality is real. It isn't imaginary. That it's made out of people's reactions doesn't stop it from being real.
Of course, it is real. Cthulhu is also real: it is a fictional cosmic entity. But if someone acts according to their fear of Cthulhu I am not going to resolve their fear by talking about it in terms of the Lovecraft Mythos but in terms of mental illness.
> What are you going to do instead of invoking "morality"? Are you going to describe people's reactions as a way of avoiding using the word "morality"? That would be cumbersome, to say the least.
How so? Can you give an example where the use of terminology derived from moral philosophy is useful instead of obfuscating?
comment by XiXiDu · 2011-04-27
Consider the Is–ought problem. The basis for every ought statement is what I believe to be correct with respect to my goals.
If you want to reach a certain goal, and I want to help you, and I believe I know a better solution than you do, then I tell you what you ought to do because 1.) you want to reach a goal, 2.) I want you to reach your goal, and 3.) my brain exhibits a certain epistemic state making me believe I am able to satisfy #1 & #2.
comment by [deleted] · 2011-04-27
> But why would one use it on a site that supposedly tries to dissolve problems using reductionism as a general heuristic?
It is no more a philosophical puzzle that needs dissolving than prices are a philosophical puzzle that needs dissolving.
> I am also using the term "free will" because people model their decisions according to that vague and ultimately futile concept.
I think that the concept of "free will" may indeed be more wholly a philosopher's invention, just as the concept of "qualia" is in my view wholly a philosopher's invention. But the everyday concepts from which it derives are not a philosopher's invention. I think that the everyday concept that philosophers turned into the concept of "free will" is the concept of the uncoerced and intentional act - a concept employed when we decide what to do about people who've annoyed us. We ask: did he mean to do it? Was he forced to do it? We have good reason for asking these questions.
> But if possible (if I am not too lazy) I avoid using any of those bogus memes.
Philosophers invent bogus memes that we should try to free ourselves of. I think that "qualia" are one of those memes. But philosophers didn't invent morality. They simply talked a lot of nonsense about it.
> Of course, it is real. Cthulhu is also real: it is a fictional cosmic entity.
Morality is real in the sense that prices are real and in a sense that Cthulhu is not real.
Some people talk about money in the way that you want to talk about morality, so that's a nice analogy to our discussion and I'll spend a couple of paragraphs on it. They say that the value of money is merely a collective delusion - that I value a dollar only because other people value a dollar, and that they value a dollar only because, ultimately, I value a dollar. So they say that it's all a great big collective delusion. They say that if people woke up one day and realized that a dollar was just a piece of paper, then we would stop using dollars.
But while there is a grain of truth to that (especially about fiat money), there's also much that's misleading in it. Money is a medium of exchange that solves real problems. The value of money may be in a sense circular (i.e., it's valued by people because it's valued by people), but actually a lot of things are circular. A lot of natural adaptations are circular, for example symbiosis. Flowers are the way they are because bees are the way they are, and bees are the way they are because flowers are the way they are. But flowers and bees aren't a collective delusion. They're in a symbiotic relationship that has gradually evolved over a very long period of time. Money is similar - it is a social institution that evolves over a long period of time, and it can reappear when it's suppressed. For example cigarettes can become money if nothing else is available.
And all this is analogous to the situation with morality. In both cases, there's a real phenomenon which some people think is fictional, a collective delusion.
In contrast, religion really is a collective delusion. At least, all those other religions are. :)
> Can you give an example where the use of terminology derived from moral philosophy is useful instead of obfuscating?
The term "morality" is not derived from philosophy. Philosophers have simply talked a lot of nonsense about morality. This doesn't mean they invented it. Similarly, philosophers have talked a lot of nonsense about motion (e.g. Zeno's paradoxes). This doesn't mean that motion is a concept that philosophers invented and that we need to "dissolve". We can still talk sensibly about velocity. What we need to dissolve is not velocity, but simply Zeno's paradoxes about velocity, which by some accounts were dissolved as a side-effect of the creation of Calculus.
> Consider the trolley problem. Naively you kill the fat guy if you care about other people and also if you only care about yourself, because you want others to kill the fat guy as well, since you are more likely to be one of the many people tied to the rails than the fat guy.
That is an example of the philosophical nonsense I was talking about. If you want to dissolve something, dissolve that nonsense. In reality you are no more likely to push a fat guy onto the rails than you are to ask for the fat guy's seat. In reality we know what the rules are and we obey them.
> I don't see how it could be helpful to add terminology derived from moral philosophy here or elsewhere.
Again, the relevant terminology, which in this case includes the word "murder", is not derived from philosophy. Philosophers simply took a pre-existing concept and talked a lot of nonsense about it.
> Consider the Is–ought problem. The basis for every ought statement is what I believe to be correct with respect to my goals.
Actually, I think that the use of the word "ought" in relationship to morality is very confusing, because "ought" means a lot of things, and so if you use that word you are apt to confuse those things with each other. In particular, the word "ought" is used a lot in the context of personal advice. If you're giving a friend advice, you're likely to talk about what they "ought" to do. In this context, you are not making statements about morality!
The person to blame for the confusion caused by using the word "ought" in talking about morality is probably Hume. I think that it was he who started this particular bit of nonsense going.
> If you want to reach a certain goal, and I want to help you, and I believe I know a better solution than you do, then I tell you what you ought to do because 1.) you want to reach a goal, 2.) I want you to reach your goal, and 3.) my brain exhibits a certain epistemic state making me believe I am able to satisfy #1 & #2.
Here you're talking about giving personal advice to somebody. This is a separate subject from morality.
comment by Peterdjones · 2011-04-27
You haven't demonstrated that the basis for every ought statement is what you believe to be correct with respect to your goals. If your goal is to kill as many people as possible, you ought not to pursue it. That is, there are oughts about the nature of an end and not just about how to achieve it. This is a very well known issue in moral philosophy called the categorical/hypothetical distinction.
comment by XiXiDu · 2011-04-27
> You haven't demonstrated that the basis for every ought statement is what you believe to be correct with respect to your goals.
Imagine your friend tells you that he found a new solution to reach one of your goals. If you doubt that his solution is better than your current solution then you won't adopt your friend's solution.
It is true that both your solutions might be incorrect, that there might exist a correct solution that you ought (would want) to embrace if you knew about it. But part of what you want is to do what you believe to be correct. It is at best useless to assume that you might be mistaken, because you can only do the best you can possibly do.
comment by Peterdjones · 2011-04-27
That's all irrelevant. You need to show that there are no categorical rights and wrongs. You are just discussing certain aspects of hypothetical (instrumental) "shoulds", which does not do that.
comment by NMJablonski · 2011-04-27
Why should we think that there are categorical rights and wrongs?
I just don't see any convincing reason to believe they exist.
EDIT: Not to mention, it isn't clear what it would mean - in a real physical sense - for something to be categorically right or wrong.
comment by Peterdjones · 2011-04-27
We do think there are categorical rights and wrongs, because it is common sense that designing better gas chambers is not good, however well you do it. So the burden is on the makers of the extraordinary claim.
I know what it means for a set to be uncountable, and I don't have the faintest idea what that has to do with the really physical. So that is perhaps unimportant. Perhaps you are stuck in a loop where you can't understand what other people understand because you have a strange notion of meaning.
comment by CuSithBell · 2011-04-27
> Why should we think that there are categorical rights and wrongs?
> I just don't see any convincing reason to believe they exist.
> We do think there are categorical rights and wrongs [...]
What you should notice about this exchange is that you've made an incorrect prediction, and that therefore there might be something wrong with your model.
comment by Peterdjones · 2011-04-27
I suppose you mean I incorrectly roped in NMJ. But I don't think s/he is statistically significant, and then there is the issue of sincerity. Does NMJ really think it is good to design an improved gas chamber?
comment by CuSithBell · 2011-04-27
What I mean is that you predicted that "we" think there are categorical rights and wrongs, and you were incorrect (more than just NMJablonski disagree with you). Moreover, the fact that you seem to think "is it good to design an improved gas chamber" is inherently about "categorical rights and wrongs" indicates either dishonest argumentation or a failure to understand the position of your interlocutor.
comment by Peterdjones · 2011-04-27
I didn't predict anything about what my interlocutors think: I made an accurate comment about ordinary people at large.
I think what I said is that it is about categorical rights and wrongs if it is about anything. NMJ seems to think it is about nothing. If you think it is about something else, you need to say what: I cannot guess.
comment by CuSithBell · 2011-04-27
You cannot guess? Do you not see the irony in making this request?
Here is the situation: people often use a single word (such as 'good') to mean many different things. Thus, if you wish to use the word to mean something in particular - especially in an argument about that word! - you might have to define your own meaning.
Besides - the behemoth Opal ("ordinary people at large") is a poor judge of many things.
comment by Peterdjones · 2011-04-27
Making the categorical/hypothetical distinction is a way of refining the meaning. I'm already there (although I am getting accused of pedantry for my efforts).
comment by NMJablonski · 2011-04-27
Would you be willing to move this to the IRC?
comment by XiXiDu · 2011-04-27
> You need to show that there are no categorical rights and wrongs.
I don't need to do that if I don't want to do that. If you want me to act according to categorical rights and wrongs then you need to show me that they exist.
comment by Peterdjones · 2011-04-27
You need to do certain things in order to hold a rational discussion, just as you need to do certain things to play chess. I don't have to concede that you can win a chess game without putting my king in check, and I don't have to concede that you can support a conclusion without arguing the points that need arguing. Of course, you don't have to play chess or be rational in any absolute sense. It's just that you can't have your cake and eat it.
Categorical good and evil is a different concept to the hypothetical/instrumental version: the categorical trumps the instrumental. That appears to stymie one particular attempt at reduction. There are many other arguments.
comment by Morendil · 2011-04-26
> The only reasons we care about other people is either to survive, i.e. get what we want, or because it is part of our preferences to see other people being happy
Otherwise known as The True Knowledge.
comment by hairyfigment · 2011-04-29
> But I have yet to come across a single argument that would warrant the use of any terminology related to moral philosophy.
You just did use it.
Now, in this case we could probably rephrase your statement without too much trouble. But it does not seem at all obvious that doing this for all of our beliefs has positive expected value if we just want to maximize epistemic or instrumental rationality.
comment by endoself · 2011-04-26
I agree with most of this. The only reason for using the word morality is when talking to someone who does not realize that "Whatever you want." is the only answer that really can be given to the question of "What should I do next?". (Does that sentence make sense?)
The main thing I have to add to this is what Eliezer describes here. The causal 'reason' that I want people to be happy is the desires in my brain, but the motivational 'reason' is that happiness matches {happiness + survival + justice + individuality + ...}, which sounds stupid, but that is how I make decisions; I look for what best matches against that pattern. These two reasons are important to distinguish - "If neutrinos make me believe '2 + 3 = 6', then 2 + 3 = 5". Here, people use the word 'morality' to describe an idealized version of their decision processes rather than to describe the desires embodied in their brain in order to emphasize that point, and also because of the large number of people that find this pseudo-equivalence nonobvious.
comment by XiXiDu · 2011-04-27
> Here, people use the word 'morality' to describe an idealized version of their decision processes...
If you are confused about facts in the world then you are talking about epistemic rationality; why would one invoke 'morality' in this context?
comment by endoself · 2011-04-28
I'm not sure I understand this. Are you objecting to my use of the word 'idealized', on the grounds that preferences and facts are different things and uncertainty is about facts? I would disagree with that. Someone might have two conflicting but very strong preferences. For example, someone might be opposed to homosexuality based on a feeling of disgust but also have a strong feeling that people should have some sort of right to self-determination. Upon sufficient thought, they may decide that the latter outweighs the former and may stop feeling disgust at homosexuals as a result of that introspection. I believe that this situation is one that occurs regularly among humans.
comment by Peterdjones · 2011-04-27
"Whatever you want." is the only answer that really can be given to the question of "What should I do next?".
But that is not the answer if someone wants to murder someone. What you have here is actually a reductio ad absurdum of the simplistic theory that morals = desires.
comment by NMJablonski · 2011-04-27
It only isn't the answer if you have a problem with that particular person being murdered, or perhaps an objection to killing as a principle. I also would object to wanton, chaotic, and criminal killings, but that is because I have a complex network of preferences that inform that objection, not because murder has some intrinsic property of absolute "wrongness".
It is all preferences, and to think otherwise is the most frequent and absurd delusion still prevalent in rationalist communities. Even when a moralistic rationalist admits that moral truths and absolutes do not exist, they continue operating as if they do. They will say:
"Well, there may not be absolute morality, but we can still tell which actions are best for (survival of human race / equality among humans / etc)."
The survival of the human race is a preference! One which not all possible agents share, as we are all keenly aware of in our discussions of the threat posed by superintelligent AI's that don't share our values. There is no obligation for any mind to adopt any values. You can complain about that reality. You can insist that your preferences are the one, true, good and noble preferences, but no rational agent is obligated, in any empirical sense, to agree with you.
comment by Peterdjones · 2011-04-27
If people have some of the preferences they have because they should have them, the issue of ethics has simply been pushed back a stage. You cannot knock down the whole concept of ethics just by objecting to one simplistic idea, e.g. "intrinsic wrongness", particularly when more complex ideas have been spelt out.
The most frequent and absurd delusion in rationalist circles is that you can arrive at simple solutions to complicated problems by throwing a little science at them.
Rational agents are obliged to believe what can be demonstrated through reasons. Rationality is a norm. Morality is a norm too, if it is anything. You assume tacitly that no reasoned demonstration of ethics can be made, but that is just an assumption. You have not done anything like enough to oblige a reasonable person to believe in the elimination of morality.
comment by NMJablonski · 2011-04-27
Well, when you have something substantive and meaningful to point to let me know. I suggest tabooing words like "ethics", "morality", "should", etc. If you can give me a clear reductionist description of what you're talking about in metaethics without using those words, I'd love to hear it.
comment by Peterdjones · 2011-04-27
There is no reason I should avoid the words "ethics", "morality", etc, in a discussion of ethics, morality, etc. It is in fact, an unreasonable request on your part.
I am also unpersuaded that I need to be a "reductionist" on the topic. The material on reductionism on this site seems to me a charter for coming up with pseudo-solutions that just sweep the problems under the rug.
My substantive point remains that you have not made a case for eliminating ethics in favour of preferences.
comment by NMJablonski · 2011-04-27
Your substantive point is nonsensical. My physical, real world understanding of intelligent agents includes preferences. It does not include anything presently labeled "morality" and I have no idea what I would apply that label to.
I don't think you have anything concrete down there that you're talking about (I'd be excited to be wrong about this). So you can do your little philosophers dance in a world of poorly anchored words but I'm not going to take you seriously until you start talking about reality.
comment by Peterdjones · 2011-04-27
If you can't figure out what to apply "morality" to, that is your problem. Most people do not share it.
comment by NMJablonski · 2011-04-27
Alright.
I'm going to give this one last shot. Can you explain, succinctly, what you're talking about when you say "morality"?
comment by Peterdjones · 2011-04-27
> concern with the distinction between good and evil or right and wrong; right or good conduct
comment by NMJablonski · 2011-04-27
What is it about conduct that makes it right and good as opposed to wrong and evil?
What is it that determines these attributes, if not human preference?
comment by Peterdjones · 2011-04-27
There is a selection of possible answers to that question in the original posting. Since you take the question to be poseable, I take it that you now concede that words like "moral", "right" and "good" have a meaning.
comment by CuSithBell · 2011-04-27
"Oi, I just saw a smeert!"
"What's a smeert?"
"So you believe in smeerts then?"
comment by NMJablonski · 2011-04-27
I'm sorry. It's clear that you're motivated to "win" an argument, not get at reality.
For the record, words do not have intrinsic meanings. If you are willing to use simpler words that we are likely to agree on to explain what you mean by "moral", "right" and "good" then I will be happy to read it. Otherwise, I just cannot take you seriously enough to continue this.
EDIT: If you really would like to discuss this I suggest we move to the LessWrong IRC channel instead of making a long person to person thread here.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T19:54:48.395Z · LW(p) · GW(p)
You think that claiming to have no understanding at all of ordinary words is getting at reality?
I don't have to break down the meanings of some arbitrarily selected terms. It is not possible in all cases. It may be specifically impossible in the case of 'good', as George Moore famously argued. I am using the same vocabulary as 99% of philosophers: your request that I use other vocabulary is unreasonable. I also don't have to explain myself standing on my head and drinking a glass of water just because you request it.
Replies from: nhamann, NMJablonski↑ comment by nhamann · 2011-04-27T20:33:14.951Z · LW(p) · GW(p)
You think that claiming to have no understanding at all of ordinary words is getting at reality?
It's almost never sufficient, but it is often necessary to discard wrong words.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T20:47:32.456Z · LW(p) · GW(p)
...and it's necessary to have a reasoned motivation for that. If you could really disprove things just by an unmotivated refusal to use language, you could disprove everything. Meta-principle: treat one-size-fits-all arguments with suspicion.
Replies from: Cyan↑ comment by Cyan · 2011-04-27T20:51:24.696Z · LW(p) · GW(p)
Meta-principle: treat one-size-fits-all arguments with suspicion.
Around here we call those "fully general counter-arguments".
ETA: you've misunderstood the grandparent, the point of which is not about a refusal to use language but rather about using it more precisely so as to avoid miscommunication and errors.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T21:09:18.036Z · LW(p) · GW(p)
I have not noticed NMJablonski offering a more precise replacement vocabulary.
Replies from: None, NMJablonski↑ comment by [deleted] · 2011-04-27T21:11:31.826Z · LW(p) · GW(p)
Probably because he doesn't know what to replace it with. You introduced the words into the conversation. We're trying to figure out what you mean by them.
Replies from: NMJablonski, Peterdjones↑ comment by NMJablonski · 2011-04-27T21:16:14.721Z · LW(p) · GW(p)
This summarizes the situation nicely I think. Thanks.
↑ comment by Peterdjones · 2011-04-27T21:36:10.970Z · LW(p) · GW(p)
I did not introduce the words "moral", "good" etc. They are not some weird never-before encountered vocabulary.
Replies from: None↑ comment by [deleted] · 2011-04-27T21:50:41.577Z · LW(p) · GW(p)
You're promoting the illusion of transparency. Just explain what you mean, already.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T22:03:22.326Z · LW(p) · GW(p)
I can only do that if you understand the language I intend to do the explaining in. It's called English. Do you understand this language?
Replies from: Alicorn, NMJablonski, None, Cyan↑ comment by Alicorn · 2011-04-27T22:09:08.837Z · LW(p) · GW(p)
I have access to a number of dictionaries which, while written entirely in English, contain many definitions. Please, emulate them.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T22:12:56.362Z · LW(p) · GW(p)
morality: concern with the distinction between good and evil or right and wrong; right or good conduct
good: morally admirable
Ethics (also known as moral philosophy) is a branch of philosophy which seeks to address questions about morality; that is, about concepts such as good and bad, right and wrong, justice, and virtue.
Replies from: Amanojack, NMJablonski↑ comment by Amanojack · 2011-04-27T22:43:27.321Z · LW(p) · GW(p)
Let me try to guess the next few moves in hopes of speeding this up:
A: Admirable according to whom? (And why'd you use "morally" in the definition of "morality"?)
B: Most people. / Everyone. / Everyone who matters.
A: So basically, if a lot of people or everyone admires something, it is morally good? It's a popularity contest?
B: No, it's just objectively admirable.
A: I don't understand what it would mean to be "objectively admirable"?
B: These are two common words. How can you not understand them?
A: Each might make sense separately, but together no. Perhaps you mean "universally admirable"?
B: Yeah, that sounds good.
A: So basically, if everyone admires something, you will want to call it "morally good." They will probably appreciate and agree to those approving words, seeing as they all admire it as well.
Or...?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T22:45:54.427Z · LW(p) · GW(p)
C: Now that you have enough of a handle on "morality" to see the difference between a theory of morality and a theory of flight, you can read the literature.
Replies from: Amanojack↑ comment by Amanojack · 2011-04-27T23:08:11.901Z · LW(p) · GW(p)
??? I'm just trying to understand what your definition of morality is.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T23:11:08.409Z · LW(p) · GW(p)
Don't you already know what it means? I thought we established that you speak English.
Replies from: Amanojack↑ comment by Amanojack · 2011-04-27T23:22:19.757Z · LW(p) · GW(p)
You're aware that words have more than one definition, and in debates it is customary to define key terms before beginning? Perhaps I could interest you in this.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T23:25:37.749Z · LW(p) · GW(p)
The debate, which seems to be over, was largely about whether the word has any meaning at all.
↑ comment by NMJablonski · 2011-04-27T22:15:22.763Z · LW(p) · GW(p)
So...
"Something is moral if it is good."
and
"Something is good if it is moral." ?
Replies from: Alicorn, Peterdjones↑ comment by Alicorn · 2011-04-27T22:16:51.910Z · LW(p) · GW(p)
I think "admirable" might break the circle and ground the definitions, albeit tenuously.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-27T22:18:44.288Z · LW(p) · GW(p)
It could, that's true. Only, I think, if we clear up who's doing the admiring. There would be disagreement among a lot of people as to what's admirable.
↑ comment by Peterdjones · 2011-04-27T22:27:23.866Z · LW(p) · GW(p)
Circularity is typical of ordinary dictionary definitions. OTOH, it doesn't stop people learning meanings.
↑ comment by NMJablonski · 2011-04-27T22:11:54.240Z · LW(p) · GW(p)
We all speak English here to some degree.
The issue is that some words are floating, disconnected from anything in reality, and meaningless. Consider the question: do humans have souls?
What would it mean, in terms of actual experience, for humans to have souls? What is a soul? Can you understand how if someone refused to explain what a soul is, claiming it to be a basic thing which no other words can describe, it would be pretty confusing?
What would it mean, in terms of actual experience, for something to be "morally right"? What characteristics make it that way, and how do you know?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T22:20:57.469Z · LW(p) · GW(p)
To disbelieve in souls, you have to know what "soul" means. You seem to have mistaken an issue of truth for one of meaning.
Can you understand how if someone refused to explain what a soul is, claiming it to be a basic thing which no other words can describe, it would be pretty confusing?
I think you are going to have to put up with that unfortunate confusion, since you can't reduce everything to nothing.
What would it mean, in terms of actual experience, for something to be "morally right"? What characteristics make it that way, and how do you know?
Something is morally right if it fulfils the Correct Theory of Morality. I'm not claiming to have that. However, I can recognise theories of morality, and I can do that with my ordinary-language notion of morality. (The theoretic is always based on the pre-theoretic. We do not reach the theoretic in one bound.) I'm not creating stumbling blocks for myself by placing arbitrary requirements on definitions, like insisting that they are both concrete and reductive.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-27T22:30:23.000Z · LW(p) · GW(p)
Why do you believe there exists a Correct Theory of Morality?
Replies from: Eugine_Nier, Peterdjones↑ comment by Eugine_Nier · 2011-04-27T22:48:07.245Z · LW(p) · GW(p)
Why do you believe there exists a Correct Theory of Physics?
As Constant points out here, all the arguments based on reductionism that you're using could just as easily be used to argue that there is no correct theory of physics.
One difference between physics and morality is that there is currently a lot more consensus about what the correct theory of physics looks like than about what the correct theory of morality looks like. However, that is a statement about the current time: if you were to go back a couple of centuries, you'd find that there was as little consensus about the correct theory of physics as there is today about the correct theory of morality.
Replies from: Amanojack, NMJablonski↑ comment by Amanojack · 2011-04-27T23:40:17.234Z · LW(p) · GW(p)
It's not an argument by reductionism... it's simply trying to figure out how to interpret the words people are using - because it's really not obvious. It only looks like reductionism because someone asks, "What is morality?" and the answer comes: "Right and wrong," then "What should be done," then "What is admirable"... It is all moralistic language: if any of it means anything, it all means the same thing.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T00:15:31.556Z · LW(p) · GW(p)
Well, the original argument, way back in the thread, was NMJablonski arguing against the existence of a "Correct Theory of Morality" by demanding that Peter provide "a clear reductionist description of what [he's] talking about" while "tabooing words like 'ethics', 'morality', 'should', etc."
My point is that NMJablonski's request is about as reasonable as demanding that someone arguing for the existence of a "Correct Theory of Physics" provide a clear reductionist description of what one means while tabooing words like 'physics', 'reality', 'exists', 'experience', etc.
Replies from: Amanojack, TimFreeman↑ comment by Amanojack · 2011-04-28T00:41:55.920Z · LW(p) · GW(p)
Fair enough, though I suspect that by asking for a "reductionist" description NMJablonski may have just been hoping for some kind of unambiguous wording.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T00:57:53.796Z · LW(p) · GW(p)
My point, and possibly Peter's, is that given our current state of knowledge about meta-ethics I can give no better definition of the words "should"/"right"/"wrong" than the meaning they have in everyday use.
Note, following my analogy with physics, that historically we developed a systematic way for judging the validity of statements about physics, i.e., the scientific method, several centuries before developing a semi-coherent meta-theory of physics, i.e., empiricism and Bayesianism. With morality we're not even at the "scientific method" stage.
Replies from: None, TimFreeman↑ comment by [deleted] · 2011-04-28T01:48:24.305Z · LW(p) · GW(p)
My point, and possibly Peter's, is that given our current state of knowledge about meta-ethics I can give no better definition of the words "should"/"right"/"wrong" than the meaning they have in everyday use.
This is consistent with Jablonski's point that "it's all preferences."
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T01:57:04.602Z · LW(p) · GW(p)
This is consistent with Jablonski's point that "it's all preferences."
In keeping with my physics analogy, saying "it's all preferences" about morality is analogous to saying "it's all opinion" about physics.
Replies from: NMJablonski, Vladimir_Nesov, JGWeissman, Amanojack↑ comment by NMJablonski · 2011-04-28T02:19:45.945Z · LW(p) · GW(p)
Clearly there's a group of people who dislike what I've said in this thread, as I've been downvoted quite a bit.
I'm not perfectly clear on why. My only position at any point has been this:
I see a universe which contains intelligent agents trying to fulfill their preferences. Then I see conversations about morality and ethics talking about actions being "right" or "wrong". From the context and explanations, "right" seems to mean very different things. Like:
"Those actions which I prefer" or "Those actions which most agents in a particular place prefer" or "Those actions which fulfill arbitrary metric X"
Likewise, "wrong" inherits its meaning from whatever definition is given for "right". It makes sense to me to talk about preferences. They're important. If that's what people are talking about when they discuss morality, then that makes perfect sense. What I do not understand is when people use the words "right" or "wrong" independently of any agent's preferences. I don't see what they are referring to, or what those words even mean in that context.
Does anyone care to explain what I'm missing, or if there's something specific I did to elicit downvotes?
Replies from: wedrifid, Perplexed, Marius, Marius, Peterdjones, Eugine_Nier, Eugine_Nier↑ comment by wedrifid · 2011-04-28T03:29:55.324Z · LW(p) · GW(p)
Does anyone care to explain what I'm missing, or if there's something specific I did to elicit downvotes?
You signaled disagreement with someone about morality. What did you expect? :)
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T03:42:40.838Z · LW(p) · GW(p)
Your explanation is simple and fits the facts!
I like it :)
↑ comment by Perplexed · 2011-05-01T18:11:16.088Z · LW(p) · GW(p)
What I do not understand is when people use the words "right" or "wrong" independently of any agent's preferences. I don't see what they are referring to, or what those words even mean in that context.
Does anyone care to explain what I'm missing, or if there's something specific I did to elicit downvotes?
I don't know anything about downvotes, but I do think that there is a way of understanding 'right' and 'wrong' independently of preferences. But it takes a conceptual shift.
Don't think of morality as a doctrine guiding you as to how to behave. Instead, imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
Replies from: XiXiDu, NMJablonski, AlephNeil↑ comment by XiXiDu · 2011-05-01T18:39:55.836Z · LW(p) · GW(p)
I do think that there is a way of understanding 'right' and 'wrong' independently of preferences...Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
Sociology? Psychology? Game theory? Mathematics? What does moral philosophy add to the sciences that is useful, that helps us to dissolve confusion and understand the nature of reality?
Replies from: Perplexed↑ comment by Perplexed · 2011-05-01T20:49:13.109Z · LW(p) · GW(p)
What does moral philosophy add to the sciences that is useful, that helps us to dissolve confusion and understand the nature of reality?
Moral philosophy, like all philosophy, does nothing directly to illuminate the nature of reality. What it does is to illuminate the nature of confusion.
How does someone who thinks that 'morality' is meaningless discuss the subject with someone who attaches meaning to the word? Answer: They talk to each other carefully and respectfully.
What do you call the subject matter of that discussion? Answer: Metaethics.
What do you call success in this endeavor? Answer: "Dissolving the confusion".
Replies from: XiXiDu, Amanojack, XiXiDu↑ comment by XiXiDu · 2011-05-02T08:41:12.053Z · LW(p) · GW(p)
Moral philosophy, like all philosophy, does nothing directly to illuminate the nature of reality. What it does is to illuminate the nature of confusion.
Moral philosophy does not illuminate the nature of confusion; it is the confusion. I am asking: what is missing, and what confusion is left, if you disregard moral philosophy and talk about right and wrong in terms of preferences?
Replies from: Perplexed↑ comment by Perplexed · 2011-05-02T15:26:22.621Z · LW(p) · GW(p)
I'm tempted to reply that what is missing is the ability to communicate with anyone who believes in virtue ethics or deontological ethics, and therefore doesn't see how preferences are even involved. But maybe I am not understanding your point.
Perhaps an example would help. Suppose I say, "It is morally wrong for Alice to lie to Bob." How would you analyze that moral intuition in terms of preferences? Whose preferences are we talking about here? Alice's, Bob's, mine, everybody else's? For comparison purposes, also analyze the claim "It is morally wrong for Bob to strangle Alice."
Replies from: XiXiDu, Amanojack↑ comment by XiXiDu · 2011-05-02T17:13:26.352Z · LW(p) · GW(p)
"It is morally wrong for Alice to lie to Bob."
Due to your genetically hard-coded intuitions about appropriate behavior within groups of primates, your upbringing, your cultural influences, your rational knowledge about the virtues of truth-telling, and your preferences involving the well-being of other people, you feel obliged to influence the intercourse between Alice and Bob in a way that persuades Alice to do what you want, without her feeling inappropriately influenced by you, by signaling your objection to certain behaviors as an appeal to a higher authority.
"It is morally wrong for Bob to strangle Alice."
If you say, "I don't want you to strangle Alice.", Bob might reply, "I don't care what you want!".
If you say, "Strangling Alice might have detrimental effects on your other preferences.", Bob might reply, "I assign infinite utility to the death of Alice!" (which might very well be the case for humans in a temporary rage).
But if you say, "It is morally wrong to strangle Alice," Bob might get confused and reply, "You are right, I don't want to be immoral!" Which is really a form of coercive persuasion: when you say, "It is morally wrong to strangle Alice," you actually signal, "If you strangle Alice you will feel guilty." It is a manipulative method that might make Bob say, "You are right, I don't want to be immoral!", when what he actually means is, "I don't want to feel guilty!"
Primates don't like to be readily controlled by other primates. To get them to do what you want you have to make them believe that, for some non-obvious reason, they actually want to do it themselves.
Replies from: Perplexed↑ comment by Perplexed · 2011-05-02T19:19:57.192Z · LW(p) · GW(p)
This sounds like you are trying to explain-away the phenomenon, rather than explain it. At the very least, I would think, such a theory of morality needs to make some predictions or explain some distinctions. For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
Replies from: XiXiDu, Amanojack↑ comment by XiXiDu · 2011-05-03T09:07:24.577Z · LW(p) · GW(p)
For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
Complex influences, like your culture and upbringing. That's also why some people don't say that it is morally wrong to burn a paperback book while others are outraged by the thought. And those differences and similarities can be studied, among other fields, in terms of cultural anthropology and evolutionary psychology.
It needs a multidisciplinary approach to tackle such questions. But moral philosophy shouldn't be part of the solution because it is largely mistaken about cause and effect. Morality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense moral philosophy is a meme that is part of a larger effect and therefore can't be part of a reductionist explanation of itself. The underlying causes of cultural norms and our use of language can be explained by social and behavioural sciences, applied mathematics like game theory, computer science and linguistics.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-03T22:01:00.498Z · LW(p) · GW(p)
But rationality shouldn't be part of the solution because it is largely mistaken about cause and effect. Rationality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense rationality is a meme that is part of a larger effect and therefore can't be part of a reductionist explanation of itself.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-05-03T22:09:53.388Z · LW(p) · GW(p)
However, these claims are false, so you have to make a different argument.
I've seen this sort of substitution-argument a few times recently, so I'll take this opportunity to point out that arguments have contexts, and if it seems that an argument does not contain all the information necessary to support its conclusions (because directly substituting in other words produces falsehood), this is because words have meanings, steps are elided, and there are things true and false in the world. This does not invalidate those arguments! These elisions are in fact necessary to prevent each argument from being a re-derivation of human society from mathematical axioms. Arguers should try to be sensitive to the way in which the context of an argument may or may not change how that argument applies to other subjects (A simple example: "You should not enter that tunnel because your truck is taller than the ceiling's clearance" is a good argument only if the truck in question is actually taller than the ceiling's clearance.). This especially applies when arguments are not meant to be formal, or in fact when they are not intended to be arguments.
Replies from: TimFreeman, None↑ comment by TimFreeman · 2011-05-03T22:37:58.365Z · LW(p) · GW(p)
These substitution arguments are quite a shortcut. The perpetrator doesn't actually have to construct something that supports a specific point; instead, they can take an argument they disagree with, swap some words around, leave out any words that are inconvenient, post it, and if the result doesn't make sense, the perpetrator wins!
Making a valid argument about why the substitution argument doesn't make sense requires more effort than creating the substitution argument, so if we regard discussions here as a war of attrition, the perpetrator wins even if you create a well-reasoned reply to him.
Substitution arguments are garbage. I wish I knew a clean way to get rid of them. Thanks for identifying them as a thing to be confronted.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-05-04T01:17:25.185Z · LW(p) · GW(p)
Cool, glad I'm not just imagining things! I think that sometimes this sort of argument can be valuable ("That person also has a subjective experience of divine inspiration, but came to a different conclusion", frex), but I've become more suspicious of them recently - especially when I'm tempted to use one myself.
↑ comment by [deleted] · 2011-05-04T02:25:37.952Z · LW(p) · GW(p)
if it seems that an argument does not contain all the information necessary to support its conclusions (because directly substituting in other words produces falsehood), this is because words have meanings, steps are elided, and there are things true and false in the world. This does not invalidate those arguments!
Thing is, this is a general response to virtually any criticism whatsoever. And it's often true! But it's not always a terribly useful response. Sometimes it's better to make explicit that bit of context, or that elided step.
Moreover it's also a good thing to remember about the other guy's argument next time you think his conclusions obviously do not follow from his (explicitly stated) premises - that is, next time you see what looks to you to be an invalid argument, it may not be even if strictly on a formal level it is, precisely because you are not necessarily seeing everything the other guy is seeing.
So, it's not just about substitutions. It's a general point.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-05-04T03:48:22.339Z · LW(p) · GW(p)
Thing is, this is a general response to virtually any criticism whatsoever. And it's often true! But it's not always a terribly useful response. Sometimes it's better to make explicit that bit of context, or that elided step.
True! This observation does not absolve us of our eternal vigilance.
Moreover it's also a good thing to remember about the other guy's argument next time you think his conclusions obviously do not follow from his (explicitly stated) premises - that is, next time you see what looks to you to be an invalid argument, it may not be even if strictly on a formal level it is, precisely because you are not necessarily seeing everything the other guy is seeing.
So, it's not just about substitutions. It's a general point.
Emphatically agreed.
↑ comment by Amanojack · 2011-05-03T00:42:36.804Z · LW(p) · GW(p)
For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
Guilt works here, for example. (But XiXiDu covered that.) Social pressure also. Veiled threat and warning, too. Signaling your virtue to others as well. Moral arguments are so handy that they accomplish all of these in one blow.
ETA: I'm not suggesting that you in particular are trying to guilt trip people, pressure them, threaten them, or signal. I'm saying that those are all possible explanations as to why someone might prefer to couch their arguments in moral terms: it is more persuasive (as Dark Arts) in certain cases. Though I reject moralist language if we are trying to have a clear discussion and get at the truth, I am not against using Dark Arts to convince Bob not to strangle Alice.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2011-05-03T01:11:46.879Z · LW(p) · GW(p)
Perplexed wrote earlier:
Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
Sometimes you'll want to explain why your punishment of others is justified. If you don't want to engage Perplexed's "moral realism", then either you don't think there's anything universal enough (for humans, or in general) in it to be of explanatory use in the judgments people actually make, or you don't think it's a productive system for manufacturing (disingenuous yet generally persuasive) explanations that will sometimes excuse you.
Replies from: Amanojack↑ comment by Amanojack · 2011-05-03T01:19:39.148Z · LW(p) · GW(p)
Assuming I haven't totally lost track of context here, I think I am saying that moral language works for persuasion (partially as Dark Arts), but is not really suitable for intellectual discourse.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2011-05-03T01:38:59.790Z · LW(p) · GW(p)
Okay. Whatever he hopes is real (but you think is only confused) will allow you to form persuasive arguments to similar people. So it's still worth talking about.
↑ comment by Amanojack · 2011-05-03T00:37:21.531Z · LW(p) · GW(p)
Virtue ethicists and deontologists merely express a preference for certain codes of conduct because they believe adhering to these codes will maximize their utility, usually via the mechanism of lowering their time preference.
ETA: And also, as XiXiDu points out, to signal virtue.
↑ comment by Amanojack · 2011-05-03T00:34:24.004Z · LW(p) · GW(p)
Upvoted because I strongly agree with the spirit of this post, but I don't think moral philosophy succeeds in dissolving the confusion. So far it has failed miserably, and I suspect that it is entirely unnecessary. That is, I think this is one field that can be dissolved away.
↑ comment by NMJablonski · 2011-05-01T18:16:47.156Z · LW(p) · GW(p)
imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Which metrics do I use to judge others?
There has been some confusion over the word "preference" in the thread, so perhaps I should use "subjective value". Would you agree that the only tools I have for judging others are subjective values? (This includes me placing value on other people reaching a state of subjective high value)
Or do you think there's a set of metrics for judging people which has some spooky, metaphysical property that makes it "better"?
Replies from: XiXiDu, Perplexed↑ comment by XiXiDu · 2011-05-01T18:50:05.147Z · LW(p) · GW(p)
Or do you think there's a set of metrics for judging people which has some spooky, metaphysical property that makes it "better"?
And why would that even matter as long as I am able to realize what I want without being instantly struck by thunder if I desire or do something that violates the laws of morality? If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter, to whom would it matter and why would I care if I am happy and my preferences are satisfied? Is it some sort of game that I am losing, where those who are the most right win? What if I don't want to play that game, what if I don't care who wins?
Replies from: Perplexed, NMJablonski↑ comment by Perplexed · 2011-05-02T00:50:44.930Z · LW(p) · GW(p)
If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter,
Because it harms other people directly or indirectly. Most immoral actions have that property.
to whom would it matter
To the person you harm. To the victim's friends and relatives. To everyone in the society which is kept smoothly running by the moral code which you flout.
and why would I care if I am happy and my preferences are satisfied?
Because you will probably be punished, and that tends to not satisfy your preferences.
Is it some sort of game that I am losing, where those who are the most right win?
If the moral code is correctly designed, yes.
What if I don't want to play that game, what if I don't care who wins?
Then you are, by definition, irrational, and a sane society will eventually lock you up as being a danger to yourself and everyone else.
Replies from: XiXiDu↑ comment by XiXiDu · 2011-05-02T08:38:58.105Z · LW(p) · GW(p)
Because it harms other people directly or indirectly. Most immoral actions have that property.
Begging the question.
To the person you harm. To the victim's friends and relatives.
Either that is part of my preferences or it isn't.
To everyone in the society which is kept smoothly running by the moral code which you flout.
Either society is instrumental to my goals or it isn't.
Because you will probably be punished, and that tends to not satisfy your preferences.
Game theory? Instrumental rationality? Cultural anthropology?
If the moral code is correctly designed, yes.
If I am able to realize my goals, satisfy my preferences, don't want to play some sort of morality game with agreed upon goals and am not struck by thunder once I violate those rules, why would I care?
Then you are, by definition, irrational...
What is your definition of irrationality? I asked: if I am happy, able to reach all of my goals, and satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
Replies from: Jonathan_Graehl, Jonathan_Graehl, Peterdjones↑ comment by Jonathan_Graehl · 2011-05-03T01:33:01.295Z · LW(p) · GW(p)
Also, what did you mean by
Game theory?
Cultural anthropology?
... in response to "Because you will probably be punished, and that tends to not satisfy your preferences." ?
I think you mean that you should correctly predict the odds and disutility (over your life) of potential punishments, and then act rationally selfishly. I think this may be too computationally expensive in practice, and you may not have considered the severity of the (unlikely) event that you end up severely punished by a reputation of being an effectively amoral person.
Yes, we see lots of examples of successful and happy unscrupulous people in the news. But consider selection effects (that contradiction of conventional moral wisdom excites people and sells advertisements).
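For concreteness, the computation being gestured at is just an expected-value estimate. A minimal sketch, with every number invented:

```python
# Minimal expected-value sketch of "act rationally selfishly" about
# cheating. Every number here is invented; the point is the shape of
# the computation and how sensitive it is to hard-to-estimate inputs.

gain_from_cheating = 10.0    # immediate benefit
p_caught           = 0.05    # probability of direct detection
direct_penalty     = 50.0    # fine, retaliation, etc.
p_reputation_hit   = 0.20    # chance word gets around anyway
reputation_cost    = 200.0   # lifetime cost of an "amoral" reputation

expected_value = (gain_from_cheating
                  - p_caught * direct_penalty
                  - p_reputation_hit * reputation_cost)

print(expected_value)  # 10 - 2.5 - 40 = -32.5: cheating loses here
```

The sign of the answer turns on p_caught and reputation_cost, which are exactly the inputs people estimate worst (and most self-servingly) in the moment.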
Replies from: XiXiDu↑ comment by XiXiDu · 2011-05-03T07:54:16.845Z · LW(p) · GW(p)
I meant that we already do have a field of applied mathematics and science that talks about those things, why do we need moral philosophy?
I am not saying that it is a clear cut issue that we, as computationally bounded agents, should abandon moral language, or that we even would want to do that. I am not advocating to reduce the complexity of natural language. But this community seems to be committed to reductionism, minimizing vagueness and the description of human nature in terms of causal chains. I don't think that moral philosophy fits this community.
This community doesn't talk about theology either, it talks about probability and Occam's razor. Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
Replies from: timtyler, Peterdjones↑ comment by timtyler · 2011-05-03T09:49:21.176Z · LW(p) · GW(p)
This community doesn't talk about theology either [...] Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
It is a useful umbrella term - rather like "advertising".
↑ comment by Peterdjones · 2011-05-03T12:11:15.280Z · LW(p) · GW(p)
Can all of it be described in those terms? Isn't that a philosophical claim?
↑ comment by Jonathan_Graehl · 2011-05-03T01:28:18.091Z · LW(p) · GW(p)
There's nothing to dispute. You have a defensible position.
However, I think most humans have, as part of what satisfies them (they may not know it until they try it), the desire to feel righteous, which can most fully be realized with a hard-to-shake belief. For a rational person, moral realism may offer this without requiring tremendous self-delusion. (Disclaimer: I haven't tried this.)
Is it worth the cost? Probably you can experiment. It's true that if you formerly felt guilty and afraid of punishment, then deleting the desire to be virtuous (as much as possible) will feel liberating. In most cases, our instinctual fears are overblown in the context of a relatively anonymous urban society.
Still, reputation matters, and you can maintain it more surely by actually being what you present yourself as, rather than carefully (and eventually sloppily and over-optimistically) weighing each case in terms of odds of discovery and punishment. You could work on not feeling bad about your departures from moral perfection more directly, and then enjoy the real positive feeling-of-virtue (if I'm right about our nature), as well as the practical security. The only cost then would be lost opportunities to cheat.
It's hard to know who to trust as having honest thoughts and communication on the issue, rather than presenting an advantageous image, when so much is at stake. Most people seem to prefer tasteful hypocrisy and tasteful hypocrites. Only those trying to impress you with their honesty, or those with whom you've established deep loyalties, will advertise their amorality.
↑ comment by Peterdjones · 2011-05-03T23:50:48.387Z · LW(p) · GW(p)
What is your definition of irrationality? I asked: if I am happy, able to reach all of my goals, and satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
It's irrational to think that the evaluative buck stops with your own preferences.
Replies from: nshepperd↑ comment by nshepperd · 2011-05-04T00:07:13.943Z · LW(p) · GW(p)
Maybe he doesn't care about the "evaluative buck", which while rather unfortunate, is certainly possible.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T00:36:26.009Z · LW(p) · GW(p)
If he doesn't care about rationality, he is still being irrational.
↑ comment by NMJablonski · 2011-05-01T18:51:48.692Z · LW(p) · GW(p)
This.
↑ comment by Perplexed · 2011-05-02T00:43:05.656Z · LW(p) · GW(p)
I'm claiming that there is a particular moral code which has the spooky game-theoretical property that it produces the most utility for you and for others. That is, it is the metric which is Pareto optimal and which is also a 'fair' bargain.
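To illustrate the kind of property I mean, here is a toy sketch with invented payoffs; 'fair' is cashed out as the Nash bargaining solution, which is one standard formalization, not necessarily the right one:

```python
# Toy sketch of "Pareto optimal and a 'fair' bargain". Payoffs are
# invented; "fair" is cashed out as the Nash bargaining solution
# (maximize the product of each player's gain over the no-agreement
# point), which is one standard formalization among several.

outcomes = {                  # (payoff to row player, payoff to column player)
    "both_cooperate": (3, 3),
    "row_defects":    (5, 0),
    "col_defects":    (0, 5),
    "both_defect":    (1, 1),
}
disagreement = outcomes["both_defect"]   # life with no moral code at all

def pareto_optimal(name):
    """No other outcome is at least as good for both and better for one."""
    u = outcomes[name]
    return not any(v[0] >= u[0] and v[1] >= u[1] and v != u
                   for v in outcomes.values())

def nash_product(name):
    u = outcomes[name]
    return (u[0] - disagreement[0]) * (u[1] - disagreement[1])

frontier = [n for n in outcomes if pareto_optimal(n)]
print(frontier)                           # three Pareto-optimal outcomes
print(max(frontier, key=nash_product))    # 'both_cooperate' is the fair one
```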
Replies from: NMJablonski↑ comment by NMJablonski · 2011-05-02T02:00:53.214Z · LW(p) · GW(p)
So you're saying that there's one single set of behaviors, which, even though different agents will assign drastically different values to the same potential outcomes, balances their conflicting interests to provide the most net utility across the group. That could be true, although I'm not convinced.
Even if it is, though, what the optimal strategy is will change if the net values across the group changes. The only point I have ever tried to make in these threads is that the origin of any applicable moral value must be the subjective preferences of the agents involved.
The reason any agent would agree to follow such a rule set is if you could demonstrate convincingly that such behaviors maximize that agent's utility. It all comes down to subjective values. There exists no other motivating force.
Replies from: Perplexed, Amanojack↑ comment by Perplexed · 2011-05-02T02:29:50.125Z · LW(p) · GW(p)
... what the optimal strategy is will change if the net values across the group changes.
True, but that may not be as telling an objection as you seem to think. For example, suppose you run into someone (not me!) who claims that the entire moral code is based on the 'Golden Rule' of "Do unto others as you would have others do unto you." Tell that guy that moral behavior changes if preferences change. He will respond "Well, duh! What is your point?"
Replies from: NMJablonski↑ comment by NMJablonski · 2011-05-02T02:49:25.907Z · LW(p) · GW(p)
Well, duh! What is your point?
There are people who do not recognize this. It was, in fact, my point.
Edit: Hmm, did I say something rude, Perplexed?
Replies from: Perplexed↑ comment by Perplexed · 2011-05-02T03:42:10.631Z · LW(p) · GW(p)
Hmm, did I say something rude, Perplexed?
Not to me. I didn't downvote, and in any case I was the first to use the rude "duh!", so if you were rude back I probably deserved it. Unfortunately, I'm afraid I still don't understand your point.
Perhaps you were rude to those unnamed people who you suggest "do not recognize this".
Replies from: NMJablonski↑ comment by NMJablonski · 2011-05-02T17:53:54.461Z · LW(p) · GW(p)
Unfortunately, I'm afraid I still don't understand your point.
I think we may have reached the somewhat common on LW point where we're arguing even though we have no disagreement.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2011-05-03T01:13:44.871Z · LW(p) · GW(p)
It's easy to bristle when someone, in response to you, points out something you thought it was obvious that you knew. This happens all the time when people think they're smart :)
↑ comment by Amanojack · 2011-05-02T02:36:16.477Z · LW(p) · GW(p)
I'm fond of including clarification like, "subjective values (values defined in the broadest possible sense, to include even things like your desire to get right with your god, to see other people happy, to not feel guilty, or even to "be good")."
Some ways I've found to dissolve people's language back to subjective utility:
If someone says something is good, right, bad, or wrong, ask, "For what purpose?"
If someone declares something immoral, unjust, unethical, ask, "So what unhappiness will I suffer as a result?"
But use sparingly, because there is a big reason many people resist dissolving this confusion.
↑ comment by AlephNeil · 2011-05-01T22:13:13.611Z · LW(p) · GW(p)
Don't think of morality as a doctrine guiding you as to how to behave. Instead, imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Yes! That's a point that I've repeated so often to so many different people [not on LW, though] that I'd more-or-less "given up" - it began to seem as futile as swatting flies in summer. Maybe I'll resume swatting now I know I'm not alone.
Replies from: Swimmer963, Perplexed↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-01T22:20:37.397Z · LW(p) · GW(p)
Don't think of morality as a doctrine guiding you as to how to behave.
This is mainly how I use morality. I control my own actions, not the actions of other people, so for me it makes sense to judge my own actions as good or bad, right or wrong. I can change them. Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
Replies from: None, Perplexed↑ comment by [deleted] · 2011-05-01T22:31:01.286Z · LW(p) · GW(p)
Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
Avoiding a person (a) does not (necessarily) persuade them to act differently, but (b) definitely changes the state of the world. This is not a minor nitpicking point. Avoiding people is also called social ostracism, and it's a major way that people react to misbehavior. It has the primary effect of protecting themselves. It often has the secondary effect of convincing the ostracized person to improve their behavior.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-01T22:33:59.863Z · LW(p) · GW(p)
Then I would consider that a case where I could change their behaviour. There are instances where avoiding someone would bother them enough to have an effect, and other cases where it wouldn't.
Replies from: None↑ comment by Perplexed · 2011-05-02T00:52:01.460Z · LW(p) · GW(p)
it makes sense to judge my own actions as good or bad, right or wrong. I can change them.
Yes, but if you judge a particular action of your own to be 'wrong', then why should you avoid that action? The definition of wrong that I supply solves that problem. By definition if an action is wrong, then it is likely to elicit punishment. So you have a practical reason for doing right rather than doing wrong.
Furthermore, if you do your duty and reward and/or punish other people for their behavior, then they too will have a practical reason to do right rather than wrong.
Before you object "But that is not morality!", ask yourself how you learned the difference between right and wrong.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-02T11:43:07.842Z · LW(p) · GW(p)
ask yourself how you learned the difference between right and wrong.
It's a valid point that I probably learned morality this way. I think that's actually the definition of 'preconventional' morality - it's based on reward/punishment. Maybe all my current moral ideas have roots in that childhood experience, but they aren't covered by it anymore. There are actions that would be rewarded by most of the people around me, but which I avoid because I consider there to be a "better" alternative. (I should be able to think of more examples of this, but I guess one is laziness at work. I feel guilty if I don't do the cleaning and maintenance that needs doing even though everyone else does almost nothing. I also try to follow a "golden rule" that if I don't want something to happen to me, I won't do it to someone else even if the action is socially acceptable amidst my friends and wouldn't be punished.)
Replies from: Perplexed↑ comment by Perplexed · 2011-05-02T16:19:34.105Z · LW(p) · GW(p)
I think that's actually the definition of 'preconventional' morality-it's based on reward/punishment.
Ah. Thanks for bringing up the Kohlberg stages - I hadn't been thinking in those terms.
The view of morality I am promoting here is a kind of meta-pre-conventional viewpoint. That is, morality is not 'that which receives reward and punishment', it is instead 'that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level'.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-05-02T16:39:46.287Z · LW(p) · GW(p)
'that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level'.
How many people? I think (I remember reading in my first-year psych textbook) that most adults functioning at a "normal" level in society are at the conventional level: they have internalized whatever moral standards surround them and obey them as rules, rather than thinking directly of punishment or reward. (They may still be thinking indirectly of punishment and reward; a conventionally moral person obeys the law because it's the law and it's wrong to break the law, implicitly because they would be punished if they did.) I'm not really sure how to separate how people actually reason on moral issues, versus how they think they do, and whether the two are often (or ever???) the same thing.
Replies from: Perplexed↑ comment by Perplexed · 2011-05-02T16:55:52.079Z · LW(p) · GW(p)
How many people?
How many people are stuck at that level? I don't know.
How many people must be stuck there to justify the use of punishment as deterrent? My gut feeling is that we are not punishing too much unless the good done (to society) by deterrence is outweighed by the evil done (to the 'criminal') by the punishment.
And also remember that we can use carrots as well as sticks. A smile and a "Thank you" provide a powerful carrot to many people. How many? Again, I don't know, but I suspect that it is only fair to add these carrot-loving pre-conventionalists in with the ones who respond only to sticks.
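The consequentialist test I'm relying on is just an inequality; a sketch, with quantities invented (and, in reality, hard to estimate and hard to compare):

```python
# Sketch of the consequentialist test implied above. Quantities are
# invented; carrots enter the ledger the same way sticks do.

def sanction_justified(benefit_to_society, cost_to_target):
    """Punish (or reward) only while the good done by deterrence
    (or encouragement) outweighs the cost of administering it."""
    return benefit_to_society > cost_to_target

print(sanction_justified(benefit_to_society=8, cost_to_target=5))  # True
print(sanction_justified(benefit_to_society=2, cost_to_target=5))  # False
```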
↑ comment by Marius · 2011-05-01T17:47:42.050Z · LW(p) · GW(p)
What I do not understand is when people use the words "right" or "wrong" independently of any agent's preferences
Assuming Amanojack explained your position correctly, then there aren't just people fulfilling their preferences. There are people doing all kinds of things that fulfill or fail to fulfill their preferences - and, not entirely coincidentally, which bring happiness and grief to themselves or others. So then a common reasonable definition of morality (that doesn't involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-05-01T18:01:03.777Z · LW(p) · GW(p)
there aren't just people fulfilling their preferences.
You missed a word in my original. I said that there were agents trying to fulfill their preferences. Now, per my comment at the end of your subthread with Amanojack, I realize that the word "preferences" may be unhelpful. Let me try to taboo it:
There are intelligent agents who assign higher values to some futures than others. I observe them generally making an effort to actualize those futures, but sometimes failing due to various immediate circumstances, which we could call cognitive overrides. What I mean by that is that these agents have biases and heuristics which lead them to poorly evaluate the consequences of actions.
Even if a human sleeping on the edge of a cliff knows that the cliff edge is right next to him, he will jolt if startled by noise or movement. He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it. Similarly, under conditions of sufficient hunger, thirst, fear, or pain, the analytical parts of the agent's mind give way to evolved heuristics.
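A minimal sketch of that restatement (names and numbers invented): the agent usually pursues the future it values most, except when a heuristic fires first.

```python
# Minimal sketch of the restatement above: an agent assigns values to
# futures and usually pursues the highest-valued one, but an evolved
# heuristic can fire before analysis. All names and numbers invented.

futures = {"sleep_safely": 10, "jolt_toward_noise": -100}

def act(futures, startled=False):
    if startled:
        # Cognitive override: the reflex runs before any evaluation
        # of consequences, regardless of the agent's own valuations.
        return "jolt_toward_noise"
    return max(futures, key=futures.get)   # deliberate choice

print(act(futures))                  # 'sleep_safely'
print(act(futures, startled=True))   # 'jolt_toward_noise'
```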
definition of morality (that doesn't involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
If that's how you would like to define it, that's fine. Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
Replies from: Marius↑ comment by Marius · 2011-05-02T19:43:08.383Z · LW(p) · GW(p)
He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it
I suspect it's a matter of degree rather than either-or. People sleeping on the edges of cliffs are much less likely to jolt when startled than people sleeping on soft beds, but not 0% likely. The interplay between your biases and your reason is highly complex.
Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
Yes; absolutely. I suspect that a coherent definition of morality that isn't contingent on those will have to reference a deity.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-05-03T04:16:42.337Z · LW(p) · GW(p)
We are, near as I can tell, in perfect agreement on the substance of this issue. Aumann would be proud. :)
↑ comment by Marius · 2011-04-28T02:47:05.685Z · LW(p) · GW(p)
I don't understand what you mean by preferences when you say "intelligent agents trying to fulfill their preferences". I have met plenty of people who were trying to do things contrary to their preferences. Perhaps before you try (or someone tries for you) to distinguish morality from preferences, it might be helpful to distinguish precisely how preferences and behavior can differ?
Replies from: Amanojack↑ comment by Amanojack · 2011-04-28T03:14:24.011Z · LW(p) · GW(p)
Example? I prefer not to stay up late, but here I am doing it. It's not that I'm acting against my preferences, because my current preference is to continue typing this sentence. It's simply that English doesn't differentiate very well between "current preferences"= "my preferences right this moment" and "current preferences"= "preferences I have generally these days."
Seinfeld said it best.
Replies from: Marius↑ comment by Marius · 2011-04-28T10:24:49.707Z · LW(p) · GW(p)
But I want an example of people acting contrary to their preferences; you're giving one of yourself acting according to your current preferences. Hopefully, NMJablonski has an example of a common action that is genuinely contrary to the actor's preferences. Otherwise, the word "preference" simply means "behavior" to him and shouldn't be used by him. He would be able to simplify to "the actions I prefer are the actions I perform," or "morality is just behavior," which isn't very interesting to talk about.
Replies from: Amanojack↑ comment by Amanojack · 2011-05-01T15:20:44.407Z · LW(p) · GW(p)
"This-moment preferences" are synonymous with "behavior," or more precisely, "(attempted/wished-for) action." In other words, in this moment, my current preferences = what I am currently striving for.
Jablonski seems to be using "morality" to mean something more like the general preferences that one exhibits on a recurring basis, not this-moment preferences. And this is a recurring theme: that morality is questions like, "What general preferences should I cultivate?" (to get more enjoyment out of life)
Replies from: Marius↑ comment by Marius · 2011-05-01T17:07:19.795Z · LW(p) · GW(p)
Ok, so if I understand you correctly: It is actually meaningful to ask "what general preferences should I cultivate to get more enjoyment out of life?" If so, you describe two types of preference: the higher-order preference (which I'll call a Preference) to get enjoyment out of life, and the lower-order "preference" (which I'll call a Habit or Current Behavior rather than a preference, to conform to more standard usage) of eating soggy bland french fries if they are sitting in front of you regardless of the likelihood of delicious pizza arriving. So because you prefer to save room for delicious pizza yet have the Habit of eating whatever is nearby and convenient, you can decide to change that Habit. You may do so by changing your behavior today and tomorrow and the day after, eventually forming a new Habit that conforms better to your preference for delicious foods.
Am I describing this appropriately? If so, by the above usage, is morality a matter of Behavior, Habit, or Preference?
Replies from: Amanojack↑ comment by Amanojack · 2011-05-01T17:28:45.174Z · LW(p) · GW(p)
Sounds fairly close to what I think Jablonski is saying, yes.
Preference isn't the best word choice. Ultimately it comes down to realizing that I want different things at different times, but in English future wanting is sometimes hard to distinguish from present wanting, which can easily result in a subtle equivocation. This semantic slippage is injecting confusion into the discussion.
Perhaps we have all had the experience of thinking something like, "When 11pm rolls around, I want to want to go to sleep." And it makes sense to ask, "How can I make it so that I want to go to sleep when 11pm rolls around?" Sure, I presently want to go to sleep early tonight, but will I want to then? How can I make sure I will want to? Such questions of pure personal long-term utility seem to exemplify Jablonksi's definition of morality.
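One way to make the equivocation visible is to time-index the wanting. A toy sketch, assuming hyperbolic discounting with made-up numbers, reproduces the "want to want" pattern:

```python
# Toy sketch: time-indexing "want" makes the equivocation visible.
# Under hyperbolic discounting the 5pm self and the 11pm self rank
# the same 11pm choice differently. Rewards and the constant k are
# made up for illustration.

def discounted(reward, hours_until, k=1.0):
    """Hyperbolically discounted value of a future reward."""
    return reward / (1 + k * hours_until)

def wants_to_sleep_at_11(now_hour):
    # The choice happens at 11pm (hour 23): a small immediate reward
    # for staying up, versus feeling rested at 8am (hour 32).
    stay_up = discounted(reward=5,  hours_until=23 - now_hour)
    sleep   = discounted(reward=30, hours_until=32 - now_hour)
    return sleep > stay_up

print(wants_to_sleep_at_11(now_hour=17))  # True: at 5pm I want to sleep then
print(wants_to_sleep_at_11(now_hour=23))  # False: at 11pm "now" wins
```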
Replies from: Marius↑ comment by Marius · 2011-05-01T17:46:39.910Z · LW(p) · GW(p)
ok cool, replying to the original post then.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-05-01T17:50:05.579Z · LW(p) · GW(p)
Oops, I totally missed this subthread.
Amanojack has, I think, explained my meaning well. It may be useful to reduce down to physical brains and talk about actual computational facts (i.e. utility function) that lead to behavior rather than use the slippery words "want" or "preference".
Replies from: Amanojack↑ comment by Amanojack · 2011-05-01T18:03:19.755Z · LW(p) · GW(p)
Good idea. Like, "My present utility function calls for my future utility function to be such and such"?
Replies from: NMJablonski↑ comment by NMJablonski · 2011-05-01T18:07:51.040Z · LW(p) · GW(p)
I replied to Marius higher up in the thread with my efforts at preference-taboo.
↑ comment by Peterdjones · 2011-04-28T13:17:53.685Z · LW(p) · GW(p)
Clearly there's a group of people who dislike what I've said in this thread, as I've been downvoted quite a bit.
Same here.
"Those actions which I prefer" or "Those actions which most agents in a particular place prefer" or "Those actions which fulfill arbitrary metric X"
It doesn't mean any of those things, since any of them can be judged wrong.
Likewise, "wrong" inherits its meaning from whatever definition is given for "right". It makes sense to me to talk about preferences. They're important. If that's what people are talking about when they discuss morality, then that makes perfect sense.
Morality is about having the right preferences, as rationality is about having true beliefs.
What I do not understand is when people use the words "right" or "wrong" independently of any agent's preferences. I don't see what they are referring to, or what those words even mean in that context.
Do you think the sentence "there are truths no-one knows" is meaningful?
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T16:41:51.709Z · LW(p) · GW(p)
Morality is about having the right preferences, as rationality is about having true beliefs.
I understand what it would mean to have a true belief, as truth is noticeably independent of belief. I can be surprised, and I can anticipate. I have an understanding of a physical world of which I am part, and which generates my experiences.
It does not make any sense for there to be some "correct" preferences. Unlike belief, where there is an actual territory to map, preferences are merely a byproduct of the physical processes of intelligence. They have no higher or divine purpose which demands certain preferences be held. Evolution selects for those which aid survival, and it doesn't matter if survival means aggression or cooperation. The universe doesn't care.
I think you and other objective moralists in this thread suffer from extremely anthropocentric thinking. If you rewind the universe to a time before there are humans, in a time of early expansion and the first formation of galaxies, do the "correct" preferences that any agent must strive to discover exist then? Do they exist independent of what kinds of life evolve in what conditions?
If you are able to zoom out of your skull, and view yourself and the world around you as interesting molecules going about their business, you'll see how absurd this is. Play through the evolution of life on a planetary scale in your mind. Be aware of the molecular forces at work. Run it on fast forward. Stop and notice the points where intelligence is selected for. Watch social animals survive or die based on certain behaviors. See the origin of your own preferences, and why they are so different from some other humans.
Objective morality is a fantasy of self-importance, and a hold-over from ignorant quasi-religious philosophy which has now cloaked itself in scientific terms and hides in university philosophy departments. Physics is going to continue to play out. The only agents who can ever possibly care what you do are other physical intelligences in your light cone.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T16:52:39.869Z · LW(p) · GW(p)
Do you think mathematical statements are true and false? Do you think mathematics has an actual territory?
It is plainly the case that people can have morally wrong preferences, and therefore it is no argument against ethics that ethics is not forced on people. People will suffer if they hold incorrect or irrational factual beliefs, and they will suffer if they have evil preferences. In both cases there is a distinction between right and wrong, and in both cases there is a choice.
I think you and others on this thread suffer from a confusion between ontology and epistemology. There can be objective truths in mathematics without having the number 23 floating around in space. Moral objectivity likewise does not demand the physical existence of moral objects.
There are things I don't want done to me. I should not therefore do them to others. I can reason my way to that conclusion without the need for moral objects, and without denying that I am made of atoms.
Replies from: CuSithBell, NMJablonski↑ comment by CuSithBell · 2011-04-28T18:57:12.801Z · LW(p) · GW(p)
Wait. So you don't believe in an objective notion of morality, in the sense of a morality that would be true even if there were no people? Instead, you think of morality as, like, a set of reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their "selfishness" a desire for the well-being of others?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T19:21:18.343Z · LW(p) · GW(p)
Everything is non objective for some value of objective. It is doubtful that there are mathematical truths without mathematicians. But that does not make math as subjective as art.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T19:33:46.094Z · LW(p) · GW(p)
Okay. The distinction I am drawing is: are moral facts something "out there" to be discovered, self-justifying, etc., or are they facts about people, their minds, their situations, and their relationships?
Could you answer the question for that value of objective? Or, if not, could you answer the question by ignoring the word "objective" or providing a particular value for it?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T19:53:27.634Z · LW(p) · GW(p)
The second is closer, but there is still the issue of the fact-value divide.
ETA: I have a substantive pre-written article on this, but where am I going to post it with my karma...?
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T19:57:16.253Z · LW(p) · GW(p)
I translate that as: it's better to talk about "moral values" than "moral facts" (moral facts being facts about what moral values are, I guess), and moral values are (approximately) reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their "selfishness" a desire for the well-being of others.
Something like that? If not, could you translate for me instead?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T20:13:48.460Z · LW(p) · GW(p)
I think the fact that moral values apply to groups is important.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T20:29:24.523Z · LW(p) · GW(p)
I take this to mean that, other than that, you agree.
(This is the charitable reading, however. You seem to be sending strong signals that you do not wish to have a productive discussion. If this is not your intent, be careful - I expect that it is easy to interpret posts like this as sending such signals.)
If this is true, then I think the vast majority of the disagreements you've been having in this thread have been due to unnecessary miscommunication.
↑ comment by NMJablonski · 2011-04-28T16:58:07.716Z · LW(p) · GW(p)
Do you think mathematical statements are true and false? Do you think mathematics has an actual territory?
Mathematics is not Platonically real. If it were, we would get Tegmark IV, and then every instant of a sensible, ordered universe would be evidence against it, unless we are Boltzmann brains. So, no, mathematics does not have an actual territory. It is an abstraction of physical behaviors that intelligences can use because intelligences are also physical. Mathematics works because we can perform isomorphic physical operations inside our brains.
It is plainly the case that people can have morally wrong preferences
You can say that as many times as you like, but that won't make it true.
ETA: You also still haven't explained how a person can know that.
Replies from: jimrandomh, Peterdjones↑ comment by jimrandomh · 2011-04-28T17:21:00.472Z · LW(p) · GW(p)
Mathematics is not Platonically real. If it were, we would get Tegmark IV, and then every instant of a sensible, ordered universe would be evidence against it, unless we are Boltzmann brains.
Only if is-real is a boolean. If it's a number, then mathematics can be "platonically real" without us being Boltzmann brains.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T17:23:35.466Z · LW(p) · GW(p)
Upvoted. That's a good point, but also a whole other rabbit hole. Do you think morality is objective?
Replies from: None, jimrandomh↑ comment by [deleted] · 2011-04-28T20:25:55.130Z · LW(p) · GW(p)
Do you think morality is objective?
As opposed to what? Subjective? What are the options? Because that helps to clarify what you mean by "objective". Prices are created indirectly by subjective preferences and they fluctuate, but if I had to pick between calling them "subjective" or calling them "objective" I would pick "objective", for a variety of reasons.
↑ comment by jimrandomh · 2011-04-28T18:25:15.386Z · LW(p) · GW(p)
Do you think morality is objective?
No; morality reduces to values that can only be defined with respect to an agent, or a set of agents plus an aggregation process. However, almost all of the optimizing agents we know about (humans) share some values in common. This creates a limited sort of objectivity: most of the contexts we might define morality with respect to agree qualitatively with each other, which usually lets people get away with failing to specify the context.
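A toy sketch of the "set of agents plus an aggregation process" reduction (the agents, the numbers, and the choice of averaging are all invented for illustration):

```python
# Each hypothetical agent has its own utility function over actions.
alice = {"murder": -100.0, "charity": 10.0}
bob = {"murder": -90.0, "charity": 5.0}

def aggregate(agents, action):
    # One possible aggregation process: average the agents' utilities.
    # "Morality" in this sense is only defined relative to this choice
    # of agent set and this choice of aggregation.
    return sum(agent[action] for agent in agents) / len(agents)

print(aggregate([alice, bob], "murder"))   # -> -95.0
print(aggregate([alice, bob], "charity"))  # -> 7.5
```

Leaving the agent set and aggregation unspecified usually goes unnoticed precisely because most humans' functions agree on the easy cases.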
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T19:15:26.621Z · LW(p) · GW(p)
morality reduces to values that can only be defined with respect to an agent, or a set of agents plus an aggregation process.
Upvoted. I think you could get a decent definition of the word "morality" along these lines.
↑ comment by Peterdjones · 2011-04-28T17:13:49.496Z · LW(p) · GW(p)
A person can know that by reasoning about it.
If you think there is nothing wrong with having a preference for murder, it is about time you said so. It changes a lot.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T17:19:00.463Z · LW(p) · GW(p)
It still isn't clear what it means for a preference for murder to be "wrong"!
So far I can only infer your definition of "wrong" to be:
"Not among the correct preferences"
... but you still haven't explained to us why you think there are correct preferences, besides to stamp your foot and say over and over again "There are obviously correct preferences" even when many people do not agree.
I see no reason to believe that there is a set of "correct" preferences to check against.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T17:34:28.087Z · LW(p) · GW(p)
So you think there is nothing wrong in having a preference for murder? Yes or no?
I need to find out whether I should be arguing to specific cases from general principles or vice versa.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T17:35:16.326Z · LW(p) · GW(p)
I do not believe there is a set of correct preferences. There is no objective right or wrong.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T18:09:37.902Z · LW(p) · GW(p)
Funny how you never quite answer the question as stated. Can you even say it is subjectively wrong?
Replies from: NMJablonski, wedrifid↑ comment by NMJablonski · 2011-04-28T18:12:08.451Z · LW(p) · GW(p)
"Wrong" meaning what?
Would I prefer the people around me not be bloodthirsty? Yes, I would prefer that.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T18:46:36.274Z · LW(p) · GW(p)
Can people reason that bloodthirst is not a good preference to have...?
Replies from: None, wedrifid↑ comment by [deleted] · 2011-04-28T19:01:21.631Z · LW(p) · GW(p)
Even if there's no such thing as objective right and wrong, they might easily be able to reason that being bloodthirsty is not in their best selfish interest.
Replies from: Eugine_Nier, Peterdjones↑ comment by Eugine_Nier · 2011-04-29T01:12:08.915Z · LW(p) · GW(p)
bloodthirsty is not in their best selfish interest.
If there's no right or wrong, why does that matter?
Replies from: None↑ comment by [deleted] · 2011-04-29T03:15:07.132Z · LW(p) · GW(p)
I don't understand the question, nor why you singled out that fragment.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-29T03:32:59.043Z · LW(p) · GW(p)
When you say "Even if there's no such thing as objective right and wrong" you're still implicitly presuming a default morality, namely ethical egoism.
↑ comment by Peterdjones · 2011-04-28T19:11:14.508Z · LW(p) · GW(p)
Yes. Even subjective morality refutes NMJ's nihilism.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T19:11:37.554Z · LW(p) · GW(p)
I agree with Sewing-Machine.
Being bloodthirsty would lead to results I do not prefer.
ETA: Therefore I would not choose to become bloodthirsty. This is based on existing preference.
↑ comment by wedrifid · 2011-04-29T03:49:35.524Z · LW(p) · GW(p)
Funny how you never quite answer the question as stated. Can you even say it is subjectively wrong?
It isn't 'funny' at all. You were trying to force someone into a lose-lose morality-signalling position. It is appropriate to ignore such attempts and instead state what your actual position is.
Your gambit here verges on logically rude.
↑ comment by Eugine_Nier · 2011-04-28T02:45:40.598Z · LW(p) · GW(p)
In keeping with my analogy let's translate your position into the corresponding position on physics:
I see a universe which contains intelligent agents with opinions and/or beliefs. Then I see conversations about physics and reality talking about beliefs being "true" or "false". From the context and explanations, "true" seems to mean very different things. Like:
"My beliefs" or "The beliefs of most agents in a particular place" or "Those beliefs which fulfill arbitrary metric X"
Likewise, "false" inherits its meaning from whatever definition is given for "true". It makes sense to me to talk about opinions and/or beliefs . They're important. If that's what people are talking about when they discuss truth, then that makes perfect sense. What I do not understand is when people use the words "true" or "false" independently of any agent's opinion. I don't see what they are referring to, or what those words even mean in that context.
Do you still agree with the changed version? If not, why not?
(I never realized how much fun it could be to play a chronophone.)
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T02:53:02.257Z · LW(p) · GW(p)
Based upon my experiences, physical truths appear to be concrete and independent of beliefs and opinions. I see no cases where "right" has a meaning outside of an agent's preferences. I don't know how one would go about discovering the "rightness" of something, as one would a physical truth.
It is a poor analogy.
Edit: Seriously? I'm not trying to be obstinate here. Would people prefer I go away?
New edit: Thanks wedrifid. I was very confused.
Replies from: wedrifid, Eugine_Nier↑ comment by wedrifid · 2011-04-28T03:28:24.386Z · LW(p) · GW(p)
Seriously? I'm not trying to be obstinate here. Would people prefer I go away?
You're not being obstinate. You're more or less right, at least in the parent. There are a few nuances left to pick up but you are not likely to find them by arguing with Eugine.
↑ comment by Eugine_Nier · 2011-04-28T03:14:14.048Z · LW(p) · GW(p)
Based upon my experiences, physical truths appear to be concrete and independent of beliefs and opinions.
Please explain what the word "concrete" means independent of anyone's beliefs and opinions.
↑ comment by Eugine_Nier · 2011-04-28T03:46:19.592Z · LW(p) · GW(p)
Does anyone care to explain what I'm missing, or if there's something specific I did to elicit downvotes?
How about this. You stop down-voting the comments in this thread you disagree with and I'll do the same.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T03:51:08.958Z · LW(p) · GW(p)
... I'm not down-voting the comments I disagree with.
I down-voted a couple of snide comments from Peter earlier.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T03:59:47.818Z · LW(p) · GW(p)
Well, somebody is.
If it's not you I'm sorry.
↑ comment by Vladimir_Nesov · 2011-04-28T17:19:27.226Z · LW(p) · GW(p)
For the record, I think in this thread Eugine_Nier follows a useful kind of "simple truth", not making errors as a result, while some of the opponents demand sophistication in lieu of correctness.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T17:29:16.358Z · LW(p) · GW(p)
I think we're demanding clarity and substance, not sophistication. Honestly I feel like one of the major issues with moral discussions is that huge sophisticated arguments can emerge without any connection to substantive reality.
I would really appreciate it if someone would taboo the words "moral", "good", "evil", "right", "wrong", "should", etc. and try to make the point using simpler concepts that have less baggage and ambiguity.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-04-28T17:32:58.619Z · LW(p) · GW(p)
Clarity can be difficult. What do you mean by "truth"?
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T17:34:18.055Z · LW(p) · GW(p)
I mean it in precisely the sense that The Simple Truth does. Anticipation control.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-04-28T17:39:38.470Z · LW(p) · GW(p)
That's not the point. You must use your heuristics even if you don't know how they work, and avoid demanding to know how they work or how they should work as a prerequisite to being allowed to use them. Before developing technical ideas about what it means for something to be true, or what it means for something to be right, you need to allow yourself to recognize when something is true, or is right.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T17:47:53.334Z · LW(p) · GW(p)
I'm sorry, but if we had no knowledge of brains, cognition, and the nature of preference, then sure, I'd use my feelings of right or wrong as much as the next guy, but that doesn't make them objectively true.
Likewise, just because I intuitively feel like I have a time-continuous self, that doesn't make consciousness fundamental.
As an agent, having knowledge of what I am, and what causes my experiences, changes my simple reliance on heuristics to a more accurate scientific exploration of the truth.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-04-28T18:11:52.724Z · LW(p) · GW(p)
I'm sorry, but if we had no knowledge of brains, cognition, and the nature of preference, then sure
Just make sure that the particular piece of knowledge you demand is indeed available, and not, say, just the thing you are trying to figure out.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T18:15:10.105Z · LW(p) · GW(p)
(Nod)
I still think it's a pretty simple case here. Is there a set of preferences which all intelligent agents are compelled by some force to adopt? Not as far as I can tell.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T18:36:24.861Z · LW(p) · GW(p)
Morality doesn't work like physical law either. Nobody is compelled to be rational, but people who do reason can agree about certain things. That includes moral reasoning.
Replies from: nshepperd↑ comment by nshepperd · 2011-04-30T11:37:24.930Z · LW(p) · GW(p)
I think we should move this conversation back out of the other post, where it really doesn't belong.
Can you clarify what you mean by this?
For what X are you saying "All agents that satisfy X must follow morality."?
Replies from: TheOtherDave, Peterdjones↑ comment by TheOtherDave · 2011-04-30T12:20:23.441Z · LW(p) · GW(p)
If you're moving it anyway, I would recommend moving it here instead.
↑ comment by Peterdjones · 2011-04-30T12:27:34.496Z · LW(p) · GW(p)
I'm saying that in "to be moral you must follow whatever rules constitute morality" the "must" is a matter of logical necessity, as opposed to the two interpretations of compulsion considered by NMJ: physical necessity and edict.
Replies from: JoshuaZ, nshepperd↑ comment by JoshuaZ · 2011-04-30T13:22:20.796Z · LW(p) · GW(p)
You still haven't explained, in this framework, why people "should" be moral any more than they "should" play chess. If morality is just another game, then it loses all the force you associate with it, and it seems clear that you are distinguishing between chess and morality.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-30T17:48:00.515Z · LW(p) · GW(p)
The rules of physics have a special quality of unavoidability: you don't have the option of avoiding them. Likewise, people are held morally accountable under most circumstances, and can't just avoid culpability by saying "oh, I don't play that game". I don't think these are a posteriori facts. I think physics is definitionally the science of the fundamental, and morality is definitionally where the evaluative buck stops.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-30T17:57:58.452Z · LW(p) · GW(p)
... but they're held morally accountable by agents whose preferences have been violated. The way you just described it means that morality is just those rules that the people around you currently care enough about to punish you if you break them.
In which case morality is entirely subjective and contingent on what those around you happen to value, no?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-30T18:45:32.393Z · LW(p) · GW(p)
It can make sense to say that the person being punished was actually in the right. Were the British right to imprison Gandhi?
Replies from: JoshuaZ, wedrifid↑ comment by JoshuaZ · 2011-04-30T22:42:20.853Z · LW(p) · GW(p)
It can make sense to say that the person being punished was actually in the right. Were the British right to imprison Gandhi?
Peter, at this point, you seem very confused. You've asserted that morality is just like chess, apparently comparing it to a game with agreed-upon rules. You've then tried to assert that morality is somehow different, a more privileged game that people "should" play, but the only evidence you've given is that in societies with a given moral system, people who don't abide by that system suffer. Yet your comment about Gandhi then endorses naive moral realism.
It is possible that there's a coherent position here and we're just failing to understand you. But right now that looks unlikely.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-01T13:39:09.916Z · LW(p) · GW(p)
As I have pointed out about three times, the comparison with chess was to make a point about obligation, not to make a point about arbitrariness.
the only evidence you've given is that in societies with a given moral system people who don't abide by that moral system suffer.
I never gave that; that was someone else's characterisation. What I said was that it is an analytical truth that morality is where the evaluative buck stops.
I don't know what you mean by the "naive" in naive realism. It is a central characteristic of any kind of realism that you can have truth beyond conventional belief. The idea that there is more to morality than what a particular society wants to punish is a coherent one. It is better as morality, because subjectivism is too open to get-out clauses. It is better as an explanation, because it can explain how de facto morality in societies and individuals can be overturned for something better.
↑ comment by nshepperd · 2011-04-30T14:29:52.707Z · LW(p) · GW(p)
Hmm... This is reminiscent of Eliezer's (and my) metaethics¹. In particular, I would say that "the rules that constitute morality" are, by the definition embedded in my brain, some set which I'm not exactly sure of the contents of but which definitely includes {kindness, not murdering, not stealing, allowing freedom, ...}. (Well, it may actually be a utility function, but sets are easier to convey in text.)
In that case, "should", "moral", "right" and the rest are all just different words for "the object is in the above set (which we call morality)". And then "being moral" means "following those rules" as a matter of logical necessity, as you've said. But this depends on what you mean by "the rules constituting morality", on which you haven't said whether you agree.
What do you think?
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-30T15:16:19.291Z · LW(p) · GW(p)
In particular, I would say that "the rules that constitute morality" are, by the definition embedded in my brain, some set which I'm not exactly sure of the contents of but which definitely includes {kindness, not murdering, not stealing, allowing freedom, ...}.
What determines the contents of the set / details of the utility function?
Replies from: nshepperd↑ comment by nshepperd · 2011-04-30T16:39:20.100Z · LW(p) · GW(p)
The short answer is: my/our preferences (suitably extrapolated).
The long answer is: it exists as a mathematical object regardless of anyone's preference, and one can judge things by it even in an empty universe. The reason we happen to care about this particular object is because it embodies our preferences, and we can find out exactly what object we are talking about by examining our preferences. It really adds up to the same thing, but if one only heard the short answer they might think it was about preferences, rather than described by them.
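To picture the "mathematical object" reading, a toy sketch (the contents of the function are invented for illustration; any real version would be vastly more complicated):

```python
# The mathematical object itself: a fixed function from states to
# scores. It is the same function whoever cares about it, and it can
# be applied even to an empty universe.
def morality(state):
    score = 0.0
    if state.get("kindness"):
        score += 1.0
    if state.get("murder"):
        score -= 10.0
    return score

print(morality({}))                # -> 0.0 (judging an empty universe)
print(morality({"murder": True}))  # -> -10.0
```

Our preferences are how we *found* this object; they are not what makes it the function that it is.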
But anyway, I think I'm mostly trying to summarise the metaethics sequence by this point :/ (probably wrongly :p)
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-30T17:03:10.253Z · LW(p) · GW(p)
I see what you mean, and I don't think I disagree.
I think one more question will clarify. If your / our preferences were different, would the mathematical set / utility function you consider to be morality be different also? Namely, is the set of "rules that constitute morality" contingent upon what an agent already values (suitably extrapolated)?
Replies from: nshepperd↑ comment by nshepperd · 2011-05-01T13:55:26.620Z · LW(p) · GW(p)
No. On the other hand, me!pebble-sorter would have no interest in morality at all, and go on instead about how p-great p-morality is. But I wouldn't mix up p-morality with morality.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-05-01T16:23:50.051Z · LW(p) · GW(p)
So, you're defining "morality" as an extrapolation from your preferences now, and if your preferences change in the future, that future person would care about what your present self might call futureYou-morality, even if future you insists on calling it "morality"?
↑ comment by JGWeissman · 2011-04-28T02:49:41.709Z · LW(p) · GW(p)
saying "it's all preferences" about morality is analogous to saying "it's all opinion" about physics.
No matter what opinions anyone holds about gravity, objects near the surface of the earth not subject to other forces accelerate towards the earth at 9.8 meters per second per second. This is an empirical fact about physics, and we know ways our experience could be different if it were wrong. Do you have an example of a fact about morality, independent of preferences, such that we could notice if it is wrong?
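To spell out "ways our experience could be different", a minimal sketch of the anticipated observation (standard kinematics; the drop height is arbitrary):

```python
import math

g = 9.8        # m/s^2: the claimed empirical fact
height = 20.0  # meters: an arbitrary test drop

# If the claim is right, an object dropped from rest should land after
# t = sqrt(2h/g) seconds; a stopwatch reading far from this would be
# evidence against the claim.
t_predicted = math.sqrt(2 * height / g)
print(round(t_predicted, 2))  # -> 2.02
```

The question above asks what the moral analogue of t_predicted is.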
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T03:08:30.171Z · LW(p) · GW(p)
No matter what opinions anyone holds about gravity, objects near the surface of the earth not subject to other forces accelerate towards the earth at 9.8 meters per second per second.
Do you have an example of a fact about morality, independent of preferences,
Killing innocent people is wrong barring extenuating circumstances.
(I'll taboo the "weasel words" innocent and extenuating circumstances as soon as you taboo the "weasel words" near the surface of the earth and not subject to other forces.)
such that we could notice if it is wrong?
I'm not sure it's possible for my example to be wrong any more than it's possible for 2+2 to equal 3.
Replies from: NMJablonski, prase, JGWeissman↑ comment by NMJablonski · 2011-04-28T03:48:59.152Z · LW(p) · GW(p)
What is the difference between:
"Killing innocent people is wrong barring extenuating circumstances"
and
"Killing innocent people is right barring extenuating circumstances"
How do you determine which one is accurate? What observable consequences does each one predict? What do they lead you to anticipate?
Replies from: Eugine_Nier, Peterdjones↑ comment by Eugine_Nier · 2011-04-28T03:57:25.116Z · LW(p) · GW(p)
How do you determine which one is accurate? What observable consequences does each one predict? What do they lead you to anticipate?
Moral facts don't lead me to anticipate observable consequences, but they do affect the actions I choose to take.
Replies from: None, CuSithBell↑ comment by [deleted] · 2011-04-28T04:03:14.474Z · LW(p) · GW(p)
Preferences also do that.
Replies from: Eugine_Nier, Peterdjones↑ comment by Eugine_Nier · 2011-04-28T04:06:28.713Z · LW(p) · GW(p)
Yes, well, opinions also anticipate observations. But in a sense, by talking about "observable consequences" you're taking advantage of the fact that the meta-theory of science is currently much more developed than the meta-theory of ethics.
↑ comment by Peterdjones · 2011-04-28T12:57:47.040Z · LW(p) · GW(p)
But some preferences can be moral, just as some opinions can be true. There is no automatic entailment from "it is a preference" to "it has nothing to do with ethics".
↑ comment by CuSithBell · 2011-04-28T03:58:34.252Z · LW(p) · GW(p)
The question was - how do you determine what the moral facts are?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T04:03:35.476Z · LW(p) · GW(p)
Currently, intuition. Along with the existing moral theories, such as they are.
Similar to the way people determined facts about physics, especially facts beyond the direct observation of their senses, before the scientific method was developed.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T04:08:36.371Z · LW(p) · GW(p)
Right, and 'facts' about God. Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of... intuitions.
You can't really argue that objective morality not being well-defined means that it is more likely to be a coherent notion.
Replies from: Eugine_Nier, None, Peterdjones↑ comment by Eugine_Nier · 2011-04-28T04:11:26.649Z · LW(p) · GW(p)
My point is that you can't conclude the notion of morality is incoherent simply because we don't yet have a sufficiently concrete definition.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T04:15:10.302Z · LW(p) · GW(p)
Technically, yes. But I'm pretty much obliged, based on the current evidence, to conclude that it's likely to be incoherent.
More to the point: why do you think it's likely to be coherent?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T04:31:24.847Z · LW(p) · GW(p)
Mostly by outside view analogy with the history of the development of science. I've read a number of ancient Greek and Roman philosophers (along with a few post-modernists) arguing against the possibility of a coherent theory of physics using arguments very similar to the ones people are using against morality.
I've also read a (much larger) number of philosophers trying to shoehorn what we today call science into using the only meta-theory then available in a semi-coherent state: the meta-theory of mathematics. Thus we see philosophers, Descartes being the most famous, trying and failing to study science by starting with a set of intuitively obvious axioms and attempting to derive physical statements from them.
I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate.
As for how likely: I'm not sure, I just think it's more likely than a lot of people on this thread assume.
Replies from: CuSithBell, JGWeissman, Amanojack, TimFreeman↑ comment by CuSithBell · 2011-04-28T15:45:16.531Z · LW(p) · GW(p)
To be clear - you are talking about morality as something externally existing, some 'facts' that exist in the world and dictate what you should do, as opposed to a human system of don't be a jerk. Is that an accurate portrayal?
If that is the case, there are two big questions that immediately come to mind (beyond "what are these facts" and "where did they come from") - first, it seems that Moral Facts would have to interact with the world in some way in order for the study of big-M Morality to be useful at all (otherwise we could never learn what they are), or they would have to be somehow deducible from first principles. Are you supposing that they somehow directly induce intuitions in people (though, not all people? so, people with certain biological characteristics?)? (By (possibly humorous, though not mocking!) analogy, suppose the Moral Facts were being broadcast by radio towers on the moon, in which case they would be inaccessible until the invention of radio. The first radio is turned on and all signals are drowned out by "DON'T BE A JERK. THIS MESSAGE WILL REPEAT. DON'T BE A JERK. THIS MESSAGE WILL...".)
The other question is, once we have ascertained that there are Moral Facts, what property makes them what we should do? For instance, suppose that all protons were inscribed in tiny calligraphy in, say, French, "La dernière personne qui est vivant, gagne." ("The last person who is alive, wins" - apologies for Google Translate) Beyond being really freaky, what would give that commandment force to convince you to follow it? What could it even mean for something to be inherently what you should do?
It seems, ultimately, you have to ask "why" you should do "what you should do". Common answers include that you should do "what God commands" because "that's inherently What You Should Do, it is By Definition Good and Right". Or, "don't be a jerk" because "I'll stop hanging out with you". Or, "what makes you happy and fulfilled, including the part of you that desires to be kind and generous" because "the subjective experience of sentient beings are the only things we've actually observed to be Good or Bad so far".
So, where do we stand now?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-29T01:26:46.040Z · LW(p) · GW(p)
as opposed to a human system of don't be a jerk.
Now we're getting somewhere. What do you mean by the word "jerk", and why is it any more meaningful than words like "moral"/"right"/"wrong"?
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-29T01:30:45.137Z · LW(p) · GW(p)
The distinction I am trying to make is between Moral Facts Engraved Into The Foundation Of The Universe and A Bunch Of Words And Behaviors And Attitudes That People Have (as a result of evolution & thinking about stuff etc.). I'm not sure if I'm being clear, is this description easier to interpret?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-29T01:35:47.837Z · LW(p) · GW(p)
Near as I can tell, what you mean by "don't be a jerk" is one possible example of what I mean by morality.
Hope that helps.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-29T01:46:06.510Z · LW(p) · GW(p)
Great! Then I think we agree on that.
↑ comment by JGWeissman · 2011-04-28T04:47:17.561Z · LW(p) · GW(p)
I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate.
If that is true, what virtue do moral fact have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T04:48:43.444Z · LW(p) · GW(p)
If that is true, what virtue do moral fact have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?
If I knew the answer we wouldn't be having this discussion.
↑ comment by Amanojack · 2011-04-28T04:54:35.631Z · LW(p) · GW(p)
Define your terms, then you get a fair hearing. If you are just saying the terms could maybe someday be defined, this really isn't the kind of thing that needs a response.
To put it in perspective, you are speculating that someday you will be able to define what the field you are talking about even is. And your best defense is that some people have made questionable arguments against this non-theory? Why should anyone care?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T05:22:32.724Z · LW(p) · GW(p)
After thinking about it a little I think I can phrase it this way.
I want to answer the question: "What should I do?"
It's kind of a pressing question since I need to do something (doing nothing counts as a choice and usually not a very good one).
If the people arguing that morality is just preference answer: "Do what you prefer", my next question is "What should I prefer?"
Replies from: None, Amanojack, wedrifid, None↑ comment by [deleted] · 2011-04-28T07:46:39.781Z · LW(p) · GW(p)
my next question is "What should I prefer?"
Three definitions of "should":
used in auxiliary function to express obligation, propriety, or expediency
As for obligation - I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don't really see how an ordinary person could be all that puzzled about what his obligations are.
As for propriety - over and above your obligation to avoid uncontroversially nasty behavior, I doubt you have much trouble discovering what's socially acceptable (stuff like, not farting in an elevator), and anyway, it's not the end of the world if you offend somebody. Again, I don't really see how an ordinary person is going to have a problem.
As for expediency - I doubt you intended the question that way.
If this doesn't answer your question in full you probably need to explain the question. The utilitarians have this strange notion that morality is about maximizing global utility, so of course, morality in the way that they conceive it is a kind of life-encompassing total program of action, since every choice you make could either increase or decrease total utility. Maybe that's what you want answered, i.e., what's the best possible thing you could be doing.
But the "should" of obligation is not like this. We have certain obligations but these are fairly limited, and don't provide us with a life-encompassing program of action. And the "should" of propriety is not like this either. People just don't pay you any attention as long as you don't get in their face too much, so again, the direction you get from this quarter is limited.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T11:53:51.621Z · LW(p) · GW(p)
As for obligation - I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don't really see how an ordinary person could be all that puzzled about what his obligations are.
You have collapsed several meanings of obligation together there. You may have explicit legal obligations to the state, and IOU-style obligations to individuals who have done you a favour, and so on. But moral obligations go beyond all those. If you are living under a brutal dictatorship, there are conceivable circumstances where you morally should not obey the law. Etc., etc.
↑ comment by Amanojack · 2011-05-01T15:13:02.193Z · LW(p) · GW(p)
If the people arguing that morality is just preference answer: "Do what you prefer", my next question is "What should I prefer?"
In order to accomplish what?
Should you prefer chocolate ice cream or vanilla? As far as ice cream flavors go, "What should I prefer" seems meaningless...unless you are looking for an answer like, "It's better to cultivate a preference for vanilla because it is slightly healthier" (you will thereby achieve better health than if you let yourself keep on preferring chocolate).
This gets into the time structure of experience. In other words, I would be interpreting your, "What should I prefer?" as, "What things should I learn to like (in order to get more enjoyment out of life)?" To bring it to a more traditionally moral issue, "Should I learn to like a vegetarian diet (in order to feel less guilt about killing animals)?"
Is that more or less the kind of question you want to answer?
↑ comment by [deleted] · 2011-04-28T06:08:20.170Z · LW(p) · GW(p)
This might have clarified for me what this dispute is about. At least I have a hypothesis, tell me if I'm on the wrong track.
Antirealists aren't arguing that you should go on a hedonic rampage -- we are allowed to keep on consulting our consciences to determine the answer to "what should I prefer." In a community of decent and mentally healthy people we should flourish. But the main upshot of the antirealist position is that you cannot convince people with radically different backgrounds that their preferences are immoral and should be changed, even in principle.
At least, antirealism gives some support to this cynical point of view, and it's this point of view that you are most interested in attacking. Am I right?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T06:20:48.739Z · LW(p) · GW(p)
That's a large part of it.
The other problem is that anti-realists don't actually answer the question "what should I do?", they merely pass the buck to the part of my brain responsible for my preferences but don't give it any guidance on how to answer that question.
↑ comment by TimFreeman · 2011-04-28T04:37:21.777Z · LW(p) · GW(p)
Talk about morality and good and bad clearly has a role in social signaling. It is also true that people clearly have preferences that they act upon, imperfectly. I assume you agree with these two assertions; if not we need to have a "what color is the sky?" type of conversation.
If you do agree with them, what would you want from a meta-ethical theory that you don't already have?
Replies from: Eugine_Nier, Peterdjones↑ comment by Eugine_Nier · 2011-04-28T04:45:39.951Z · LW(p) · GW(p)
If you do agree with them, what would you want from a meta-ethical theory that you don't already have?
Something more objective/universal.
Edit: a more serious issue is that, just as equating facts with opinions tells you nothing about what opinions you should hold, equating morality and preference tells you nothing about what you should prefer.
Replies from: TimFreeman, Peterdjones↑ comment by TimFreeman · 2011-05-02T17:41:58.371Z · LW(p) · GW(p)
So we seem to agree that you (and Peterdjones) are looking for an objective basis for saying what you should prefer, much as rationality is a basis for saying what beliefs you should hold.
I can see a motive for changing one's beliefs, since false beliefs will often fail to support the activity of enacting one's preferences. I can't see a motive for changing one's preferences - obviously one would prefer not to do that. If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
If you live in a social milieu where people demand that you justify your preferences, I can see something resembling morality coming out of those justifications. Is that your situation? I'd rather select a different social milieu, myself.
Replies from: handoflixue, Peterdjones, Eugine_Nier↑ comment by handoflixue · 2011-05-06T20:02:44.039Z · LW(p) · GW(p)
I recently got a raise. This freed up my finances to start doing SCUBA diving. SCUBA diving benefits heavily from me being in shape.
I now have a strong preference for losing weight, and a reinforced preference for exercise, because the gains from both activities went up significantly. This also resulted in a much lower preference for certain types of food, as they're contrary to these new preferences.
I'd think that's a pretty concrete example of changing my preferences, unless we're using different definitions of "preference."
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-06T20:40:23.413Z · LW(p) · GW(p)
I suppose we are using different definitions of "preference". I'm using it as a friendly term for a person's utility function, if they seem to be optimizing for something, or we say they have no preference if their behavior can't be understood that way. For example, what you're calling food preferences are what I'd call a strategy or a plan, rather than a preference, since the end is to support the SCUBA diving. If the consequences of eating different types of food magically changed, your diet would probably change so it still supported the SCUBA diving.
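A toy way to draw that preference/strategy line (the diets, outcomes, and numbers are invented for illustration):

```python
# The preference: a fixed utility over ends.
def utility(outcome):
    return {"fit_for_scuba": 10.0, "unfit": 0.0}[outcome]

def plan(world_model):
    # The strategy is *derived*: pick the action whose predicted
    # outcome scores highest under the unchanged preference.
    return max(world_model, key=lambda action: utility(world_model[action]))

# If the consequences of foods "magically changed", the plan would
# change while the preference stayed the same.
print(plan({"light_diet": "fit_for_scuba", "heavy_diet": "unfit"}))  # light_diet
print(plan({"light_diet": "unfit", "heavy_diet": "fit_for_scuba"}))  # heavy_diet
```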
Replies from: handoflixue↑ comment by handoflixue · 2011-05-06T21:49:03.184Z · LW(p) · GW(p)
Ahh, I re-read the thread with this understanding, and was struck by this:
I like using the word "preference" to include all the things that drive a person, so I'd prefer to say that your preference has two parts
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
Presumably anyone who wants a metaethical theory has a preference that would be maximized by discovering and obeying that theory. This would still be weighted against their existing other preferences, same as my preference for rationality has yet to eliminate akrasia or procrastination from my life :)
Does that make sense as a "motivation for wanting to change your preferences"?
Replies from: TimFreeman, TimFreeman↑ comment by TimFreeman · 2011-05-06T22:35:50.974Z · LW(p) · GW(p)
I agree that akrasia is a bad thing that we should get rid of. I like to think of it as a failure to have purposeful action, rather than a preference.
My dancing around here has a purpose. You see, I have this FAI specification that purports to infer everyone's preference and take as its utility function giving everyone some weighted average of what they prefer. If it infers that my akrasia is part of my preferences, I'm screwed, so we need a distinction there. Check http://www.fungible.com. It has a lot of bugs that are not described there, so don't go implementing it. Please.
In general, if the FAI is going to give "your preference" to you, your preference had better be something stable about you that you'll still want when you get it.
If there's no fix for akrasia, then it's hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I'm spewing BS about stuff that sounds nice to do, but I really don't want to do it. I certainly would want an akrasia fix if it were available. Maybe that's the important preference.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-05-06T23:02:02.749Z · LW(p) · GW(p)
If there's no fix for akrasia, then it's hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I'm spewing BS about stuff that sounds nice to do, but I really don't want to do it.
Very much agreed.
↑ comment by TimFreeman · 2011-05-06T23:23:19.278Z · LW(p) · GW(p)
It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.
Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.
At the end of the day, you're going to prefer one action over another. It might make sense to model someone as having multiple utility functions, but you also have to say that they all get added up (or combined some other way), so that you can figure out which immediate outcome has the best expected long-term utility and predict that the person will take the action that gets them there.
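A minimal sketch of that combination step, with the component functions and weights invented for illustration:

```python
# Two component "utility functions" over the same actions, plus weights.
rationality = {"work": 2.0, "procrastinate": -1.0}
comfort = {"work": -0.5, "procrastinate": 1.5}
weights = (0.8, 0.2)

def combined(action):
    # However the parts are modeled, they must be combined somehow
    # (here: a weighted sum) to yield a single choice.
    return weights[0] * rationality[action] + weights[1] * comfort[action]

print(max(["work", "procrastinate"], key=combined))  # -> work
```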
Replies from: handoflixue↑ comment by handoflixue · 2011-05-07T00:26:01.623Z · LW(p) · GW(p)
I don't think very many people actually act in a way that suggests consistent optimization around a single factor; they optimize for multiple conflicting factors. I'd agree that you can evaluate the eventual compromise point, and I suppose you could say they optimize for that complex compromise. For me, it happens to be easier to model it as conflicting desires and a conflict resolution function layered on top, but I think we both agree on the actual result, which is that people aren't optimizing for a single clear goal like "happiness" or "lifetime income".
predict the person
Prediction seems to run into the issue that utility evaluations change over time. I used to place a high utility value on sweets; now I do not. I used to live in a location where going out to an event had a much higher cost, and thus was less often the ideal action. And so on.
It strikes me as being rather like weather: You can predict general patterns, and even manage a decent 5-day forecast, but you're going to have a lot of trouble making specific long-term predictions.
↑ comment by Peterdjones · 2011-05-03T22:56:48.981Z · LW(p) · GW(p)
I can see a motive for changing one's beliefs, since false beliefs will often fail to support the activity of enacting one's preferences. I can't see a motive for changing one's preferences
There isn't an instrumental motive for changing one's preferences. That doesn't add up to "never change your preferences" unless you assume that instrumentality -- "does it help me achieve anything?" -- is the ultimate way of evaluating things. But it isn't: morality is. It is morally wrong to design better gas chambers.
Replies from: TimFreeman, TimFreeman↑ comment by TimFreeman · 2011-05-04T02:45:45.361Z · LW(p) · GW(p)
The interesting question is still the one you didn't answer yet:
If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
I only see two possible answers, and only one of those seems likely to come from you (Peter) or Eugene.
The unlikely answer is "I wouldn't do anything different". Then I'd reply "So, morality makes no practical difference to your behavior?", and then your position that morality is an important concept collapses in a fairly uninteresting way. Your position so far seems to have enough consistency that I would not expect the conversation to go that way.
The likely answer is "If I'm willpower-depleted, I'd do the immoral thing I prefer, but on a good day I'd have enough willpower and I'd do the moral thing. I prefer to have enough willpower to do the moral thing in general." In that case, I would have to admit that I'm in the same situation, except with a vocabulary change. I define "preference" to include everything that drives a person's behavior, if we assume that they aren't suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I'm calling "preference" is the same as what you're calling "preference and morality". I am in the same situation in that when I'm willpower-depleted I do a poor job of acting upon consistent preferences (using my definition of the word), I do better when I have more willpower, and I want to have more willpower in general.
If I guessed your answer wrong, please correct me. Otherwise I'd want to fix the vocabulary problem somehow. I like using the word "preference" to include all the things that drive a person, so I'd prefer to say that your preference has two parts, perhaps an "amoral preference" which would mean what you were calling "preference" before, and "moral preference" would include what you were calling "morality" before, but perhaps we'd choose different words if you objected to those. The next question would be:
Okay, you're making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
...and I have no clue what your answer would be, so I can't continue the conversation past that point without straightforward answers from you.
Replies from: Eugine_Nier, Peterdjones↑ comment by Eugine_Nier · 2011-05-04T04:43:08.935Z · LW(p) · GW(p)
If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?
Follow morality.
Okay, you're making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
One way to illustrate this distinction is using Eliezer's "murder pill". If you were offered a pill that would reverse and/or eliminate a preference would you take it (possibly the offer includes paying you)? If the preference is something like preferring vanilla to chocolate ice cream, the answer is probably yes. If the preference is for people not to be murdered the answer is probably no.
One of the reasons this distinction is important is that, because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
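One crude way to operationalize the murder-pill test (entirely a toy construction, not Eliezer's formulation; the preferences and labels are invented):

```python
# Would the agent, consulting its *current* values, accept a pill that
# deletes a given preference?
endorses_keeping = {
    "prefers_vanilla": False,  # taste-like: take the pill, why not
    "anti_murder": True,       # moral-like: refuse, even if paid
}

def takes_pill(preference):
    # The agent takes the pill iff its current values do not endorse
    # keeping the preference.
    return not endorses_keeping[preference]

print(takes_pill("prefers_vanilla"))  # -> True
print(takes_pill("anti_murder"))      # -> False
```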
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T16:44:16.382Z · LW(p) · GW(p)
One of the reasons this distinction is important is that, because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
If that's a definition of morality, then morality is a subset of psychology, which probably isn't what you wanted.
Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can't be nailed down even with multiple days of Q&A, that would be worthwhile and not just a statement about psychology. But if we had such statements to make about morality, we would have been making them all this time and there would be clarity about what we're talking about, which hasn't happened.
Replies from: Eugine_Nier, Peterdjones↑ comment by Eugine_Nier · 2011-05-04T20:23:53.618Z · LW(p) · GW(p)
One of the reasons this distinction is important is that, because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
If that's a definition of morality, then morality is a subset of psychology, which probably isn't what you wanted.
That's not a definition of morality but an explanation of one reason why the "murder pill" distinction is important.
↑ comment by Peterdjones · 2011-05-04T16:57:39.361Z · LW(p) · GW(p)
...the way human brains are designed, thinking about your preferences can cause them to change.
If that's a definition of morality, then morality is a subset of psychology, which probably isn't what you wanted.
If that's a valid argument, then logic, mathematics, etc. are branches of psychology.
Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can't be nailed down
Are you saying there has never been any valid moral discourse or persuasion?
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T17:31:15.308Z · LW(p) · GW(p)
If that's a definition of morality, then morality is a subset of psychology, which probably isn't what you wanted.
If that's a valid argument, then logic, mathematics, etc are branches of psychology.
There's a difference between changing your mind because a discussion led you to bound your rationality differently, and changing your mind because of suggestibility and other forms of sloppy thinking. Logic and mathematics are the former, if done right. I haven't seen much non-sloppy thinking on the subject of changing preferences.
I suppose there could be such a thing -- Joe designed an elegant high-throughput gas chamber, he wants to show the design to his friends, someone tells Joe that this could be used for mass murder, Joe hadn't thought that the design might actually be used, so he hides his design somewhere so it won't be used. But that's changing Joe's belief about whether sharing his design is likely to cause mass murder, not changing Joe's preference about whether he wants mass murder to happen.
Are you saying there has never been any valid moral discourse or persuasion?
No, I'm saying that morality is a useless concept and that what you're calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T19:42:54.825Z · LW(p) · GW(p)
But that's changing Joe's belief about whether sharing his design is likely to cause mass murder, not changing Joe's preference about whether he wants mass murder to happen.
But there are other stories where the preference itself changes: "If you approve of women's rights, you should approve of gay rights."
No, I'm saying that morality is a useless concept and that what you're calling moral discourse is some mixture of (valid change of beliefs based on reflection and presentation of evidence) and invalid emotional manipulation based on sloppy thinking involving, among other things, undefined and undefinable terms.
Everything is a mixture of the invalid and the valid. Why throw something out instead of doing it better?
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T19:53:42.289Z · LW(p) · GW(p)
"If you approve of womens rights, you should approve of Gay rights".
IMO we should have gay rights because gays want them, not because moral suasion was used successfully on people opposed to gay rights. Even if your argument above worked, I can't envision a plausible reasoning system in which the argument is valid. Can you offer one? Otherwise, it only worked because the listener was confused, and we're back to morality being a special case of psychology again.
Everything is a mixture of the invalid and the valid. Why throw somethin out instead of doing it better?
Because I don't know how to do moral arguments better. So far as I can tell, they always seems to wind up either being wrong, or not being moral arguments.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T20:11:20.733Z · LW(p) · GW(p)
IMO we should have gay rights because gays want them, not because moral suasion was used successfully on people opposed to gay rights
They are not going to arrive without overcoming opposition somehow.
Even if your argument above worked, I can't envision a plausible reasoning system in which the argument is valid.
Does that mean your "because gays/women want them" isn't valid? Why offer it then?
Because I don't know how to do moral arguments better.
Because you reject them?
↑ comment by Peterdjones · 2011-05-04T12:17:42.315Z · LW(p) · GW(p)
The likely answer is "If I'm willpower-depleted, I'd do the immoral thing I prefer, but on a good day I'd have enough willpower and I'd do the moral thing. I prefer to have enough willpower to do the moral thing in general." In that case, I would have to admit that I'm in the same situation, except with a vocabulary change. I define "preference" to include everything that drives a person's behavior,
But preference itself is influenced by reasoning and experience. The preference theory focuses on proximate causes, but there are more distal ones too.
if we assume that they aren't suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I'm calling "preference" is the same as what you're calling "preference and morality"
I am not and never was using "preference" to mean something disjoint from morality. If some preferences are moral preferences, then the whole issue of morality is not disposed of by talking only about preferences. That is not an argument for nihilism or relativism. You could just as well have an epistemology where everything is talked about as belief, and the difference between true belief and false belief is ignored.
Okay, you're making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?
If by a straightforward answer you mean an answer framed in terms of some instrumental value that it fulfils, I can't do that. I can only continue to challenge the frame itself. Morality is already, in itself, the most important value. It isn't "made" important by some greater good.
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T13:42:52.547Z · LW(p) · GW(p)
I am not and never was using "preference" to mean something disjoint from morality. If some preferences are moral preferences, then the whole issue of morality is not disposed of by talking only about preferences.
There's a choice you're making here, differently from me, and I'd like to get clear on what that choice is and understand why we're making it differently.
I have a bunch of things I prefer. I'd rather eat strawberry ice cream than vanilla, and I'd rather not design higher-throughput gas chambers. For me those two preferences are similar in kind -- they're stuff I prefer and that's all there is to be said about it.
You might share my taste in ice cream and you said you share my taste in designing gas chambers. But for you, those two preferences are different in kind. The ice cream preference is not about morality, but designing gas chambers is immoral and that distinction is important for you.
I hope we all agree that the preference not to design high-throughput gas chambers is commonly and strongly held, and that it's even a consensus in the sense that I prefer that you prefer not to design high-throughput gas chambers. That's not what I'm talking about. What I'm talking about is the question of why the distinction is important to you. For example, I could define the preferences of mine that can be easily described without using the letter "s" to be "blort" preferences, and the others to be non-blort, and rant about how we all need to distinguish blort preferences from non-blort preferences, and you'd be left wondering "Why does he care?"
And the answer would be that there is no good reason for me to care about the distinction between blort and non-blort preferences. The distinction is completely useless. A given concept takes mental effort to use and discuss, so the decision to use or not use a concept is a pragmatic one: we use a concept if the mental effort of forming it and communicating about it is paid for by the improved clarity when we use it. The concept of blort preferences does not improve the clarity of our thoughts, so nobody uses it.
The decision to use the concept of "morality" is like any other decision to define and use a concept. We should use it if the cost of talking about it is paid for by the added clarity it brings. If we don't use the concept, that doesn't change whether anyone wants to build high-throughput gas chambers -- it just means that we don't have the tools to talk about the difference in kind between ice cream flavor preferences and gas chamber building preferences. If there's no use for such talk, then we should discard the concept, and if there is a use for such talk, we should keep the concept and try to assign a useful and clear meaning to it.
So what use is the concept of morality? How do people benefit from regarding ice cream flavor preferences as a different sort of thing from gas chamber building preferences?
Morality is already, in itself, the most important value.
I hope we're agreed that there are two different kinds of things here -- the strongly held preference to not design high-throughput gas chambers is a different kind of thing from the decision to label that preference as a moral one. The former influences the options available to a well-organized mass murderer, and the latter determines the structure of conversations like this one. The former is a value, the latter is a choice about how words label things. I claim that if we understand what is going on, we'll all prefer to make the latter choice pragmatically.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T14:12:24.355Z · LW(p) · GW(p)
You've written quite a lot of words but you're still stuck on the idea that all importance is instrumental importance, importance for something that doesn't need to be important in itself. You should care about morality because it is a value, and values are definitionally what is important and what should be cared about. If you suddenly started liking vanilla, nothing important would change. You wouldn't stop being you, and your new self wouldn't be someone your old self would hate. That wouldn't be the case if you suddenly started liking murder or gas chambers. You don't now like people who like those things, and you wouldn't now want to become one.
I claim that if we understand what is going on, we'll all prefer to make the latter choice pragmatically.
If we understand what is going on, we should make the choice correctly -- that is, according to rational norms. If morality means something other than the merely pragmatic, we should not label the pragmatic as the moral. And it must mean something different, because it is an open, investigatable question whether some instrumentally useful thing is also ethically good, whereas questions like "is the pragmatic useful" are trivial and tautologous.
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T14:19:55.048Z · LW(p) · GW(p)
You should care about morality because it is a value and values are definitionally what is important and what should be cared about.
You're not getting the distinction between morality-the-concept-worth-having and morality-the-value-worth-enacting.
I'm looking for a useful definition of morality here, and if I frame what you say as a definition, you seem to be defining a preference to be a moral preference if it's strongly held, which doesn't seem very interesting. If we're going to have the distinction, I prefer Eugene's proposal that a moral preference is one that's worth talking about, but we need to make the distinction in such a way that something doesn't get promoted to being a moral preference just because people are easily deceived about it. There should be true things to say about it.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T14:38:26.081Z · LW(p) · GW(p)
and if I frame what you say as a definition you seem to be defining a preference to be a moral preference if it's strongly held,
But what I actually gave as a definition is that the concept of morality is the concept of ultimate value and importance. A concept which even the nihilists need, so that they can express their disbelief in it. A concept which even social and cognitive scientists need, so they can describe the behaviour surrounding it.
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T16:38:13.312Z · LW(p) · GW(p)
You are apparently claiming there is some important difference between a strongly held preference and something of ultimate value and importance. Seems like splitting hairs to me. Can you describe how those two things are different?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T16:47:20.114Z · LW(p) · GW(p)
Just because you do have a strongly held preference, it doesn't mean you should. The difference between true beliefs and fervently held ones is similar.
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T16:58:23.914Z · LW(p) · GW(p)
Just because you do have a strongly held preference, it doesn't mean you should. The difference between true beliefs and fervently held ones is similar.
One can do experiments to determine whether beliefs are true, for the beliefs that matter. What can one do with a preference to figure out if it should be strongly held?
If that question has no answer, the claim that the two are similar seems indefensible.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T17:32:34.650Z · LW(p) · GW(p)
for the beliefs that matter
What makes them matter?
What can one do with a preference to figure out if it should be strongly held?
Reason about it?
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T17:55:13.291Z · LW(p) · GW(p)
What makes [beliefs] matter?
Empirical content. That is, a belief matters if it makes or implies statements about things one might observe.
What can one do with a preference to figure out if it should be strongly held?
Reason about it?
Can you give an example? I tried to make one at http://lesswrong.com/lw/5eh/what_is_metaethics/43fh, but it twisted around into revising a belief instead of revising a preference.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T19:58:02.469Z · LW(p) · GW(p)
Empirical content. That is, a belief matters if it makes or implies statements about things one might observe.
So it doesn't matter if it only affects what you will do?
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T20:06:25.175Z · LW(p) · GW(p)
Empirical content. That is, a belief matters if it makes or implies statements about things one might observe.
So it doesn't matter if it only affects what you will do?
If I'm thinking for the purpose of figuring out my future actions, that's a plan, not a belief, since planning is relevant when I haven't yet decided what to do.
I suppose beliefs about other people's actions are empirical.
I've lost the relevance of this thread. Please state a purpose if you wish to continue, and if I like it, I'll reply.
↑ comment by TimFreeman · 2011-05-04T01:09:38.783Z · LW(p) · GW(p)
[Morality is] the ultimate way of evaluating things... It is morally wrong to design better gas chambers.
Okay, that seems clear enough that I'd rather pursue that than try to get an answer to any of my previous questions, even if all we may have accomplished here is to trade Eugene's evasiveness for Peter's.
If you know that morality is the ultimate way of evaluating things, and you're able to use that to evaluate a specific thing, I hope you are aware of how you performed that evaluation process. How did you get to the conclusion that it is morally wrong to design better gas chambers?
Execution techniques have improved over the ages. A guillotine is more compassionate than an axe, for example, since with an axe the executioner might need a few strokes, and the experience for the victim is pretty bad between the first stroke and the last. Now we use injections that are meant to be painless, and perhaps they actually are. In an environment where executions are going to happen anyway, it seems compassionate to make them happen better. Are you saying gas chambers, specifically, are different somehow, or are you saying that designing the guillotine was morally wrong too and it would have been morally preferable to use an axe during the time guillotines were used?
Replies from: Marius↑ comment by Marius · 2011-05-04T01:20:01.027Z · LW(p) · GW(p)
I'm pretty sure he means to refer to high-throughput gas chambers optimized for purposes of genocide, rather than individual gas chambers designed for occasional use.
He may or may not oppose the latter, but improving the former is likely to increase the number of murders committed.
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T01:25:32.797Z · LW(p) · GW(p)
Agreed, so I deleted my post to avoid wasting Peter's time responding.
↑ comment by Eugine_Nier · 2011-05-02T18:49:48.040Z · LW(p) · GW(p)
Let's try a different approach.
I have spent some time thinking about how to apply the ideas of Eliezer's metaethics sequence to concrete ethical dilemmas. One problem that quickly comes up is that as PhilGoetz points out here, the distinction between preferences and biases is very arbitrary.
So the question becomes how do you separate which of your intuitions are preferences and which are biases?
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T03:15:20.720Z · LW(p) · GW(p)
[H]ow do you separate which of your intuitions are preferences and which are biases?
Well, valid preferences look like they're derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way. Biases are everything else.
I don't see how that question is relevant. I don't see any good reason for you to dodge my question about what you'd do if your preferences contradicted your morality. It's not like it's an unusual situation -- consider the internal conflicts of a homosexual Evangelist preacher, for example.
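For concreteness, here is a minimal sketch of the picture Tim is describing: preferences derived from a utility function over future world-states, with uncertainty about the future folded in as expected value. All names and numbers below are illustrative assumptions, not anyone's endorsed formalism.

```python
# Toy model: a "valid preference" is one recoverable from a utility
# function over future world-states, combined with uncertainty in the
# standard expected-value way. Purely illustrative numbers.

def expected_utility(action, outcomes, utility):
    """outcomes[action] maps each world-state to its probability given the action."""
    return sum(p * utility[state] for state, p in outcomes[action].items())

utility = {
    "ice_cream_strawberry": 1.0,   # preferred flavor
    "ice_cream_vanilla": 0.3,
    "no_ice_cream": 0.0,
}

outcomes = {
    "order_strawberry": {"ice_cream_strawberry": 0.9, "no_ice_cream": 0.1},
    "order_vanilla":    {"ice_cream_vanilla": 0.95, "no_ice_cream": 0.05},
}

# The "preference" is whatever action maximizes expected utility.
best = max(outcomes, key=lambda a: expected_utility(a, outcomes, utility))
print(best)  # -> order_strawberry
```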
Replies from: Peterdjones, wedrifid, Eugine_Nier↑ comment by Peterdjones · 2011-05-04T13:09:07.913Z · LW(p) · GW(p)
What makes your utility function valid? If that is just preferences, then presumably it is going to work circularly and just confirm your current preferences. If it works to iron out inconsistencies, or replace short-term preferences with long-term ones, that would seem to be the sort of thing that could be fairly described as reasoning.
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T14:06:51.769Z · LW(p) · GW(p)
What makes your utility function valid?
I don't judge it as valid or invalid. The utility function is a description of me, so the description either compresses observations of my behavior better than an alternative description, or it doesn't. It's true that some preferences lead to making more babies or living longer than other preferences, and one may use evolutionary psychology to guess what my preferences are likely to be, but that is just a less reliable way of guessing my preferences than from direct observation, not a way to judge them as valid or invalid.
If it works to iron out inconsistencies, or replace short term preferences with long term ones, that would seem to be the sort of thing that could be fairly described as reasoning.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn't more or less valid than one that cares about the short term. (Actually, if you only care about things that are too far away for you to effectively plan, you're in trouble, so long-term preferences can promote survival less than shorter-term ones, depending on the circumstances.)
This issue is confused by the fact that a good explanation of my behavior requires simultaneously guessing my preferences and my beliefs. The preference might say I want to go to the grocery store, and I might have a false belief about where it is, so I might go the wrong way and the fact that I went the wrong way isn't evidence that I don't want to go to the grocery store. That's a confusing issue and I'm hoping we can assume for the purposes of discussion about morality that the people we're talking about have true beliefs.
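To illustrate the "utility function as description" idea in Tim's comment above: one hypothetical way to cash out "compresses observations of my behavior better" is to score candidate utility functions by how well they predict observed choices. The softmax choice model and the data here are my assumptions, purely for illustration.

```python
import math

# Hypothetical sketch: treat a utility function as a *description* of an
# agent, and score candidate descriptions by how well they predict the
# agent's observed choices (via a softmax choice model). Invented data.

observed_choices = [
    ("strawberry", ["strawberry", "vanilla"]),
    ("strawberry", ["strawberry", "vanilla"]),
    ("vanilla",    ["strawberry", "vanilla"]),
]

candidates = {
    "likes_strawberry": {"strawberry": 1.0, "vanilla": 0.0},
    "indifferent":      {"strawberry": 0.5, "vanilla": 0.5},
}

def log_likelihood(utility, data):
    """Log-probability of the observed choices under a softmax of the utility."""
    total = 0.0
    for chosen, options in data:
        z = sum(math.exp(utility[o]) for o in options)
        total += math.log(math.exp(utility[chosen]) / z)
    return total

for name, u in candidates.items():
    print(name, round(log_likelihood(u, observed_choices), 3))
# "likes_strawberry" fits (compresses) the mostly-strawberry data better.
```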
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T14:27:34.327Z · LW(p) · GW(p)
The utility function is a description of me,
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn't more or less valid than one that cares about the short term.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
Replies from: TimFreeman↑ comment by TimFreeman · 2011-05-04T16:32:10.085Z · LW(p) · GW(p)
The utility function is a description of me,
If it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases.
I'm not saying it's a complete description of me. To describe how I think you'd also need a description of my possibly-false beliefs, and you'd also need to reason about uncertain knowledge of my preferences and possibly-false beliefs.
The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
In my model, reasoning and reflection can change beliefs and change the heuristics I use for planning. If a preference changes, then it wasn't a preference. It might have been a non-purposeful activity (the exact schedule of my eyeblinks, for example), or it might have been a conflation of a belief and a preference. "I want to go north" might really be "I believe the grocery store is north of here and I want to go to the grocery store". "I want to go to the grocery store" might be a further conflation of preference and belief, such as "I want to get some food" and "I believe I will be able to get food at the grocery store". Eventually you can unpack all the beliefs and get the true preference, which might be "I want to eat something interesting today".
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T16:37:19.107Z · LW(p) · GW(p)
I'm not saying it's a complete description of me. [etc]
That still doesn't explain what the difference between your preferences and your biases is.
If a preference changes, then it wasn't a preference.
That's rather startling. Is it a fact about all preferences that they hold from birth to death? What about brain plasticity?
Replies from: TimFreeman, CuSithBell↑ comment by TimFreeman · 2011-05-04T17:07:08.532Z · LW(p) · GW(p)
Is it a fact about all preferences that they hold from birth to death? What about brain plasticity?
It's a term we're defining because it's useful, and we can define it in a way that it holds from birth forever afterward. Tim had the short-term preference dated around age 3 months to suck mommy's breast, and Tim apparently has a preference to get clarity about what these guys mean when they talk about morality dated around age 44 years. Brain plasticity is an implementation detail. We prefer simpler descriptions of a person's preferences, and preferences that don't change over time tend to be simpler, but if that's contradicted by observation you settle for different preferences at different times.
I suppose I should have said "If a preference changes as a consequence of reasoning or reflection, it wasn't a preference". If the context of the statement is lost, that distinction matters.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-04T17:18:35.959Z · LW(p) · GW(p)
So you are defining "preference" in a way that is clearly arbitrary and possibly unempirical...and complaining about the way moral philosophers use words?
↑ comment by CuSithBell · 2011-05-04T16:51:22.446Z · LW(p) · GW(p)
That's rather startling.
I agree! Consider, for instance, taste in particular foods. I'd say that enjoying, for example, coffee, indicates a preference. But such tastes can change, or even be actively cultivated (in which case you're hemi-directly altering your preferences).
Of course, if you like coffee, you drink coffee to experience drinking coffee, which you do because it's pleasurable - but I think the proper level of unpacking is "experience drinking coffee", not "experience pleasurable sensations", because the experience being pleasurable is what makes it a preference in this case. That's how it seems to me, at least. Am I missing something?
↑ comment by wedrifid · 2011-05-04T05:04:10.671Z · LW(p) · GW(p)
and uncertainty about the future should interact with the utility function in the proper way.
"The proper way" being built in as a part of the utility function and not (necessarily) being a simple sum of the multiplication of world-state values by their probability.
↑ comment by Eugine_Nier · 2011-05-04T04:53:51.917Z · LW(p) · GW(p)
Well, valid preferences look like they're derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way.
Um, no. Unless you are some kind of mutant who doesn't suffer from scope insensitivity or any of the related biases, your uncertainty about the future doesn't interact with your preferences in the proper way until you attempt to coherently extrapolate them. It is here that the distinction between a bias and a valid preference becomes both important and very arbitrary.
Here is the example PhilGoetz gives in the article I linked above:
In Crime and punishment, I argued that people want to punish criminals, even if there is a painless, less-costly way to prevent crime. This means that people value punishing criminals. This value may have evolved to accomplish the social goal of reducing crime. Most readers agreed that, since we can deduce this underlying reason, and accomplish it more effectively through reasoning, preferring to punish criminals is an error in judgement.
Most people want to have sex. This value evolved to accomplish the goal of reproducing. Since we can deduce this underlying reason, and accomplish it more efficiently than by going out to bars every evening for ten years, is this desire for sex an error in judgement that we should erase?
I believe I answered your other question elsewhere in the thread.
↑ comment by Peterdjones · 2011-04-28T12:18:04.057Z · LW(p) · GW(p)
Rationality is the equivalent of normative morality: it is a set of guidelines for arriving at the opinions you should have, namely true ones. Epistemology is the equivalent of metaethics. It strives to answer the question "what is truth?"
↑ comment by Peterdjones · 2011-04-28T12:05:48.830Z · LW(p) · GW(p)
People clearly have opinions they act on. What makes you think we need this so-called "rationality" to tell us which opinions to have?
↑ comment by [deleted] · 2011-04-28T04:58:38.333Z · LW(p) · GW(p)
Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of... intuitions.
Intuitions are internalizations of custom, an aspect of which is morality. Our intuitions result from our long practice of observing custom. By "observing custom" I mean of course adhering to custom, abiding by custom. In particular, we observe morality - we adhere to it, we abide by it - and it is from observing morality that we gain our moral intuitions. This is a curious verbal coincidence, that the very same word "observe" applies in both cases even though it means quite different things. That is:
- Our physical intuitions are a result of observing physics (in the sense of watching attentively).
- Our moral intuitions are a result of observing morality (in the sense of abiding by).
However, discovering physics is not nearly as passive as is suggested by the word "observe". We conduct experiments. We try things and see what happens. We test the physical world. We kick the rock - and discover that it kicks back. Reality kicks back hard, so it's a good thing that children are so resilient. An adult that kicked reality as hard as kids kick it would break their bones.
And discovering morality is similarly not quite as I said. It's not really by observing (abiding by) morality that we discover morality, but by failing to observe (violating) morality that we discover morality. We discover what the limits are by testing the limits. We are continually testing the limits, though we do it subtly. But if you let people walk all over you, before long they will walk all over you, because in their interactions with you they are repeatedly testing the limits, ever so subtly. We push on the limits of what's allowable, what's customary, what's moral, and when we get push-back we retreat - slightly. Customs have to survive this continual testing of their limits. Any custom that fails the constant testing will be quickly violated and then forgotten. So the customs that have survived the constant testing that we put them through, are tough little critters that don't roll over easily. We kick customs to see whether they kick back. Children kick hard, they violate custom wildly, so it's a good thing that adults coddle them. An adult that kicked custom as hard as kids kick it would wind up in jail or dead.
Custom is "really" nothing other than other humans kicking back when we kick them. When we kick custom, we're kicking other humans, and they kick back. Custom is an equilibrium, a kind of general truce, a set of limits on behavior that everyone observes and everyone enforces. Morality is an aspect of this equilibrium. It is, I think, the more serious, important bits of custom, the customary limits on behavior where we kick back really hard, or stab, or shoot, if those limits are violated.
Anyway, even though custom is "really" made out of people, the regularities that we discover in custom are impersonal. One person's limits are pretty much another person's limits. So custom, though at root personal, is also impersonal, in the "it's not personal, it's just business" sense of the movie mobster. So we discover regularities when we test custom - much as we discover regularities when we test physical reality.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T15:22:31.820Z · LW(p) · GW(p)
Yes, but we've already determined that we don't disagree - unless you think we still do? I was arguing against observing objective (i.e. externally existing) morality. I suspect that you disagree more with Eugine_Nier.
↑ comment by Peterdjones · 2011-04-28T12:38:58.267Z · LW(p) · GW(p)
Right, and 'facts' about God. Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of... intuitions.
Which is true, and explains why it is a harder problem than physics, and less progress has been made.
Replies from: wedrifid↑ comment by Peterdjones · 2011-04-28T13:07:51.088Z · LW(p) · GW(p)
Do you sincerely believe there is no difference? If not: why not start by introspecting your own thinking on the subject?
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T16:49:57.666Z · LW(p) · GW(p)
Again, we come to this issue of not having a precise definition of "right" and "wrong".
You're dodging the questions that I asked.
How do you determine which one is accurate? What observable consequences does each one predict? What do they lead you to anticipate?
Replies from: Peterdjones
↑ comment by Peterdjones · 2011-04-28T17:03:09.554Z · LW(p) · GW(p)
I am not dodging them. I am arguing that they are inappropriate to the domain, and that not all definitions have to work that way.
Replies from: CuSithBell, NMJablonski↑ comment by CuSithBell · 2011-04-28T18:33:35.023Z · LW(p) · GW(p)
But you already have determined that one of them is accurate, right?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T18:59:32.493Z · LW(p) · GW(p)
Everything is inaccurate for some value of accurate. The point is you can't arrive at an accurate definition without a good theory, and you can't arrive at a good theory without an (inevitably inaccurate) definition.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T19:02:21.291Z · LW(p) · GW(p)
It's a problem to assert that you've determined which of A and B is accurate, but that there isn't a way to determine which of A and B is accurate.
Edited to clarify: When I wrote this, the parent post started with the line "You say that like it's a problem."
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T19:32:00.370Z · LW(p) · GW(p)
I haven't asserted that any definition of "Morality" can jump through the hoops set up by NMJ and co., but there is a widely used definition that is about as inaccurate as Ordinary Language definitions generally are.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T19:38:11.264Z · LW(p) · GW(p)
The question in this thread was not "define Morality" but "explain how you determine which of "Killing innocent people is wrong barring extenuating circumstances" and "Killing innocent people is right barring extenuating circumstances" is morally right."
(For people with other definitions of morality and / or other criteria for "rightness" besides morality, there may be other methods.)
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T20:03:34.353Z · LW(p) · GW(p)
The question was rather unhelpfully framed in Jublowskian terms of "observable consequences". I think killing people is wrong because I don't want to be killed, and I don't want to Act on a Maxim I Would Not Wish to be Universal Law.
Replies from: NMJablonski, CuSithBell↑ comment by NMJablonski · 2011-04-28T20:09:23.859Z · LW(p) · GW(p)
My name is getting all sorts of U's and W's these days.
If there was a person who decided they did want to be killed, would killing become "right"?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T20:26:12.700Z · LW(p) · GW(p)
Does he want everyone to die? Does he want to kill them against their wishes? Are multiple agents going to converge on that opinion?
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T20:30:49.976Z · LW(p) · GW(p)
What are the answers under each of those possible conditions (or, at least, the interesting ones)?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T20:54:57.577Z · LW(p) · GW(p)
Why do you need me to tell you? Under normal circumstances the normal "murder is wrong" answer will obtain -- that's the point.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T21:43:02.258Z · LW(p) · GW(p)
Why do you need me to tell you?
Because I'm trying to have a discussion with you about your beliefs?
Looking at this I find it hard to avoid concluding that you're not interested in a productive discussion - you asked a question about how to answer a question, got an answer, and refused to answer it anyway. Let me know if you wish to discuss with me as allies instead of enemies, but until and unless you do I'm going to have to bow out of talking with you on this topic.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T22:03:10.846Z · LW(p) · GW(p)
I believe murder is wrong. I believe you can figure that out if you don't know it. The point of having a non-eliminative theory of ethics is that you want to find some way of supporting the common ethical intuitions. The point of asking questions is to demonstrate that it is possible to reason about morality: if someone answers the questions, they are doing the reasoning.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-28T23:56:24.817Z · LW(p) · GW(p)
The point of having a non-eliminative theory of ethics is that you want to find some way of supporting the common ethical intuitions.
This seems problematic. If that's the case, then your ethical system exists solely to support the bottom line. That's just rationalizing, not actual thinking. Moreover, it doesn't tell you anything helpful when people have conflicting intuitions or when you don't have any strong intuition, and those are the generally interesting cases.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-29T00:09:14.655Z · LW(p) · GW(p)
A system that could support any conclusion would be useless, and a system that couldn't support the strongest and most common intuitions would be pretty incredible. A system that doesn't suffer from quodlibet isn't going to support both of a pair of contradictory intuitions. And that's pretty well the only way of resolving such issues. The rightness and wrongness of feelings can't help.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-29T00:13:49.911Z · LW(p) · GW(p)
So to make sure I understand: you are trying to make a system that agrees with and supports all your intuitions, and you hope that the system will then give unambiguous answers where you don't have intuitions?
I don't think that you realize how frequently our intuitions clash -- not just the intuitions of different people, but even one's own intuitions (for most people at least). Consider for example train car problems. Most people, whether or not they would pull the lever or push the fat person, feel some intuition for either solution. And train problems are by far not the only example of a moral dilemma that causes that sort of issue. Many mundane, real-life situations, such as abortion, euthanasia, animal testing, and the limits of consent, cause serious clashes of intuitions.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-29T00:35:13.699Z · LW(p) · GW(p)
I want a system that supports core intuitions. A consistent system can help to disambiguate intuitions.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-29T00:37:02.577Z · LW(p) · GW(p)
And how do you decide which intuitions are "core intuitions"?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-29T00:50:14.231Z · LW(p) · GW(p)
There's a high degree of agreement about them. They seem particularly clear to me.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-29T00:51:56.013Z · LW(p) · GW(p)
Can you give some of those? I'd be curious what such a list would look like.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-29T15:07:21.541Z · LW(p) · GW(p)
E.g., murder, stealing.
Replies from: NMJablonski, JoshuaZ↑ comment by NMJablonski · 2011-04-29T16:52:57.847Z · LW(p) · GW(p)
So what makes an intuition a core intuition and how did you determine that your intuitions about murder and stealing are core?
↑ comment by CuSithBell · 2011-04-28T20:24:53.973Z · LW(p) · GW(p)
In this post: "How do you determine which one is accurate?"
In your response further down the thread: "I am not dodging [that question]. I am arguing that [it is] inappropriate to the domain [...]"
And then my post: "But you already have determined that one of them is accurate, right?"
That question was not one phrased in the way you object to, and yet you still haven't answered it.
Though, at this point it seems one can infer (from the parent post) that the answer is something like "I reason about which principle is more beneficial to me."
↑ comment by NMJablonski · 2011-04-28T17:06:00.955Z · LW(p) · GW(p)
Any belief you have about the nature of reality, that does not inform your anticipations in any way, is meaningless. It's like believing in a god which can never be discovered. Good for you, but if the universe will play out exactly the same as if it wasn't there, why should I care?
Furthermore, why posit the existence of such a thing at all?
Replies from: None, Peterdjones↑ comment by [deleted] · 2011-04-28T21:03:17.308Z · LW(p) · GW(p)
Any belief you have about the nature of reality, that does not inform your anticipations in any way, is meaningless.
On a tangent - I think the subjectivist flavor of that is unfortunate. You're echoing Eliezer's Making Beliefs Pay Rent, but the anticipations that he's talking about are "anticipations of sensory experience". Ultimately, we are subject to natural selection, so maybe a more important rent to pay than anticipation of sensory experiences is not being removed from the gene pool. So we might instead say, "any belief you have about the nature of reality, that does not improve your chances of survival in any way, is meaningless."
Elsewhere, in his article on Newcomb's paradox, Eliezer says:
Rational agents should WIN.
Survival is ultimate victory.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T21:13:06.543Z · LW(p) · GW(p)
I don't generally disagree with anything you wrote. Perhaps we miscommunicated.
"any belief you have about the nature of reality, that does not improve your chances of survival in any way, is meaningless."
I think that would depend on how one uses "meaningless" but I appreciate wholeheartedly the sentiment that a rational agent wins, with the caveat that winning can mean something very different for various agents.
↑ comment by Peterdjones · 2011-04-28T17:24:05.595Z · LW(p) · GW(p)
Moral beliefs aren't beliefs about moral facts out there in reality, they are beliefs about what I should do next. "What should I do" is an orthogonal question to "what can I expect if I do X". Since I can reason morally, I am hardly positing anything without warrant.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T17:25:18.656Z · LW(p) · GW(p)
You just bundled up the whole issue, shoved it inside the word "should" and acted like it had been resolved.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T17:47:17.842Z · LW(p) · GW(p)
I have stated several times that the whole issue has not been resolved. All I'm doing at the moment is refuting your over-hasty generalisation that:
"morality doesn't work like empirical prediction, so ditch the whole thing".
It doesn't work like the empiricism you are used to because it is, in broad brush strokes, a different thing that solves a different problem.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-28T17:57:57.036Z · LW(p) · GW(p)
It doesn't work like the empiricism you are used to because it is, in broad brush strokes, a different thing that solves a different problem.
Can you recognize that from my position it doesn't work like the empiricism I'm used to because it's almost entirely nonsensical appeals to nothing, arguing by definitions, and the exercising of the blind muscles of eld philosophy?
I am unpersuaded that there exists a set of correct preferences. You have, as far as I can see, made no effort to persuade me, but rather just repeatedly asserted that there are and asked me questions in terms that you refuse to define. I am not sure what you want from me in this case.
Why should I accept your bald assertions here?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T18:22:12.004Z · LW(p) · GW(p)
You may be entirely of the opinion that it is all stuff and nonsense: I am only interested in what can be rationally argued.
I don't think you think it works like empiricism. I think you have tried to make it work like empiricism and then given up. "I have a hammer in my hand, and it won't work on this 'screw' of yours, so you should discard it".
People can and do reason about what preferences they should have, and such reasoning can be as objective as mathematical reasoning, without the need for a special arena of objects.
↑ comment by prase · 2011-04-28T15:26:38.192Z · LW(p) · GW(p)
What is weasel-like with "near the surface of the earth"?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-29T00:50:15.832Z · LW(p) · GW(p)
In this context, it's as "weasel-like" as "innocent". In the sense that both are fudge factors you need to add to the otherwise elegant statement to make it true.
↑ comment by JGWeissman · 2011-04-28T03:15:32.533Z · LW(p) · GW(p)
I'm not sure it's possible for my example to be wrong any more than it's possible for 2+2 to equal 3.
What would it take to convince you your example is wrong?
Note how "2+2=4" has observable consequences:
Suppose I got up one morning, and took out two earplugs, and set them down next to two other earplugs on my nighttable, and noticed that there were now three earplugs, without any earplugs having appeared or disappeared - in contrast to my stored memory that 2 + 2 was supposed to equal 4. Moreover, when I visualized the process in my own mind, it seemed that making XX and XX come out to XXXX required an extra X to appear from nowhere, and was, moreover, inconsistent with other arithmetic I visualized, since subtracting XX from XXX left XX, but subtracting XX from XXXX left XXX. This would conflict with my stored memory that 3 - 2 = 1, but memory would be absurd in the face of physical and mental confirmation that XXX - XX = XX.
Does your example (or another you care to come up with) have observable consequences?
↑ comment by Amanojack · 2011-04-28T02:40:14.850Z · LW(p) · GW(p)
I don't think you can explicate such a connection, especially not without any terms defined. In fact, it is just utterly pointless to try to develop a theory in a field that hasn't even been defined in a coherent way. It's not like it's close to being defined, either.
For example, "Is abortion morally wrong?" combines about 12 possible questions into it because it has a least that many interpretations. Choose one, then we can study that. I just can't see how otherwise rationality-oriented people can put up with such extreme vagueness. There is almost zero actual communication happening in this thread in the sense of actually expressing which interpretation of moral language anyone is taking. And once that starts happening it will cover way too many topics to ever reach a resolution. We're simply going to have to stop compressing all these disparate-but-subtly-related concepts into a single field, taboo all the moralist language, and hug some queries (if any important ones actually remain).
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T03:40:43.464Z · LW(p) · GW(p)
I don't think you can explicate such a connection, especially not without any terms defined. In fact, it is just utterly pointless to try to develop a theory in a field that hasn't even been defined in a coherent way. It's not like it's close to being defined, either.
In any science I can think of, people began developing it using intuitive notions, only being able to come up with definitions after substantial progress had been made.
↑ comment by TimFreeman · 2011-04-28T02:49:02.934Z · LW(p) · GW(p)
...given our current state of knowledge about meta-ethics I can give no better definition of the words "should"/"right"/"wrong" than the meaning they have in everyday use.
You can assume that the words have no specific meaning and are used to signal membership in a group. This explains why the flowchart in the original post has so many endpoints about what morality might mean. It explains why there seems to be no universal consensus on what specific actions are moral and which ones are not. It also explains why people have such strong opinions about morality despite the fact that statements about morality are not subject to empirical validation.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T03:02:52.273Z · LW(p) · GW(p)
You can assume that the words have no specific meaning and are used to signal membership in a group.
One could make the same claim about words like "exists"/"true"/"false". Especially if our knowledge of science were at the same state as our knowledge of ethics.
Just as the words "exists"/"true"/"false" had a meaning even before the development of science and Bayesianism, even though a lot of people used them to signal group affiliation, I believe the words "should"/"right"/"wrong" have a meaning even though a lot of people use them to signal group affiliation.
Replies from: TimFreeman, Amanojack↑ comment by TimFreeman · 2011-04-28T03:14:58.502Z · LW(p) · GW(p)
But science isn't about words like "exist", "true", or "false". Science is about words like "Frozen water is less dense than liquid water". I can point at frozen water, liquid water, and a particular instance of the former floating on the latter. Scientific claims were well-defined even before there was enough knowledge to evaluate them. I can't point at anything for claims about morality, so the analogy between ethics and science is not valid.
Come on people. Argument by analogy doesn't prove anything even when the analogies are valid! Stop it.
If you don't like the hypothesis that words like "should", "right", and "wrong" are social signaling, give some other explanation of the evidence that is simpler. The evidence in question is:
The flowchart in the original post has many endpoints about what morality might mean.
There seems to be no universal consensus on what specific actions are moral and which ones are not.
People have strong opinions about morality despite the fact that statements about morality are not subject to empirical validation.
↑ comment by Peterdjones · 2011-04-28T13:27:53.433Z · LW(p) · GW(p)
You can't point at anything for claims about pure maths either. That something is not empirical does not automatically invalidate it.
Morality is not just social signalling, because it makes sense to say some social signals ("I am higher status than you because I have more slaves") are morally wrong.
Replies from: wedrifid↑ comment by wedrifid · 2011-04-28T13:47:26.921Z · LW(p) · GW(p)
Morality is not just social signalling, because it makes sense to say some social signals ("I am higher status than you because I have more slaves") are morally wrong.
That conclusion does not follow. Saying you have slaves is a signal about morality and, depending on the audience, often a bad signal.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T13:59:23.262Z · LW(p) · GW(p)
Note that there is a difference between "morality is about signalling" and "signalling is about morality". If I say "I am high status because I live a moral life" I am blatantly using morality to signal, but it doesn't remotely follow from that there is nothing to morality except signalling. It could be argued that, morally speaking, I should pursue morality for its own sake and not to gain status.
Replies from: wedrifid, wedrifid↑ comment by Eugine_Nier · 2011-04-28T03:31:10.463Z · LW(p) · GW(p)
But science isn't about words like "exist", "true", or "false". Science is about words like "Frozen water is less dense than liquid water".
Only because the force of the word "exists" is implicit in the indicative mood of the word "is".
Come on people. Argument by analogy doesn't prove anything even when the analogies are valid! Stop it.
But they can help explain what people mean, and they can show that an argument proves too much.
- The flowchart in the original post has many endpoints about what morality might mean.
I could draw an equally complicated flow chart about what "truth" and "exists"/"is" might mean.
- There seems to be no universal consensus on what specific actions are moral and which ones are not.
The amount of consensus is roughly the same as the amount of consensus there was before the development of science about which statements are true and which aren't.
- People have strong opinions about morality despite the fact that statements about morality are not subject to empirical validation.
People had strong opinions about truth before the concept of empirical validation was developed.
Replies from: Amanojack↑ comment by Amanojack · 2011-04-28T04:20:08.365Z · LW(p) · GW(p)
Your criticisms of "truth" are not so far off, but you're essentially saying that parts of science are wrong so you can be wrong, too. No, actually, you think it is OK to flounder around in the field when you're just starting out. Sure, but not when you don't even know what it is you're supposed to be studying - if anything! This is not analogous to physics, where the general goal was clear from the very beginning: figure out what physical mechanisms underlie macro-scale phenomena, such as the hardness of metal, conductivity, magnetic attraction, gravity, etc.
You're just running around to whatever you can grab onto to avoid the main point that there is nothing close to a semblance of delineation of what this "field" is actually about, and it is getting tiresome.
Replies from: Peterdjones, Eugine_Nier↑ comment by Peterdjones · 2011-04-28T12:28:04.641Z · LW(p) · GW(p)
I think the claim that ethicists don't know at all what they are studying is unfounded.
↑ comment by Eugine_Nier · 2011-04-28T04:34:03.206Z · LW(p) · GW(p)
This is not analogous to physics, where the general goal was clear from the very beginning: figure out what physical mechanisms underlie macro-scale phenomena, such as the hardness of metal, conductivity, magnetic attraction, gravity, etc.
I believe this is hindsight bias.
Replies from: Amanojack↑ comment by Amanojack · 2011-04-28T03:20:09.774Z · LW(p) · GW(p)
That is sort of half true, but it feels like you're just saying that to say it, as there have been criticisms of this same line of reasoning that you haven't answered.
How about the fact that beliefs about physics actually pay rent? Do moral ones?
Replies from: wedrifid, Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T03:34:18.110Z · LW(p) · GW(p)
How about the fact that beliefs about physics actually pay rent? Do moral ones?
Not in the sense of anticipated experience, however they do inform our actions.
↑ comment by TimFreeman · 2011-04-28T02:55:22.036Z · LW(p) · GW(p)
My point is that NMJablonski's request is about as reasonable as demanding that someone arguing for the existence of a "Correct Theory of Physics" provide a clear reductionist description of what one means while tabooing words like 'physics', 'reality', 'exists', 'experience', etc.
No, the reductionist description of the Correct Theory of Physics eventually involves pointing at lab equipment. There is no lab equipment for morality, so the analogy is not valid.
Replies from: Eugine_Nier, Peterdjones↑ comment by Eugine_Nier · 2011-04-28T03:11:53.673Z · LW(p) · GW(p)
No, the reductionist description of the Correct Theory of Physics eventually involves pointing at lab equipment. There is no lab equipment for morality, so the analogy is not valid.
I could point a gun to your head and ask you to explain why I shouldn't pull the trigger.
Replies from: TimFreeman, Desrtopa↑ comment by TimFreeman · 2011-04-28T03:27:49.146Z · LW(p) · GW(p)
I could point a gun to your head and ask you to explain why I shouldn't pull the trigger.
That scenario doesn't lead to discovering the truth. If I deceive you with bullshit and you don't pull the trigger, that's a victory for me. I invite you to try again, but next time pick an example where the participants are incentivised to make true statements.
ETA: ...unless the truth we care about is just which flavors of bullshit will persuade you not to pull the trigger. If that's what you mean by morality, you probably agree with me that it is just social signaling.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-28T03:39:23.420Z · LW(p) · GW(p)
Well you could just as easily use your lab equipment to deceive me with bullshit.
↑ comment by Desrtopa · 2011-05-01T09:33:50.583Z · LW(p) · GW(p)
And if he gave a true moral argument you would have to accept it?
How would you distinguish a true argument from a merely persuasive one?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-05-01T18:54:32.252Z · LW(p) · GW(p)
Like I mentioned elsewhere in this thread, the "No Universally Compelling Argument" post you cite applies equally well to physical and even mathematical facts (in fact that was what Eliezer was mainly referring to in that post).
In fact, the main point of that sequence is that just because there are no universally compelling arguments doesn't mean truth doesn't exist. As Eliezer mentions in where recursive justification hits bottom:
Now, one lesson you might derive from this, is "Don't be born with a stupid prior." This is an amazingly helpful principle on many real-world problems, but I doubt it will satisfy philosophers.
Replies from: Desrtopa
↑ comment by Desrtopa · 2011-05-01T19:19:02.507Z · LW(p) · GW(p)
A formal proof is still a proof though, although nothing mandates that a listener must accept it. A mind can very well contain an absolute dismissal mechanism or optimize for something other than correctness.
We can understand what sort of assumptions we're making when we derive information from mathematical axioms, or the axioms of induction, and how further information follows from that. But what assumptions are we making that would allow us to extrapolate absolute moral facts? Does our process give us any way to distinguish them from preferences?
↑ comment by Peterdjones · 2011-04-28T13:49:18.507Z · LW(p) · GW(p)
That morality is not straightforwardly empirical is part of why it is inappropriate to demand concrete definitions.
Replies from: None↑ comment by [deleted] · 2011-04-28T14:07:07.183Z · LW(p) · GW(p)
Do you believe in God? If I defended the notion of God in a similar way -- it is not straightforwardly empirical, it's inappropriate to demand concrete definitions, it's not under the domain of science, just because you can't define it and measure it doesn't mean it doesn't exist -- would you find that persuasive?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T14:11:17.541Z · LW(p) · GW(p)
But I am only defending the idea that morality means something. Atheists think "God" means something. "uncountable set" means something even if the idea is thoroughly non-concrete.
Replies from: None↑ comment by [deleted] · 2011-04-28T14:23:15.019Z · LW(p) · GW(p)
Sure, but few-to-no atheists would say something like "'God' means something, but exactly what is an open problem."
The idea of someone refusing to say what they mean by "uncountable set" is even stranger.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T14:31:40.744Z · LW(p) · GW(p)
All atheists have to adopt a broad definition of God, or else they would only be disbelieving in the Seventh-day Adventist God, or whatever... i.e., they would believe in all deities except one, which is more than the average believer.
Replies from: TheOtherDave, None↑ comment by TheOtherDave · 2011-04-28T15:07:20.865Z · LW(p) · GW(p)
This gets silly.
"Do you believe in woojits?" Well, no, I don't.
"Ah, well, if you disbelieve in woojits, then you must know what woojits are! So, what are woojits?" I have no idea.
"But how is that possible? If you don't have a definition for woojits, on what basis do you reject belief in them?" Having a well-defined notion of something is a prerequisite for belief in it; I don't have a well-defined notion of woojits; therefore I don't believe in woojits.
"No, no. You're confused. All woojit-disbelievers have to adopt a broad definition of woojits in order to disbelieve in them; otherwise they would merely disbelieve in a specific woojit." (shrug) OK, if you like, I have a broad definition of woojit... so broad, in fact, that it is effectively identical to my definition of all the other concepts I don't believe in and haven't thought about, which is the overwhelming majority of all possible concepts. For my part, I consider this equivalent to not having a definition of woojit at all.
As I say, this gets silly. It's just arguing about definitions of words.
Now, I would agree that atheists who grow up in theist cultures do have a definition of God, though I disagree with you that it's necessarily broad: I know at least one atheist who was raised Roman Catholic, for example, and the god he disbelieves in is the Roman Catholic god of his youth, and the idea that "God" might conceivably refer to anything else just doesn't have a lot of meaning to him.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T15:12:11.494Z · LW(p) · GW(p)
If you don't know what woojits are, you shouldn't jump to the conclusion that you don't believe in them. That is a mistake of rationality.
If your RC has concluded that he is an atheist without even considering other gods, that is a mistake of rationality too.
Replies from: CuSithBell, TheOtherDave↑ comment by CuSithBell · 2011-04-28T15:20:52.441Z · LW(p) · GW(p)
But earlier you indicated that asking what a woojit is requires accepting the notion of woojits as coherent.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T15:32:15.878Z · LW(p) · GW(p)
No, I said that asking about the nature of moral claims means "moral" has some prima facie meaning. "woojit" is a made up word with no prima facie meaning. Not analogous.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-28T15:38:19.250Z · LW(p) · GW(p)
Replace woojit then with boojum and the point still goes through.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T15:44:13.469Z · LW(p) · GW(p)
It doesn't still go through, since it did not in the first place. It's a concrete fact that you can look up "moral" in a dictionary, for all that what you read isn't very useful.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-28T15:48:36.468Z · LW(p) · GW(p)
How is that relevant? I don't see why presence in a dictionary matters. But even if it did, boojum is in some dictionaries and encyclopedias too. It is a type of snark.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T15:54:28.997Z · LW(p) · GW(p)
It's only in some, and not all, dictionaries because it is a made-up word that is supposed to be ill defined and puzzling. Some lexicographers feel that readers need to be advised that when they encounter this word, it is being used to flag "here is something strange and meaningless".
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-28T16:03:31.114Z · LW(p) · GW(p)
So what matters then is if all dictionaries have it? Why does that matter? Does this mean we couldn't have this discussion before dictionaries were invented? Did the nature of morality change with the invention of a dictionary? Moreover, if one got every dictionary to include "boojum" and "snark" would that then make it different?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T16:16:00.132Z · LW(p) · GW(p)
If a word is defined in all dictionaries, then the claim that it is completely meaningless is extraordinary and poorly motivated. Dictionaries are of course only significant because they make usage concrete.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-28T16:24:38.313Z · LW(p) · GW(p)
If a word is defined in all dictionaries, then the claim that it is completely meaningless is extraordinary and poorly motivated
The claim was about incoherence, not about being "completely meaningless", and I fail to see how motivation is relevant, or how you get anything about a claim being poorly motivated from this. If you prefer a different analogy, consider such terms as transubstantiation, consubstantiation, homoousion, hypostatic union, kerygma and modalism. Similarly, in a Hebrew dictionary you will have all ten Sephirot defined (Keter, Chochmah, etc.). Is it extraordinary and poorly motivated to say that these kabbalistic terms are incoherent?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T16:42:23.841Z · LW(p) · GW(p)
The point about motivation is about where burdens lie.
The discussion so far has been about the accusation that somebody somewhere is culpably refusing to define "morality". This is the first mention of incoherence.
"incoherent" is often used as a loose synonym for "I don't like it". That is not a useful form of argument. The examples of "incoherent" concepts you gave are a mixed bag of concepts ranging from the well defined but false, to the well defined but ungrounded, to the ill defined. If you want to say what specific kind of incoherence "morality" has IYO, feel free.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-04-28T17:47:48.501Z · LW(p) · GW(p)
The point about motivation is about where burdens lie.
How are motivations relevant to where burdens lie?
This is the first mention of incoherence.
Really? So, what about here?
The examples of "incoherent" concepts you gave are a mixed bag of concepts ranging from the well defined but false, to the well defined but ungrounded, to the ill defined. If you want to say what specific kind of incoherence "morality" has IYO, feel free.
You seem confused about which argument CuSithBell is making. The argument is not that morality is fundamentally incoherent or meaningless, but that most definitions of it fall into those categories, and that our common intuition is not sufficient to have useful discussions about it, so you need to supply a definition for what you mean. So far, you seem to have refused to do that. Do you see the distinction?
↑ comment by TheOtherDave · 2011-04-28T15:54:42.778Z · LW(p) · GW(p)
I'm not really sure what a "mistake of rationality" is, or how it differs from simply being mistaken about something.
That said, I would agree with you that my Roman Catholic atheist friend is not arriving at his atheism in a particularly rational way.
WRT woojits, I'm not jumping to any conclusions: I arrived at that conclusion step-by-step. Again: "Having a well-defined notion of something is a prerequisite for belief in it; I don't have a well-defined notion of woojits; therefore I don't believe in woojits." You're free to disagree with any part of that or all of it, but I'd prefer you didn't simply ignore it.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T16:04:49.811Z · LW(p) · GW(p)
A mistake of rationality is quite different from a perceptual error, for instance. It's even different from being wrong, since one can be right for irrational reasons.
"Having a well-defined notion of something is a prerequisite for belief in it
I disagree. I believe in consciousness, but don't have a well defined notion of it.
I don't have a well-defined notion of woojits
On the one hand, "woojit" might be intended as a synonym for something you do believe in. On the other hand, if it is meaningless, "woojits don't exist" is meaningless. Either way, you should not conclude that woojits don't exist because you don't know what they are.
Replies from: TheOtherDave, DSimon↑ comment by TheOtherDave · 2011-04-28T17:38:55.466Z · LW(p) · GW(p)
you should not conclude that woojits don't exist because you don't know what they are
Agreed.
↑ comment by [deleted] · 2011-04-28T14:39:23.121Z · LW(p) · GW(p)
I probably don't understand what you mean.
I think that it's easy to be an atheist -- i.e. one doesn't have to make any difficult definitions or arguments to arrive at atheism, and those easy definitions and arguments are correct. If you think it's harder than I do, that would be interesting and could explain why we have such different opinions here.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T14:46:39.281Z · LW(p) · GW(p)
Fine. Then the atheist who doesn't have a difficult definition of God isn't culpably refusing to explain her "new idea", and someone who thinks there is something to be said about morality can stick with the vanilla definition that morality is Right and Wrong and Such.
↑ comment by NMJablonski · 2011-04-27T22:52:34.532Z · LW(p) · GW(p)
A correct theory of physics would inform my anticipations.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-27T22:54:09.739Z · LW(p) · GW(p)
Please, taboo "anticipations".
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-27T22:57:20.401Z · LW(p) · GW(p)
Replace anticipations with:
My ability, as a mind (subjective observer), to construct an isomorphism in memory that corresponds to future experiences.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-27T23:00:06.608Z · LW(p) · GW(p)
What's an "isomorphism in memory"? What are "future experiences"? And what does it mean for them to "correspond"?
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-27T23:04:50.584Z · LW(p) · GW(p)
I would be happy to continue down this line a ways longer if you would like, and we could get all the way down to the two of us in the same physical location rebuilding the concept of induction. I am confident that if necessary we could do that for "anticipations" and build our way back up. I am not confident that "morality" as it has been used here actually connects to any solid surface in reality, unless it ends up meaning the same thing as "preferences".
Do you disagree?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-04-27T23:14:02.569Z · LW(p) · GW(p)
I am confident that if necessary we could do that for "anticipations" and build our way back up.
In that case maybe we should continue a bit longer until you're disabused of that belief. What I suspect will happen is that you'll continue to attempt to define your words in terms of more and more tenuous abstractions until the words you're using really are almost meaningless.
↑ comment by Peterdjones · 2011-04-27T22:34:30.644Z · LW(p) · GW(p)
I think "X is what the correct theory of X says" is true for all X. The Correct Theory can say "Nothing", of course.
↑ comment by Cyan · 2011-04-27T22:09:04.041Z · LW(p) · GW(p)
I understand English. Please proceed. (I can't speak for the other participants, but I infer that they understand English as well.)
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T22:13:56.583Z · LW(p) · GW(p)
Some of them claim not to understand some common words. If that stretches to "define" and "mean", etc., the explanatory effort will be wasted.
Replies from: Cyan, NMJablonski↑ comment by Cyan · 2011-04-27T22:29:19.708Z · LW(p) · GW(p)
Why not try this: imagine an inquisitive nine-year-old asked you what you meant by "morality"; such a nine-year-old might not know what "define" means, but I expect you wouldn't refuse to explain morality on those grounds.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T22:37:17.716Z · LW(p) · GW(p)
I would only have to point to the distinction between Good Things and Naughty Things which all children have drummed into them from a much earlier age. That is what makes the claim not to have an ordinary-language understanding of morality so unlikely.
Replies from: Cyan↑ comment by Cyan · 2011-04-27T22:47:28.492Z · LW(p) · GW(p)
Imagine your nine-year-old interlocutor pointing out that not all children have the same Good Things and Naughty Things drummed into them.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T22:56:40.641Z · LW(p) · GW(p)
So? You seem to think I am arguing for one particular theory.
Replies from: Cyan↑ comment by Cyan · 2011-04-28T01:48:52.869Z · LW(p) · GW(p)
Something is morally right if it fulfils the Correct Theory of Morality. I'm not claiming to have that.
Because of the above, I think you are making a claim that a singular Correct Theory of Morality exists. How would you explain that to a nine-year-old? That's the discussion we could be having.
↑ comment by NMJablonski · 2011-04-27T22:16:18.090Z · LW(p) · GW(p)
You continue to misrepresent my position.
↑ comment by NMJablonski · 2011-04-27T21:13:43.625Z · LW(p) · GW(p)
I have not been offering one.
I have been requesting one.
I don't see any substantive, real world connection to words like "good" or "moral" in this context. I am assuming you do mean something real by them, and I am asking you to convey that meaning by using simpler words that we both already understand in concrete terms.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T21:42:57.293Z · LW(p) · GW(p)
And I think you are as capable as anyone else of seeing the ordinary meanings of these terms. There is no guarantee that they are definable in simpler terms or in concrete terms, since it is likely that some concepts are basic or abstract. You have an unusual inability to understand these terms, and an unlikely background theory of meaning. I think those two facts are connected.
Replies from: NMJablonski↑ comment by NMJablonski · 2011-04-27T21:52:03.937Z · LW(p) · GW(p)
I think you will find my thoughts on this matter are relatively common in this community.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T21:53:49.107Z · LW(p) · GW(p)
But not in the wider world.
↑ comment by NMJablonski · 2011-04-27T20:02:49.409Z · LW(p) · GW(p)
Alright sport. If you're unwilling to explain, you can go on being an amateur ethicist and I'll resume my policy of ignoring the field until something interesting happens.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T20:15:58.565Z · LW(p) · GW(p)
And I will continue with my policy of not explaining things any adult English speaker knows.
Replies from: CuSithBell, Amanojack, prase↑ comment by CuSithBell · 2011-04-27T20:19:26.601Z · LW(p) · GW(p)
It turned out you were wrong about this! However you'd like it phrased - you did an experiment that failed to confirm your hypothesis, you need to Notice Your Surprise, etc. - you should update on this information.
↑ comment by Amanojack · 2011-04-27T21:14:57.533Z · LW(p) · GW(p)
Refusal to define the key terms that make or break your argument never ends well.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-27T21:38:53.725Z · LW(p) · GW(p)
What argument? I have so far said little more than that the claim that "morality" is meaningless in ordinary English is unlikely. I don't need anything more than an ordinary dictionary definition for that.
↑ comment by prase · 2011-04-28T15:13:10.044Z · LW(p) · GW(p)
things any adult English speaker knows
... while no two adult English speakers agree on what precisely those things are.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T15:22:11.402Z · LW(p) · GW(p)
...although they will agree approximately. "It's not maximally precise from the get-go" is a generalised counterargument.
Replies from: JoshuaZ↑ comment by Amanojack · 2011-04-27T15:02:45.151Z · LW(p) · GW(p)
It just depends on whether "should" is interpreted as "what would best fulfill my wants now" or "what would best fulfill your wants now" (or as something else entirely).
We can't make sense of ethical language until we realize different people mean different things by it.
Replies from: Gray, Clippy, CuSithBell↑ comment by Gray · 2011-04-27T16:52:17.801Z · LW(p) · GW(p)
And that's what morality always was in the first place. It's a way of getting other people to do otherwise than what they wanted to do. No one would be convinced by "I don't want you to kill people", but if you can convince someone that "It is wrong to kill people", then you've created conflict in that person's desires.
I wonder, in the end, if people here truly want to "be rational" about morality. Myself, I'm not rational about morality, I go along with it. I don't critique it in my personal life. For instance, I refuse to murder someone, no matter how rational it might be to murder someone.
Stick to epistemic rationality, and instrumental rationality, but avoid at all costs normative rationality, is my opinion.
Replies from: None, Amanojack↑ comment by [deleted] · 2011-04-27T17:20:34.605Z · LW(p) · GW(p)
And that's what morality always was in the first place. It's a way of getting other people to do otherwise than what they wanted to do. No one would be convinced by "I don't want you to kill people", but if you can convince someone that "It is wrong to kill people", then you've created conflict in that person's desires.
This is a widespread but mistaken theory of morality. After all, we don't - and can't - convincingly say that just any old thing is "wrong". Here, I'll alternate between saying that actually wrong things are wrong, and saying that random things that you don't want are wrong.
Actually wrong: "it's wrong to kill people." Yup, it is. You just don't want it: "it's wrong for you to arrest me just because I stabbed this innocent bystander to death." Yeah, right. Actually wrong: "it's wrong to mug people." No kidding. You just don't want it: "it's wrong for you to lock your door when you leave the house, because it's wrong for you to do anything to prevent me from coming into your house and taking everything you own to sell on the black market". Not convincing.
If there were nothing more to things being wrong than that you use the word "wrong" to get people to do things, then there would be no difference between these four attempts to get people to do something. But there is: in the first and third case, the claim that the action is wrong is true (and therefore makes a convincing argument). In the second and fourth case, the claim is false (and therefore makes for an unconvincing argument).
Sure, you can use the word "wrong" to get people to do things that you want them to do, but you can use a lot of words for that. For example, if you're somebody's mother and you want them to avoid driving when they're very sleepy, you can tell them that it's "dangerous" to drive in that condition. But as with the word "wrong", you can't use the word "dangerous" for just any situation, because it's not true in just any situation. When a proposed action is really dangerous - or really wrong - then you can use that fact to convince them not to pursue that action. But it's still a fact, independent of whether you use it to get other people to do things you want.
Replies from: Amanojack, wedrifid↑ comment by Amanojack · 2011-04-27T20:33:52.461Z · LW(p) · GW(p)
Objective ethics on LW? I'm a little shocked. This whole post is basically argument from popularity (perhaps more accurate to call it argument from convincingness). Judgments of valuation may be universal or quasi-universal, but they are always subjective. Words like "right" and "wrong" (and "innocent" and "own") and other objective moralistic terms obscure this, so let me do some un-obscuring.
If there were nothing more to things being wrong than that you use the word "wrong" to get people to do things, then there would be no difference between these four attempts to get people to do something. But there is: in the first and third case, the claim that the action is wrong is true (and therefore makes a convincing argument). In the second and fourth case, the claim is false (and therefore makes for an unconvincing argument).
You have this backwards: The claim makes a convincing argument (to you and many others), therefore you call the claim "right"; or the claim makes an unconvincing argument against the action, therefore you call the claim "wrong."
Actually wrong: "it's wrong to kill people." Yup, it is. You just don't want it: "it's wrong for you to arrest me just because I stabbed this innocent bystander to death."
Notice you had to tuck in the word "innocent," which already implies your conclusion that it is "actually wrong" to harm the bystander.
Actually wrong: "it's wrong to mug people." No kidding. You just don't want it: "it's wrong for you to lock your door when you leave the house, because it's wrong for you to do anything to prevent me from coming into your house and taking everything you own to sell on the black market".
Here you used the word "own," which again already implies your conclusion that it is wrong to steal it. Both examples are purely circular. Most people are disgusted by killing and theft, and they may be counterproductive from most people's points of view, but that is just about all we can say about the matter - and all we need to say. We are disgusted, so we ban such actions.
Moral right and wrong are not objective facts. The fact that you and I subjectively experience a moral reaction to killing and theft may be an objective fact, but the wrongness itself is not objective, even though it may be universal or near-universal (that is, even though almost everyone else may feel the same way). Universal subjective valuation is not objective valuation (this latter term is, I contend, completely meaningless - unless someone can supply a useful definition).
Although he was speaking in the context of economics, Ludwig von Mises gave the most succinct explanation of why all valuation is subjective when he said, "We originally want or desire an object not because it is agreeable or good, but we call it agreeable or good because we want or desire it."
Replies from: None↑ comment by [deleted] · 2011-04-27T21:55:24.216Z · LW(p) · GW(p)
You have this backwards: The claim makes a convincing argument (to you and many others), therefore you call the claim "right"; or the claim makes an unconvincing argument against the action, therefore you call the claim "wrong."
You could say that about any word in the English language. Let's try this with the word "rain". On many occasions, a person may say "it's raining and therefore you should take an umbrella". On some occasions this claim will be false and people will know that it's false (e.g. because they looked out a window and saw that it wasn't raining), and so the argument will not be convincing.
What you're doing here can be applied to this rain scenario. You could say:
The claim makes a convincing argument (to you and many others), therefore you call the claim "right"; or the claim makes an unconvincing argument against the action, therefore you call the claim "wrong."
That is, the claim that it's raining makes a convincing argument on some occasions, and on those occasions you call the claim "right". On other occasions, the claim makes an unconvincing argument, and on those occasions you call the claim "wrong".
So there, we've applied your theory about the concept of morality, to the concept of rain. Your theory could equally well be applied to any concept at all. That is, your theory is that when we are convinced by arguments that employ claims about morality, then we call the claims "right". But you could equally well come up with the theory that when we are convinced by arguments that employ claims about rain, then we call the claims "right".
So what have we demonstrated? With your help, we have demonstrated that in this respect, morality is like rain. And like everything else. Morality is like atoms. Morality is like gravity - in this respect. You have highlighted a property of morality which is shared by absolutely everything else in the universe that we have a word for. And this property is, that you can come up with this reverse theory of it, according to which we call claims employing the term "right" when we are convinced by arguments using those claims.
Notice you had to tuck in the word "innocent," which already implies your conclusion that it is "actually wrong" to harm the bystander.
For me to be guilty of begging the question I would have to be trying to prove that a murder was committed in the hypothetical scenario. But it's a hypothetical scenario in which it is specified that the person committed murder.
Here's the hypothetical scenario, more explicit: someone has just committed a murder. He tells a cop, "it would be wrong for you to arrest me". Since it is not, in fact, wrong for the cop to arrest him, then the argument is unconvincing. In this hypothetical scenario, the reason the argument is unconvincing is that it is not actually wrong for the cop to arrest him.
Now, according to your own reverse theory of morality, the hypothetical scenario that I have specified in fact reduces to the following: someone is in a situation where his claims that it would be wrong to arrest him will go ignored by the cop in question. Therefore, the cop believes that it is right to arrest him.
But as I explained before, you can apply your reverse treatment to absolutely anything at all. Here's an example: in this scenario, someone picks up an orange and says about the orange, "this is an apple". Nobody is convinced by his assertion, and the reason nobody is convinced by his assertion is that the orange is, in fact, not an apple.
Now we can apply your reverse treatment to this scenario. Someone picks up something and says about it, "this is an apple". Nobody is convinced by his assertion, and therefore they call his claim "wrong".
Notice the reversal. In my description of the scenario, that the claim is wrong causes others to disbelieve the claim, because they can see with their own eyes that it is wrong. In your reverse description of the scenario, the primary fact is that people are not convinced by the claim, and the secondary fact which follows from the primary fact is that they call the claim "wrong".
You're not proving anything with the reversal, because you can apply the reversal to anything at all.
Here you used the word "own," which again already implies your conclusion that it is wrong to steal it.
Once again, this is a hypothetical scenario in which it is specified that it would be stealing, and therefore wrong. I am not trying to prove that; I am specifying it to construct the scenario.
Although he was speaking in the context of economics, Ludwig von Mises gave the most succinct explanation of why all valuation is subjective when he said, "We originally want or desire an object not because it is agreeable or good, but we call it agreeable or good because we want or desire it."
Absolutely, but morality is not personal preference any more than price is personal preference. These are separate subjects. Mises would not say, "I recognize that the price of gasoline is $4 not because it is $4; rather, the price of gasoline is $4 because I recognize it as $4, and if tomorrow I recognize it as $2 then it will be $2, whatever the gas station attendant says." That would be absurd for him to say. The same applies to morality.
Replies from: Amanojack↑ comment by Amanojack · 2011-04-27T22:25:14.876Z · LW(p) · GW(p)
You have this backwards: The claim makes a convincing argument (to you and many others), therefore you call the claim "right"; or the claim makes an unconvincing argument against the action, therefore you call the claim "wrong."
So there, we've applied your theory about the concept of morality, to the concept of rain.
You misread me, though perhaps that was my fault. Does the bold help? I was talking about you (Constant), not "you" in the general sense. I wasn't presenting a theory of morality; I was shedding light on yours by suggesting that you are only calling these things right or wrong because you find the arguments convincing.
Actually wrong: "it's wrong to kill people." Yup, it is. You just don't want it: "it's wrong for you to arrest me just because I stabbed this innocent bystander to death."
Notice you had to tuck in the word "innocent," which already implies your conclusion that it is "actually wrong" to harm the bystander.
For me to be guilty of begging the question I would have to be trying to prove that a murder was committed in the hypothetical scenario.
No, you'd have to be trying to justify your statement that "it is wrong to kill people," which it seems you were (likewise for the theft example). Maybe your unusual phrasing confused me as to what you were trying to show with that. Anyway, the daughter posts seem to show we agree on more than it appears here, so bygones.
As for the rest about "[my] reverse theory of morality," that's all from the above misunderstanding. (Sorry to waste time with my unclear wording.)
Replies from: None↑ comment by [deleted] · 2011-04-27T23:07:24.817Z · LW(p) · GW(p)
You misread me, though perhaps that was my fault. Does the bold help? I was talking about you (Constant), not "you" in the general sense. I wasn't presenting a theory of morality; I was shedding light on yours by suggesting that you are only calling these things right or wrong because you find the arguments convincing.
Okay, but even on this reading you could "shed" similar "light" on absolutely any term that I ever use. You're not proving anything special about morality by that. To do that would require finding differences between morality and, say, rain, or apples. But if we were arguing about apples you could make precisely the same move that you made in this discussion about morality.
Here's a parallel back-and-forth employing apples. Somebody says:
And that's what the concept of apples always was in the first place. It's a way of getting other people to do otherwise than what they wanted to do.
I reply:
[insert examples with apples] In the second and fourth case, the claim about apples is false (and therefore makes for an unconvincing argument).
Here, let me construct an example with apples. Somebody goes to Tiffany's, points to a large diamond on display, and says to an employee, "that is an apple, therefore you should be willing to sell it to me for five dollars, which is a great price for an apple." This claim is false, and therefore makes for an unconvincing argument.
Somebody replies:
You have this backwards: The claim about apples makes a convincing argument (to you and many others), therefore you [Constant] call the claim "right" [true*]; or the claim about apples makes an unconvincing argument against the action, therefore you [Constant] call the claim "wrong" [false*].
* I interpret "right" and "wrong" here as meaning "true" and "false", because claims are true or false, and these are referring to claims here.
To which they follow up:
I wasn't presenting a theory of apples; I was shedding light on your theory of apples by suggesting that you are only calling these things right or wrong [these claims true or false **] because you find the arguments convincing.
** I am continuing the previous interpretation of "right" and "wrong" as meaning, in context here, "true" or "false". If this is not what you meant then I can easily substitute in what you actually meant, make the corresponding changes, and make the same point as I am making here.
What all this boils down to is that my interlocutor is saying that I am only calling claims about apples true or false because I find the arguments that employ these claims convincing or unconvincing. For example, if I happen to be in Tiffany's and somebody points to one of the big shiny glassy-looking things with an enormous price tag and says to an employee, "that is an apple, and therefore you should be happy to accept $5 for it", then I will find that person's argument unconvincing. My interlocutor's point is that I am only calling that person's claim (that that object is an apple) false because I find his argument (that the employee should sell it to him for $5) unconvincing.
Whereas my own account is as follows: I first of all find the person's claim about the shiny glassy thing false. Then, as a consequence, I find his argument (that the employee should be happy to part with it for $5) unconvincing.
If you like I can come up with yet another example, taking place in Tiffany's, dropping the apple, and introducing some action such as grabbing a diamond and attempting to leave the premises. I would have my account (that I, a bystander, saw the man grab the diamond, which I believed to be a wrong act, and therefore when security stopped him I was not persuaded by his claims that he had done nothing wrong), and you would have your reversed account (that I was not persuaded by his claims that he had done nothing wrong, and therefore, as a consequence, I believed his grabbing the diamond to be a wrong act).
Replies from: Amanojack, CuSithBell↑ comment by Amanojack · 2011-04-27T23:18:58.476Z · LW(p) · GW(p)
It seems to me that right and wrong being objective, just like truth and falsehood, is what you've been trying to prove all this time. To equate "right and wrong" with "true and false" by assumption would be to, well you know, beg the question. It's not surprising that it always comes back to circularity, because a circular argument is the same in effect as an unjustified assertion, and in fact that's become the theme of not just our exchange here, but this entire thread: "objective ethics are true by assertion."
I think we agreed elsewhere that ethical sentiments are at least quasi-universal; is there something else we needed to agree on? Because the rest just looks like wordplay to me.
Replies from: None↑ comment by [deleted] · 2011-04-27T23:37:55.077Z · LW(p) · GW(p)
To equate "right and wrong" with "true and false" by assumption would be to, well you know, beg the question.
I'm not equating moral right and wrong with true and false. I was disambiguating some ambiguous words that you employed. The word "right" is ambiguous, because in one context it can mean "morally righteous", and in another context it can mean "true". I disambiguated the words in a certain direction because of the immediate textual context. Apparently that was not what you meant. Okay - so ideally I should go back and disambiguate the words in the opposite direction. However, I can tell you right now it will come to the same result. I don't really want to belabor this point so unless you insist, I'm not actually going to write yet another comment in which I disambiguate your terms "right" and "wrong" in the moral direction.
↑ comment by CuSithBell · 2011-04-27T23:26:07.664Z · LW(p) · GW(p)
Here, let me construct an example with apples. Somebody goes to Tiffany's, points to a large diamond on display, and says to an employee, "that is an apple, therefore you should be willing to sell it to me for five dollars, which is a great price for an apple." This claim is false, and therefore makes for an unconvincing argument.
But, ah, you can observe the properties of the object in question, and see that it has very few in common with the set of things that has generated the term "apple" in your mind, and many in common with "diamond". Is this the same sense in which you say we can simply "recognize" things as fundamentally good or evil? That would make these terms refer to "what my parents thought was good or evil, perturbed by a generation of meaning-learning". The problem there is - apples are generally recognizable. People disagree on what is right or wrong. Are even apples objective?
Replies from: None↑ comment by [deleted] · 2011-04-27T23:51:34.944Z · LW(p) · GW(p)
The problem there is - apples are generally recognizable. People disagree on what is right or wrong. Are even apples objective?
People can disagree about gray areas between any two neighboring terms. Take the word "apple". Apple trees are, according to Wikipedia, the species "Malus domestica". But as evolutionary biologists postulated (correctly, as it turns out), species are gradually formed over hundreds or thousands or millions of years, and the question of what is "the first apple tree" is a question for which there is no crystal clear answer, nor would there be even if we had a complete record of every ancestor of the apple tree going back to the one-celled organisms. Rather, the proto-species that gave rise to the apple tree gradually evolves into the apple tree, and about very early apple trees two fully informed rational people might very well disagree about which ones are apple trees and which ones are proto-apple trees. This is nothing other than the sorites problem, the problem of the heap, the problem of the vagueness of concepts. It is universal and is not specifically true about moral questions.
Morality is, I have argued, an aspect of custom. And it's true that people can disagree, on occasion, about whether some particular act violates custom. So custom is, like apples, vague to some degree. Both apples and custom can be used as examples of the sorites problem, if you're sick of talking about sand heaps. But custom is not radically indeterminate. Customs exist, just as apples exist.
Replies from: Amanojack, CuSithBell↑ comment by Amanojack · 2011-04-28T00:50:59.024Z · LW(p) · GW(p)
Well I agree with this basically, and it reminds me of John Hasnas writing about customary legal systems. I find that when showing this to people I disagree with about ethics, we usually end up in agreement:
Replies from: None
In the absence of civil government, most people engage in productive activity in peaceful cooperation with their fellows. Some do not. A minority engages in predation, attempting to use violence to expropriate the labor or output of others. The existence of this predatory element renders insecure the persons and possessions of those engaged in production. Further, even among the productive portion of the population, disputes arise concerning broken agreements, questions of rightful possession, and actions that inadvertently result in personal injuries for which there is no antecedently established mechanism for resolution. In the state of nature, interpersonal conflicts that can lead to violence often arise.
What happens when they do? The existence of the predatory minority causes those engaged in productive activities to band together to institute measures for their collective security. Various methods of providing for mutual protection and for apprehending or discouraging aggressors are tried. Methods that do not provide adequate levels of security or that prove too costly are abandoned. More successful methods continue to be used. Eventually, methods that effectively discourage aggression while simultaneously minimizing the amount of retaliatory violence necessary to do so become institutionalized. Simultaneously, nonviolent alternatives for resolving interpersonal disputes among the productive members of the community are sought. Various methods are tried. Those that leave the parties unsatisfied and likely to resort again to violence are abandoned. Those that effectively resolve the disputes with the least disturbance to the peace of the community continue to be used and are accompanied by ever-increasing social pressure for disputants to employ them.
Over time, security arrangements and dispute settlement procedures that are well-enough adapted to social and material circumstances to reduce violence to generally acceptable levels become regularized. Members of the community learn what level of participation in or support for the security arrangements is required of them for the system to work and for them to receive its benefits. By rendering that level of participation or support, they come to feel entitled to the level of security the arrangements provide. After a time, they may come to speak in terms of their right to the protection of their persons and possessions against the type of depredation the security arrangements discourage, and eventually even of their rights to personal integrity and property. In addition, as the dispute settlement procedures resolve recurring forms of conflict in similar ways over time, knowledge of these resolutions becomes widely diffused and members of the community come to expect similar conflicts to be resolved in like manner. Accordingly, they alter their behavior toward other members of the community to conform to these expectations. In doing so, people begin to act in accordance with rules that identify when they must act in the interests of others (e.g., they may be required to use care to prevent their livestock from damaging their neighbors’ possessions) and when they may act exclusively in their own interests (e.g., they may be free to totally exclude their neighbors from using their possessions). To the extent that these incipient rules entitle individuals to act entirely in their own interests, individuals may come to speak in terms of their right to do so (e.g., of their right to the quiet enjoyment of their property).
In short, the inconveniences of the state of nature represent problems that human beings must overcome to lead happy and meaningful lives. In the absence of an established civil government to resolve these problems for them, human beings must do so for themselves. They do this not through coordinated collective action, but through a process of trial and error in which the members of the community address these problems in any number of ways, unsuccessful attempts to resolve them are discarded, and successful ones are repeated, copied by others, and eventually become widespread practices. As the members of the community conform their behavior to these practices, they begin to behave according to rules that specify the extent of their obligations to others, and, by implication, the extent to which they are free to act at their pleasure. Over time, these rules become invested with normative significance and the members of the community come to regard the ways in which the rules permit them to act at their pleasure as their rights. Thus, in the state of nature, rights evolve out of human beings’ efforts to address the inconveniences of that state. In the state of nature, rights are solved problems.
↑ comment by CuSithBell · 2011-04-28T00:14:19.828Z · LW(p) · GW(p)
Ah, okay! We don't disagree then. Thanks for clearing that up!
ETA: Actually, with that clarification, I'd expect many others to agree as well - at least, it seems like what you mean by "custom" and what other posters have called "stuff people want you to do" coincide.
Replies from: None↑ comment by [deleted] · 2011-04-28T00:36:14.564Z · LW(p) · GW(p)
An important point is that nobody gets to unilaterally decide what is or is not custom. That's in contrast to, say, personal preference, which each person does get to decide for themselves.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-04-28T00:41:23.670Z · LW(p) · GW(p)
Right. Though I'd argue that custom implies that morality is objective, and therefore that custom can be incorrect, so that someone can coherently say that their own society's customs are immoral (though probably from within a subculture that supports those alternate customs).
↑ comment by wedrifid · 2011-04-27T17:53:28.464Z · LW(p) · GW(p)
But as with the word "wrong", you can't use the word "dangerous" for just any situation, because it's not true in just any situation.
Not a good analogy. The objective element of 'wrong' is entirely different in nature to that of 'dangerous' even though by many definitions it does, in fact, exist.
Replies from: None↑ comment by [deleted] · 2011-04-27T18:31:29.541Z · LW(p) · GW(p)
Not a good analogy. The objective element of 'wrong' is entirely different in nature to that of 'dangerous' even though by many definitions it does, in fact, exist.
The word "danger" illustrates a point about logic. The logical point is that the fact that X is often used to persuade people does not mean that the nature of X is that it is " a way of getting other people to do otherwise than what they wanted to do". The common use of the word "danger" is an illustration of this logical point. The illustration is correct.
Replies from: wedrifid↑ comment by wedrifid · 2011-04-27T18:43:24.395Z · LW(p) · GW(p)
The objectivity of 'danger' is entirely different to that of 'wrong'. As such using it as an argument here is misleading and confused.
Replies from: NMJablonski, None↑ comment by NMJablonski · 2011-04-27T18:46:01.040Z · LW(p) · GW(p)
Upvoted to both of you for an interesting discussion. It has reached the point it usually does in metaethics where I have to ask for someone to explain:
What the hell does it mean for something to be objectively wrong?
(This isn't targeted at you specifically wedrifid, it just isn't clear to me what the objectivity of "wrongness" could possibly refer to)
Replies from: Amanojack, None, wedrifid↑ comment by Amanojack · 2011-04-27T20:47:18.400Z · LW(p) · GW(p)
Yeah, no one can ever seem to explain what "objectively wrong" would even mean. That's because to call an action wrong is to imply that there is a negative value placed on that action, and for that to be the case you need a valuer. Someone has to do the valuing. Maybe a large group of people - or maybe everyone - values the action negatively, but that is still nothing more than a bunch of individuals engaging in subjective valuation. It may be universal subjective valuation, or maybe they think it's God's subjective valuation, but if so it seems better to spell that out plainly than to obscure it with the authoritative- and scientific-sounding modifier objective.
Replies from: Peterdjones, NMJablonski↑ comment by Peterdjones · 2011-04-27T21:34:36.266Z · LW(p) · GW(p)
The fact that something is done by a subject doesn't necessarily make it subjective. It takes a subject to add 2 and 2, but the answer is objective.
There are many ideas as to what "objectively right" could mean. Two of Kant's famous suggestions are "act only on that maxim you would wish to be universal law" and "treat people always as ends and never as means".
↑ comment by NMJablonski · 2011-04-27T20:51:12.805Z · LW(p) · GW(p)
This encapsulates my thoughts on metaethics entirely.
↑ comment by [deleted] · 2011-04-27T19:31:46.352Z · LW(p) · GW(p)
A hard question. But I will try to give a brief answer.
Morality is an aspect of social custom. Roughly, it is those customs that are enforced especially vigorously. But an important point here is that while some customs are somewhat arbitrary and vary from place to place, other customs are much less arbitrary. It is these least arbitrary moral customs that we most commonly think of as universal morality, applicable to and recognized by all humanity.
Here's an example: go anywhere in the world as a tourist, and (in full view of a lot of typical people who are minding their own business, maybe traveling, maybe buying or selling, maybe chatting) push somebody in front of a train, killing them. Just a random person. See how people around you react. Recommendation: do this as a thought experiment, not an actual experiment. I'll tell you right now how people around the world will react: they'll be horrified, and they'll try to detain you or incapacitate you, possibly kill you. They will have a word in their language for what you just did, which will translate very well to the English word "murder".
But why is this? Why aren't customs fully arbitrary? This puzzle, I think, is best understood if we think of society as a many-player game. That is, we apply the concepts of game theory to the problem. Custom is a Nash equilibrium. To follow custom is to act in accordance with your equilibrium strategy in this Nash equilibrium. Nash equilibria are not fully arbitrary - and this explains right away at least the general point that customs are not fully arbitrary.
While not arbitrary, Nash equilibria are not necessarily unique, particularly since different societies exist in different environmental conditions, and so different societies can have different sets of customs. However, the customs of all societies around the world, or at least all societies with very few exceptions, share common elements. People across the world will be appalled if you kill someone arbitrarily. People across the world will also be appalled (though probably not as much) if you steal from a vendor - and their concept of what it is to steal from a vendor will be very familiar to you. You're not in great danger of visiting a foreign country and accidentally committing what they consider to be shoplifting, unless you are very careless. I recommend that if something seems to be a free sample, you check with the vendor to make sure that it is indeed a free sample before helping yourself to it. As long as you are not a complete fool, you should be okay in foreign lands, because your internalized concepts of what it is to steal or to rob, what it is to murder, what it is to assault, very closely match those of the locals.
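To make the Nash-equilibrium point concrete, here is a minimal sketch in Python (my own toy illustration, not anything from the thread): a two-player "which side of the road" convention game. All the payoff numbers are made up; the point is only that mismatched conventions are not equilibria, while two different shared conventions both are - non-arbitrary, yet not unique, just as with customs.

```python
from itertools import product

# Toy convention game: both players picking the same side is good,
# mismatching is bad. payoffs[(a, b)] = (payoff to A, payoff to B).
payoffs = {
    ("left", "left"): (1, 1),
    ("right", "right"): (1, 1),
    ("left", "right"): (-1, -1),
    ("right", "left"): (-1, -1),
}
moves = ("left", "right")

def is_nash(a, b):
    """True if (a, b) is a pure-strategy Nash equilibrium: neither
    player can improve their own payoff by unilaterally deviating."""
    pa, pb = payoffs[(a, b)]
    a_stays = all(payoffs[(alt, b)][0] <= pa for alt in moves)
    b_stays = all(payoffs[(a, alt)][1] <= pb for alt in moves)
    return a_stays and b_stays

for a, b in product(moves, moves):
    print((a, b), "equilibrium" if is_nash(a, b) else "not an equilibrium")
```

Running it prints that (left, left) and (right, right) are equilibria while the mismatches are not: a society settled on either convention gives no individual an incentive to deviate, which is the game-theoretic analogue of two societies with different but equally stable customs.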
Some people think, "murder is wrong because it is illegal, and law is created by government, so what is wrong is defined by government." But I think that social customs are for the most part not created by government, and I think that the laws against murder, against robbery, and so on, follow the customary prohibitions, rather than creating them.
By the way, it's possible that I've mis-applied game theory, though I think that the concept of the Nash equilibrium is simple enough that a beginner like me should be able to understand it. My knowledge of it is spotty and I plan to remedy this over the next several months, so if I've made a mistake here hopefully I will not repeat it.
Replies from: Amanojack↑ comment by Amanojack · 2011-04-27T20:55:05.444Z · LW(p) · GW(p)
I don't know about the Nash equilibria, but I agree with most everything you've written here. I'd just prefer to call that (quasi-)universal subjective ethics, and to use language that reflects that, as there are exceptions - call them psychopaths or whatever, but in the interest of accuracy. And the other problem with the objectivist interpretation of custom is that sometimes customs do have to change, and sometimes customs are barbaric. It seems that what you were getting at with "actually wrong" in your initial post was the idea that these kind of moral sentiments are universal, which I can buy, but even that is a bit of a leaky generalization.
↑ comment by wedrifid · 2011-04-27T18:57:47.181Z · LW(p) · GW(p)
Pardon me. I deleted my comment before I noticed that someone had replied. (I didn't think replying to Constant was going to be beneficial. To be honest I didn't share your perception of interestingness of the conversation, even though I was a participant.)
What the hell does it mean for something to be objectively wrong?
Very little practically speaking. It is a concept somewhat related to subjectively objective. It doesn't make the value judgements any less subjective; it is just that they happen to be built into the word definitions themselves. It doesn't make words like 'should' and 'wrong' any more useful when people with different values are arguing; it just takes one of the meanings of 'should' as it is used practically and makes it explicit. I think the sophisticated name may be something related to moral cognitivism, probably with a 'realism' thrown in somewhere for good measure.
↑ comment by [deleted] · 2011-04-27T18:52:27.802Z · LW(p) · GW(p)
I am not comparing the objectivity of "danger" to the objectivity of "wrong". I am not stating or implying that their objectivity is the same or similar. I am using the word "danger" as an illustration of a point. The point is correct, and the illustration is correct. That "danger" has different objectivity from "wrong" is not relevant to the point I was illustrating.
Replies from: wedrifid↑ comment by wedrifid · 2011-04-27T19:08:28.343Z · LW(p) · GW(p)
There is an objective sense in which an analogy is good or bad, related closely to the concept of reference class tennis. Having one technical similarity does not make an analogy an appropriate one and certainly does not prevent it from being misleading. This example of 'for example' is objectively 'bad'.
↑ comment by Amanojack · 2011-04-27T19:53:30.235Z · LW(p) · GW(p)
And that's what morality always was in the first place. It's a way of getting other people to do otherwise than what they wanted to do. No one would be convinced by "I don't want you to kill people", but if you can convince someone that "It is wrong to kill people", then you've created conflict in that person's desires.
That's one of the things morality has been, and it could indeed be the main thing, but my point above is it all depends on what the person means. Even though getting other people to do something might be the main and most important role of moral language historically, it only invites confusion to overgeneralize here - though I know how tempting it is to simplify all this ethical nonsense floating around in one fell swoop. Some people do simply use "ought" to mean, "It is in your best interest to," without any desire to get the person to do something. Some people mean "God would disapprove," and maybe they really don't care if that makes you refrain from doing it or not, but they're just letting you know. These little counterexamples ruin the generalization, then we're back to square one.
I think the only way to really simplify ethics is to acknowledge that people mean all sorts of things by it, and let each person - if anyone cares - explain what they intended in each case.
No, scratch that. The reason ethics is so confused is precisely because people have tried to simplify a whole bunch of disparate-but-somewhat-interrelated notions into a single type of phrasing. A full explanation of everything that is called "ethics" would require examination of religion, politics, sociology, psychology, and much more.
For most things that we think we want ethics for, such as AI, instead of trying to figure out that complex of sundry notions shoehorned into the category of ethics, I think we'd be better off just assiduously hugging the query for each question we want to answer about how to get the results we want in the "moral" sphere (things that hit on your moral emotions, like empathy, indignation, etc.). Mostly I'm interested in this series of posts for the promise it presents for doing away with most of the confusion generated by wordplay such as "objective ethics," which I consider to be just an artifact of language.
↑ comment by CuSithBell · 2011-04-27T15:09:01.184Z · LW(p) · GW(p)
One thing it seems to be used for around here is "what you should never do, even if you think you should". E.g. it's usually a really bad idea (wrt your own wants) to murder someone, even in a large proportion of cases where you think it's a good idea.
↑ comment by endoself · 2011-04-28T04:07:18.323Z · LW(p) · GW(p)
If you don't want someone to murder, you can try to stop them, but they aren't going to agree to not murder unless they want to.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-04-28T13:38:12.433Z · LW(p) · GW(p)
Want to before they have had their preferences rearranged by moral exhortation, or after?
Replies from: endoself↑ comment by FAWS · 2011-04-26T17:06:36.885Z · LW(p) · GW(p)
I agree with you that morality can mostly be framed in terms of volition and an adequate decision theory, but I think you are oversimplifying. For example, consider people talking about what other people should want purely for their own good. That might be explainable in terms of projecting their own wants in some way (or perhaps selfish self-delusion), but it doesn't seem like something you could easily predict in advance from reasoning about wants if you were unfamiliar with how people act among each other.
↑ comment by Morendil · 2011-04-27T14:23:30.294Z · LW(p) · GW(p)
Talking about wants isn't necessarily any simpler than talking about shoulds.
We seem to be just as confused about either. For instance, how many people say they want to be thin, yet overeat and avoid exercise?
Replies from: Alicorn, Amanojack, XiXiDu↑ comment by Alicorn · 2011-04-27T15:10:40.001Z · LW(p) · GW(p)
For instance, how many people say they want to be thin, yet overeat and avoid exercise?
I think "I want to be thin" has an implied "ceteris paribus". Ceteris ain't paribus.
You could as well say, "How many people say they want to have money, yet spend it on housing, feeding, and clothing themselves and avoid stealing?"
Replies from: Clippy, Morendil↑ comment by Morendil · 2011-04-27T15:53:38.710Z · LW(p) · GW(p)
There seems to be a difference here - how much money you earn isn't perceived as entirely a matter of choice, or at any rate there will be a significant and unavoidable lead time between deciding to earn more and actually earning more.
Whereas body shape is within our immediate sphere of control: if we eat less and work out more, we'll weigh less and bulk up muscle mass, with results expected within days to weeks.
When I say "I can move my arm if I want", this is readily demonstrated by moving my arm. Is this the same sense of "want" that people have in mind when they say "I want to eat less" or "I want to quit smoking"?
The distinction that seems to appear here is between volition - making use of the connection between our brains and our various actuators - and preference - the model we use to evaluate whether an imagined state of the world is more desirable than another. We conflate both in the term "want".
We are often quite confused as to what volitions will bring about states of the world that agree with our preferences. (How many times have you heard "That's not what I wanted to say/write"?)
Replies from: Alicorn, Amanojack↑ comment by Alicorn · 2011-04-27T16:54:08.394Z · LW(p) · GW(p)
I categorically reject your disanalogy from both directions.
I have been eating about half as much as usual for the past week or so, because I'm on antibiotics that screw with my appetite. I look the same. Once, I did physically intense jujitsu twice a week for months on end, at least quadrupling the amount of physical activity I got in each week. I looked the same. If "eating less and working out more" put my shape under my "immediate sphere of control" with results "within days to weeks", this would not be the result. You are wrong. Your statements may apply to people with certain metabolic privileges, but not beyond.
By contrast, if I suddenly decide that I want more money, I have a number of avenues by which I could arrange that, at least on a small scale. It would be mistaken of me to conclude from this abundance of available financial opportunity that everyone chooses to have the amount of money they have, and that people with less money are choosing to take fewer of the equally abundant opportunities they share with the rich.
Replies from: Morendil, wedrifid↑ comment by Morendil · 2011-04-27T18:14:48.706Z · LW(p) · GW(p)
OK, allowing that the examples may have been poorly chosen - the main point I'm making is that people often a) say they want something, and b) act in ways that do not bring about what they say they want.
Your response above seems to be that when people say "I want to be thin", they are speaking strictly in terms of preference: they are expressing that they would prefer their world to be just as it is now, with the one amendment that they have a certain body type rather than their current one. Similarly when they say they want money.
There are other cases where volition and preferences appear at odds more clearly. People say "I want to quit smoking", but they don't quit, even though it is their own voluntary actions that bring about the undesired state. The distinction seems useful, even if we may disagree on the specifics of how hard it is to align volition and preference in particular cases.
I'm not the first to observe that "What do you want?" is a deeper question than it looks, and that's what I meant to say in the original comment.
When you examine it closely, "do people actually want to smoke" isn't a much simpler question than "should there be a law against people smoking" or "is it right or wrong to smoke". It is possible that these questions are in fact entangled in such a way that to fully answer one is also to answer the others.
↑ comment by Alicorn · 2011-04-27T18:26:41.179Z · LW(p) · GW(p)
I think people sometimes use wanting language strictly in terms of preferences. I think people sometimes have outright contradictory wants. I think people are subject to compulsive or semi-compulsive behaviors that make calling "revealed preference!" on their actions a risky business. The post you linked to (I can't quite tell by your phrasing if you are aware that I wrote it) is about setting priorities between various desiderata, not about declaring some of those desiderata unreal because they take a backseat.
↑ comment by wedrifid · 2011-04-27T17:06:20.905Z · LW(p) · GW(p)
By contrast, if I suddenly decide that I want more money, I have a number of avenues by which I could arrange that, at least on a small scale. It would be mistaken of me to conclude from this abundance of available financial opportunity that everyone chooses to have the amount of money they have, and that people with less money are choosing to take fewer of the equally abundant opportunities they share with the rich.
This all seems true, with the exception of 'by contrast'. You seem to have clearly illustrated a similarity between weight loss and financial gain. There are things that are under people's control, but which things are under a given person's control varies by individual and circumstance. In both cases people drastically overestimate the extent to which the outcome is a matter of 'choice'.
↑ comment by Amanojack · 2011-04-27T21:31:39.774Z · LW(p) · GW(p)
The real distinction is between what you want to do now and what you want your future self to do later, though there's some word confusion obscuring that point. English is pretty bad at dealing with these types of distinctions, which is probably why this is a recurring discussion item.
↑ comment by XiXiDu · 2011-04-27T16:27:46.625Z · LW(p) · GW(p)
Talking about wants isn't necessarily any simpler than talking about shoulds.
Oughts are instrumental and wants are terminal. See my comments here and here.
↑ comment by timtyler · 2011-04-27T16:32:04.885Z · LW(p) · GW(p)
Oughts are instrumental and wants are terminal.
Disagree - I don't think that is supported by the dictionary. For instance, I want more money - which is widely regarded as being instrumental.
Maybe you need to spell out what you actually meant here.
↑ comment by XiXiDu · 2011-04-27T16:53:28.415Z · LW(p) · GW(p)
For instance, I want more money - which is widely regarded as being instrumental.
Oughts and wants are not mutually exclusive in their first-order desirability. "You ought to do what you want" is a basic axiom of volition, which implies that you also want what you ought. Yet one distinction, if minor, between ought and want is that the former is often a second-order desire, instrumental to the latter's primary goal.
↑ comment by timtyler · 2011-04-27T17:02:07.335Z · LW(p) · GW(p)
Wants are fairly straightforward, but oughts are often tangled up with society, manipulation and signalling. You appear to be presuming some other definition of ought - without making it terribly clear what it is that you are talking about.
↑ comment by XiXiDu · 2011-04-27T19:15:44.151Z · LW(p) · GW(p)
Wants are fairly straightforward, but oughts are often tangled up with society, manipulation and signalling.
When it comes to goals, an intelligent agent is in a sense similar to a stone rolling down a hill: both are moving towards a sort of equilibrium. The difference is that intelligence follows more complex trajectories, since its ability to read and respond to environmental cues is vastly greater than a stone's. And that is the reason why we perceive oughts to be mainly a fact about society: you ought not to be indifferent to the goals of other agents if they are instrumental to what you want.
"Ought" statements are subjectively objective as they refer to the interrelationship between your goals and the necessary actions to achieve them. "Ought" statements point out the necessary consistency between means and ends. If you need pursue action X to achieve "want" Y you ought to want to do Y.
↑ comment by Peterdjones · 2011-04-27T16:54:58.232Z · LW(p) · GW(p)
Again, whether you ought to do what you want depends on what you want.
↑ comment by NMJablonski · 2011-04-27T18:16:51.424Z · LW(p) · GW(p)
Can you demonstrate that what you just said is true?
EDIT: And perhaps provide a definition of "ought"?
↑ comment by gabgohjh · 2011-04-27T02:52:26.614Z · LW(p) · GW(p)
This idea (utilitarianism) is old, and fraught with problems. Firstly, there is the question of what the correct thing to optimize really is. Should one optimize total happiness or average happiness? Or would it make more sense, for example, to maximize the happiness of the most unhappy person in the population - a max-min problem, i.e. a "worst case" optimization procedure? (Note that this, in essence, is the difference between considering "human rights" and "total happiness", which do not always go hand in hand.) And even with these three objectives considered, there is a whole spectrum of weighted optimization problems which sit between worst case and average case (a toy sketch of these aggregation rules follows after the next paragraph). Who chooses what is best and most fair? Is the happiness of everybody weighted equally? Or are some people more deserving of happiness than others? Does everybody have an equal capacity for happiness? Does a higher population equate to more happiness in total? How does time factor into the equation? Do you maximize happiness now? Or do you put effort into developing a perfect society now, for the greater happiness to come?
Not to mention the obvious problem of utility. Let's be charitable and assume that utility means something, and can be measured - already a leap of faith. But then, ask yourself: why assume utility is one-dimensional? And if utility were many-dimensional, how would one trade off its different dimensions? Is it more important to minimize suffering than to increase happiness - are the two really numerical values which lie on the same scale? And what if we found a pleasure center in the brain which produces "utility"? Would it be better for us to discard our corporeal bodies, and all the rest of these silly and irrational "goals", "dreams" and "aspirations", in favor of forever stimulating this part of the brain for a little more meaningless satisfaction?
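Here is the toy sketch mentioned above, in Python. The utility numbers are invented, and collapsing each person's welfare to a single scalar is exactly the one-dimensionality assumption just questioned; the sketch only shows that the candidate aggregation rules can rank the same two populations differently.

    # Two hypothetical populations (numbers are invented for illustration).
    status_quo  = [10, 6, 2]
    alternative = [10, 6, 2, 1]   # same people, plus one barely-happy person

    def total(u):    # classical "total" utilitarianism
        return sum(u)

    def average(u):  # average utilitarianism
        return sum(u) / len(u)

    def maximin(u):  # "worst case" rule: judge by the worst-off person
        return min(u)

    def weighted(u, w=0.5):  # spectrum between worst case (w=1) and average (w=0)
        return w * min(u) + (1 - w) * sum(u) / len(u)

    for rule in (total, average, maximin, weighted):
        verdict = "alternative" if rule(alternative) > rule(status_quo) else "status quo"
        print(rule.__name__, rule(status_quo), rule(alternative), "->", verdict)

Merely adding one barely-happy person raises the total while lowering both the average and the minimum, so the rules already disagree on this four-person case - which is one reason "who chooses the objective" matters.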
But what I really want to get at - and here I start to get preachy - is that existential meaning is not the same as happiness. The human condition has the capacity to be deeply satisfied in suffering, for example, or to feel deeply dissatisfied when the world may appear optimal. And there are deeply embedded ideas which feel right - the concepts of "fairness" (that everybody should be treated equally) or "justice" (that each good deed should have its reward, and each bad deed its punishment - not to be confused with deterrence), for example - yet these do not seem to have a place in a blind optimizer such as this. Not to mention the countless other things we don't even understand - love, anger, hatred, wrath, aesthetic preference: what place do these have in a utilitarian society? And sure, you could be an ardent behaviorist who thinks ideas like "justice" and "meaning" are just silly superstitious constructs better discarded alongside such antiquated concepts as "emotions" and "morality", but I would like to persuade you that there's something more to life than just maximizing happiness.
↑ comment by Sniffnoy · 2011-04-27T04:42:06.677Z · LW(p) · GW(p)
You correctly point out problems with classical utilitarianism; nonetheless, downvoted for equating utilitarianism in general with classical utilitarianism in particular, as well as being irrelevant to the comment it was replying to. And a few other things.
↑ comment by timtyler · 2011-04-26T20:23:34.311Z · LW(p) · GW(p)
I am increasingly getting the perception that morality/ethics is useless hogwash.
I am not clear from your comment what your beef is.
The whole talk about morality seems to be nothing more than a signaling game.
No: morality is also to do with how to behave yourself and ways of manipulating others in addition to its signalling role.
comment by pjeby · 2011-04-25T17:28:46.950Z · LW(p) · GW(p)
I must confess I'm having trouble with that flowchart, specifically the first question about whether a moral judgment expresses a belief, and emotivism being on the "no" side. Doesn't, "Ew, murder" express the belief that murder is icky?
To put it another way, I'm having trouble reconciling the map of what people argue about the nature of morality, with what I know of how at least my brain processes moral belief and judgment.
That is, ISTM that moral judgments at the level where emotion and motivation are expressed do not carry any factual grounding, and they motivate action or express what people "should" or "should not" do. I'm having trouble seeing how this doesn't merge both branches of your diagram.
Of course, if the diagram is merely to illustrate what a bunch of people believe, then my immediate impression is that both groups are (partially) wrong. ;-)
(Another possibility, of course, is that these people are arguing about mind projections rather than what is actually happening in brains.)
↑ comment by wedrifid · 2011-04-25T18:09:54.600Z · LW(p) · GW(p)
I must confess I'm having trouble with that flowchart, specifically the first question about whether a moral judgment expresses a belief, and emotivism being on the "no" side. Doesn't, "Ew, murder" express the belief that murder is icky?
No. The belief and that feeling and expression will be correlated, but one is not the other. It isn't especially difficult or unlikely for them to differ.
It would be possible to declare a model in which the "Ew, murder" reaction is defined as an expression of belief. But it isn't a natural one and would not fit with the meaning of natural language.
↑ comment by pjeby · 2011-04-25T19:20:05.186Z · LW(p) · GW(p)
The belief and that feeling and expression will be correlated but one is not the other.
That depends on how you define "belief". My definition is that a "belief" is a representation in your brain that you use to make predictions or judgments about reality. The emotion experienced in response to thinking of the prohibited or "icky" behavior is the direct functional expression of that belief.
It would be possible to declare a model in which the "Ew, murder" reaction is defined as an expression of belief. But it isn't a natural one and would not fit with the meaning of natural language.
I have noticed that sometimes people on LW use the term "alief" to refer to such beliefs, but I don't consider that a natural usage. In natural usage, people refer to intellectual vs. emotional beliefs, rather than artificially limiting the term "belief" to only include verbal symbolism and abstract propositions.
↑ comment by wedrifid · 2011-04-25T19:30:08.540Z · LW(p) · GW(p)
That depends on how you define "belief". My definition is that a "belief" is a representation in your brain that you use to make predictions or judgments about reality. The emotion experienced in response to thinking of the prohibited or "icky" behavior is the direct functional expression of that belief.
The definition as you actually write it here isn't bad. The conclusion just doesn't directly follow the way you say it does unless you modify that definition with some extra bits to make the world a simpler place.
↑ comment by lukeprog · 2011-04-25T18:32:03.528Z · LW(p) · GW(p)
wedrifid is correct.
Another way to grok the distinction: Imagine that you were testifying at a murder trial, and somebody asked you if you had killed your mother with a lawnmower. You reply "Lawnmower!" with a disgusted tone.
Now, the prosecutor asks, "Do you mean to claim that lawnmower is X, or that the thought of killing somebody with a lawnmower is disgusting?"
And you could rightly reply, "It may be the case that I believe that lawnmower is X, or that the thought of killing somebody with a lawnmower is disgusting, but I have claimed no such things merely by saying 'Lawnmower!'"
↑ comment by pjeby · 2011-04-25T19:20:20.716Z · LW(p) · GW(p)
"It may be the case that I believe that lawnmower is X, or that the thought of killing somebody with a lawnmower is disgusting, but I have claimed no such things merely by saying 'Lawnmower!'"
You're speaking of claims in language; I'm speaking of brain function.
Functionally, I have observed that the emotions behind such statements are an integral portion of the "belief", and that verbal descriptions of belief such as "murder is bad" or "you shouldn't murder" are attempts to explain or justify the feeling. (In practice, the things I work with are less morally relevant than murder, but the process is the same.)
(See also your note that people continue to justify their judgments on the basis of confabulated consequences even when the situation has been specifically constructed to remove them as a consideration.)
↑ comment by ata · 2011-04-25T17:52:32.930Z · LW(p) · GW(p)
Doesn't, "Ew, murder" express the belief that murder is icky?
I don't think that's a belief. What factual questions would distinguish a world where murder is icky from one where murder is not icky?
↑ comment by pjeby · 2011-04-25T19:23:42.932Z · LW(p) · GW(p)
I don't think that's a belief.
Beliefs can be wrong, but that doesn't make them non-beliefs.
Any belief of the form "X is Y" (especially where Y is a judgment of goodness or badness) is likely either an instance of the mind projection fallacy, or a simple by-definition tautology.
Again, however, this doesn't make it not-a-belief, it's just a mistaken or poorly-understood belief. (For example, expansion to "I find murder to be icky" trivially fixes the error.)
comment by John_Maxwell (John_Maxwell_IV) · 2011-04-27T22:01:33.364Z · LW(p) · GW(p)
Where do the views expressed in the book The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It fit in? I'm assuming this is some form of non-cognitivism?
comment by RichardChappell · 2011-04-29T23:50:50.637Z · LW(p) · GW(p)
Non-cognitivists, in contrast, think that moral discourse is not truth-apt.
Technically, that's not quite right (except for the early emotivists, etc.). Contemporary expressivists and quasi-realists insist that they can capture the truth-aptness of moral discourse (given the minimalist's understanding that to assert 'P is true' is equivalent to asserting just 'P'). So they will generally explain what's distinctive about their metaethics in some other way, e.g. by appeal to the idea that it's our moral attitudes rather than their contents that have a certain central explanatory role...
comment by Jayson_Virissimo · 2011-04-27T01:45:23.125Z · LW(p) · GW(p)
lukeprog, where would you place David Gauthier in your flow chart?
comment by DanArmak · 2011-04-26T22:38:09.142Z · LW(p) · GW(p)
Some cognitivists think that [...] Other cognitivists think that [...]
Is there a test of the real world that could tell us that some of them are right and the others wrong? If not, what is the value of describing their thoughts?
It's clear to me that applied and normative ethics deal with real and important questions. They are, respectively, heuristics for certain situations, and analysis of possible failure modes of these heuristics.
But I don't understand what metaethics deals with. You write:
Metaethics: What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?
I don't understand why, given the reduction of these questions to substance, they are nearly as important as the first two categories. In fact, some of these questions seem to me not to reduce to anything interesting. "Does it make sense to talk about moral progress?" seems a question about definitions - given an exact definition of "moral" and "progress", there shouldn't be any empirical fact left to discover in order to answer the question. And the part of the post that discusses the positions of various philosophers gives me a strong feeling of confusion and argument about words.
I expect your next posts will make this clearer, but I wish you had included in this post at least a brief description or example of a question in metaethics that it would be useful to know the answer to. Or, at least, interesting to a reasonably broad audience.
↑ comment by [deleted] · 2011-04-27T20:16:13.294Z · LW(p) · GW(p)
I wish you had included in this post at least a brief description or example of a question in metaethics that it would be useful to know the answer to.
This is nicely put. I second the request: what is a metaethical question that could have a useful answer? It would be especially nice if the usefulness was clear from the question itself, and not from the answer that lukeprog is preparing to give.
↑ comment by Peterdjones · 2011-04-27T13:29:48.634Z · LW(p) · GW(p)
Exact definitions are easy to come by, so long as you are not bothered about correctness. Let morality=42, for instance. If you are bothered about correctness, you need to solve metaethics, the question of what morality is, before you can exactly and correctly define "morality".
I can understand the impatience with philosophy - "why can't they just solve these problems" - because that was my reaction when I first encountered it some 35 years ago. Did I solve philosophy? I only managed to nibble away at some edges. That's all anyone ever manages.
↑ comment by DanArmak · 2011-04-27T18:21:03.837Z · LW(p) · GW(p)
you need to solve metaethics, the question of what morality is
The problem isn't that I don't know the answer. The problem is that I don't understand the question.
"Morality" is a word. "Understanding morality" is, first of all, understanding what people mean when they use that word. I already know the answer to that question: they mean a complex set of evolved behaviors that have to do with selecting and judging behaviors and other agents. Now that I've answered that question, if you claim there is a further unanswered question, you will need to specify what it is exactly. Otherwise it's no different from saying we must "solve the question of what a Platonic ideal is".
There are many important questions about morality that need to be answered - how exactly people make moral decisions, how to predict and manipulate them, how to modify our own behavior to be more consistent, etc. But these are part of applied and normative ethics. I don't understand what metaethics is.
↑ comment by Peterdjones · 2011-04-27T19:13:18.840Z · LW(p) · GW(p)
Understanding morality is second of all deciding what, if anything, it actually is. Water actually is H2O, but you can use the word without knowing that, and you can't find out what water is just by studying how the word is used.
↑ comment by DanArmak · 2011-04-27T20:13:12.374Z · LW(p) · GW(p)
I think you don't understand my question.
"Water" is H2O. And we can study H2O.
"Morality" is a complex set of evolved behaviors, etc. We can study those behaviors. This is (ETA:) descriptive ethics. What is metaethics, though?
And do you think there are questions to be asked about morals which are not questions about the different human behaviors that are sometimes labeled as morally relevant? Do you think there exists something in the universe, independent of human beings and the accidents of our evolution, that is called "morals"? The original post indicated that some philosophers think so.
↑ comment by Peterdjones · 2011-04-27T20:26:39.617Z · LW(p) · GW(p)
The study of those behaviours is descriptive ethics. The prescription of those behaviours is normative ethics.
We can ask whether some de facto behaviour we have observed is really moral. And that raises the question of what "really moral" means. And that is metaethics, and it has a number of possible solutions, positive and negative, which are clearly outlined in the original posting. And metaethics does not vanish just because the Platonic approach is rejected.
↑ comment by DanArmak · 2011-04-28T07:58:42.407Z · LW(p) · GW(p)
We can also ask whether some de facto behavior is really vorpal. That raises the question of what "really vorpal" means. Luckily, I can tell you what it really means: nothing at all.
If you claim the word "moral" means something that I - and most people who use that word - don't know that it means, then 1) you have to tell us what it means as the start of any discussion instead of asking us what it means, and 2) you should really use a new word for your new idea.
The study of those behaviours is descriptive ethics. The prescription of those behaviours is normative ethics.
Thanks for the correction.
↑ comment by Peterdjones · 2011-04-28T11:25:25.179Z · LW(p) · GW(p)
Luckily, I can tell you what it really means: nothing at all.
Negative solutions are possible, as I said.
If you claim the word "moral" means something that I - and most people who use that word - don't know that it means,
I didn't claim that. I did say that a precise and correct definition requires coming up with a correct theory. But coming up with a correct theory only requires the imprecise pretheoretical definition, and everyone already has that. (I wasn't asking for it because I don't know it; I was asking for it to remind people that they already have it.)
If I had promised a correct theory, I would have implicitly promised a post-theoretic definition to go with it. But I didn't make the first promise, so I am not committed to the second.
The whole thing is aimed as a correction to the idea that you need to have, or can have, completely clear and accurate definitions from the get-go.
2) you should really use a new word for your new idea.
People should read carefully, and note that I never claimed to have a New Idea.
↑ comment by DanArmak · 2011-04-28T14:08:35.196Z · LW(p) · GW(p)
Negative solutions are possible, as I said.
I take it you mean negative solutions to the question: does "morality" have a meaning we don't precisely know yet?
What I'm saying is that it's your burden to show that we should be considering this question at all. It's not clear to me what this question means or how and why it arises in your mind.
It's as if you said you were going to spend a year researching exactly what cars mean. And I asked: what does it mean for cars to "mean" something that we don't know? It's clearly not the same as saying the word "cars" refers to something, because it can't refer to something we don't know about; a word is defined only by the way we use it. And cars at least exist as physical objects, unlike morality.
So before we talk about possible answers (or the lack of them), I'm asking you to explain to me the question being discussed. What does the question mean? What kind of objects can be the answer - can morality "mean" that ice cream is sweet, or is that wrong type of answer? What is the test used to judge if an answer is true or false? Is there a possibility two people will never agree even though one of their answers is objectively true (like in literature, and unlike in mathematics)?
The whole thing is aimed as a correction to the ideas that you need to have, or can have, completely clear and accurate deffinitions from the get go.
If we only have an inaccurate definition for morality right now, and someone proposes an accurate one, how can we tell if it's correct?
↑ comment by Peterdjones · 2011-04-28T14:21:20.822Z · LW(p) · GW(p)
No, by negative answers, I mean things like error theories in metaethics.
I think your other questions don't have obvious answers. If you think that the lack of obvious answers should lead to something like "ditch the whole thing", we could have a debate about that. Otherwise, you're not saying anything that hasn't been said already.
comment by TimFreeman · 2011-04-25T20:09:45.691Z · LW(p) · GW(p)
Where does pluralistic moral reductionism go on the flowchart?
↑ comment by Wei Dai (Wei_Dai) · 2011-04-27T00:13:21.023Z · LW(p) · GW(p)
Given that Luke named his theory "pluralistic moral reductionism", Eliezer said his theory is closest to "moral functionalism", and Luke said his views are similar to Eliezer's, I think one can safely deduce that it belongs somewhere around the bottom of the chart, not far away from "analytic moral functionalism" and "standard moral reductionism". :)
↑ comment by endoself · 2011-04-27T00:54:38.382Z · LW(p) · GW(p)
Based on how I would answer the questions listed, and given that my views are similar to Eliezer's, I agree. The last question, as I understand it, is equivalent to "If you had a full description of all possible worlds, could you then say which choices are right in each world? Say 'no' if you instead think you would additionally have to observe the actual world to make moral choices." I might be misunderstanding something, since this seems like an obvious "yes", but I might be understanding 'too much', perhaps by conflating two things that some philosophers claim to be different due to their confusion.
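A minimal sketch of the "yes" reading, in Python; the World type and the toy rule are my own inventions, not anything endoself or Eliezer proposed. The point is only that, on this reading, rightness is a fixed function of a world-description, requiring no further observation.

    from dataclasses import dataclass

    @dataclass
    class World:
        # Full (toy) description: what total happiness each choice yields.
        happiness_if: dict

    def right_choices(world):
        """Given a complete description of a world, return the right
        choices under a toy utilitarian rule - computed from the
        description alone, with no additional observation."""
        best = max(world.happiness_if.values())
        return {c for c, h in world.happiness_if.items() if h == best}

    print(right_choices(World({"help": 10, "ignore": 3})))  # {'help'}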
↑ comment by lukeprog · 2011-04-26T15:22:39.956Z · LW(p) · GW(p)
It doesn't fit anywhere on the chart cuz it's just so freaking meta, yo. :)
↑ comment by RHollerith (rhollerith_dot_com) · 2011-04-26T22:17:41.669Z · LW(p) · GW(p)
But don't most philosophers do that: try to assemble all the other philosophers' positions in a chart while maintaining that their own position is too nuanced to be assigned a point on a chart :)
↑ comment by lukeprog · 2011-05-03T22:35:45.909Z · LW(p) · GW(p)
My tone was facetious, but the content of my sentence above was literal. I don't think it's an advantage that my theory does or doesn't fit neatly on the above chart. It's just that my theory of metaethics doesn't quite have the same aims or subject matter as the theories presented on this chart. But anyway, you'll see what I mean once I have time to finish writing up the sequence...
↑ comment by Amanojack · 2011-04-27T16:25:36.591Z · LW(p) · GW(p)
Perhaps, but another general trend in philosophy seems to be that people spend centuries arguing over definitions. Anyone who points that out will be necessarily making a meta-critique and hence not be a point on a chart (not that lukeprog's theory will necessarily be like that; just have to wait and see).
comment by [deleted] · 2016-01-13T04:06:39.664Z · LW(p) · GW(p)
Emotivism is a meta-ethical view that claims that ethical sentences do not express propositions but emotional attitudes. Why is it an either-or?
↑ comment by gjm · 2016-01-13T11:16:06.870Z · LW(p) · GW(p)
It isn't; someone might perfectly well hold that ethical sentences express both propositions and emotional attitudes. But those people would not be classified as emotivists. It happens that some people hold the more specific position called emotivism, and it's useful to have a word for it.
↑ comment by Richard_Kennaway · 2016-01-13T09:48:10.484Z · LW(p) · GW(p)
Because most people cannot count any higher than one.
comment by grouchymusicologist · 2011-04-25T19:05:29.692Z · LW(p) · GW(p)
Sometimes one hears the term "moral realism," and in fact that term appears pretty often in your bibliography but not in the main text of your post. Would I be right to think that it comprises everything on the flowchart downstream of the answer "Yes" to the question "Are those beliefs about facts that are constituted by something other than human opinion?"?
comment by Yoav Ravid · 2019-04-25T08:06:37.129Z · LW(p) · GW(p)
Flowchart is gone :|
comment by thomblake · 2011-04-25T22:00:29.710Z · LW(p) · GW(p)
Tangent: I think Ayer's observation was correct but he had the implication backwards. The English sentence "Yuck!" contains the assertion "That is bad." and is truth-apt.
I have launched into arguments with people after they expressed distaste, and I think it was at least properly grammatical. A start: "What's yucky about that?"
↑ comment by Scott Alexander (Yvain) · 2011-04-25T22:44:58.571Z · LW(p) · GW(p)
When I was in Thailand, I saw some local tribesmen eat a popular snack of giant beetles. I said "Yuck!" and couldn't watch them. However, I recognize that there's nothing weirder about eating a bug than about eating a chicken and that they're perfectly healthy and nutritious to people who haven't been raised to fear eating them.
↑ comment by Amanojack · 2011-04-27T17:30:16.360Z · LW(p) · GW(p)
To interpret "Yuck!" as "That is bad/yucky" is to turn what is ostensibly an expression of subjective experience into an ostensibly "objective" statement. You may as well keep it subjective and interpret it as "I am experiencing revulsion." But you'd have to be a pretty cunning arguer to get into a debate about whether another person is really having a subjective experience of revulsion!
↑ comment by thomblake · 2011-04-27T17:37:32.722Z · LW(p) · GW(p)
It's both - expressing revulsion has a normative component, and so does even experiencing revulsion.
To illustrate: If I eat something and exclaim, "Oishii!", that not only expresses that I am "experiencing deliciousness", but also that the thing I'm tasting "is delicious" - my wife can try it out with the expectation that when she eats it she will also "experience deliciousness". It is a good-tasting thing.
comment by ata · 2011-04-25T19:16:03.708Z · LW(p) · GW(p)
This is not quite correct. The error theorist can hold that a statement like "Murder is not wrong" is true, for they think that murder is not wrong or right.
Should that be "The error theorist can't hold that a statement like 'Murder is not wrong' is true"?
(Also, it's not clear to me that classifying error theory as cognitivist is correct. If it claims that all moral statements are based on a fundamentally mistaken intuition, so that "Murder is wrong" has no more factual content than "Murder is flibberty", then is it not asserting that moral claims are not coherent enough to actually be proper beliefs (even false ones)? (And if classifying a metaethic as cognitivist requires only that it implies that moral claims feel like proper beliefs, not necessarily that they actually are proper beliefs, then that would include emotivism too in most cases.))
↑ comment by Alicorn · 2011-04-25T19:37:12.050Z · LW(p) · GW(p)
This is not quite correct. The error theorist can hold that a statement like "Murder is not wrong" is true, for they think that murder is not wrong or right.
Should that be "The error theorist can't hold that a statement like 'Murder is not wrong' is true"?
No. The error theorist may hold "murder is not wrong" and "murder is not right" to be true. Ey just has to hold "murder is wrong" and "murder is right" to be false, and if ey wants to endorse the "not" statements I guess a rule that "things don't have to be either right or wrong" must operate in the background.
↑ comment by lukeprog · 2011-04-25T19:21:39.646Z · LW(p) · GW(p)
ata,
In this case, I managed to say it correctly the first time. :)
If you're not sure about this stuff, you can read the first chapter of Joyce's 'The Myth of Morality', the central statement of contemporary error theory.
↑ comment by ata · 2011-04-25T19:37:29.628Z · LW(p) · GW(p)
I can see how an error theorist would agree with "Murder is not wrong" in the same sense in which I'd agree with "Murder is not purple", but it's a strange and not very useful sense. My impression had been that error theorists claim that there are no "right" or "wrong" buckets to sort things into in the first place, rather than proposing that both buckets are there but empty — more like ignosticism than atheism. Am I mistaken about that?
↑ comment by Larks · 2011-04-25T20:59:43.326Z · LW(p) · GW(p)
Error theorists believe that when people say "Murder is wrong", those people are actually trying to claim that it is a fact that murder has the property of being wrong. However, those people are incorrect (error theorists think) because murder does not have the property of being wrong - because nothing has the property of being wrong.
It's not about whether or not there are buckets - error theory just says that most people think there is stuff in buckets, but they're incorrect.
↑ comment by prase · 2011-04-26T08:34:32.189Z · LW(p) · GW(p)
However, those people are wrong (error theorists think) because Murder does not have the property of being wrong - because nothing has the property of being wrong.
I smell a peculiar odour of inconsistency.
(That means: add some modifier, such as "morally", to the second "wrong"; otherwise it sounds really weird.)
comment by NihilCredo · 2011-04-26T22:58:27.490Z · LW(p) · GW(p)
How neat is the dichotomy between cognitivists and non-cognitivists? Are there significant philosophical factions holding positions such as
- "Murder is wrong" is a statement of belief, but it also expresses an emotion (and morality's peculiar charm comes from appealing both to a person's analytical mind and to their instincts)
- Some people approach morality as a system of beliefs, others as gut reactions, and this is connected to their personalities in interesting ways
- Or perhaps the same person can shift over time from gut reactions to believing in absolute laws to believing that morality is a natural phenomenon, paralleling for example their growth from infant to child to adult
et cetera?
comment by NancyLebovitz · 2011-04-26T10:19:03.343Z · LW(p) · GW(p)
I'm wondering whether emotive responses lack logical content, and also whether belief-based morality requires emotive backing (failure of utilitarianism--yuck!) to move people to action.
comment by gimpf · 2011-04-25T19:55:13.495Z · LW(p) · GW(p)
Off-Topic: At least for me, your text feels like it is "cut off" - it does not seem to have closure - like a classical concerto that is stopped after the soloist's final cadenza, before the orchestra sets in again. Is this intended?
comment by Manfred · 2011-04-25T19:03:50.269Z · LW(p) · GW(p)
One major debate in moral psychology concerns whether moral judgments require some (defeasible) motivation to adhere to the moral judgment (motivational internalism), or whether one can make a moral judgment without being motivated to adhere to it (motivational externalism).
One of the first two "moral judgements" in this confusing sentence is probably a typo. "Defeasible" just makes things more confusing. Maybe follow the vein of your linked Wikipedia paragraph more closely?
comment by Perplexed · 2011-04-29T05:46:04.012Z · LW(p) · GW(p)
Our moral judgments are greatly affected by pointing magnets at the point in our brain that processes theory of mind.
The way this is worded makes it seem that the result is produced by static magnetic fields. And that makes it sound like 19th century pseudo-science.
We use our recently-evolved neocortex to make utilitarian judgments, and deontological judgments tend to come from our older 'chimp' brains.
And the way this is worded makes it seem that you think that the neo-cortex is something that evolved since we separated from the chimps.
Moral naturalists tend to think that moral facts can be accessed simply by doing science.
Do they think that it is important to get the science right? Or is it enough just to signal an interest in pop-science to be recognized as a moral meta-giant?
↑ comment by lukeprog · 2011-04-30T13:32:44.652Z · LW(p) · GW(p)
the way this is worded makes it seem that you think that the neo-cortex is something that evolved since we separated from the chimps
I was trying to make use of Greene's phrase: 'inner chimp.' But you're right; it's not that accurate. I've adjusted the wording above.
↑ comment by Perplexed · 2011-04-30T15:16:07.810Z · LW(p) · GW(p)
I was trying to make use of Greene's phrase: 'inner chimp.'
I don't think it is Greene's phrase. I spent some time searching, and can find only one place where he used it - a 2007 RadioLab interview with Krulwich. I would be willing to bet that he was primed to use that phrase by the journalist. He doesn't even use the word chimp in the cited paper.
In any case, Greene's arguments are incoherent even by the usual lax standards of evolutionary psychology and consequentialist naturalistic ethics. He suggests that a consequentialist foundation for ethics is superior to a deontological foundation because 'consequentialist moral intuitions' flow from a more recently evolved portion of the brain.
Now it should be obvious that one cannot jump from 'more recently evolved' to 'superior as a moral basis'. You can't even get from 'more recently evolved' to 'more characteristically human'. Maybe you can get to 'more idiosyncratically human'.
But even that only helps if you are comparing moral judgements on which deontologists and consequentialists differ. But Greene does not do that. Instead of comparing two different judgements about the same situation, he discusses two different situations, in both of which pretty-much everyone's moral intuitions agree. He calls the intuitions that everyone has in one situation 'consequentialist' and the intuitions in the other situation 'deontological'!
Now, most people would object that deontology has nothing to do with intuition. Greene has an answer:
In sum, if it seems that I have simply misunderstood what Kant and deontology are all about, it’s because I am advancing an alternative hypothesis to the standard Kantian/deontological understanding of what Kant and deontology are all about. I am putting forth an empirical hypothesis about the hidden psychological essence of deontology, and it cannot be dismissed a priori for the same reason that tropical islanders cannot know a priori whether ice is a form of water.
And so, having completely restructured the playing field, he reaches the following conclusions:
The argument presented above makes trouble for people in search of rationalist theories that can explain and justify their emotionally-driven deontological moral intuitions. But rationalist deontologists may not be the only ones who should think twice.
The arguments presented above cast doubt on the moral intuitions in question regardless of whether one wishes to justify them in abstract theoretical terms. This is, once again, because these intuitions appear to have been shaped by morally irrelevant factors having to do with the constraints and circumstances of our evolutionary history. This is a problem for anyone who is inclined to stand by these intuitions, and that “anyone” includes nearly everyone.
I’ve referred to these intuitions and the judgments they underwrite as “deontological,” but perhaps it would be more accurate to call them non-consequentialist. After all, you don’t have to be a card-carrying deontologist to think that it’s okay to eat in restaurants when people in the world are starving, that it’s inherently good that criminals suffer for their crimes, and that it would be wrong to push the guy off the footbridge. These judgments are perfectly commonsensical, and it seems that the only people who are inclined to question them are card-carrying consequentialists.
Does that mean that all non-consequentialists need to rethink at least some of their moral commitments? I humbly suggest that the answer is yes.
Let me get this straight. The portions of our brains that generate what Greene dubs 'deontological intuitions' are evolutionarily ancient, present in all animals. So Greene dismisses those intuitions as "morally irrelevant" since they ultimately arise from "factors having to do with the constraints and circumstances of our evolutionary history". But our 'consequentialist intuitions' are morally relevant because they come from the neo-cortex, a region of the brain that exists only in mammals and which is particularly enlarged in humans. Yet, somehow, he doesn't think that these intuitions are tainted by the contingent nature of their evolutionary history.
↑ comment by lukeprog · 2011-04-30T15:23:52.471Z · LW(p) · GW(p)
I remember Greene's position being more nuanced than that, but it's been a while since I read his dissertation. In any case, I'm not defending his view. I only claimed that (in its revised wording) "We use our recently-evolved neocortex to make utilitarian judgments, and deontological judgments tend to come from evolutionarily older parts of our brains."
↑ comment by Peterdjones · 2011-04-30T16:10:43.909Z · LW(p) · GW(p)
That's a distinction that makes sense if deontology is hardwired whilst consequentialism varies with evidence.