Could evolution have selected for moral realism?
post by John_Maxwell (John_Maxwell_IV) · 2012-09-27T04:25:52.580Z · LW · GW · Legacy · 53 comments
I was surprised to see the high number of moral realists on Less Wrong, so I thought I would bring up a (probably unoriginal) point that occurred to me a while ago.
Let's say that all your thoughts either seem factual or fictional. Memories seem factual, stories seem fictional. Dreams seem factual, daydreams seem fictional (though they might seem factual if you're a compulsive fantasizer). Although the things that seem factual match up reasonably well to the things that actually are factual, this isn't the case axiomatically. If deviating from this pattern is adaptive, evolution will select for it. This could result in situations like: the rule that pieces move diagonally in checkers seems fictional, while the rule that you can't kill people seems factual, even though they're both just conventions. (Yes, the rule that you can't kill people is a very good convention, and it makes sense to have heavy default punishments for breaking it. But I don't think it's different in kind from the rule that you must move diagonally in checkers.)
I'm not an expert, but it definitely seems as though this could actually be the case. Humans are fairly conformist social animals, and it seems plausible that evolution would've selected for taking the rules seriously, even if it meant using the fact-processing system for things that were really just conventions.
Another spin on this: We could see philosophy as the discipline of measuring, collating, and making internally consistent our intuitions on various philosophical issues. Katja Grace has suggested that the measurement of philosophical intuitions may be corrupted by the desire to signal on the part of the philosophy enthusiasts. Could evolutionary pressure be an additional source of corruption? Taking this idea even further, what do our intuitions amount to at all aside from a composite of evolved and encultured notions? If we're talking about a question of fact, one can overcome evolution/enculturation by improving one's model of the world, performing experiments, etc. (I was encultured to believe in God by my parents. God didn't drop proverbial bowling balls from the sky when I prayed for them, so I eventually noticed the contradiction in my model and deconverted. It wasn't trivial--there was a high degree of enculturation to overcome.) But if the question has no basis in fact, like the question of whether morals are "real", then genes and enculturation will wholly determine your answer to it. Right?
Yes, you can think about your moral intuitions, weigh them against each other, and make them internally consistent. But this is kind of like trying to add resolution back into an extremely pixelated photo--just because it's no longer obviously "wrong" doesn't guarantee that it's "right". And there's the possibility of path-dependence--the parts of the photo you try to improve initially could have a very significant effect on the final product. Even if you think you're willing to discard your initial philosophical conclusions, there's still the possibility of accidentally destroying your initial intuitional data or enculturing yourself with your early results.
To avoid this possibility of path-dependence, you could carefully document your initial intuitions, pursue lots of different paths to making them consistent in parallel, and maybe even choose a "best match". But it's not obvious to me that your initial mix of evolved and encultured values even deserves this preferential treatment.
Currently, I disagree with what seems to be the prevailing view on Less Wrong that achieving a Really Good Consistent Match for our morality is Really Darn Important. I'm not sure that randomness from evolution and enculturation should be treated differently from random factors in the intuition-squaring process. It's randomness all the way through either way, right? The main reason "bad" consistent matches are considered so "bad", I suspect, is that they engender cognitive dissonance (e.g. maybe my current ethics says I should hack Osama Bin Laden to death in his sleep with a knife if I get the chance, but this is an extremely bad match for my evolved/encultured intuitions, so I would experience a ton of cognitive dissonance actually doing it). But cognitive dissonance seems to me like just another aversive experience to factor into my utility calculations.
Now that you've read this, maybe your intuition has changed and you're a moral anti-realist. But in what sense has your intuition "improved" or become more accurate?
I really have zero expertise on any of this, so if you have relevant links please share them. But also, who's to say that matters? In what sense could philosophers have "better" philosophical intuition? The only way I can think of for theirs to be "better" is if they've seen a larger part of the landscape of philosophical questions, and are therefore better equipped to build consistent philosophical models (example).
53 comments
Comments sorted by top scores.
comment by drnickbone · 2012-09-27T11:19:25.124Z · LW(p) · GW(p)
I was surprised to see the high number of moral realists on Less Wrong
Just a guess, but this may be related to the high number of consequentialists. For any given function U to evaluate consequences (e.g. a utility function) there are facts about which actions maximize that function. Since what a consequentialist thinks of as a "right" action is what maximizes some corresponding U, there are (in the consequentialist's eyes) moral facts about what are the "right" actions.
Similar logic applies to rule consequentialism by the way (there may well be facts of the matter about which moral rules would maximize the utility function if generally adopted).
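A minimal sketch of this point (Python; the actions, outcomes, and utility function are invented for illustration): once a particular U is fixed, "which available action maximizes U" is settled by computation rather than opinion.

```python
# Toy illustration: given a fixed utility function U over outcomes, the question
# "which available action maximizes U?" has a determinate, factual answer.

def best_action(actions, outcome_of, U):
    """Return the action whose outcome scores highest under U."""
    return max(actions, key=lambda a: U(outcome_of(a)))

# Invented example: outcomes are numbers of people helped; U just counts them.
actions = ["donate", "volunteer", "do_nothing"]
outcome_of = {"donate": 10, "volunteer": 4, "do_nothing": 0}.get
U = lambda people_helped: people_helped

assert best_action(actions, outcome_of, U) == "donate"  # a fact, relative to this U
```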
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2012-09-27T16:31:39.723Z · LW(p) · GW(p)
That may be true, but I don't think that accounts for what is meant by "moral realism". Yes, it's a confused term with multiple definitions, but it usually means that there is a certain utility function that is normative for everyone -- as in you are morally wrong if you have a different utility function.
Replies from: drnickbone
↑ comment by drnickbone · 2012-09-27T19:47:11.605Z · LW(p) · GW(p)
I think this is more the distinction between "objectivism" and "subjectivism", rather than between "realism" and "anti-realism".
Let's suppose that different moral agents find they are using different U-functions to evaluate consequences. Each agent describes their U as just "good" (simpliciter) rather than as "good for me" or as "good from my point of view". Each agent is utterly sincere in their ascription. Neither agent has any inconsistency in their functions, or any reflective inconsistency (i.e. neither discovers that under their existing U it would be better for them to adopt some other U' as a function instead). Neither can be persuaded to change their mind, no matter how much additional information is discovered.
In that case, we have a form of moral "subjectivism" - basically each agent has a different concept of good, and their concepts are not reconcilable. Yet for each agent there are genuine facts of the matter about what would maximize their U, so we have a form of moral "realism" as well.
Agree though that the definitions aren't precise, and many people equate "objectivism" with "realism".
comment by lukeprog · 2012-09-27T13:21:49.782Z · LW(p) · GW(p)
Two books on evolutionary selection for moral realism:
Replies from: pragmatist, roystgnr
↑ comment by pragmatist · 2012-10-02T14:51:13.266Z · LW(p) · GW(p)
A good article on the structure of evolutionary debunking arguments in ethics (sorry, gated):
http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0068.2010.00770.x/abstract
↑ comment by roystgnr · 2012-09-28T20:08:47.380Z · LW(p) · GW(p)
I would add The Science of Good and Evil
comment by Raiden · 2012-09-27T05:17:08.844Z · LW(p) · GW(p)
But it's not obvious to me that your initial mix of evolved and encultured values even deserves this preferential treatment.
But your initial mix of evolved and encultured values are all you have to go on. There is no other source of values or intuitions. Even if you decide that you disagree with a value, you're using other evolved or encultured intuitions to decide this. There is literally nothing you can use except these. A person who abandons their religious faith after some thought is using the value "rational thought" against "religious belief." This person was lucky enough to have "rational thought" instilled by someone as a value, and have it be strong enough to beat "religious belief." The only way to change your value system is by using your value system to reflect upon your value system.
Replies from: None, John_Maxwell_IV
↑ comment by [deleted] · 2012-09-27T07:05:49.118Z · LW(p) · GW(p)
The only way to change your value system is by using your value system to reflect upon your value system.
I agree with the message of your post and I up-voted it, but this sentence isn't technically true. Outside forces that aren't dependent on your value system can change your value system too. For example if you acquire a particular behaviour-altering parasite or ingest substances that alter your hormone mix. This is ignoring things like you losing your memory or Omega deciding to rewire your brain.
Our values are fragile; some see this as a reason to not be too concerned with them. I find this a rationalization similar to the ones used to deal with the fragility of life itself. Value deathism has parallel arguments to deathism.
What some here might call The Superintelligent Will, but I see as a logical outgrowth of der Wille zur Macht, is to stamp your values on an uncaring universe.
Replies from: Multiheaded, endoself, John_Maxwell_IV
↑ comment by Multiheaded · 2012-10-02T14:40:35.868Z · LW(p) · GW(p)
What some here might call The Superintelligent Will, but I see as a logical outgrowth of der Wille zur Macht, is to stamp your values on an uncaring universe.
You totally stole that from me!
Replies from: None
↑ comment by [deleted] · 2012-10-02T16:32:26.156Z · LW(p) · GW(p)
Yeah I totally did, it fit my previous thinking (was very into Nietzsche a few years back too) and I've been building on it since.
Since this is, I think, the second time you've made a comment like this, I'm wondering why exactly you feel the need to point this out. I mean surely you realize you've stolen stuff from me too, right? And we both stole loads from a whole bunch of other people. Is this kind of like a bonding fist bump, or a call for me to name drop you more?
Those who read our public exchanges know we are on good terms and that I like your stuff; not sure what more name dropping would do for you beyond that, especially since this is material from our private email exchanges and not a public article I can link to. If I recall the exchange, the idea was inspired by a one-line reply you made in a long conversation, so it's not exactly something easily quotable either.
↑ comment by endoself · 2012-09-27T23:15:45.346Z · LW(p) · GW(p)
What some here might call The Superintelligent Will, but I see as a logical outgrowth of der Wille zur Macht, is to stamp your values on an uncaring universe.
Is this why people like Nietzsche, or do most people who like Nietzsche have different reasons?
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-09-27T09:58:26.415Z · LW(p) · GW(p)
Our values are fragile, some see this as a reason to not be too concerned with them.
I think it really depends on the exact value change we're talking about. There's an analogue for death/aging--you'd probably greatly prefer aging another 10 years, then being frozen at that biological age forever, over aging and dying normally. In the same way, I might not consider a small drift in apparently unimportant values too big a deal in the grand scheme of things, and might not choose to spend resources guarding against this (slippery slope scenarios aside).
In practice, people don't seem to be that concerned with guarding against small value changes. They do things like travel to new places, make new friends, read books, change religions, etc., all of which are likely to change what they value, often in unpredictable ways.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-09-27T05:46:52.077Z · LW(p) · GW(p)
But your initial mix of evolved and encultured values are all you have to go on.
I don't think this statement is expressing a factual question. If it is, hopefully "I could generate values randomly" is a workable counterargument.
It's also not quite clear what you mean by "initial" mix. My values as a 3-year-old were much different than the ones I have today. My values prior to creating this post were likely different from my values after creating it. Which set of values is the "initial" one that is "all I have to go on"?
Where does enculturation stop and moral reflection begin? Is there any reason to distinguish them? Should we treat growing up in a particular society as an intuitions permutation of a different/preferred sort than happening to have a certain train of philosophical thought early on?
Abdul grew up in an extremist Pakistani village, but on reflection, he's against honor killings. Bruce grew up in England, but on reflection, he's in favor of honor killings. What do you say to each?
I think most LW readers don't see much sacrosanct about evolved values: Some people have added layers of enculturation and reflection that let them justify crazy stuff. (Ex: pretty much every "bad" thing anyone has done ever, if we're to believe that everyone's the hero of their own life story.) So we LWers have already enculturated/reflected ourselves to the point where bare-bones "evolved" values would be considered a scary starting point, I suspect.
Infuriation and "righteous anger" are evolved intuitions; I assume most of us are past the point of endorsing "righteous anger" as being righteous/moral.
A person who abandons their religious faith after some thought is using the value "rational thought" against "religious belief." This person was lucky enough to have "rational thought" instilled by someone as a value, and have it be strong enough to beat "religious belief."
Do you consider God's existence to be an "is" factual question or an "ought" values question? I consider it a factual question myself.
Replies from: Viliam_Bur, Raiden, None
↑ comment by Viliam_Bur · 2012-09-27T09:17:03.558Z · LW(p) · GW(p)
I think most LW readers don't see much sacrosanct about evolved values
Maybe because they think about them in far mode. If you think about values as some ancient commandments written on some old parchment, it does not seem like rewriting the parchment could be a problem.
Let's try it in the near mode. Imagine that 1000 years later you are defrosted and see a society optimized for... maximum suffering and torture. It is explained to you that this happened as a result of an experiment to initialize the superhuman AI with random values... and this was what the random generator generated. It will be like this till the end of the universe. Enjoy the hell.
What is your reaction to this? Some values were replaced by some other values -- thinking abstractly enough, it seems like nothing essential has changed; we are just optimizing for Y instead of X. Most of the algorithm is the same. Even many of the AI actions are the same: it tries to better understand human psychology and physiology, get more resources, protect itself against failure or sabotage, self-improve, etc.
How could you explain what is wrong with this scenario, without using some of our evolved values in your arguments? Do you think that a pebblesorter, concerned only with sorting pebbles, would see an important difference between "human hell" and "human paradise" scenarios? Do you consider this neutrality of the pebblesorter with regards to human concerns (and the neutrality of humans with regards to pebblesorter concerns) to be a desirable outcome?
(No offense to pebblesorters. If we ever meet them, I hope we can cooperate to create a universe with a lot of happy humans and properly sorted heaps of pebbles.)
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-09-27T09:43:17.474Z · LW(p) · GW(p)
How could you explain what is wrong with this scenario, without using some of our evolved values in your arguments?
It's only "wrong" in the sense that I don't want it, i.e. it doesn't accord with my values. I don't see the need to mention the fact that they may have been affected by evolution.
↑ comment by Raiden · 2012-09-27T12:47:01.361Z · LW(p) · GW(p)
It's also not quite clear what you mean by "initial" mix. My values as a 3-year-old were much different than the ones I have today. My values prior to creating this post were likely different from my values after creating it. Which set of values is the "initial" one that is "all I have to go on"?
Sorry, I should have been more clear about that. What I mean is that at any particular moment when one reflects upon their values, one can only use one's current value system to do so. The human value system is dynamic.
Where does enculturation stop and moral reflection begin? Is there any reason to distinguish them?
Like many things in nature, there is no perfectly clear distinction. I generally consider values that I have reflected upon to any degree, especially using my "rational thought" value, to be safe and not dogma.
Do you consider God's existence to be an "is" factual question or an "ought" values question? I consider it a factual question myself.
My "rational thought" value tells me it's an "is" question, but most people seem to consider it a value question.
↑ comment by [deleted] · 2012-09-27T07:13:29.627Z · LW(p) · GW(p)
If it is, hopefully "I could generate values randomly" is a workable counterargument.
But why would you do that if your existing value system wouldn't find that a good idea?
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-09-27T09:59:51.363Z · LW(p) · GW(p)
I wouldn't do that. You misunderstood my response. I said that was my response if he was trying to make an empirical assertion.
comment by thomblake · 2012-09-27T19:03:15.031Z · LW(p) · GW(p)
Here's my take:
The problem with talking about "objective" and "subjective" with respect to ethics (the terms that "realist" and "anti-realist" often get unpacked to) is that they mean different things to people with different intuitions. I don't think there actually is a "what most philosophers mean by the term" for them.
"Objective" either means:
- not subjective, or
- It exists regardless of whether you believe it exists
"Subjective" either means:
- It is different for different people, or
- not objective
So, some people go with definition 1, and some go with definition 2. Very few people go with both Objective[2] and Subjective[1] and recognize that they're not negations of one another.
So you have folks who think that different people have somewhat different utility functions, and therefore morality is subjective. And you have folks who think that a person's utility function doesn't go away when you stop believing in it, and therefore morality is objective. That they could both be true isn't considered within the realm of possibility, and folks on "both sides" don't realize they're talking past each other.
comment by pragmatist · 2012-09-27T07:42:53.811Z · LW(p) · GW(p)
I don't get why you think facts and conventions are mutually exclusive. Don't you think it's a fact that the American President's name is Barack Obama?
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-09-27T10:07:52.815Z · LW(p) · GW(p)
I think it's a fact that there's a widespread convention of referring to him by that name.
I also think it's a fact that there's a widespread taboo against stealing stuff. I don't think it's a fact that stealing stuff is wrong, unless you're using "wrong" as a shorthand to refer to things that have strong/widespread taboos against them. (Once you use the word this way, an argument about whether stealing is wrong becomes an argument over what taboos prevail in the population--not a traditional argument about ethics exactly, is it? So this usage is nonstandard.)
Replies from: drnickbone
↑ comment by drnickbone · 2012-09-27T13:06:22.480Z · LW(p) · GW(p)
I don't think it's a fact that stealing stuff is wrong, unless you're using "wrong" as a shorthand to refer to things that have strong/widespread taboos against them.
But you also said that some such widespread conventions/taboos are good conventions. From your OP:
Yes, the rule that you can't kill people is a very good convention, and it makes sense to have heavy default punishments for breaking it.
So, here's a meta-question for you. Do you think it is a fact that "the rule that you can't kill people is a very good convention"? Or was that just a matter of subjective opinion, which you expressed in the form of a factual claim for rhetorical impact? Or is it itself a convention (i.e. we have conventions to call certain things "good" and "bad" in the same way we have conventions to call certain things "right" and "wrong")?
On a related point, notice that certain conventions do create facts. It is a convention that Obama is called president, but also a fact that he is president. It is a convention that dollar bills can be used as money, and a fact that they are a form of money.
Or imagine arguing the following "It is a convention that objects with flat surfaces and four solid legs supporting them are called tables, but that doesn't mean there are any real tables".
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-09-28T00:14:40.869Z · LW(p) · GW(p)
Do you think it is a fact that "the rule that you can't kill people is a very good convention"?
It's a fact that it's a good convention for helping to achieve my values. So yeah, "the rule that you can't kill people is a very good convention" is a subjective value claim. I didn't mean to frame it as a factual claim. Any time you see me use the word "good", you can probably interpret as shorthand for "good according to my values".
It is a convention that Obama is called president, but also a fact that he is president.
The "fact" that Obama is president is only social truth. Obama is president because we decided he is. If no one thought Obama was president, he wouldn't be president anymore.
The only sense in which "Obama is president" is a true fact is if it's shorthand for something like "many people think Obama is president and he has de facto power over the executive branch of the US government". (Or you could use it as shorthand for "Obama is president according to the Supreme Court's interpretation of US laws" or something like that, I guess.)
In medieval times, at one point, there were competing popes. If I said "Clement VII is pope", that would be a malformed factual claim, 'cause it's not clear how to interpret the shorthand (what sensory experiences would we expect if the proposition "Clement VII is pope" is true?). In this case, the shorthand reveals its insufficiency, and you realize that a conventional claim like this only becomes a factual claim when it's paired with a group of people that respects the convention ("Clement VII is considered the pope in France" is a better-formed factual claim, as is "Clement VII is considered the pope everywhere". Only the first is true.). Oftentimes the relevant group is implied and not necessary to state ("Obama is considered US president by 99+% of those who have an opinion on the issue").
People do argue over conventional stuff all the time, but these aren't arguments over anticipation ("My pope is legit, yours is not!"). Some moral arguments ("abortion is murder!") follow the same form.
Replies from: Spinning_Sandwich
↑ comment by Spinning_Sandwich · 2012-09-30T20:18:24.121Z · LW(p) · GW(p)
You seem to be overlooking the fact that facts involving contextual language are facts nonetheless.
The "fact" that Obama is president is only social truth. Obama is president because we decided he is. If no one thought Obama was president, he wouldn't be president anymore.
There is a counterfactual sense in which this holds some weight. I'm not saying I agree with your claim, but I would at least have to give it more consideration before I knew what to conclude.
But that simply isn't the case (& it's a fact that it isn't, of course). Obama's (present) presidency is not contested, and it is a fact that he is President of the United States.
You could try to argue against admitting facts involving any vagueness of language, but you would run into two problems: this is more an issue with language than an issue with facts; and you have already admitted facts about other things.
comment by timtyler · 2012-09-27T09:41:24.318Z · LW(p) · GW(p)
But if the question has no basis in fact, like the question of whether morals are "real", then genes and enculturation will wholly determine your answer to it. Right?
Conventionally, it's genes, culture and environment. Most conventional definitions of culture don't cover all environmental influences, just those associated with social learning. However, not all learning is social learning. Some would also question the determinism - and pay homage to stochastic forces.
comment by [deleted] · 2012-09-27T07:23:43.727Z · LW(p) · GW(p)
I was surprised to see the high number of moral realists on Less Wrong, so I thought I would bring up a (probably unoriginal) point that occurred to me a while ago.
Surprised? I would say disappointed.
Except when dealing with contrarian Newsome-like weirdness, moral anti-realism doesn't rest on a complicated argument and is basic-level sanity in my opinion. While certainly you can construct intellectual hipster positions in its favour, it is not something half the community should disagree with. The reason I think this is that I suspect most of those who are firmly against it don't know or understand the arguments for it, or they are using "moral realism" in a way that is different from how philosophers use it.
Replies from: Furcas, ArisKatsaris, endoself
↑ comment by Furcas · 2012-09-27T17:27:04.634Z · LW(p) · GW(p)
Most of the LWers who voted for moral realism probably believe that Eliezer's position about morality is correct, and he says that morality is subjunctively objective. It definitely fits Wikipedia's definition of moral realism:
Moral realism is the meta-ethical view which claims that:
- Ethical sentences express propositions.
- Some such propositions are true.
- Those propositions are made true by objective features of the world, independent of subjective opinion.
Replies from: DanArmak, Matt_Simpson, J_Taylor
↑ comment by DanArmak · 2012-09-27T22:49:35.294Z · LW(p) · GW(p)
To the best of my understanding, "subjunctively objective" means the same thing that "subjective" means in ordinary speech: dependent on something external, and objective once that something is specified. So Eliezer's morality is objective once you specify that it's his morality (or human morality, etc.) and then propositions about it can be true or false. "Turning a person into paperclips is wrong" is an ethical proposition that is Eliezer-true and Human-true and Paperclipper-false, and Eliezer's "subjunctive objective" view is that we should just call that "true".
I disagree with that approach because this is exactly what is called being "subjective" by most people, and so it's misleading. As if the existing confusion over philosophical word games wasn't bad enough.
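One way to picture the reading above is a rough Python sketch (the value-system dictionaries and names are invented for illustration): "wrong" as a two-place predicate over a value system and an action, which becomes an ordinary fact-stating one-place predicate once a particular value system is fixed.

```python
from functools import partial

def wrong(value_system, action):
    """Two-place 'wrong': true iff the given value system prohibits the action."""
    return action in value_system["prohibited"]

human_values = {"prohibited": {"turn a person into paperclips"}}
clippy_values = {"prohibited": set()}  # hypothetical paperclipper: forbids nothing

wrong_for_humans = partial(wrong, human_values)   # fix the value system...
wrong_for_clippy = partial(wrong, clippy_values)  # ...and the predicate states facts

assert wrong_for_humans("turn a person into paperclips")      # Human-true
assert not wrong_for_clippy("turn a person into paperclips")  # Paperclipper-false
```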
Replies from: Spinning_Sandwich, Furcas
↑ comment by Spinning_Sandwich · 2012-09-30T20:30:08.369Z · LW(p) · GW(p)
"Turning a person into paperclips is wrong" is an ethical proposition that is Eliezer-true and Human-true and Paperclipper-false, and Eliezer's "subjunctive objective" view is that we should just call that "true".
Despite the fact that we might have a bias toward the Human-[x] subset of moral claims, it's important to understand that such a theory does not itself favor one over the other.
It would be like a utilitarian taking into account only his family's moral weights in any calculations, so that a moral position might be Family-true but Strangers-false. It's perfectly coherent to restrict the theory to a subset of its domain (and speaking of domains, it's a bit vacuous to talk of paperclip morality, at least to the best of my knowledge of the extent of their feelings...), but that isn't really what the theory as a whole is about.
So if we as a species were considering assimilation, and the moral evaluation of this came up Human-false but Borg-true, the theory (in principle) is perfectly well equipped to decide which would ultimately be the greater good for all parties involved. It's not simply false just because it's Human-false. (I say this, but I'm unfamiliar with Eliezer's position. If he's biased toward Human-[x] statements, I'd have to disagree.)
↑ comment by Furcas · 2012-09-28T01:11:49.971Z · LW(p) · GW(p)
I disagree with that approach because this is exactly what is called being "subjective" by most people
Those same people are badly confused, because they usually believe that if ethical propositions are "subjective", it means that the choice between them is arbitrary. This is an incoherent belief. Ethical propositions don't become objective once you specify the agent's values; they were always objective, because we can't even think about an ethical proposition without reference to some set of values. Ethical propositions and values are logically glued together, like theorems and axioms.
You could say that the concept of something being subjective is itself a confusion, and that all propositions are objective.
That said, I share your disdain for philosophical word games. Personally, I think we should do away with words like 'moral' and 'good', and instead only talk about desires and their consequences.
↑ comment by Matt_Simpson · 2012-09-27T18:08:51.292Z · LW(p) · GW(p)
This is why I voted for moral realism. If instead moral realism is supposed to mean something stronger, then I'm probably not a moral realist.
↑ comment by ArisKatsaris · 2012-09-28T00:08:54.084Z · LW(p) · GW(p)
moral anti-realism doesn't rest on a complicated argument
I've not studied the arguments of moral anti-realism, but if I had to make a guess it would be that moral anti-realism probably rests on how you can't extract "ought" statements from "is" statements.
But since "is" statements can be considered as functions operating on "ought" values (e.g. the is-statement "burning people causes them pain", would produce from an ought-statement "you oughtn't cause pain to people" the more specific ought-statement "you oughtn't burn people alive"), the possibility remains open that there can exist universal moral attractive fixed sets, deriving entirely from such "is" transformations, regardless of the opening person-specific or species-specific moral set, much like any starting shape that follows a specific set of transformations will become the Sierpinski triangle.
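A small sketch of the Sierpinski analogy only (Python; nothing here is about morality itself): the "chaos game" applies the same set of transformations over and over, and traces out the same attractor no matter where it starts.

```python
import random

VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]  # corners of a triangle

def chaos_game(start, steps=10000):
    """Repeatedly move halfway toward a randomly chosen vertex; the visited
    points approximate the Sierpinski triangle regardless of `start`."""
    x, y = start
    points = []
    for _ in range(steps):
        vx, vy = random.choice(VERTICES)
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((x, y))
    return points

# Two very different starting points end up filling in the same attractor.
a = chaos_game((0.1, 0.1))
b = chaos_game((0.9, 0.7))
```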
A possible example for a morally "real" position might e.g. be "You oughtn't decrease everyone's utility in the universe." or "You oughtn't do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn't do."
Baby-eaters and SuperHappies and Humans may not be in agreement about what is best, but all three of them could come up with some ideas about things which would be awful for all of them... I don't think that this need change, no matter how many species with moral instinct one adds to the mix. So I "leaned" towards moral realism.
Of course, if all the above has nothing to do with what moral realism and moral anti-realism mean... oops.
Replies from: mwengler
↑ comment by mwengler · 2012-10-03T16:03:27.498Z · LW(p) · GW(p)
the possibility remains open that there can exist universal moral attractive fixed sets, deriving entirely from such "is" transformations, regardless of the opening person-specific or species-specific moral set,
So you've got these attractive sets and maybe 90% or 99% or 99.9% or 99.99% of humans or humans plus some broader category of conscious/intelligent entities agree. What to do about the exceptions? Pretend they don't exist? Kill them because they are different and then pretend they never existed or couldn't exist? In my opinion, what you have as a fact is that 99.999% of humans agree X is wrong and .001% don't. The question of moral realism is not a factual one, it is a question of choice: do you CHOOSE to declare what 99.999% have an intuition towards as binding on the .001% that don't, or do you CHOOSE to believe that the facts are that the various intuitions have prevalences, some higher than others, some very high indeed, and that's all you actually KNOW?
I effectively feel bound by a lot of my moral intuitions, that is more or less a fact. As near as I can tell, my moral intuitions evolved as part of the social development of animals, then mammals, then primates, then Homo. It is rational to assume that the mix of moral intuitions is fairly fine-tuned to optimize the social contribution to our species' fitness, and it is more or less a condensation of facts to say that the social contribution to our fitness is larger than the social contribution to the fitness of any other species on the planet.
So I accept that human moral intuition is an organ like the brain or the islets of Langerhans. I accept that a fair amount can be said about how the islets of Langerhans function, and how the brain functions, when things are going well. Also, we know a lot about how the islets of Langerhans and how the brain function when things are apparently not going so well, diseases one might say. I'd even go so far as to say I would prefer to live in a society dominated by people without diabetes and who are not sociopaths (people who seem to lack many common moral intuitions). I'd go so far as to say I would support policies including killing sociopaths and their minions, and including spending only a finite amount of resources on more expensive non-killing ways of dealing with sociopaths and diabetics.
But it is hard for me to accept that it is rational to fall in to the system instead of seeing it from outside. For me to conclude that my moral intuitions are objectively real, like the charge on an electron or the electronic properties of doped silicon, is projection, it seems to me. It is identical to my concluding that one mammal is beautiful and sexy and another is dull, when it is really the triggering of an evolved sexual mechanism in me that paints the one mammal one way and the other the more boring way. If it is more accurate to understand that the fact that I am attracted to one mammal is not because she is objectively more beautiful than another, then it is more accurate to say that the fact that I have a moral intuition is not because I am plugged in to some moral fact of the universe, but because of an evolved reaction I have. The fact that most men or many men find woman A beautiful and woman B to be blah doesn't mean that all men ought to find A beautiful and B blah, any more than the fact that many (modern) men feel slavery is wrong means they are not projecting their social construct into a realm of fact which could fruitfully be held to a higher standard.
Indeed, believing that our social constructs, our political truths, are REAL truths is clearly adaptive in a social species. Societies that encourage strong identifications with the values of the society are robust. Societies in which it is right to kill the apostates because they are wrong, evil, have a staying power. But my life as a scientist has consisted of my understanding that my wanting something to be true is not ANY evidence for its truth. I bring that to my American humanity. So even though I will support the killing of our enemies, I don't think that it is a FACT that it is right to kill the enemies of America any more than it is a FACT that it is right to kill the enemies of Islam.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2012-10-03T17:10:20.219Z · LW(p) · GW(p)
So you've got these attractive sets and maybe 90% or 99% or 99.9% or 99.99% of humans or humans plus some broader category of conscious/intelligent entities agree. What to do about the exceptions? Pretend they don't exist?
What does agreement have to do with anything? Anyway such moral attractive sets either include an injunction of what to do with people that disagree with them or they don't. And even if they do have such moral injunctions, it still doesn't mean that my preferences would necessarily be to follow said injunctions.
People aren't physically forced to follow their moral intuitions now, and they aren't physically forced to follow a universal moral attractive set either.
The question of moral realism is not a factual one
That's what a non moral-realist would say, definitely.
do you CHOOSE to declare what 99.999% have an intuition towards as binding on the .001% that don't
What does 'declaring' have to do with anything? For all I know this moral attractive set would contain an injunction against people declaring it true or binding. Or it might contain an injunction in favour of such declarations, of course.
I don't think you understood the concepts I was trying to communicate. I suggest you tone down on the outrage.
Replies from: mwengler
↑ comment by mwengler · 2012-10-04T18:33:49.907Z · LW(p) · GW(p)
Moral realism is NOT the idea that you can derive moral imperatives from a mixture of moral imperatives and other non-moral assumptions. Moral realism is NOT the idea that if you study humans you can describe "conventional morality," make extensive lists of things that humans tend, sometimes overwhelmingly, to consider wrong.
Moral realism IS the idea that there are things that are actually wrong.
If you are a moral realist, and you provide a mechanism for listing some moral truths, then you pretty much by definition are wrong, immoral, if you do not align your action with those moral truths.
An empirical determination of what are the moral rules of many societies, or most societies, or the moral rules that all societies so far have had in common is NOT an instantiation of a moral realist theory, UNLESS you assert that the rules you are learning about are real, that it is in fact immoral or evil to break them. If you meant something wildly different by "moral attractive sets" than what is incorporated by the idea of where people tend to come down on morality, then please elucidate; otherwise I think for the most part I am working pretty consistently with the attractive set idea in saying these things.
If you think you can be a "moral realist" without agreeing that it is immoral to break or not follow a moral truth, then we are just talking past each other and we might as well stop.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2012-10-04T19:07:52.591Z · LW(p) · GW(p)
Moral realism IS the idea that there are things that are actually wrong.
Okay, yes. I agree with that statement.
If you are a moral realist, and you provide a mechanism for listing some moral truths, then you pretty much by definition are wrong, immoral, if you do not align your action with those moral truths.
Well, I guess we can indeed define an "immoral" person as someone who does morally wrong things; though a more useful definition would probably be to define an immoral person as someone who does them more so than average. So?
If you think you can be a "moral realist" without agreeing that it is immoral to break or not follow a moral truth
It's reasonable to define an action as "immoral" if it breaks or doesn't follow a moral truth.
But how in the world are you connecting these definitions to all your earlier implications about pretending dissenters don't exist, or killing them and then pretending they never existed in the first place?
Fine, lots of people do immoral things. Lots of people are immoral. How does this "is" statement, by itself, indicate anything about whether we ought ignore said people, execute them, or hug and kiss them? It doesn't say anything about how we should treat immoral people, or how we should respond to the immoral actions of others.
I'm the moral realist here, but it's you who seem to be deriving specific "ought" statements from my "is" statements.
Replies from: mwengler
↑ comment by mwengler · 2012-10-04T21:31:17.854Z · LW(p) · GW(p)
Very interesting, the disagreement unravels.
At one level, yes, I am implicitly assuming certain moral imperatives. Things like "evildoers should be stopped," "evildoers should be punished." The smartest moral realists I have argued with before all proffered a belief in moral realism precisely so I would not think (or they would not have to admit) that their punishing wrongdoers and legislating against "wrong" things was in any way arbitrary or questionable. I think that "evildoers should be stopped" would be among the true statements a moral realist would almost certainly accept, but I was thinking that without stating it. Now it is stated. So my previous statements can be explicitly prefaced: "if morality is real and at some level evildoers should be stopped..."
And indeed the history of the western world, and I think the world as a whole, is that wrongdoers have always been stopped. Usually brutally. So I would ask for some consideration of this implicit connection I had made before you dismiss it as unnecessary.
I think the only meaning of moral realism can be that those things which I conclude are morally real can be enforced on others, indeed must be if "protecting the world from evil" and other such ideas are among the morally real true statements, and all intuition I maintain is that they are. I don't think you can be a moral realist and then sit back and say "yes I'm immoral, lots of other people are immoral, so what? Where does it say I'm supposed to do anything about that?" Because the essence of something being immoral is you ARE supposed to do something about it, I would maintain, and definitions in which morality is just a matter of taste or labelling I don't think will live under the label "moral realism."
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2012-10-05T11:08:22.219Z · LW(p) · GW(p)
I think the only meaning of moral realism can be that those things which I conclude are morally real can be enforced on others,
A moral statement M might perhaps say: "I ought do X." Agreeing perfectly on the universal moral validity and reality and absolute truth of M still doesn't take you one step closer to "I ought force others to do X.", nor even to "I am allowed to force others to do X.".
Real-life examples might be better:
Surely you can understand that a person might both believe "I oughtn't do drugs" and also "The government oughtn't force me not to do drugs."?
And likewise "I ought give money to charity" is a different proposition than "I ought force others to give money to charity"?
That's just from the libertarian perspective, but even the Christian perspective says things like "Bless those who curse you. Pray for those who hurt you." It doesn't say "Force others not to curse you, force others not to hurt you". (Christendom largely abandoned that of course once it achieved political power, but that's a different issue...) The pure-pacifist response to violence is likewise pacifism. It isn't "Force pacifism on others".
There's a long history of moral realism that knows how to distinguish between "I ought X" and "I ought force X on others".
"Because the essence of something being immoral is you ARE supposed to do something about it, I would maintain"
The essence of something being immoral is that one oughtn't do it. Just that.
EDIT TO ADD: Heh, just thinking a bit further about it. Let me mathematize what you said a bit. You're effectively thinking of an inference rule which is as follows.
R1: For any statement M(n):"You ought X" present in the morally-real set, the statement M(n+1):"You ought force others to X" is also in the morally real set.
Such an inference rule (which I do not personally accept) would have horrifying repercussions, because of its infinitely extending capacity. For example by starting with a supposed morally real statement:
M(1): You ought visit your own mother in the hospital.
it'd then go a bit like this.
M(2): You ought force others to visit their mothers in the hospital.
M(3). You ought force others to in turn force others to visit their mothers in the hospital.
...and then...
M(10). You ought establish a vast bureaucracy of forcing others to establish other bureaucracies in charge of forcing people to visit their mothers in the hospital.
...or even
M(100). Genocide on those who don't believe in vast bureaucracy-establishing bureaucracies!
Heh, I can see why, treating R1 as an axiom, you find horror in the concept of morally real statements -- you resolve the problem by thinking the morally real set is empty, so that no further such statements can be added. I just don't accept R1 as an axiom at all.
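A purely illustrative sketch of why R1 is "infinitely extending" (Python; the strings are invented, and the toy makes no attempt to re-index pronouns the way M(2) above does): each application of R1 wraps one more layer of "force others to" around the previous statement, with no stopping point.

```python
def apply_R1(ought_statement):
    """R1: from 'You ought X', infer 'You ought force others to X'."""
    x = ought_statement[len("You ought "):]  # strip the prefix to recover X
    return "You ought force others to " + x

m = "You ought visit your mother in the hospital"
chain = [m]
for _ in range(3):
    m = apply_R1(m)
    chain.append(m)

# chain[1] == "You ought force others to visit your mother in the hospital"
# chain[2] == "You ought force others to force others to visit your mother in the hospital"
# ...and R1 always licenses one more statement, so the chain never closes off.
```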
Replies from: mwengler
↑ comment by mwengler · 2012-10-05T14:33:48.352Z · LW(p) · GW(p)
I think you put your hand solidly on the dull end of the stick here. Let's consider some other moral examples whose violation does come up in real life.
1) I ought not steal candy from Walmart, but it's OK if you do.
2) I ought not steal the entire $500,000 retirement from someone by defrauding them, but it's OK if you want to.
3) I ought not pick a child at random, capture them, hold them prisoner in my house, torture them for my sexual gratification, including a final burst where I dismember them thus killing them painfully and somewhat slowly much to my delight, but it's your choice if you want to.
4) Out of consideration, I won't dump toxic wastes over my neighbor's stream, but that's just me.
My point is, the class of "victimless crime" types of morality is a tiny subset surrounded by moral hypotheses that directly speak to harms and costs accruing to others. Even libertarians who are against police (relatively extreme) are not against private bodyguards. These libertarians try to claim that their bodyguards would not be ordered to do anything "wrong" because 1) morality is real and 2) libertarians can figure out what the rules are with sufficient reliability and accuracy to be trusted to have their might unilaterally make right.
So that's my point about the philosophical basis of moral realism. Does that mean I would NOT enforce rules against dismembering children or stealing? Absolutely not. What it means is I wouldn't kid myself that the system I supported was the truth and that people that disagreed with me were evil. I would instead examine the rules I was developing in light of what kind of society they would produce. MOST conventional morality survives that test; evolution fine-tuned our morality to work pretty economically for smart, talkative primates who hunted and gathered in bands of less than a few hundred each.
But the rest of my point about morality not being "real", not being objectively true independent of the state of the species, is that I wouldn't have a fetish about the rightness of some moral conclusion I had reached. I would recognize that 1) we have more resources to spend on morality now than then, what with being 100s of times richer than those hunter gatherers, and 2) we have a significantly different environment to optimize upon, with the landscape of pervasive and inexpensive information and material items a rather new feature that moral intuitions didn't get to evolve upon.
My point is that morality is an engineering optimization, largely carried out by evolution, but absolutely under significant control of the neocortexes of our species. The moral realists I think will not do as good a job of making up moral systems because they fundamentally miss the point that the thing is plastic and there is in most cases no one "right" answer.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2012-10-05T14:47:01.325Z · LW(p) · GW(p)
This is getting a bit tiresome.
That I rejected the previously implied inference rule
R1: For any X where "I ought X" it also follows "I ought force others to X",
doesn't mean at all that I have to add a different inference rule
R2: For any X where "I ought X" it also follows "...but it's okay if you don't X."
To be perfectly clear to you: I'm rejecting both R1 and R2 as axioms. I've never stated them as axioms of moral realism, nor have I implied them to be such, nor do I believe that any theory of moral realism requires either of them.
I'm getting a bit tired of refuting implications you keep reading in my comments but which I never made. I suggest you stop reading more into my comment than what I actually write.
Replies from: mwengler
↑ comment by mwengler · 2012-10-05T15:23:12.968Z · LW(p) · GW(p)
If you're tired, you're tired.
Truth isn't about making up axioms and throwing away the ones which are inconvenient to your argument. Rather I propose a program of looking at the world and trying to model it.
How successful do you think a sentient species would be that has evolved rules that allow it to thrive in significant cooperation but which has not thought to enforce those rules? How common is such a hands-off approach in the successful human societies which surround us in time and space? It is not deductively true that if you believe in morality as real you will have some truths about enforcing morality on those around you, one way or another. Just as it is not deductively true that all electrons have the same charge or that all healthy humans are conscious and sentient or that shoddily made airplanes tend to crash. But what is the point of a map of the territory that leaves out the mountains surrounding the village for the sake of argument?
It seems to me that your moral realism is trivial. You don't think of morality, it seems to me, as anything other than just another label. Like some things are French and others are not, some are pointillist and others are not, and some are moral and others are not. Morality, like so many other things, MEANS something. This meaning has implications for human behavior and human choices.
If you're tired you're tired, but if you care to, let me ask you this. What is the difference between morality being real and morality being a "real label," just a hashtag we attach to statements that use certain words? The difference to me is that if it is just a hashtag, then I am not obliged to enforce those moral truths on myself or others, whereas if it is something real, then the statement "people ought not allow innocent children to be kidnapped and tortured" means exactly what it says: we are obliged to do something about it.
Whether you are done or not, thank you for this exchange. I had not been aware of my belief that morality being real means it ought to be enforced in some way; now I am aware. In my opinion, a moral realism that does not contain some true statements along those lines is an incomplete one at best, or an insincere or vapid one at worst. But at least I learned not to assume that others talking about morality have this same opinion until I check.
Cheers, Mike
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2012-10-05T15:39:24.205Z · LW(p) · GW(p)
Truth isn't about making up axioms and throwing away the ones which are inconvenient to your argument
What argument? You've never even remotely understood my argument. All this thread has been about trying to explain that I never said those things that you're trying to place in my mouth.
If you want further discussion with me, I suggest you first go back and reread everything I said in the initial comment and only what I said, one more time, and then find a single statement which you think is wrong, and then I'll defend it. I'll defend it by itself, not whatever implications you'll add onto it.
I won't bother responding to anything else you say unless you first do that. I'm not obliged to defend myself against your demonisations of me based on things I never said or implied. Find me one of my statements that you disagree with, not some statement that you need to put in my mouth.
Replies from: mwengler
↑ comment by mwengler · 2012-10-05T18:21:51.784Z · LW(p) · GW(p)
In your very first post you write:
A possible example for a morally "real" position might e.g. be "You oughtn't decrease everyone's utility in the universe." or "You oughtn't do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn't do."
If you wish to build a map for a real territory, but ignore that the map doesn't actually follow many details of the territory, it seems fair enough for others who can see the map and the territory to say "this isn't a very good map, it is missing X, Y, and Z." As you rightly point out, it would not make sense to say "it isn't a very good map because it is not internally consistent." The more oversimplified a map is, the more likely it is to be internally consistent.
I like the metaphor of map and territory: morality refers to an observable feature of human life and it is not difficult to look at how it has been practiced and make statements about it on that basis. A system of morality that accepts neither "morality is personal (my morality doesn't apply to others)" nor "Morality is universal, the point is it applies to everybody" may fit the wonderful metaphor of a very simple axiomatic mathematical system, but in my opinion it is not a map of the human territory of morality.
If you are self-satisfied with an axiomatic system where "moral" is a label that means nothing in real life, then we are talking about different things. If you believe you are proposing a useful map for the human territory called morality, then you must address concerns of "it doesn't seem to really fit that well," and not limit yourself to concerns only of "I said a particular thing that wasn't true."
But if you want to play the axiomatic geometry game, then I do disagree that "You oughtn't do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn't do." is a good possible morally real statement. First off, its negation, which I take to be "It's OK if you do something which every person equipped with moral instinct in the universe, including yourself, judges you oughtn't do." doesn't seem particularly truer or less true than the statement itself. (And I would hope you can see why I was talking about 99% and 99.99% agreement given your original statement in your original post). Second, if your statement is morally real, objective, "made true by objective features of the world, independent of subjective opinion" then please show me how. (The quote is from http://en.wikipedia.org/wiki/Moral_realism )
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2012-10-05T18:47:17.378Z · LW(p) · GW(p)
tldr; you're overestimating my patience to read your page of text, especially since previous such pages just kept accusing me of various things, and they were all wrong. (edit to add: And now that I went back and read it, this one was no exception, accusing me this time of being "self-satisfied with an axiomatic system where "moral" is a label that means nothing in real life". Sorry mate, am no longer bothering to defend against your various, diverse and constantly changing demonisations of me. If I defend against one false accusation, you'll just make up another, and you never update on the fact of how wrong all your previous attempts were.)
But since I scanned to the end to find your actual question:
Second, if your statement is morally real, objective, "made true by objective features of the world, independent of subjective opinion" then please show me how
First of all I said my statement "might" be a possible example of something morally real. I didn't argue that it definitely was such. Secondly, it would be made a possible candidate for being morally real because, e.g., it includes all agents capable of relevant subjective opinion inside it. At that point, it's no longer about subjective opinion, it's about universal opinion. Subjective opinion indicates something that changes from subject to subject. If it's the same for all subjects, it's no longer really subjective.
And I would hope you can see why I was talking about 99% and 99.99%
No, I don't see why. The very fact that my hypothetical statements specified "everyone" and you kept talking about what to do about the remainder was more like evidence to me that you weren't really addressing my points and possibly hadn't even read them.
Replies from: mwengler↑ comment by mwengler · 2012-10-08T14:44:49.674Z · LW(p) · GW(p)
you're overestimating my patience to read ...
Perhaps. And you are underestimating your need to get the last word. But enough about you.
First of all I said my statement "might" be a possible example
I don't know how to have a discussion where the answer to the question "show me how it might be" is "First of all I said [it] might be."
The very fact that my hypothetical statements specified "everyone" and you kept talking about what to do about the remainder was more like evidence to me that you weren't really addressing my points and possibly hadn't even read them.
Well, you already know there are nihilists in the world, and others who don't believe morality is real. So you already know that there are no such statements that "everybody" agrees to. And then you reduce that (already empty) pool of statements every human agrees to even further by bringing all other sentient life that might exist into the required agreement.
Even if you were to tell the intelligent people who have thought about it, "no, you really DO believe in some morality, you are mistaken about yourself," can you propose a standard for developing a list, or even a single statement, that might be a GOOD candidate without attempting to estimate the confidence with which you achieve unanimity, and that does not yield answers like 90% or 99% as the limits of its accuracy in showing you unanimity?
If you are able to state that you are talking about something which has no connection to the real world, I'll let you have the last word. Because that is not a discussion I have a lot of energy for.
This also accounts for my constantly throwing things into the discussion that go outside a narrow axiomatic system. I'm not doing math here.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2012-10-09T08:07:38.984Z · LW(p) · GW(p)
I don't know how to have a discussion where the answer to the question "show me how it might be" is "First of all I said [it] might be."
You didn't say "show me how [it might be]", you said "show me how [it is]"
So you already know that there are no such statements that "everybody" agrees to.
Most people who aren't moral realists still have moral intuitions; you're confusing the categorization of beliefs about the nature of morality with the actual moral instinct in people's brains. The moral instinct doesn't concern itself with whether morality is real; eyes don't concern themselves with viewing themselves; few algorithms altogether are designed to analyze themselves.
As for moral nihilists, assuming they exist, an empty moral set can indeed never be transformed into anything else via "is" statements, which is why I specified from the very beginning "every person equipped with moral instinct".
If you are able to state that you are talking about something which has no connection to the real world,
The "connection to the real world" is that the vast majority of seeming differences in human moralities seem to derive from different understandings of the worlds, and different expectations about the consequences. When people share agreement about the "is", they also tend to converge on the "ought", and they most definitely converge on lots of things that "oughtn't". Seemingly different morality sets gets transformed to look like each other.
That's sort of like the CEV of humanity that Eliezer talks about, except that I talk about a much more limited set -- not the complete volition (which includes things like "I want to have fun"), but just the moral intuition system.
That's a "connection to the real world" that relates to the whole history of mankind, and to how beliefs and moral injuctions connect to one another; how beliefs are manipulated to produce injuctions, how injuctions lose their power when beliefs fall away.
Now, with a proper debater who didn't just seek to heap insults on people, I might discuss further nuances and details--whether it's only consequentialists that would get attractive moral sets, whether different species would get mostly different attractive moral sets, whether such attractive moral sets may be said to exist because anything too alien would probably not even be recognizable as morality by us, possible exceptions for deliberately-designed malicious minds, etc...
But you've just been a bloody jerk throughout this thread, a horrible horrible person who insults and insults and insults some more. So I'm done with you: feel free to have the last word.
↑ comment by endoself · 2012-09-27T23:13:08.408Z · LW(p) · GW(p)
The reason I think this is that I suspect most of those who are firmly against it don't know or understand the arguments for it or they are using "moral realism" in a way that is different from how philosophers use it.
This is pretty likely. I spent about a minute trying to determine what the words were actually supposed to mean, then decided that it was pointless, gave up, and refrained from voting on that question. (I did this for a few questions, though I did vote on some, then gave up on the poll.)
comment by Nighteyes5678 · 2012-09-27T06:57:10.258Z · LW(p) · GW(p)
I'm not sure that randomness from evolution and enculturation should be treated differently from random factors in the intuition-squaring process. It's randomness all the way through either way, right?
I think this statement is the fulcrum of my disagreement with your argument. You assert that "it's randomness all the way through either way". I disagree; it's not randomness all the way, not at all.
Evolution's mutations and changes are random; evolution's adaptations are not random - they happen in response to the outside world. Furthermore, the mutations and changes that survive aren't random either: they all meet the same criterion, that they didn't hamper survival.
I believe, then, that developing an internally consistent moral framework can be aided by recognizing the forces that have shaped our intuitions, and deciding whether the direction those forces are taking us is a worthy destination. We don't have to be blind and dumb slaves to Evolution any more. Not really.
comment by [deleted] · 2012-09-27T07:31:45.949Z · LW(p) · GW(p)
In what sense could philosophers have "better" philosophical intuition? The only way I can think of for theirs to be "better" is if they've seen a larger part of the landscape of philosophical questions, and are therefore better equipped to build consistent philosophical models (example).
The problem with this is that the kind of people likely to become philosophers have systematically different intuitions to begin with.
I'm not sure that randomness from evolution and enculturation should be treated differently from random factors in the intuition-squaring process. It's randomness all the way through either way, right?
I fear many readers will confuse this argument for the moral anti-realist argument. The moral anti-realist argument doesn't mean you shouldn't consider your goals superior to those of the pebble sorters or babyeaters, just that if they ran the same process you did to arrive at this conclusion, they would likely get a different result. This probably wouldn't happen if you did this with the process used to establish, say, the value of the gravitational constant or the charge of an electron.
This suggests that morality is more like your particular taste in yummy foods and aversion to snakes than like the speed of light. It isn't a fact about the universe; it is a fact about particular agents or pseudo-agents.
Of course the pebble sorters or babyeaters or paper-clip maximizing AIs can figure out that we have an aversion to snakes and crave salty and sugary food. But learning this would not result in them sharing our normative judgements, except for instrumental purposes in some very constrained scenarios where those judgements happen to be optimal for a wide range of goals.
Replies from: Eugine_Nier, John_Maxwell_IV↑ comment by Eugine_Nier · 2012-09-28T01:04:02.421Z · LW(p) · GW(p)
I fear many readers will confuse this argument for the moral anti-realist argument. The moral anti-realist argument doesn't mean you shouldn't consider your goals superior to those of the pebble sorters or babyeaters, just that if they ran the same process you did to arrive at this conclusion they would likely get a different result.
What is this "moral anti-realist argument"? Every argument against moral realism I've seen boils down to: "there are on universally compelling moral arguments, therefore morality is not objective". Well, as the linked article points out, there are no universally compelling physical arguments either.
This suggests that morality is more like your particular taste in yummy foods and aversion to snakes than the speed of light.
The difference between morality and taste in food is that I'm ok with you believing that chocolate is tasty even if I don't, but I'm not ok with you believing that it's moral to eat babies.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-09-27T10:10:47.407Z · LW(p) · GW(p)
The problem with this is that the kind of people likely to become philosophers have systematically different intuitions to begin with.
Interesting point, but where's the problem?
I fear many readers will confuse this argument for the moral anti-realist argument.
Yep, I kind of wandered around.
I think I agree with the rest of your comment.
Replies from: None