Is Morality Given?

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-06T08:12:26.000Z · LW · GW · Legacy · 100 comments


Continuation of Is Morality Preference?

(Disclaimer:  Neither Subhan nor Obert represent my own position on morality; rather they represent different sides of the questions I hope to answer.)

Subhan:  "What is this 'morality' stuff, if it is not a preference within you?"

Obert:  "I know that my mere wants, don't change what is right; but I don't claim to have absolute knowledge of what is right—"

Subhan:  "You're not escaping that easily!  How does a universe in which murder is wrong, differ from a universe in which murder is right?  How can you detect the difference experimentally?  If the answer to that is 'No', then how does any human being come to know that murder is wrong?"

Obert:  "Am I allowed to say 'I don't know'?"

Subhan:  "No.  You believe now that murder is wrong.  You must believe you already have evidence and you should be able to present it now."

Obert:  "That's too strict!  It's like saying to a hunter-gatherer, 'Why is the sky blue?' and expecting an immediate answer."

Subhan:  "No, it's like saying to a hunter-gatherer:  Why do you believe the sky is blue?"

Obert:  "Because it seems blue, just as murder seems wrong.  Just don't ask me what the sky is, or how I can see it."

Subhan:  "But—aren't we discussing the nature of morality?"

Obert:  "That, I confess, is not one of my strong points.  I specialize in plain old morality.  And as a matter of morality, I know that I can't make murder right just by wanting to kill someone."

Subhan:  "But if you wanted to kill someone, you would say, 'I know murdering this guy is right, and I couldn't make it wrong just by not wanting to do it.'"

Obert:  "Then, if I said that, I would be wrong.  That's common moral sense, right?"

Subhan:  "Argh!  It's difficult to even argue with you, since you won't tell me exactly what you think morality is made of, or where you're getting all these amazing moral truths—"

Obert:  "Well, I do regret having to frustrate you.  But it's more important that I act morally, than that I come up with amazing new theories of the nature of morality.  I don't claim that my strong point is in explaining the fundamental nature of morality.  Rather, my strong point is coming up with theories of morality that give normal moral answers to questions like, 'If you feel like killing someone, does that make it right to do so?'  The common-sense answer is 'No' and I really see no reason to adopt a theory that makes the answer 'Yes'.  Adding up to moral normality—that is my theory's strong point."

Subhan:  "Okay... look.  You say that, if you believed it was right to murder someone, you would be wrong."

Obert:  "Yes, of course!  And just to cut off any quibbles, we'll specify that we're not talking about going back in time and shooting Stalin, but rather, stalking some innocent bystander through a dark alley and slitting their throat for no other reason but my own enjoyment.  That's wrong."

Subhan:  "And anyone who says murder is right, is mistaken."

Obert:  "Yes."

Subhan:  "Suppose there's an alien species somewhere in the vastness of the multiverse, who evolved from carnivores.  In fact, through most of their evolutionary history, they were cannibals.  They've evolved different emotions from us, and they have no concept that murder is wrong—"

Obert:  "Why doesn't their society fall apart in an orgy of mutual killing?"

Subhan:  "That doesn't matter for our purposes of theoretical metaethical investigation.  But since you ask, we'll suppose that the Space Cannibals have a strong sense of honor—they won't kill someone they promise not to kill; they have a very strong idea that violating an oath is wrong.  Their society holds together on that basis, and on the basis of vengeance contracts with private assassination companies.  But so far as the actual killing is concerned, the aliens just think it's fun.  When someone gets executed for, say, driving through a traffic light, there's a bidding war for the rights to personally tear out the offender's throat."

Obert:  "Okay... where is this going?"

Subhan:  "I'm proposing that the Space Cannibals not only have no sense that murder is wrong—indeed, they have a positive sense that killing is an important part of life—but moreover, there's no path of arguments you could use to persuade a Space Cannibal of your view that murder is wrong.  There's no fact the aliens can learn, and no chain of reasoning they can discover, which will ever cause them to conclude that murder is a moral wrong.  Nor is there any way to persuade them that they should modify themselves to perceive things differently."

Obert:  "I'm not sure I believe that's possible—"

Subhan:  "Then you believe in universally compelling arguments processed by a ghost in the machine.  For every possible mind whose utility function assigns terminal value +1, mind design space contains an equal and opposite mind whose utility function assigns terminal value—1.  A mind is a physical device and you can't have a little blue woman pop out of nowhere and make it say 1 when the physics calls for it to say 0."

Obert:  "Suppose I were to concede this.  Then?"

Subhan:  "Then it's possible to have an alien species that believes murder is not wrong, and moreover, will continue to believe this given knowledge of every possible fact and every possible argument.  Can you say these aliens are mistaken?"

Obert:  "Maybe it's the right thing to do in their very different, alien world—"

Subhan:  "And then they land on Earth and start slitting human throats, laughing all the while, because they don't believe it's wrong.  Are they mistaken?"

Obert:  "Yes."

Subhan:  "Where exactly is the mistake?  In which step of reasoning?"

Obert:  "I don't know exactly.  My guess is that they've got a bad axiom."

Subhan:  "Dammit!  Okay, look.  Is it possible that—by analogy with the Space Cannibals—there are true moral facts of which the human species is not only presently unaware, but incapable of perceiving in principle?  Could we have been born defective—incapable even of being compelled by the arguments that would lead us to the light?  Moreover, born without any desire to modify ourselves to be capable of understanding such arguments?  Could we be irrevocably mistaken about morality—just like you say the Space Cannibals are?"

Obert:  "I... guess so..."

Subhan:  "You guess so?  Surely this is an inevitable consequence of believing that morality is a given, independent of anyone's preferences!  Now, is it possible that we, not the Space Cannibals, are the ones who are irrevocably mistaken in believing that murder is wrong?"

Obert:  "That doesn't seem likely."

Subhan:  "I'm not asking you if it's likely, I'm asking you if it's logically possible!  If it's not possible, then you have just confessed that human morality is ultimately determined by our human constitutions.  And if it is possible, then what distinguishes this scenario of 'humanity is irrevocably mistaken about morality', from finding a stone tablet on which is written the phrase 'Thou Shalt Murder' without any known justification attached?  How is a given morality any different from an unjustified stone tablet?"

Obert:  "Slow down.  Why does this argument show that morality is determined by our own constitutions?"

Subhan:  "Once upon a time, theologians tried to say that God was the foundation of morality.  And even since the time of the ancient Greeks, philosophers were sophisticated enough to go on and ask the next question—'Why follow God's commands?'  Does God have knowledge of morality, so that we should follow Its orders as good advice?  But then what is this morality, outside God, of which God has knowledge?  Do God's commands determine morality?  But then why, morally, should one follow God's orders?"

Obert:  "Yes, this demolishes attempts to answer questions about the nature of morality just by saying 'God!', unless you answer the obvious further questions.  But so what?"

Subhan:  "And furthermore, let us castigate those who made the argument originally, for the sin of trying to cast off responsibility—trying to wave a scripture and say, 'I'm just following God's orders!'  Even if God had told them to do a thing, it would still have been their own decision to follow God's orders."

Obert:  "I agree—as a matter of morality, there is no evading of moral responsibility.  Even if your parents, or your government, or some kind of hypothetical superintelligence, tells you to do something, you are responsible for your decision in doing it."

Subhan:  "But you see, this also demolishes the idea of any morality that is outside, beyond, or above human preference.  Just substitute 'morality' for 'God' in the argument!"

Obert:  "What?"

Subhan:  "John McCarthy said:  'You say you couldn't live if you thought the world had no purpose. You're saying that you can't form purposes of your own-that you need someone to tell you what to do. The average child has more gumption than that.'  For every kind of stone tablet that you might imagine anywhere, in the trends of the universe or in the structure of logic, you are still left with the question:  'And why obey this morality?'  It would be your decision to follow this trend of the universe, or obey this structure of logic.  Your decision—and your preference."

Obert:  "That doesn't follow!  Just because it is my decision to be moral—and even because there are drives in me that lead me to make that decision—it doesn't follow that the morality I follow consists merely of my preferences.  If someone gives me a pill that makes me prefer to not be moral, to commit murder, then this just alters my preference—but not the morality; murder is still wrong.  That's common moral sense—"

Subhan:  "I beat my head against my keyboard!  What about scientific common sense?  If morality is this mysterious given thing, from beyond space and time—and I don't even see why we should follow it, in that case—but in any case, if morality exists independently of human nature, then isn't it a remarkable coincidence that, say, love is good?"

Obert:  "Coincidence?  How so?"

Subhan:  "Just where on Earth do you think the emotion of love comes from?  If the ancient Greeks had ever thought of the theory of natural selection, they could have looked at the human institution of sexual romance, or parental love for that matter, and deduced in one flash that human beings had evolved—or at least derived tremendous Bayesian evidence for human evolution.  Parental bonds and sexual romance clearly display the signature of evolutionary psychology—they're archetypal cases, in fact, so obvious we usually don't even see it."

Obert:  "But love isn't just about reproduction—"

Subhan:  "Of course not; individual organisms are adaptation-executers, not fitness-maximizers.  But for something independent of humans, morality looks remarkably like godshatter of natural selection.  Indeed, it is far too much coincidence for me to credit.  Is happiness morally preferable to pain?  What a coincidence!  And if you claim that there is any emotion, any instinctive preference, any complex brain circuitry in humanity which was created by some external morality thingy and not natural selection, then you are infringing upon science and you will surely be torn to shreds—science has never needed to postulate anything but evolution to explain any feature of human psychology—"

Obert:  "I'm not saying that humans got here by anything except evolution."

Subhan:  "Then why does morality look so amazingly like a product of an evolved psychology?"

Obert:  "I don't claim perfect access to moral truth; maybe, being human, I've made certain mistakes about morality—"

Subhan:  "Say that—forsake love and life and happiness, and follow some useless damn trend of the universe or whatever—and you will lose every scrap of the moral normality that you once touted as your strong point.  And I will be right here, asking, 'Why even bother?'  It would be a pitiful mind indeed that demanded authoritative answers so strongly, that it would forsake all good things to have some authority beyond itself to follow."

Obert:  "All right... then maybe the reason morality seems to bear certain similarities to our human constitutions, is that we could only perceive morality at all, if we happened, by luck, to evolve in consonance with it."

Subhan:  "Horsemanure."

Obert:  "Fine... you're right, that wasn't very plausible.  Look, I admit you've driven me into quite a corner here.  But even if there were nothing more to morality than preference, I would still prefer to act as morality were real.  I mean, if it's all just preference, that way is as good as anything else—"

Subhan:  "Now you're just trying to avoid facing reality!  Like someone who says, 'If there is no Heaven or Hell, then I may as well still act as if God's going to punish me for sinning.'"

Obert:  "That may be a good metaphor, in fact.  Consider two theists, in the process of becoming atheists.  One says, 'There is no Heaven or Hell, so I may as well cheat and steal, if I can get away without being caught, since there's no God to watch me.'  And the other says, 'Even though there's no God, I intend to pretend that God is watching me, so that I can go on being a moral person.'  Now they are both mistaken, but the first is straying much further from the path."

Subhan:  "And what is the second one's flaw?  Failure to accept personal responsibility!"

Obert:  "Well, and I admit I find that a more compelling argument than anything else you have said.  Probably because it is a moral argument, and it has always been morality, not metaethics, with which I claimed to be concerned.  But even so, after our whole conversation, I still maintain that wanting to murder someone does not make murder right.  Everything that you have said about preference is interesting, but it is ultimately about preference—about minds and what they are designed to desire—and not about this other thing that humans sometimes talk about, 'morality'.  I can just ask Moore's Open Question:  Why should I care about human preferences?  What makes following human preferences right?  By changing a mind, you can change what it prefers; you can even change what it believes to be right; but you cannot change what is right.  Anything you talk about, that can be changed in this way, is not 'right-ness'."

Subhan:  "So you take refuge in arguing from definitions?"

Obert:  "You know, when I reflect on this whole argument, it seems to me that your position has the definite advantage when it comes to arguments about ontology and reality and all that stuff—"

Subhan:  "'All that stuff'?  What else is there, besides reality?"

Obert:  "Okay, the morality-as-preference viewpoint is a lot easier to shoehorn into a universe of quarks.  But I still think the morality-as-given viewpoint has the advantage when it comes to, you know, the actual morality part of it—giving answers that are good in the sense of being morally good, not in the sense of being a good reductionist.  Because, you know, there are such things as moral errors, there is moral progress, and you really shouldn't go around thinking that murder would be right if you wanted it to be right."

Subhan:  "That sounds to me like the logical fallacy of appealing to consequences."

Obert:  "Oh?  Well, it sounds to me like an incomplete reduction—one that doesn't quite add up to normality."

 

Part of The Metaethics Sequence

Next post: "Where Recursive Justification Hits Bottom"

Previous post: "Is Morality Preference?"

100 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Dynamically_Linked · 2008-07-06T10:14:51.000Z · LW(p) · GW(p)

Subhan: "You're not escaping that easily! How does a universe in which murder is wrong, differ from a universe in which murder is right? How can you detect the difference experimentally? If the answer to that is 'No', then how does any human being come to know that murder is wrong?" ... Obert: "Because it seems blue, just as murder seems wrong. Just don't ask me what the sky is, or how I can see it."

But we already know why murder seems wrong to us. It's completely explained by a combination of game theory, evolutionary psychology, and memetics. These explanations screen off our apparent moral perceptions from any other influence. In other words, conditioned on these explanations being true, our moral perceptions are independent of (i.e. uncorrelated with) any possible morality-as-given, even if it were to exist.

So there is a stronger argument against Obert than the one Subhan makes. It's not just that we don't know how we can know about what is right, but rather that we know we can't know, at least not through these apparent moral perceptions/intuitions.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-07T00:00:14.696Z · LW(p) · GW(p)

He does, in fact, point out that since our minds came from evolution, our moral preferences must have come from evolution too, so why would evolution coincidentally choose values that correspond to some Universal Imperative?

comment by robin_brandt2 · 2008-07-06T10:23:09.000Z · LW(p) · GW(p)

As with the human aesthetic sense, human morality may consist of approximations and versions of more absolutely definable optimal solutions to information-theoretic, game-theoretic, social, economic, intelligence, signaling, and cooperation problems. Therefore it may be likely that an alien race could share some of the same values as we do, because it may turn out that they are "good" solutions for intelligent, culture-bearing species in general. But there is nothing in the universe in itself that says that these optimal solutions, or any value whatsoever, can be valuable, and I don't understand why some atheists even expect there to be something like this. It is minds that give value to certain phenomena, and it usually happens because our emotional circuitry was wired to value something by evolution.

But I see no problem in trying to choose some of the more general-looking values evolution has given us and making them even more general and refined. We should do this in such a way as to keep us stable and happy, but also so that we have a rich future mind-space to move in, and I think we have already done this in part with Goodness, Truth and Beauty for their own sake, but they are not easily definable. It is not easy to find a sweet spot between a species-specific version and a more general one that is free and pleasing to as many minds as possible. I think we should continue to refine this sweet spot and work towards some values for their own sake and not only because they give us pleasure. I think this is important and can add to the stability of an individual as well as a society and an AI. It certainly is a dangerous thing, but I see it as essential, at least for a human mind.

One problem, though, may be that we are wired so that if we don't believe that morality or some other value is intrinsically rooted and valued by some great, preferably immortal and omnipresent authority (God, the Universe, Nature), then we may have some trouble behaving morally or in accordance with that value just because we choose to do so. Some strong people may find this easy. But many find this very hard, and I think that is the prime reason why religion still persists today, even though most people know somewhere deep down that it is a dead practice. And I will admit, although not with any pride, that I myself have great trouble with pursuing and actually doing what I value, systematically and with pleasure and discipline, even though I know about all this. It may be that something is particularly wrong in my brain, or it may be quite a widespread problem.

Thank you, Eli, for keeping on enlightening my path, day after day. I have been lurking here for 2.5 years now, and you have totally changed my life. Before you, there was no person I could really trust in deep matters, and now, because you are so often right and I have to be so careful with trusting you, I have also become extremely careful in trusting my own intuitions and thought patterns. You teach people to think with rigorous self-critique, extreme precision and dedication, and mindful focus on the actual territory and possibility-space of study, without getting trapped in the usual mind projection fallacy and other biases.

Keep on fighting! Your books will sell well, and it will fuel your goals!

comment by Dynamically_Linked · 2008-07-06T10:39:56.000Z · LW(p) · GW(p)

And to answer Obert's objection that Subhan's position doesn't quite add up to normality: before we knew game theory, evolutionary psychology, and memetics, nothing screened off our moral perceptions/intuitions from a hypothesized objective moral reality, so that was perhaps the best explanation available, given what we knew back then. And since that was most of human history, it's no surprise that morality-as-given feels like normality. But given what we know today, does it still make sense to insist that our meta-theory of morality add up to that normality?

comment by robin_brandt2 · 2008-07-06T10:41:08.000Z · LW(p) · GW(p)

I will try to express some of my points more accurately... A human value, whether it concerns knowledge, morality, or beauty, gets its meaning from its emotional base, although it may be a frequent value in the space of possible intelligent species. Only minds can attribute value to something. The thing a mind attributes value to may be universal or specific, but the thing itself cannot be valued by something other than a mind. To value something is a cognitive, emotional process, not some intrinsic property of some phenomenon. But to believe this, as the mind you are, may not be the best way to achieve what you value and desire. We seem to work most efficiently towards a value when we believe it is intrinsically true and the only way. It may slow down that process considerably to know that values can't be rooted in something outside of minds. It may also be liberating knowledge, and may fuel your productivity and mood. It may depend on your starting assumptions and expectations concerning values in general. My solution is to pick some very general values after serious consideration, and then to start to almost religiously work towards optimizing them, being open and critical of everything else, without ever introducing other magical thinking or supernatural phenomena, just trying to hack my own mind.

comment by michael_vassar3 · 2008-07-06T12:28:20.000Z · LW(p) · GW(p)

I think that we need a much better explanation of this word "mind". Supposedly mind space contains a -1 for every 1, but that simply sounds like system space. I honestly think that the ontology has to go deeper here before progress is possible. Similar problem to the Born postulates and why we aren't Boltzmann Brains.

comment by Caledonian2 · 2008-07-06T13:45:37.000Z · LW(p) · GW(p)

Similar problem to the Born postulates and why we aren't Boltzmann Brains.

We are Boltzmann Brains. You simply don't appreciate what restrictions are inherent in specifying the subset of Brains that can be called "we".

Not that this has anything to do with the topic, which everyone is very carefully skating around without addressing: what are operational definitions for right and wrong? When Obert says "Because it seems blue, just as murder seems wrong.", what collection of properties does wrong refer to? For that matter, what does blue refer to?

These questions have very simple and obvious answers which you will never grasp until you force yourself to face the questions. You mean something when you use the terms - you already recognize what is implied when you or someone else uses those terms. Now make that recognition explicit instead of implicit.

Do not "philosophize". That is attempting to understand the territory by making a diagram of map-making. It's adding another layer of analysis between you and the core concept, like an oyster adding a layer of nacre around an irritating bit of sand. You do not need a more complex ontology - you need to abolish the ontology.

comment by Richard8 · 2008-07-06T13:51:55.000Z · LW(p) · GW(p)

I don't think you have to postulate Space Cannibals in order to imagine rational creatures who don't think murder is wrong. For a recent example, consider Rwanda 1994.

And I think it's quite possible that there might exist moral facts which humans are incapable of perceiving. We aren't just universal Turing machines, after all. Billions of years of evolution might produce creatures with moral blind spots, analogous to the blind spot in the human eye. Just as the squid's eye has no blind spot, a different evolutionary path might produce creatures with a greater or lesser innate capacity to perceive goodness than ourselves.

comment by Caledonian2 · 2008-07-06T13:59:31.000Z · LW(p) · GW(p)

Maybe this will make it easier:

Obert says "just as murder seems wrong". There is a redundancy in that phrase. What is the redundancy, and why doesn't Obert perceive it as one?

What is the difference between saying something is a rube and not a blegg, and saying that someone appears to be a rube and not a blegg?

What is the difference between saying something is imperceivable, and saying something appears to be imperceivable?

comment by Phillip_Huggan · 2008-07-06T14:54:08.000Z · LW(p) · GW(p)

(Subhan wrote:) "And if you claim that there is any emotion, any instinctive preference, any complex brain circuitry in humanity which was created by some external morality thingy and not natural selection, then you are infringing upon science and you will surely be torn to shreds - science has never needed to postulate anything but evolution to explain any feature of human psychology -" Subhan: "Suppose there's an alien species somewhere in the vastness of the multiverse, who evolved from carnivores. In fact, through most of their evolutionary history, they were cannibals. They've evolved different emotions from us, and they have no concept that murder is wrong -"

The external morality thingy is other people's brain states. Prove the science comment, Subhan. It is obviously a false statement (once again, the argument reduces to solipsism, which can be a topic but needs to be clearly stated as such). Evolution doesn't explain how I learned long division in grade 1. Our human brains are evolutionarily horrible calculators, not usually able to chunk more than 8 memorized numbers or do division without learning math. Learning and self-reflection dominate reptilian brains in healthy individuals. As for the latter: from a utilitarian perspective, murder would generally be wrong, even if fun. There is the odd circumstance where it might be right, but it is so difficult to game the future that it is probably better just to outlaw it altogether than raise the odds of anarchy. For instance, in Canada a head of state and abortionists have been targeted (though our head of state was ready to cave in the potential assassin's skull before the police finally apprehended him). In many developing countries it is much worse. Presumably the carnivore civilization would need a lot of luck just to industrialize; it would be more prosperous by fighting its murder urges. Don't call them carnivores, call them Mugabe's Zimbabwe. We have an applied example of a militarily weak government in the process of becoming a tyranny, raping women and initiating anarchy. There are lessons that could be learned here. Britain has just proposed a 2000-strong rapid-response military force; under what circumstances should it be used? (I like regression from democracy, plus a plausible model of something better, plus lower quality of living, plus military weakness, plus invasion acceptance of a military alliance; if the African Union says no regime change, does that constitute a military alliance?) Does military weakness as a precursor condition do more harm than good by gaming nations to up-arm?

In Canada, there is a problem of how to deal with youths: at what age should they be treated as mentally competent adults? Brain science seems to show humans don't fully mature until about 25, so to me that is an argument to treat the span from the onset of puberty to 25 or so as an in-between category when judging. Is alcohol and/or alcoholism analogous to mental health problems? I'd guess no, but maybe childhood trauma is a mitigating factor to consider. How strong does mental illness have to be before using it is a consideration? In Canada, an Afghanistan veteran used post-traumatic stress disorder as a mitigating factor in a violent crime. Is not following treatment, or the absence of treatment, something to consider? Can a mentally ill individual sue a government, or claim innocence, for its initiating $10 billion in tax cuts rather than a mental health programme? I'd guess only if it became clear how important such a program was, say, if it worked very successfully in another nation and the government had the fiscal means to do so. Should driving drunk itself be a crime? If so, why not driving with a radio, an infant, a cellphone... As intersection video camera surveillance catches traffic offenders, should the offence fine be dropped proportionately to the increased level of surveillance? See, courts know there are other individuals, and the problems of mental health and of children not understanding there are other people don't prevent healthy adults from knowing other people are real. This reminds me of discussions about geopolitics on the WTA list, with seemingly progressive individuals not being able to condemn torture and the indefinite detention of innocent people, simply because the forum was overrepresented with Americans (who still don't score that badly, just not as well as Europe and Canada when it comes to Human Rights).

(Dynamically_Linked wrote:) "But we already know why murder seems wrong to us. It's completely explained by a combination of game theory, evolutionary psychology, and memetics."

Sure, but the real question is why murder is wrong, not why it seems wrong. Murder is wrong because it destroys human brains. Generally, Transhumanists have a big problem (Minsky or Moravec or Vinge religion despite evidence to the contrary) figuring out that human brains are conscious and calculators are not. I have a hard time thinking of any situation when it could be justified by, among other things, proving the world is likely to be better off without murder. I guess killing Hitler during the latter years of the Holocaust might have stopped it, if it was happening because of his active intervention. But kill him off too early and Stalin and Hitler don't beat the shit out of each other. This conversation is stuck at some 6th-grade level. We could be talking about the death penalty, or income correlating with sentencing, or terrorism and Human Rights. Or the Human Rights of employees working with dangerous technology (will future gene sequencers require a Top Secret level of security clearance?). Right now the baseline is to treat all very potentially dangerous future technologies with a high level of security clearance, I'm guessing. Does H+ have anything of value to add to existing security protocols? Have they even been analyzed? Nope.

If this is all just to brainstorm about how to teach an AGI ethics, no one here is taking it from that angle. I had a conversation with a Subhan-like friend as a teenager. If I were blogging about it, I'd do it under a forum titled Ethics for Dummies.

Replies from: None
comment by [deleted] · 2015-10-20T09:46:03.469Z · LW(p) · GW(p)

There are a lot of clever ideas in this post, despite the harsh downvotes.

You may have some misgivings about the extent to which, say, mental health issues may be a barrier to security clearances. It's more like people disqualify themselves by lying or failing to apply in the first place. Those who do get through and have issues are prisoners of their own misconceptions.

Australia's protective security guidelines are based around subjective evaluations of "impair[ment] of judgment, reliability, or trustworthiness". They explicitly state that "G11. There is no indication of a current problem." is a mitigating factor in any history of mental illness.

See this. Caution: if you're spooked by getting tracked, note that this is a Word document on an Aus gov website.

It also explicitly says that seeking help from mental health places shouldn't be the sole basis of exclusion, and the guidelines suggest that the opinion of a mental health professional should be given due consideration.

This wasn't always the way things were done, at least in the US.

The really contentious issue here is whether it is correct to privilege the hypothesis that those seeking mental health care are more likely to have worse judgment, reliability, or trustworthiness. Intuitions and stereotypes say yes. Research suggests that those seeking treatment are not any more violent; I'm not sure about those criteria specifically, but I suspect that there is far too much assumption of mental illness as a description of aberrant behaviour, rather than as an exclusive construct resilient to black swans, and that soon mental health and the military and intelligence fields will become subject to scrutiny by mental health activists, the same way other activists have scrutinised discrimination in security fields.

comment by Z._M._Davis · 2008-07-06T15:20:20.000Z · LW(p) · GW(p)

The notion of morality as subjectively objective computation seems a lot closer to Subhan's position than Obert's.

comment by Phillip_Huggan · 2008-07-06T15:40:02.000Z · LW(p) · GW(p)

Yes, EY's past positions about Morality are closer to Subhan's than Obert's. But AGI is software programming and hardware engineering, not being a judge or whoever writes laws. I wouldn't suggest deifying EY if your goal is to learn ethics.

comment by Constant2 · 2008-07-06T15:49:14.000Z · LW(p) · GW(p)

But we already know why murder seems wrong to us. It's completely explained by a combination of game theory, evolutionary psychology, and memetics. These explanations screen off our apparent moral perceptions from any other influence. In other words, conditioned on these explanations being true, our moral perceptions are independent of (i.e. uncorrelated with) any possible morality-as-given, even if it were to exist.

Let's try the argument with mathematics: we know why we think 5 is a prime number. It's completely explained by our evolution, experiences, and so on. Conditioned on these explanations being true, our mathematical perceptions are independent of mathematical-truth-as-given, even if it were to exist.

The problem is that mathematical-truth-as-given may shape the world and therefore shape our experiences. That is, we may have had the tremendous difficulty we had in factorizing the number 5 precisely because the number 5 is in fact a prime number. So one place where one could critique your argument is in the bit that goes: "conditioned on X being the case, then our beliefs are independent of Y". The critique is that X may in fact be a consequence of Y, in which case X is itself not independent of Y.

comment by Z._M._Davis · 2008-07-06T15:55:58.000Z · LW(p) · GW(p)

"But AGI is [...] not being a judge or whoever writes laws."

If Eliezer turns out to be right about the power of recursive self-improvement, then I wouldn't be so sure.

comment by RobinHanson · 2008-07-06T16:00:32.000Z · LW(p) · GW(p)

Richard, we can understand how there would be evolutionary pressure to produce an ability to see light, even if imperfect. But what possible pressure could produce an ability to see morality?

comment by Ben_Jones · 2008-07-06T16:28:26.000Z · LW(p) · GW(p)

"You're not escaping that easily! How does a universe in which murder is wrong, differ from a universe in which murder is right? How can you detect the difference experimentally? If the answer to that is 'No'...

Minor quibble - 'no' is not a sensical answer to any of those questions. Possibly remove the word 'how' from one of them?

Once again, no revelations that I haven't come across on my own, but crystallised and clarified brilliantly. Looking forward to the next few.

comment by marcio_rpsbrk · 2008-07-06T16:30:43.000Z · LW(p) · GW(p)

It seems to me that Obert makes a faulty interpretation of "there is no reason to talk about a 'morality' distinct from what people want," but I would like to know what the author thinks. In my view, that assertion says not that ALL MORAL CLAIMS ARE WHIMS, but instead that to understand and parse and compare moral claims we have to resort to wants. In other words, that WANTS ARE THE OBJECT OF MORALITY, THOUGH NOT ITS MATTER. To understand any moral claim we have to consider how it bears on what real, concrete persons feel and desire.

"I want pie" and "I deserve pie" are different, but i don't see how Subhan's arguments aspire to make them equal.

comment by Psy-Kosh · 2008-07-06T16:36:04.000Z · LW(p) · GW(p)

Obert's arguments seem much closer to "how it feels from the inside"; Subhan in general does seem to have stronger actual arguments. However:

"For every kind of stone tablet that you might imagine anywhere, in the trends of the universe or in the structure of logic, you are still left with the question: 'And why obey this morality?'" This, to me, smells of zombieism. "for any configuration of matter/energy/whatever, we can ask 'and why should we believe that this is actually conscious rather than just a structure immitating a consciousness?'"

comment by Phillip_Huggan · 2008-07-06T16:59:03.000Z · LW(p) · GW(p)

(ZMDavis wrote:) "But AGI is [...] not being a judge or whoever writes laws."

If Eliezer turns out to be right about the power of recursive self-improvement, then I wouldn't be so sure.

Argh. I didn't mean that as a critique of EY's prowess as an AGI theorist or programmer. I doubt Jesus would've wanted people to deify him, just to be nice to each other. I doubt EY meant for his learning of philosophy to be interpreted as some sort of moral code; he was just arrogant enough not to state he was sometimes using his list as a tool to develop his own philosophy. I'm assuming any AGI project would be a team, and I doubt he'd dispute that his best comparative advantage is not ethics. Maybe he plans on writing the part of the code that tells an AGI how to stop using resources for a given job.

comment by Paul_Gowder · 2008-07-06T17:02:10.000Z · LW(p) · GW(p)

So here's a question Eliezer: is Subhan's argument for moral skepticism just a concealed argument for universal skepticism? After all, there are possible minds that do math differently, that do logic differently, that evaluate evidence differently, that observe sense-data differently...

Either Subhan can distinguish his argument from an argument for universal skepticism, or I say that it's refuted by reductio, since universal skepticism fails due to the complete impossibility of asserting it consistently, plus things like Moorean facts.

comment by Z._M._Davis · 2008-07-06T17:11:40.000Z · LW(p) · GW(p)

Phillip, you're the one who brought up "deification," in response to my one-line comment, which you seem to have read a lot into. My second comment was intended to be humorous. I apologize for the extent to which I contributed to this misunderstanding.

comment by DonGeddis · 2008-07-06T17:23:55.000Z · LW(p) · GW(p)

Eliezer seems to suggest that the only possible choices are morality-as-preference or morality-as-given, e.g. with reasoning like this:

[...] the morality-as-preference viewpoint is a lot easier to shoehorn into a universe of quarks. But I still think the morality-as-given viewpoint has the advantage [...]

But really, evolutionary psychology, plus some kind of social contract for group mutual gain, seems to account for the vast bulk of what people consider to be "moral" actions, as well as the conflict between private individual desires vs. actions that are "right". (People who break moral taboos are viewed not much differently from traitors in wartime, who betray their team/side/cause.)

I don't understand this series. Eliezer is writing multiple posts about the problems with the metatheories of morality as either preferences or given. Sure, both those metatheories are wrong. Is that really so interesting? Why not start to tackle what morality actually is, rather than merely what it is not?

comment by George_Weinberg2 · 2008-07-06T17:45:44.000Z · LW(p) · GW(p)

I think it's probably useful to taboo the word "should" for this discussion. I think when people say you "should" do X rather than Y it means something like "experience indicates X is more likely to lead to a good outcome than Y". People tend to have rule-based rather than consequence based moral systems because the full consequences of one's actions are unforeseeable. A rule like "one shouldn't lie" comes about because experience has shown that lying often has negative consequences for the speaker and listener and possibly others as well, although the particular consequences of a particular lie may be unforeseeable.

I don't see how there can be agreement as to moral principles unless there is first a reasonably good agreement as to what constitutes good and bad final states.

comment by Ian_C. · 2008-07-06T17:53:41.000Z · LW(p) · GW(p)

Relationships are real. For example if a plant is "under" a table, that is a fact, not a subjective whim of the observer. So if morality is a relationship, then aliens and man can have different moralities but both be objective, not subjective. The relationship would be between the object sought and the entity seeking it, e.g. murder + man = bad, murder + alien = good.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-06T13:41:28.510Z · LW(p) · GW(p)

Correct. Morality is a function from facts about one's community's preferences to norms of behaviour. Plug in different facts, and you get different norms even if the function is essentially the same.

comment by Aleksei_Riikonen · 2008-07-06T17:56:20.000Z · LW(p) · GW(p)

Paul Gowder,

Yes, there are possible minds that do math/logic/deduction differently. Most of these logically possible minds perform even worse than humans in these aspects, and would die out.

In this universe, if one wishes to reach one's goals, one has to choose to (try to) do math/logic/deduction in the correct way; the way that delivers results. What works is determined by the laws of physics and logic that in our universe seem quite coherent and understandable (to a degree, at least).

There's no reason to be skeptical about whether I actually have some goals/preferences. And since I assume that I have some preferences, I have a need to conform to the correct way of doing math/logic/deduction, which is determined by what seems a rather coherent physical universe.

comment by James · 2008-07-06T17:57:59.000Z · LW(p) · GW(p)

Subhan's question here, "How does a universe in which murder is wrong, differ from a universe in which murder is right? How can you detect the difference experimentally?" is such a gem.

I wonder if Eliezer intended it as parody.

comment by poke · 2008-07-06T18:14:43.000Z · LW(p) · GW(p)

If somebody said to me "morality is just what we do," and presented evidence that the whole apparatus of their moral philosophy was a coherent description of some subset of human psychology and sociology, then that would be enough for me. It's just a description of a physical system. Human morality would be what human animals do. Moral responsibility wouldn't be problematic; moral responsibility could be as physical as gravity if it were psychologically and sociologically real. "I have a moral responsibility" would be akin to "I can lift 200 lbs." The brain is complicated, sure, but so are muscles and bones and motor control. That wouldn't make it a preference or a mere want either. That's probably where we're headed. But I don't think metaethics is the interesting problem. The deeper problem is, I think, the empirical one: do humans really display this sort of morality?

comment by Unknown · 2008-07-06T18:27:38.000Z · LW(p) · GW(p)

I've thought about Space Cannibals and the like before (i.e. creatures that kill one of the sexes during sexual reproduction). My suspicion is that even if such creatures evolved and survived, by the time they had a civilization, many would be saying to one another, "There really should be a better way..."

Evidence for this is the fact that even now, there are many human beings claiming it is wrong to kill other animals, despite the fact that humans evolved to kill and eat other animals. Likewise, in the ancestral environment, various tribes usually did kill each other rather than cooperate. But this didn't stop them from beginning to cooperate. So I suspect that Space Cannibals would do something similar. And in any case, I would fully admit that murder couldn't in fact be wrong for the Space Cannibals in the same way it is for us, even if there is an external moral truth.

In answer to Robin's question, assuming that morality exists, it probably has a number of purposes. And if one of the purposes is to preserve things in existence (i.e. moral truths correspond roughly with what is necessary to preserve things), then of course there will be a selection pressure to perceive moral truth. The disclaimer should not be needed, but this is not in any way a claim that it is moral to maximize inclusive genetic fitness.

comment by Richard8 · 2008-07-06T18:38:46.000Z · LW(p) · GW(p)

Robin, As Eliezer has pointed out, evolution is a nonhuman optimizer which is in many ways more powerful than the human mind. On the assumption that humans have a moral sense, I don't think we should expect to be able to understand why. That might simply be a problem which is too difficult for people to solve. That aside, a man's virtues benefit the society he lives in; his inclination to punish sin will encourage others to act virtuously as well. If his society is a small tribe of his relatives, then even the weaker forms of kin selection theory can explain the benefit of knowledge of good and evil.

comment by Fiction_Man · 2008-07-06T18:39:37.000Z · LW(p) · GW(p)

Morality debates irritate me on so many levels. Treating everybody with respect seems to be a good solution for the moral relativism debate.

comment by billswift · 2008-07-06T18:58:41.000Z · LW(p) · GW(p)

Treating those who do not deserve respect with respect is basically spitting on those who do deserve it, especially those who work hard for it. I think you need to treat those you don't know with the "presumption of respect"; that is, if you don't know that they don't deserve it, assume they do. Borrowed from Smith's "presumption of rationality"; when you argue with someone, assume that they are rational until they demonstrate otherwise.

comment by RobinHanson · 2008-07-06T18:59:23.000Z · LW(p) · GW(p)

Richard, would you accept the same argument about God, that we know there is a God but don't really understand how we know, but gosh darn it we feel like there must be one so there must be one? Yes we evolved to help kin, and we expect many but hardly all other species to do this as well. But unless we know whether that behavior is moral we don't know if that is a process that makes our moral intuitions correlate with moral truth.

comment by Constant2 · 2008-07-06T19:48:14.000Z · LW(p) · GW(p)

Richard, we can understand how there would be evolutionary pressure to produce an ability to see light, even if imperfect. But what possible pressure could produce an ability to see morality?

Let's detail the explanation for light to see if we can find a parallel explanation for morality. Brief explanation for light: light bounces off things in the environment in a way which can in principle be used to draw correct inferences about distant objects in the environment. Eventually, some animals evolve a mechanism for doing just this.

Let's attempt the same for morality. Brief explanation for morality: unlike light, evil is not a simple thing that comes in its own fundamental particles. It is more similar to illness. An alien looking at a human cell might not, from first principles, be able to tell whether the cell was healthy or sick - e.g. whether it has not, or has, fallen victim to an attack rewriting its genetic code. The alien may need to look at the wider context in order to draw a distinction between a healthy cell and an ill cell, and by extension, between a healthy human and an ill human. Nevertheless, illness is real and we are able to tell the difference between illness and health. We have at least two reasons for doing this: an illness might pass to us (if it is infectious), and if we select an ill partner for producing offspring we may produce no offspring.

Evil is more akin to illness than to light, and is even more akin to mental illness. Just to continue the case of mating, if we select a partner who is unusually capable of evil (as compared to the human average) then we may find ourselves dead, or harmed, or at odds with our neighbors who are victimized by our partner. If we select a business partner who is honest then we have an advantage over someone who selects a business partner who is dishonest. In order to tell apart an evil person from a good person we need to be able to distinguish an evil act from a good act.

This is only part of it, but there's a 400-word limit.

comment by Richard8 · 2008-07-06T20:27:10.000Z · LW(p) · GW(p)

Robin,

Our moral intuitions correspond with moral truths for much the same reason that our rational predictions correspond with more concrete physical truths. A man who ignores reason will stick his hand back in the fire after being burned the first time. Such behavior will kill him, probably sooner rather than later. A man who is blind to good and evil may do quite well for himself, but a society whose citizens ignore virtue will suffer approximately the same fate as the twice-burned fool.

comment by RobinHanson · 2008-07-06T20:39:17.000Z · LW(p) · GW(p)

Richard, I agree that some social norms help a society prosper while others can "burn" it. And we have the intuition that morally right acts correspond to social norms that help societies prosper. But we would have had that intuition even if morally right acts had corresponded to the opposite. What evolutionary pressure could have produced the correct intuitions about this meta question?

comment by Z._M._Davis · 2008-07-06T20:41:10.000Z · LW(p) · GW(p)

Constant, I would say that objective illness is just as problematic as objective morality; it's just less obviously problematic because in everyday contexts, we're more used to dealing with disputes about morality than about illness. You mention that "if we select an ill partner for producing offspring we may produce no offspring," and in an evolutionary context, probably we could give some fitness-based account of illness. However, this evolutionary concept of "illness" cannot be the ordinary meaning of the word, because no one actually cares about fitness.

I hate to use this example ("gender is the mind-killer," as we learned here so recently), but it's a classic one and a good one, so I'll just go ahead. Take homosexuality. It's often considered a mental disorder, but if someone is gay and happy being so, I would challenge (as evil, even) any attempt to define them as "ill" in anything more than the irrelevant evolutionary sense. I would indeed go much further and say that (for adults, at least) that which the patient desires in herself is health, and that which the patient does not desire in herself is sickness. (I actually seem to remember a similar viewpoint being advanced on The Distributed Republic a while back.) But this only puts us back at discussing preferences and morality.

comment by Dynamically_Linked · 2008-07-06T20:49:05.000Z · LW(p) · GW(p)

Constant wrote: So one place where one could critique your argument is in the bit that goes: "conditioned on X being the case, then our beliefs are independent of Y". The critique is that X may in fact be a consequence of Y, in which case X is itself not independent of Y.

Good point, my argument did leave that possibility open. But, it seems pretty obvious, at least to me, that game theory, evolutionary psychology, and memetics are not contingent on anything except mathematics and the environment that we happened to evolve in.

So if I were to draw a Bayesian net diagram, it would look like this:

math --------\     /--- game theory ------------\
              \   /                              \
               >-+----- evolutionary psychology --+--- moral perceptions
              /   \                              /
environment -/     \--- memetics ----------------/
Ok, one could argue that each node in this diagram actually represents thousands of nodes in the real Bayesian net, and each edge is actually millions of edges. So perhaps the following could represent a simplification, for a suitable choice of "morality":
math --------\             /--- game theory ------------\
              \           /                              \
               >- morality ---- evolutionary psychology --+--- moral perceptions
              /           \                              /
environment -/             \--- memetics ----------------/
Before I go on, do you actually believe this to be the case?
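
To make the screening-off claim concrete, here is a minimal toy simulation of the second diagram (the probabilities, the 10% perception noise, and the collapsing of game theory, evolutionary psychology, and memetics into a single "explanations" variable are all invented purely for illustration). Even if a hidden morality-as-given bit did influence the evolved machinery, conditioning on that machinery leaves our perceptions carrying no further information about the bit:

    import random

    # Toy model of the second diagram (all numbers are made up for illustration):
    # a hidden "morality-as-given" bit influences the evolved explanations
    # (game theory / evolutionary psychology / memetics collapsed into one variable),
    # and our moral perceptions are just a noisy readout of those explanations.

    def sample():
        morality = random.random() < 0.5                              # hypothetical hidden truth
        explanations = random.random() < (0.9 if morality else 0.2)   # evolved machinery
        perception = explanations ^ (random.random() < 0.1)           # noisy readout of the machinery
        return morality, explanations, perception

    samples = [sample() for _ in range(100000)]

    def p_morality(condition):
        subset = [m for m, e, p in samples if condition(e, p)]
        return sum(subset) / len(subset)

    print(p_morality(lambda e, p: p))        # above 0.5: perception alone is informative
    print(p_morality(lambda e, p: e))        # P(morality | explanations) ...
    print(p_morality(lambda e, p: e and p))  # ... approximately unchanged by also conditioning on perception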

comment by Caledonian2 · 2008-07-06T20:52:46.000Z · LW(p) · GW(p)

I wonder if Eliezer intended it as parody.

He'd be making a serious mistake if so.

comment by denis_bider · 2008-07-06T20:58:13.000Z · LW(p) · GW(p)

Eliezer: You have perhaps already considered this, but I think it would be helpful to learn some lessons from E-Prime when discussing this topic. E-Prime is a subset of English that bans most varieties of the verb "to be".

I find sentences like "murder is wrong" particularly underspecified and confusing. Just what, exactly, is meant by "is", and "wrong"? It seems like agreeing on a definition for "murder" is the easy part.

It seems the ultimate confusion here is that we are talking about instrumental values (should I open the car door?) before agreeing on terminal values (am I going to the store?).

If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.

It is, however, much harder to talk about murders in general, and infeasible to discuss this unless we have agreed on a terminal value to work for.

comment by denis_bider · 2008-07-06T21:09:53.000Z · LW(p) · GW(p)

My earlier comment is not to imply that I think "maximization of human happiness" is the most preferred goal.

An easily obvious one, yes. But faulty; "human" is a severely underspecified term.

In fact, I think that putting in place a One True Global Goal would require ultimate knowledge about the nature of being, to which we do not have access currently.

Possibly, the best we can do is come up with plausible global goal that suits us for medium run, while we try to find out more.

That is, after all, what we have always done as human beings.

comment by TGGP4 · 2008-07-06T21:48:53.000Z · LW(p) · GW(p)

Wanting to murder doesn't make it right. Nothing makes anything morally right.

comment by Richard8 · 2008-07-06T22:37:06.000Z · LW(p) · GW(p)

Robin,

I don't understand your counterfactual.

"Good" and "Evil" are the names for what people perceive with their moral sense. I think we've agreed that this perception correlates to something universally observable (namely, social survival), so these labels are firmly anchored in the physical world. It looks to me like you're trying to assign these names to something else altogether (namely, something which does not correlate with human moral intuitions), and it's not clear to me how this makes sense.

comment by RobinHanson · 2008-07-06T22:44:51.000Z · LW(p) · GW(p)

Richard, if morality just meant social norms that help societies prosper, then of course we have little problem understanding how the two could be correlated, and how we could come to know about them. But if morality means something else, then we face the much harder question of how it is we could know about this something else.

comment by Dynamically_Linked · 2008-07-06T22:58:06.000Z · LW(p) · GW(p)

For those impatient to know where Eliezer is going with this series, it looks like he gave us a sneak preview a little more than a year ago. The answer is morality-as-computation.

Eliezer, hope I didn't upset your plans by giving out the ending too early. When you do get to morality-as-computation, can you please explain what exactly is being computed by morality? You already told us what the outputs look like: "Killing is wrong" and "Flowers are beautiful", but what are the inputs?

comment by Fly2 · 2008-07-06T23:43:03.000Z · LW(p) · GW(p)

EY: "human cognitive psychology has not had time to change evolutionarily over that period"

Under selective pressures, human populations can and have significantly changed in less than two thousand years. Various behavioral traits are highly heritable. Genghis Khan spread his behavioral genotype throughout Asia. (For this discussion this is a nitpick but I dislike seeing false memes spread.)

re: FAI and morality

From my perspective morality is a collection of rules that make cooperative behavior beneficial. There are some rules that should apply to any entities that compete for resources or can cooperate for mutual benefit. There are some rules that improved fitness in our animal predecessors and have become embedded in the brain structure of the typical human. There are some rules that are culture specific and change rapidly as the environment changes. (When your own children are likely to die of starvation, your society is much less concerned about children starving in distant lands. Much of modern Western morality is an outcome of the present wealth and security of Western nations.)

As a start I suggest that a FAI should first discover those three types of rules, including how the rules vary among different animals and different cultures. (This would be an ongoing analysis that would evolve as the FAI capabilities increased.) For cultural rules, the FAI would look for a subset of rules that permit different cultures to interact and prosper. Rules such as kill all strangers would be discarded. Rules such as forgive all trespasses would be discarded as they don't permit defense against aggressive memes. A modified form of tit-for-tat might emerge. Some punishment, some forgiveness, recognition that bad events happen with no one to blame, some allowance for misunderstandings, some allowance for penance or regret, some tolerance for diversity. Another good rule might be to provide everyone with a potential path to a better existence, i.e., use carrots as well as sticks. Look for a consistent set of cultural rules that furthers happiness, diversity, sustainability, growth, and increased prosperity. Look for rules that are robust, i.e., give acceptable results under a variety of societal environments.
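A minimal sketch of the kind of "modified tit-for-tat" gestured at above: a forgiving strategy in a noisy iterated prisoner's dilemma. The payoff values, noise rate, and forgiveness probability are invented for illustration, not anything specified in the comment.

```python
import random

# Payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def generous_tit_for_tat(history, forgiveness=0.1):
    """Punish a defection, but forgive it with some probability."""
    if not history:
        return "C"                      # start by cooperating
    if history[-1] == "D" and random.random() > forgiveness:
        return "D"                      # punish
    return "C"                          # cooperate / forgive

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=200, noise=0.05):
    """Iterated prisoner's dilemma with occasional 'no one to blame' errors."""
    hist_a, hist_b = [], []             # each records the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        # Noise: an intended cooperation sometimes comes out as a defection.
        if random.random() < noise:
            move_a = "D"
        if random.random() < noise:
            move_b = "D"
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    random.seed(0)
    print(play(generous_tit_for_tat, generous_tit_for_tat))
    print(play(generous_tit_for_tat, always_defect))
```

The forgiveness probability is what keeps two such players from locking into endless mutual retaliation after an accidental defection, which is the "recognition that bad events happen with no one to blame" mentioned above.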

A similar analysis of animal morality would produce another set of rules. As would an analysis of rules for transactions between any entities. The FAI would then use a weighted sum of the three types of moral rules. The weights would change as society changed, i.e., when most of society consists of humans then human culture rules would be given the greatest weight. The FAI would plan for future changes in society by choosing rules that permit a smooth transition from a human centered society to an enhanced human plus AI society and then finally to an AI with human origins future.

Humans might only understand the rules that applied to humans. The FAI would enforce a different subset of rules for non-human biological entities and another subset for AI's. Other rules would guide interactions between different types of entities. (My mental model is of a body made up of cells, each expressing proteins in a manner appropriate for the specific tissue while contributing to and benefitting from the complete animal system. Rules for each specific cell type and rules for cells interacting.)

The transition shouldn't feel too bad to the citizens at any stage and the FAI wouldn't be locked into an outdated morality. We might not recognize or like our children but at least we wouldn't feel our throats being cut.

comment by Richard8 · 2008-07-07T00:08:03.000Z · LW(p) · GW(p)

Robin,

I don't know how people are capable of discerning moral truths. I also don't know how people are capable of discerning scientific or mathematical truths. It seems to me that these are similar capabilities, and the one is no more surprising or unlikely than the other.

comment by RobinHanson · 2008-07-07T00:47:31.000Z · LW(p) · GW(p)

Richard, while there are surely many details we would like to understand better, surely we understand the basic outline of how we discern scientific and mathematical truths. For example, in math we use contradiction to eliminate possible implications of axiom sets, and in science we use empirical results to eliminate possible abstract theories. We have nothing remotely similar in morals. You never said whether you approved of a similar argument about knowledge of God.

comment by Constant2 · 2008-07-07T01:30:41.000Z · LW(p) · GW(p)

Z. M. Davis writes: ... objective illness is just as problematic as objective morality

I would argue that to answer Robin's challenge is not necessarily to assert that there is such a thing as objective illness.

Accounts have been given of the pressure producing the ability to see beauty (google sexual selection or see e.g. this). This does not require that there is some eternal beauty written in the fabric of the universe - it may be, for example, that each species has evolved its own standard of beauty, and that selection is operating on both sides, i.e., selecting against individuals who are insufficiently beautiful and also selecting against admirers who differ too far from the norm.

However, this evolutionary concept of "illness" cannot be the ordinary meaning of the word, because no one actually cares about fitness.

My argument is: people can distinguish illness because it enhances their fitness to do so. Compare this to the following argument: people can distinguish the opposite sex because it enhances their fitness to do so. Now, okay, suppose that people don't care about fitness, as you say. Nevertheless, unbeknownst to them, telling women apart from men enhances their fitness. Similarly for illness.

Take homosexuality. It's often considered a mental disorder, but if someone is gay and happy being so, I would challenge (as evil, even) any attempt to define them as "ill" in anything more than the irrelevant evolutionary sense.

Homosexuality reduces fitness (so you seem to agree), but this does not make it an illness. Not everything that reduces fitness is an illness. Rather, illness tends to reduce fitness. Let me put it this way. Blindness tends to reduce fitness. But not everything that reduces fitness is blindness. Similarly, illness tends to reduce fitness. But that doesn't mean that everything that reduces fitness is illness.

... that which the patient desires in herself is health, and that which the patient does not desire in herself is sickness.

We can similarly say, that which a person desires in a mate is beauty. However, I think the most that can be said for this is that it is one concept of beauty. It is not the only concept. The idea that there is a shared standard of beauty is, despite much thought and argument to the contrary, still with us, and not illegitimate.

comment by Richard_C · 2008-07-07T02:15:07.000Z · LW(p) · GW(p)

"what possible pressure could produce an ability to see morality?"

Unlike the other Richard, I don't think we "see" morality with a special "sense", or anything like that. But if we instead understand morality as a rational idealization, building on our perfectly ordinary general capacity for systematizing judgments so as to increase their overall coherence (treating like cases alike, etc.), then there's no great mystery here.

comment by Constant2 · 2008-07-07T02:52:58.000Z · LW(p) · GW(p)

Dynamically Linked writes: But, it seems pretty obvious, at least to me, that game theory, evolutionary psychology, and memetics are not contingent on anything except mathematics and the environment that we happened to evolve in.

According to Tegmark "there is only mathematics; that is all that exists". Suppose he is right. Then moral truths, if there are any, are (along with all other truths) mathematical truths. Unless you presuppose that moral truths cannot be mathematical truths then you have not ruled out moral truths when you say that so-and-so is not contingent on anything except mathematics and such-and-such. For my part I fail to see why moral truths could not be mathematical truths.

Before I go on, do you actually believe this [Bayesian net diagram] to be the case?

I'm sorry to say that I can't read Bayesian net diagrams. Hopefully I answered your question anyway.

comment by Richard8 · 2008-07-07T03:37:43.000Z · LW(p) · GW(p)

Robin:

Discarding false mathematical and scientific conjectures is indeed much easier than discarding false moral conjectures. However, as Eliezer pointed out in an earlier post, a scientist who can come up with a hypothesis that has a 10% chance of being true has already gone most of the way from ignorance to knowledge. I would argue that hypothesis generation is a poorly-understood nonrational process in all three cases. A mathematician who believes he has found truth can undertake the further steps of writing a formal proof and submitting his work to public review, greatly improving his reliability. A man confronted with a moral dilemma must make a decision and move on.

I think that the universal tendency towards religion is indeed evidence in favor of the existence of God, but not very strong evidence. The adaptive advantage of discerning correct metaphysics was minimal in the ancestral environment.

Richard C:

I think if you try to use your "general capacity for systematizing judgments" to make moral decisions, you'll restrict yourself to moral systems which are fully accessible to human reason.

comment by Dynamically_Linked · 2008-07-07T04:06:00.000Z · LW(p) · GW(p)

Constant, if moral truths were mathematical truths, then ethics would be a branch of mathematics. There would be axiomatic formalizations of morality that do not fall apart when we try to explore their logical consequences. There would be mathematicians proving theorems about morality. We don't see any of this.

Isn't it simpler to suppose that morality was a hypothesis people used to explain their moral perceptions (such as "murder seems wrong") before we knew the real explanations, but now we find it hard to give up the word due to a kind of memetic inertia?

comment by Constant2 · 2008-07-07T04:58:00.000Z · LW(p) · GW(p)

Constant, if moral truths were mathematical truths, then ethics would be a branch of mathematics. There would be axiomatic formalizations of morality that do not fall apart when we try to explore their logical consequences. There would be mathematicians proving theorems about morality. We don't see any of this.

If Tegmark is correct, then everything is mathematics. Do you dispute Tegmark's claim that "there is only mathematics; that is all that exists"? Do you think your argument is any good against Tegmark's hypothesis? Will you tell Tegmark, "the department of physics and the department of biology are separate departments from the department of mathematics, and therefore you are wrong"? I don't think it is quite so easy to dismiss Tegmark's hypothesis merely on the basis that all the sciences are not treated as branches of mathematics. Tegmark's point is that something that we don't realize is mathematics nevertheless is mathematics. All your observation shows is that we don't treat it as mathematics. Which doesn't even touch Tegmark's hypothesis.

Isn't it simpler to suppose that morality was a hypothesis people used to explain their moral perceptions (such as "murder seems wrong") before we knew the real explanations, but now we find it hard to give up the word due to a kind of memetic inertia?

Moral truths pass some basic criteria of reality. They are, importantly, not a matter of opinion. If, as some claim, morality is intuitive game theory (which I think is very much on track), then morality is not a matter of opinion, because whether something is or is not a good strategy is not a matter of opinion. Optimal strategies are what they are regardless of what we think, and therefore pass an important criterion of reality.

Now, there seem to be some who think that discovering that morality is intuitive game theory debunks its reality. But to my mind that is a bit like discovering what fire is debunks the idea that fire is real. It does not: discovering what it is does not debunk it, if anything it reaffirms its reality. If fire is a kind of exothermic chemical reaction then it is most definitely not just in my imagination! And if morality is intuitive game theory then it is most definitely not just in my imagination.

And game theory happens to be... guess what... Starts with an "m".
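To make the "optimal strategies are what they are regardless of what we think" point concrete: given a payoff matrix, the best response is a mechanical computation. The textbook one-shot prisoner's dilemma below (illustrative numbers) has defection as the dominant strategy no matter anyone's opinion; the morally interesting structure only appears once the game is repeated.

```python
# Whether defection "wins" a one-shot prisoner's dilemma is not an opinion;
# it falls out of the payoff matrix. Payoff numbers are the usual textbook ones.
payoff = {  # payoff[(my_move, their_move)] = my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move):
    return max(["C", "D"], key=lambda mine: payoff[(mine, their_move)])

for theirs in ["C", "D"]:
    print(f"if the other player plays {theirs}, my best response is {best_response(theirs)}")
```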

comment by bamonster · 2008-07-07T05:24:00.000Z · LW(p) · GW(p)

What's so bad about morality being a mere human's construct - in other words, the notion that there is no "stone tablet" of morals? In fact, I think the notion that morality exists objectively, like some looming Platonic figure, raises more questions than would be solved by such a condition.

I think the best way to construct this "morality" is just to say that it's got a quasi-mathematical existence, it's axiomatic, and it's all augmented by empirical/logical reasoning.

Why accept it, why be moral? I feel the same way about this question as I do about the question of why somebody who believes "if A, then B," and also believes that A, should also believe that B.

comment by [deleted] · 2008-07-07T06:35:00.000Z · LW(p) · GW(p)

sigh

Not even close, any of you ;)

The question of whether there are any moral givens or not is analogous to the question of whether there are any mathematical givens (which was covered by E. Yudkowsky in an earlier series). Unfortunately, he doesn't seem to have learned the lesson.

Firstly, in the series on mathematics it was (correctly I think) put forward that even mathematics is not, in fact, a priori or axiomatic. 2+2=4, for instance, can be empirically based on the observation that when you have two apples, and you add another two apples, you end up with a total of four apples. So this observation can be taken as empirical evidence that 2+2=4. We don't directly perceive the fact that 2+2=4; this mathematical fact is indirectly inferred from the empirical evidence.

Similarly, if there are any moral givens, I would agree that they have to result in real empirical differences. Again, we could never perceive moral givens directly (since they are abstract); they would have to be indirectly inferred from empirical data.

The question is: what empirical differences would enable us to infer the moral givens? Ah, now that would be giving the game away wouldn't it? ;)

But again, clues can be obtained by looking at the arguments over whether math is objective or not, and reasoning by analogy for morality.

Pure math is concerned with the objective logical properties of physical systems. These properties do exist... as demonstrated by the example of taking 2 apples, adding another 2, and always getting a total of 4. This is the empirical evidence for the postulated objective logical/mathematical properties.

But... applied math (for example probability theory) is not about objective logical properties; instead, it is about cognitive systems, or the process of making inferences about the objective logical/mathematical properties. The key point to note here is that probability theory only works because there do exist objective logical/mathematical properties of systems independently of observers. So the existence of these logical/mathematical properties is what ensures the coherence of probability theory. If there were no objective logical/mathematical properties independent of observers (if, for example, adding 2 apples to another 2 apples did not always result in 4 apples in a consistent way), then probability theory would not work.

Just as math was concerned with logical properties of physical systems, so, if moral givens exist, they would be concerned with teleological properties of physical systems. And what science deals with this? Decision theory, of course. It is the moral givens that provide the explanatory justification for decision theory.

Just as postulating real mathematical entities provided the explanatory justification for probability theory, so too does postulating the existence of real moral givens provide the explanatory justification for decision theory.

Why does decision theory work? What are these mysterious 'utilities' that keep being referred to? Are they preferences? No (in some cases at least), they're moral givens ;)
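For concreteness, here is the bare mechanics that "decision theory works" usually refers to: weight each outcome's utility by its probability and pick the action with the highest total. Whether the utility numbers are preferences or "moral givens" is exactly the question being raised here; the scenario and numbers below are invented purely for illustration.

```python
# A bare-bones illustration of what decision theory actually computes:
# pick the action whose probability-weighted utility is highest.

actions = {
    "carry umbrella": {"rain": 0.3, "no rain": 0.7},
    "leave umbrella": {"rain": 0.3, "no rain": 0.7},
}

utility = {
    ("carry umbrella", "rain"): 5,     # dry, slightly encumbered
    ("carry umbrella", "no rain"): 8,  # slightly encumbered
    ("leave umbrella", "rain"): 0,     # soaked
    ("leave umbrella", "no rain"): 10, # unencumbered
}

def expected_utility(action):
    return sum(p * utility[(action, outcome)]
               for outcome, p in actions[action].items())

best = max(actions, key=expected_utility)
for a in actions:
    print(a, expected_utility(a))
print("chosen:", best)
```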

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-07T06:47:00.000Z · LW(p) · GW(p)

Geddes, if you can't keep the condescension out of your comments - just present the raw arguments, if you have any - then I'll have to ban you here, too. Just FYI. Also, your comments should be shorter.

comment by Ian_C. · 2008-07-07T07:41:00.000Z · LW(p) · GW(p)

I think that Subhan and Obert may represent two sides of a false dichotomy, namely the idea that there's either one absolute morality for all minds, or it's all subjective. But a third possibility exists - that of objective morality, where the results depend on the physical nature of the being in question, but not their whims.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-06T13:49:34.305Z · LW(p) · GW(p)

Correct. So much for "EY has the answer to everything".

comment by Lake · 2008-07-07T09:42:00.000Z · LW(p) · GW(p)

@ Ian C. Couldn't Subhan claim that as a restatement of his own position? His notion of wanting clearly encompasses more than mere whims. Perhaps he would say that a certain subset of desires, objectively grounded in the constitution of the mind, count as moral impulses.

Actually, is Subhan meant to be male? Apologies if not.

comment by Ian_C. · 2008-07-07T11:18:00.000Z · LW(p) · GW(p)

@Lake - I think Subhan is only about whims. Yes, he sees that values are tied closely to human nature, but only uses that to argue against Obert. What Obert should have pointed out is that he goes from "there is not one true morality" to "there is only preference" without arguing why those are the only two possibilities.

comment by Caledonian2 · 2008-07-07T12:07:00.000Z · LW(p) · GW(p)

FYI, it's physics that is fundamental. Math is deeper than our theories of physics - it's deeper than all our theories, because it makes up the languages we use to create and express them - but physics itself is deeper than everything.

Similarly, if there are any moral givens, I would agree that they have to result in real empirical differences.
Correct.
Again, we could never perceive moral givens directly (since they are abstract)
Not correct - everything we perceive is equally abstract. There are different kinds of abstractions defined by their interrelationships in a hierarchy. A 'virtual' computer is just as abstract as the hardware it's running on, but the hardware is on a deeper level of the hierarchy. But that's moving towards another topic.

What properties make a claim about morality, as opposed to something else? What is the basic definition of 'morality'? Answering that question is the necessary first step to resolving the issue. It is remarkable how little anyone here cares about doing that.

comment by Lake · 2008-07-07T12:16:00.000Z · LW(p) · GW(p)

I gestured at one possible answer to that question. A situation has a moral dimension if it engages moral emotions - which can presumably be listed.

comment by Caledonian2 · 2008-07-07T13:59:00.000Z · LW(p) · GW(p)

Still doesn't tell us what 'moral' means. We've just changed the category we stick that label on. What defines 'moral emotions'? Is it an arbitrary grouping, or do we use the label to refer to certain properties that things in that grouping possess? People seem to use the term in the latter way - so what properties are they?

Basic scientific methodology - you can't study what you can't produce a provisional definition for. Once you have that, you can learn more about what's defined, but you don't get anywhere without that starting point.

comment by Constant2 · 2008-07-07T17:00:00.000Z · LW(p) · GW(p)

Basic scientific methodology - you can't study what you can't produce a provisional definition for. Once you have that, you can learn more about what's defined, but you don't get anywhere without that starting point.

The first concepts that more or less denoted, say, water, may have included things which today we would reject as not water (e.g., possibly clear alcohol), failed to distinguish water from things dissolved in the water, and excluded forms of water (such as steam and ice). The very first definitions of water were probably ostensive definitions (this here is water, that is water) rather than descriptive or explanatory definitions. The definitions were subject to revision as knowledge improved.

Are you willing to accept an ostensive and potentially erroneous definition of morality that may very well be subject to revision as knowledge improves? One is easy enough to supply by listing a bunch of acts currently believed to be evil, then listing a bunch of believed-to-be morally neutral acts, and pointing out that the first group is evil and the second group isn't. Would that be satisfactory?

Is it an arbitrary grouping, or do we use the label to refer to certain properties that things in that grouping possess?

I think the better question is, do recognized examples of evil have something in common - never mind what we intend by the label. Maybe by the label "water" we initially intended "Chronos's tears" or some such useless thing. The intention isn't necessarily of any particular interest. You are interested in scientific inquiry into morality, yes? - seeing as you talk about "scientific methodology." Science studies the properties of things in themselves independently of whatever nonsense ideas we might have about them; if you want to study our intents then become a philosopher, not a scientist.

Anyway, this question - do examples of evil have something in common - is something for the scientists to answer, no? It doesn't need to be answered before scientific inquiry begins.

comment by Caledonian2 · 2008-07-07T17:36:00.000Z · LW(p) · GW(p)

Are you willing to accept an ostensive and potentially erroneous definition of morality that may very well be subject to revision as knowledge improves?
That's how knowledge works, Constant. Everything we think we know may turn out to be wrong, and any conclusions will probably end up being revised later.

Are you willing to have a neverending discussion, with everyone talking past each other, and no working definition for the central concept we're supposed to be examining?

comment by Constant2 · 2008-07-07T17:48:00.000Z · LW(p) · GW(p)

Are you willing to have a neverending discussion, with everyone talking past each other, and no working definition for the central concept we're supposed to be examining?

I'm not in charge of the discussion, so it's not a question of what I'm willing to do. I've told you how to get the starting definition you're looking for. As I said: you can start with an ostensive definition by listing examples of evil acts. Then you can find common elements. For example, it might become apparent, after surveying them, that evil acts have in common that they all have victims against whose will the evil acts were committed and who are harmed by the evil acts. It might also become apparent that the evil acts involved one or another form of transgression or trespass against certain boundaries. You might like to study what the boundaries are.

comment by Richard_Hollerith · 2008-07-07T18:20:00.000Z · LW(p) · GW(p)
It seems the ultimate confusion here is that we are talking about instrumental values . . . before agreeing on terminal values . . .

If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.

denis bider, I would not be surprised to learn that refraining from murder is a terminal value for Eliezer. Eliezer's writings imply that he has hundreds of terminal values: he cannot even enumerate them all.

Defn. "Murder" is killing under particular circumstances, e.g., not by uniformed soldiers during a war, not in self-defense, not by accident.

comment by Terren_Suydam · 2008-07-07T18:43:00.000Z · LW(p) · GW(p)

Great dialog, which I think can be summarized in Nietzsche's aphorism: "Morality is the herd-instinct in the individual."

Actually, I think the dialog could have been a lot shorter if it became clear earlier on that preference (as in morality-as-preference) referred not to individual preference but the "preference of the collective". Which is to say, morality is determined by evolutionary psychology. There are however two assumptions built into the evolutionary psychological explanation of morality which ought to be made explicit.

The first is that one has to adopt the group-selection stance. As in, groups in which morals evolved had higher stability (its members were more likely to survive and procreate). If we focus only on the selfish individual, then it's obvious that morals make no sense.

The second assumption is that evolution, in humans, bifurcated into both physical and cultural domains. This is because morality is almost certainly not determined by genetics. At best, genetics predisposes us psychologically to accept morality (whatever that means), but the content of morality is too well specified to be so determined. Thus we have to assume a selection process that selects groups on the basis of culture as well as genetics. This is actually a common sense notion, that for example groups that develop better weapons will dominate or eliminate other groups.

comment by Richard_Hollerith · 2008-07-07T19:31:00.000Z · LW(p) · GW(p)

My comment is not charitable enough towards the CEVists. I ask the moderator to delete it; I will now submit a replacement.

comment by Richard_Hollerith · 2008-07-07T19:34:00.000Z · LW(p) · GW(p)
It seems the ultimate confusion here is that we are talking about instrumental values . . . before agreeing on terminal values . . .

If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.

denis bider, under the CEV plan for singularity, no human has to give an unambiguous definition or enumeration of his or her terminal values before the launch of the seed of the superintelligence. Consequently, those who lean toward the CEV plan feel much freer to regard themselves as having hundreds of terminal values. Consequently, refraining from murder might easily be a terminal value for them.

Defn. "Murder" is killing under particular circumstances, e.g., not by uniformed soldiers during a war, not in self-defense, not by accident.

comment by Caledonian2 · 2008-07-07T20:39:00.000Z · LW(p) · GW(p)
As I said: you can start with an ostensive definition by listing examples of evil acts. Then you can find common elements.

There aren't necessarily any common elements, besides utterly trivial ones. If you look at examples of misspelled words in various languages and examine their individual properties, you won't find what unites them in a category. You have to understand their relationship to the spelling rules in the various languages - rules which themselves are likely to be incompatible and mutually incoherent - to understand what properties make them examples of 'misspelled words'.

We need the concept of morality itself, the rules that define the incompatible and mutually incoherent rule systems that are examples of 'morality'. Looking at examples of 'evil acts' isn't going to cut it.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-07T20:56:00.000Z · LW(p) · GW(p)

Quick comment: Terren Suydam's version of "evolutionary psychology" is not the academically accepted one. Conventional academic evolutionary explanations of morality rely on neither group selection nor selection on cultures.

comment by RobinHanson · 2008-07-07T21:05:00.000Z · LW(p) · GW(p)

Richard, once we can see how to eliminate math or science views, then it doesn't seem particularly puzzling that people can generate plausible views to consider. The obvious hypothesis is that they generate many views in their mind, apply crude but effective filters, and then only tell others about the few best ones. So of course the views they propose will fare far better than average when tested against consistency and data.

comment by Terren_Suydam · 2008-07-07T21:13:00.000Z · LW(p) · GW(p)

Quick comment: Terren Suydam's version of "evolutionary psychology" is not the academically accepted one. Conventional academic evolutionary explanations of morality rely on neither group selection nor selection on cultures.

Be that as it may, I would have to say that an explanation of morality made strictly in terms of the academically accepted version of evolutionary psychology is not possible. I'm not trying to redefine the term - just saying what else would be necessary to make an explanation of morality on that basis possible.

comment by Constant2 · 2008-07-07T21:25:00.000Z · LW(p) · GW(p)

There aren't necessarily any common elements, besides utterly trivial ones.

Maybe, maybe not. You won't know without looking. You have to start somewhere.

If you look at examples of misspelled words in various languages and examine their individual properties, you won't find what unites them in a category.

But then, what about correctly spelled words? There will be many observable systematic relationships between those. I happen to think you have the analogy backwards. In the good/evil dichotomy, it is the evil acts, not the not-evil acts, which are narrowly defined and systematically related (I think). If you try to find what is in common between the not-evil acts, those are the acts which have nothing in particular in common. Meanwhile, in the well-spelled/misspelled dichotomy, it is the correctly-spelled words that are narrowly defined and systematically related. In short, I think morality is fundamentally a narrow set of prohibitions rather than a narrow set of requirements. In contrast, the rules of spelling form a narrow set of requirements.

But whether you are right or I am right is something that we won't know without looking.

You have to understand their relationship to the spelling rules in the various languages - rules which themselves are likely to be incompatible and mutually incoherent - to understand what properties make them examples of 'misspelled words'.

Nobody told Galileo and Newton what the rules generating the world's behavior were, but they were able to go a long way toward figuring them out. And isn't that what science is? If you claim that the science can't start without knowing the rules first, then aren't you asserting that science is hopeless?

comment by Fly2 · 2008-07-08T00:12:00.000Z · LW(p) · GW(p)

Terren Suydam: "The first is that one has to adopt the group-selection stance."

(Technical jargon nitpick.)

In studying evolutionary biology, "group-selection" has a specific meaning, an individual sacrifices its own fitness in order to improve the group fitness. I.e., individual loss for a group gain. E.g., suppose you have a species that consists of many small family groups. Suppose a mutation produces a self-sacrificing individual in one of the groups. His fitness is slightly lower but his family group fitness is higher. His group tends to grow faster than other groups. So his group produces more splinter groups, some of which will have his alleles. Within any one group his allele tends to die out, but the overall population frequency of the allele increases due to the increased number of splinter groups containing the allele. This is an example of group-selection.
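A toy numerical illustration of the splinter-group dynamic described above (all numbers invented): within each group the altruistic allele loses ground, yet the pooled frequency can still rise because the altruist-heavy group grows faster. Whether that global gain persists depends on groups periodically splitting and remixing, which this sketch skips.

```python
# Toy two-group model. Within each group, altruists pay a fitness cost c,
# and everyone in a group benefits in proportion to its fraction of altruists.
b, c = 5.0, 1.0

# (number of altruists, number of selfish) in each starting group
groups = [(10, 90), (90, 10)]

def next_generation(altruists, selfish):
    p = altruists / (altruists + selfish)        # fraction of altruists
    w_selfish = 1 + b * p                        # benefit, no cost
    w_altruist = 1 + b * p - c                   # benefit minus the cost
    return altruists * w_altruist, selfish * w_selfish

new_groups = [next_generation(a, s) for a, s in groups]

for (a0, s0), (a1, s1) in zip(groups, new_groups):
    print(f"within-group altruist fraction: {a0/(a0+s0):.2f} -> {a1/(a1+s1):.2f}")

pooled_before = sum(a for a, s in groups) / sum(a + s for a, s in groups)
pooled_after = sum(a for a, s in new_groups) / sum(a + s for a, s in new_groups)
print(f"pooled altruist fraction: {pooled_before:.2f} -> {pooled_after:.2f}")
```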

Much more common is cooperation that doesn't lower the individual's fitness. In this case it is win-win, individual gain and group gain. Symbiosis is an example where the cooperation is between different species. Both individuals gain so it is not an example of group-selection.

There are a few known examples of group-selection but they tend to be the rare exception, not the rule. Often something appears to be group-selection but on closer analysis turns out to be regular selection. E.g., suppose a hunter shares his meat with the tribe. He isn't lowering his fitness because he already has enough meat for himself. He is publicly displaying his prowess as a food provider which increases his mating success. Thus his generosity directly improves his fitness. His generosity is a fitness increasing status display.

Cooperation can and usually does arise through regular selfish selection.

(I see EY also noted this.)

comment by Caledonian2 · 2008-07-08T01:54:00.000Z · LW(p) · GW(p)

Nobody told Galileo and Newton what the rules generating the world's behavior were, but they were able to go a long way toward figuring them out.
They started with rigorously-defined phenomena. Galileo was famous for taking careful and exacting measurements - he was able to figure out some of the world's rules because he could notice the relationships between precisely-defined and -measured things.

Nobody gets anywhere, most especially in philosophy, without rigorous definitions of the relevant concepts. You rely on vague, intuitive understandings, and you accomplish nothing.

comment by Terren_Suydam · 2008-07-08T02:00:00.000Z · LW(p) · GW(p)

In studying evolutionary biology, "group-selection" has a specific meaning, an individual sacrifices its own fitness in order to improve the group fitness.

I think it's quite limiting to think strictly in terms of genetics, because there is more than one level of description going on when it comes to selection pressure.

It is interesting to take that step back and view the culture as an individual. The human super-organism (e.g., a tribe, or more generally, a culture) competes with others for resources. It consumes, metabolizes, and excretes, which is to say that it lowers entropy locally and raises it globally. With others, it fights, defends, cooperates, and merges/assimilates. It gets sick, fights the antigens, heals itself, or dies. New super-organisms are spun off of or otherwise born from parents. We may look at the "evolution of language" through the lens of the human super-organism. Language is the DNA of culture, to make a rough analogy.

The dynamics that propagate the super-organism are not reducible to genetics. It's a different level of description, because culture emerges from the interaction of large numbers of individuals. And you can't deny that if one culture has guns and confronts another that doesn't, that dynamic is going to place a harsh selective pressure indeed on the culture without the firepower. So genetics is not the whole story, and that's what I mean by group selection.

comment by Z._M._Davis · 2008-07-08T02:25:00.000Z · LW(p) · GW(p)

Terren, I would doubt that changes between cultures are best explained by an evolutionary process--cf. "No Evolutions for Corporations or Nanodevices." There may be a selection effect in that cultures with guns are more likely to persist, but that's different from saying that selection pressures play a really important role in designing the particular features of a culture. So I am given to understand.

comment by Terren_Suydam · 2008-07-08T03:34:00.000Z · LW(p) · GW(p)

There may be a selection effect in that cultures with guns are more likely to persist, but that's different from saying that selection pressures play a really important role in designing the particular features of a culture.

That's what I'm saying - selection pressures are important in determining cultural features, because those features in turn determine a culture's viability. The global-level organization of a culture - including its moral code, political organization, and other important social structures - are key considerations in what makes a culture healthy or stable, and thus competitive in an arena of limited resources. Keep in mind, I think these ideas lend themselves more easily to ancient history, in which the boundaries between cultures were so much clearer, rather than the globalized cultures of our modern world.

A lot of these ideas come from Jared Diamond's fascinating Guns, Germs, and Steel, which talks in depth about cultural evolution throughout human history.

comment by Fly2 · 2008-07-08T04:17:00.000Z · LW(p) · GW(p)

Terren Suydam: "So genetics is not the whole story, and that's what I mean by group selection."

I use the term "multilevel selection" for what you are describing. I agree it has been important.

E.g., there has been selection between different species. Species with genomes that supported rapid adaptation to changing environments and that supported quick diversification when expanding into new niches spread far and wide. (Beetles have been extremely successful with around 350,000 known species.) Other species branches died out. The genetic mechanisms and the animal body plans that persist to the present are the winners of a long between-species selection process.

My intuition is that selection operating at the individual level, whether genetic or cultural, suffices to produce cooperation and moral behavior. Multilevel selection probably played a supporting role.

comment by [deleted] · 2008-07-09T06:09:00.000Z · LW(p) · GW(p)

>Geddes, if you can't keep the condescension out of your comments - just present the raw arguments, if you have any - then I'll have to ban you here, too. Just FYI. Also, your comments should be shorter.

But all I'm asking for is an explanation as to why decision theory works. Perhaps someone like R. Hanson could explain?

After all, I know (admittedly only in very general terms) what the explanation for thermodynamics is (the underlying explanation is in the concepts of mechanics: energy, force, etc.). Also, I know (in general) why probability theory works (the underlying explanation is in the concepts of algebra: relations, functions, etc.).

To shorten my comment then, here's the question I'm asking:

Science: Thermodynamics
Explanatory Justification: Mechanics

Science: Probability Theory
Explanatory Justification: Algebra

Science: Decision Theory
Explanatory Justification: ???????

And I'm postulating that knowing the explanatory justification for decision theory would give you the answers to the questions on morality. I'm also guessing that no-one here can provide that justification ;)

comment by Arandur · 2011-08-18T15:41:16.383Z · LW(p) · GW(p)

"If morality exists independently of human nature, then isn't it a remarkable coincidence that, say, love is good?"

I'm going to play Devil's Advocate for a moment here. Anyone, please feel free to answer, but do not interpret the below arguments as correlating with my set of beliefs.

"A remarkable coincidence? Of course not! If we're supposing that this 'stone tablet' has some influence on the universe - and if it exists, it must exert influence, otherwise we wouldn't have any evidence wherewith to be arguing over whether or not it exists - then it had influence on our 'creation', whether (in order to cover all bases) we got here purely through evolution, or via some external manipulation as well. I should think it would be yet stranger if we had human natures that did not accord with such a 'stone tablet'."

Replies from: hairyfigment
comment by hairyfigment · 2011-08-20T01:35:48.997Z · LW(p) · GW(p)

By your Devil's logic here, we would expect at least part of human nature to accord with the whole of this 'stone tablet'. I think we could vary the argument to avoid this conclusion. But as written it implies that each 'law' from the 'tablet' has a reflection in human nature, even if perhaps some other part of human nature works against its realization.

This implies that there exists some complicated aspect of human nature we could use to define morality which would give us the same answers as the 'stone tablet'.

Replies from: Arandur
comment by Arandur · 2011-08-20T04:27:35.708Z · LW(p) · GW(p)

Which sounds like that fuzzily-defined "conscience" thing. So suppose I say that this "Stone tablet" is not a literal tablet, but is rather a set of rules that sufficiently advanced lifeforms will tend to accord to? Is this fundamentally different than the opposite side of the argument?

Replies from: hairyfigment
comment by hairyfigment · 2011-08-21T03:14:01.130Z · LW(p) · GW(p)

Well, that depends. What does "sufficiently advanced" mean? Does this claim have anything to say about Clippy?

If it doesn't constrain anticipation there, I suspect no difference exists.

Replies from: Arandur
comment by Arandur · 2011-08-21T17:35:12.110Z · LW(p) · GW(p)

Ha! No. I guess I'm using a stricter definition of a "mind" than is used in that post: one that is able to model itself. I recognize the utility of such a generalized definition of intelligence, but I'm talking about a subclass of said intelligences.

Replies from: hairyfigment
comment by hairyfigment · 2011-08-21T17:54:09.426Z · LW(p) · GW(p)

Er, why couldn't Clippy model itself? Surely you don't mean that you think Clippy would change its end-goals if it did so (for what reason?)

Replies from: Arandur
comment by Arandur · 2011-08-22T00:46:04.788Z · LW(p) · GW(p)

... Just to check: we're talking about Microsoft Office's Clippy, right?

Replies from: Alicorn, orthonormal
comment by Alicorn · 2011-08-22T01:02:16.113Z · LW(p) · GW(p)

Not likely.

Replies from: Arandur
comment by Arandur · 2011-08-22T01:05:30.503Z · LW(p) · GW(p)

Oh dear; how embarrassing. Let me try my argument again from the top, then.

comment by orthonormal · 2011-09-06T13:34:17.985Z · LW(p) · GW(p)

Actually, this is what we're really talking about, not MS Word constructs or LW roleplayers.

comment by kilobug · 2011-09-23T16:30:29.410Z · LW(p) · GW(p)

« Then you believe in universally compelling arguments processed by a ghost in the machine. For every possible mind whose utility function assigns terminal value +1, mind design space contains an equal and opposite mind whose utility function assigns terminal value -1. »

That's true, but that doesn't say anything about the sustainability of that given mind design (the possibility for the mind design to survive, either by having the individual survive, or by creating new individuals with similar mind designs).

A mind design that would value its own death very positively would commit suicide (unless unable to) and stop existing. A mind design that values pain (in the abstract sense of dealing damage to its own physical substrate) would be much less likely to survive.

Many other possible mind designs would become completely tied in contradictions and chaos, and be unable to achieve anything. Think about a mind without "modus ponens", or a mind for which the values are chosen randomly and change every nanosecond...

And it gets even narrower when the mind is not alone, but has to live in society with other similar minds.

So while mind design space contains the exact opposite of every possible terminal values, the "coherent mind design space" (minds that are coherent enough to not self-destruct, not paralyze themselves and not collapse into chaos) is probably more limited.

But that space is still so big that it contains minds like the paper-clip optimizer wanting to tile the solar system with paper-clips, no doubt about that one.

comment by [deleted] · 2012-01-05T19:45:57.674Z · LW(p) · GW(p)

It would be a pitiful mind indeed that demanded authoritative answers so strongly, that it would forsake all good things to have some authority beyond itself to follow.

Having an authority to follow might actually be that mind's one good thing. Maybe it really likes having such authority beyond itself.

While humans obviously don't consider it their only good thing, and there is human variation (I don't think everyone values it; I'm only certain at least a few do), it seems pretty clear to me that one of our good things, as in stuff we find worth bothering to acquire, is authority beyond ourselves.

If we find such a hypothetical authority we might feel a strong preference to move in its direction.

comment by Peterdjones · 2013-01-06T14:05:55.206Z · LW(p) · GW(p)

Well, that was one almighty exercise in false dichotomy.

comment by Will_Lugar · 2014-08-18T17:39:49.794Z · LW(p) · GW(p)

It is sometimes argued that happiness is good and suffering is bad. (This is tentatively my own view, but explaining the meaning of "good" and "bad," defending its truth, and expanding the view to account for the additional categories of "right" and "wrong" is beyond the scope of this comment.)

If this is true, then depending on what kind of truth it is, it may also be true in all possible worlds--and a fortiori, on all possible planets in this universe. Furthermore, if it is true on all possible planets that happiness is good and suffering is bad, this does not preclude the possibility that on some planets, murder and theft might be the best way toward everyone's happiness, while compassion and friendship might lead to everyone's misery. In such a case, then to whatever degree this scenario is theoretically possible, compassion and friendship would be bad, while murder and theft would be good.

Hence we can see that it might be the case that normative ethical truths differ from one planet to another, but metaethical truths are the same everywhere. On one level, this is a kind of moral relativism, but it is also based on an absolute principle. I personally think it is a plausible view, while I admit that this comment provides little exposition and no defense of it.

Replies from: themusicgod1
comment by themusicgod1 · 2017-06-22T16:36:45.526Z · LW(p) · GW(p)

Similarly, an even more defensible position might be the Buddhist one: that happiness is transitory and mostly a construction of the mind, and virtually always attached to suffering, but suffering is real and worth minimizing.

comment by Basil Marte · 2020-08-17T01:03:30.832Z · LW(p) · GW(p)
By changing a mind, you can change what it prefers; you can even change what it believes to be right; but you cannot change what is right.  Anything you talk about, that can be changed in this way, is not 'right-ness'.

If the characters were real people, I'd say here Obert is "right" while having a wrong justification. Just extrapolate the evolutionary origins of moral intuitions into any society in approximate technological stasis. "Rightness" is what the evolutionarily stable strategy feels like from the inside, and that depends on the environment.

If the population is not limited by the availability of food, so that single mothers can feed their children, some form of low-key polygyny/promiscuity is the reproductive strategy that ends up as the only game in town.

If instead food limits population, monogamy comes out victorious (for the bulk of the population, at least). If additionally hand-labor is expensive, then we can say that women are economically valuable (even if outright regarded as assets, they are very precious) and they can negotiate comparatively good treatment (as in, compared to the next paragraph). We might see related rituals, like bride-price, or the marriage ceremony looking like a kidnapping (the theft of a valuable laborer).

On the other hand, if hand-labor is cheap, then the output of a worker may not even earn the food necessary to sustain herself, and women are economic liabilities apart from their reproductive capacity. It is under these circumstances that we can find veiling, guarding, honor killings, FGM, and sati (killing widows). Groom-price (often confused with a different form of dowry under a single label) and, to avoid it, groom kidnapping happen here, too.

"Moral progress" happens by the environment changing the payoffs to strategies. Hating other tribes goes away temporarily when they become allies, and permanently when the allied tribes merge and it becomes too difficult to tell who belongs to which tribe. ("I think I'm three-eights blegg, by my maternal grandfather and by my paternal...")

One implication is that we have so much discussion on the nature of morality exactly because it is unclear what (if any) human behavior stands the best chance of propagating itself into the future with high fidelity. Alternative phrasing: this is an age of whalefall, and we get to implement policies other than morality, the one that satisfies Moloch. (This is not a new claim: the evolutionary origins of moral intuitions mean that morality is what past policies of satisfying Moloch feel like from the inside.)