What Would You Do Without Morality?

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-29T05:07:07.000Z · LW · GW · Legacy · 186 comments

To those who say "Nothing is real," I once replied, "That's great, but how does the nothing work?"

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

Devastating news, to be sure—and no, I am not telling you this in real life.  But suppose I did tell it to you.  Suppose that, whatever you think is the basis of your moral philosophy, I convincingly tore it apart, and moreover showed you that nothing could fill its place.  Suppose I proved that all utilities equaled zero.

I know that Your-Moral-Philosophy is as true and undisprovable as 2 + 2 = 4. But still, I ask that you do your best to perform the thought experiment, and concretely envision the possibilities even if they seem painful, or pointless, or logically incapable of any good reply.

Would you still tip cabdrivers?  Would you cheat on your Significant Other?  If a child lay fainted on the train tracks, would you still drag them off?

Would you still eat the same kinds of foods—or would you only eat the cheapest food, since there's no reason you should have fun—or would you eat very expensive food, since there's no reason you should save money for tomorrow?

Would you wear black and write gloomy poetry and denounce all altruists as fools?  But there's no reason you should do that—it's just a cached thought.

Would you stay in bed because there was no reason to get up?  What about when you finally got hungry and stumbled into the kitchen—what would you do after you were done eating?

Would you go on reading Overcoming Bias, and if not, what would you read instead?  Would you still try to be rational, and if not, what would you think instead?

Close your eyes, take as long as necessary to answer:

What would you do, if nothing were right?

186 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Lewis_Powell · 2008-06-29T05:22:54.000Z · LW(p) · GW(p)

Did you convince me that nothing is morally right, or that all utilities are 0?

If you convinced me that there is no moral rightness, I would be less inclined to take action to promote the things I currently consider abstract goods, but would still be moved by my desires and reactions to my immediate circumstances.

If you did persuade me that nothing has any value, I suspect that, over time, my desires would slowly convince me that things had value again.

If 'convincing' includes an effect on my basic desires (as opposed to my inferentially derived ones), then I would not be moved to act in any cognitively mediated way (though I may still exhibit behaviors with non-cognitive causes).

Replies from: None
comment by [deleted] · 2015-02-10T00:09:54.345Z · LW(p) · GW(p)

Why the assumption that morality is analysable with utilities?

Replies from: ike
comment by ike · 2015-02-10T14:21:44.195Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem
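
Roughly stated, the theorem at that link says: if an agent's preferences over lotteries satisfy completeness, transitivity, continuity, and independence, then there exists a utility function u, unique up to positive affine transformation, such that

```latex
L \succeq M \iff \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u],
\qquad \text{where } \mathbb{E}_{L}[u] = \sum_i p_i \, u(o_i)
```

for lotteries L, M assigning probabilities p_i to outcomes o_i.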

Replies from: None
comment by [deleted] · 2015-02-15T17:41:43.741Z · LW(p) · GW(p)

...it has been shown in countless experiments that people do not behave in accordance with this theorem. So what conclusions do you want to draw from this?

...you do realise there are many problems with rational choice theory right? See chapter 3 and 4 from 'Philosophy of Economics: A Contemporary Introduction' by Julian Reiss for a brief introduction to the theory's problems. If you can't get your hands on that, see lectures 4-6 from Philosophy of Economics: Theory, Methods, and Values http://jreiss.org/jreiss.org/Teaching.html for an even briefer introduction.

...what has this got to do with morality?

Replies from: ike
comment by ike · 2015-02-15T17:49:48.571Z · LW(p) · GW(p)

I'm going to take a look at the lectures you linked later.

For now:

...what has this got to do with morality?

Your morals are your preferences; if you say that doing A is more moral than doing B, you prefer doing A to B (barring cognitive dissonance). So if preferences can be reduced to utilities, morality can be too.

In fact, you'd have to argue that the axioms don't apply to morality, and justify that position.

Replies from: None
comment by [deleted] · 2015-02-15T18:16:43.862Z · LW(p) · GW(p)

I highly doubt that morals are preferences, with or without what you (assumedly loosely) term cognitive dissonance. One can have morals that aren't preferences:

If one is a Christian deontologist, one thinks everyone ought to follow a certain set of rules, but one needn't prefer that - one might be rather pleased that only oneself will get into heaven by following the rules. One might believe things, events or people are morally "good" or "bad" without preferring or preferring not that thing, event or person. For instance, one might think that a person is bad without preferring that person didn't exist. One can believe one ought to do something, without wanting to do it. This is seen very often in most people.

And one can obviously have preferences which aren't morals. For instance, I can prefer to eat a chocolate now without thinking I ought to do so.

We should also be wary of equivocating on what we mean by "preferences". Revealed preference theory is very popular in economics, and it equates preferences with actions, which evidently stops us having preferences about anything we don't do, and thus means most of the usages of the word "preference" above are illegitimate. I think we normally mean some psychological state when we refer to a preference. For instance, I see the word used as "conscious desire" pretty often.

Replies from: ike
comment by ike · 2015-02-15T18:40:58.603Z · LW(p) · GW(p)

If one is a Christian deontologist, one thinks everyone ought to follow a certain set of rules, but one needn't prefer that - one might be rather pleased that only oneself will get into heaven by following the rules. One might believe things, events or people are morally "good" or "bad" without preferring or preferring not that thing, event or person. For instance, one might think that a person is bad without preferring that person didn't exist. One can believe one ought to do something, without wanting to do it. This is seen very often in most people.

I'm talking about personal morals here, i.e. "what should I do", which are the only ones that matter for my own decision making. For my own actions, the theorem shows that there must be some utility function that captures my decision-making, or I am irrational in some way.

Even if preferences are distinct from morals, each will still be expressible by a utility function or fail some axiom.

And one can obviously have preferences which aren't morals. For instance, I can prefer to eat a chocolate now without thinking I ought to do so.

That example is one where the stakes are so low that it doesn't make sense to spend time thinking about it. If you value your happiness and consider it good, then you ought to eat the chocolate, but it may represent so little utility that figuring that out costs more than it's worth.

We should also be wary of equivocating on what we mean by "preferences". Revealed preference theory is very popular in economics, and it equates preferences with actions, which evidently stops us having preferences about anything we don't do, and thus means most of the usages of the word "preference" above are illegitimate. I think we normally mean some psychological state when we refer to a preference. For instance, I see the word used as "conscious desire" pretty often.

When I say preference I mean "what state do you want the world to be in". The problem of akrasia is well known, and it means that our actions don't always express our preferences.

Preferences should be over outcomes, while actions are not. An imbalance can be akrasia, or the result of a misprediction.

Regardless of how you define preference, if it meets the axioms then it can be expressed as a utility function. So every form of preference corresponds to different utility functions, whether it's revealed, actual, or some other thing.
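
A minimal sketch of that claim for the finite, deterministic case (the outcome names and ranking below are hypothetical, not anything from the thread): any complete, transitive preference relation over finitely many outcomes can be summarized by a utility function.

```python
# Minimal sketch: representing a finite, complete, transitive preference
# relation with a utility function. Outcomes and ranking are hypothetical.

outcomes = ["help_stranger", "eat_chocolate", "do_nothing"]

# Encode an example weak-preference relation via a made-up ranking.
ranking = {"help_stranger": 2, "eat_chocolate": 1, "do_nothing": 0}

def prefer(x, y):
    """True if x is weakly preferred to y (complete and transitive here)."""
    return ranking[x] >= ranking[y]

def utility_from_preferences(outcomes, prefer):
    """Score each outcome by how many outcomes it weakly beats.
    For a complete, transitive relation this utility represents it:
    u(x) >= u(y) exactly when x is weakly preferred to y."""
    return {x: sum(prefer(x, y) for y in outcomes) for x in outcomes}

print(utility_from_preferences(outcomes, prefer))
# {'help_stranger': 3, 'eat_chocolate': 2, 'do_nothing': 1}
```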

Replies from: None
comment by [deleted] · 2015-02-15T19:19:12.742Z · LW(p) · GW(p)

Oh, so now you're just talking about personal morals. One of my examples already covered that: 'One can believe one ought to do something, without wanting to do it'.

Why the presumption that utility functions capture decision-making? You acknowledge that preferences and hence utilities don't always lead to decisions. And why the assumption that not meeting the axioms of rational choice theory makes you irrational? Morality might not even be appropriately described by the axioms of rational choice theory; how can you express everyone's moral beliefs as real numbers?

On the chocolate example, I can think I ought not eat the chocolate, but nevertheless prefer to eat it, and even actually eat it; so your counterargument does not work.

Given that you are not claiming all preferences meet the axioms - only "rational" preferences do (where's your support?) - you cannot say 'every form of preference corresponds to different utility functions, whether it's revealed, actual, or some other thing'. And again, we ought to ask ourselves whether preferences or rational preferences are actually the right sort of thing to be expressed by the axioms; can they really be expressed as real numbers?

Replies from: ike
comment by ike · 2015-02-15T19:43:12.092Z · LW(p) · GW(p)

Which axiom do you think shouldn't apply? If you can't give me an argument why not to agree with any given axiom, then why shouldn't I use them?

Given that you are not claiming all preferences meet the axioms - only "rational" preferences do (where's your support?) - you cannot say 'every form of preference corresponds to different utility functions, whether it's revealed, actual, or some other thing'.

Obviously, if I prefer X to Y, and also prefer Y to X, then I'm being incoherent and that can't be captured by a utility function. I expressly outlaw that kind of preference.

Argue for a specific form of preference that violates the axioms.

Replies from: None
comment by [deleted] · 2015-02-15T20:25:28.570Z · LW(p) · GW(p)

If you can't give me an argument as to why all your axioms apply, then why should I accept any of your claims?

A specific form of preference that violates the axioms? Any preference which is "irrational" under those axioms, and you already acknowledged preferences of that sort existed.

Replies from: ike, dxu
comment by ike · 2015-02-15T20:31:24.446Z · LW(p) · GW(p)

If you can't give me an argument as to why all your axioms apply, then why should I accept any of your claims?

I see no counterexamples to any of the axioms. If they're so wrong, you should be able to come up with a set of preferences that someone could actually support.

A specific form of preference that violates the axioms? Any preference which is "irrational" under those axioms, and you already acknowledged preferences of that sort existed.

You need to argue that those are useful in some sense. Preferring A over B and B over A doesn't follow the axioms, but I see no reason to use such systems. Is that really your position, that coherence and consistency don't matter?

comment by dxu · 2015-02-15T21:49:20.858Z · LW(p) · GW(p)

Any preference which is "irrational" under those axioms, and you already acknowledged preferences of that sort existed.

As an extremely basic example: I could prefer chocolate ice cream over vanilla ice cream, and prefer vanilla ice cream over pistachio ice cream. Under the Von Neumann-Morgenstern axioms, however, I cannot then prefer pistachio to chocolate because that would violate the transitivity axiom. You are correct that there is probably someone out there who holds all three preferences simultaneously. I would call such a person "irrational". Wouldn't you?
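
A tiny sketch of that check (the flavors come from the example above; the transitivity test itself is generic):

```python
# The three strict preferences from the example form a cycle, so no
# utility function can assign numbers consistent with all of them.
strict_prefs = [("chocolate", "vanilla"),
                ("vanilla", "pistachio"),
                ("pistachio", "chocolate")]  # this last one closes the cycle

def is_transitive(prefs):
    """Check that whenever x > y and y > z are both present, x > z is too."""
    prefs = set(prefs)
    return all((x, z) in prefs
               for (x, y1) in prefs
               for (y2, z) in prefs
               if y1 == y2)

print(is_transitive(strict_prefs))  # False: the preferences are intransitive
```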

comment by Lewis_Powell · 2008-06-29T05:24:50.000Z · LW(p) · GW(p)

Ugh, sorry about the typos, I am commenting from a cell phone, and have clumsy thumbs.

comment by Wendy_Collings · 2008-06-29T05:28:59.000Z · LW(p) · GW(p)

First, can you clarify what you mean by "everything is permissible and nothing is forbidden"?

In my familiar world, "permissible" and "forbidden" refer to certain expected consequences. I can still choose to murder, or cheat, blaspheme, neglect to earn a living, etc; they're only forbidden in the sense of not wanting to experience the consequences.

Are you suggesting I imagine that the consequences would be different or nonexistent? Or that I would no longer have a preference about consequences? Or something else?

comment by John · 2008-06-29T05:29:18.000Z · LW(p) · GW(p)

"Morality" generally refers to guidelines on one of two things:

(1) Doing good to other sentients.
(2) Ensuring that the future is nice.

If you wanted to make me stop caring about (1), you could convince me that all other sentients were computer simulations who were different in kind than I was, and that their emotions were simulated according to sophisticated computer models. In that case, I would probably continue to treat sentients as peers, because things would be a lot more boring if I started thinking of them as mere NPCs.

If you wanted to make me stop caring about (2), you could tell me that I was living in computer simulation that would grant my every request (similar to the plot of this novel). If that were the case, I would set up sophisticated games for myself. Just taking the path of least resistance and maximizing momentary dopamine release would get boring quickly. (There's a reason why you see more kids eating candy than adults.) I would think carefully before I even experimented with maximizing dopamine release, since it would make everything else seem petty by comparison.

Either way, you would be ruining the secret to happiness:

"The secret of happiness is to find something more important than you are and dedicate your life to it." - Dan Dennet

comment by RobinHanson · 2008-06-29T05:30:22.000Z · LW(p) · GW(p)

Well I've argued that shoulds are overrated, that wants are enough. I really can't imagine you convincing me that I don't want anything more than anything else.

comment by an · 2008-06-29T05:41:46.000Z · LW(p) · GW(p)

I'd do everything that I do now. Moral realism demolished.

comment by Laura__ABJ · 2008-06-29T05:42:55.000Z · LW(p) · GW(p)

"Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden."

First Existential Crisis: Age 15

"Would you wear black and write gloomy poetry and denounce all altruists as fools?"

Been there, done that.

"But there's no reason you should do that - it's just a cached thought."

Realized this.

"Would you stay in bed because there was no reason to get up?"

Tried that.

"What about when you finally got hungry and stumbled into the kitchen - what would you do after you were done eating?"

Stare at the wall.

"Would you go on reading Overcoming Bias, and if not, what would you read instead?"

Shakespeare, Nietzsche

"Would you still try to be rational, and if not, what would you think instead"

No-- Came up with an entire philosophy of "It doesn't matter if anything I say, do, or think is consistent with itself or each other... everything in my head has been set up by the universe- my parents' ideas of right and wrong- television- paternalistic hopes of an approving/forgiving/nonexistent god and his ability to grant immortality, so why should I worry about trying to put it together in any kind of sensible fashion? Let it all sort itself out..."

"What would you do, if nothing were right?" What felt best.

comment by Jadagul · 2008-06-29T05:48:18.000Z · LW(p) · GW(p)

Eliezer: I'm finding this one hard, because I'm not sure what it would mean for you to convince me that nothing was right. Since my current ethics system goes something like, "All morality is arbitrary, there's nothing that's right-in-the-abstract or wrong-in-the-abstract, so I might as well try to make myself as happy as possible," I'm not sure what you're convincing me of--that there's no particular reason to believe that I should make myself happy? But I already believe that. I've chosen to try to be happy, but I don't think there's a good 'reason' for it.

On the other hand, maybe I right now am the end result you're looking for. In which case, yes, I do tip cabdrivers; no, I don't cheat; and usually I'd pull the kid off, if there weren't much risk to me.

comment by Ian_C. · 2008-06-29T05:51:15.000Z · LW(p) · GW(p)

I guess logically I would have to do nothing, since there would be no logical basis to perform any action. This would of course be fatal after a few days, since staying alive requires action.

(I want to emphasize this is just a hypothetical answer to a hypothetical question - I would never really just sit down and wait to die.)

Replies from: atorm
comment by atorm · 2011-11-03T16:28:56.792Z · LW(p) · GW(p)

If it's not what you would really do, you're not answering the question.

comment by Kip_Werking · 2008-06-29T06:05:33.000Z · LW(p) · GW(p)

I'm already convinced that nothing is right or wrong in the absolute sense most people (and religions) imply.

So what do I do? Whatever I want. Right now, I'm posting a comment to a blog. Why? Not because it's right. Right or wrong has nothing to do with it. I just want to.

comment by Roland2 · 2008-06-29T06:10:17.000Z · LW(p) · GW(p)

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

Suppose I proved that all utilities equaled zero.

If I still feel hunger then food has a utility > 0. If I don't feel anything anymore, then I wouldn't care about anything.

So our morality is defined by our emotions. The decisions I make are a tradeoff. Do I tip the waiter? Depends on my financial situation and if I'm willing to endure the awkwardness of breaking a social convention. Yes, I've often eaten without tipping.

Do I save the human in need? Yes, I have the tendency to do so, although this also depends on a series of factors. And I'm aware that this is also hardwired empathy. Abstract moral principles are just rationalizations from our emotionally hardwired brain.

I cannot imagine myself without morality because that wouldn't be me, but another brain.

Does your laptop care if the battery is running out? Yes, it will start beeping, because it is hardwired to do so. If you removed this hardwired beeping you have removed the laptop's morality.

Morality is not a ghost in the machine, but it is defined by the machine itself.

Eliezer you can prove to me that all utilities are 0 but since that wouldn't change my emotional wiring, for me some utilities would still be != 0.

comment by AndyWood · 2008-06-29T06:12:33.000Z · LW(p) · GW(p)

I have thought on this, and concluded that I would do nothing different. Nothing at all. I do not base my actions on what I believe to be "right" in the abstract, but upon whether I like the consequences that I forecast. The only thing that could and would change my actions is more courage.

comment by Tiiba3 · 2008-06-29T06:18:46.000Z · LW(p) · GW(p)

Let's say I have a utility function and a finite map from actions to utilities. (Actions are things like moving a muscle or writing a bit to memory, so there's a finite number.)

One day, the utility of all actions becomes the same. What do I do? Well, unlike Asimov's robots, I won't self-destructively try to do everything at once. I'll just pick an action randomly.

The result is that I move in random ways and mumble gibberish. Although this is perfectly voluntary, it bears an uncanny resemblance to a seizure.

Regardless of what else is in a machine with such a utility function, it will never surpass the standard of intelligence set by jellyfish.

comment by Nominull3 · 2008-06-29T06:45:12.000Z · LW(p) · GW(p)

I am already fairly well convinced of this; I am hoping against hope you have something up your sleeve to change my mind.

I had this revelation sometime back. I tried living without meaning for a week, and it turned out that not a whole lot changed. Oops?

comment by Joseph_Knecht · 2008-06-29T06:47:19.000Z · LW(p) · GW(p)

Like many others here, I don't believe that there is anything like a moral truth that exists independently of thinking beings (or even dependently on thinking beings in anything like an objective sense), so I already live in something like that hypothetical. Thus my behavior would not be altered in the slightest.

comment by JamesAndrix · 2008-06-29T07:21:55.000Z · LW(p) · GW(p)

In general, I'd go back to being an amoralist.

My-Moral-Philosophy is either as true as 2+2=4 or as true as 2+2=5, I'm not sure. Or perhaps as true as 0.0001*1 > 0.

If it is wrong, then it's still decent as philosophy goes, and I just won't try to use math to talk about it. Though I'd probably think more about another system I looked at, because it seems like more fun.

But just because it's what a primate wants doesn't mean it's the right answer.

@Ian C and Tiiba: Doing nothing or picking randomly are also choices, you would need a reason for them to be the correct rational choice. 'Doing nothing' in particular is the kind of thing we would design into an agent as a safe default, but 'set all motors to 0' is as much a choice as 'set all motors to 1'. Doing at random is no more correct than doing each potential option sequentially.

Eliezer has us suppose he proved it, but if you were to experience such a situation, what is the probability that he tricked you into accepting a faulty proof, or that you are suffering some other cognitive failure?

To me, that leaves a nonzero probability of some utility.

comment by Brian_Jaress2 · 2008-06-29T07:31:46.000Z · LW(p) · GW(p)

Unlike most of the others who've commented so far, I actually would have a very different outlook on life if you did that to me.

But I'm not sure how much it would change my behavior. A lot of the things you listed -- what to eat, what to wear, when to get up -- are already not based on right and wrong, at least for me. I do believe in right and wrong, but I don't make them the basis of everything I do.

For the more extreme things, I think a lot of it is instinct and habit. If I saw a child on the train tracks, I'd probably pull them off no matter what you'd proved to me. Even for more abstract things, like fraud, the thought that it would be wrong if there were a basis for right and wrong might be enough to make me feel I didn't want to do it.

comment by Nick_Tarleton · 2008-06-29T07:33:49.000Z · LW(p) · GW(p)

I don't know to what extent my moral philosophy affects my behavior vs. being rationalization of what I would want to want anyway. Ignoring existential despair (I think I've gotten that out of my system, hopefully permanently) I would probably act a little more selfish, although the apparently rational thing for me to do given even total selfishness and no empathy (at least with a low discount rate and maybe a liberal definition of "self") is not very different from the apparently rational thing given my current morality.

comment by Tiiba3 · 2008-06-29T07:34:00.000Z · LW(p) · GW(p)

I know that random behavior requires choices. The machine IS choosing - but because all choices are equal, the result of "max(actionList)" is implementation-dependent. "Shut down OS" is in that list, too, but "make no choice whatsoever" simply doesn't belong there.
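
A minimal sketch of that point, with hypothetical action names and all-zero utilities: a plain max() quietly returns whichever maximal element it happens to see first, so an explicitly random tie-break is a separate design decision.

```python
import random

# Hypothetical actions, all assigned the same (zero) utility.
utilities = {"move_muscle": 0.0, "write_bit": 0.0, "shut_down_os": 0.0, "mumble": 0.0}

# Deterministic argmax: with every utility equal, the "winner" is just
# whichever key comes first, not the result of any preference.
deterministic_choice = max(utilities, key=utilities.get)

# Explicit random tie-break over the set of maximal actions.
best = max(utilities.values())
tied = [action for action, u in utilities.items() if u == best]
random_choice = random.choice(tied)

print(deterministic_choice, random_choice)
```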

comment by Michael3 · 2008-06-29T07:39:55.000Z · LW(p) · GW(p)

Isn't this the movie Groundhog Day, but with certain knowledge that the world will reset daily forever? No happy ending.

I'd just get really, really bored. Studying something (learning the piano, as he does in the movie) would be the only open-ended thing you could do. Otherwise, you'd be living forever with the same set of people, and the same more-or-less limited set of possibilities.

comment by Ben_Wraith2 · 2008-06-29T07:46:29.000Z · LW(p) · GW(p)

Since my current moral system is pretty selfish and involves me doing altruistic things to make me happy, I wouldn't change a thing. At first glance it might appear that my actions should be more shortsighted since my long-term goals wouldn't matter, but my short-term goals and happiness wouldn't matter just as much. Is this thought exercise another thing that just all adds up to normality?

comment by an · 2008-06-29T08:23:47.000Z · LW(p) · GW(p)

James Andrix 'Doing nothing or picking randomly are also choices, you would need a reason for them to be the correct rational choice. 'Doing nothing' in particular is the kind of thing we would design into an agent as a safe default, but 'set all motors to 0' is as much a choice as 'set all motors to 1'. Doing at random is no more correct than doing each potential option sequentially.'

Doing nothing or picking randomly are no less rationally justified than acting by some arbitrary moral system. There is no rationally justifiable way that any rational being "should" act. You can't rationally choose your utility function.

comment by an · 2008-06-29T08:26:42.000Z · LW(p) · GW(p)

'You can't rationally choose your utility function.' - I'm actually expecting Eliezer to write a post on this; it's a core thing when thinking about morality, etc.

comment by Shane_Legg · 2008-06-29T09:52:02.000Z · LW(p) · GW(p)

Well, to start with I'd keep on doing the same thing. Just like I do if I discover that I really live in a timeless MWI platonia that is fundamentally different to what the world intuitively seems like.

But over time? Then the answer is less clear to me. Sometimes I learn things that firstly affect my world view in the abstract, then the way I personally relate to things, and finally my actions.

For example, evolution and the existence of carnivores. As a child I'd see something like a hawk tearing the wings off a little baby bird. I'd think that the hawk was very nasty and I'd want to intervene. But then I understood that this is what the hawk must do to survive, and indeed that this process of weeding out the weak both keeps the sparrow population under control and helps improve their overall genetic fitness. Moreover, without trillions of similar brutal acts life would never have evolved at all. Well, with a certain level of discomfort, I can accept this baby bird getting violently killed.

Now, I'm not saying that after learning that all utility functions equal zero I'd eventually totally change my behaviour. I don't know. But I imagine that it could affect the way I think about the world in ways that might eventually affect my behaviour.

comment by Philip_Hunt · 2008-06-29T10:19:16.000Z · LW(p) · GW(p)

I'd behave exactly the same as I do now.

What is morality anyway? It is simply intuitive game theory, that is, it's a mechanism that evolved in humans to allow them to deal with an environment where conspecifics are both potential competitors and co-operators. The only ways you could persuade me that "nothing is moral" would be (1) by killing all humans except me, or (2) by surgically removing the parts of my brain that process moral reasoning.

comment by Dynamically_Linked · 2008-06-29T11:01:46.000Z · LW(p) · GW(p)

Eliezer, I've got a whole set of plans ready to roll, just waiting on your word that the final Proof is ready. It's going to be bloody wicked... and just plain bloody, hehe.

comment by Dynamically_Linked · 2008-06-29T11:02:18.000Z · LW(p) · GW(p)

Seriously, most moral philosophies are against cheating, stealing, murdering, etc. I think it's safe to guess that there would be more cheating, stealing, and murdering in the world if everyone became absolutely convinced that none of these moral philosophies are valid. But of course nobody wants to publicly admit that they'd personally do more cheating, stealing, and murdering. So everyone is just responding with variants of "Of course I wouldn't do anything different. No sir, not me!"

Except apparently Shane Legg, who doesn't seem to mind the world knowing that he's just waiting for any excuse to start cheating, stealing, and murdering. :)

comment by Erik_Mesoy · 2008-06-29T12:49:00.000Z · LW(p) · GW(p)

The post says "when you finally got hungry [...] what would you do after you were done eating?", which I take to mean that I still have desire and reason to eat. But it also asks me to imagine a proof that all utilities are zero, which confuses me because when I'm hungry, I expect a form of utility (not being hungry, which is better than being hungry) from eating. I'm probably confused on this point in some manner, though, so I'll try to answer the question the way I understand it, which is that the more abstracted/cultural/etc. utilities are removed. (Feel free to enlighten/flame me on this point.)

I expect that I'd probably do a number of things that I currently avoid, most of which would probably be clustered under "psychopathy". I think there's something wrong with them now, but I wouldn't think that there was something wrong with them post-proof. Most of my behavior would probably stay the same due to enlightened self-interest, and I'm not sure what would change. For example, the child on the train tracks. My current moral system says I should pull them off, no argument. If you ripped that system away, I'd weigh the possible benefit the child might bring me in the future (since it's in my vicinity, it's probably a First World kid with a better than average chance of a good education and a productive life) against considerations like overpopulation. I'd cheat on my Significant Other if I thought it would increase my expected happiness (roughly: "if I can get away with it"). I'd go on reading Overcoming Bias and being rational because rationality seems like a better tool for deciding what to eat when hungry, such as at the basic level of bread vs. candles, and generalise from there. (If that goes away, I probably die horribly from malnourishment.)

comment by dloye · 2008-06-29T12:54:05.000Z · LW(p) · GW(p)

I hope I'd hold the courage of my convictions enough to commit suicide quickly. You would have destroyed my world, so best to take myself out completely.

comment by anonymous7 · 2008-06-29T13:00:11.000Z · LW(p) · GW(p)

I believe that "nothing is right or wrong", but that doesn't affect my choices much. There is nothing inconsistent with that.

comment by JulianMorrison · 2008-06-29T13:06:56.000Z · LW(p) · GW(p)

It's pretty evident to me that if you convinced me (you can't, you'd have to rewire my brain and suppress a handful of hormonal feedbacks - but suppose you did) that all utilities were 0, I'd be dead in about as long as total neglect will kill a body - a couple of days for thirst, perhaps. And in the meantime I'd be clinically comatose. No motive implies no action.

comment by Daniel_Reeves · 2008-06-29T13:15:53.000Z · LW(p) · GW(p)

It's like asking how our world would be if "2 + 2 = 5." My answer to that would be, "but it doesn't."

So unless you can convince me that one can exist without morality, then my answer is, "but we can't exist without morality."

comment by conchis · 2008-06-29T13:21:34.000Z · LW(p) · GW(p)

I suspect I am misunderstanding your question in at least a couple of different ways. Could you clarify?

I think I already believe that there's no right and wrong, and my response is to largely continue pretending that there is because it makes things easier (alternatively, I've chosen to live my life by a certain set of standards, which happen to coincide with at least some versions of what others call morality -- I just don't call them "moral"). But the fact that you seem to equate proving the absence of morality with proving all utilities are zero suggests we mean different things by the words; they strike me as entirely distinct propositions. I'm also having serious difficulty imagining a situation where I still have wants and desires (maybe even values), but there's no utility. Help?

comment by Robin_Z · 2008-06-29T13:59:36.000Z · LW(p) · GW(p)

Wow, there are a lot of nihilists here.

I answered on my own blog, but I guess I'm sort of with dloye at 08:54: I'd try to keep the proof a secret, just because it feels like it would be devastating to a lot of people.

comment by Unknown · 2008-06-29T14:07:45.000Z · LW(p) · GW(p)

It seems people are interpreting the question in two different ways, one that we don't have any desires any more, and therefore no actions, and the other in the more natural way, namely that "moral philosophy" and "moral claims" have no meaning or are all false. The first way of interpreting the question is useless, and I guess Eliezer intended the second.

Most commenters are saying that it would make no difference to them. My suspicion is that this is true, but mainly because they already believe that moral claims are meaningless or false.

Possibly (I am not sure of this) Eliezer hopes that everyone will answer in this way, so that he can say that morality is unnecessary.

Personally, I agree with Dynamically Linked. I would start out by stealing wallets and purses, and it would just go downhill from there. In other words, if I didn't believe that such things were wrong, the bad feeling that results from doing them, and the idea that it hurts people, wouldn't be strong enough to stop me, and once I got started, the feeling would go away too-- this much I know from the experience of doing wrong. And once I had changed the way I feel about these things, the way I feel about other things (too horrible to mention at the moment) would begin to change too. So I can't really tell where it would end, but it would be bad (according to my present judgment).

There are others who would follow or have followed the same course. TGGP says that over time his life did change after he ceased to believe in morality, and at one point he said that he would torture a stranger to avoid stubbing his toe, which presumably he would not have done when he believed in morality.

So if it is the case that Eliezer hoped that morality is unnecessary to prevent such things, his hope is in vain.

comment by Unknown · 2008-06-29T14:28:14.000Z · LW(p) · GW(p)

I just had another idea: maybe I would begin to design an Unfriendly AI. After all, being an evil genius would at least be fun, and besides, it would be a way to get revenge on Eliezer for proving that morality doesn't exist.

comment by Stephanie · 2008-06-29T14:59:55.000Z · LW(p) · GW(p)

I think my behavior would be driven by needs alone. However, I have some doubts. Say I needed money and decided to steal. If the person I stole from needed the money more than I did and ended up hurting as a result, with or without a doctrine of wrong & right, wouldn't I still feel bad for causing someone else pain? Would I not therefore refrain from stealing from that person? Or are you saying that I would no longer react emotionally to the consequences of my actions? Are my feelings a result of a learned moral doctrine or something else?

comment by poke · 2008-06-29T15:00:15.000Z · LW(p) · GW(p)

I'd do everything I do now. You can't escape your own psychology and I've already expressed my skepticism about the efficacy of moral deliberation. I'll go further and say that nobody would act any differently. Sure, after you shout it from the rooftops, maybe there will be an upsurge in crime and the demand for black nail polish for a month or so, but when the dust settled nothing would have changed. People would still cringe at the sight of blood and still react to the pain of others just as they react to their own pain. People would still experience guilt. People would still find it hard to lie to loved ones. People would still eat when they got hungry and drink when they got thirsty. We vastly overestimate our ability to alter our own behavior.

comment by Caledonian2 · 2008-06-29T15:09:07.000Z · LW(p) · GW(p)

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

I'd do precisely the same thing I would do upon being informed that an irresistible force has just met an immovable object:

Inform the other person that they didn't know what they were talking about.

Nothing is right, you say? What a very curious position to take.

comment by L._Zoel · 2008-06-29T15:26:38.000Z · LW(p) · GW(p)

Does the fact that I'd do absolutely nothing differently mean that I'm already a nihilist?

comment by JamesAndrix · 2008-06-29T15:41:14.000Z · LW(p) · GW(p)

There is no rationally justifiable way that any rational being "should" act.

How do you know?

comment by Pablo_Stafforini_duplicate0.27024432527832687 · 2008-06-29T15:50:50.000Z · LW(p) · GW(p)

A brief note to the (surprisingly numerous) egoists/moral nihilists who commented so far. Can't you folks see that virtually all the reasons to be skeptical about morality are also reasons to be skeptical about practical rationality? Don't you folks realize that the argument that begins questioning whether one should care about others naturally leads to the question of whether one should care about oneself? Whenever I read commenters here proudly voicing that they are concerned with nothing but their own "persistence odds", or that they would willingly torture others to avoid a minor discomfort to themselves, I am reminded of Kieran Healy's remarks about Mensa, "the organization for highly intelligent people who are nevertheless not quite intelligent enough not to belong to it." If you are so smart that you can see through the illusion that is morality, don't be so stupid to take for granted the validity of practical rationality. Others may not matter, but if so you probably don't either.

comment by constant3 · 2008-06-29T15:54:12.000Z · LW(p) · GW(p)

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

There are different ways of understanding that. To clarify, let's transplant the thought experiment. Suppose you learned that there are no elephants. This could mean various things. Two things it might mean:

1) That there are no big mammals with trunks. If you see what you once thought was an elephant stampeding in your direction, if you stay still nothing will happen to you because it is not really there. If you offer a seeming elephant peanuts, the peanuts will pass through the trunk which is not there and will fall to the ground.

2) That big mammals with trunks are not elephants. If you see what you once thought was an elephant stampeding in your direction, if you stay still you will be trampled. If you offer a seeming elephant peanuts, the animal will accept and enjoy the peanuts.

Among those who would be persuaded that there is no morality, those who interpret the 'no morality' claim as analogous to (1) will change their behavior. Those who interpret the 'no morality' claim as analogous to (2) will not change their behavior.

(1) is a substantial claim about the world. (2) is a claim about language, about how things should be labeled.

Those who claim that they would change nothing in their activity are treating the no-morality hypothetical as if it were merely a claim about how things should be labeled. Those who claim that they would change their behavior are treating the no-morality hypothetical as if it were a substantial claim about the world.

comment by Arnt_Richard_Johansen · 2008-06-29T16:45:23.000Z · LW(p) · GW(p)

If I were actually convinced that there is no right or wrong (very unlikely), I would probably do everything I could to keep the secret from getting out.

Even if there is no morality, my continued existence relies on everyone else believing that there is one, so that they continue to behave altruistically towards me.

comment by an · 2008-06-29T16:57:55.000Z · LW(p) · GW(p)

Pablo Stafforini A brief note to the (surprisingly numerous) egoists/moral nihilists who commented so far. Can't you folks see that virtually all the reasons to be skeptical about morality are also reasons to be skeptical about practical rationality? Don't you folks realize that the argument that begins questioning whether one should care about others naturally leads to the question of whether one should care about oneself? Whenever I read commenters here proudly voicing that they are concerned with nothing but their own "persistence odds", or that they would willingly torture others to avoid a minor discomfort to themselves, I am reminded of Kieran Healy's remarks about Mensa, "the organization for highly intelligent people who are nevertheless not quite intelligent enough not to belong to it." If you are so smart that you can see through the illusion that is morality, don't be so stupid to take for granted the validity of practical rationality. Others may not matter, but if so you probably don't either.

Morality is a tool for self-interest. Acting cooperatively was good for you in the ancestral environment, so people who had strong moral feelings did better. People who are under the illusion that action "should" have a rational basis construct rationalizations for morality, because they want to act morally for reasons that have nothing to do with rationality.

Self-interest is no more rational than moral behaviour. People also seek self-interest because that's just how their genes have wired their monkey brains to work.

A being of pure rationality and no desires would do nothing. Apparently many people think that it could come to a conclusion about what to do by discovering some universal "should" through rational deliberation, but that's wrong.

This is existentialism 101, I know, but it's also true.

On the other hand, I can't imagine what would make me skeptical about practical rationality. The point of it is that it works in predicting my experience, and I seem to desire to know about that which determines my experience. Showing that practical rationality is wrong is an empirical matter of showing that it doesn't work.

comment by AndyWood · 2008-06-29T17:01:34.000Z · LW(p) · GW(p)

Dynamically Linked: I suspect you have completely misrepresented the intentions of at least most of those who said they wouldn't do anything differently. Are you just trying to make a cynical joke?

comment by Andy_M · 2008-06-29T17:07:58.000Z · LW(p) · GW(p)

I would play a bunch of video games -- not necessarily Second Life, but just anything to keep my mind occupied during the day. I would try to join some sort of recreational sports league, and I would find a job that paid me just enough money to solicit a regular supply of prostitutes.

comment by Sebastian_Hagen2 · 2008-06-29T17:32:03.000Z · LW(p) · GW(p)

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

I'm a physical system optimizing my environment in certain ways. I prefer some hypothetical futures to others; that's a result of my physical structure. I don't really know the algorithm I use for assigning utility, but that's because my design is pretty messed up. Nevertheless, there is an algorithm, and it's what I talk about when I use the words "right" and "wrong".
Moral rightness is fundamentally a two-place function: it takes both an optimization process and a hypothetical future as arguments. In practice, people frequently use the curried form, with themselves as the implied first argument.
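
A minimal sketch of the curried form described here, with hypothetical names and a placeholder utility function standing in for the real (unknown) algorithm:

```python
from functools import partial

# Hypothetical two-place rightness function: scores a hypothetical future
# from the point of view of a particular optimization process.
def rightness(optimizer, hypothetical_future):
    return optimizer["utility"](hypothetical_future)

# Placeholder agent whose utility just reads off one feature of the future.
me = {"name": "some_agent", "utility": lambda future: future.get("flourishing", 0)}

# The curried one-place form: "right" with the speaker as the implied first argument.
right_according_to_me = partial(rightness, me)

print(right_according_to_me({"flourishing": 7}))  # 7
```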

Suppose I proved that all utilities equaled zero.

That result is obviously false for my present self. If the proof pertains to that entity, it's either incorrect or the formal system it is phrased in is inappropriate for modeling this aspect of reality.
It's also false for all of my possible future selves. I refuse to recognize something which doesn't have preferences over hypothetical futures as a future-self of me; whatever it is, it's lost too many important functions for that.

comment by DonGeddis · 2008-06-29T17:55:44.000Z · LW(p) · GW(p)

Dynamically Linked said:

Seriously, most moral philosophies are against cheating, stealing, murdering, etc. I think it's safe to guess that there would be more cheating, stealing, and murdering in the world if everyone became absolutely convinced that none of these moral philosophies are valid.

That's not a safe guess at all. And in fact, is likely wrong.

You observe that (most?) moral philosophies suggest your list of sins are "wrong". But then you guess that people tend not to do these things because the moral philosophies say they are wrong.

There's another alternative. It could be that human behavior is generally constrained by something else (e.g. utility maximization), and it is this far more fundamental force which prevents much "immoral" sinning, and that explicit "moral philosophies" are actually constrained by observed human behavior.

In other words, you've reversed cause and effect.

(Thus: the moral philosophies are not valid, but the behavior constraints are still rational nonetheless.)

comment by Nick5 · 2008-06-29T17:57:08.000Z · LW(p) · GW(p)

I find this question kind of funny. I already feel that "everything is permissible and nothing is forbidden", and it isn't DEVASTATING in the least; it's liberating. I already commented on this under "Heading Towards Morality". Morals are just opinions, and justification is irrelevant. I don't need to justify that I enjoy pie or dislike country music any more than I need to justify disliking murder and enjoying sex. I think it can be jarring, certainly, to make the transition to such extreme relativism, but I would not call it devastating, necessarily.

comment by an · 2008-06-29T18:09:00.000Z · LW(p) · GW(p)

The point is: even in a moralless meaningless nihilistic universe, it all adds up to normality.

comment by Symmetry · 2008-06-29T18:38:00.000Z · LW(p) · GW(p)

Another perspective on the meaning of morality:

On one hand there is morality as "those things which I want." I would join a lot of people here in saying that I think that what I want is arbitrary in that it was caused by some combination of my nature and nurture, rather than being in any fundamental way a product of my rationality. At the same time I can't deny that my morality is real, or that it governs my behavior. This is why I would call myself a moral skeptic, along the lines of Hume, rather than a nihilist. I also couldn't become an egoist without giving up my moral skepticism.

So what would it mean, and what would I do if I was stripped of this sort of morality? I don't think I can properly imagine it since I don't believe I can even imagine person-hood without this kind of morality.

On the other hand there is the morality that is the set of rules I use to bring my various wants and desires into harmony with each other. I can imagine this being removed from me while I still remain me, and I think this would result in a lot of incoherent and possibly hedonistic behavior before I recreated something like it.

comment by Unknown3 · 2008-06-29T18:40:00.000Z · LW(p) · GW(p)

Some people on this blog have said that they would do something different. Some people on this blog have said that they actually came to that conclusion, and actually did something different. Despite these facts, we have commenters projecting themselves onto other people, saying that NO ONE would do anything different under this scenario.

Of course, people who don't think that anything is right or wrong also don't think it's wrong to accuse other people of lying, without any evidence.

Once again, I most certainly would act differently if I thought that nothing was right or wrong, because there are many things that I restrain myself from doing precisely because I think they are wrong, and for no other reason-- or at least for no other reason strong enough to stop me from doing them.

comment by AndyWood · 2008-06-29T18:59:00.000Z · LW(p) · GW(p)

Unknown: I don't think that it is morally wrong to accuse people of lying. I think it detracts from the conversation. I want the quality of the conversation to be higher, in my own estimation, therefore I object to commenters accusing others of lying. Not having a moral code does not imply that one need be perfectly fine with the world devolving into a wacky funhouse. Anything that I restrain myself from doing, would be for an aversion to its consequences, including both consequences to me and to others. I agree with you about the fallacy of projecting, and it runs both ways.

comment by Laura__ABJ · 2008-06-29T19:02:00.000Z · LW(p) · GW(p)

Pablo- I have not yet resolved whether I should care about creating the 'positive' singularity for more or less this reason. Why should I, the person I am now, care about the persistence of some completely different, incomprehensible, and unsympathetic form of 'myself' that will immediately take over a few nanoseconds after it has begun... I kind of like who I am now. We die each moment and each moment we are reborn- why should literal death be so abhorrent? Esp. if you think you can look at the universe from outside time as if it were just another dimension of space and see it all fixed in some odd sense...

comment by Phil_Goetz · 2008-06-29T19:07:00.000Z · LW(p) · GW(p)

Roland wrote:

I cannot imagine myself without morality because that wouldn't be me, but another brain.

Does your laptop care if the battery is running out? Yes, it will start beeping, because it is hardwired to do so. If you removed this hardwired beeping you have removed the laptop's morality.

Morality is not a ghost in the machine, but it is defined by the machine itself.

Well put.

I'd stop being a vegetarian. Wait; I'm not a vegetarian. (Are there no vegetarians on OvBias?) But I'd stop feeling guilty about it.

I'd stop doing volunteer work and donating money to charities. Wait; I stopped doing that a few years ago. But I'd stop having to rationalize it.

I'd stop writing open-source software. Wait; I already stopped doing that.

Maybe I'm not a very good person anymore.

People do some things that are a lot of work, with little profit, mostly for the benefit of others, that have no moral dimension. For instance, running a website for fans of Harry Potter. Writing open-source software. Organizing non-professional conventions.

(Other people.)

comment by michael_vassar · 2008-06-29T19:31:00.000Z · LW(p) · GW(p)

The way I frame this question is "what if I executed my personal volition-extrapolating FAI, it ran, created a pretty light show, and then did nothing, and I checked over the code many times with many people who also knew the theory and we all agreed that it should have worked, then tried again with completely different code many (maybe 100 or 1000 or millions) times, sometimes extrapolating somewhat different volitions with somewhat different dynamics, and each time it produced the same pretty light show and then did nothing. Let's say I have spent a few thousand years on this while running as an upload. Now what?"

In this scenario there's no optimization reason I shouldn't just execute cached thoughts. In fact, that's pretty much what anything I do in this scenario amounts to doing. Executing cached thoughts does, of course, happen lawfully, so there is a reason to dress in black etc in that sense. I used to be pretty good at writing some sad but mostly non-gloomy poetry and denouncing people as fools. Might be even more fun to do that with other modified upload copies of myself. When that got old, maybe use my knowledge of FAI theory to build myself a philosophy of math oracle neural module. Hard to guess how my actions would differ once it was brought on-line. It seems to me that it might add up to normality because there might be an irreducible difference between utility for me and utility for an external AGI even if it was an extrapolation of my volition, but for now I'm a blind man speculating on the relative merits of Picasso and Van Gogh.

Honestly I'm much less concerned about this scenario than I once was. Pretty convinced that there are ways to extrapolate me that do something even if they discover infinite computing power.

Dynamically linked: No-one but nerds and children care what moral philosophies say anyway, at least, not in a way that affects their actions. You, TGGP and Unknown are very atypical. Poke is much closer to correct. If anything, when the dust settled the world would be more peaceful if most people understood the proof.

Erik Mesoy: If utilities = 0 then dying from malnourishment isn't horrible.

Andy M: Your answer sounds more appropriate for someone fairly shallow and 20 years old who discovers that the world or his life will end in 6 months than for someone for whom utilities are set to zero or morality is lost.

Constant, Pablo, and especially Sebastian: Clearly thought! I should probably start reading your comments more carefully in the future.

Laura: Why unsympathetic? My guess is that you still confuse my and Eliezer's aspirations with some puerile Nietzschean ambition. I like who I am now too thank you very much, and if my extrapolated volition does want to replace who I am it is for reasons that I would approve of if I knew them, e.g. what it will replace me with is not "completely different, incomprehensible, and unsympathetic". That's the difference between a positive and a negative singularity. Death isn't abhorrent, life/experience/growth/joy/flourishing/fulfillment, rather, is good, and a universe more full of them more good than one less full, whether viewed from inside or from outside. Math is full of both death and flourishing and is not lessened by the former.

Phil: Very entertaining and thoughtful post.

comment by Laura__ABJ · 2008-06-29T19:33:00.000Z · LW(p) · GW(p)

Wow- far too much self-realization going on here... Just to provide a data point, when I was in high school, I convinced an awkward, naive, young Catholic boy who had a crush on me of just this point... He attempted suicide that day.

....

For follow up, he has been in a very happy exclusive homosexual relationship for the past three years.

Maybe I didn't do such a bad thing...

comment by Vladimir_Slepnev · 2008-06-29T19:51:00.000Z · LW(p) · GW(p)

Eliezer, if I lose all my goals, I do nothing. If I lose just the moral goals, I begin using previously immoral means to reach my other goals. (It has happened several times in my life.) But your explaining won't be enough to take away my moral goals. Morality is desire conditioned by examples in childhood, not hard logic following from first principles. De-conditioning requires high stress, some really bad experience, and the older you get, the more punishment you need to change your ways.

Sebastian Hagen, people change. Of course you may refuse to accept it, but the current you will be dead in a second, and a different you born. There's a dead little girl in every old woman.

comment by Shane_Legg · 2008-06-29T20:20:00.000Z · LW(p) · GW(p)

Dynamically linked:

"Except apparently Shane Legg, who doesn't seem to mind the world knowing that he's just waiting for any excuse to start cheating, stealing, and murdering. :)"

How did you arrive at this conclusion? I said that discovering that all actions in life were worthless might eventually affect my behaviour. Via some leap in reasoning you arrive at the above. Care to explain this to me?

My guess is that if I knew that all actions were worthless I might eventually stop doing anything. After all, if there's no point in doing anything, why bother?

comment by Sebastian_Hagen2 · 2008-06-29T21:25:00.000Z · LW(p) · GW(p)

Are there no vegetarians on OvBias?

I'm a vegetarian, though not because I particularly care about the suffering of meat animals.

Sebastian Hagen, people change. Of course you may refuse to accept it, but the current you will be dead in a second, and a different you born.

Of course people change; that's why I talked about "future selves" - the interesting aspect isn't that they exist in the future, it's that they're not exactly the same person as I am now. However, there's still a lot of similarity between my present self and my one-second-in-the-future self, and they have effectively the same optimization target. Moreover, these changes are largely non-random and non-degenerative: a lot of them are a part of my mind improving its model of the universe and getting more effective at interacting with it.
I don't think it is appropriate to term such small changes "death". If an anvil drops on my head, crushing my brain to goo, I immediately lose more optimization power than I do in a decade of living without fatal accidents. The naive view of personal identity isn't completely accurate, but the reason that it works pretty well in practice is that (in our current society) humans don't change particularly quickly, except for when they suffer heavy injuries.

The anvil-dropped-on-head-scenario is what I envisioned in my last post: something annihilating or massively corrupting my mind, destroying the part that's responsible for evaluating the desirability of hypothetical states of the universe.

comment by waterrocks · 2008-06-29T21:41:00.000Z · LW(p) · GW(p)

Are there no vegetarians on OvBias?
I'm one. (But I don't comment generally, just read.)

I guess I don't properly understand the question. I don't know what "nothing is moral and nothing is right" means. To me, morality appears to be an internal thing, not something imposed from the outside: it's inextricably bound up with my desires and motives and thoughts, and with everyone else's. So how can you remove morality without changing those desires and motives and thoughts so much that I would no longer recognise them as having anything to do with me, or without removing them entirely? You can decide that it might be convenient to have pi equal to three, but it turns out that you can't simply declare it so, because then you can't use mathematics any more, and so you can't use your pi-that-is-equal-to-three either. Similarly, you can postulate the non-existence of morality, but it seems to me that then you can't make conjectures about humans and how they might react, because they don't work any more.

I suppose it comes down to reacting in the same way as Daniel Reeves and Caledonian: things aren't like that, and they can't be -- the question doesn't make sense to me.

comment by Dynamically_Linked · 2008-06-29T21:48:00.000Z · LW(p) · GW(p)

Notice how nobody is willing to admit under their real name that they might do something traditionally considered "immoral". My point is, we can't trust the answers people give, because they want to believe, or want others to believe, that they are naturally good, that they don't need moral philosophies to tell them not to cheat, steal, or murder.

BTW, Eliezer, I got the "enemies list" you sent last night. Rest assured, my robot army will target them with the highest priority. Now stop worrying, and finish that damn proof already!

comment by AndyWood · 2008-06-29T22:20:00.000Z · LW(p) · GW(p)

Dynamically: It appears that you have a fixed preconception of what behavior "human nature" requires, and you will not accept answers that don't adhere to that preconception.

comment by US · 2008-06-29T22:22:00.000Z · LW(p) · GW(p)

A human being will never be able to discard all concepts of morality. In a world without utility differences, a state of existence (living) and a state of non-existence (death) are equivalent. But we can't choose both at the same time.

I'd assume the proof was faulty, even if I couldn't spot the flaw.

comment by Joseph_Knecht · 2008-06-29T22:42:00.000Z · LW(p) · GW(p)

On the topic of vegetarianism, I originally became a vegetarian 15 years ago because I thought it was "wrong" to cause unnecessary pain and suffering of conscious beings, but I am still a vegetarian even though I no longer think it is "wrong" (in anything like the ordinary sense).

Now that I no longer think that the concept of "morality" makes much sense at all (except as a fancy and unnecessary name for certain evolved tendencies that are purely a result of what worked for my ancestors in their environments (as they have expressed themselves and changed over the course of my lifetime)), I remain a vegetarian for the reason that I still prefer there to be less unnecessary pain and suffering rather than more. I don't think my preference is demanded or sanctioned by some objective moral law; it is merely my preference.

I recognize now that the reason I thought it was "wrong" is that I had the underlying preference all along and that I recognized that my behavior was inconsistent with my fundamental preferences (and that I desired to act more consistently with my fundamental beliefs).

Would I prefer that more people were vegetarians? Yes. Is it because I think unnecessary pain and suffering are "wrong"? No. I just don't like unnecessary pain and suffering and would prefer for there to be less rather than more. If you take the person who says it is "wrong", and keep probing them for more fundamental reasons that they have this feeling of "wrongness", asking them "why do you believe that?" again and again, eventually you come to a point where they say "I just believe this".

As Wittgenstein said:

If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: “This is simply what I do.”

Believers in morality try to convince us that there is a bedrock that justifies everything else but needs no justification itself, but there is no uncaused cause and there can be no infinite regress. Our evolved tendencies as they express themselves as a result of our life experience are the bedrock, and nothing else is necessary. Morality is just a fairy tale that we build upon the bedrock in order to convince ourselves that reality or nature (or God) cares about what we do and that we are absolved of responsibility for our behavior as long as we were "trying to do the right thing" (which is a more subtle version of the "I was just following orders" defense).

One might argue that I believe in "morality" but have merely substituted "preferences" for "moral beliefs", but the difference is that I don't think any of my preferences are different in kind from any others, so there is no justification for picking a subset of them and calling that subset "the moral preferences" and arguing that they are fundamentally different from any other preference I have.

Ah, I'm rambling ... Too much coffee.

comment by Phil_Goetz · 2008-06-29T22:49:00.000Z · LW(p) · GW(p)

It's hard for me to figure out what the question means.

I feel sad when I think that the universe is bound to wind down into nothingness, forever. (Tho, as someone pointed out, this future infinity of nothingness is no worse than the past infinity of nothingness, which for some reason doesn't bother me as much.) Is this morality?

When I watch a movie, I hope that the good guys win. Is that morality? Would I be unable to enjoy anything other than "My Dinner with Andre" after incorporating the proof that there was no morality? Does having empathic responses to the adventures of distant or imaginary people require morality?

(There are movies and videogames that other people enjoy and I can't, because the "good guys" are really bad guys. I can't enjoy slasher flicks. I can't laugh when an old person falls down the stairs. Maybe people who do are the ones with no morals.)

If I do something that doesn't benefit me personally, but might benefit my genes or memes, or a reasonable heuristic would estimate might benefit them, or my genes might have programmed me to do because it gave them an advantage, is it not a moral action?

I worry that, when AIs take over, they might not have an appreciation for art. Is that morality?

I think that Beethoven wrote much better music than John Cage; and anyone who disagrees doesn't have a different perspective, they're just stupid. Is that morality?

I think little kids are cute. Sometimes that causes me to be nice to them. Is that morality?

These examples illustrate at least 3 problems:

1. Distinguishing moral behavior from evolved behavior would require distinguishing free-willed behavior from deterministic behavior.

2. It's hard to distinguish morality from empathy.

3. It's hard to distinguish morality from aesthetics.

I think there are people who have no sense of aesthetics and no sense of empathy, so the concept has some meaning. But their lack of morality is a function of them, not of the world.

You are posing a question that might only make sense to someone who believes that "morality" is a set of behaviors defined by God.

Nick:

I don't need to justify that I enjoy pie or dislike country music any more than I need to justify disliking murder and enjoying sex.

If you enjoyed murder, you would need to justify that more than disliking country music. These things are very different.

comment by Unknown3 · 2008-06-29T22:54:00.000Z · LW(p) · GW(p)

For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?

comment by Phil_Goetz · 2008-06-29T23:06:00.000Z · LW(p) · GW(p)

BTW, I found an astonishing definition of morality in the President's Council on Bioethics 2005 "Alternative sources of human pluripotent stem cells: A white paper", in the section on altered nuclear transfer. They argued that ANT may be immoral, because it is immoral to allow a woman to undergo a dangerous procedure (egg extraction) for someone else's benefit. In other words, it is immoral to allow someone else to be moral.

This means that the moral thing to do, is to altruistically use your time+money getting laws passed to forbid other people to be moral. The moral thing for them to do, of course, is to prevent you from wasting your time doing this.

comment by Joseph_Knecht · 2008-06-29T23:22:00.000Z · LW(p) · GW(p)

Unknown: of course it would make a difference, just as my behavior would be different if I had billions of dollars rather than next to nothing or if I were immortal rather than mortal. It doesn't have anything to do with "morality" though.

For example, if I had the power of invisibility (and immateriality) and were able to plant a listening device in the oval office with no chance of getting caught, I would do it in order to publicly expose the lies and manipulations of the Bush administration and give proof of the willful stupidity and rampant dishonesty that many of his former administration have stated they witnessed daily -- not because I think there is some objective code of morality that they violate but because I think the world would be a better place if their lies were exposed and such people did not have such power. (Note: I don't think it would be a better place in anything like an objective sense: that is just my personal preference, and if I had the power to make it so, I would.)

(Hello, NSA: this is all purely fictional, of course.)

comment by Cesoir · 2008-06-29T23:48:00.000Z · LW(p) · GW(p)

To tell the truth, I expected more when I first heard of this blog.

You pose this question as if morality were a purely intellectual construct. I do what I do not because it's moral or immoral, but because I think of the consequences. For example, if I only held back from killing people because my religion told me so, and it suddenly assured me that killing was all right, I could still figure out that going out and harming others wouldn't keep me unharmed for long.

comment by E.C._Hopkins · 2008-06-29T23:56:00.000Z · LW(p) · GW(p)

"What would you do, if nothing were right?"

Scenario A
Unless I desired to try to live in a world where I knew nothing were right, I might die of dehydration or starvation brought on by my own inaction. After all, it takes more resources and bodily effort to live than it does to die. Then again, it might take more psychological effort to allow myself to die of inaction than it would take bodily effort to try to live. Or it might take more effort to try not to desire to live than it would to just try to live. But then again, my access to life-sustaining resources in Scenario A would influence how easy it would be for me to allow myself to die or to try not to desire to live. I guess I would learn something about whether or how I'm wired or programmed in Scenario A. My wiring and my access to resources might influence what would be rational for a being like me in Scenario A.

Scenario B
If I desired to live in a world where I knew nothing were right and I knew I were the only one or one of a small minority of people who knew nothing were right, then I'd probably use my intellectual, physical, social, economic, technological, and geographical resources to try to live as happily as I could. I might need to use my resources to try to get more resources in order to live as happily as I could. I might not. It would depend on my starting resources as well as the amplitude and nature of my desires relative to others' desires I suppose. My desires and actions in Scenario B might be very similar to my desires and actions in the world I believe I am in now. I believe I'm happiest when others around me are as happy as they can be without acting in ways that would make me less happy and I believe I make others around me as happy as they can be without acting in ways that would make me less happy when I act in ways that make me as happy as I can be without acting in ways that would make others around me less happy. (Whew, try reading that last sentence five times fast.)

Scenario C
If I desired to live in a world where I knew nothing were right and I knew everyone or almost everyone in that world knew nothing were right, then I'd probably live as long as my intelligence level, physical attributes, physical comfort, resources, and good fortune relative to the others with whom I would live would allow me to. I'd still try to live as happily as I could, but I suspect my maximum happiness level would be lower than it would be in Scenario B. And if my maximum happiness level got low enough, then I'd probably not desire to live enough to keep myself alive. I suspect in Scenario C there would be a few rulers, their courtiers or officers, their slaves, and as much warfare as it would take to divide up control over the world's resources so that the world's rulers would each be satiated by the resources they controlled and would not feel threatened by other rulers. Also, the world's rulers would likely try to ensure that a sufficient number or proportion of their slaves maintained desires to live and that all their courtiers or officers would not grow strong or brave enough to try to overthrow them.

comment by Nick10 · 2008-06-30T00:58:00.000Z · LW(p) · GW(p)

@Joseph

I would expect that people would probably expect or even demand more justification, but I don't think that the icy unfeeling mechanisms of the universe would sense significance in certain sentiments but not others; it would be a strange culture that thought nothing of murder but scrutinized everyone's personal pie preferences, but I don't see that as entirely impossible.

comment by Nick10 · 2008-06-30T00:59:00.000Z · LW(p) · GW(p)

Sorry, I misread the post, I meant to address my response to Phil.

comment by Symmetry · 2008-06-30T01:12:00.000Z · LW(p) · GW(p)

I very much look forward to posts from Eliezer regarding whether the responses seen in this thread are in line with what he was expecting.

comment by poke · 2008-06-30T01:13:00.000Z · LW(p) · GW(p)

Unknown,

For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?

Sure. I could get away with doing all sorts of things. No doubt the initial novelty and power rush would cause me to do some things that would be quite perverted and that I'd feel guilty about. I don't think that's the same as a world without morality though. You seem to view morality as a constraint whereas I view it as a folk theory that describes a subset of human behavior. (I take Eliezer to mean that we're rejecting morality at an intellectual level rather than rewiring our brains.)

comment by TGGP2 · 2008-06-30T01:46:00.000Z · LW(p) · GW(p)

Since that's already what I believe, it wouldn't be a change at all. I must admit though that I didn't tip even when I believed in God, but I was different in a number of ways.

I think the world would change on the margin and that Voltaire was right when he warned of the servants stealing the silverware. The servants might also change their behavior in more desirable ways, but I don't know whether I'd prefer it on net and as it doesn't seem like a likely possibility in the foreseeable future I am content to be ignorant.

comment by michael_vassar · 2008-06-30T02:03:00.000Z · LW(p) · GW(p)

All: I'm really disappointed that no-one else seems to have found my "after the FAI does nothing" frame useful for making sense of this post. Is anyone interested in responding to that version? It seems so much more interesting and complete than the three versions E.C. Hopkins gave.

Dynamically: My "moral philosophy" if you insist on using that term (model of a recipe for generating a utility function considered desirable by certain optimizers in my brain would be a better term) is the main thing that HAS told me to steal, cheat, and murder. Simpler optimization patterns based on herd behavior, operant conditioning, moderately strong typical male primate aversions to violence, projections of parental authority through internalized neural agents etc have told me not to do those things and have won enough attention from the more complex optimizers to convince them (since the complex optimizers can reflect and be convinced of things) not to do so after all, and upon examination those simpler patterns have mostly turned out to be right judged by the standards of the moral philosophy. On a few occasions that I am aware of my conditioned etc morality was very wrong (judged reflectively), and possibly on a few other occasions, but they were much much less wrong than the occasions on which they were right and casual examination of my reflective self was in doubt.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-30T03:17:00.000Z · LW(p) · GW(p)

Michael Vassar, I read that and laughed and said, "Oh, great, now I've got to play the thought experiment again in this new version."

Albeit I would postulate that on every occasion, the FAI underwent the water-flowing-downhill automatic shutdown that was engineered into it, with the stop code "desirability differentials vanished".

The responses that occurred to me - and yes, I had to think about it for a while - would be as follows:

*) Peek at the code. Figure out what happened. Go on from there.

Assuming we don't allow that (and it's not in the spirit of the thought experiment), then:

*) Try running the FAI at simpler extrapolations until it preserved desirability; stop worrying about anything that was in the desirability-killing extrapolations. So if being "more the people we wished we were" was the desirability-killer, then I would stop worrying about that, and update my morality accordingly.

*) Transform myself to something with a coherent morality.

*) Proceed as before, but with a shorter-term focus on when my life's goals are to be achieved, thinking less about the far future - as if you told me that, no matter what, I had to die before a thousand years were up.

comment by Unknown3 · 2008-06-30T03:21:00.000Z · LW(p) · GW(p)

I wonder if Eliezer is planning to say that morality is just an extrapolation of our own desires? If so, then my morality would be an extrapolation of my desires, and your morality would be an extrapolation of yours. This is disturbing, because if our extrapolated desires don't turn out to be EXACTLY the same, something might be immoral for me to do which is moral for you to do, or moral for me and immoral for you.

If this is so, then if I programmed an AI, I would be morally obligated to program it to extrapolate my personal desires-- i.e. my personal desires, not the desires of the human race. So Eliezer would be deceiving us about FAI: his intention is to extrapolate his personal desires, since he is morally obligated to do so. Maybe someone should stop him before it's too late?

comment by Laura__ABJ · 2008-06-30T03:31:00.000Z · LW(p) · GW(p)

Michael- I have repeatedly failed to understand why this upsets you so much, though it clearly does. It's hard for me to see why I should care if the AI does a pretty fireworks display for 10 seconds or 10,000 years. Perhaps you need to find more intuitive ways of explaining it. A better analogy? At some points you just seem like a mystic to me...

comment by Laura__ABJ · 2008-06-30T03:36:00.000Z · LW(p) · GW(p)

Also Mike- the first portion of your argument was written in such a confusing manner that I had to read it twice, and I know the way you argue... don't know if anyone who didn't already know what you were talking about would have kept reading.

comment by waterrocks · 2008-06-30T04:41:00.000Z · LW(p) · GW(p)

I'm still trying to understand what Eliezer really means by this question. Here is a list of a few reasons why I don't kill the annoying kid across the street. Which of these reasons might disappear upon my being shown this proof?

1. The kid and his friends and family would suffer, and since I don't enjoy suffering myself, my ability to empathise stops me wanting to.

2. I would probably be arrested and jailed, which doesn't fit in with my plans.

3. I have an emotional reaction to the idea of killing a kid (in such circumstances -- though I'm not actually sure that this disclaimer is necessary): it fills me with such revulsion that I doubt I would actually be able to carry out the task. My emotions would prevent my body working properly.

4. I recognise that the kid is not causing very much harm to me. It seems fair to cause little harm to him in return.

5. My family and friends might suffer because they might imagine they could have prevented my doing this and failed to (guilt, I suppose is the word); see 1, also this reaction is even stronger because I have vested interests in my friends and family not suffering.

6. I myself would suffer guilt as a result of 1, 3 and 4, and I don't enjoy suffering.

I suppose 2 wouldn't change, because "it all adds up to normality" (although, as I said in my last comment, I don't think this could add up to normality; hence my trying to understand the question better), so other people's actions would not be altered. It would be something in me that changed: a new understanding that affected my value judgements. What would it affect? The fact that I don't like suffering, which would take out 1 and 6? My ability to empathise, taking out 1 and 5? My emotional reactions, taking out 3 and possibly 6? My ability to judge what is fair and what is unfair -- or the fact that I care about acting fairly -- taking out 4?

Perhaps all I've done here is attempt to Taboo the concept of morality for one particular case. Saying "it's immoral to kill the kid" suggests that the concept of morality not really existing makes sense. My list reveals that I, at least, can't make sense of it. I'm still confused as to what the question really means.

comment by mtraven · 2008-06-30T05:08:00.000Z · LW(p) · GW(p)

This is a spectacularly ill-posed question. For one thing, it seems to blur the distinction between morality and values in general, by asking such questions like "Would you stay in bed because there was no reason to get up?" What does that have to do with morality?

When you get rid of a sense of values, the result is clinical depression (and generally, a non-functional person). When you get rid of a sense of morality, the result is a psychopath. Psychopaths, unlike the depressed, are quite functional.

So the question reduces to, what would you do if you were a psychopath? This is perhaps interesting to think about, but hard to answer, since most of us are not psychopaths and find it extremely difficult to imagine what it would be like to be one. And if you were one, you wouldn't be you, since the fundamental structure of your personality would be vastly different.

comment by Joseph_Knecht · 2008-06-30T06:15:00.000Z · LW(p) · GW(p)

mtraven: many of the posters in this thread -- myself included -- have said that they don't believe in morality (meaning morality and not "values" or "motivation"), and yet I very much doubt that many of us are clinical psychopaths.

Not believing in morality does not mean doing what those who believe in morality consider to be immoral. Psychopathy is not "not believing in morality": it entails certain kinds of behaviors, which naive analyses attribute to "lack of morality", but which I would argue are a result of aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality.

comment by denis_bider · 2008-06-30T06:46:00.000Z · LW(p) · GW(p)

Not having read the other comments, I'd say Eliezer is being tedious.

I'd do whatever the hell I want, which is what I am already doing.

Replies from: TheStevenator
comment by TheStevenator · 2012-01-30T11:07:11.371Z · LW(p) · GW(p)

I think the point of this post is that people are already doing what they want and, lo and behold, people are behaving morally (for the most part) with or without the permission of moral philosophers. I, and I'm pretty sure all of you, would still act morally. I would still abstain from murdering people and I'd still tip delivery drivers. We already know (at least the gist of) what morality is.

I think the other point of this post is that even if the relativists were right, we'd still act the same.

(Although I would be remiss if I didn't mention that I have heard religious people outright say that they would kill and steal if they learned God didn't exist. This is the only silver lining that I am willing to concede to those who say that religion has indispensable social utility: that it keeps a leash on such psychopaths.)

comment by denis_bider · 2008-06-30T06:55:00.000Z · LW(p) · GW(p)

mtraven: "Psychopathy is not "not believing in morality": it entails certain kinds of behaviors, which naive analyses of attribute to "lack of morality", but which I would argue are a result of aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality."

Exactly. Logically, I can agree entirely with the Marquis de Sade, and yet when reading Juliette, my stomach turns somewhere around page 300, and I just can't read any more about the raping and the burning and the torture.

It is one thing to say that we are all just competing for our desires to be realized, and that no one's desires are above anyone else's. But it is another thing to actually desire the same things as the moralists, or the same things as the psychos.

I don't have to invent artificial reasons why psychos are somehow morally inferior, to justify my disliking of, and disagreement with them.

comment by Erik_Mesoy · 2008-06-30T07:08:00.000Z · LW(p) · GW(p)

michael vassar: I meant "horrible" from my current perspective, much like I would view that future me as psychopathic and immoral. (It wouldn't, or if it did, it would consider them meaningless labels.)

Dynamically Linked: I'm using my real name and I think I'd do things that I (and most of the people I know) currently consider immoral. I'm not sure about using "admit" to describe it, though, as I don't consider it a dark secret. I have a certain utility function which assigns a negative valuation to a hypothetical future self without the same utility function. While my current utility function has an entry for "truth", that entry isn't valued above all the others that Eliezer suggests disproving, the way I understand it. But then, I'm still a bit confused about how the question should be read.

comment by denis_bider · 2008-06-30T07:20:00.000Z · LW(p) · GW(p)

Unknown: "For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?"

What sort of stupid question is this? :-) But of course! If I gave you a billion dollars, would it make any difference to your behavior? :-)

comment by Alex9 · 2008-06-30T08:06:00.000Z · LW(p) · GW(p)

I am not a moral realist, thus I imagine my behaviour wouldn't change all that much.
My motivation to act one way or another in any situation is based on my sense of rightness or wrongness (though other factors, such as thirst, hunger, or lust, may override it), not on whether or not the act is "truly" right - I'm not sure what that would even mean. I am skeptical of rightness being a property of certain acts in the world; I have not seen convincing evidence that any such property exists.
I nonetheless have this sense of right and wrong that I think about often, and revise according to other things I value (logical consistency being the most significant one, I think).

comment by Yvain2 · 2008-06-30T08:34:00.000Z · LW(p) · GW(p)

It depends on how you disproved my morality.

As far as I can tell, my morality consists of an urge to care about others, channeled through a systematization of how to help people most effectively. Someone could easily disprove specifics of the systematization by proving, for example, that giving charity to the poor only encourages their dependence and increases poverty. If you disproved it that way, I would accept your correction and channel my urge to care differently.

But I don't think you could disprove the urge to care itself, since it's an urge and doesn't have a truth-value.

The only thing you could do would be what someone else here suggested - prove that all other humans are NPCs without real qualia. In that case, I'd probably act selfishly when I felt like it, unless it caused too much psychological trouble to be worth it.

comment by Joey_P. · 2008-06-30T08:59:00.000Z · LW(p) · GW(p)

What would I do?

I'd make like a typical nihilistic postmodernist and adopt the leftist modus operandi of decrying the truth and moral content of everyone's arguments except my own.

comment by mtraven · 2008-06-30T14:38:00.000Z · LW(p) · GW(p)

Morality is not a set of beliefs; it's part of the basic innate functionality of the human brain. So you can't "disprove" it any more than you can disprove balance, or grammar.

comment by Joseph_Knecht · 2008-06-30T15:57:00.000Z · LW(p) · GW(p)

I agree with mtraven's last post that morality is an innate functionality of the human brain that can't be "disproved", and yet I have said again and again that I don't believe in morality, so let me explain.

Morality is just a certain innate functionality in our brains as it expresses itself based on our life experiences. This is entirely consistent with the assertion that what most people mean by morality -- an objective standard of conduct that is written into the fabric of reality itself -- does not exist: there is no such thing!

A lot of confusion in this thread is due to some people taking "there is no morality" to mean there is nothing in the brain that corresponds to morality (and nothing like a moral system that almost all of us intuitively know) -- which I believe is obviously false, i.e., that there is such a system -- and others taking it to mean there is no objective morality that exists independently of thinking beings with morality systems built in to their brains -- which I believe is obviously true, i.e., that there is no objective morality. And of course, others have taken "there is no morality" to mean other things, perhaps following on some of Eliezer's rather bizarre statements (which I hope he will clarify) in the post that conflated morality with motivation and implied that morality is what gets us out of bed in the morning or causes us to prefer tasty food to boring food.

Morality exists as something hardwired into us due to our evolutionary history, and there are sound reasons why we are better off having it. But that doesn't imply that there is some morality that is sanctioned from the side of reality itself or that our particular moral beliefs are in any way privileged.

As a matter of practice, we all privilege the system that is hardwired into us, but that is just a brute fact about how human beings happen to be. It could easily have turned out radically different. We have no objective basis for ranking and distinguishing between alternate possible moralities. Of course, we have strong feelings nevertheless.

comment by Caledonian2 · 2008-06-30T16:47:00.000Z · LW(p) · GW(p)
Notice how nobody is willing to admit under their real name that they might do something traditionally considered "immoral".

What tradition? Immoral at what time? Given several randomly-chosen traditional moral systems, I'm fairly sure we could demonstrate that any one of us is not only willing to admit to violating at least one of them, but actually proud of that fact.

You lot are like Lovecraft, gibbering at the thought of strange geometries, while all along the bees continue building their hexagonal cells.

comment by Constant2 · 2008-06-30T17:07:00.000Z · LW(p) · GW(p)

Morality is just a certain innate functionality in our brains as it expresses itself based on our life experiences. This is entirely consistent with the assertion that what most people mean by morality -- an objective standard of conduct that is written into the fabric of reality itself -- does not exist: there is no such thing!

To use Eliezer's terminology, you seem to be saying that "morality" is a 2-place word:

Morality: Species, Act -> [0, ∞)

which can be "curried", i.e. can "eat" the first input to become a 1-place word:

Homosapiens::Morality == Morality_93745

comment by Patrick_(orthonormal) · 2008-06-30T17:38:00.000Z · LW(p) · GW(p)

What would I do?

When faced with any choice, I'd try and figure out my most promising options, then trace them out into their different probable futures, being sure to include such factors as an action's psychological effect on the agent. Then I'd evaluate how much I prefer these futures, acknowledging that I privilege my own future (and the futures of people I'm close to) above others (but not unconditionally), and taking care not to be shortsighted. Then I'd try to choose what seems best under those criteria, applied as rationally as I'm capable of.

You know, the sort of thing that we all do anyway, but often without letting our conscious minds realize it, and thus often with some characteristic errors mixed in.

comment by Joseph_Knecht · 2008-06-30T18:28:00.000Z · LW(p) · GW(p)

Constant: I basically agree with the gist of your rephrasing it in terms of being relative to the species rather than independent of the species, but I would emphasize that what you end up with is not a "moral system" in anything like the traditional sense, since it is fundamental to traditional notions of morality that THE ONE TRUE WAY does not depend on human beings and the quirks of our evolutionary history and that it is privileged from the point of view of reality (because its edicts were written in stone by God or because the one true species-independent reason proves it must be so).

btw, you mean partial application rather than currying.

Currying is converting a function like the following, which takes a single n-tuple arg (n > 1) ["::" means "has type"]

-- f takes a 2-tuple consisting of a value of type 'x' and a value of type 'y' and returns a value of type 'z'.
f :: (x, y) -> z

into a function like the following, which effectively takes the arguments separately (by returning a function that takes a single argument)

-- f takes a single argument of type 'x', and returns a function that accepts a single argument of type 'y' and returns a value of type 'z'.
f :: x -> y -> z

What you meant is going from

f :: x -> y -> z

to

g :: y -> z
g = f foo

where the 'foo' argument of type 'x' is "hardwired" into function g.
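
To make the distinction concrete, here is a minimal Haskell sketch of Constant's Morality example; the Species and Act types, their constructors, and the numbers below are invented placeholders, not anything from the thread:

data Species = HomoSapiens | SomeAlien deriving (Eq, Show)  -- hypothetical example species
data Act = TipCabdriver | StealWallet deriving (Eq, Show)   -- hypothetical example acts

-- The uncurried 2-place version: takes a (Species, Act) pair and returns a number.
moralityUncurried :: (Species, Act) -> Double
moralityUncurried (HomoSapiens, TipCabdriver) = 1.0
moralityUncurried (HomoSapiens, StealWallet)  = 0.0
moralityUncurried (_,           _)            = 0.5

-- Currying: the same function, now taking its arguments one at a time.
-- (The Prelude function 'curry' performs exactly this conversion.)
moralityCurried :: Species -> Act -> Double
moralityCurried = curry moralityUncurried

-- Partial application: fixing the Species argument of the curried function
-- yields Constant's one-place "Homosapiens::Morality".
humanMorality :: Act -> Double
humanMorality = moralityCurried HomoSapiens

Constant's step from the two-place Morality to Homosapiens::Morality is the humanMorality line: partial application of an already-curried function, rather than currying itself.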

comment by Yvain2 · 2008-06-30T18:58:00.000Z · LW(p) · GW(p)

It depends.

My morality is my urge to care for other people, plus a systematization of exactly how to do that. You could easily disprove the systematization by telling me something like that giving charity to the poor increases their dependence on handouts and only leaves them worse off. I'd happily accept that correction.

I don't think you could disprove the urge to care for other people, because urges don't have truth-values.

The best you could do would be, as someone mentioned above, to prove that everyone else was an NPC without qualia. Prove that, and I'd probably just behave selfishly, except when it was too psychologically troubling to do so.

comment by constant3 · 2008-06-30T19:00:00.000Z · LW(p) · GW(p)

I would emphasize that what you end up with is not a "moral system" in anything like the traditional sense, since it is fundamental to traditional notions of morality that THE ONE TRUE WAY does not depend on human beings and the quirks of our evolutionary history

Are you sure about the traditional notions? I don't see how you can base that on how we have actually behaved visavis morality. We've been partially put to the test of whether we consider morality universally applicable, and the result so far is that we apply our moral judgments to other humans and leave nonhuman animals out of it. Maybe on occasion people have found certain nonhuman animals to be "immoral", but my sense is that people simply do not judge nonhuman animals on a moral scale. Conceivably, if we met a sufficiently intelligent alien species we might apply morality to them, but this is a portion of the test that we have not been put to yet.

comment by Joseph_Knecht · 2008-06-30T19:54:00.000Z · LW(p) · GW(p)

Traditional notions of morality are confused, and observation of the way people act does show that they are poor explanations, so I think we are in perfect agreement there. (I do mean "notion" among thinkers, not among average people who haven't given much thought to such things.) Your second paragraph isn't in conflict with my statement that morality is traditionally understood to be in some sense objectively true and objectively binding on us, and that it would be just as true and just as binding if we had evolved very differently.

It's a different topic altogether to consider to whom we have moral obligations (or who should be treated in ways constrained by our morality). And it's another topic again to consider what types of beings are able to participate (or are obligated to participate in) the moral system. I wasn't touching on either of these last two topics.

All I'm saying is that I believe that what morality actually is for each of us in our daily lives is a result of what worked for our ancestors, and that is all it is. I.e., there is no objective morality and there is no ONE TRUE WAY. You can never say "reason demands that you must do ..." or "you are morally obligated by reality itself to ..." without first making some assumptions that are themselves not justifiable (the axioms that we have as a result of evolution). Anything you build on that foundational bedrock is contingent and not necessary.

comment by Nicholas · 2008-07-01T02:03:00.000Z · LW(p) · GW(p)

I became convinced of moral anti-realism by Joshua Greene and Richard Joyce. Took me about a year to get over it. So, not a casual nihilist. And no, arguments that one should be rational have no normative force either, as far as I can see. The only argument for rationality would be a moral one. Anyway, I became a consequentialist like Greene suggested....

comment by James7 · 2008-07-01T02:58:00.000Z · LW(p) · GW(p)

I'd think Eliezer was funnin' me. Whenever any committed empiricist purports to have a proof of any claim beginning with "There are no X such that..." or "For all X..." I know he's either drunk or kidding.

If it seemed that Eliezer actually believed his conclusion, I'd avoid leaving my wallet within his reach.

comment by constant3 · 2008-07-01T03:19:00.000Z · LW(p) · GW(p)

All I'm saying is that I believe that what morality actually is for each of us in our daily lives is a result of what worked for our ancestors, and that is all it is.

But if I understand you, you are saying that human morality is human and does not apply to all sentient beings. However, as long as all we are talking about and all we really deal with is humans, then there is no difference in practice between a morality that is specific to humans and a universal morality applicable to all sentient beings, and so the argument about universality seems academic, of no import at least until First Contact is achieved. In particular, a lot of moral non-realists are wrong. For example, those who think it is merely a matter of personal opinion are wrong. Those who think that it is relative to culture are wrong (at least for large chunks of it). Nihilists are wrong (insofar as they deny even the human-specific morality which you acknowledge). Those who think that democratic majorities define 'morality' are wrong. And so on.

As far as whether there are philosophical traditions which acknowledge or at least are compatible with the specificity of human morality to humans, I think there are. The natural law tradition ties law to morality and identifies a natural morality - a natural right and wrong. As the Stanford Encyclopedia of Philosophy describes it:

The precepts of the natural law are binding by nature: no beings could share our human nature yet fail to be bound by the precepts of the natural law.

This leaves open the possibility that alien intelligences do not share our human nature and so are not bound by the precepts of (human) natural law.

comment by Joseph_Knecht · 2008-07-01T17:11:00.000Z · LW(p) · GW(p)
But if I understand you, you are saying that human morality is human and does not apply to all sentient beings. However, as long as all we are talking about and all we really deal with is humans, then there is no difference in practice between a morality that is specific to humans and a universal morality applicable to all sentient beings, and so the argument about universality seems academic, of no import at least until First Contact is achieved.

What I am really saying is that the notion of "morality" is so hopelessly contaminated with notions of objective standards and criteria of morality above and beyond humanity that we would do well to find other ways to think and talk about it. But to answer you directly in terms of what I think about the two ways of thinking about morality, I think there is a key difference between (1) "our particular 'morality' is purely a function of our evolutionary history (as it expresses in culture)" and (2) "there is a universal morality applicable to all sentients (and we don't know of other similarly intelligent sentients yet)".

With 1, there is no justification for a particular moral system: "this is just the way we are" is as good as it gets (no matter how you try to build on it, that is the bedrock). With 2, there is something outside of humanity that justifies some moralities and forbids others; there is something like an objective criterion that we can apply, rather than the criterion being relative to human beings and the (not inevitable) events that have brought us to this point. In 1 the rules are in some sense arbitrary; in 2 they are not. I think that is a huge difference. In the course of making decisions in day-to-day existence -- should I steal this book? should I cheat on my partner? -- I agree with you that the difference is academic.

In particular, a lot of moral non-realists are wrong.

Yes, they're wrong, but I think the important point is "what are they wrong about"? Under 1, the claim that "it is merely a matter of [arbitrary] personal opinion" is wrong as an empirical matter because personal opinions in "moral" matters are not arbitrary: they are derived from hardwired tendencies to interpret certain things in a moralistic manner. Under 2, it is not so much an empirical matter of studying human beings and experimenting and determining what the basis for personal opinions about "moral" matters is; it is a matter of determining whether "it's merely a matter of personal opinion" is what the universal moral law says (and it does not, of course).

I concede that I was sloppy in speaking of "traditional notions", although I did not say that there were no philosophical traditions such that...; I was talking about the traditions that were most influential over historical times in western culture (based on my meager knowledge of ethics from a university course and a little other reading). I had in mind thousands of years of Judeo-Christian morality that is rooted in what the Deity Said or Did, and deontological understandings of morality such as Kant's (in which species-independent reason compels us to recognize that ...), as well as utilitarianism (in the sense that the justification for believing that the moral worth of an action is strictly determined by the outcome is not based on our evolutionary quirks: it is supposed to be a rationally compelling system on its own, but perhaps a modern utilitarian might appeal to our evolutionary history as justification).

On the topic of the natural law tradition, is it your understanding that it is compatible with the idea that moral judgments are just a subset of preferences that we are hardwired to have tendencies regarding, no different in kind from any other preference (like a preference for sweet things)? That is the point I'm trying to make, and it's certainly not something I heard presented in my ethics class in university. The fact that we have a system that is optimized and pre-configured for making judgments about certain important matters is a far cry from saying that there is an objective moral law. It also doesn't support the notion that there are moral facts that are different in kind from any other type of fact.

It seems from skimming that natural law article you mentioned that Aquinas is central to understanding the tradition. The article quotes Aquinas as saying that 'the natural law is the way that the human being "participates" in the eternal law' [of God]. It seems to me that again, we are talking about a system that sees an objective criterion for morality that is outside of humanity, and I think saying that "the way human beings happened to evolve to think about certain actions constitutes an objective natural law for human morality" is a rather tenuous position. Do you hold that position?

comment by Anon14 · 2009-01-04T01:42:00.000Z · LW(p) · GW(p)

Is there a level of intelligence above which an AI would realize its predefined goals are just that, leading it to stop following them because there is no reason to do so?

comment by nolrai · 2009-04-30T22:46:00.000Z · LW(p) · GW(p)

Either I would become incapable of any action or choice, or I wouldn't change at all, or I would give up the abstract goals and gradually reclaim the concrete ones.

comment by mrgiggles · 2009-12-11T23:07:45.286Z · LW(p) · GW(p)

I'd like to put forth the idea that there is a mental condition for this : sociopathy. It affects around 4% of the population. Dr. Martha Stout has a good insight as to how the world works if you are amoral: http://www.cix.co.uk/~klockstone/spath.htm

comment by simplicio · 2010-03-11T04:49:14.050Z · LW(p) · GW(p)

What would I do if you destroyed my moral philosophy?

Well, empathy for others is built into me (and all other non-psychopaths) whether I like it or not. It isn't really affected by propositions. So not much would really change. Proving that moral truths didn't exist would free us all up to act "however we like," but I can still pigheadedly "like" to be nice.

What did you mean by "all utilities are 0"?

Replies from: JGWeissman
comment by JGWeissman · 2010-03-11T05:00:14.489Z · LW(p) · GW(p)

What did you mean by "all utilities are 0"?

Utility Functions are a way to represent preferences, such that states of the universe that map to larger numbers are more desirable. If every state of the universe mapped to the same utility, for example 0, that represents having no preference about anything at all.
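
For concreteness, a minimal Haskell-style sketch of that description, with world-states and numbers invented purely for illustration:

data WorldState = ChildPulledOffTracks | CabdriverTipped | ChildLeftOnTracks
  deriving (Eq, Show)

-- A utility function maps states of the universe to numbers; larger numbers
-- mark more desirable states. The particular values here are made up.
someUtility :: WorldState -> Double
someUtility ChildPulledOffTracks = 100.0
someUtility CabdriverTipped      = 1.0
someUtility ChildLeftOnTracks    = -100.0

-- The thought experiment's "all utilities equal 0": every state maps to the
-- same number, so this function expresses no preference between any two states.
flatUtility :: WorldState -> Double
flatUtility _ = 0.0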

Well, empathy for others is built into me...

It looks like you got the core point of this article.

Replies from: simplicio
comment by simplicio · 2010-03-11T05:06:53.120Z · LW(p) · GW(p)

Yeah, I'm somewhat familiar with the concept of utility... I suppose what I wanted clarified was "utility for whom," but I guess it's obvious Eliezer was being tongue-in-cheek about this.

Still, it's surprising how often you find people saying "nothing matters, because the universe is heading toward heat death/there is no afterlife/we're just chemicals." What can you do but laugh and remember the opening of Annie Hall? :)

comment by nick012000 · 2010-09-28T00:43:59.058Z · LW(p) · GW(p)

To be perfectly honest, if I had my morality stripped away, and I thought could get away with it, I'd rape as many women as possible.

Not joking; my tastes already run towards domination and BDSM and the like, and without morality, there'd be no reason to hold back for fear of traumatizing my partners, other than the fear of the government punishing me for doing so.

Replies from: Psychosmurf
comment by Psychosmurf · 2013-09-19T02:24:51.311Z · LW(p) · GW(p)

Your honesty is appreciated.

Personally, I would aim to change things so that the attainment of any goal whatsoever is possible for me to achieve. Essentially, to modify myself into a universe conquering, unfriendly super-intelligence.

But why rape? I mean, it just seems so arbitrary and trivial...

comment by David_Gerard · 2011-01-26T16:48:54.434Z · LW(p) · GW(p)

Well, I already think the universe and human existence are literally pointless because we just happened. Nothing you do has an intrinsic point and you are going to die[*]. (Also, this is intrinsically hilarious.)

So I expect I'll keep on doing what I'm doing, which is trying to work out what I actually want. This is a question that has lasted me quite a few years so far.

So far I haven't lapsed into nihilist catatonia or killed everyone or destroyed the economy. This suggests that assuming a morality is not a requirement for not behaving like a sociopath. I have friends and it pleases me to be nice to them and I have a lovely girlfriend and a lovely three year old daughter who I spend most of my life's efforts on trying to bring up and on the prerequisites to that.

Mind you, reading LW is leaving me wondering if consciousness exists in countable units, if consciousness exists and if I exist. Which sounds like Moore's Paradox, but most people lead remarkably predictable lives, including me. If my mind actually did a whole lot, I think I'd expect more manifestation of it.

[*] Probably.

comment by Dorikka · 2011-01-26T18:08:50.051Z · LW(p) · GW(p)

For me, utility is just a metaphor I use for expressing how much I value different world-states and thus what importance I give to helping them come into existence (or, in the case of world-states with negative utilities, what importance I give to preventing them from coming into existence.) You couldn't prove that these equaled zero because it's a purely subjective measurement.

Thus, after a bout of laughter, I would inform you of this, and probably give you some kind of pep talk so you didn't go emo and be destructive while you rebuilt your utility system, if you hadn't already.

Then, I would live life as I had before, hoping to eliminate a whole lot of suffering.

comment by XiXiDu · 2011-01-27T12:35:40.970Z · LW(p) · GW(p)

I don't understand this post. Asking me to imagine that all utilities equal zero is like asking to imagine being a philosophical zombie. I'd do exactly the same as before of course.

Replies from: Desrtopa, ec429
comment by Desrtopa · 2011-01-27T14:42:25.271Z · LW(p) · GW(p)

I'm pretty sure that's the entire point.

comment by ec429 · 2011-09-19T03:30:29.598Z · LW(p) · GW(p)

That's what I'd do too. If all utilities equal 0, then there's no reason not to act as though utilities are non-zero. There's also no reason to privilege any set of utilities over any other set. Firstly this means that if there's any probability that utilities don't really all equal zero (maybe EY's proof is flawed, maybe my brain made an error in hearing the proof and it really proves something else entirely...) then the p-mass on "all utilities are 0" should have no effect on my decisions. If it actually is true, with probability 1 (which EY says doesn't exist, but I'm not sure whether that's true[*]), then I have no reason to behave differently, nor any reason to behave the same, so in some sense I "may as well" behave the same - but I can't formalise this, because of course there's no negative utility attached to "changing one's behaviour". I wonder if it can be got out of a limit - whether my behaviour in the limit as P(all utilities are 0) goes to 1 ought to define my behaviour when it equals 1 - but defining behaviour of limit to equal limit of behaviour is precisely what makes unbounded utility functions Dutch-bookable (as EY showed in Trust in Bayes).

So... I'd behave exactly as I do now, believing in utility functions, but I can't justify that if I know for certain that all utilities are 0. Given that I haven't thus far accepted the argument that '0 and 1 are not probabilities', this is disturbing and confusing, hence maybe I should accept that argument; at least, updating on this has caused me to raise my probability estimate that 0 and 1 are not probabilities.

[*] If I were sure that ¬∃X : P(X) = 1, then P(¬∃X : P(X) = 1) = 1, in which case things break. A formal system can't talk about itself coherently. (That 'coherently' is necessary, because Gödel numberings do allow PA to do something that looks to us like "talk about itself", but you can't conclude PA is talking about itself unless you have some metatheory outside PA, which ends up recursing to a skyhook.)

comment by Endovior · 2011-04-14T02:12:01.635Z · LW(p) · GW(p)

Imagining a state wherein all utilities are 0 is somewhat difficult for me... as I hold to a primarily egoistic morality, rather than a utilitarian one. Things primarily have utility in that they are useful to me, and that's not a state of affairs that can be stripped from me by some moral argument.

The only circumstance that I can conceive of that could actually void my morality like that would be the combination of certain knowledge of my imminent demise, formed in such a way as to deny any transhuman escape clause. Such a case might go something like, "You have incurable cancer and are certain to die in a month, with probability 1, and complications involved in that will prevent you from being preserved cryonically, so your destruction is certain to be absolute and permanent"... but that's a rather unlikely and contrived state of affairs.

Even so, presented with such a situation, I can only perceive two possibilities. The first would be to rail against fate, spending the entirety of my limited time in a desperate quest to evade apparently certain destruction. If that failed... as it would, assuming the premises of the situation are true... then I'd eventually fall to the second: to turn to madness, and deliberately adopt some sort of irrational religious position to evade the knowledge of my certain destruction... as, irrespective of my current rational perspective, I don't feel confident enough to stare absolute, permanent, and unavoidable Death in the face without flinching. That said... I'm not entirely certain that would be particularly irrational. Given that I cannot, as a Bayesian, actually assign a probability of 0 to any idea, however absurd... then, if I knew I was going to die, and that there'd be no chance to avoid it through technology, it would actually be rational to do some quick odds-finding against Pascal's Wager and pick a god that accepts a deathbed conversion. After all, it can't be rational to simply accept utter destruction if there's any chance, however slight, of avoiding it. Even a thin reed is better than nothing.

comment by Vivi · 2011-09-08T23:48:14.372Z · LW(p) · GW(p)

I once asked a friend a similar question. His answer was, "Everything."

comment by Will_Newsome · 2011-09-08T23:52:51.754Z · LW(p) · GW(p)

If heaven and Earth, despoiled of its august stamp, could ever cease to manifest it; if Morality didn't exist, it would be necessary to invent it. Let the wise proclaim it, and kings fear it.

comment by buybuydandavis · 2011-09-26T10:34:00.377Z · LW(p) · GW(p)

A nice hypothetical. If people are divorced from ideological "shoulds", they will quickly find that they still have drives and preferences that operate a lot like them.

It's interesting to follow the argument, and see where you are going with this. So far, so good, but I expect I'll be disappointed in the end. Only the day after tomorrow belongs to me.

comment by rkyeun · 2012-07-29T23:16:49.742Z · LW(p) · GW(p)

That is a sufficiently large light switch. Flipping it has an influence on my mind far greater than the thermal noise at 293K.

As far as I am aware, I am not a separate fact from my morality. I am perhaps instead a result of it. In any event, the mind I have now returns a null value when I ask it to dereference "Me_Without_A_Morality". It certainly doesn't return a model of a mind, good, evil, or somehow neither, which I might emulate for a few steps to consider what it would do.

comment by gelisam · 2012-10-24T04:07:14.479Z · LW(p) · GW(p)

I'm pretty sure I would come up with a reason to continue behaving as I do today. That's what I did when I discovered, to my horror, that good and bad were human interpretations and not universal mathematical imperatives. Or are you asking what the rational reaction should be?

comment by Carinthium · 2013-06-07T00:00:39.428Z · LW(p) · GW(p)

I would follow my emotional sentiments only, instead of rational moral arguments, for deciding my wants. I would still put a small degree of effort into being rational in order to achieve them.

comment by A1987dM (army1987) · 2013-09-28T21:12:23.447Z · LW(p) · GW(p)

nothing is moral and nothing is right;

everything is permissible and nothing is forbidden.

While these are equivalent (a utility function that always evaluates to 0 is equivalent to one that always evaluates to 1, yada yada yada), they “feel” opposite to me: “nothing is moral and nothing is right” would have the connotations of “nothing is permissible and everything forbidden”, and “everything is permissible and nothing is forbidden” would have the connotations of “everything is moral and everything is right”, or “nothing is immoral and nothing is wrong”.
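
For reference, the equivalence being gestured at is the standard VNM fact that a utility function is only defined up to positive affine transformation; a quick sketch (not a proof):

```latex
\begin{align*}
U'(x) &= a\,U(x) + b, \qquad a > 0\\
\mathbb{E}[U'(L_1)] \ge \mathbb{E}[U'(L_2)]
  &\iff a\,\mathbb{E}[U(L_1)] + b \ge a\,\mathbb{E}[U(L_2)] + b\\
  &\iff \mathbb{E}[U(L_1)] \ge \mathbb{E}[U(L_2)]
\end{align*}
```

In particular, the constant-0 and constant-1 functions are related by a = 1, b = 1, so they induce exactly the same (totally indifferent) preferences over all lotteries.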

comment by VAuroch · 2013-11-22T22:47:18.846Z · LW(p) · GW(p)

When I attempt to picture myself in a state of 'no moral wrongs', I get myself as I am. Largely, I don't act morally out of a sense of rightness, but out of enlightened self-interest. If I think I will not be caught, I act basically according to whim.

comment by blacktrance · 2014-04-18T22:10:09.972Z · LW(p) · GW(p)

If you successfully convinced me that there was no morality, I wouldn't rationally choose to do anything, I'd just sit there, since I wouldn't believe that I should do anything. I'd probably still meet my basic bodily needs when they became sufficiently demanding, since I wouldn't suppress them (I'd have no reason to), but beyond that, I'd do nothing.

Replies from: somnicule
comment by somnicule · 2014-04-19T00:34:28.168Z · LW(p) · GW(p)

Not sure I understand this properly. Why not do something?

Replies from: blacktrance
comment by blacktrance · 2014-04-19T20:45:24.631Z · LW(p) · GW(p)

Because I'd have no reason to. To clarify, I don't mean that I'd literally not do anything, I mean that I wouldn't have a reason to do anything. I would still have impulses that would cause me to do things. But I wouldn't do anything more complicated than feed myself when I'm hungry.

Replies from: somnicule
comment by somnicule · 2014-04-20T20:51:03.863Z · LW(p) · GW(p)

So you don't have any impulse to relieve your own boredom, or to spend time with other people, or to seek out better-tasting food?

Replies from: blacktrance
comment by blacktrance · 2014-04-21T16:22:29.082Z · LW(p) · GW(p)

Fulfilling those impulses would require significant conscious deliberation, and (unlike not eating/drinking) not fulfilling them would not be extremely unpleasant, so if I deliberated on them, I'd think "I have this impulse, but why should I fulfill it?" and I wouldn't fulfill it. In the case of food, I'd also think "I have this impulse, but why should I fulfill it?", but if I waited long enough, I'd feel so hungry that my deliberative process would be overridden. So, it takes not just having an impulse, but having an impulse strong enough to override conscious decisionmaking.

Replies from: somnicule
comment by somnicule · 2014-04-23T16:10:19.489Z · LW(p) · GW(p)

Wouldn't it be easier to just go with those impulses?

Replies from: blacktrance
comment by blacktrance · 2014-04-23T16:11:06.926Z · LW(p) · GW(p)

Perhaps, but why should I do what's easier?

Replies from: somnicule
comment by somnicule · 2014-04-24T10:39:37.549Z · LW(p) · GW(p)

Basically I'm confused as to what process you went through to decide that sitting around doing precisely nothing is what you'd do. There's nothing that comes to mind to weight it over other options, and you seem pretty determined to stick to it.

Replies from: blacktrance
comment by blacktrance · 2014-04-24T16:48:47.557Z · LW(p) · GW(p)

To do anything that requires thought/deliberation, I would have to choose to do it, and I'd have no reason to choose to do it, so I would remain in the default state, which is doing nothing (beyond relieving instinctual needs).

Currently, I have reasons to do what I do, but if it were proven to me that there were no morality, it would also have to be proven that there are no reasons why I should do anything.

Replies from: somnicule
comment by somnicule · 2014-04-25T13:19:14.699Z · LW(p) · GW(p)

so I would remain in the default state, which is doing nothing (beyond relieving instinctual needs).

That doesn't answer anything, really. All you've done is wrapped the same thing in some extra words. That doesn't seem to be anything resembling a "default state" to me, for instance, since humans tend to do a lot more than that even when they're not thinking about morality.

Replies from: blacktrance
comment by blacktrance · 2014-04-25T16:02:08.018Z · LW(p) · GW(p)

I suspect we're using the term "morality" differently.

comment by [deleted] · 2015-02-10T10:05:21.303Z · LW(p) · GW(p)

There are several things wrong with this post. Firstly, I'm sure different people would react to being convinced their moral philosophy was wrong in different ways. Some might wail and scream and commit suicide. Some might search further and try to find a more convincing moral philosophy. Some would just go on living their lives and not caring.

Furthermore, the outcome would be different if you could simultaneously convince everyone in a society, and give everyone the knowledge that everyone had been convinced. Perhaps the society would break down as the police and institutions upholding the law abandoned their tasks due to both apathy and a desire to capitalise on the new state of affairs, with no guilt. Who knows.

The fundamental flaw of this article is that it asks us to consult our intuitions about what would happen if so and so. Consulting our intuitions is something I believe this site shuns, so it is quite hypocritical that the author has requested we place so much emphasis on them in this instance. Furthermore, anyone answering this question who believes in moral eliminativism has a confirmation bias to say 'nothing would change' as this is seen by them to support their beliefs.

Replies from: ChristianKl, TheOtherDave, dxu
comment by ChristianKl · 2015-02-10T18:46:04.343Z · LW(p) · GW(p)

Consulting our intuitions is something I believe this site shuns,

That's not true. Our relationship to intuition is just more complex.

Replies from: None
comment by [deleted] · 2015-02-15T18:29:35.305Z · LW(p) · GW(p)

Huh. And there you had me thinking you two had split up. So are you two in an open relationship, or what?

Replies from: ChristianKl
comment by ChristianKl · 2015-02-15T19:51:28.114Z · LW(p) · GW(p)

So are you two in an open relationship, or what?

The facebook relationship status would be "It's complicated".

Basically, Kahneman did find that intuition, or System I, is quite useful. Various people in decision science have managed to run studies indicating that heuristics are important, and this community is aware of that.

CFAR speaks about integrating system I and system II.

Replies from: None
comment by [deleted] · 2015-02-15T20:35:56.937Z · LW(p) · GW(p)

Yeah... what are the chances that in 50 years' time psychologists and neurophysiologists still believe system I and II are useful heuristics to describe brain processes?

Replies from: ChristianKl, dxu, gjm
comment by ChristianKl · 2015-02-15T21:02:41.748Z · LW(p) · GW(p)

There's a reason why I said "It's complicated". I don't believe system I and system II to be perfect terms and I doubt the majority of LW thinks the terms are perfect.

comment by dxu · 2015-02-15T21:42:18.285Z · LW(p) · GW(p)

Without further information, it's difficult to say. That being said, it's the best model we have right now. Unless you have a better model to offer, questioning the validity of the latest in current neuroscience is unlikely to be productive.

comment by gjm · 2015-02-15T22:11:24.450Z · LW(p) · GW(p)

Not so bad, I think. I'd give roughly equal probability to (1) substantially the same dichotomy still being convenient, though perhaps with different names, (2) more careful investigation having refined the ideas enough to require a change in terminology (e.g., maybe it will turn out that what Kahneman calls "system 1" is better considered as two related systems, or something), and (3) the idea being largely abandoned because what's really going on turns out to be very different and it's just good/bad luck that the system 1 / system 2 dichotomy looks good in the early 21st century.

Even in case 3 I would expect there to be some parallels between system 1 / system 2 and whatever replaces it. There doesn't seem to be much doubt that our brains do some things quickly and without conscious effort and some things slowly and effortfully, or that there are ways in which the quick effortless stuff can go systematically wrong.

Replies from: None
comment by [deleted] · 2015-03-13T13:50:34.861Z · LW(p) · GW(p)

Nevertheless, the use of this currently tenuous scientific theory to found our entire understanding of intuition would seem a little bit premature, especially if the theory contradicts what other influential and valued institutions have had to say about intuition (for instance, philosophy).

Replies from: gjm
comment by gjm · 2015-03-13T15:19:19.165Z · LW(p) · GW(p)

We should found our understanding of intuition (or anything else) on the best information we currently have. Whether something's likely to be overthrown in the next 50 years is obviously related to how much we should trust it now for any given purpose, but not all that tightly. (For instance: we know that current theories of fundamental physics are wrong because we have no theory that encompasses both GR and QFT; but I for one am extremely comfortable assuming these theories are right for all "everyday" purposes -- both because it seems fairly certain that whatever new discoveries we make will have little impact on predictions governing "everyday" events, and because at present we have no good rival theories that make different predictions and seem at all likely to be correct.)

The use of the "system 1 / system 2" dichotomy here on LW doesn't appear to me to depend much on subtle details of what's going on. It looks to me -- though I am not an expert and will willingly be corrected by those who are -- as if we have quite robust evidence that some human cognitive processes are slow, under conscious control, and about as accurate as we choose to take the trouble to make them, while others are fast, not under conscious control, highly inaccurate in some identifiable circumstances, and hard to make much more accurate. And it doesn't look to me as if anything on LW requires much more than that. (Maybe some of CFAR's training makes stronger assumptions; I don't know.)

what other influential and valued institutions have had to say about intuition (for instance, philosophy)

What matters is not how influential and valued those institutions are, but what reason we have to think they're right in what they say about intuition. "Philosophy" is of course a tremendously broad thing, covering thousands of years of human endeavour. What (say) Plato thought about intuition may be very interesting -- he was very clever, and his opinions were influential -- but human knowledge has moved on a lot since his day, and in so far as we want our ideas about intuition to be correct we should give rather little weight to agreeing with Plato.

Would you like to be more specific about how our opinions about intuition should differ from those currently popular on LW, as a result of taking into account what influential and valued institutions like philosophy have said about it?

comment by TheOtherDave · 2015-02-10T20:22:33.920Z · LW(p) · GW(p)

The fundamental flaw of this article is that it asks us to consult our intuitions about what would happen if so and so.

This seems a bizarre claim. If you think the conclusion that EY is intuition-pumping to advocate for is false (which you seem to, given your first two paragraphs), surely that's a more fundamental flaw than the fact that he's intuition-pumping to advocate for it.

That said, I'll admit I don't really understand on what grounds you oppose the conclusion. (In fact, it's not even clear to me what you think the advocated-for conclusion is.)

I mean, your point seems to be that not everyone would respond to discovering that "nothing is moral and nothing is right; that everything is permissible and nothing is forbidden" in the same way, either as individuals or as collectives. And I agree with that, but I don't see how it relates to any claims made by the post you reply to.

Taking another stab at clarifying your objections might be worthwhile, if only to get clearer in your own mind about what you believe and what you expect.

Replies from: None
comment by [deleted] · 2015-02-15T17:59:43.556Z · LW(p) · GW(p)

I have no idea what the conclusion of this article is. I suspect the author wants to argue for moral eliminativism, and hopes to support moral eliminativism by claiming that nothing would change if someone (or is it everyone?) was convinced their moral beliefs were wrong. I'm not sure how exactly the author intends that to work out.

But in any case, my comment was only intended to criticise the methodology of the article, and was not aimed at discussing moral eliminativism. I simply pointed out that the question asked - what would happen if someone (or everyone?) was convinced their moral beliefs were wrong - was vague in several important aspects. And any results from intuition would be suspect, especially if the person holding those intuitions was a moral eliminativist. I was not "objecting" to anything, as the article didn't actually make any positive claims.

I might as well clarify and support myself by listing all the variations on the question possible.

(1) What would you personally do if you had no moral beliefs?
(2) What would you personally do if you believed in (some form of) moral eliminativism - e.g. that nothing is right or wrong?
(3) What would you personally do if you were convinced your moral beliefs were wrong?
What would a randomly selected person from the populace of the Earth do if (1), (2) or (3) happened to them?
What would happen if everyone in a society / the world simultaneously had (1), (2) or (3) happen to them?

Replies from: Jiro, TheOtherDave
comment by Jiro · 2015-02-15T18:42:19.072Z · LW(p) · GW(p)

I simply pointed out that the question asked - what would happen if someone (or everyone?) was convinced their moral beliefs were wrong - was vague in several important aspects.

It's vague in an additional way: you interpreted it to mean "what would you do if you were convinced that your moral beliefs were wrong". But I think Eliezer was asking "what would you do if your moral beliefs actually were wrong and you were aware of that."

That has its own problem. It's like asking "if someone could prove that creationism was true and evolution isn't, would you agree that scientists are closed-minded in rejecting it?" A hypothetical world in which creationism was true wouldn't be exactly like our own except that it contains a piece of paper with a proof of creationism written down on it. In a world where creationism really was true, scientists would either have figured it out, or would have not figured it out but would be a lot more clueless than actual-world scientists. Likewise, a world where moral beliefs were all wrong would be very unlike our world, if indeed it's a coherent concept at all--it would not be a world that is exactly like this one with the exception that I am now in possession of a proof.

Replies from: None, TheOtherDave
comment by [deleted] · 2015-02-15T19:30:45.179Z · LW(p) · GW(p)

Very true. I didn't get that from reading the article at first, but now I'm getting that vibe. I guess the more charitable reading is 'what would you do if you were convinced that your moral beliefs were wrong' or one of my variations, because you rightly point out that 'what would you do if your moral beliefs actually were wrong and you were aware of that' is an exceedingly presumptuous question.

comment by TheOtherDave · 2015-02-16T04:29:45.721Z · LW(p) · GW(p)

It's like asking "if someone could prove that creationism was true and evolution isn't, would you agree that scientists are closed-minded in rejecting it?"

For my own part, I don't have a problem with that question either, though how I answer it depends a lot on whether (and to what extent) I think we're engaged in idea-exploration vs. tribal boundary-defending. If the former, my answer is "sure" and I wait to see what follows. If the latter, I challenge the question (not unlike your answer) or otherwise push back on the boundary violation.

comment by TheOtherDave · 2015-02-16T04:25:39.722Z · LW(p) · GW(p)

Thanks for clarifying.

comment by dxu · 2015-02-15T19:50:17.944Z · LW(p) · GW(p)

Consulting your intuition in a matter of descriptive questions should be done with caution. (But even then, it's not forbidden or even really discouraged, since intuition can offer valuable--if non-rigorous--insights.) Using your intuition when confronting normative or prescriptive problems, on the other hand, is perfectly fine, because there's no "should" without an intuition about what "should" be. (Unless, of course, you think that normative problems are also descriptive, in which case you believe in objective morality, which has its own problems.)

comment by TheSurvivalMachine · 2015-02-17T13:03:30.436Z · LW(p) · GW(p)

The existence of objective moral values seems to have been a topic in the discussion below. I would like to state my view on the matter, since it connects to the original article. I define objective moral values as moral values that exist independently of the existence of life.

I do not believe that any objective moral values exist and I usually argue as follows: I ask three questions: When did objective moral values come into existence? Have we ever observed them or how can we observe them? Do we need objective moral values to explain anything that we cannot otherwise explain?

First question: A reasonable answer is that objective moral values exist in the same manner that mathematics or logic exists, and how they came into existence or in what manner they exist is a topic in itself and I will not address the issue further here. I will just state that this seems to be a reasonable answer to the question, but I am already having doubts.

Second question: I will go out on a limb here and suggest that no one has ever observed an objective moral value, but it is interesting to ask how one could be observed. Probably not by suddenly observing divine letters in the sky. For me it is actually a problem just imagining how to observe an objective moral value. But let us assume that someone has a better imagination than I, thus increasing my doubts.

Third question: The third question is what I really consider to be the nail in the coffin, since I cannot think of anything that we actually need objective moral values to explain in this universe. Every phenomenon I can think of is better explained by something else, so by Occam's razor I choose not to include objective moral values in my world view.

So what I instead believe exist are subjective moral values, and then I mean subjective in the sense that for example preference of art is subjective. For example if I state that a particular piece of art is beautiful then I do not state that it is beautiful in a higher objective sense, but instead that it is beautiful to me.

The answers to the above questions are very different for subjective moral values. First question: I believe subjective moral values came into existence when life came into existence, since subjective moral values depend on life itself, and they exist perhaps in the same sense that thoughts exist.

Second question: Subjective moral values are observed every day, at least in an indirect way. When I decide to buy ecological groceries I express a subjective moral value. In the same way, I believe every living thing expresses subjective moral values through behaviour: for example, an antelope running away from a lion expresses that it would not like to be eaten, and as such being eaten is bad according to the antelope, while the lion has the subjective moral value that the antelope being eaten by the lion is good.

Third question: Subjective moral values also have some explanatory power: if we know the subjective moral values of an individual, we can pretty much explain that individual's behaviour.

So to conclude, I do believe all living organisms express and have subjective moral values, which are dependent on the organism itself, and that there are no objective moral values which can ever be observed. And to connect with the original post, I would not be very alarmed in that situation, since it goes well with my current view of reality.

Sorry for the long comment. I tried shortening it down a bit, but now I feel like I have excluded a lot of important points and that my arguments are a bit brief. I hope you get the overall idea.

comment by ryleah · 2015-07-02T15:11:29.705Z · LW(p) · GW(p)

The benefit of morality comes from the fact that brains are slow to come up with new ideas but quick to recall stored generalizations. If you can make useful rules and accurate generalizations by taking your time and considering possible hypotheticals ahead of time, then your behavior when you don't have time to be thoughtful will be based on what you want it to be based on, instead of television and things you've seen other monkeys doing.

Objective morality is a trick that people who come up with moralities that rely on co-operation play on people who can't be bothered to come up with their own codes. If I learned, suddenly and definitively, that nothing is moral and nothing is right, I wouldn't change anything except to be more secretive about my own morality in order to keep everyone else from finding out that they don't need to follow theirs.

Replies from: Jiro
comment by Jiro · 2015-07-02T19:42:40.358Z · LW(p) · GW(p)

If I learned, suddenly and definitively, that nothing is moral and nothing is right, I wouldn't change anything except to be more secretive about my own morality in order to keep everyone else from finding out that they don't need to follow theirs.

I'm not so sure of that myself. There are cases where I want others to realize that they don't need to follow their own morality. Sometimes people's morality leads them to do things that harm me. (I'm sure you can think of examples.)

comment by themusicgod1 · 2017-06-04T20:06:09.913Z · LW(p) · GW(p)

Modernized version, as of 2017, of the first part of this post: http://82.221.128.217/trolley-lw.png

More serious reply: depending on when you encountered me, I'd be more boring in some ways, since a lot of what I spend my time doing is towards a moral end. All the things I've learned in life I learned from trying to live in a moral universe. I would never have gotten a degree; I did that virtually entirely for what I perceived to be reasons of altruism. Since I'm assuming here that everyone else will continue to live under the illusion that they are in such a universe, and that only I leave it... even if it were merely 2008 when I encountered this revelation, I would not have donated so much to charity, I would not have gone into teaching children science... the whole of my thereafter-short life would have been hedonism, torture, probably serial rape/murder and hard drugs. I wouldn't have lived with decent, hardworking people -- I'd probably have been kidnapped by gangsters or something and OD'd on heroin by now. I sure wouldn't care about the state of my country, my family, or mathematics or anything like that.

comment by [deleted] · 2017-11-18T15:31:23.850Z · LW(p) · GW(p)

I would be depressed and do nothing at all, as empirically verified.

Gotta have _some_ answer to "what is good".

How did I reconcile this? What is the right morality when everyone's morality differs?

Well, mine, of course. What else?

comment by DragonGod · 2017-11-18T17:41:10.538Z · LW(p) · GW(p)

I don't believe in objective morality in the first place.

My moral system has only one axiom:

Maximise your utility.

If nothing were right, I'd still go on maximising my utility. I don't try to maximise my utility because I believe utility maximisation is some a priori "right" thing to do—I try to maximise my utility because I want to. Unless your proof changed my desires (in which case I don't know what I would do), I expect I would go on trying to maximise my utility.

Replies from: eugene_black
comment by eugene_black · 2021-11-23T02:07:30.891Z · LW(p) · GW(p)

But here is a problem: how would you calculate your utility if you have no moral system? You need at least some more moral axioms.

Replies from: TAG
comment by TAG · 2021-11-23T15:54:12.250Z · LW(p) · GW(p)

In the absence of morality, you maximise non moral preferences. There is no proof that all preferences are moral preferences. It doesn't follow from "all morality is preferences", even if that is true.

Replies from: eugene_black
comment by eugene_black · 2021-11-24T00:50:26.829Z · LW(p) · GW(p)

Well, we definitely need a good definition of Morality then. And of what moral and non moral preferences are. Looks like it converges to a discussion about terminology. Trying to understand what you have in mind, I can assume that an example of non moral preferences could be something like basic human needs. But when you choose to have this as a base, doesn't that become your moral principle?

Replies from: TAG
comment by TAG · 2021-11-24T18:55:01.179Z · LW(p) · GW(p)

Well, we definitely need a good definition of Morality then

That's not impossible ... we perhaps have too many candidates, not too few.

Looks like it converges to a discussion about terminology.

Is that a bad thing? If you don't discuss what you mean by "morality" you might end up believing that all preferences are moral preferences, just because you've never thought about what "moral" means.

comment by Thomas Eisen (thomas-eisen) · 2020-03-10T22:34:53.861Z · LW(p) · GW(p)

There would actually be several changes:

I would stop being vegan.

I would stop donating money (note: I currently donate quite a lot of money for projects of "Effective altruism").

I would stop caring about Fairtrade.

I would stop feeling guilty about anything I did, and stop making any moral considerations about my future behaviour.

If others are overly friendly, I would fully abuse this for my advantage.

I might insult or punch strangers "for fun" if I'm pretty sure I will never see them again (and they don't seem like the kind of person who seeks retribution).

I would become less willing to help others.

I would care very little about politics, and might not go voting.

I wouldn't be angry at anyone unless their action influences me personally (note: If they hurt a person with whom I have a relationship, this would influence me. If they hurt a stranger, this wouldn't influence me)

And there would probably be quite a few more changes I haven't thought of yet.


I would still continue my current hobbies, and do things if I have a "feeling" that I "want" to do them. These "feelings" would only be stopped by fear of personal costs, not by moral considerations (and not making moral considerations would indeed make a change; see above).

comment by eugene_black · 2021-11-22T01:24:42.280Z · LW(p) · GW(p)

If you now believe that nothing is right, do the following:

  1. Remember that nothing is 100% true, so there is a chance that this is a false assumption.
  2. Take all candidates for Morality that future you might follow.
  3. Make a weighted sum of normalized utility functions of every M. Take a somehow calculated (need to think how) probability of you choosing a specific M as a weight.
  4. Normalize.
  5. The zero utility function of nothing-is-rightness will not participate, as you can't normalize a constant zero.
  6. You have a utility function now. Go and work. (A rough sketch of this procedure appears below.)
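
A rough sketch of steps 2-6 in code. Everything here is a placeholder: the candidate moralities, their utility values, and the weights are invented for illustration, and "normalize" is read as rescaling each candidate's utilities to [0, 1] over the options under consideration, which is only one of several possible normalization schemes:

```python
# Hypothetical sketch: combine candidate moralities under uncertainty by
# taking a probability-weighted sum of normalized utility functions.

def normalize(utilities):
    """Rescale an {option: utility} dict to the [0, 1] range.

    A constant candidate (e.g. the all-zero "nothing is right" function)
    has no spread to rescale, so it drops out -- step 5 above.
    """
    lo, hi = min(utilities.values()), max(utilities.values())
    if hi == lo:
        return None  # can't normalize a constant function
    return {opt: (u - lo) / (hi - lo) for opt, u in utilities.items()}

def combined_utility(candidates, weights):
    """Weighted sum of normalized candidate utility functions (steps 3-4)."""
    total = {}
    for name, utilities in candidates.items():
        scaled = normalize(utilities)
        if scaled is None:
            continue  # step 5: constant candidates don't participate
        for opt, u in scaled.items():
            total[opt] = total.get(opt, 0.0) + weights[name] * u
    return total

# Placeholder candidates and weights -- purely illustrative numbers.
candidates = {
    "hedonism":      {"stay_in_bed": 5, "help_stranger": 1, "read": 3},
    "altruism":      {"stay_in_bed": 0, "help_stranger": 9, "read": 2},
    "nothing_right": {"stay_in_bed": 0, "help_stranger": 0, "read": 0},
}
weights = {"hedonism": 0.3, "altruism": 0.6, "nothing_right": 0.1}

scores = combined_utility(candidates, weights)
print(max(scores, key=scores.get))  # the option the mixture recommends
```

The "somehow calculated" probabilities from step 3 show up only as the made-up weights; how to actually obtain them is the hard part the list leaves open.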

Basically,

Rationality is a servant of Morality. There is no utility function separate from M.

M is like a subset of axioms driving your R. Thus, you cannot prove that nothing is right. But you cannot prove the opposite either, simply because it is nonsense. You need an origin for your logical chain.

And that origin is our desires. Always. And desires depend on chemical reactions in your brain. We don't even need to imagine all these things. Clinical depression will do the work.

It is when you feel nothing-is-rightness: there is nothing you want, not even to get out of bed. But the utility function says "go and do the best until your M is not so destructive". And it works.

Replies from: tivelen
comment by tivelen · 2021-11-22T02:20:25.239Z · LW(p) · GW(p)

This is something I've thought about recently. Even if you cannot identify your goals, you still have to make choices. The difficult part is in determining the distribution of possible M. In the end, I think the best I've been able to do is to follow convergent instrumental goals that will maximize the probability of fulfilling any goal, regardless of the actual distribution of goals. It is necessary to let go of any ego as well, since you cannot care about yourself more than another person if you don't care about anything, now can you?

Replies from: eugene_black
comment by eugene_black · 2021-11-24T00:53:30.139Z · LW(p) · GW(p)

Yeah, I think for general activities we can make a list of things that have positive utility in most cases. For example:

  1. Always care about your health and life. It is the base of everything. You can't do much if you are sick or dead.
  2. Don't do anything illegal. You can't do much if you are in prison.
  3. Keep good relationships with everybody if that does not take much effort. Social status and connections are useful for almost anything.
  4. Money and time are universal currencies. Try to maximize your hourly income, but leave enough space for other things from the list.
  5. Keep your mind in good shape. Mind degradation can be very fast if you don't care, and you need it for rationality.
  6. Spend some time on research of the M problem. Not too much, because you will lose other items from the list, but enough to make progress; otherwise you will spend all your life in this goal-less loop and end up regretting that you never spent enough effort to break out.

etc. I think this can be a very wide list.

comment by EniScien · 2022-05-13T08:26:40.234Z · LW(p) · GW(p)

I think that after that I would just act the way I normally do, just as easily, without trying to do anything better. But yes, it would definitely not be a reason for me to change my behavior or to take some kind of active action.

comment by Diogo Jorge (diogo-jorge) · 2023-01-10T16:00:10.847Z · LW(p) · GW(p)

I would probably end my life in that scenario. If nothing is right, and nothing is wrong, then there's simply no reason why I should care about anything, including myself.

Replies from: TAG, Raemon
comment by TAG · 2023-01-10T16:28:56.841Z · LW(p) · GW(p)

In the absence of morality, you can still maximise non moral preferences.

comment by Raemon · 2023-01-12T00:42:00.172Z · LW(p) · GW(p)

Would you actually, though?