The Trouble With "Good"
post by Scott Alexander (Yvain) · 2009-04-17T02:07:32.881Z · LW · GW · Legacy
Related to: How An Algorithm Feels From Inside [LW · GW], The Affect Heuristic [LW · GW], The Power of Positivist Thinking
I am a normative utilitarian and a descriptive emotivist: I believe utilitarianism is the correct way to resolve moral problems, but that the normal mental algorithms for resolving moral problems use emotivism.
Emotivism, aka the yay/boo theory, is the view that moral statements, however official they may sound, are merely expressions of personal approval or disapproval. Thus, "feeding the hungry is a moral duty" corresponds to "yay for feeding the hungry!" and "murdering kittens is wrong" corresponds to "boo for kitten murderers!"
Emotivism is a very nice theory of what people actually mean when they make moral statements. Billions of people around the world, even the non-religious, happily make moral statements every day without having any idea what they reduce to or feeling like they ought to reduce to anything.
Emotivism also does a remarkably good job capturing the common meanings of the words "good" and "bad". An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad", "the book was good, but the movie was bad", "atheism is good, theism is bad", "evolution is good, creationism is bad", and "dogs are good, but cats are bad". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left". It feels like they're all just the same thing. The moral theory that captures that feeling is emotivism. Yay pizza, books, Israelis, atheists, dogs, and evolution! Boo seafood, Palestinians, movies, theists, creationism, and cats!
Remember, evolution is a crazy tinker who recycles everything. So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt. To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.1
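To make the metaphor concrete, here is a minimal sketch of the "mental karma" idea (purely illustrative; the concepts, point values, and function names are invented, and this is not a claim about how the brain actually implements anything): each concept gets one running score, every affect-laden observation nudges that single number, and the resulting attitude depends only on the net score, not on the reasons behind it.

from collections import defaultdict

karma = defaultdict(int)  # one running score per concept

def observe(concept, affect):
    """Fold an affect-laden observation (+/- points) into the concept's single bin."""
    karma[concept] += affect

def attitude(concept):
    """The decision rule sees only the net score, not the reasons that produced it."""
    return "seek/endorse" if karma[concept] > 0 else "avoid/condemn"

observe("cats", -2)          # allergic to cats
observe("Palestinians", -3)  # heard about a terrorist attack
observe("atheism", +1)       # Dawkins said something witty
print(attitude("cats"))      # 'avoid/condemn' -- the original reasons are gone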
Remember back during the presidential election, when a McCain supporter claimed that an Obama supporter attacked her and carved a "B" on her face with a knife? This was HUGE news. All of my Republican friends started emailing me and saying "Hey, did you hear about this, this proves we've been right all along!" And all my Democratic friends were grumbling and saying how it was probably made up and how we should all just forget the whole thing.
And then it turned out it WAS all made up, and the McCain supporter had faked the whole affair. And now all of my Democratic friends started emailing me and saying "Hey, did you hear about this, it shows what those Republicans and McCain supporters are REALLY like!" and so on, and the Republicans were trying to bury it as quickly as possible.
The overwhelmingly interesting thing I noticed here was that everyone seemed to accept - not explicitly, but implicitly very much - that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash. But to an emotivist, where any bad feelings associated with Obama count against him, it sort of makes sense. All those people emailing me about this were saying: Look, here is something negative associated with Obama; downvote him!2
So this is one problem: the inputs to our mental karma system aren't always closely related to the real merit of a person/thing/idea.
Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral [LW · GW].
Another problem: we are tempted to assign everything about a concept the same score. Eliezer Yudkowsky currently has 2486 karma. How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma. How much does he know about economics? Somewhere around level 2486 would be my guess. How well does he write? Probably well enough to get 2486 karma. Translated into mental terms, this looks like the Halo Effect [LW · GW]. Yes, we can pick apart our analyses in greater detail; having read Eliezer's posts, I know he's better at some things than others. But that 2486 number is going to cause anchoring-and-adjustment issues even so.
But the big problem, the world-breaking problem, is that sticking everything good and bad about something into one big bin and making decisions [LW · GW] based on whether it's a net positive or a net negative is an unsubtle, leaky heuristic completely unsuitable for complicated problems.
Take gun control. Are guns good or bad? My gut-level emotivist response is: bad. They're loud and scary and dangerous and they shoot people and often kill them. It is very tempting to say: guns are bad, therefore we should have fewer of them, therefore gun control. I'm not saying gun control is therefore wrong: reversed stupidity is not intelligence. I'm just saying that before you can rationally consider whether or not gun control is wrong, you need to get past this mode of thinking about the problem.
In the hopes of using theism less often, a bunch of Less Wrongers have agreed that the War on Drugs would make a good stock example of irrationality. So, why is the War on Drugs so popular? I think it's because drugs are obviously BAD. They addict people, break up their families, destroy their health, drive them into poverty, and eventually kill them. If we've got to have a category "drugs"3, and we've got to call it either "good" or "bad", then "bad" is clearly the way to go. And if drugs are bad, getting rid of them would be good! Right?
So how do we avoid all of these problems?
I said at the very beginning that I think we should switch to solving moral problems through utilitarianism. But we can't do that directly. If we ask utilitarianism "Are drugs good or bad?" it returns: CATEGORY ERROR. Good for it.
Utilitarianism can only be applied to states, actions, or decisions, and it can only return a comparative result. Want to know whether stopping or diverting the trolley in the Trolley Problem would be better? Utilitarianism can tell you. That's because it's a decision between two alternatives (alternate way of looking at it: two possible actions; or two possible states) and all you need to do is figure out which of the two is higher utility.
When people say "Utilitarianism says slavery is bad" or "Utilitarianism says murder is wrong" - well, a utilitarian would endorse those statements over their opposites, but it takes a lot of interpretation first. What utilitarianism properly says is "In this particular situation, the action of freeing the slaves leads to a higher utility state than not doing so" and possibly "and the same would be true of any broadly similar situation".
But why in blue blazes can't we just go ahead and say "slavery is bad"? What could possibly go wrong?
Ask an anarchist. Taxation of X% means you're forced to work for X% of the year without getting paid. Therefore, since slavery is "being forced to work without pay," taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.4
(Again, reversed stupidity is not intelligence. There are good arguments against taxation. But this is not one of them.)
Emotivism is the native architecture of the human mind. No one can think like a utilitarian all the time. But when you are in an Irresolvable Debate, utilitarian thinking may become necessary to avoid dangling variable problems around the word "good" (cf. Islam is a religion of peace). Problems that are insoluble at the emotivist level can be reduced, simplified, and resolved on the utilitarian level with enough effort.
I've used the example before, and I'll use it again. Israel versus Palestine. One person can go on and on for months about all the reasons the Israelis are totally right and the Palestinians are completely in the wrong, and another person can go on just as long about how the Israelis are evil oppressors and the Palestinians just want freedom. And then if you ask them about an action, or a decision, or a state - they've never thought about it. They'll both answer something like "I dunno, the two-state solution or something?". And if they still disagree at this level, you can suddenly apply the full power of utilitarianism to the problem in a way that tugs sideways to all of their personal prejudices.
In general, any debate about whether something is "good" or "bad" is sketchy, and can be changed to a more useful form by converting the thing to an action and applying utilitarianism.
Footnotes:
1: It should be noted that this karma analogy can't explain our original perception of good and bad, only the system we use for combining, processing, and utilizing it. My guess is that the original judgment of good or bad takes place through association with other previously determined good or bad things, down to a bottom level of things that are programmed into the organism (i.e. pain, hunger, death), with some input from the rational centers.
2: More evidence: we tend to like the idea of "good" or "bad" being innate qualities of objects. Thus the alternative medicine practitioner who tells you that real medicine is bad, because it uses scary pungent chemicals, which are unhealthy, and alternative medicine is good, because it uses roots and plants and flowers, which everyone likes. Or fantasy books, where the Golden Sword of Holy Light can only be wielded for good, and the Dark Sword of Demonic Shadow can only be wielded for evil.
3: Of course, the battle has already been half-lost once you have a category "drugs". Eliezer once mentioned something about how considering {Adolf Hitler, Joe Stalin, John Smith} a natural category isn't going to do John Smith any good, no matter how nice a man he may be. In the category "drugs", which looks like {cocaine, heroin, LSD, marijuana}, LSD and marijuana get to play the role of John Smith.
4: And, uh, I'm sure Louis XVI would feel the same way. Sorry. I couldn't think of a better example.
137 comments
Comments sorted by top scores.
comment by PhilGoetz · 2009-04-17T17:38:35.835Z · LW(p) · GW(p)
This very good post! Yay Yvain! You have high karma. Please give me stock advice.
I know a guy who constructed a 10-dimensional metric space for English words, then did PCA on it. There were only 4 significant components: good-bad, calm-exciting, open-closed, basic-elaborate. They accounted for 65%, 20%, 9%, and 5% of the variance in the 10-dimensional space, leaving 1% for everything else. This means that we need only 8 adjectives in English 99% of the time.
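For readers curious what such an analysis looks like mechanically, here is a sketch using synthetic data (the original study's word ratings are not reproduced here, and with random data no component will dominate; the point is only the procedure of checking how much variance the leading components explain):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for ratings of 500 words on 10 semantic scales.
word_vectors = rng.normal(size=(500, 10))

pca = PCA(n_components=10).fit(word_vectors)
print(pca.explained_variance_ratio_)  # real data would show a few dominant components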
So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt.
This could be explored more deeply in another post.
↑ comment by Scott Alexander (Yvain) · 2009-05-02T22:28:42.276Z · LW(p) · GW(p)
Sorry, I didn't see this until today.
Can you give me a link to some more formal description of this? I don't understand how you would use a ten dimensional metric space to capture English words without reducing them to a few broad variables, which seems to be what he's claiming as a result.
↑ comment by Peter_de_Blanc · 2009-04-17T21:56:32.514Z · LW(p) · GW(p)
Are you talking about Alexei Samsonovich? I saw a very similar experiment that he did.
↑ comment by nazgulnarsil · 2009-04-17T20:38:17.865Z · LW(p) · GW(p)
I agree that it could use more exploration. I suspect that many of our biases stem from simple preference ranking errors.
↑ comment by FlakAttack · 2009-04-19T06:09:25.084Z · LW(p) · GW(p)
I'm pretty sure I actually saw this in a philosophy textbook, which would mean there are likely observations or studies on the subject.
↑ comment by taiwanjohn · 2009-04-20T16:45:59.358Z · LW(p) · GW(p)
Buddhism says there are two basic emotions, fear and love, and all other human emotions are some permutation or combination of these two elements.
--jrd
comment by MBlume · 2009-04-17T02:46:41.783Z · LW(p) · GW(p)
Of course, the battle has already been half-lost once you have a category "drugs".
Especially since the category itself is determined by governmental fiat. I once saw an ad for employment at Philip Morris with a footnote to the effect that Philip Morris is a "drug-free workplace". I'm sure they've plenty of nicotine and caffeine there; they're simply using 'drugs' to mean "things to which the federal government has already said 'boo'".
comment by David Althaus (wallowinmaya) · 2011-06-05T17:52:14.414Z · LW(p) · GW(p)
Eliezer Yudkowsky currently has 2486 karma.
Ah, the good old days!
comment by andrewc · 2009-04-17T04:00:25.890Z · LW(p) · GW(p)
pizza is good, seafood is bad
When I say something is good or bad ("yay doggies!") it's usually a kind of shorthand:
pizza is good == pizza tastes good and is fun to make and share
seafood is bad == most cheap seafood is reprocessed offcuts and gave me food poisoning once
yay doggies == I find canine companions to be beneficial for my exercise routine, useful for home security and fun to play with.
I suspect when most people use the words 'good' and 'bad' they are using just this kind of linguistic compression. Or is your point that once a 'good' label is assigned we just increment its goodness index and forget the detailed reasoning that led us to it? Sorry, the post was an interesting read but I'm not sure what you want me to conclude.
↑ comment by jimrandomh · 2009-04-17T04:33:40.547Z · LW(p) · GW(p)
Or is your point that once a 'good' label is assigned we just increment its goodness index and forget the detailed reasoning that led us to it?
Exactly that. We may be able to recall our reasoning if we try to, but we're likely to throw in a few extra false justifications on top, and to forget about the other side.
↑ comment by andrewc · 2009-04-17T06:23:24.610Z · LW(p) · GW(p)
OK, 'compression' is the wrong analogy as it implies that we don't lose any information. I'm not sure this is always a bad thing. I might have use of a particular theorem. Being the careful sort, I work through the proof. Satisfied, I add the theorem to my grab bag of tricks (yay product rule!). In a couple of weeks (hours even...) I have forgotten the details of the proof, but I have enough confidence in my own upvote of the theorem to keep using it. The details are no longer relevant unless some other evidence comes along that brings the theorem, and thus the 'proof' into question.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-17T13:23:47.646Z · LW(p) · GW(p)
This drives me crazy when it happens to me.
- Someone: "Shall we invite X?"
- Me: "No, X is bad news. I can't remember at all how I came to this conclusion, but I recently observed something and firmly set a bad news flag against X."
↑ comment by arthurlewis · 2009-04-18T00:09:43.358Z · LW(p) · GW(p)
Those kinds of flags are the only way I can remember what I like. My memory is poor enough that I lose most details about books and movies within a few months, but if I really liked something, that 5-Yay rating sticks around for years.
Hmm, I guess that's why part of my brain still thinks Moulin Rouge, which I saw on a very enjoyable date, and never really had the urge to actually watch again, is one of my favorite movies.
Compression seems a fine analogy to me, as long as we're talking about mp3's and flv's, rather than zip's and tar's.
↑ comment by Cameron_Taylor · 2009-04-21T07:22:56.354Z · LW(p) · GW(p)
Compression seems a fine analogy to me, as long as we're talking about mp3's and flv's, rather than zip's and tar's.
tar's are archived, not compressed. tar.gz's are compressed.
↑ comment by whpearson · 2009-04-18T21:40:06.511Z · LW(p) · GW(p)
I think of it as memoisation rather than compression.
↑ comment by AndySimpson · 2009-04-17T11:39:24.568Z · LW(p) · GW(p)
I'm not sure this is always a bad thing.
It may be useful shorthand to say "X is good", but when we forget the specific boundaries of that statement and only remember the shorthand, it becomes a liability. When we decide that the statement "Bayes' Theorem is valid, true, and useful in updating probabilities" collapses into "Bayes' Theorem is good," we invite the abuse of Bayes' Theorem.
So I wouldn't say it's always a bad thing, but I'd say it introduces unnecessary ambiguity and contributes to sub-optimal moral reasoning.
↑ comment by janos · 2009-04-17T14:22:00.721Z · LW(p) · GW(p)
Do you have some good examples of abuse of Bayes' theorem?
↑ comment by AndySimpson · 2009-04-17T15:15:32.952Z · LW(p) · GW(p)
That is a good question for a statistician, and I am not a statistician.
One thing that leaps to mind, however, is two-boxing on Newcomb's Problem using assumptions about the prior probability of box B containing $1,000,000. Some new work using math that I don't begin to understand suggests that either response to Newcomb's problem is defensible using Bayesian nets.
There could be more trivial cases, too, where a person inputs unreasonable prior probabilities and uses cargo-cult statistics to support some assertion.
Also, it's struck me that a frequentist statistician might call most Bayesian uses of the theorem "abuses."
I'm not sure those are really good examples, but I hope they're satisfying.
↑ comment by SoullessAutomaton · 2009-04-17T12:11:39.219Z · LW(p) · GW(p)
Exactly that. We may be able to recall our reasoning if we try to, but we're likely to throw in a few extra false justifications on top, and to forget about the other side.
I suspect it's more likely that we won't remember it at all; we'd simply increase the association between the thing and goodness and, if looking for a reason, will rationalize one on the spot. Our minds are very good at coming up with explanations but not good at remembering details.
Of course, if your values and knowledge haven't changed significantly, you'll likely confabulate something very similar to the original reasoning; but as the distance increases between the points of decision and rationalization, the accuracy is likely to drop.
comment by Alicorn · 2009-04-17T02:41:04.380Z · LW(p) · GW(p)
I'm taking an entire course called "Weird Forms of Consequentialism", so please clarify - when you say "utilitarianism", do you speak here of direct, actual-consequence, evaluative, hedonic, maximizing, aggregative, total, universal, equal, agent-neutral consequentialism?
↑ comment by Scott Alexander (Yvain) · 2009-04-17T03:34:12.928Z · LW(p) · GW(p)
Uh.....er....maybe!
I'm familiar with Bentham, Mill, Singer, Eliezer, and random snippets of utilitarian theory I picked up here and there. I'm not confident enough with my taxonomy to use quite so many adjectives with confidence. I will add that article to the list of things to read.
I agree that your course sounds awesome. If you hear anything particularly enlightening, please turn it into an LW post.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-17T04:10:09.619Z · LW(p) · GW(p)
Seconded.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-17T03:15:15.879Z · LW(p) · GW(p)
This sounds like an awesome course.
↑ comment by Alicorn · 2009-04-17T03:53:34.682Z · LW(p) · GW(p)
It is one. I am taking it for its awesomeness in spite of the professor being a mean person who considered it appropriate to schedule the class from seven to nine-thirty in the evening. (His scheduling decision and his meanness are separate qualities.)
↑ comment by CarlShulman · 2009-04-17T06:14:39.268Z · LW(p) · GW(p)
Could you give us the course website?
comment by A1987dM (army1987) · 2014-01-12T09:35:47.610Z · LW(p) · GW(p)
The overwhelmingly interesting thing I noticed here was that everyone seemed to accept - not explicitly, but implicitly very much - that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash.
It would be Bayesian evidence of the right sign. But its magnitude would be vanishingly tiny.
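One way to put numbers on "right sign, vanishing magnitude" (the per-supporter incident rates below are invented purely for illustration): if a violent incident is only barely more likely per supporter under the hypothesis "this candidate's supporters are worse people," then a single incident shifts the odds by a likelihood ratio close to 1.

import math

p_incident_if_supporters_worse = 1.1e-6   # assumed rate per supporter
p_incident_if_not              = 1.0e-6   # assumed rate per supporter

likelihood_ratio = p_incident_if_supporters_worse / p_incident_if_not
print(likelihood_ratio)               # ~1.1: evidence of the right sign
print(math.log10(likelihood_ratio))   # ~0.04: a nudge to the log-odds, not a verdict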
comment by Kaj_Sotala · 2009-04-17T10:50:10.149Z · LW(p) · GW(p)
Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral.
Upvoted because of this bit. It's obvious in retrospect, but I hadn't made the connection between the two concepts previously.
↑ comment by PhilGoetz · 2009-04-17T17:40:22.793Z · LW(p) · GW(p)
We sometimes do it the opposite way on LW: We'll upvote something that we wouldn't normally if it has < 0 points, because we're seeking its appropriate level rather than just voting.
I don't know that anyone downvotes something because they think it's too popular. I have refrained from voting for things that I thought had enough upvotes.
↑ comment by randallsquared · 2009-04-17T21:17:17.572Z · LW(p) · GW(p)
I don't think I have on LW, but on reddit I have downvoted things that seemed too popular, though I technically agreed with them, so it does happen.
↑ comment by rabidchicken · 2010-08-17T06:45:05.845Z · LW(p) · GW(p)
troll (){ Downvoted because it was already Upvoted. I hate being controlled by Affective death spirals. }
comment by PhilGoetz · 2010-11-08T22:08:55.928Z · LW(p) · GW(p)
I saw a presentation where someone took thousands of English words, placed them in a high-dimensional space based on, I think, what other words they co-occurred with, ran PCA on this space, and analyzed the top 4 resulting dimensions. The top dimension was "good/bad".
↑ comment by jmmcd · 2010-11-09T07:25:36.820Z · LW(p) · GW(p)
You already said so in this very thread :)
↑ comment by Unnamed · 2010-11-08T22:31:49.510Z · LW(p) · GW(p)
This sounds like semantic differential research. The standard finding is three dimensions: good-bad (evaluation), strong-weak (potency), and active-passive (activity).
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-16T00:09:43.386Z · LW(p) · GW(p)
"Considered harmful" considered harmful.
comment by [deleted] · 2012-01-19T20:11:34.797Z · LW(p) · GW(p)
How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma.
Clearly this means I'm better at philosophy than Eliezer (2009). But to be serious, this reminds me how I need to value the karma scores of articles differently according to when they were made. The effects are pretty big. Completely irrelevant discussion threads now routinely get over 20 votes, while some insightful old writing hovers around 10 or 15.
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-16T00:12:22.823Z · LW(p) · GW(p)
"If we ask utilitarianism "Are drugs good or bad?" it returns: CATEGORY ERROR. Good for it."
It wouldn't be contradictory for someone to assign high utility to the presence of drugs and low utility to their absence. What you really mean is that, upon reflection, most people would not do this.
comment by thomblake · 2009-04-17T02:14:55.766Z · LW(p) · GW(p)
Emotivism has its problems. Notably, you can't use 'yay' and 'boo' exclamations in arguments, and they can't be reasons.
"Should I eat this apple?" Becomes simply "how do I feel about eating this apple" (or otherwise it's simply meaningless). But really there are considerations that go into the answer other than mere feelings (for example, is the apple poisonous?).
Because utilitarianism has a theory of right action and a theory of value, I don't think it's compatible with emotivism. But I haven't read much in the literature detailing this particular question, as I don't read much currently about utilitarianism.
↑ comment by Scott Alexander (Yvain) · 2009-04-17T02:25:08.489Z · LW(p) · GW(p)
Well, what's interesting about that comment is that our beliefs about our own justifications and actions are usually educated guesses and not privileged knowledge. Or consider Eliezer's post about the guy who said he didn't respect Eliezer's ideas because Eliezer didn't have a Ph.D, and then when Eliezer found a Ph.D who agreed with him, the guy didn't believe him either.
My guess would be that we see the apple is poisonous and "downvote" it heavily. Then someone asks what we think of the apple, we note the downvotes, and we say it's bad. Then the person asks why we think it's bad, and our unconscious supplies whatever rationale it thinks is most plausible and feeds it to us. Which is probably that it's poisonous.
See also: footnote 1
↑ comment by pjeby · 2009-04-17T02:41:09.076Z · LW(p) · GW(p)
Then the person asks why we think it's bad, and our unconscious supplies whatever rationale it thinks is most plausible and feeds it to us.
Don't blame the unconscious. It only makes up explanations when you ask for them.
My first lesson in this was when I was 17 years old, at my first programming job in the USA. I hadn't been working there very long, maybe only a week or two, and I said something or other that I hadn't thought through -- essentially making up an explanation.
The boss reprimanded me, and told me of something he called "Counter man syndrome", wherein a person behind a counter comes to believe that they know things they don't know, because, after all, they're the person behind the counter. So they can't just answer a question with "I don't know"... and thus they make something up, without really paying attention to the fact that they're making it up. Pretty soon, they don't know the difference between the facts and their own bullshit.
From then on, I never believed my own made-up explanations... at least not in the field of computers. Instead, I considered them as hypotheses.
So, it's not only a learnable skill, it can be learned quickly, at least by a 17-year-old. ;-)
↑ comment by vizikahn · 2009-04-17T10:00:59.313Z · LW(p) · GW(p)
When I had a job behind a counter, one of the rules was: "We don't sell 'I don't know'". We were encouraged to look things up as hard as possible, but it's easy to see how this turns into making things up. I'm going to use the term Counter man syndrome from now on.
↑ comment by Scott Alexander (Yvain) · 2009-04-17T03:31:28.498Z · LW(p) · GW(p)
I think we're talking about subtly different things here. You're talking about explanations of external events, I'm talking about explanations for your own mind states, ie why am I sad right now.
I don't like blaming the "unconscious" or even using the word - it sounds too Freudian - but there aren't any other good terms that mean the same thing.
↑ comment by pjeby · 2009-04-17T06:16:37.063Z · LW(p) · GW(p)
I think we're talking about subtly different things here. You're talking about explanations of external events, I'm talking about explanations for your own mind states, ie why am I sad right now.
I'm pointing out that there is actually no difference between the two. Your "explainer" (I call it the Speculator, myself), just makes stuff up with no concern for the truth. All it cares about are plausibility and good self-image reflection.
I don't see the Speculator as entirely unconscious, though. In fact, most of us tend to identify with the Speculator, and view its thoughts as our own. Or, I suppose, you might say that the Speculator is a tool that we can choose to think with... and we tend to reach for it by default.
I don't like blaming the "unconscious" or even using the word - it sounds too Freudian - but there aren't any other good terms that mean the same thing.
Sometimes I refer to the other-than-conscious, or to non-conscious processes. But finer distinctions are useful at times, so I also refer to the Savant (non-verbal, sensory-oriented, single-stepping, abstraction/negation-free) and the Speculator (verbal, projecting, abstracting, etc.)
I suppose it's open to question whether the Speculator is really "other-than-conscious", in that it sounds like a conscious entity, and we consciously tend to identify with it, in the absence of e.g. meditative or contemplative training.
↑ comment by SoullessAutomaton · 2009-04-17T11:51:09.721Z · LW(p) · GW(p)
I think we're talking about subtly different things here. You're talking about explanations of external events, I'm talking about explanations for your own mind states, ie why am I sad right now.
What makes you think the mental systems to construct either explanation would be different? Especially given the research showing that we have dedicated mental systems devoted to rationalizing observed events.
↑ comment by orthonormal · 2009-04-17T16:45:03.436Z · LW(p) · GW(p)
Emotivism has its problems. Notably, you can't use 'yay' and 'boo' exclamations in arguments, and they can't be reasons.
Right. I think that most people hold the belief that their system of valuations is internally consistent (i.e. that you can't have two different descriptions of the same thing that are both complete, accurate, and assign different valences), which requires them (in theory) to confront moral arguments.
I think of basic moral valuations as being one other facet of human perception: the complicated process by which we interpret sensory data to get a mental representation of objects, persons, actions, etc. It seems that one of the things our mental representation often includes is a little XML tag indicating moral valuation.
The general problem is that these don't generally form a coherent system, which is why intelligent people throughout the ages have been trying to convince themselves to bite certain bullets. Your conscious idea of what consistent moral landscape lies behind these bits of 'data' inevitably conflicts with your immediate reactions at some point.
↑ comment by conchis · 2009-04-17T17:13:43.126Z · LW(p) · GW(p)
I may be misinterpreting, but I wonder whether Yvain's use of the word "emotivism" here is leading people astray. He doesn't seem to be committing himself to emotivism as a metaethical theory of what it means to say something is good, so much as an empirical claim about most people's moral psychology (that is, what's going on in their brains when they say things like "X is good"). The empirical claim and the normative commitment to utilitarianism don't seem incompatible. (And the empirical claim is one that seems to be backed up by recent work in moral psychology.)
comment by CronoDAS · 2009-04-17T07:30:45.735Z · LW(p) · GW(p)
Ask an anarchist. Taxation of X% means you're forced to work for X% of the year without getting paid. Therefore, since slavery is "being forced to work without pay" taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.4
Nitpick: Under most current systems of taxation, you choose how much to work, and then lose a certain percentage of your income to taxes. A slave does not have the power to choose how much (or whether) to work. This is generally considered a relevant difference between taxation and slavery.
↑ comment by Annoyance · 2009-04-17T17:04:04.429Z · LW(p) · GW(p)
See the draft. See also the varied attempts to mandate 'community service' or 'national service' for high school students.
One who is not a slave is not necessarily a free man.
↑ comment by CronoDAS · 2009-04-17T19:49:36.948Z · LW(p) · GW(p)
In general, "because that person is a minor" is one of the few remaining justifications for denying someone civil rights that people still consider valid. Try comparing the status of a 15-year-old in the United States today with that of a black man or white woman in the in the United States of 1790 and see if you come up with any interesting similarities.
↑ comment by Peter_Twieg · 2009-04-17T14:43:18.338Z · LW(p) · GW(p)
So if the slave were allowed to choose his own level of effort, he would no longer be a slave?
I think you have a point with what you're saying (and I'm predisposed against believing that the taxation/slavery analogy has meaning), but I don't think being a slave is incompatible with some autonomy.
↑ comment by CronoDAS · 2009-04-17T19:42:21.339Z · LW(p) · GW(p)
I think we'd better kill this discussion before it turns into an "is it a blegg or rube" debate. - the original anarchist's argument falls into at least one of the fallacies on that page, and I suspect my nitpick might do so as well.
comment by SirBacon · 2009-04-17T05:03:33.916Z · LW(p) · GW(p)
I would venture that emotivism can be a way of setting up short-run incentives for the achievement of sub-goals. If we think "Bayesian insights are good," we can derive some psychological satisfaction from things which, in themselves, do not have direct personal consequences.
By attaching "goodness" to things too far outside our feedback loops, like "ending hunger," we get things like counterproductive aid spending. By attaching "goodness" too strongly to subgoals close to individual feedback loops, like "publishing papers," we get a flood of inconsequential academic articles at the expense of general knowledge.
↑ comment by SoullessAutomaton · 2009-04-17T11:45:37.674Z · LW(p) · GW(p)
I would venture that emotivism can be a way of setting up short-run incentives for the achievement of sub-goals. If we think "Bayesian insights are good," we can derive some psychological satisfaction from things which, in themselves, do not have direct personal consequences.
This seems related to the tendency to gradually reify instrumental values as terminal values. e.g., "reading posts on Less Wrong helps me find better ways to accomplish my goals therefore is good" becomes "reading posts on Less Wrong is good, therefore it is a valid end goal in itself". Is that what you're getting at?
comment by pjeby · 2009-04-17T02:46:28.258Z · LW(p) · GW(p)
To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma.
It's not outrageous at all, actually. Affective asynchrony shows that we have independent ratings of goodness and badness, just like LW votes... but on the outside of the system, all that shows is the result of combining them.
That is, we can readily see how someone "votes" on different things in their environment, but not what inputs are being summed. And when we look at ourselves, we expect to find a single "score" on a thing.
The main difference is that in humans, upvotes and downvotes don't count the same, and a sufficiently high imbalance between the two can squelch the losing direction. On the other hand, a close match between the two results in "mixed feelings" and a low probability of acting, even if the idea really is a good one.
And good decision-making in humans requires (at minimum) examining the rationale behind any downvotes, and throwing out the irrational ones.
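A toy rendering of that two-channel picture (the weights and thresholds below are arbitrary assumptions, not empirical values): positive and negative affect are tallied separately, negative affect counts for more, a large enough imbalance silences the losing side, and a near-tie yields mixed feelings and inaction.

def evaluate(upvotes, downvotes, neg_weight=2.0, squelch_ratio=3.0):
    """Combine separate positive and negative tallies into the one 'vote' others see."""
    pos, neg = upvotes, downvotes * neg_weight
    if pos >= squelch_ratio * neg:
        return "act on it enthusiastically"
    if neg >= squelch_ratio * pos:
        return "avoid it"
    return "mixed feelings -- low probability of acting"

print(evaluate(upvotes=9, downvotes=1))   # imbalance squelches the downside
print(evaluate(upvotes=4, downvotes=2))   # near-tie: mixed feelings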
comment by A1987dM (army1987) · 2014-01-12T09:32:05.411Z · LW(p) · GW(p)
Emotivism also does a remarkably good job capturing the common meanings of the words "good" and "bad". An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad", "the book was good, but the movie was bad", "atheism is good, theism is bad", "evolution is good, creationism is bad", and "dogs are good, but cats are bad". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left".
They do feel like different meanings to me.
Granted, I'm not a native English speaker, but my native language also uses the same word for many (though not all) of those. (For books and movies we'd use the words for ‘beautiful’ and ‘ugly’ instead. And for weather we use ‘beautiful’ and ‘bad’.)
↑ comment by blacktrance · 2014-01-13T20:23:17.006Z · LW(p) · GW(p)
Seconded, and I'm not a native English speaker either, although in my case I think they feel different because of how much I talk about ethics.
comment by Relsqui · 2010-10-04T03:20:19.716Z · LW(p) · GW(p)
I remember coming to the sudden realization that I don't have to sort people into the "good" box or the "bad" box--people are going to have a set of traits and actions which I would divide into both, and therefore the whole person won't fit into either. I don't remember what triggered the epiphany, but I remember that it felt very liberating. I no longer had to be confused or frustrated when someone who usually annoyed me did something I approved of, or someone I liked made a choice I disagreed with.
So, you see, this idea already has a high karma score for me, so I'm upvoting it. ;)
comment by MrHen · 2009-04-17T19:15:33.996Z · LW(p) · GW(p)
An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad" [...] Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left".
To be annoying, "good" does have different uses. The opposite of moral "good" is "evil" the opposite of quality "good" is "poor" and the opposite of correctness "good" is "incorrect". These opposites can all use the word "bad" but they mean completely different things.
If I say murder is bad I mean murder is evil.
If I say that pizza is bad I mean that pizza is of poor quality.
If I say a result was bad I mean that the result was incorrect.
I can not remember if there is a word that splits the moral "good" and quality "good" apart.
This has nothing to do with the majority of your post or the points made, other than to say that "Boo, murder!" means something different than "Boo, that pizza!" Trying to lump them all together is certainly plausible, but I think the distinctions are useful. If they only happen to be useful in a framework built on emotivism, fair enough.
↑ comment by thomblake · 2009-04-18T19:29:47.555Z · LW(p) · GW(p)
I can not remember if there is a word that splits the moral "good" and quality "good" apart.
No, there isn't. Depending on context, you can use 'righteous' but it doesn't quite mean the same thing.
For what it's worth, some ethicists such as myself make no distinction between 'moral' good and 'quality' good - utilitarians (especially economists) basically don't either, most of the time. Sidgwick defines ethics as "the study of what one has most reason to do or want", and that can apply equally well to 'buying good vs. bad chairs' and 'making good vs bad decisions'
↑ comment by pangloss · 2009-04-21T05:55:50.256Z · LW(p) · GW(p)
This reminds me of a Peter Geach quote: "The moral philosophers known as Objectivists would admit all that I have said as regards the ordinary uses of the terms good and bad; but they allege that there is an essentially different, predicative use of the terms in such utterances as pleasure is good and preferring inclination to duty is bad, and that this use alone is of philosophical importance. The ordinary uses of good and bad are for Objectivists just a complex tangle of ambiguities. I read an article once by an Objectivist exposing these ambiguities and the baneful effects they have on philosophers not forewarned of them. One philosopher who was so misled was Aristotle; Aristotle, indeed, did not talk English, but by a remarkable coincidence ἀγαθός had ambiguities quite parallel to those of good. Such coincidences are, of course, possible; puns are sometimes translatable. But it is also possible that the uses of ἀγαθός and good run parallel because they express one and the same concept; that this is a philosophically important concept, in which Aristotle did well to be interested; and that the apparent dissolution of this concept into a mass of ambiguities results from trying to assimilate it to the concepts expressed by ordinary predicative adjectives."
↑ comment by PhilGoetz · 2009-04-17T19:31:19.818Z · LW(p) · GW(p)
To be annoying, but "good" does have different uses. The opposite of moral "good" is "evil" the opposite of quality "good" is "poor" and the opposite of correctness "good" is "incorrect". These opposites can all use the word "bad" but they mean completely different things.
He knows that. He's pointing out the flaws with that model.
↑ comment by MrHen · 2009-04-17T21:32:47.369Z · LW(p) · GW(p)
But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left". It feels like they're all just the same thing.
This is from his article. Speaking for myself, when I use the word "good" I use it in several different ways in much the same way I do when I use the word "right".
↑ comment by Relsqui · 2010-10-04T03:14:35.419Z · LW(p) · GW(p)
I think the point was that we do use the word in multiple ways, but those ways don't feel as different as the separate meanings of "right." The concepts are similar enough that people conflate them. If you never do this, that's awesome, but the post posits that many people do, and I agree with it.
comment by cousin_it · 2009-04-17T07:57:26.430Z · LW(p) · GW(p)
...that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash.
Uh, to a Bayesian this would be relevant evidence. Or do I misunderstand something?
↑ comment by PhilGoetz · 2009-04-17T17:41:46.630Z · LW(p) · GW(p)
A Hitler supporter acting violently is evidence against Hitler. But it takes a lot of them to reach significance.
↑ comment by MBlume · 2009-04-17T17:44:21.726Z · LW(p) · GW(p)
A single Hitler supporter acting violently isn't much evidence against Hitler. Thousands of apparently sane individuals committing horrors is pretty damning though.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-17T18:01:50.750Z · LW(p) · GW(p)
I haven't done the math, but I would have thought that a hundred incidents would be more than a hundred times as much evidence as one, because it says that it's not just the unsurprising lunatic fringe of your supporters who are up for violence.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-18T05:06:47.408Z · LW(p) · GW(p)
I don't think that's possible, unless the first incident makes it conditionally less likely that the second incident will occur unless Hitler is ungood.
Unless you mean, "the total information that a sum of one incident has occurred is less than a hundredth the evidence than the total information that a sum of a hundred incidents have occurred", in which case I agree, because in the former case you're also getting the information on all the people who didn't commit violent acts.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-18T11:04:58.067Z · LW(p) · GW(p)
unless the first incident makes it conditionally less likely that the second incident will occur unless Hitler is ungood.
That wasn't what I had in mind (and what I did have in mind is pretty straightforward to express and test mathematically, so I'll do that later today) but it's a possibility worth taking seriously: are you the sort of organisation that responds to reports of violence with a memo saying "don't go carving a backwards B on people"?
↑ comment by SoullessAutomaton · 2009-04-18T10:01:07.990Z · LW(p) · GW(p)
Assuming the prior probability of politically-motivated violent incidents to be greater than zero, X incidents where X/(number of supporters) is roughly equal to the incidence for the entire population offers very little evidence of anything, so X*100 is trivially more than a hundred times the evidence.
↑ comment by FlakAttack · 2009-04-19T06:32:59.948Z · LW(p) · GW(p)
I guess the question being asked here is whether those Hitler supporters acting so violently should affect your decision on whether to support Hitler or not. Rationally speaking, it should not, because his supporters and the man himself are two separate things, but the initial response will likely be to assign both things to the same category and have both be affected by the negative perception of the supporters.
I think if you use examples that are less confrontational or biased you can get the message across better. Hitler is usually not a useful subject for examples or comparisons.
↑ comment by mattnewport · 2009-04-17T08:04:51.523Z · LW(p) · GW(p)
To a Bayesian, all evidence is relevant. These two pieces of evidence would seem to have very low weights though. Do you think the weights would be significant?
↑ comment by cousin_it · 2009-04-17T10:03:26.574Z · LW(p) · GW(p)
If I were a McCain supporter, the rumor's turning out to be false would've carried significant weight for me. You?
↑ comment by SoullessAutomaton · 2009-04-17T12:02:07.627Z · LW(p) · GW(p)
Assigning significant weight to this event (on either side) is likely a combination of sensationalist national mass media and availability heuristic bias.
Uncoordinated behavior of individual followers reflects very weakly on organizations or their leaders. Without any indication of wider trends or mass behavior, the evidence would be weighted so little as to demand disregarding by a bounded rationalist.
↑ comment by cousin_it · 2009-04-17T16:10:03.192Z · LW(p) · GW(p)
Yes, I was being dumb. Sorry.
Edit: stop with the upvotes already!
↑ comment by SoullessAutomaton · 2009-04-17T17:52:04.365Z · LW(p) · GW(p)
Yes, I was being dumb. Sorry.
I see you've been upvoted anyways so I'm likely not the only one, but I want to personally thank you for this. People being more willing to admit that they made a mistake and carry on is an excellent feature of Less Wrong and extremely rare in most online communities.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2009-04-20T17:13:20.140Z · LW(p) · GW(p)
I disagree that it is extremely rare. I've seen a good number of apologies reading reddit, and I think it might be bad to upvote them because it could lead to the motives of any apologizer becoming suspect.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-18T05:04:26.628Z · LW(p) · GW(p)
Voted up because it asked not to be upvoted.
↑ comment by rabidchicken · 2010-08-17T06:49:08.553Z · LW(p) · GW(p)
Hey, that's my line.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-18T10:44:50.867Z · LW(p) · GW(p)
I'll probably get downvoted for this, but please don't upvote this comment.
EDIT: OK, looks like that wasn't as funny as I thought, lesson learned!
comment by komponisto · 2009-04-17T06:05:42.462Z · LW(p) · GW(p)
Excellent post. Upvoted! (Literally.)
↑ comment by Document · 2010-04-11T09:41:31.108Z · LW(p) · GW(p)
Are you generally not literal when you say "upvoted"?
↑ comment by komponisto · 2010-04-11T18:05:37.525Z · LW(p) · GW(p)
Um, did you miss the following paragraph (emphasis added)?:
To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.
And...the rest of the post? Upvoting/karma as a metaphor was the whole point! In such a context, it was perfectly sensible (and even, I daresay, slightly witty) of me to append "literally" to the above comment.
(Honestly, did I really need to explain this?)
↑ comment by Document · 2010-04-11T19:16:01.480Z · LW(p) · GW(p)
No and yes, respectively. In my defense, your comment is 64th in New order, so it's not like it was closely juxtaposed with that paragraph.
↑ comment by komponisto · 2010-04-11T20:44:41.883Z · LW(p) · GW(p)
That wasn't just some random paragraph; it was the whole freaking point of the post! It introduced a conceit that was continued throughout the whole rest of the article!
Before accusing me of hindsight bias (or the illusion of transparency, which is what I think you really meant), you might have noticed this reply, which should have put its parent into context immediately, or so I would have thought.
comment by mattnewport · 2009-04-17T07:54:36.144Z · LW(p) · GW(p)
Is the usual definition of utilitarianism taken to weight the outcomes for all people equally? While utilitarian arguments often lead to conclusions I agree with, I can't endorse a moral system that seems to say I should be indifferent to a choice between my sister being shot and a serial killer being shot. Is there a standard utilitarian position on such dilemmas?
↑ comment by gjm · 2009-04-17T09:04:29.610Z · LW(p) · GW(p)
I fear you may be thinking "serial killer: karma -937; my sister: karma +2764".
A utilitarian would say: consider what that person is likely to do in the future. The serial killer might murder dozens more people, or might get caught and rot in jail. Your sister will most likely do neither. And consider how other people will feel about the deaths. The serial killer is likely to have more enemies, fewer friends, fewer close friends. So the net utility change from shooting the serial killer is much less negative (or even more positive) than from shooting your sister, and you need not (should not) be indifferent between those.
In general, utilitarianism gets results that resemble those of intuitive morality, but it tends to get them indirectly. Or perhaps it would be better to say: Intuitive morality gets results that resemble those of utilitarianism, but it gets them via short-cuts and heuristics, so that things that tend to do badly in utilitarian terms feel like they're labelled "bad".
↑ comment by mattnewport · 2009-04-17T17:46:12.453Z · LW(p) · GW(p)
In a least convenient possible world, where the serial killer really enjoys killing people and only kills people who have no friends and family and won't be missed and are quite depressed, would it ever be conceivable that utilitarianism would imply indifference to the choice?
↑ comment by gjm · 2009-04-17T18:51:13.375Z · LW(p) · GW(p)
It's certainly possible in principle that it might end up that way. A utilitarian would say: Our moral intuitions are formed by our experience of "normal" situations; in situations as weirdly abnormal as you'd need to make utilitarianism favour saving the serial killer at the expense of an ordinary upright citizen, or to make slavery a good thing overall, or whatever, we shouldn't trust our intuition.
↑ comment by mattnewport · 2009-04-17T19:49:35.180Z · LW(p) · GW(p)
And this is the crux of my problem with utilitarianism I guess. I just don't see any good reason to prefer it over my intuition when the two are in conflict.
↑ comment by randallsquared · 2009-04-17T21:27:19.050Z · LW(p) · GW(p)
Even though your intuition might be wrong in outlying cases, it's still a better use of your resources not to think through every case, so I'd agree that using your intuition is better than using reasoned utilitarianism for most decisions for most people.
It's better to strictly adhere to an almost-right moral system than to spend significant resources on working out arbitrarily-close-to-right moral solutions, for sufficiently high values of "almost-right", in other words. In addition to the inherent efficiency benefit, this will make you more predictable to others, lowering your transaction costs in interactions with them.
↑ comment by mattnewport · 2009-04-17T21:35:53.384Z · LW(p) · GW(p)
My problem is a bit more fundamental than that. If the premise of utilitarianism is that it is morally/ethically right for me to provide equal weighting to all people's utility in my own utility function then I dispute the premise, not the procedure for working out the correct thing to do given the premise. The fact that utilitarianism can lead to moral/ethical decisions that conflict with my intuitions seems to me a reason to question the premises of utilitarianism rather than to question my intuitions.
↑ comment by Virge · 2009-04-18T04:30:05.301Z · LW(p) · GW(p)
Your intuitions will be biased to favoring a sibling over a stranger. Evolution has seen to that, i.e. kin selection.
Utilitarianism tries to maximize utility for all, regardless of relatedness. Even if you adjust the weightings for individuals based on likelihood of particular individuals having a greater impact on overall utility, you don't (in general) get weightings that will match your intuitions.
I think it is unreasonable to expect your moral intuitions to ever approximate utilitarianism (or vice versa) unless you are making moral decisions about people you don't know at all.
In reality, the money I spend on my two cats could be spent improving the happiness of many humans - humans that I don't know at all who are living a long way away from me. Clearly I don't apply utilitarianism to my moral decision to keep pets. I am still confused about how much I should let utilitarianism shift my emotionally-based lifestyle decisions.
↑ comment by Matt_Simpson · 2009-04-18T04:43:14.198Z · LW(p) · GW(p)
I think you are construing the term "utilitarianism" too narrowly. The only reason you should be a utilitarian is if you intrinsically value the utility functions of other people. However, you don't have to value the entire thing for the label to be appropriate. You still care about a large part of that murderer's utility function, I assume, as well as that of non-murderers. Not classical utilitarianism, but the term still seems appropriate.
↑ comment by mattnewport · 2009-04-18T05:26:07.782Z · LW(p) · GW(p)
Utilitarianism seems a fairly unuseful ethical system if the utility function is subjective, either because individuals get to pick and choose which parts of others' utility functions to respect or because individuals are allowed to choose subjective weights for others' utilities. It would seem to degenerate into an impractical-to-implement system for everybody just justifying what they feel like doing anyway.
Replies from: Matt_Simpson↑ comment by Matt_Simpson · 2009-04-18T05:43:55.099Z · LW(p) · GW(p)
Well, assuming you get to make up your own utility function, yes. However, I don't think this is the case. It seems more likely that we are born with utility functions, or rather with something we can construct a coherent utility function out of. Given the psychological unity of mankind, there is likely to be a lot of similarity in these utility functions across the species.
Replies from: mattnewport↑ comment by mattnewport · 2009-04-18T06:19:44.834Z · LW(p) · GW(p)
Didn't you just suggest that we don't have to value the entirety of a murderer's utility function? There are certainly similarities between individuals' utility functions but they are not identical. That still doesn't address the differential weighting issue either. It's fairly clear that most people do in fact put greater weight on the utility of their family and friends than on that of strangers. I believe that is perfectly ethical and moral but it conflicts with a conception of utilitarianism that requires equal weights for all humans. If weights are not equal then utility is not universal and so utilitarianism does not provide a unique 'right' answer in the face of any ethical dilemma and so seems to me to be of limited value.
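To make the weighting point concrete, here's a toy sketch (Python; all the weights and welfare numbers are invented for illustration, not drawn from any actual theory). With equal weights, every evaluator ranks the two outcomes the same way; once each evaluator applies their own agent-relative weights, the same pair of outcomes can come out ranked differently, so the system stops picking out a unique answer.

```python
# Toy sketch of the weighting point (all numbers are made up for illustration).
# Each evaluator aggregates the same individual welfare numbers, but with their
# own agent-relative weights; equal weights give everyone the same ranking.

def aggregate(utilities, weights):
    """Weighted sum of individual utilities for one outcome."""
    return sum(weight * u for weight, u in zip(weights, utilities))

# Two outcomes, utilities for (my_family, stranger_a, stranger_b):
outcome_1 = [9, 2, 2]   # good for my family, mediocre for the strangers
outcome_2 = [3, 6, 6]   # worse for my family, better for the strangers

equal_weights = [1, 1, 1]   # classical utilitarianism
my_weights    = [3, 1, 1]   # I weight my family more
your_weights  = [1, 1, 3]   # you weight stranger_b (your family) more

for name, w in [("equal", equal_weights), ("mine", my_weights), ("yours", your_weights)]:
    better = "outcome_1" if aggregate(outcome_1, w) > aggregate(outcome_2, w) else "outcome_2"
    print(f"{name:>5} weights prefer {better}")

# Equal weights give a single shared verdict (outcome_2); the agent-relative
# weightings disagree with each other, so no unique 'right' answer falls out.
```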
Replies from: Virge↑ comment by Virge · 2009-04-18T13:27:28.860Z · LW(p) · GW(p)
It's fairly clear that most people do in fact put greater weight on the utility of their family and friends than on that of strangers. I believe that is perfectly ethical and moral but it conflicts with a conception of utilitarianism that requires equal weights for all humans. If weights are not equal then utility is not universal and so utilitarianism does not provide a unique 'right' answer in the face of any ethical dilemma and so seems to me to be of limited value.
If you choose to reject any system that doesn't provide a "unique 'right' answer" then you're going to reject every system so far devised. Have you read Greene's The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it?
However, I agree with you that any form of utilitarianism that has to have different weights when applied by different people is highly problematic. So we're left with:
Pure selfless utilitarianism conflicts with our natural intuitions about morality when our friends and relatives are involved.
Untrained intuitive morality results in favoring humans unequally based on relationships and will appear unfair from a 3rd party viewpoint.
You can train yourself to some extent to find a utilitarian position more intuitive. If you work with just about any consistent system for long enough, it'll start to feel more natural. I doubt that anyone who has any social or familial connections can be a perfect utilitarian all the time: there are always times when family or friends take priority over the rest of the world.
Replies from: mattnewport, ciphergoth↑ comment by mattnewport · 2009-04-18T18:45:24.825Z · LW(p) · GW(p)
If you choose to reject any system that doesn't provide a "unique 'right' answer" then you're going to reject every system so far devised.
It seems to me that utilitarianism is trying to answer the wrong question. I don't think there's anything inherently wrong with individuals simply trying their best to satisfy their own unique utility functions (which generally include some concern for the utility functions of others, but not equal concern for all others). I see morality and ethics not, for the most part, as theoretical questions about what is 'right', but as empirical questions about which moral and ethical decision processes produce an evolutionarily stable strategy for co-existing with other agents with different goals.
On my view of morality it's accepted that different agents will have different utilities for different outcomes and that there is not in general one outcome which all agents will agree is optimal. Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal. It is not a problem of achieving an outcome that all agents can agree is optimal. For humans, biological and cultural evolution have equipped us with a set of rules and heuristics for the resolution of conflicts of interest that have worked well enough to get us to where we are today. My interest in morality/ethics is in improving the process, not in some mythical quest for what is 'right'.
Have you read Greene's The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it?
I haven't, but I've seen it mentioned before so I should check it out at some point. To be honest the title put me off when I first saw it linked because it makes it sound like it's aimed at someone who still holds the naive view of morality that it's about doing what is 'right'.
Replies from: Virge↑ comment by Virge · 2009-04-19T03:07:34.586Z · LW(p) · GW(p)
Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal.
I think we're in agreement here.
For me the difficult questions arise when we try to take one universalizable moral principle and try to apply it at every level of organization, from the personal "what should I be doing with my time and energy at this moment?" to the public "what should person A be permitted/obliged to do?"
I was thinking about raising the question of utilitarianism being difficult to institute as an ESS when writing my previous post. To a certain extent, we (in democratic cultures with an independent judiciary) train our intuitions to accept the idea of fairness as we grow up. Our parents, kindergarten and school teachers do their best to instill certain values. The fact that racism and sexism can become entrenched during formative years suggests to me that the equality and fairness principles I've grown up with can also be trained. We share a psychological architecture, but there is enough flexibility that we can train our moral intuitions (to some extent).
Utilitarianism is in principle universalizable, but is it practically universalizable at all decision levels? What training (or brainwashing) and threats of defector punishment would we need to implement to completely override our natural nepotism? To me this seems like an impractical goal.
I've been somewhat confused by the idea of anyone wanting to make all their decisions on utilitarian principles (even at the expense of familial obligations), so I wondered if I've been erecting an extreme utilitarian strawman. I think I have, and I'm seeing a glimmer of a solution to the confusion.
Given that we all have relationships we value, and that forcing ourselves to ignore those relationships in our daily activities represents negative utility, we cannot maximize utility with a moral system that requires everyone to treat everyone else as equal at all times and in all decisions. Any genuine utilitarian calculation must account for everyone's emotional satisfaction from relationship activities.
(I feel less confused now. I'll have to think about this some more.)
↑ comment by Paul Crowley (ciphergoth) · 2009-04-18T18:04:35.505Z · LW(p) · GW(p)
Have you read Greene's The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it?
I have skimmed it and will return to it ASAP. Thank you very much for recommending it!
↑ comment by Kingreaper · 2010-11-26T16:25:04.450Z · LW(p) · GW(p)
Yes. But if the "serial killer" is actually someone who enjoys helping others who want to commit suicide (and who won't harm anyone when they do), are they really a bad person at all?
Is shooting them really better than shooting a random person?
↑ comment by SoullessAutomaton · 2009-04-17T17:49:17.655Z · LW(p) · GW(p)
In a least convenient possible world, where the serial killer really enjoys killing people and only kills people who have no friends and family and won't be missed and are quite depressed, would it ever be conceivable that utilitarianism would imply indifference to the choice?
Also, would the verdict on this question change if the people he killed had attempted but failed at suicide, or wanted to suicide but lacked the willpower to?
↑ comment by Kaj_Sotala · 2009-04-17T10:46:37.457Z · LW(p) · GW(p)
There isn't a standard utilitarian position on such dilemmas, because there is no such thing as standard utilitarianism. Utilitarianism is a meta-ethical system, not an ethical system. It specifies the general framework by which you think about morality, but not the details.
There are plenty of variations of utilitarianism - negative or positive utilitarianism, average or total utilitarianism, and so on. And there is nothing to prevent you from specifying that, in your utility function, your family members are treated preferentially to everybody else.
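To illustrate that these variations share one framework and differ mainly in the aggregation rule, here's a minimal sketch (Python; the welfare numbers, the doubled family weight, and the crude rendering of negative utilitarianism as "count only disutility" are all illustrative assumptions, not anyone's canonical formulation):

```python
# Minimal sketch (made-up numbers): the variations differ mainly in the
# aggregation rule applied to the same individual welfare levels.

utilities = [5, 5, -2, 8]   # welfare of four people under some outcome
family = {3}                # index of the person I choose to weight preferentially

total     = sum(utilities)                         # total utilitarianism: 16
average   = sum(utilities) / len(utilities)        # average utilitarianism: 4.0
suffering = -sum(min(u, 0) for u in utilities)     # a crude negative-utilitarian score
                                                   # (only disutility counts; minimize it): 2
weighted  = sum((2 if i in family else 1) * u      # preferential weighting of family: 24
                for i, u in enumerate(utilities))

print(total, average, suffering, weighted)
```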
Replies from: steven0461↑ comment by steven0461 · 2009-04-17T15:32:51.143Z · LW(p) · GW(p)
Utilitarianism is an incompletely specified ethical (not meta-ethical) system, but part of what it does specify is that everyone gets equal weight. If you're treating your family members preferentially, you may be maximizing your utility, but you're not following "utilitarianism" in that word's standard meaning.
Replies from: ciphergoth, AndySimpson↑ comment by Paul Crowley (ciphergoth) · 2009-04-17T15:48:09.812Z · LW(p) · GW(p)
Replies from: conchis, MBlume
The SEP:
[...] classic utilitarianism is actually a complex combination of many distinct claims, including the following claims about the moral rightness of acts:
[...] Equal Consideration = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (= all who count count equally).
↑ comment by conchis · 2009-04-17T16:16:16.230Z · LW(p) · GW(p)
I'd put a slight gloss on this.
The problem is that "utilitarianism", as used in much of the literature, does seem to have more than one standard meaning. In the narrow (classical) utilitarian sense, steven0461 and the SEP are absolutely right to insist that it imposes equal weights. However, there's definitely a literature that uses the term in a more general sense, which includes weighted utilitarianism as a possibility. Contra Kaj, however, even this sense does seem to exclude agent-relative weights.
As much of this literature is in economics, perhaps it's non-standard in philosophy. It does, however, have a fairly long pedigree.
Replies from: steven0461, Kaj_Sotala↑ comment by steven0461 · 2009-04-17T21:14:27.686Z · LW(p) · GW(p)
I was actually uneasy about making the comment because I had a vague recollection that that might be true, but I'm not sure a definition under which "maximize Kim Jong-Il's welfare" counts as a form of utilitarianism is a good definition.
↑ comment by Kaj_Sotala · 2009-04-17T20:36:29.967Z · LW(p) · GW(p)
Contra Kaj, however, even this sense does seem to exclude agent-relative weights.
Utilitarianism that includes animals vs. utilitarianism that doesn't include animals. If some people can give more / less weight to a somewhat arbitrarily defined group of subjects (animals), it doesn't seem much of a stretch to also allow some people to weight another arbitrarily chosen group (family members) more (or less).
Classical utilitarianism is more strictly defined, but as you point out, we're not talking about just classical utilitarianism here.
Replies from: conchis↑ comment by conchis · 2009-04-17T21:09:32.570Z · LW(p) · GW(p)
I don't think that's a very good example of agent-relativity. Those who would argue that only humans matter seldom (if ever) do so on the basis of agent-relative concerns: it's not that I am supposed to have a special obligation to humans because I'm human; it's that only humans are supposed to matter at all.
In any event, the point wasn't that agent-relative weights don't make sense; it's that they're not part of a standard definition of utilitarianism, even in a broad sense. I still think that's an accurate characterization of professional usage, but if you have specific examples to the contrary, I'd be open to changing my mind.
Gratuitous nitpick: humans are animals too.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2009-04-18T07:46:05.066Z · LW(p) · GW(p)
You may be right. But we're inching pretty close towards arguing by definition now. So to avoid that, let me rephrase my original response to mattnewport's question:
You're right, by most interpretations utilitarianism does weigh everybody equally. However, if that's the only thing in utilitarianism that you disagree with, and like the ethical system otherwise, then go ahead and adopt as your moral system a utilitarianism-derived one that differs from normal utilitarianism only in that you weight your family more than others. It may not be utilitarianism, but why should you care about what your moral system is called?
Replies from: conchis↑ comment by conchis · 2009-04-18T14:37:30.292Z · LW(p) · GW(p)
I completely agree with your reframing.
I (mistakenly) thought your original point was a definitional one, and that we had been discussing definitions the entire time. Apologies.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2009-04-19T19:32:22.710Z · LW(p) · GW(p)
No problem. It happens.
↑ comment by MBlume · 2009-04-17T16:01:49.296Z · LW(p) · GW(p)
The SEP
For just a moment I was thinking "How is the Somebody Else's Problem field involved?"
↑ comment by AndySimpson · 2009-04-17T15:48:25.241Z · LW(p) · GW(p)
In utilitarianism, sometimes some animals can be more equal than others. It's just that their lives must be of greater utility for some reason. I think sentimental distinctions between people would be rejected by most utilitarians as a reason to consider them more important.
↑ comment by Peter_Twieg · 2009-04-17T14:40:47.290Z · LW(p) · GW(p)
Utilitarianism doesn't describe how you should feel, it simply describes "the good". It's very possible that accepting utilitarianism's implications is so abhorrent to you that the world would be a worse place because you do it (because you're unhappy, or because embracing utilitarianism might actually make you worse at promoting utility); if so, then by all means don't endorse it, at least not at whatever level you find repugnant. This is what Derek Parfit labels a "self-effacing" philosophy, I believe.
There are a variety of approaches to actually being a practicing utilitarian, however. Obviously we don't have the computational power required to properly deduce every future consequence of our actions, so at a practical level utilitarians will always support heuristics of some sort. One of these heuristics may dictate that you should always prefer serial killers to be shot over your sister for the kinds of reasons that gjm describes. This might not always lead to the right conclusion from a utilitarian perspective, but it probably wouldn't be a blameworthy one, as you did the best you could under incomplete information about the universe.
comment by Annoyance · 2009-04-17T16:59:26.825Z · LW(p) · GW(p)
"To a Bayesian, this would be balderdash."
Um, not the 'Bayesians' here. There is a distinct failure to acknowledge that not everything is evidence regarding everything else.
If the people here wished to include the behavior of a political candidate's supporter in their evaluation of the candidate, they'd make excuses for doing so. If they wished to exclude it, they would likely pass over it in silence - or, if it were brought up, actively denigrate the idea.
Judging what is and is not evidence is an important task that has been completely ignored here.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-04-17T17:19:02.364Z · LW(p) · GW(p)
Judging what is and is not evidence is an important task that has been completely ignored here.
In the most literal, unbounded application of Bayesian induction, anything within the past light cone of what is being considered counts as "evidence". Clearly, an immense majority of it is all but completely independent of most propositions, but it is still evidence, however slight.
Having cleared up that everything is evidence, determining the weight to give any particular piece of evidence is left as an exercise for the reader.
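A toy illustration of that last sentence (all numbers invented): the weight of a piece of evidence is captured by its likelihood ratio, and a likelihood ratio near 1 leaves the posterior almost exactly where the prior was.

```python
# Toy illustration (numbers invented): everything in the light cone is "evidence",
# but evidence whose likelihood ratio is close to 1 barely moves the posterior.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) for a binary hypothesis, by Bayes' rule."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.5

# Evidence almost equally likely either way: likelihood ratio ~ 1.1
weak = update(prior, 0.0010, 0.0009)

# Evidence ten times likelier if the hypothesis is true: likelihood ratio = 10
strong = update(prior, 0.0100, 0.0010)

print(round(weak, 3), round(strong, 3))   # 0.526 vs 0.909
```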
comment by mwengler · 2013-03-12T15:15:01.781Z · LW(p) · GW(p)
It seems to me that ANY moral theory is, at its root, emotive. A utilitarian in the form of "do utile things!" decides that maximizing utility feels good, and so is moral. In other words, the argument for the basic axiom of utilitarianism is "Yay utility!"
A non-emotive utilitarianism, or any consequentialist theory, could never go beyond "A implies B." That is, if people do A, the result they will get is B. Without "Yay B!" this is not an argument for doing A.
Am I missing something?
Replies from: Leonhart, whowhowho↑ comment by Leonhart · 2013-03-12T16:46:12.971Z · LW(p) · GW(p)
If I am moved by a should-argument to an x-ism, then "Yay x-ism!" is what being moved by that argument feels like, not an additional part of the argument.
Otherwise, aren't you the tortoise demanding "Yay (Yay X!)!", "Yay (Yay (Yay X!)!)!" and so on?
↑ comment by whowhowho · 2013-03-12T15:31:16.452Z · LW(p) · GW(p)
You seem to be assuming, without argument, that emotion is the only motivation for doing anything.
Replies from: incogn, mwengler↑ comment by incogn · 2013-03-12T16:11:15.408Z · LW(p) · GW(p)
I tend to agree with mwengler - value is not a property of physical objects or world states, but a property of an observer having unequal preferences for different possible futures.
There is a risk we might be disagreeing because we are working with different interpretations of emotion.
Imagine a work of fiction involving no sentient beings, not even metaphorically - can you possibly write a happy or tragic ending? Is it not first when you introduce some form of intelligence with preferences that destruction becomes bad and serenity good? And are not preferences for this over that the same as emotion?
↑ comment by mwengler · 2013-03-12T15:44:35.650Z · LW(p) · GW(p)
You are right, the only reason I can think of for doing anything is because I feel like it, because I want to, which is emotional. In some more detail, I think this includes doing things to avoid things I am afraid of or that I find painful, which is also emotional. Certainly pleasure seeking is emotional. I attribute playing sudoku to my feeling of pleasure at having my mind occupied.
If you come up with something like a Kantian categorical imperative, I will tell you I don't follow categorical imperatives because I don't feel like it, and nothing in the real world of "is" seems to break when I act that way. And it does suggest to me that those who do follow a categorical imperative do it because they feel like it, the feeling of logical consistency or superiority appeals to them.
Please let me know what OTHER reasons, non-emotional reasons, there are to do something.
Replies from: whowhowho↑ comment by whowhowho · 2013-03-12T16:50:50.599Z · LW(p) · GW(p)
There's no logical reason why any given entity, human or otherwise, would have to be motivated by emotion. You may be overgeneralising from the single example of yourself. Also, you would have to believe that highly logical, Vulcan-like people are motivated by some emotion they don't show.
Replies from: Leonhart↑ comment by Leonhart · 2013-03-12T17:01:13.029Z · LW(p) · GW(p)
There's no logical reason why any given entity, human or otherwise, would have to be motivated by emotion.
There's a trivial "logical" reason why this could be the case - tautology - if the person you are talking to defines "emotion" as "those mental states which directly motivate behaviour". Which seems like a perfectly good starting place to me.
In other words, this conversation will likely go nowhere until you taboo "emotion" so we can know what work that word does for you.
Replies from: whowhowho
comment by Boyi · 2011-12-06T14:18:24.068Z · LW(p) · GW(p)
Hi, I really enjoyed your essay. I also enjoyed the first half of the comments. The question it brought me to was whether or not there is any higher utility than transformation. I was wondering if I could hear your opinion on this matter.
It seems to me that if transformation of external reality is the primary assessment of utility, then humans should rationally question their emotivism based on practical solutions. But what if the ability to transform external reality were not the primary assessment of utility? Recently I have been immersed in Confucian thinking, which places harmony at the pinnacle of importance. If you do not mind, I would like to share some thoughts from this perspective.
When faced with a problem, it seems that as humans our initial solution is to increase the complexity of our interaction with that aspect of the external world, through expanding the scale, organization, and detail of our involvement with that portion of reality, in hopes of transforming that reality to our will. Is this logical? Yes, we have clearly demonstrated a potential to transform reality, but have any of our transformations justified the rationale that transformation will eventually lead to a utopian plateau? Or to put it another way, does the transformation of one good/bad scenario ever completely deplete the necessity for further transformation? If anything, it seems that our greatest achievements of transformation have only created an even more dire need for transformation. The creation of nuclear power/weapons was supposed to end war and provide universal energy; now we are faced with the threat of nuclear waste and global annihilation. Genetically engineering food was supposed to feed the world; in America we have created an obesity epidemic, and the modern agricultural practices of the world walk a fine line between explosive yield and ecological destruction.
I was somewhat hesitant to say it because of a perceived emotivism of this blog, but what I am questioning is the discourse of progress. Transformation is progress. You say:
"In general, any debate about whether something is "good" or "bad" is sketchy, and can be changed to a more useful form by converting the thing to an action and applying utilitarianism." But is that not soley based on a emotive value of progress?
From the harmonizing perspective, emotivism in itself contains utility, because it is in our common irrationality that humans can truly relate. If we did institutionally precede arbitrary value with a logic of transformational utility, would this not marginalize a huge portion of humanity that is not properly equipped to rationalize action in such a way? It legitimizes intellectual dominance. In my opinion this is no different than if we were to say that whoever wins an official arm wrestle or foot race has the correct values. That may seem completely absurd to you, but I would argue only because you are intellectually rather than physically dominant.
It should be noted that my argument is based on the premise that there are graduated levels of intelligence, and that the level required to rationalize one potential transformation over another is sequestered from the lower tiers.
I also write under the assumption that the discourse of progress (I think I called it the utility of transformation?) is emotive, not rational in the sense of being clearly the most effective cognitive paradigm for human evolution. Before my words come back to bite me, my concepts of "progress" and "evolution" are very different here. Progress is power to transform external reality (niche construction); evolution is transformation of the human structure (I will not comment on whether such organic transformation is orthogenic or not).