Wanting to Want

post by Alicorn · 2009-05-16T03:08:10.257Z · LW · GW · Legacy · 199 comments

In response to a request, I am going to do some basic unpacking of second-order desire, or "metawanting".  Basically, a second-order desire or metawant is a desire about a first-order desire.

Example 1: Suppose I am very sleepy, but I want to be alert.  My desire to be alert is first-order.  Suppose also that there is a can of Mountain Dew handy.  I know that Mountain Dew contains caffeine and that caffeine will make me alert.  However, I also know that I hate Mountain Dew¹.  I do not want the Mountain Dew, because I know it is gross.  But it would be very convenient for me if I liked Mountain Dew: then I could drink it, and I could get the useful effects of the caffeine, and satisfy my desire for alertness.  So I have the following instrumental belief: wanting to drink that can of Mountain Dew would let me be alert.  Generally, barring other considerations, I want things that would get me other things I want - I want a job because I want money, I want money because I can use it to buy chocolate, I want chocolate because I can use it to produce pleasant taste sensations, and I just plain want pleasant taste sensations.  So, because alertness is something I want, and wanting Mountain Dew would let me get it, I want to want the Mountain Dew.

This example demonstrates a case of a second-order desire about a first-order desire that would be instrumentally useful.  But it's also possible to have second-order desires about first-order desires that one simply does or doesn't care to have.

Example 2: Suppose Mimi the Heroin Addict, living up to her unfortunate name, is a heroin addict.  Obviously, as a heroin addict, she spends a lot of her time wanting heroin.  But this desire is upsetting to her.  She wants not to want heroin, and may take actions to stop herself from wanting heroin, such as going through rehab.

One thing that is often said is that the first-order desires you "endorse" on the second level are the ones that reflect your truest self.  This seems like an appealing notion in Mimi's case; I would not want to say that at her heart she just wants heroin and that's an intrinsic, important part of her.  But it's not always the case that the second-order desire is the one we most want to identify with the person who has it:

Example 3: Suppose Larry the Closet Homosexual, goodness only knows why his mother would name him that, is a closet homosexual.  He has been brought up to believe that homosexuality is gross and wrong.  As such, his first-order desire to exchange sexual favors with his friend Ted the Next-Door Neighbor is repulsive to him when he notices it, and he wants desperately not to have this desire.

In this case, I think we're tempted to say that poor Larry is a gay guy who's had an alien second-order desire attached to him via his upbringing, not a natural homophobe whose first-order desires are insidiously eroding his real personality.

A less depressing example to round out the set:

Example 4: Suppose Olivia the Overcoming Bias Reader, whose very prescient mother predicted she would visit this site, is convinced by Eliezer's arguments about one-boxing in Newcomb's Problem.  However, she's pretty sure that if Omega really turned up, boxes in hand, she would want to take both of them.  She thinks this reflects an irrationality of hers.  She wants to want to one-box.

 

¹Carbonated beverages make my mouth hurt.  I have developed a more generalized aversion to them after repeatedly trying to develop a taste for them and experiencing pain every time.


comment by RobinHanson · 2009-05-16T13:00:40.777Z · LW(p) · GW(p)

I suspect most cases of "wanting to want" are better described as cases of internal conflict, where one part of us wishes that there weren't other parts of us with different conflicting wants.

Replies from: SoullessAutomaton, MichaelVassar, Jack, freyley, JamesAndrix, steven0461
comment by SoullessAutomaton · 2009-05-16T13:12:55.311Z · LW(p) · GW(p)

Particularly where one part is responsible for the "internal narrative" and the other is responsible for motivation and prioritization, because the latter usually wins out and the former complains loudest.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2009-05-16T15:29:58.089Z · LW(p) · GW(p)

Furthermore, the internal narrative has been carefully honed so that it can be disingenuous for signaling purposes.

Replies from: MichaelHoward
comment by MichaelHoward · 2009-05-17T15:23:18.948Z · LW(p) · GW(p)

More to the point, the internal narrative part largely doesn't need to be disingenuous for signaling purposes, because it's kept in the dark about what the motivation and prioritization part is really up to.

comment by MichaelVassar · 2009-05-18T11:05:45.709Z · LW(p) · GW(p)

Agreed, but the parts in conflict may be of vastly different reflectivity. Some relevant parts may not have anything analogous to awareness of some other parts.

comment by Jack · 2009-05-16T22:05:06.130Z · LW(p) · GW(p)

Just so everyone is clear:

That is one way of describing cases where second-order desires conflict with first-order desires, perhaps. But one can want to want X and also want X - it's just that Alicorn used only examples where the two conflict (and probably the distinction is best illustrated by looking at the conflicts). But right now I have both a first-order desire not to use heroin and a second-order desire not to want to use heroin. In fact, the vast majority of our desires are probably like this. So most cases of "wanting to want" are not cases of internal conflict; perhaps these cases can be described as instances of internal consistency.

Replies from: stcredzero
comment by stcredzero · 2009-05-18T15:17:56.396Z · LW(p) · GW(p)

In any case, I think Occam's Razor demands that we reject the notion of generalized second-order desires. We can leave that concept out entirely and explain everything as conflicts between first order desires and a generalized desire for consistency and/or resolution. Note that in all the examples, there are conflicting goals. In (1) it's the desire to stay awake vs. avoiding noxious stimuli. In (2) it's the desire to stay alive vs. cop another high. In (3) it's the desire to live up to his upbringing vs. follow his sexual urges.

I'm not even sure that a generalized desire for consistency and/or resolution would be a second-order desire. I think that the feeling of conflict over not being able to decide which speaker to buy is a lot like resolving conflicts between incompatible desires. The only difference is that choosing to buy a speaker is usually morally neutral, but there is societal pressure to choose one option as the "right" one in 2 and 3, and an imperative to preserve one's life in 1 and 2, so we are steered towards a particular outcome. It may well be only a trick of language that prompts us to say "I want to want X." I suspect that we could also say, "I want X and I want Y. I cannot have both, and I know I'm supposed to want X. I wish I wasn't conflicted." But that is much longer than, "I want to want X."

comment by freyley · 2009-05-16T21:06:20.310Z · LW(p) · GW(p)

"Better" in what way?

Do you mean better in that you think it's a more accurate view of the inside of your head?

Or better in that it's a more helpful metaphorical view of the situation that can be used to overcome the difficulties described?

I think the view of it as a conflict between different algorithms is useful, and it's the one that I start with, but I wonder whether different views of this problem might be helpful in developing more methods for overcoming it.

comment by JamesAndrix · 2009-05-17T20:36:52.459Z · LW(p) · GW(p)

The thing I'm getting from all this is: Any time you have two desires that turn out in the environment to be contradictory, you could also have a desire to change one of them (the 'lesser' one?)

But we don't always get this desire to want something different. I'm wondering if we always should, never should, or if there is some clear rule.

comment by steven0461 · 2009-05-16T18:47:48.135Z · LW(p) · GW(p)

Seconded; more specifically, it seems to me that if one does not want something but one wants to want it, then one of the following must be the case:

  • there's one entity doing the not wanting, and some other entity that wants the one entity to stop doing the not wanting
  • one values wanting the thing for the sake of wanting the thing or for the sake of some result of wanting the thing other than getting the thing, not for the sake of getting the thing, and moreover one values wanting the thing so much that this outweighs the extra likelihood of getting the thing
  • one does not want to want the thing unconditionally, but one does want the thing to turn into a different thing that one does want (e.g. Alicorn's Mountain Dew example; there the thing one wants is to enjoy and want to drink Mountain Dew, which is not the same thing as wanting to want to drink Mountain Dew even though one does not enjoy it)
  • "wanting" here is some more informal human thing that isn't captured by rational decision theory (which is bad!)

Are those all the possibilities?

comment by PhilGoetz · 2009-05-17T01:34:17.794Z · LW(p) · GW(p)

Example 2: Suppose Mimi the Heroin Addict, living up to her unfortunate name, is a heroin addict. Obviously, as a heroin addict, she spends a lot of her time wanting heroin. But this desire is upsetting to her. She wants not to want heroin, and may take actions to stop herself from wanting heroin, such as going through rehab.

Example 3: Suppose Larry the Closet Homosexual, goodness only knows why his mother would name him that, is a closet homosexual. He has been brought up to believe that homosexuality is gross and wrong. As such, his first-order desire to exchange sexual favors with his friend Ted the Next-Door Neighbor is repulsive to him when he notices it, and he wants desperately not to have this desire.

I'm really bothered by my inability to see how to distinguish between these two classes of meta-wants. I suppose you just punt it off to your moral system, or your expected-value computations.

Replies from: Ghatanathoah, orthonormal, MichaelBishop, Cyan, Alicorn
comment by Ghatanathoah · 2012-10-24T03:25:56.402Z · LW(p) · GW(p)

Looking at it, I think that the difference is that Larry the Closet Homosexual probably doesn't really have a second order desire to not be gay. What he has is a second order desire to Do the Right Thing, and mistakenly believes that homosexuality isn't the Right Thing. So we naturally empathize with Larry, because his conflict between his first and second order desires is unnecessary. If he knew that homosexuality wasn't wrong the conflict would disappear, not because his desires had changed, but because he had better knowledge about how to achieve them.

Mimi the Heroin Addict, by contrast, probably doesn't want to want heroin because it obstructs her from obtaining other important life goals that she genuinely wants and approves of. If we were to invent some sort of Heroin 2.0 that lacked most of heroin's negative properties (e.g. destroying one's motivation to achieve life goals, causing health problems), Mimi would probably be much less upset about wanting it.

Replies from: Vaniver
comment by Vaniver · 2012-10-24T03:28:45.500Z · LW(p) · GW(p)

and mistakenly believes that homosexuality isn't the Right Thing

What reasoning process did you use to determine his belief was mistaken? When and where does Larry live? What are his other terminal goals?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-10-24T04:51:16.625Z · LW(p) · GW(p)

In the interests of avoiding introducing complications into the thought experiment, I assumed that Larry was, aside from his sexual orientation, a fairly psychologically normal human who had normal human terminal goals, like an interest in sex and romantic love. I also assumed, again to avoid complications (and from clues in the story) that he probably lived, like most Less Wrong readers and writers, in a First World liberal democracy in the early 21st century.

The reasoning process I used to determine his belief was mistaken was a consequentialist meta-ethic that produces the results "Consensual sex and romance are Good Things unless they seriously interfere with some other really important goal." I assumed that Larry, being a psychologically normal human in a tolerant country, did not have any other important goals they interfered with. He probably either mistakenly believed that a supernatural creature of immense power existed and would be offended by his homosexuality, or mistakenly believed in some logically incoherent deontological set of rules that held that desires for consensual sex and romance somehow stop being Good Things if the object of those desires is of the same sex as the desirer.

Obviously if Larry lived in some intolerant hellhole of a country or time period it might be well to change his orientation to be bisexual or heterosexual so that he could satisfy his terminal goals of Sex and Romance without jeopardizing his terminal goals of Not Being Tortured and Killed. But that would be a second-best solution; the ideal solution would be to convince his fellows that their intolerance was unethical.

Replies from: Vaniver
comment by Vaniver · 2012-10-24T16:40:23.927Z · LW(p) · GW(p)

PhilGoetz wrote:

I suppose you just punt it off to your moral system, or your expected-value computations.

I am having trouble seeing a significant difference between that and what you've described. Mimi's enabler could argue "human happiness is a Good Thing unless it seriously interferes with some other really important goal," and then one would have to make the engineering judgment of whether heroin addiction and homosexuality fall on opposite sides of the "serious interference" line. Similarly, the illegality of heroin and the illegality of homosexuality seem comparable; perhaps Mimi should convince her fellows that their intolerance of her behavior is unethical.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-10-25T07:30:00.461Z · LW(p) · GW(p)

Let me try using an extended metaphor to explain my point: Remember Eliezer's essay on the Pebblesorters, the aliens obsessed with sorting pebbles into prime-numbered heaps?

Let's imagine a race of Pebblesorters whose p-morality consists of sorting pebbles into prime-numbered heaps. All Pebblesorters have a second-order desire to sort pebbles into prime-numbered heaps, and ensure that others do so as well. In addition to this, individual Pebblesorters have first order desires that make them favor certain prime numbers more than others when they are sorting.

Now let's suppose there is a population of Pebblesorters who usually favor pebble heaps consisting of 13 pebbles but occasionally a mutant is born that likes to make 11-pebble heaps best of all. However, some of the Pebblesorters who prefer 13-pebble heaps have somehow come to the erroneous conclusion that 11 isn't a prime number. Something, perhaps some weird Pebblesorter versions of pride and self-deception, makes them refuse to admit their error.

The 13-Pebble Favorers become obsessed with making sure no Pebblesorters make heaps of 11 pebbles, since 11 obviously isn't a prime number. They begin to persecute 11-Pebble Favorers and imprison or kill them. They declare that Sortulon Prime, the mighty Pebblesorter God that sorts stars into gigantic prime-numbered constellations in the sky, is horribly offended that some Pebblesorters favor 11 pebble piles and will banish any 11-Pebble Favorers to P-Hell, where they will be forced to sort pebbles into heaps of 8 and 9 for all eternity.

Now let's take a look at an individual Pebblesorter named Larry the Closet 11-Pebble Favorer. He was raised by devout 13-Pebble Favorer parents and brought up to believe that 11 isn't a prime number. He has a second order desire to sort pebbles into prime-numbered heaps, and a first order desire to favor 11-pebble heaps. Larry is stricken by guilt that he wants to make 11-pebble heaps. He knows that 11 isn't a prime number, but still feels a strong first order desire to sort pebbles into heaps of 11. He wishes he didn't have that first order desire, since it obviously conflicts with his second order desire to sort pebbles into prime numbered heaps.

Except, of course, Larry is wrong. 11 is a prime number. His first and second order desires are not in conflict. He just mistakenly thinks they are because his parents raised him to think 11 wasn't a prime number.

Now let's make the metaphor explicit. Sorting pebbles into prime-numbered heaps represents Doing the Right Thing. Favoring 13-pebble heaps represents heterosexuality, favoring 11-pebble heaps represents homosexuality. Heterosexual sex and love and homosexual sex and love are both examples of The Right Thing. The people who think homosexuality is immoral are objectively mistaken about what is and isn't moral, in the same way the 13-Pebble Favorers are objectively mistaken about the primality of the number 11.

So the first and second order desires of Larry the Closet Homosexual and Larry the Closet 11-Pebble Favorer aren't really in conflict. They just think they are because their parents convinced them to believe in falsehoods.
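(What gives the metaphor its force is that primality is an objectively checkable fact, so the disagreement between the 11-Pebble and 13-Pebble Favorers has a right answer either side can verify. The tiny sketch below is my own addition, not part of the original comment; the is_prime helper is an assumed name.)

```python
# Whether a heap size is prime is an objective fact either faction can check.
# This helper is an illustrative sketch, not part of the original discussion.

def is_prime(n):
    """Return True if n is a prime number."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

print([(n, is_prime(n)) for n in (9, 11, 13)])
# [(9, False), (11, True), (13, True)] -> 11-heaps are exactly as "correct" as 13-heaps.
```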

I am having trouble seeing a significant difference between that and what you've described. Mimi's enabler could argue "human happiness is a Good Thing unless it seriously interferes with some other really important goal," and then one would have to make the engineering judgment of whether heroin addiction and homosexuality fall on opposite sides of the "serious interference" line.

Again, I assumed that Mimi was a psychologically normal human who had normal human second order desires, like having friends and family, being healthy, doing something important with her life, challenging herself, and so on. I assumed she didn't want to use heroin because doing so interfered with her achievement of these important second order desires.

I suppose Mimi could be a mindless hedonist whose second order desires are somehow mistaken about what she really wants, but those weren't the inferences I drew.

Mimi's enabler could argue "human happiness is a Good Thing unless it seriously interferes with some other really important goal,"

Again, recall my mention of a hypothetical Heroin 2.0 in my earlier comment. It seems to me that if Heroin 2.0 were suddenly invented, and Mimi still didn't want to use heroin even though it no longer seriously interfered with her other important values, she might be mistaken. Her second-order desire might be a cached thought left over from when she was addicted to Heroin 1.0, and she can safely reject it.

But I will maintain that if Larry and Mimi are fairly psychologically normal humans, Mimi's second-order desire to stop using heroin is an authentic and proper desire, because heroin use seriously interferes with the achievement of important goals and desires that normal humans (like Mimi, presumably) have. Larry's second-order desire, by contrast, is mistaken, because it's based on the false belief that homosexuality is immoral. Homosexual desires do not interfere with important goals humans have. Rather, they are an important goal that humans have (love, sex, and romance); it's just that the objective of that goal is a bit unusual (same sex instead of opposite).

EDITED: To change some language that probably sounded too political and judgemental. The edits do not change the core thesis in any way.

Replies from: Eliezer_Yudkowsky, Epiphany, Vaniver, CCC, Jayson_Virissimo
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-26T04:28:09.411Z · LW(p) · GW(p)

We should point people to this whenever they're like "What's special about Less Wrong?" and we can be like "Okay, first, guess how Less Wrong would discuss a reluctant Christian homosexual. Made the prediction? Good, now click this link."

Replies from: Epiphany
comment by Epiphany · 2012-10-26T07:21:48.424Z · LW(p) · GW(p)

I'm surprised you regarded it so highly. The flaws I noticed are located in a response to Ghatanathoah's comment.

comment by Epiphany · 2012-10-26T07:09:12.227Z · LW(p) · GW(p)

First, I would like to make one thing clear: I have absolutely nothing against homosexuals and in fact qualify as queer because my attractions transcend gender entirely. I call my orientation "sapiosexual" because it is minds that I am sexually attracted to, and good character, never mind the housing.

Stops at "pigheaded jerks"

downvotes

You know where this is going, oh yes, I am going right to fundamental attribution error and political mindkill.

The parents are deemed "pigheaded jerks" - a perception of their personality.

Larry the homosexual, convinced by the exact same reasoning, is given something subtly different - an attack on his behavior: "he gullibly believed them" - and you continue with "They (the Larrys) just think they are because their parents fed them a load of crap," attributing his belief to the situation that Larry is in.

Do you think Larry's grandparents didn't teach Larry's parents the same thing? And that Larry's great grandparents didn't teach it to Larry's grandparents?

This was a "good solid dig" at the other side.

Replies from: MugaSofer, Ghatanathoah
comment by MugaSofer · 2012-10-26T08:33:00.251Z · LW(p) · GW(p)

I upvoted despite this. If you overlook that one problem, everything else is gold. That single flawed sentence does not affect the awesome of the other 14 paragraphs, as it does not contribute to the conclusion.

Replies from: Epiphany
comment by Epiphany · 2012-10-27T00:25:32.845Z · LW(p) · GW(p)

My experience of it was more like:

"Oh, this is nice and organized... Still orderly... Still orderly... OHMYSPAGHETTIMONSTER I DID NOT JUST READ THAT!"

To me, it was a disappointment. Like if I were eating ice cream and then it fell to the ground.

If Eliezer is going to praise it like it's the epitome of what LessWrong should be, then it should be spotless. Do you agree?

comment by Ghatanathoah · 2012-10-26T09:48:37.846Z · LW(p) · GW(p)

You make an excellent point. I will edit my post to make it sound less political and judgemental.

Replies from: Epiphany
comment by Epiphany · 2012-10-27T00:31:28.255Z · LW(p) · GW(p)

I am charmed by your polite acknowledgement of the flaw and am happy to see that this has been updated. Thanks for letting me know that pointing it out was useful. :)

comment by Vaniver · 2012-10-25T22:59:51.835Z · LW(p) · GW(p)

I think you're looking at this discussion from the wrong angle. The question is, "how do we differentiate first-order wants that trump second-order wants from second-order wants that trump first-order wants?" Here, the order only refers to the psychological location of the desire: to use Freudian terms, the first order desires originate in the id and the second order desires originate in the superego.

In general, that is a complicated and difficult question, which needs to be answered by careful deliberation - the ego weighing the very different desires and deciding how to best satisfy their combination. (That is, I agree with PhilGoetz that there is no easy way to distinguish between them, but I think this is proper, not bothersome.)

Some cases are easier than others - in the case of Sally, who wants to commit suicide but wants to not want to commit suicide, I would generally recommend methods of effective treatment for suicidal tendencies, not the alternative. But you should be able to recognize that the decision could be difficult, at least for some alteration of the parameters, and if the alteration is significant enough it could swing the other way.

There is also another factor which clouds the analysis, which is that the ego has to weigh the costs of altering, suppressing, or foregoing one of the desires. It could be that Larry has a twin brother, Harry, who is not homosexual, and that Harry is genuinely happier than Larry is, and that Larry would genuinely prefer being Harry to being himself; he's not mistaken about his second-order want.

However, the plan to be (or pretend to be) straight is much more costly and less likely to succeed than the plan to stop wanting to be straight, and that difference in costs might be high enough to determine the ego's decision. Again, it should be possible to imagine realistic cases in which the decision would swing the other way. (Related.)

It's also worth considering how much one wants to engage in sour grapes thinking - many of our modern moral intuitions about homosexuality seem rooted in the difficulty of changing it. (Note Alicorn's response. Given that homosexuality is immutable, plans to change homosexuals are unlikely to succeed, and they might as well make the best of their situation. But I hope it's clear that, at its root, this is a statement about engineering reality, not moral principles - if there were a pill that converted homosexuals to heterosexuals, then the question of how society treats homosexuals would actually be different, and if Larry asked you to help him make the decision of whether or not to take the pill, I'm sure you could think of some things to write in the "pro" column for "take the pill" and in the "con" column for "don't take the pill.")

The reason I said this is worth considering is that, unsurprisingly, the two wants conflict. Often, we don't expect the engineering reality to change. Male homosexuality is likely to be immutable for the lifetimes of those currently alive, and it's more emotionally satisfying to declare that homosexual desires don't conflict with important goals than to reflect on the tradeoffs that homosexuals face and heterosexuals don't. Doing so, however, requires a sort of willful blindness, which may or may not be worth the reward gained by engaging in it.

Replies from: Ghatanathoah, JaySwartz
comment by Ghatanathoah · 2012-10-26T00:29:45.440Z · LW(p) · GW(p)

if there were a pill that converted homosexuals to heterosexuals, then the question of how society treats homosexuals would actually be different, and if Larry asked you to help him make the decision of whether or not to take the pill, I'm sure you could think of some things to write in the "pro" column for "take the pill" and in the "con" column for "don't take the pill."

I don't deny that there may be some good reasons to prefer to be heterosexual. For instance, imagine Larry lives in an area populated by very few homosexual and bisexual men, and moving somewhere else is prohibitively costly for some reason. If this is the case, then Larry may have a rational second-order desire to become bisexual or heterosexual, simply because doing so would make it much easier to find romantic partners.

However, I would maintain that the specific reason given in Alicorn's original post for why Larry desires to not be homosexual is that he is confused about the morality of homosexuality and is afraid he is behaving immorally, not that he has two genuine desires that conflict.

It's also worth considering how much one wants to engage in sour grapes thinking - many of our modern moral intuitions about homosexuality seem rooted in the difficulty of changing it.

I find it illuminating to compare intuitions about homosexuality to intuitions about bisexuality. If homosexual relationships were really inferior to heterosexual ones in some important way, then it would make sense to encourage bisexual people to avoid homosexual relationships and focus on heterosexual ones. This seems wrong to me, however; if I were giving a bisexual person relationship advice, I think the good thing to do would be to advise them to focus on whoever is most compatible with them, regardless of sex.

In general, that is a complicated and difficult question, which needs to be answered by careful deliberation - the ego weighing the very different desires and deciding how to best satisfy their combination. (That is, I agree with PhilGoetz that there is no easy way to distinguish between them, but I think this is proper, not bothersome.)

I think you are probably right that this is proper. I think I may feel biased in favor of second-order desires because right now it seems like I have difficulty preventing my first-order desires from overriding them. But if I think about it, it seems like I have many first-order desires I cherish and would really prefer to avoid changing.

comment by JaySwartz · 2012-11-28T21:33:44.239Z · LW(p) · GW(p)

While the Freudian description is accurate about the sources of these desires, I struggle to order them into tiers. I believe it is an accumulated weighting that makes one thought dominate another. We are indeed born with a great deal of innate behavioral weighting. As we learn, we strengthen some paths and create new paths for new concepts. The original behaviors (fight or flight, etc.) remain.

Based on this known process, I conjecture that experiences have an effect on the weighting of concepts. This weighting sub-utility is a determining factor in how much impact a concept has on our actions. When we discover fire burns our skin, we don't need to repeat the experience very often to weigh fire heavily as something we don't want touching our skin.

If we constantly hear, "blonde people are dumb," each repetition increases the weight of this concept. Upon encountering an intelligent blond named Sandy, the weighting of the concept is decreased and we create a new pattern for "Sandy is intelligent" that attaches to "Sandy is a person" and "Sandy is blonde." If we encounter Sandy frequently, or observe many intelligent blonde people, the weighting of the "blonde people are dumb" concept is continually reduced.

Coincidentally, I believe this is the motivation, perhaps a subconscious one, behind religious leaders urging their followers to attend services regularly. The service maintains or increases weighting on the set of religious concepts, as well as related concepts such as peer pressure, offsetting any weighting loss between services. Depth of conviction in a religion could potentially be correlated with the frequency of religious events. But I digress.

Eventually, the impact of the concept "blonde people are dumb" on decisions becomes insignificant. During this time, each encounter strengthens the Sandy pattern or creates new patterns for blondes. At some level of weighting for the "intelligent" and "blonde" concepts associated with people, our brain economizes by creating a "blonde people are intelligent" concept. Variations of this basic model are generally how beliefs are created and how the weights of beliefs are adjusted.

As with fire, we are extremely averse to incongruity. We have a fundamental drive to integrate our experiences into a cohesive continuum. Something akin to adrenaline is released when we encounter incongruity, driving us to find a way to resolve the conflicting concepts. If we can't find a factual explanation, we rationalize one in order to return to balanced thoughts.

When we make a choice of something over other things, we begin to consider the most heavily weighted concepts that are invoked based on the given situation. We work down the weighting until we reach a point where a single concept outweighs all other competing concepts by an acceptable amount.

In some situations, we don't have to make many comparisons due to the invocation of very heavily weighted concepts, such as when a car is speeding towards us while we're standing in the roadway. In other situations, we make numerous comparisons that yield no clear dominant concept and can only make a decision after expanding our choice of concepts.

This model is consistent with human behavior. It helps to explain why people do what they do. It is important to realize that this model applies no division of concepts into classes. It uses a fluid ordering system. It has transient terminal goals based on perceived situational considerations. Most importantly, it bounds the recursion requirements. As the situation changes, the set of applicable concepts to consider changes, resetting the core algorithm.
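A toy version of the accumulated-weighting model described above may make the mechanism concrete. The sketch below is only an illustration: the class name, the update rules, and the numbers are my own assumptions, not anything specified in this comment.

```python
# A toy sketch of the accumulated-weighting model described above.
# The class name, update rules, and numbers are illustrative assumptions.

class ConceptWeights:
    def __init__(self):
        self.weights = {}  # concept -> accumulated weight

    def reinforce(self, concept, amount=1.0):
        """Each repetition of a concept increases its weight."""
        self.weights[concept] = self.weights.get(concept, 0.0) + amount

    def weaken(self, concept, amount=1.0):
        """A counter-example reduces a concept's weight (not below zero)."""
        self.weights[concept] = max(0.0, self.weights.get(concept, 0.0) - amount)

    def decide(self, candidates, margin=1.0):
        """Return the candidate whose weight exceeds all competitors by at
        least `margin`, or None if no concept clearly dominates yet."""
        ranked = sorted(candidates, key=lambda c: self.weights.get(c, 0.0), reverse=True)
        if not ranked:
            return None
        if len(ranked) == 1:
            return ranked[0]
        gap = self.weights.get(ranked[0], 0.0) - self.weights.get(ranked[1], 0.0)
        return ranked[0] if gap >= margin else None  # else: expand the concept set

beliefs = ConceptWeights()
for _ in range(10):                             # repeated hearsay
    beliefs.reinforce("blonde people are dumb")
for _ in range(12):                             # repeated encounters with Sandy
    beliefs.weaken("blonde people are dumb")
    beliefs.reinforce("Sandy is intelligent")

print(beliefs.decide(["blonde people are dumb", "Sandy is intelligent"]))
# -> "Sandy is intelligent": the hearsay concept's influence has washed out.
```

The same loop also illustrates NancyLebovitz's point below: if each encounter reinforces a separate "Sandy is an exception" concept instead of weakening the generalization, the hearsay concept keeps its weight.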

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-11-28T22:45:14.659Z · LW(p) · GW(p)

From what I've heard, the typical response to believing that blond people are dumb and observing that blond Sandy is intelligent is to believe that Sandy is an exception, but blond people are dumb.

Most people are very attached to their generalizations.

Replies from: JaySwartz
comment by JaySwartz · 2012-11-29T15:52:23.912Z · LW(p) · GW(p)

Quite right about attachment. It may take quite a few exceptions before it is no longer an exception. Particularly if the original concept is regularly reinforced by peers or other sources. I would expect exceptions to get a bit more weight because they are novel, but not so much as to offset higher levels of reinforcement.

comment by CCC · 2012-10-26T07:38:56.055Z · LW(p) · GW(p)

Your example does an exemplary job of explaining your viewpoint on Larry's situation. To explain the presumed viewpoint of Larry's parents on his situation requires merely a very small change: replacing all occurrences of the number 11 with the number 9.

The people who think homosexuality is immoral are objectively mistaken about what is and isn't moral, in the same way the 13-Pebble Favorers are objectively mistaken about the primality of the number 11.

How do you define objective morality? I've heard of several possible definitions, most of which conflict with each other, so I'm a little curious as to which one you've selected.

Replies from: Ghatanathoah, Peterdjones
comment by Ghatanathoah · 2012-10-26T09:37:17.821Z · LW(p) · GW(p)

To explain the presumed viewpoint of Larry's parents on his situation requires merely a very small change: replacing all occurrences of the number 11 with the number 9.

I'm not sure I understand you. Do you mean that a more precise description of Larry's parents' viewpoint is that the Pebblesorter versions of them think 11 and 9 are the same numbers? Or are you trying to explain how a religious fundamentalist would use the Pebblesorter metaphor if they were making the argument?

How do you define objective morality?

I define morality as being a catch-all term to describe what are commonly referred to as the "good things in life," love, fairness, happiness, creativity, people achieving what they want in life, etc. So something is morally good if it tends to increase those things. In other words, "good" and "right" are synonyms. Morality is objective because we can objectively determine whether people are happy, being treated fairly, getting what they want out of life, etc. In Larry's case having a relationship with Ted-the-Next-Door neighbor would be the morally right thing to do because it would increase the amount of love, happiness, people-getting what they want, etc. in the world.

I think the reason that people have such a problem with the idea of objective morality is that they subscribe, knowingly or not, to motivational internalism. That is, they believe that moral knowledge is intrinsically motivating: simply knowing something is right motivates someone to do it. They then conclude that since intrinsically motivating knowledge doesn't seem to exist, morality must be subjective.

I am a motivational externalist, so I do not buy this argument. I believe that people are motivated to act morally by our conscience and moral emotions (e.g. compassion, sympathy). If someone has no motivation to act to increase the "good things in life," that doesn't mean morality is subjective; it simply means that they are a bad person. People who lack moral emotions exist in real life, and they seem to lack any desire to act morally at all, unless you threaten to punish them if they don't.

The idea of intrinsically motivating knowledge is pretty scary if you think about it. What if it motivated you to kill people? Or what if it made you worship Darkseid? The Anti-Life equation from Final Crisis works pretty much exactly the way motivational internalists think moral knowledge does, except that instead of motivating people to care about others and treat people well, it instead motivates them to serve evil pagan gods from outer space.

Replies from: CCC
comment by CCC · 2012-10-26T10:59:28.760Z · LW(p) · GW(p)

Or are you trying to explain how a religious fundamentalist would use the Pebblesorter metaphor if they were making the argument?

Yes, exactly. Larry's parents do not believe that they are mistaken, and are not easily proved mistaken.

I define morality as being a catch-all term to describe what are commonly referred to as the "good things in life," love, fairness, happiness, creativity, people achieving what they want in life, etc. So something is morally good if it tends to increase those things.

That's a good definition, and it avoids most of the obvious traps. A bit vague, though. Unfortunately, there is a non-obvious trap; this definition leads to the city of Omelas, where everyone is happy, fulfilled, creative... except for one child, locked in the dark in a cellar, starved; one child on whose suffering the glory of Omelas rests. Saving the child decreases overall happiness, health, achievement of goals, etc., etc. Despite all this, I'd still think that leaving the child locked away in the dark is a wrong thing. (This can also lead to Pascal's Mugging, as an edge case.)

I think the reason that people have such a problem with the idea of objective morality is that they subscribe, knowingly or not, to motivational internalism.

In my case, it's because every attempt I've seen at defining an objective morality has potential problems. Given to you by an external source? But that presumes that the external source is not Darkseid. Written in the human psyche? There are some bad things in the dark corners of the human psyche. Take whatever action is most likely to transform the world into a paradise? Doesn't usually work, because we don't know enough to always select the correct actions. Do unto others as you would have them do unto you? That's a very nice one - but not if Bob the Masochist tries to apply it.

Of course, subjective morality is no better - and is often worse (mainly because a society in general can reap certain benefits from a shared idea of morality).

What does seem to work is to pick a society whose inhabitants seem happy and fulfilled, and try to use whatever rules they use. The trouble with that is that it's kludgy, uncertain, and could often do with improvement (though it's been improved often enough in human history that many - not all, but many - obvious 'improvements' turn out not to be improvements in practice).

Replies from: Nornagest, Ghatanathoah, nshepperd
comment by Nornagest · 2012-10-27T00:46:21.637Z · LW(p) · GW(p)

Unfortunately, there is a non-obvious trap; this definition leads to the city of Omelas, where everyone is happy, fulfilled, creative... except for one child, locked in the dark in a cellar, starved; one child on whose suffering the glory of Omelas rests. Saving the child decreases overall happiness, health, achievement of goals, etc., etc. Despite all this, I'd still think that leaving the child locked away in the dark is a wrong thing.

Aside from its obvious artificiality, and despite the fact that all our instincts cry out against it, it's not at all clear to me that there are any really good reasons to reject the Omelasian solution. This is of course a fantastically controversial position (just look at the response to Torture vs. Dust Specks, which might be viewed as an updated and reframed version of the central notion of The Ones Who Walk Away From Omelas), but it nonetheless seems to be a more or less straightforward consequence of most versions of consequential ethics.

As a matter of fact, I'm inclined to view Omelas as something between an intuition pump and a full-blown cognitive exploit: a scenario designed to leverage our ethical heuristics (which are well-adapted to small-scale social groups, but rather less well adapted to exotic large-scale social engineering) in order to discredit a viewpoint which should rightfully stand or fall on pragmatic grounds. A tortured child is something that hardly anyone can be expected to think straight through, and trotting one out in full knowledge of this fact in order to make a point upsets me.

Replies from: satt, CCC, Desrtopa
comment by satt · 2012-11-03T18:56:38.105Z · LW(p) · GW(p)

Aside from its obvious artificiality, and despite the fact that all our instincts cry out against it, it's not at all clear to me that there are any really good reasons to reject the Omelasian solution.

There's a real world analogue to Omelas. The UK (like other countries) has a child protection system, intended to minimize abuse & neglect of children. The state workers (health visitors, social workers, police officers, hospital staff, etc.) who play roles in the system can try to intervene when the apparent risk of harm to a child reaches some threshold.

If the threshold is too low, the system gratuitously interferes with families' autonomy, preventing them from living their lives in peace. If the threshold is too high, the system fails to do its job of preventing guardians from tormenting or killing their children. Realistically, a trade-off is inevitable, and under any politically feasible threshold "some children will die to preserve the freedom of others", as the sociologist Robert Dingwall put it. So the UK's child protection system takes the Omelasian route.

The real life situation is less black & white than Omelas, but it looks like the same basic trade-off in a non-artificial setting. I wonder whether people's intuitions about Omelas align with their intuitions about the real life child protection trade-off (and indeed whether both align with society's revealed collective preference).

Replies from: Nornagest, TheOtherDave
comment by Nornagest · 2012-11-03T20:32:58.090Z · LW(p) · GW(p)

I'd agree that these are consequentially similar, but I don't think they're psychologically similar at all. There's an element of exploitation in Omelas that isn't present in social services: state workers are positioned as protecting children from an evil unrelated to the state, while Omelas is cast as willfully perpetrating an evil in order to ensure its own prosperity. People tend to think of moral culpability in terms of blame, and although some blame might attach itself to social workers for failing to stop abuses that they might prevent with more intrusive intervention thresholds, it's much diluted by the vastly more viscerally appalling culpability carried by actual abusers. Omelas offers no subjects for condemnation other than the state apparatus and the citizens supporting it.

On top of that, intrusive child protection services have very salient failings: most parents (and most children) would find government intrusion into their family lives extremely unpleasant, unpleasant enough to fear and take political action to avoid. Meanwhile, the consequences of no longer torturing Omelas' sacrificial lamb are unspecified and thus about as far from salient (re-entrant?) as it's possible to get. Even in a hypothetical fully specified Omelas where we could point to a chain of effects, I'd expect that chain to be a lot longer and harder to follow, and its endpoints hence less emotionally weighty.

Replies from: Multiheaded
comment by Multiheaded · 2012-11-20T11:10:55.203Z · LW(p) · GW(p)

There's an element of exploitation in Omelas that isn't present in social services: state workers are positioned as protecting children from an evil unrelated to the state, while Omelas is cast as willfully perpetrating an evil in order to ensure its own prosperity....

...Omelas offers no subjects for condemnation other than the state apparatus and the citizens supporting it.

Link to me mentioning both Omelas and another "eternally tortured child" short story, SCP-231 (potentially highly distressing so I'm not hotlinking it), as an intuition pump against Mencius Moldbug's "Patchwork" proposal (the "strong"/"total" vision of Patchwork, with absolute security of sovereigns) along very similar lines of analogy, over in the Unqualified Reservations comments.

Replies from: None
comment by [deleted] · 2012-11-21T16:19:12.735Z · LW(p) · GW(p)

Disagree. SCP-231 is a bad source of intuition because it is crafted to be torture & horror porn.

Replies from: Multiheaded
comment by Multiheaded · 2012-11-21T18:57:36.288Z · LW(p) · GW(p)

There are many, many things in the similarly exploitative franchise known as "Real Life" that also appear to be crafted as "torture & horror porn". So I don't see the problem with linking to a fictionalized version.

I dare say that any story without elements that induce horror and revulsion in a reader would be an inadequate source of intuition for considering the most shocking aspects of our own world... or the ethics of knowingly creating a system which offers absolute security indiscriminately to those who would create such nightmares and those who'd seek to prevent them.

Example of a victim testimony 1 - trigger warnings for extreme child abuse, rape, pedophilia and psychological damage.

...Jura V jnf gjb lrnef byq zl zbgure zneevrq zl fgrcsngure. Jung sbyybjrq jnf fvkgrra lrnef bs frkhny nffnhyg...

Example of a victim testimony 2 - all of the above, except even more outspoken descriptions of the author's mental anguish. (NSFanywhere. The main blog has... images... that are more gore than extreme porn; don't look unless you're massively desensitized.)

...Gurfr cubgbf nyy rkcerff gur fvqr bs zlfrys V fgehttyr jvgu rirel qnl. Guvf vf gur fvqr bs zr gung yrnearq jung frk vf guebhtu encr. Guvf vf gur fvqr bs zr gung gevrq gb pbzzvg fhvpvqr sbe gur svefg gvzr jura V jnf frira lrnef byq. Guvf vf gur cneg bs zr gung V srne jvyy arire urny. Vgf orra fb znal lrnef ohg abg n qnl tbrf ol jurer V qba’g guvax nobhg jung jnf qbar gb zr...

...Nsgre lbh’ir orra encrq naq orngra jvguva na vapu bs lbhe yvsr rabhtu gvzrf, rirelguvat ryfr ybbfrf pbybe, gur jbeyq orpbzrf funqrf bs terl. Lbh orpbzr ahzo. V pna’g pel nalzber hayrff V’z orvat encrq. Abg sebz cnva, abg sebz fnqarff, abg sebz bavbaf, abg sebz nalguvat ryfr bgure guna fgehttyvat juvyr zl obql vf gbegherq naq hfrq. Yngryl V’ir orra nfxvat zl OQFZ cnegaref gb cynl-encr zr erthyneyl fb V pna pel. Vgf ernyyl dhvgr greevoyr npghnyyl; vg srryf yvxr V’z eryvivat gur uryy V penjyrq bhg bs nyy bire ntnva. Lrg V unir gb qb vg, whfg gb or noyr gb srry uhzna ntnva, gb srry nalguvat ntnva rabhtu gb pel...

Want some amnesiacs yet? You might be able to forget those stories faster if you don't think about the fact that something similar must be happening somewhere in your country, probably in your city, at this very moment. Oops, too late!

(Again, sorry for the confrontational tone and such - I wanted to hammer home the point that sometimes it's the violently emotional reaction to an objectively terrible problem that would be true to your desires, and trying to stay "detached" and "reasonable" would be self-deception. See: deathism.)

Replies from: None, JaySwartz, DaFranker
comment by [deleted] · 2012-11-21T20:48:56.321Z · LW(p) · GW(p)

(Again, sorry for the confrontational tone and such - I wanted to hammer home the point that sometimes it's the violently emotional reaction to an objectively terrible problem that would be true to your desires, and trying to stay "detached" and "reasonable" would be self-deception. See: deathism.)

Indeed. But in the context of the discussion the story primes you to live one kind of horror and not another when making trade offs between the two. This is why I objected to it.

comment by JaySwartz · 2012-11-22T09:47:29.313Z · LW(p) · GW(p)

I struggle with conceiving wanting to want, or decision making in general, as a tiered model. There are a great many factors that modify the ordering and intensity of utility functions. When human neurons fire they trigger multiple concurrent paths leading to a set of utility functions. Not all of the utilities are logic-related.

I posit that our ability to process and reason is due to this pattern ability, and any model that will approximate human intelligence will need to be more complex than a simple linear layer model. The balance of numerous interactive utilities combines to inform decision making. A multiobjective optimization model, such as PIBEA, is required.
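To make the multiobjective point concrete, below is a minimal sketch of the comparison such models are built on - Pareto dominance, the basic relation that evolutionary multi-objective methods like PIBEA refine. The function, the utility axes, and the example scores are my own illustrative assumptions, not anything specified in this comment.

```python
# A minimal sketch of the core comparison used by multi-objective optimizers
# (including evolutionary ones such as PIBEA): rather than collapsing every
# utility into a single number, options are compared by Pareto dominance.
# The axes and example scores below are illustrative assumptions.

def dominates(a, b):
    """True if option `a` is at least as good as `b` on every utility
    and strictly better on at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Each option scored on (health, long-term goals, immediate pleasure):
quit_heroin = (0.9, 0.8, 0.2)
keep_using = (0.1, 0.2, 0.9)

print(dominates(quit_heroin, keep_using))  # False
print(dominates(keep_using, quit_heroin))  # False: neither option dominates,
                                           # so the conflict between desires is genuine.
```

When neither option dominates, a multiobjective optimizer keeps both on the Pareto front rather than forcing a single ranking, which matches the point above that a simple linear layering of desires is not enough.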

I'm new to LW, so I can't open threads just yet. I'm hoping to find some discussions around evolutionary models and solution sets relative to rational decision processing.

Replies from: Multiheaded
comment by Multiheaded · 2012-11-22T13:20:43.424Z · LW(p) · GW(p)

I sort of agree with your comment, but you should've probably posted it as a reply to the main post, not to this specific comment of mine, as the topics seem to be somewhat apart.

(BTW you can open threads in Discussion just fine seeing as you're at 10 karma - go ahead!)

comment by DaFranker · 2012-11-21T21:21:58.130Z · LW(p) · GW(p)

don't look unless you're massively desensitized.)

Disagree. Do look, even if it hurts. Especially if it hurts a lot (if it doesn't hurt at all, there's a lot more wrong to fix first). Update your model of reality, avoid availability and (dis)confirmation bias. Face reality head-on. That kind of thing.

Replies from: MugaSofer, Multiheaded
comment by MugaSofer · 2012-11-22T06:33:32.022Z · LW(p) · GW(p)

All due respect, but I don't think there is any imperative to view material that will cause serious psychological harm. If the price of knowledge is your right hand you are not required to pay it. The most relevant example would be showing hardcore pornography to small children - it may be knowledge of a kind, but it does more harm than good.

EDIT: Of course, I read them anyway. Very, ah ... deep.

comment by Multiheaded · 2012-11-22T06:28:00.241Z · LW(p) · GW(p)

I only said that in regard to the many photos of staged degradation, mutilation, self-harm, etc. on the second blog - they're cringe-inducing yet presumably taken in a consensual way (for a loose definition of consent), and thus have a worse "unpleasantness-to-facing-reality" ratio than the textual descriptions of real abuse & its consequences. I quite agree that reading those shocking descriptions is epistemologically and morally imperative.

Replies from: DaFranker
comment by DaFranker · 2012-11-26T14:45:34.420Z · LW(p) · GW(p)

Good point. If there are written descriptions available, they're a better weighted alternative.

comment by TheOtherDave · 2012-11-03T19:48:08.913Z · LW(p) · GW(p)

I wonder whether people's intuitions about Omelas align with their intuitions about the real life child protection trade-off

It would surprise me if they did, given that Omelas was constructed as an intuition pump.

comment by CCC · 2012-10-28T14:35:57.999Z · LW(p) · GW(p)

Omelas is a cognitive exploit, yes. That's really the point - it forces people to consider how appropriate their heuristics really are. Some people would make Omelas if they could; some wouldn't, for the sake of the one child. A firm preference for either possibility can be controversial, partially because there are good reasons for both states and partially because different ethical heuristics get levered in different directions. (A heuristic that compares the number of people helped vs. the number hurt will pull one way; a heuristic that says "no torture" will pull the other way).

comment by Desrtopa · 2012-10-27T01:15:06.052Z · LW(p) · GW(p)

Personally, on reading the story, I decided immediately that not only would I not walk away from Omelas (which solves nothing anyway,) I was fully in favor of the building of Omelases, provided that even more efficient methods of producing prosperity were not forthcoming.

The prevention of dust specks may vanish into nothing in my intuitive utility calculations, but it immediately hits me that a single tortured child is peanuts beside the cumulative mass of suffering that goes on in our world all the time. With a hundred thousand dollars or so to the right charity, you could probably prevent a lot more disutility than that of the tortured child. If for that money I could either save one child from that fate, or create a city like Omelas minus the sacrifice, then it seems obvious to me that creating the city is a better bargain.

comment by Ghatanathoah · 2012-10-26T12:39:00.443Z · LW(p) · GW(p)

A bit vague, though.

That's true, but I think that human values are so complex that any attempt to compress morality into one sentence is pretty much obligated to be vague.

Unfortunately, there is a non-obvious trap; this definition leads to the city of Omelas, where everyone is happy, fulfilled, creative... except for one child, locked in the dark in a cellar, starved; one child on whose suffering the glory of Omelas rests.

One rather obvious rejoinder is that there are currently hundreds, if not thousands of children who are in the same state as the unfortunate Omelasian right now in real life, so reducing the number to just one child would be a huge improvement. But you are right that even one seems too many.

A more robust possibility might be to add "equality" to the list of the "good things in life." If you do that then Omelas might be morally suboptimal because the vast inequality between the child and the rest of the inhabitants might overwhelm the achievement of the other positive values. Now, valuing equality for its own sake might add other problems, but these could probably be avoided if you were sufficiently precise and rigorous in defining equality.

In my case, it's because every attempt I've seen at defining an objective morality has potential problems. Given to you by an external source? But that presumes that the external source is not Darkseid. Written in the human psyche?

I think the best explanation I've seen is something like the metaethics Eliezer espouses, which is (if I understand them correctly) that morality is a series of internally consistent concepts related to achieving what I called "the good things in life," and that human beings (those who are not sociopaths anyway) care a lot about these concepts of wellbeing and want to follow and fulfill them.

In other words, morality is like mathematics in some ways: it generates consistent answers (on the topic of people's wellbeing) that are objectively correct. But it is not like the Anti-Life Equation because it is not intrinsically motivating. Humans care about morality because of our consciences and our positive emotions, not because it is universally compelling.

To put it another way, I think that if you were to give a superintelligent paperclipper a detailed description of human moral concepts and offered to help it make some more paperclips if it elucidated these concepts for you, that it would probably generate a lot of morally correct answers. It would feel no motivation to obey these answers of course, since it doesn't care about morality, it cares about making paperclips.

This is a little like morality being "embedded in the human psyche" in the sense that the desire to care about morality is certainly embedded in there somewhere (probably in the part we label "conscience"). But it is also objective in the sense that moral concepts are internally consistent independent of the desires of the mind. To use the Pebblesorter metaphor again, caring about sorting pebbles into prime-numbered heaps is "embedded in the Pebblesorter psyche," but which numbers are prime is objective.

There are some bad things in the dark corners of the human psyche.

That's certainly true, but that simply means that humans are capable of caring about other things besides morality, and these other things that people sometimes care about can be pretty bad. This obviously makes moral reasoning a lot harder, since it's possible that one of your darker urges might be masquerading as a moral judgement. But that just means that moral reasoning is really hard to do; it doesn't mean that it's wrong in principle.

Replies from: CCC
comment by CCC · 2012-10-26T13:49:28.681Z · LW(p) · GW(p)

That's true, but I think that human values are so complex that any attempt to compress morality into one sentence is pretty much obligated to be vague.

Vague or flawed. Given those options, I think I'd prefer vague.

One rather obvious rejoinder is that there are currently hundreds, if not thousands of children who are in the same state as the unfortunate Omelasian right now in real life, so reducing the number to just one child would be a huge improvement. But you are right that even one seems too many.

I agree completely. If I had any idea how Omelas worked, I might be tempted to try seeing if any of those ideas could be used to improve current societies.

A more robust possibility might be to add "equality" to the list of the "good things in life." If you do that then Omelas might be morally suboptimal because the vast inequality between the child and the rest of the inhabitants might overwhelm the achievement of the other positive values. Now, valuing equality for its own sake might add other problems, but these could probably be avoided if you were sufficiently precise and rigorous in defining equality.

Hmmm. To avoid Omelas, equality would have to be fairly heavily weighted; any finite weighting given to equality, however, will simply mean that Omelas is only possible given a sufficiently large population (by balancing the cost of the inequality with the extra happiness of the extra inhabitants).
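To make the point concrete, here is a toy model of that balancing act. Everything in it (the per-citizen happiness, the child's suffering, the equality weight) is an invented placeholder rather than anything either of us has actually specified; it just shows why a fixed, linear weight on equality can always be swamped by a big enough population.

```python
# Toy model: with a fixed, finite weight on equality, enough happy citizens
# eventually outweigh the child's suffering. All numbers are illustrative
# assumptions, not values anyone in this thread has endorsed.

HAPPINESS_PER_CITIZEN = 1.0     # value contributed by each flourishing inhabitant
SUFFERING_OF_CHILD = -1000.0    # the child's misery
EQUALITY_WEIGHT = 50.0          # fixed penalty per "unit" of inequality
INEQUALITY_OF_OMELAS = 10.0     # gap between the child and everyone else

def omelas_score(population: int) -> float:
    """Linear-weighted sum: happiness + suffering - equality penalty."""
    return (population * HAPPINESS_PER_CITIZEN
            + SUFFERING_OF_CHILD
            - EQUALITY_WEIGHT * INEQUALITY_OF_OMELAS)

for n in (100, 1_000, 10_000):
    print(n, omelas_score(n))
# With these numbers the total turns positive somewhere above 1,500 citizens:
# no finite equality weight rules Omelas out once the city is big enough.
```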

Personally, I think that valuing equality itself is a good idea, if mixed in with a suitable set of other values. One possible failure mode for overvaluing equality is an equality of wretchedness, a state of "we're all equal because we all have nothing and no hope"; this is counteracted by providing suitable weights for the other good things, like happiness and the freedom to try to achieve goals (but what about the goal of vengeance? For a real slight? An imagined slight?).

An example of a society failing due to an over-reliance on equality is France shortly after the French Revolution.

I think the best explanation I've seen is something like the metaethics Eliezer espouses

I think that I should read through that entire sequence in the near future. I'd like to see his take on metaethics.

In other words, morality is like mathematics in some ways: it generates consistent answers (on the topic of people's wellbeing) that are objectively correct.

Huh. I think we're defining 'morality' slightly differently here.

My definition of 'morality' would be 'a set of rules, decided by some system, such that one can feed in a given action and (usually) get out whether that action was a good or a bad action'. Implicit in that definition is the idea that two people may disagree on what those rules actually are - that there might be better or worse moralities, and that therefore the answers given by a randomly chosen morality need not be objectively correct.

To take an example: certain ancient cultures may have had the belief that human sacrifice was necessary, on Midwinter's Day, to persuade summer to come back and let the crops grow. In such a culture, strapping someone down and killing them in a particularly painful way may have been considered the right thing to do; and a member of that society would argue for it on the basis that the tribe needs the crops to grow next year (and, if selected, might even walk up voluntarily to be killed). In his morality, these annual deaths are a good thing, because they make the crops grow; in my morality, these annual deaths are a bad thing, and moreover, they don't make the crops grow.

For what it's worth, I do agree with you that getting the result out of a moral system does not, in and of itself, force an intrinsic motivation to follow that course of action; people can be trained from a young age to feel that motivation, and many people are, but there's really no reason to assume that it is always there.

If there is an objectively correct morality, that can apply to all situations, then I don't know what it is - my current system of morality (based heavily on the biblical principle of 'Love thy neighbour') covers many situations, but is not good at the average villain's sadistic choice (where I can save the lives of group A or group B but not both).

There are some bad things in the dark corners of the human psyche.

That's certainly true, but that simply means that humans are capable of caring about other things besides morality, and these other things that people sometimes care about can be pretty bad.

Hmmm. A lot of the darkness in the human psyche can be explained in this manner; but I'd think that there are other parts which cannot be explained in this way (when a person goes out of his way to hurt someone that he'll likely never see again, for example). A lot of these, I'd think, are attributable to a lack of empathy; a person who sees other people as non-people (or as Not True People, for some self-including definition of True People).

Replies from: Ghatanathoah, TheOtherDave
comment by Ghatanathoah · 2012-10-26T14:53:57.657Z · LW(p) · GW(p)

To avoid Omelas, equality would have to be fairly heavily weighted

I think a possible solution would be to have equality and the other values have diminishing returns relative to each other. So in a society with a lot of other good things there is a great obligation to increase equality, whereas in a society with lots of suffering people it's more important to do whatever it takes to raise the general level of happiness and not worry as much about equality. So a place as wondrous as Omelas would have a great obligation to help the child.

one possible failure mode for overvaluing equality is an equality of wretchedness, a state of "we're all equal because we all have nothing and no hope"

I think one possible way to frame equality to avoid this is to imagine, metaphorically, that positive things give a society "morality points" and negative things give it "negative morality points." Then have it so that a positive deed that also decreases inequality gets "extra points," while a negative deed that also exacerbates inequality gets "extra negative points." So in other words, helping the rich isn't bad, it's just much less good than helping the poor.

This also avoids another failure mode: Imagine an action that hurts every single person in the world, and hurts the rich 10 times as much as it hurts the poor. Such an action would increase equality, but praising it seems insane. Under the system I proposed such an action would still count as "bad," though it would be a bit less bad than a bad action that also increased inequality.
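One way to cash this scoring rule out, purely as a sketch (the multiplier of 3 and the example magnitudes are my own made-up assumptions, not part of the proposal):

```python
# Toy version of the "morality points" framing: good deeds earn points,
# bad deeds lose points, and deeds aimed at the worse-off are scaled up
# in either direction. The multiplier and magnitudes are invented.

def deed_score(benefit: float, target_is_worse_off: bool,
               equity_multiplier: float = 3.0) -> float:
    """benefit > 0 helps the target, benefit < 0 hurts them."""
    return benefit * (equity_multiplier if target_is_worse_off else 1.0)

# Helping the rich: still positive, just smaller than helping the poor.
print(deed_score(+10, target_is_worse_off=False))  # 10.0
print(deed_score(+10, target_is_worse_off=True))   # 30.0

# The "hurt everyone, rich ten times as much" action: equality rises,
# but the total stays negative, so the action still counts as bad...
print(deed_score(-10, False) + deed_score(-1, True))   # -13.0
# ...though it is less bad than the mirror-image action that hurt the
# poor ten times as much, which would score -31.0.
```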

Huh. I think we're defining 'morality' slightly differently here.

My definition of 'morality' would be 'a set of rules, decided by some system, such that one can feed in a given action and (usually) get out whether that action was a good or a bad action'.

I don't think that's that different from what I'm saying; I may be explaining it poorly. I do think that morality is essentially like a set of rules or an equation that one uses to evaluate actions. And I consider it objective in that the same equation should produce the same result each time an identical action is fed into it, regardless of what entity is doing the feeding. Then it is up to our moral emotions to motivate us to take actions the equation would label as "good."

Describing it like that sounds a bit clinical though, so I'd like to emphasize that moral rules and equations are ultimately about people's wellbeing and increasing the good things in life. If you feed an action that improves these values into a rule-set and it comes out labelled "bad," then those rules probably don't even deserve to be called morality; they are some other completely different concept.

Implicit in that definition is the idea that two people may disagree on what those rules actually are - that there might be better or worse moralities, and that therefore the answers given by a randomly chosen morality need not be objectively correct.

This relates to Eliezer's metaethics again: he basically describes morality as an equation or "fixed computation" related to wellbeing that is so complex that it's impossible to wrap your mind around it, so you have to work in approximations. So what you would label a "better" morality is one that more closely resembles the "ideal equation."

To take an example: certain ancient cultures may have had the belief that human sacrifice was necessary, on Midwinter's Day, to persuade summer to come back and let the crops grow.

It seems to me that this is more a disagreement about certain facts of nature than about morality per se. If there really were some sort of malevolent supernatural entity that wouldn't let summer come unless you made sacrifices to it, and it was impossible to stop such an entity, then sacrificing to it might be the only option left. If the choice is "everyone dies of starvation" vs. "one person dies from being sacrificed, everyone else lives" it seems like any worthwhile set of moral rules would label the second option as the better one (though it would not be nearly as good as somehow stopping the entity). The reason that sacrificing people is bad is that such entities do not exist, so such a sacrifice tortures someone, but doesn't save anyone else's life.

If there is an objectively correct morality, that can apply to all situations, then I don't know what it is

I think the problem is that an objectively correct set of moral rules that could perfectly evaluate any situation would be so complicated no one would be able to use it effectively. Even if we obtained such a system we would have to use crude approximations until we managed to get a supercomputer big enough to do the calculations in a timely manner.

A lot of these, I'd think, are attributable to a lack of empathy; a person who sees other people as non-people

I count empathy as one of the "moral emotions" that motivates people to act morally. So a lack of empathy would be a type of lack of motivation towards moral behavior.

Replies from: CCC
comment by CCC · 2012-10-28T15:05:01.312Z · LW(p) · GW(p)

I think a possible solution would be to have equality and the other values have diminishing returns relative to each other.

That seems to work very well. So the ethical weight of a factor can be proportional to the reciprocal thereof (perhaps with a sign change). Then, for any number of people, there is a maximum happiness-factor that the equation can produce.

So: this can be used to make an equation that makes Omelas bad for a population of any size. But not everyone agrees that Omelas is bad in the first place; so is that necessarily an improvement to your ethical equation?
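For what it's worth, here is a sketch of what reciprocal weighting does, with made-up numbers. One wrinkle: a weight that shrinks exactly like 1/n gives totals that grow like log(n), i.e. sharply diminishing but not strictly capped; a hard maximum needs weights that shrink faster, such as 1/n². Neither variant is being put forward as the right equation, just as an illustration of the shape.

```python
# Diminishing-returns weightings, with invented numbers. total_value_reciprocal
# uses the strict 1/k weight (harmonic sum: diminishing but unbounded);
# total_value_inverse_square uses 1/k^2, which converges to pi^2/6 and so
# really does cap the benefit obtainable from adding more people.

def total_value_reciprocal(units: int) -> float:
    """Sum of 1/1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, units + 1))

def total_value_inverse_square(units: int) -> float:
    """Sum of 1/k^2 for k = 1..n."""
    return sum(1.0 / k**2 for k in range(1, units + 1))

for n in (10, 10_000, 1_000_000):
    print(n, round(total_value_reciprocal(n), 2),
          round(total_value_inverse_square(n), 4))
# Under the 1/k^2 weighting no population can push the total past ~1.645,
# so a fixed suffering term can never be outweighed just by adding people.
```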

I think one possible way to frame equality to avoid this is to imagine, metaphorically, that positive things give a society "morality points" and negative things give it "negative morality points." Then have it so that a positive deed that also decreases inequality gets "extra points," while a negative deed that also exacerbates inequality gets "extra negative points." So in other words, helping the rich isn't bad, it's just much less good than helping the poor.

This also avoids another failure mode: Imagine an action that hurts every single person in the world, and hurts the rich 10 times as much as it hurts the poor. Such an action would increase equality, but praising it seems insane. Under the system I proposed such an action would still count as "bad," though it would be a bit less bad than a bad action that also increased inequality.

That failure mode can also be dealt with by combining equality with other factors, such as not being hurt. (The relative weightings assigned to these factors would be important, of course).

I don't think that's that different from what I'm saying; I may be explaining it poorly. I do think that morality is essentially like a set of rules or an equation that one uses to evaluate actions. And I consider it objective in that the same equation should produce the same result each time an identical action is fed into it, regardless of what entity is doing the feeding. Then it is up to our moral emotions to motivate us to take actions the equation would label as "good."

That seems like a reasonable definition; my point is that not everyone uses the same equation.

It seems to me that this is more a disagreement about certain facts of nature than about morality per se.

Hmmm. You're right - that was a bad example. (I don't know if you're familiar with the Chanur series, by C. J. Cherryh? I ask because my first thought for a better example came straight out of there - she does a good job of presenting alien moralities.)

Let me provide a better one. Consider Marvin, and Fred. Marvin's moral system considers the total benefit to the world of every action; but he tends to weight actions in favour of himself, because he knows that in the future, he will always choose to do the right thing (by his morality) and thus deserves ties broken in his favour.

Fred's moral system entirely discounts any benefits to himself. He knows that most people are biased to themselves, and does this in an attempt to reduce the bias (he goes so far as to be biased in the opposite direction).

Both of them get into a war. Both end up in the following situation:

Trapped in a bunker, together with one allied soldier (a stranger, but on the same side). An enemy manages to throw a grenade in. The grenade will kill both of them, unless someone leaps on top of it, in which case it will only kill that one.

Fred leaps on top of the grenade. His morality values the life of the stranger over his own, and he thus acts to save the stranger first.

Marvin throws the stranger onto the grenade. His morality values his own life over a stranger who might, with non-trivial probability, be a truly villainous person.

Here we have two different moralities, leading to two different results, in the same situation.

I think the problem is that an objectively correct set of moral rules that could perfectly evaluate any situation would be so complicated no one would be able to use it effectively. Even if we obtained such a system we would have to use crude approximations until we managed to get a supercomputer big enough to do the calculations in a timely manner.

That is worth keeping in mind. Of course, if such a system is found, we could feed in dozens of general situations in advance - and if we find ourselves in a tough situation, then after resolving it one way or another, we could feed that situation into the computer and find out for future reference which course of action was correct (that eliminates a lot of the time constraint).

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-10-28T19:02:38.201Z · LW(p) · GW(p)

That seems like a reasonable definition; my point is that not everyone uses the same equation.

That's true. The question is: how often is this because people have totally different values, and how often is it that they have extremely similar "ideal equations," but different "approximations" of what they think that equation is? I think for sociopaths, and other people with harmful ego-syntonic mental disorders, it's probably the former, but it's more often the latter for normal people.

Eliezer has argued that it is confusing and misleading to use the word "morality" to refer to codes of behavior that entities possess which have nothing to do with improving people's wellbeing, making the world a happier, fairer, freer place, and similar concepts. He argues that creatures like the Pebblesorters do not care about morality at all; they care about sorting pebbles, and calling sorting pebbles a type of "morality" confuses two separate concepts.

Let me provide a better one. Consider Marvin, and Fred.

It sounds to me like Fred and Marvin both care about achieving similar moral objectives, but have different ideas about how to go about it. I'd say, again, that which moral code is better can only be determined by trying to figure out which one actually does a better job of achieving moral goals. "Moral progress" can be regarded as finding better and better heuristics to achieve those moral goals, and finding a closer representation of the ideal equation.

Again, I think I agree with Eliezer that a truly alien code of behavior, like that exhibited by sociopaths, and really inhuman aliens like the Pebblesorters or paperclippers, should maybe be referred to by some word other than morality. This is because the word "morality" usually refers to doing things like making the world a happier place and increasing the positive things in life. So if we refer to the behavior code of a creature that cares nothing for doing those things as "morality," we will give the subconscious impression that that creature really does care about doing good and simply disagrees about how to go about it. This isn't correct: sociopaths and paperclippers don't care about other people at all, so we shouldn't give the impression they do.

I am less sure about whether the term "morality" should be used to refer to the behavior codes of aliens that care about some of the same positive things that normal humans do, but also differ in important ways, like the Babyeaters and Super-Happy-People. Maybe we could call it "semi-morality?"

(I don't know if you're familiar with the Chanur series, by C. J. Cherryh? I ask because my first thought for a better example came straight out of there - she does a good job of presenting alien moralities.)

Sorry, the only Cherryh I've read is "The Scapegoat." I thought it gave a good impression of how alien values would look to humans, but wish it had given some more ideas about what it was that made elves think so differently.

Replies from: CCC
comment by CCC · 2012-10-29T10:14:23.156Z · LW(p) · GW(p)

That seems like a reasonable definition; my point is that not everyone uses the same equation.

That's true. The question is: how often is this because people have totally different values, and how often is it that they have extremely similar "ideal equations," but different "approximations" of what they think that equation is? I think for sociopaths, and other people with harmful ego-syntonic mental disorders, it's probably the former, but it's more often the latter for normal people.

I'd say sometimes A, and sometimes B. But I think that's true even in the absence of mental disorders; I don't think that the "ideal equation" necessarily sits somewhere hidden in the human psyche.

It sounds to me like Fred and Marvin both care about achieving similar moral objectives, but have different ideas about how to go about it. I'd say, again, that which moral code is better can only be determined by trying to figure out which one actually does a better job of achieving moral goals. "Moral progress" can be regarded as finding better and better heuristics to achieve those moral goals, and finding a closer representation of the ideal equation.

That is valid, as long as both systems have the same goals. Marvin's system includes the explicit goal "stay alive", more heavily weighted than the goal "keep a stranger alive"; Fred's system explicitly entirely excludes the goal "stay alive".

If two moral systems agree both on the goals to be achieved, and the weightings to give those goals, then they will be the same moral system, yes. But two people's moral systems need not agree on the underlying goals.

Again, I think I agree with Eliezer that a truly alien code of behavior, like that exhibited by sociopaths, and really inhuman aliens like the Pebblesorters or paperclippers, should maybe be referred to by some word other than morality. This is because the word "morality" usually refers to doing things like making the world a happier place and increasing the positive things in life.

Well, to be fair, in a Paperclipper's mind, paperclips are the positive things in life, and they certainly make the paperclipper happier. I realise that's probably not what you intended, but the phrasing may need work.

Which really feeds into the question of what goals a moral system should have. To the Babyeaters, a moral system should have the goal of eating babies, and they can provide a lot of argument to support that point - in terms of improved evolutionary fitness, for example.

I think that we can agree that a moral system's goals should be the good things in life. I'm less certain that we can agree on what those good things necessarily are, or on how they should be combined relative to each other. (I expect that if we really go to the point of thoroughly dissecting what we consider to be the good things in life, then we'll agree more than we disagree; I expect we'll be over 95% in agreement, but not quite 100%. This is what I generally expect for any stranger).

For example, we might disagree on whether it is more important to be independent in our actions, or to follow the legitimate instructions of a suitably legitimate authority.

Sorry, the only Cherryh I've read is "The Scapegoat." I thought it gave a good impression of how alien values would look to humans, but wish it had given some more ideas about what it was that made elves think so differently.

Hmmm. I haven't read that one.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-10-30T01:14:39.695Z · LW(p) · GW(p)

I'd say sometimes A, and sometimes B. But I think that's true even in the absence of mental disorders; I don't think that the "ideal equation" necessarily sits somewhere hidden in the human psyche.

It's not that I think there's literally a math equation locked in the human psyche that encodes morality. It's more like there are multiple (sometimes conflicting) moral values and methods for resolving conflicts between them and that the sum of these can be modeled as a large and complicated equation.

That is valid, as long as both systems have the same goals. Marvin's system includes the explicit goal "stay alive", more heavily weighted than the goal "keep a stranger alive"; Fred's system explicitly entirely excludes the goal "stay alive".

You gave me the impression that Marvin valued "staying alive" less as an end in itself, and more as a means to achieve the end of improving the world, in particular when you said this:

Marvin's moral system considers the total benefit to the world of every action; but he tends to weight actions in favour of himself, because he knows that in the future, he will always choose to do the right thing (by his morality) and thus deserves ties broken in his favour.

This is actually something that bothers me in fiction, when a character who is superhumanly good and powerful (e.g. Superman, the Doctor) risks their life to save a relatively small number of people. It seems short-sighted of them to do that, since they regularly save much larger groups of people and anticipate continuing to do so in the future, so it seems like they should preserve their lives for those people's sakes.

Well, to be fair, in a Paperclipper's mind, paperclips are the positive things in life, and they certainly make the paperclipper happier.

I get the impression that the paperclipper doesn't feel happiness, just a raw motivation to increase the number of paperclips.

I think that we can agree that a moral system's goals should be the good things in life. I'm less certain that we can agree on what those good things necessarily are, or on how they should be combined relative to each other.

If you define "the good things in life" as "whatever an entity wants the most," then you can agree that whatever someone wants is "good," be it paperclips or eudaemonia. On the other hand, I'm not sure we should do this: there are some hypothetical entities I can imagine where I can't see it as ever being good that they get what they want. For instance, I can imagine a Human-Torture-Maximizer that wants to do nothing but torture human beings. It seems to me that even if there were a trillion Human-Torture-Maximizers and one human in the universe it would be bad for them to get what they want.

For more neutral, but still alien preferences, I'm less sure. It seems to me that I have a right to stop Human-Torture-Maximizers from getting what they want. But would I have the right to stop paperclippers? Making the same paperclip over and over again seems like a pointless activity to me, but if the paperclippers are willing to share part of the universe with existing humans do I have a right to stop them? I don't know, and I don't think Eliezer does either.

(I expect that if we really go to the point of thoroughly dissecting what we consider to be the good things in life, then we'll agree more than we disagree; I expect we'll be over 95% in agreement, but not quite 100%. This is what I generally expect for any stranger).

I think that we, and most humans, have the same basic desires; where we differ is in the object of those desires, and the priority of those desires.

For instance, most people desire romantic love. But those desires usually have different objects: I desire romantic love with my girlfriend; other people desire it with their significant others. Similarly, most people desire to consume stories, but the object of that desire differs: some people like Transformers, others The Notebook.

Similarly, people often desire the same things, but differ as to their priorities, how much of those things they want. Most people desire both socializing, and quiet solitude, but some extroverts want lots of one and less of the other, while introverts are the opposite.

In the case of the paperclippers, my first instinct is to regard opposing paperclipping as no different from the many ways humans have persecuted each other for wanting different things in the past. But then it occurred to me that paperclip-maximizing might be different, because most persecutions in the past involve persecuting people who have different objects and priorities, not people who actually have different desires. For instance, homosexuality is the same kind of desire as heterosexuality, just with a different object (same sex instead of opposite).

Does this mean it isn't bad to oppose paperclipping? I don't know, maybe, but maybe not. Maybe we should just try to avoid creating paperclippers or similar creatures so we don't have to deal with it.

For example, we might disagree on whether it is more important to be independent in our actions, or to follow the legitimate instructions of a suitably legitimate authority.

This seems like a difference in priority, rather than desire, as most people would prefer differing proportions of both. It's still a legitimate disagreement, but I think it's more about finding a compromise between conflicting priorities, rather than totally different values.

Compounding this problem is the fact that people value diversity to some extent. We don't value all types of diversity obviously, I think we'd all like to live in a world where people held unanimous views on the unacceptability of torturing innocent people. But we would like other people to be different from us in some ways. Most people, I think, would rather live in a world full of different people with different personalities than a world consisting entirely of exact duplicates (in both personality and memory) of one person. So it might be impossible to reach full agreement on those other values without screwing up the achievement of the Value of Diversity.

Replies from: CCC
comment by CCC · 2012-10-31T08:29:39.317Z · LW(p) · GW(p)

It's not that I think there's literally a math equation locked in the human psyche that encodes morality. It's more like there are multiple (sometimes conflicting) moral values and methods for resolving conflicts between them and that the sum of these can be modeled as a large and complicated equation.

I'm sorry, there's an ambiguity there - when you say "the sum of these", are you summing across the moral values and imperatives of a single person, or of humanity as a whole?

You gave me the impression that Marvin valued "staying alive" less as an end in itself, and more as a means to achieve the end of improving the world, in particular when you said this:

You are quite correct. I apologise; I changed that example several times from where I started, and it seems that one of my last-minute changes actually made it a worse example (my aim was to try to show how the explicit aim of self-preservation could be a reasonable moral aim, but in the process I made it not a moral aim at all). I should watch out for that in the future.

This is actually something that bothers me in fiction, when a character who is superhumanly good and powerful (e.g. Superman, the Doctor) risks their life to save a relatively small number of people. It seems short-sighted of them to do that, since they regularly save much larger groups of people and anticipate continuing to do so in the future, so it seems like they should preserve their lives for those people's sakes.

I've always felt that was because one of the effects of great power, is that it's so very easy to let everyone die. With great power, as Spiderman is told, comes great responsibility; one way to ensure that you're not letting your own power go to your head, is by refusing to not-rescue anyone. After all, if the average science hero lets everyone he thinks is an idiot die, then who would be left?

Sometimes there's a different reason, though; Sherlock Holmes would ignore a straightforward and safe case to catch a serial killer in order to concentrate on a tricky and deadly case involving a stolen diamond; he wasn't in the detective business to help people, he was in the detective business in order to be challenged, and he would regularly refuse to take cases that did not challenge him.

(That's probably a fair example as well, actually; for Holmes, only the challenge, the mental stimulation of a worthy foe, is important; for Superman, what is important is the saving of lives, whether from a mindless tsunami or Lex Luthor's latest plot).

I think that we, and most humans, have the same basic desires; where we differ is in the object of those desires, and the priority of those desires.

Hmmm. If you're willing to accept zero, or near-zero, as a priority, then that statement can apply to any two sets of desires. Consider Sherlock Holmes and a paperclipper; Holmes' desire for mental stimulation is high-priority, his desire for paperclips is zero-priority, while the paperclipper's desire for paperclips is high priority, and its desire for mental stimulation is zero-priority. (Some desires may have negative priority, which can then be interpreted as a priority to avoid that outcome - for example, my desire to immerse my hand in acid is negative, but a masochist may have a positive priority for that desire)

This implies that, in order to meaningfully differentiate the above statement from "some people have different desires", I may have to designate some very low priority, below which the desire is considered absent (I may, of course, place that line at exactly zero priority). Some desires, however, may have no priority on their own, but inherit priority from another desire that they feed into; for example, a paperclipper has zero desire for self-preservation on its own, but it will desire self-preservation so that it can better create more paperclips.

Now, given a pool of potential goals, most people will pick out several desires from that pool, and there will be a large overlap between any two people (for example, most humans desire to eat - most but not all, there are certain eating disorders that can mess with that), and it is possible to pick out a set of desires that most people will have high priorities for.

It's even probably possible to pick out a (smaller) set of desires such that those who do not have those desires at some positive priority are considered psychologically unhealthy. But such people nonetheless do exist.
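As a toy encoding of this picture (the agents, desires, numbers, and the 0.05 threshold are all invented for illustration, not claims about anyone's actual psychology):

```python
# Desires as priority vectors: "different desires" becomes "near-zero priority",
# with a cutoff below which a desire counts as absent, and instrumental desires
# inheriting priority from the desire they serve. All values are placeholders.

from typing import Dict, Set

ABSENCE_THRESHOLD = 0.05  # below this, treat the desire as effectively absent

holmes = {"mental_stimulation": 0.9, "paperclips": 0.0, "eating": 0.4}
clippy = {"mental_stimulation": 0.0, "paperclips": 1.0, "eating": 0.0}

def effective_desires(priorities: Dict[str, float]) -> Set[str]:
    """Desires that count as 'present' once tiny priorities are dropped."""
    return {d for d, p in priorities.items() if p > ABSENCE_THRESHOLD}

def shared_desires(a: Dict[str, float], b: Dict[str, float]) -> Set[str]:
    return effective_desires(a) & effective_desires(b)

print(shared_desires(holmes, clippy))   # set() -- no meaningful overlap

# Instrumental (inherited) priority: the paperclipper has no terminal desire
# for self-preservation, but it inherits one from the goal it serves.
clippy["self_preservation"] = 0.8 * clippy["paperclips"]
```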

Does this mean it isn't bad to oppose paperclipping? I don't know, maybe, but maybe not.

In my personal view, it is neutral to paperclip or to oppose paperclipping. It becomes bad to paperclip only when the paperclipping takes resources away from something more important.

And there are circumstances (somewhat forced circumstances) where it could be good to paperclip.

For example, we might disagree on whether it is more important to be independent in our actions, or to follow the legitimate instructions of a suitably legitimate authority.

This seems like a difference in priority, rather than desire, as most people would prefer differing proportions of both. It's still a legitimate disagreement, but I think it's more about finding a compromise between conflicting priorities, rather than totally different values.

There exist people who would place negative value on the idea of following the instructions of any legitimate authority. (They tend to remain a small and marginal group, because they cannot in turn form an authority for followers to follow without rampant hypocrisy).

Compounding this problem is the fact that people value diversity to some extent. We don't value all types of diversity obviously, I think we'd all like to live in a world where people held unanimous views on the unacceptability of torturing innocent people. But we would like other people to be different from us in some ways. Most people, I think, would rather live in a world full of different people with different personalities than a world consisting entirely of exact duplicates (in both personality and memory) of one person. So it might be impossible to reach full agreement on those other values without screwing up the achievement of the Value of Diversity.

Yes, diversity has many benefits. The second-biggest benefit of diversity is that some people will be more correct than others, and this can be seen in the results they get; then everyone can re-diversify around the most correct group (a slow process, taking generations, as the most successful group slowly outcompetes the rest and thus passes their memes to a greater and/or more powerful proportion of the next generation). By a similar token, it means that if something happens that destroys one type of person, it doesn't destroy everyone (bananas have a definite problem there, being a bit of a monoculture).

The biggest benefit is that it leads to social interaction. A completely non-diverse society would have to be a hive mind (or different experiences would slowly begin to introduce diversity), and it would be a very lonely hive mind, with no-one to talk to.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-10-31T09:26:11.107Z · LW(p) · GW(p)

I'm sorry, there's an ambiguity there - when you say "the sum of these", are you summing across the moral values and imperatives of a single person, or of humanity as a whole?

Nearly all of humanity as a whole. There are obviously some humans who don't really value morality (we call them sociopaths), but I think most humans care about very similar moral concepts. The fact that people have somewhat different personal preferences and desires at first might seem to challenge this idea, but I don't really think it does. It just means that there are some desires that generate the same "value" of "good" when fed into the "equation." In fact, if diversity is a good, as we discussed previously, then people having different personal preferences might in fact be morally desirable.

Hmmm. If you're willing to accept zero, or near-zero, as a priority, then that statement can apply to any two sets of desires ... This implies that, in order to meaningfully differentiate the above statement from "some people have different desires", I may have to designate some very low priority, below which the desire is considered absent

That's a good point. I was considering using the word "proportionality" instead of "priority" to better delineate that I don't accept zero as a priority, but rejected it because it sounded clunky. Maybe I shouldn't have.

In my personal view, it is neutral to paperclip or to oppose paperclipping. It becomes bad to paperclip only when the paperclipping takes resources away from something more important.

I agree with that. What I'm wondering is, would I have a moral duty to share resources with a paperclipper if it existed, or would pretty much any of the things I'd spend the resources on if I kept them for myself (i.e. eudaemonic things) count as "something more important"?

There exist people who would place negative value on the idea of following the instructions of any legitimate authority.

I think there might actually be lots of people like this, but most appear normal because they place even greater negative value on doing something stupid because they ignored good advice just because it came from an authority. In other words, following authority is a negative terminal value, but an extremely positive instrumental value.

The biggest benefit is that it leads to social interaction. A completely non-diverse society would have to be a hive mind (or different experiences would slowly begin to introduce diversity), and it would be a very lonely hive mind, with no-one to talk to.

Exactly. I would still want the world to be full of a diverse variety of people, even if I had a nonsentient AI that was right about everything and could serve my every bodily need.

Replies from: CCC
comment by CCC · 2012-10-31T10:08:17.377Z · LW(p) · GW(p)

I'm sorry, there's an ambiguity there - when you say "the sum of these", are you summing across the moral values and imperatives of a single person, or of humanity as a whole?

Nearly all of humanity as a whole. There are obviously some humans who don't really value morality (we call them sociopaths), but I think most humans care about very similar moral concepts.

Okay then, next question: how do you decide which people to exclude? You say that you are excluding sociopaths, and I think that they should be excluded; but on exactly what basis? If you're excluding them simply because they fail to have the same moral imperatives as the ones that you think are important, then that sounds very much like a No True Scotsman argument to me. (I exclude them mainly on an argument of appeal to authority, myself, but that also has logic problems; in either case, it's a matter of first sketching out what the moral imperative should be, then throwing out the people who don't match).

And for a follow-up question; is it necessary to limit it to humanity? Let us assume that, ten years from now, a flying saucer lands in the middle of Durban, and we meet a sentient alien form of life. Would it be necessary to include their moral preferences in the equation as well?

Even if they are Pebblesorters?

In fact, if diversity is a good, as we discussed previously, then people having different personal preferences might in fact be morally desirable.

It may be, but only within a limited range. A serial killer is well outside that range, even if he believes that he is doing good by only killing "evil" people (for some definition of "evil").

What I'm wondering is, would I have a moral duty to share resources with a paperclipper if it existed, or would pretty much any of the things I'd spend the resources on if I kept them for myself (i.e. eudaemonic things) count as "something more important"?

Hmmm. I think I'd put "buying a packet of paperclips for the paperclipper" as on the same moral footing, more or less, as "buying an ice cream for a small child". It's nice for the person (or paperclipper) receiving the gift, and that makes it a minor moral positive by increasing happiness by a tiny fraction. But if you could otherwise spend that money on something that would save a life, then that clearly takes priority.

I think there might actually be lots of people like this, but most appear normal because they place even greater negative value on doing something stupid because they ignored good advice just because it came from an authority. In other words, following authority is a negative terminal value, but an extremely positive instrumental value.

Hmmm. Good point; that is quite possible. (Given how many people seem to follow any reasonably persuasive authority, though, I suspect that most people have a positive priority for this goal - this is probably because, for a lot of human history, peasants who disagreed with the aristocracy tended to have fewer descendants unless they all disagreed and wiped out said aristocracy).

Exactly. I would still want the world to be full of a diverse variety of people, even if I had a nonsentient AI that was right about everything and could serve my every bodily need.

Here's a tricky question - what exactly are the limits of "nonsentient"? Can a nonsentient AI fake it by, with clever use of holograms and/or humanoid robots, cause you to think that you are surrounded by a diverse variety of people even when you are not (thus supplying the non-bodily need of social interaction)? The robots would all be philosophical zombies, of course; but is there any way to tell?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-10-31T11:02:54.213Z · LW(p) · GW(p)

Okay then, next question: how do you decide which people to exclude?

I don't think I'm coming across right. I'm not saying that morality is some sort of collective agreement of people in regards to their various preferences. I'm saying that morality is a series of concepts such as fairness, happiness, freedom etc., that these concepts are objective in the sense that it can be objectively determined how much fairness, freedom, happiness etc. there is in the world, and that the sum of these concepts can be expressed as a large equation.

People vary in their preference for morality; most people care about fairness, freedom, happiness, etc. to some extent. But there are some people who don't care about morality at all, such as sociopaths.

Morality isn't a preference. It isn't the part of a person's brain that says "This society is fair and free and happy, therefore I prefer it." Morality is those disembodied concepts of freedom, fairness, happiness, etc. So if a person doesn't care about those things, it doesn't mean that freedom, fairness, happiness, etc. aren't part of their morality. It means that person doesn't care about morality; they care about something else.

To use the Pebblesorter analogy again, the fact that you and I don't care about sorting pebbles into prime-numbered heaps isn't because we have our own concept of "primeness" that doesn't include 2, 3, 5 and 7. It just means we don't care about primeness.

To make another analogy, if most people preferred wearing wool clothes but one person preferred cotton, that wouldn't mean that that person had their own version of wool, which was cotton. It means that that person doesn't prefer wool.

Look inward, and consider why you think most people should be included. Presumably it's because you really care a lot about being fair. But that necessarily means that you cared about fairness before you even considered what other people might think. Otherwise it wouldn't have even occurred to you to think about what they preferred in the first place.

The fact that most humans care, to some extent, about the various facets of morality is a very lucky thing; a planet full of sociopaths would be most unpleasant. But it isn't relevant to the truth of morality. You'd still think torturing people was bad if all the non-sociopaths on Earth except you were killed, wouldn't you? If, in that devastated world, you came across a sociopath torturing another sociopath or an animal, and could stop them at no risk to yourself, you'd do it, wouldn't you?

You say that you are excluding sociopaths, and I think that they should be excluded; but on exactly what basis?

I suspect that your intuition comes from the fact that a central part of morality is fairness, and sociopaths don't care about fairness. Obviously being fair to the unfair is as unwise as tolerating the intolerant.

And for a follow-up question; is it necessary to limit it to humanity? Let us assume that, ten years from now, a flying saucer lands in the middle of Durban, and we meet a sentient alien form of life. Would it be necessary to include their moral preferences in the equation as well?

Again, I want to emphasize that morality isn't the "preference" part, it's the "concepts" part. But the question of the moral significance of aliens is relevant; I think it would depend on how many of the concepts that make up morality they cared about. I think that at a bare minimum they'd need fairness and sympathy.

So if the Pebblesorters that came out of that ship were horrified that we didn't care about primality, but were willing to be fair and share the universe with us, they'd be a morally worthwhile species. But if they had no preference for fairness or any sympathy at all, and would gladly kill a billion humans to sort a few more pebbles, that would be a different story. In that case we should probably, after satisfying ourselves that all Pebblesorters were psychologically similar, start prepping a Relativistic Kill Vehicle to point at their planet if they try something.

Here's a tricky question - what exactly are the limits of "nonsentient"? Can a nonsentient AI fake it by, with clever use of holograms and/or humanoid robots, cause you to think that you are surrounded by a diverse variety of people even when you are not (thus supplying the non-bodily need of social interaction)? The robots would all be philosophical zombies, of course; but is there any way to tell?

I don't know if I could tell, but I'd very much prefer that the AI not do that, and would consider myself to have been massively harmed if it did, even if I never found out. My preference is to actually interact with a diverse variety of people, not to merely have a series of experiences that seem like I'm doing it.

Replies from: TheOtherDave, CCC
comment by TheOtherDave · 2012-10-31T15:12:46.260Z · LW(p) · GW(p)

So, OK. Suppose, on this account, that you and I both care about morality to the same degree... that is, you don't care about morality more than I do, and I don't care about morality more than you do. (I'm not sure how we could ever know that this was the case, but just suppose hypothetically that it's true.)

Suppose we're faced with a situation in which there are two choices we can make. Choice A causes a system to be more fair, but less free. Choice B leaves that system unchanged. Suppose, for simplicity's sake, that those are the only two choices available, and we both have all relevant information about the system.

On your account, will we necessarily agree on which choice to make? Or is it possible, in that situation, that you might choose A and I choose B, or vice-versa?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-10-31T21:12:44.270Z · LW(p) · GW(p)

I think it depends on the degree of the change. If the change is very lopsided (e.g. -100 freedom, +1 fairness) I think we'd both choose B.

If we assume that the degree of change is about the same (e.g. +1 fairness, -1 freedom) it would depend on how much freedom and fairness already exist. If the system is very fair, but very unfree, we'd both choose B, but if it's very free and very unfair we'd both choose A.

However, if we are to assume that the gain in fairness and the loss in freedom are of approximately equivalent size and the current system has fairly large amounts of both freedom and fairness (which I think is what you meant) then it might be possible that we'd have a disagreement that couldn't be resolved with pure reasoning.

This is called moral pluralism, the idea that there might be multiple moral values (such as freedom, fairness, and happiness) which are objectively correct, imperfectly commensurable with each other, and can be combined in different proportions that are of approximately equivalent objective moral value. If this is the case then your preference for one set of proportions over the other might be determined by arbitrary factors of your personality.

This is not the same as moral relativism, as these moral values are all objectively good, and any society that severely lacks one of them is objectively bad. It's just that there are certain combinations with different proportions of values that might be both "equally good," and personal preferences might be the "tiebreaker." To put it in more concrete terms, a social democracy with low economic regulation and a small welfare state might be "just as good" as a social democracy with slightly higher economic regulation and a slightly larger welfare state, and people might honestly and irresolvably disagree over which one is better. However, both of those societies would definitely be objectively better than Cambodia under the Khmer Rouge, and any rational, fully informed person who cares about morality would be able to see that.
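As a purely numerical illustration (the aggregator and the scores are invented for the example, not a claim about how to actually measure freedom or fairness):

```python
# Toy illustration of pluralism: several mixes of values can come out about
# equally good, while a society that severely lacks them scores far worse.

import math

def society_score(freedom: float, fairness: float) -> float:
    """Concave in each value, so severe deprivation in either hurts a lot."""
    return math.sqrt(freedom) + math.sqrt(fairness)

print(round(society_score(70, 60), 2))   # low-regulation social democracy: 16.11
print(round(society_score(60, 70), 2))   # higher-regulation variant:       16.11
print(round(society_score(2, 1), 2))     # Khmer Rouge-style regime:          2.41
# The first two are effectively tied -- personal taste could break the tie --
# but both are unambiguously better than the third.
```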

Of course, if we are both highly rational and moral, and disagreed about A vs. B, we'd both agree that fighting over them excessively would be morally worse than choosing either of them, and find some way to resolve our disagreement, even if it meant flipping a coin.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-31T22:14:48.604Z · LW(p) · GW(p)

I agree with you that in sufficiently extreme cases, we would both make the same choice. Call that set of cases S1.

I think you're saying that if the case is not that extreme, we might not make the same choice, even though we both care equally about the thing you're using "morality" to refer to. I agree with that as well. Call that set of cases S2.

I also agree that even in S2, there's a vast class of options that we'd both agree are worse than either of our choices (as you illustrate with the Khmer Rouge), and a vast class of options that we'd both agree are better than either of our choices, supposing that we are as you suggest rational informed people who care about the thing you're using "morality" to refer to.

If I'm understanding you, you're saying in S2 we are making different decisions, but our decisions are equally good. Further, you're saying that we might not know that our decisions are equally good. I might make choice A and think choice B is wrong, and you might make choice B and think choice A is wrong. Being rational and well-informed people we'd agree that both A and B are better than the Khmer Rouge, and we might even agree that they're both better than fighting over which one to adopt, but it might still remain true that I think B is wrong and you think A is wrong, even though neither of us thinks the other choice is as wrong as the Khmer Rouge, or fighting about it, or setting fire to the building, or various other wrong things we might choose to evaluate.

Have I followed your position so far?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-01T04:23:51.269Z · LW(p) · GW(p)

Yes, I think so.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-01T05:16:04.251Z · LW(p) · GW(p)

OK, good.

It follows that if a choice can go one of three ways (c1, c2, c3) and if I think c1> c2 > c3 and therefore endorse c1, and if you think c2 > c1 > c3 and therefore endorse c2, and if we're both rational informed people who are in possession of the same set of facts about that choice and its consequences, and if we each think that the other is wrong to endorse the choice we endorse (while still agreeing that it's better than c3), that there are (at least) two possibilities.

One possibility is that c1 and c2 are, objectively, equally good choices, but we each think the other is wrong anyway. In this case we both care about morality, even though we disagree about right action.

Another possibility is that c1 and c2 are, objectively, not equally good. For example, perhaps c1 is objectively bad, violates morality, and I endorse it only because I don't actually care about morality. Of course, in this case I may use the label "morality" to describe what I care about, but that's at best confusing and at worst actively deceptive, because what I really care about isn't morality at all, but some other thing, like prime-numbered heaps or whatever.

Yes?

So, given that, I think my question is: how might I go about figuring out which possibility is the case?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-01T06:04:40.817Z · LW(p) · GW(p)

One possibility is that c1 and c2 are, objectively, equally good choices, but we each think the other is wrong anyway.

I'd say it's misleading to say we thought the other person was "wrong," since in this context "wrong" is a word usually used to describe a situation where someone is in objective moral error. It might be better to say: "c1 and c2 are, objectively, equally morally good, but we each prefer a different one for arbitrary, non-moral reasons."

This doesn't change your argument in any way, I just think it's good to have the language clear to avoid accidentally letting in any connotations that don't belong.

So, given that, I think my question is: how might I go about figuring out which possibility is the case?

This is not something I have done a lot of thinking on, since the odds of ever encountering such a situation are quite low at present. It seems to me, however, that if you are this fair to your opponent, and care this much about finding out the honest truth, you probably care at least somewhat about morality.

(This brings up an interesting question, which is: might there be some "semi-sociopathic" humans who care about morality, but incrementally, not categorically? That is, if one of these people was rational, fully informed, lacking in self-deception, and lacked akrasia, they would devote maybe 70% of their time and effort to morality and 30% to other things? Such a person, if compelled to be honest, might admit that c2 is morally worse than c1, but they don't care because they've used up their 70% of moral effort for the day. It doesn't seem totally implausible that such people might exist, but maybe I'm missing something about how moral psychology works; maybe it doesn't work unless it's all or nothing.)

As for determining if your opponent cares about morality, you might look to see if they exhibit any of the signs of sociopathy. You might search their arguments for signs of anti-epistemology, or plain moral errors. If you don't notice any of these things, you might assign a higher probability to the prediction that your disagreement is due to preferring different forms of pluralism.

Of course, in real life perfectly informed, rational humans who lack self-deception, akrasia, and so on do not exist. So you should probably assign a much, much, much higher probability to one of those things causing your disagreement.

Replies from: TheOtherDave, CCC
comment by TheOtherDave · 2012-11-01T15:51:39.361Z · LW(p) · GW(p)

It might be better to say: "c1 and c2 are, objectively, equally morally good, but we each prefer a different one for arbitrary, non-moral reasons."

OK. In which case I can also phrase my question as, when I choose c1 over c2, how can I tell whether I'm making that choice for objective moral reasons, as opposed to making that choice for arbitrary non-moral reasons?

You're right that it doesn't really change the argument, I'm just trying to establish some common language so we can communicate clearly.

For my own part, I agree with you that ignorance and akrasia are major influences, and I also believe that what you describe as "incremental caring about morality" is pretty common (though I would describe it as individual values differing).

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-01T19:20:00.839Z · LW(p) · GW(p)

Wikipedia's page on internalism and externalism calls an entity that understands moral arguments, but is not motivated by them, an "amoralist." We could say that a person who cares about morality incrementally has individual values that are part moralist and part amoralist.

It's hard to tell how many people are like this, due to the confounding factors of irrationality and akrasia. But I think it's possible that there are some people who, if their irrationality and akrasia were cured, would not act perfectly morally. These people would say "I know that the world would be a better place if I acted differently, but I only care about the world to a limited extent."

However, considering that these people would be rational and lack akrasia, they would still probably do more moral good than the average person does today.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-01T21:30:15.036Z · LW(p) · GW(p)

These people would say "I know that the world would be a better place if I acted differently, but I only care about the world to a limited extent."

Would they necessarily say that? Or might they instead say "I know you think the world would be a better place if I acted differently, but actually it seems to me the world is better if I do what I'm doing?"

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-06T01:05:49.728Z · LW(p) · GW(p)

Would they necessarily say that? Or might they instead say "I know you think the world would be a better place if I acted differently, but actually it seems to me the world is better if I do what I'm doing?"

That depends on whether they are using the term "better" to mean "morally better" or "more effectively satisfies the sum total of all my values, both moral and non-moral."

If the person is fully rational, lacking in self-deception, and being totally honest with you, and you and they had both agreed ahead of time that the word "better" means "morally better," then yes, I think they would say that. If they were lying, or they thought you were using the term "better" to mean "more effectively satisfies the sum total of all my values, both moral and non-moral," then they might not.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-06T02:33:05.743Z · LW(p) · GW(p)

I agree with you that IF (some of) my values are not moral and I choose to maximally implement my values, THEN I'm choosing not to act so as to make the world a morally better place, and IF I somehow knew that those values were not moral, THEN I would say as much if asked (supposing I was aware of my values and I was honest and so forth).

But on your account I still don't see any way for me to ever know which of my values are moral and which ones aren't, no matter how self-aware, rational, or lacking in self-deception I might be.

Also, even if I did somehow know that, and were honest and so forth, I don't think I would say "I only care about the world to a limited extent." By maximizing my values as implemented in the world, I would be increasing the value of the world, which is one way to express caring about the world. Rather, I would say "I only care about moral betterness to a limited extent; there are more valuable things for the world to be than morally better."

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-06T08:46:51.356Z · LW(p) · GW(p)

But on your account I still don't see any way for me to ever know which of my values are moral and which ones aren't, no matter how self-aware, rational, or lacking in self-deception I might be.

How does a Pebblesorter know its piles are prime? The less intelligent and rational probably use some sort of vague intuition. The more intelligent and rational probably try dividing the number of pebbles by numbers other than one and the pile size itself.
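A minimal sketch of that trial-division check, assuming a Pebblesorter willing to write Python; the function name and the sample pile sizes are just illustrations, not anything specified in the thread:

```python
def is_prime(pile_size: int) -> bool:
    """A pile is 'correct' exactly when its pebble count is prime."""
    if pile_size < 2:
        return False
    divisor = 2
    while divisor * divisor <= pile_size:
        if pile_size % divisor == 0:  # a divisor other than 1 and the pile itself
            return False
        divisor += 1
    return True

# A more rational Pebblesorter checking a few candidate heaps:
for n in (9, 11, 13, 16):
    print(n, is_prime(n))  # 9 -> False, 11 -> True, 13 -> True, 16 -> False
```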

If you had full knowledge of the concept of "morality" and all the various sub-concepts it included, you could translate that concept into a mathematical equation (the one I've been discussing with CCC lately), and see whether the various values of yours that you feed into it return positive numbers.

If your knowledge is more crude (i.e. if you're a real person who actually exists), then a possible way to do it would be to divide the nebulous super-concept of "morality" into a series of more concrete and clearly defined sub-concepts that compose it (e.g. freedom, happiness, fairness). It might also be helpful to make a list of sub-concepts that are definitely not part of morality (possible candidates include malice, sadism, anhedonia, and xenophobia).

After doing that you could, if you are not self-deceived, use introspection to figure out what you value. If you find that your values include the various moral sub-concepts, then it seems like you value morality. If you find yourself not valuing the moral sub-concepts, or valuing some nonmoral concept, then you do not value morality, or value non-moral things.

As Eliezer puts it:

There is no pure ghostly essence of goodness apart from things like truth, happiness and sentient life.

The moral equation we are looking for isn't something that will provide us with a ghostly essence. It is something that will allow us to sum up and aggregate all the separate good things like truth, happiness, and sentient life, so that we can effectively determine how good various combinations of these things are relative to each other, and reach an optimal combo.
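As a toy illustration of "sum up and aggregate" (and nothing more): if we somehow already had the correct list of sub-concepts and their weights, the aggregation could be as simple as a weighted sum. The sub-concepts, weights, and linear form below are all assumptions made for the sake of the sketch, not the equation itself:

```python
# Hypothetical sub-concepts and weights; the real "equation" is unknown and
# almost certainly not this simple.
MORAL_WEIGHTS = {"truth": 1.0, "happiness": 1.5, "freedom": 1.2, "fairness": 1.3}

def moral_score(world_state: dict) -> float:
    """Aggregate how much of each good thing a world-state contains."""
    return sum(weight * world_state.get(concept, 0.0)
               for concept, weight in MORAL_WEIGHTS.items())

# Comparing two hypothetical combinations of the same good things:
world_a = {"truth": 0.8, "happiness": 0.6, "freedom": 0.7, "fairness": 0.9}
world_b = {"truth": 0.8, "happiness": 0.9, "freedom": 0.3, "fairness": 0.4}
print(moral_score(world_a), moral_score(world_b))
```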

Do you want people to be happy, free, be treated fairly, etc? Then you value morality to some extent. Do you love torturing people just for the hell of it, or want to convert all the matter in the universe into paperclips? Then you, at the very least, definitely value other things than morality.

Also, even if I did somehow know that, and were honest and so forth, I don't think I would say "I only care about the world to a limited extent." By maximizing my values as implemented in the world, I would be increasing the value of the world, which is one way to express caring about the world.

By "caring" I meant "caring about whether the world is a good and moral place." If you instead use the word "caring" to mean "have values that assign different levels of desirability to various possible states that the world could be in" then you are indeed correct that you would not say you didn't care about the world.

Rather, I would say "I only care about moral betterness to a limited extent; there are more valuable things for the world to be than morally better."

If by "valuable" you mean "has more of the things that I care about," then yes, you could say that. Remember, however, that in that case what is "valuable" is subjective, it changes from person to person depending on their individual utility functions. What is "morally valuable," by contrast, is objective. Anyone regardless of their utility function, can agree on whether or not the world has great quantities of things like truth, freedom, happiness, and sentient life. What determines the moral character of a person is how much they value those particular things.

Also, as an aside, when I mentioned concepts that probably aren't part of morality earlier, I did not mean to say that pursuit of those concepts always necessarily leads to immoral results. For instance, imagine a malicious sadist who wants to break someone's knees. This person assaults someone else out of pure malice and breaks their knees. The injured person turns out to be an escaped serial killer who was about to kill again, and the police are able to apprehend them in their injured state. In this case the malicious person has done good. However, this is not because they have intentionally increased the amount of malicious torture in the universe. It is because they accidentally decreased the number of murders in the universe.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-06T17:10:45.564Z · LW(p) · GW(p)

I 100% agree that there is no ghostly essence of goodness.

I agree that pursuing amoral, or even immoral, values can still lead to moral results. (And also vice-versa.)

I agree that if I somehow knew what was moral and what wasn't, then I would have a basis for formally distinguishing my moral values from my non-moral values even when my intuitions failed. I could even, in principle, build an automated mechanism for judging things as moral or non-moral. (Similarly, if a Pebblesorter knew that primeness was what it valued and knew how to factor large numbers, it would have a basis for formally distinguishing piles it valued from piles it didn't value even when its intuitive judgments failed, and it could build an automated mechanism for distinguishing such piles.)

I agree with you that real people who actually exist can't do this, at least not in detail.

You suggest we can divide morality into the subconcepts that compose it (freedom, happiness, fairness, etc.) and identify the subconcepts it excludes (anhedonia, etc.). What I continue to not get is, on your account, how I do that in such a way as to ensure that what I end up with is the objectively correct list of moral values, which on your account exists, rather than some different list of values.

That is, suppose Sam and George both go through this exercise, and one of them ends up with "freedom" on their list but not "cooperation", and the other ends up with "cooperation" but not "freedom." On your account it seems clear that at least one of them is wrong, because the correct list of moral values is objective.

So, OK... what would we expect to experience if Sam were right? How does that differ from what we would expect to experience if George were right, or if neither of them were?

Do you want people to be happy, free, be treated fairly, etc? Then you value morality to some extent.

Again: how do we know that? What would I expect to experience differently if, instead, happiness, freedom, fairness, etc. turned out not to be aspects of morality, just like maximizing paperclips does? What should I be looking for, to notice if this is true, or confirm that it isn't? I would still want people to be happy, free, be treated fairly, etc. in either case, after all. What differences would I experience between the two cases?

If you instead use the word "caring" to mean "have values that assign different levels of desirability to various possible states that the world could be in"

Yes, that's more or less what I mean by "caring". More precisely I would say that caring about X consists of desiring states of the world with more X more than states of the world with less X, all else being equal, but that's close enough to what you said.

If by "valuable" you mean "has more of the things that I care about," then yes, you could say that. Remember, however, that in that case what is "valuable" is subjective, it changes from person to person depending on their individual utility functions.

Yes, that's what I mean by "valuable." And yes, absolutely, what is valuable changes from person to person. If I act to maximize my values and you act to maximize yours we might act in opposition (or we might not, depending, but it's possible).

And I get that you want to say that if we both gave up maximizing our values and instead agreed to implement moral values, then we would be cooperating instead, and the world would be better (even if it turned out that both of us found it less valuable). What I'm asking you is how (even in principle) we could ever reach that point.

To say that a little differently: you value some things (Vg) and I value some things (Vd). Supposing we are both perfectly rational and honest and etc., we can both know what Vg and Vd are, and what events in the world would maximize each. We can agree to cooperate on maximizing the intersection of (Vg,Vd), and we can work out some pragmatic compromise about the non-overlapping stuff. So far so good; I see how we could in principle reach that point, even if in practice we aren't rational or self-aware or honest enough to do it.

But I don't see how we could ever say "There's this other list, Vm, of moral values; let's ignore Vg and Vd altogether and instead implement Vm!" because I don't see how we could ever know what Vm was, even in principle. If we happened to agree on some list Vm, either by coincidence or due to social conditioning or for other reasons, we could agree to implement Vm... which might or might not make the world better, depending on whether Vm happened to be the objectively correct list of moral values. But I don't see how we could ever, even in principle, confirm or deny this, or correct it if we somehow came to know we had the objectively wrong list.

And if we can't know or confirm or deny or correct it, even in principle, then I don't see what is added by discussing it. It seems to me I can just as usefully say, in this case, "I value happiness, freedom, fairness, etc. I will act to maximize those values, and I endorse acting this way," and nothing is added by saying "Those values comprise morality" except that I've asserted a privileged social status for my values.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-06T20:04:12.375Z · LW(p) · GW(p)

So, OK... what would we expect to experience if Sam were right? How does that differ from what we would expect to experience if George were right, or if neither of them were? ... Again: how do we know that? What would I expect to experience differently if, instead, happiness, freedom, fairness, etc. turned out not to be aspects of morality, just like maximizing paperclips does?

Well, I am basically asserting that morality is some sort of objective equation, or "abstract idealized dynamic," as Eliezer calls it, concerned with people's wellbeing. And I am further asserting that most human beings care very much about this concept. I think this would make the following predictions:

  1. In a situation where a given group of humans had similar levels of empirical knowledge and a similar sanity waterline, there would be far more moral agreement among them than would be predicted by chance, and far less moral disagreement than is mentally possible.

  2. It is physically possible to persuade people to change their moral values by reasoned argument.

  3. Inhabitants of a society who are unusually rational and intelligent will be the first people in that society to make moral progress, as they will be better at extrapolating answers out of the "equation."

  4. If one attempted to convert the moral computations people make into an abstract, idealized process, and determine its results, many people would find those results at least somewhat persuasive, and might find their ethical views changed by observing them.

All of these predictions appear to be true:

  1. Human societies tend to have a rather high level of moral agreement between their members. Conformity is not necessarily an indication of rightness; it seems fairly obvious that whole societies have held gravely mistaken moral views, such as those that believed slavery was good. However, it is interesting that all those people in those societies were mistaken in exactly the same way. That seems like evidence that they were all reasoning towards similar conclusions, and the mistakes they made were caused by common environmental factors that impacted all of them. There are other theories that explain this data, of course (peer pressure, for instance), but I still find it striking.

  2. I've had moral arguments made by other people change my mind, and changed the minds of other people by moral argument. I'm sure you have also had this experience.

  3. It is well known that intellectuals tend to develop and adopt new moral theories before the general populace does. Common examples of intellectuals whose moral concepts have disseminated into the general populace include John Locke, Jeremy Bentham, and William Lloyd Garrison. Many of these peoples' principles have since been adopted into the public consciousness.

  4. Ethical theorists who have attempted to derive new ethical principles by working from an abstract, idealized form of ethics have often been very persuasive. To name just one example, Peter Singer ended up turning thousands of people into vegetarians with moral arguments that started on a fairly abstract level.

It seems to me I can just as usefully say, in this case, "I value happiness, freedom, fairness, etc. I will act to maximize those values, and I endorse acting this way," and nothing is added by saying "Those values comprise morality"

Asserting that those values comprise morality seems to be effective because it seems to most people that those values are related in some way, because they form the superconcept "morality." Morality is a useful catchall term for certain types of values, and it would be a shame to lose it.

Still, I suppose that asserting "I value happiness, freedom, fairness, etc" is similar enough to saying "I care about morality" that I really can't object terribly strongly if that's what you'd prefer to do.

except that I've asserted a privileged social status for my values.

Why does doing that bother you? Presumably, because you care about the moral concept of fairness, and don't want to claim an unfair level of status for you and your views. But does it really make sense to say "I care about fairness, but I want to be fair to other people who don't care about it, so I'll go ahead and let them treat people unfairly, in order to be fair"? That sounds silly, doesn't it? It has the same problems that come with being tolerant of intolerant people.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-06T20:27:30.812Z · LW(p) · GW(p)

I think this would make the following predictions:

All of those predictions seem equally likely to me whether Sam is right or George is, so they don't really engage with my question at all. At this point, after several trips 'round the mulberry bush, I conclude that this is not because I'm being unclear with my question but rather because you're choosing not to answer it, so I will stop trying to clarify the question further.

If I map your predictions and observations to the closest analogues that make any sense to me at all, I basically agree with them.

I suppose that asserting "I value happiness, freedom, fairness, etc" is similar enough to saying "I care about morality" that I really can't object terribly strongly if that's what you'd prefer to do.

It is.

Why does doing that [asserting a privileged social status for my values] bother you?

It doesn't bother me; it's a fine thing to do under some circumstances. If we can agree that that's what we're doing when we talk about "objective morality," great. If not (which I find more likely), never mind.

Presumably, because you care about the moral concept of fairness, and don't want to claim an unfair level of status for you and your views.

As above, I don't see what the word "moral" is adding to this sentence. But sure, unfairly claiming status bothers me to the extent that I care about fairness. (That said, I don't think claiming status by describing my values as "moral" is unfair; pretty much everybody has an equal ability to do it, and indeed they do. I just think it confuses any honest attempt at understanding what's really going on when we decide on what to do.)

But does it really make sense to say "I care about fairness, but I want to be fair to other people who don't care about it, so I'll go ahead and let them treat people unfairly, in order to be fair."

It depends on why and how I value ("care about") fairness.

If I value it instrumentally (which I do), then it makes perfect sense to say that being fair to people who treat others unfairly is net-valuable, although it might be true or false in any given situation depending on what is achieved by the various kinds of fairness that exist in tension in that situation.

Similarly, if I value it in proportion to how much of it there is (which I do), then it makes sense to say that, although it might be true or false depending on how much fairness is gained or lost by doing so.

That sounds silly, doesn't it?

(nods) Totally. And the ability to phrase ideas in silly-sounding ways is valuable for rhetorical purposes, although it isn't worth much as an analytical tool.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-07T00:46:58.321Z · LW(p) · GW(p)

All of those predictions seem equally likely to me whether Sam is right or George is, so they don't really engage with my question at all.

I'm really sorry; I was trying to kill two birds with one stone and simultaneously engage that question and your later question ["What would I expect to experience differently if, instead, happiness, freedom, fairness, etc. turned out not to be aspects of morality, just like maximizing paperclips does?"], and I ended up doing a crappy job of answering both of them. I'll try to just answer the Sam and George question now.

I'll start by examining the Pebblesorters P-George and P-Sam. P-George thinks 9 is p-right and 16 is p-wrong. P-Sam thinks 9 is p-wrong and 16 is p-right. They both think they are using the word "p-right" to refer to the same abstract, idealized process. What can they do to see which one is right?

  1. They assume that most other Pebblesorters care about the same abstract process they do, so they can try to persuade them and see how successful they are. Of course, even if all the Pebblesorters agree with one of them, that doesn't necessarily mean that one is p-correct; those sorters may be making the same mistake as P-George or P-Sam. But I think it's non-zero Bayesian evidence of the p-rightness of their views.
  2. They can try to control for environmentally caused error by seeing if they can also persuade Pebblesorters who live in different environments and cultures.
  3. They can find the most rational and p-sane Pebblesorting societies and see if they have an easier time persuading them.
  4. They can actually try to extrapolate what the abstract, idealized equation that the word "p-right" represents is and compare it to their views. They read up on Pebblesorter philosophers' theories of p-rightness and see how they correlate with their views. Pebblesorting is much simpler than morality, so we know that the abstract, idealized dynamic that the concept "p-right" represents is "primality." So we know that P-Sam and P-George are both partly right and partly wrong: 9 and 16 both aren't prime.

Now let's translate that into human.

We would expect if Sam was right and George was wrong:

  1. He would have an easier time persuading non-sociopathic humans of the rightness of his views than George, because his views are closer to the results of the equation those people have in their head.

  2. If he went around to different societies with different moral views and attempted to persuade the people there of his views he should, on average, also have an easier time of it than George, again because his views are closer to the results of the equation those people have in their head.

  3. Societies with higher levels of sanity and rationality should be especially easily persuaded, because they are better at determining what the results of that equation would be.

  4. When Sam compared his and George's views to views generated by various attempts by philosophers to create an abstract idealized version of the equation (ie. moral theories), his view should be a better match to many of them, and the results they generate, than George's are.

The problem is that the concept of morality is far more complex than the concept of primality, so finding the right abstract idealized equation is harder for humans than it is for Pebblesorters. We still haven't managed to do it yet. But I think that by comparing Sam and George's views to the best approximations we have so far (various forms of consequentialism, in my view), we can get some Bayesian evidence of the rightness of their views.

If George is right, he will achieve these results instead of Sam. If they are both wrong, they will both fail at doing these things.

If I value it instrumentally (which I do), then it makes perfect sense to say that being fair to people who treat others unfairly is net-valuable, although it might be true or false in any given situation depending on what is achieved by the various kind of fairness that exist in tension in that situation.

Sorry, I was probably being unclear as to what I meant because I was trying to sound clever. When I said it was silly to be fair to unfair people what I meant was that you should not regard their advice on how to best treat other people with the same consideration you'd give to a fair-minded person's advice.

For instance, you wouldn't say "I think it's wrong to enslave black people, but that guy over there thinks it's right, so let's compromise and believe it's okay to enslave them 50% of the time." I suppose you might pretend to believe that if the other guy had a gun and you didn't, but you wouldn't let his beliefs affect yours.

I did not mean that, for example, if you, two fair-minded people, and one unfair-minded person are lost in the woods and find a pie, that you shouldn't give the unfair-minded person a quarter of the pie to eat. That is an instance where it does make sense to treat unfair people fairly.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-07T01:08:30.360Z · LW(p) · GW(p)

OK. Thanks for engaging with the question; that was very helpful. I now have a much better understanding of what you believe the differences-in-practice between moral and non-moral values are.

Just to echo back what I'm hearing you say: to the extent that some set of values Vm is easier to convince humans to adopt than other sets of values and easier to convince sane, rational societies to adopt than less sane, less rational societies and better approximates the moral theories created by philosophers than other sets of values, to that extent we can be confident that Vm is the set of values that comprise morality.

Did I get that right?

Regarding fairmindedness: I endorse giving someone's advice consideration to the extent that I'm confident that considering their advice will implement my values. And, sure, it's unlikely that the advice of an unfairminded person would, if considered, implement the value of fairness.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-07T01:51:13.076Z · LW(p) · GW(p)

Did I get that right?

Yes, all those things provide small bits of Bayesian evidence that Vm is closer to morality than some other set of values.

comment by CCC · 2012-11-01T07:45:59.304Z · LW(p) · GW(p)

This brings up an interesting question, which is: might there be some "semi-sociopathic" humans who care about morality, but incrementally, not categorically?

It seems very likely that a person who cares a certain amount about morality, and a certain amount about money, would be willing to compromise his morality if given sufficient money. Such a mental model would form the basis of bribery. (It doesn't have to be money, either, but the principle remains the same).

So a semi-sociopathic person would be anyone who could be bribed into completely disregarding morality.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-01T16:13:25.231Z · LW(p) · GW(p)

a semi-sociopathic person would be anyone who could be bribed into completely disregarding morality.

On this account, we could presumably also categorize a semi-semi-sociopathic person as one who could be bribed into partially disregarding the thing we're labeling "morality". And of course bribes needn't be money... people can be bribed by all kinds of things. Social status. Sex. Pleasant experiences. The promise of any or all of those things in the future.

Which is to say, we could categorize a semi-semi-sociopath as someone who cares about some stuff, and makes choices consistent with maximizing the stuff they care about, where some of that stuff is what we're labeling "morality" and some of it isn't.

We could also replace the term "semi-semi-sociopath" with the easier to pronounce and roughly equivalent term "person".

It's also worth noting that there probably exists stuff that we would label "morality" in one context and "bribe" in another, were we inclined to use such labels.

comment by CCC · 2012-11-01T07:33:30.754Z · LW(p) · GW(p)

I don't think I'm coming across right. I'm not saying that morality is some sort of collective agreement of people in regards to their various preferences. I'm saying that morality is a series of concepts such as fairness, happiness, freedom etc., that these concepts are objective in the sense that it can be objectively determined how much fairness, freedom, happiness etc. there is in the world, and that the sum of these concepts can be expressed as a large equation.

Ah, I think I see your point. What you're saying - and correct me if I'm wrong - is that there is some objective True Morality, some complex equation that, if applied to any possible situation, will tell you how moral a given act is.

This is probably true.

This equation isn't written into the human psyche; it exists independently of what people think about morality. It just is. And even if we don't know exactly what the equation is, even if we can't work out the morality of a given act down to the tenth decimal place, we can still apply basic heuristics and arrive at a usable estimate in most situations.

My question is, then - assuming the above is true, how do we find that equation? Does there exist some objective method whereby you, I, a Pebblesorter, and a Paperclipper can all independently arrive at the same definition for what is moral (given that the Pebblesorter and Paperclipper will almost certainly promptly ignore the result)?

(I had thought that you were proposing that we find that equation by summing across the moral values and imperatives of humanity as a whole - excluding the psychopaths. This is why I asked about the exclusion, because it sounded a lot like writing down what you wanted at the end of the page and then going back and discarding the steps that wouldn't lead there; that is also why I asked about the aliens).

I don't know if I could tell, but I'd very much prefer that the AI not do that, and would consider myself to have been massively harmed if it did, even if I never found out. My preference is to actually interact with a diverse variety of people, not to merely have a series of experiences that seem like I'm doing it.

Yes, I think we're in agreement on that. (Though this does suggest that 'sentient' may need a proper definition at some point).

Replies from: nshepperd, Ghatanathoah
comment by nshepperd · 2012-11-01T09:35:41.695Z · LW(p) · GW(p)

What you're saying - and correct me if I'm wrong - is that there is some objective True Morality, some complex equation that, if applied to any possible situation, will tell you how moral a given act is.

In the same way as there exists a True Set of Prime Numbers, and True Measure of How Many Paperclips There Are...

comment by Ghatanathoah · 2012-11-01T08:56:18.696Z · LW(p) · GW(p)

My question is, then - assuming the above is true, how do we find that equation?

Even though the equation exists independently of our thoughts (the same way primality exists independently from Pebblesorter thoughts), the fact that we are capable of caring about the results given by the equation means we must have some parts of it "written" in our heads, the same way Pebblesorters must have some concept of primality "written" in their heads. Otherwise, how would we be capable of caring about its results?

I think that probably evolution metaphorically "wrote" a desire to care about the equation in our heads because if humans care about what is good and right it makes it easier for them to cooperate and trust each other, which has obvious fitness advantages. Of course, the fact that evolution did a good thing by causing us to care about morality doesn't mean that evolution is always good, or that evolutionary fitness is a moral justification for anything. Evolution is an amoral force that causes many horrible things to happen. It just happened that in this particular instance, evolution's amoral metaphorical "desires" happened to coincide with what was morally good. That coincidence is far from the norm; in fact, evolution probably deleted morality from the brains of sociopaths because double-crossing morally good people also sometimes confers a fitness advantage.

So how do we learn more about this moral equation that we care about? One common form of attempting to get approximations of it in philosophy is called reflective equilibrium, where you take your moral imperatives and heuristics and attempt to find the commonalities and consistencies they have with each other. It's far from perfect, but I think that this method has produced useful results in the past.

Eliezer has proposed what is essentially a souped up version of reflective equilibrium called Coherent Extrapolated Volition. He has argued, however, that the primary use of CEV is in designing AIs that won't want to kill us, and that attempting to extrapolate other people's volition is open to corruption, as we could easily fall to the temptation to extrapolate it to something that personally benefits us.

Does there exist some objective method whereby you, I, a Pebblesorter, and a Paperclipper can all independently arrive at the same definition for what is moral (given that the Pebblesorter and Paperclipper will almost certainly promptly ignore the result)?

Again, we could probably get closer through reflective equilibrium, and by critiquing the methods and results of each other's reflections. If you somehow managed to get a Pebblesorter or a Paperclipper to do it too, they might generate similar results, although since they don't intrinsically care about the equation you would probably have to give them some basic instructions before they started working on the problem.

I had thought that you were proposing that we find that equation by summing across the moral values and imperatives of humanity as a whole - excluding the psychopaths.

If we assume that most humans care about acting morally, doing research about what people's moral imperatives are might be somewhat helpful, since it would allow us to harvest the fruits of other people's moral reflections and compare them with our own. We can exclude sociopaths because there is ample evidence that they care nothing for morality.

Although I suppose that a super-genius sociopath who had the basic concept explained to them might be able to do some useful work in the same fashion that a Pebblesorter or Paperclipper might be able to. Of course, the genius sociopath wouldn't care about the results, and probably would have to be paid a large sum to even agree to work on the problem.

Replies from: CCC
comment by CCC · 2012-11-01T14:14:17.137Z · LW(p) · GW(p)

I think that probably evolution metaphorically "wrote" a desire to care about the equation in our heads because if humans care about what is good and right it makes it easier for them to cooperate and trust each other, which has obvious fitness advantages.

Hmmm. That which evolution has "written" into the human psyche could, in theory, and given sufficient research, be read out again (and will almost certainly not be constant across most of humanity, but will rather exist with variations). But I doubt that morality is all in our genetic nature; I suspect that most of it is learned, from our parents, aunts, uncles, grandparents and other older relatives; I think, in short, that morality is memetic rather than genetic. Though evolution still happens in memetic systems just as well as in genetic systems.

So how do we learn more about this moral equation that we care about? One common form of attempting to get approximations of it in philosophy is called reflective equilibrium, where you take your moral imperatives and heuristics and attempt to find the commonalities and consistencies they have with each other. It's far from perfect, but I think that this method has produced useful results in the past.

Hmmm. Looking at the wikipedia article, I can expect reflective equilibrium to produce a consistent moral framework. I also expect a correct moral framework to be consistent; but not all consistent moral frameworks are correct. (A paperclipper does not have what I'd consider a correct moral framework, but it does have a consistent one).

If you start out close to a correct moral framework, then reflective equilibrium can move you closer, but it doesn't necessarily do so.

Eliezer has proposed what is essentially a souped up version of reflective equilibrium called Coherent Extrapolated Volition. He has argued, however, that the primary use of CEV is in designing AIs that won't want to kill us, and that attempting to extrapolate other people's volition is open to corruption, as we could easily fall to the temptation to extrapolate it to something that personally benefits us.

Hmmm. The primary use of trying to find the True Morality Equation, to my mind, is to work it into a future AI. If we can find such an equation, prove it correct, and make an AI that maximises its output value, then that would be an optimally moral AI. This may or may not count as Friendly, but it's certainly a potential contender for the title of Friendly.

Again, we could probably get closer through reflective equilibrium, and by critiquing the methods and results of each other's reflections. If you somehow managed to get a Pebblesorter or a Paperclipper to do it too, they might generate similar results, although since they don't intrinsically care about the equation you would probably have to give them some basic instructions before they started working on the problem.

Carrying through this method to completion could give us - or anyone else - an equation. But is there any way to be sure that it necessarily gives us the correct equation? (A pebblesorter may actually be a very good help in resolving this question; he does not care about morality, and therefore does not have any emotional investment in the research).

The first thought that comes to my mind, is to have a very large group of researchers, divide them into N groups, and have each of these groups attempt, independently, to find an equation; if all of the groups find the same equation, this would be evidence that the equation found is correct (with stronger evidence at larger values of N). However, I anticipate that the acquired results would be N subtly different, but similar, equations.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-01T14:36:52.130Z · LW(p) · GW(p)

But I doubt that morality is all in out genetic nature; I suspect that most of it is learned, from our parents, aunts, uncles, grandparents and other older relatives; I think, in short, that morality is memetic rather than genetic.

That's possible. But memetics can't build morality out of nothing. At the very least, evolved genetics has to provide a "foundation," a part of the brain that moral memes can latch onto. Sociopaths lack that foundation, although the research is inconclusive as to what extent this is caused by genetics, and what extent it is caused by later developmental factors (it appears to be a mix of some sort).

Hmmm. Looking at the wikipedia article, I can expect reflective equilibrium to produce a consistent moral framework. I also expect a correct moral framework to be consistent; but not all consistent moral frameworks are correct.

Yes, that's why I consider reflective equilibrium to be far from perfect. Depending on how many errors you latch onto, it might worsen your moral state.

Carrying through this method to completion could give us - or anyone else - an equation. But is there any way to be sure that it necessarily gives us the correct equation?

Considering how morally messed up the world is now, even an imperfect equation would likely be better (closer to being correct) than our current slapdash moral heuristics. At this point we haven't even achieved "good enough," so I don't think we should worry too much about being "perfect."

However, I anticipate that the acquired results would be N subtly different, but similar, equations.

That's not inconceivable. But I think that each of the subtly different equations would likely be morally better than pretty much every approximation we currently have.

Replies from: CCC
comment by CCC · 2012-11-03T13:32:03.076Z · LW(p) · GW(p)

But memetics can't build morality out of nothing. At the very least, evolved genetics has to provide a "foundation," a part of the brain that moral memes can latch onto. Sociopaths lack that foundation, although the research is inconclusive as to what extent this is caused by genetics, and what extent it is caused by later developmental factors

That sounds plausible, yes.

Considering how morally messed up the world is now, even an imperfect equation would likely be better (closer to being correct) than our current slapdash moral heuristics. At this point we haven't even achieved "good enough," so I don't think we should worry too much about being "perfect."

Hmmm. Finding an approximation to the equation will probably be easier than step two; encouraging people worldwide to accept the approximation. (Especially since many people who do accept it will then promptly begin looking for loopholes; either to use or to patch them).

However, if the correct equation cannot be found, then this means that the Morality Maximiser AI cannot be designed.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-06T01:12:32.118Z · LW(p) · GW(p)

However, if the correct equation cannot be found, then this means that the Morality Maximiser AI cannot be designed.

That's true; what I was trying to say is that a world ruled by a 99.99% Approximation of Morality Maximizer AI might well be far, far better than our current one, even if it is imperfect.

Of course, it might be a problem if we put the 99.99% Approximation of Morality Maximizer AI in power, then find the correct equation, only to discover that the 99AMMAI is unwilling to step down in favor of the Morality Maximizer AI. On the other hand, putting the 99AMM AI in power might be the only way to ensure a Paperclipper doesn't ascend to power before we find the correct equation and design the MMAI. I'm not sure whether we should risk it or not.

comment by TheOtherDave · 2012-10-26T14:57:06.880Z · LW(p) · GW(p)

Hmmm. To avoid Omelas, equality would have to be fairly heavily weighted; any finite weighting given to equality, however, will simply mean that Omelas is only possible given a sufficiently large population (by balancing the cost of the inequality with the extra happiness of the extra inhabitants).

Well, if we're really going to take Omelas seriously as our test case, then presumably we also have to look at how much that "extra happiness" (or whatever else we're putting in the plus column) is reduced by those who walk away from it, and by those who are traumatized by it, and so forth. It might turn out that increasing the population doesn't help.

But that's just a quibble. I basically agree: once we swallow the assumption that for some reason we neither understand nor can ameliorate, the happiness of the many ineluctably depends on the misery of the few, then a total-utilitarian approach either says that equality is the most important factor in utility (which is a problem like you describe), or endorses the few being miserable.

That's quite an assumption to swallow, though. I have no reason to believe it's true of the world I live in.

A weaker version that might be true of the world I actually live in is that concentrating utility-generating resources in fewer hands results in higher total utility-from-all-sources-other-than-equality (Ua) but more total-disutility-from-inequality (Ub). But it's not quite as clear that our (Ua, Ub) preferences are lexicographic.

Replies from: CCC
comment by CCC · 2012-10-28T14:45:50.229Z · LW(p) · GW(p)

Well, if we're really going to take Omelas seriously as our test case, then presumably we also have to look at how much that "extra happiness" (or whatever else we're putting in the plus column) is reduced by those who walk away from it, and by those who are traumatized by it, and so forth. It might turn out that increasing the population doesn't help.

Doubling the population should double the happiness; double the trauma; double the people who walk away. The end result should be (assuming a high enough population that the Law of Large Numbers is a reasonable heuristic) about twice the utility.

A weaker version that might be true of the world I actually live in is that concentrating utility-generating resources in fewer hands results in higher total utility-from-all-sources-other-than-equality (Ua) but more total-disutility-from-inequality (Ub). But it's not quite as clear that our (Ua, Ub) preferences are lexicographic.

Consider the case of farmland; larger farms produce more food per acre than smaller farms. (Why? Because larger farms attract commercial farmers with high-intensity farming techniques, and they can buy better farming equipment with their higher profits.) Now, in the case of farmland, the optimal scenario is not equality; you don't want everyone to have the same amount of farmland, you want those who are good at farming to have most of it. (For a rather dramatic example of this, see the Zimbabwe farm invasions.)

On the other hand, consider the case of food itself. Here, equality is a lot more important; giving one man food for a hundred while ninety-nine men starve is clearly a failure case, as a lot of food ends up going rotten and ninety-nine people end up dead.

So the optimal (Ua, Ub) ordering depends on exactly what it is that is being ordered; there is no universally correct ordering.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-28T15:36:40.515Z · LW(p) · GW(p)

You seem to be assuming a form of utility that is linear with happiness, with trauma, with food-per-acre, with starving people, etc.
I agree with you that if we calculate utility this way, what you say follows.
It's not clear to me that we ought to calculate utility this way.

Replies from: CCC
comment by CCC · 2012-10-29T09:20:10.134Z · LW(p) · GW(p)

Hmmm. There are other ways to calculate utility, yes, and some of them are very likely better than linear. But all of them should at least be monotonically increasing with increased happiness, lower trauma, etc. There isn't some point of global happiness where you can say that global happiness above this level is worse than global happiness at this level, if all else remains constant. The increase may be smaller for higher starting levels of happiness, but it should be an increase.

Such a system can either be bounded above by a maximum value, which it approaches asymptotically (such that no amount of global happiness alone can ever be worth, say, ten billion utilions, but it can approach arbitrarily close to that amount), or it can be unbounded (in which case enough global happiness can counter any finite amount of negative effects). A linear system would be unbounded, and my comments above can be trivially changed to fit with any unbounded system (but not necessarily with a bounded system).

It's not clear to me whether it should be a bounded or an unbounded system.
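For concreteness, here is one function of each kind, using the "ten billion utilions" ceiling mentioned above; the specific functional forms are just illustrations:

$$U_{\text{bounded}}(H) = 10^{10}\left(1 - e^{-H/s}\right) < 10^{10} \qquad\qquad U_{\text{unbounded}}(H) = kH$$

Both are monotonically increasing in global happiness H; the first approaches its ceiling asymptotically and never reaches it, while the second grows without bound and so can eventually counter any finite amount of negative effects.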

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-29T14:01:21.648Z · LW(p) · GW(p)

all of them should at least be monotonically increasing with increased happiness, lower trauma, etc

OK, so we agree that doubling the population doesn't provide twice the utility, but you're now arguing that it at least increases the utility (at least, up to a possible upper bound which might or might not exist).

This depends on the assumption that the utility-increasing aspects of increased population increase with population faster than the utility-decreasing aspects of increased population do. Which they might not.

Replies from: CCC
comment by CCC · 2012-10-29T20:38:40.071Z · LW(p) · GW(p)

...you know, it's only after I read this comment that I realised that you're suggesting that the utility-increasing aspects may not use the same function as the utility-decreasing aspects. That is, what I was doing was mathematically equivalent to first linearly combining the separate aspects, and only then feeding that single number to a monotonically increasing nonlinear function.

Now I feel somewhat silly.

But yes, now I see that you are right. There are possible ethical models (example: bounded asymptotic increase for positive utility, unbounded linear decrease for negative utility) wherein a larger Omelas could be worse than a smaller Omelas, above some critical maximum size. In fact, there are some functions wherein an Omelas of size X could have positive utility, while an Omelas of size Y (with Y>X) could have negative utility.
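A small numerical sketch of that last case, with entirely made-up constants (bounded asymptotic utility from the citizens' happiness, unbounded linear disutility from the suffering and the trauma), just to show the sign can flip as Omelas grows:

```python
import math

# All constants are illustrative assumptions, not anything from the story.
B = 1000.0   # ceiling on utility from aggregate happiness
S = 500.0    # population scale at which happiness-utility starts to saturate
C = 0.8      # linear per-capita disutility (trauma, walking away, the child)

def omelas_utility(population: int) -> float:
    """Bounded asymptotic gain minus unbounded linear loss."""
    return B * (1.0 - math.exp(-population / S)) - C * population

for n in (100, 500, 2000):
    print(n, round(omelas_utility(n), 1))
# A small Omelas comes out positive; a large enough one comes out negative.
```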

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-30T16:52:35.053Z · LW(p) · GW(p)

Yup. Sorry I wasn't clearer earlier; glad we've converged.

comment by nshepperd · 2012-10-26T11:24:25.853Z · LW(p) · GW(p)

What does seem to work is to pick a society whose inhabitants seem happy and fulfilled, and trying to use whatever rules they use.

If you're going to do that, why not just directly use happiness and fulfillment?

Replies from: CCC
comment by CCC · 2012-10-26T12:15:44.832Z · LW(p) · GW(p)

If you're going to do that, why not just directly use happiness and fulfillment?

I cannot create an entire ethical framework, for everyone to follow, on any basis, and expect that it will be able to hold up for the next thousand years. If I try, I will fail, and this is why: because people cheat. Many intelligent agents will poke at the rules, seeking a possible exploit thereof that enhances their success at the possible expense of their neighbours' success. Over the next thousand years, there will be thousands, probably millions, of such intelligent agents hunting for, and attempting to exploit, flaws in the system; people who stick by the letter of the rule, and avoid the spirit of the rule. I cannot create an entire ethical framework, because I cannot outwit thousands or millions of future peoples' attempts to find and exploit gaps and loopholes in my framework.

Hence, the best that I can do is to find a system that has already endured a period of field testing and that hasn't broken yet; and perhaps attempt a small, incremental improvement (no more) in order to test that improvement.

Replies from: nshepperd
comment by nshepperd · 2012-10-26T13:03:22.994Z · LW(p) · GW(p)

What does that have to do with the situation at hand? Morality is an abstract division of actions into right and wrong, not some set of laws laid down by philosophers on the rest of the population. If you're trying to work out what you mean by "morality" and use some criteria (such as something including happiness and fulfillment of populations which adopt that definition) to choose from a bunch of alternatives, then probably those criteria themselves are the most accurate definition of "morality" you could hope to find. I might add, in [almost] exactly the same way that a program which writes and then executes a program to add two numbers is, in fact, itself a program that adds two numbers.

You can write out your final definition in legalese later, if the situation calls for it.

Replies from: CCC
comment by CCC · 2012-10-26T13:55:45.051Z · LW(p) · GW(p)

What does that have to do with the situation at hand? Morality is an abstract division of actions into right and wrong, not some set of laws laid down by philosophers on the rest of the population.

Morality comes with an implicit rule; when it says that "this action is the right action to take in this situation", then the implicit rule is "if you find yourself in this situation, take this action". There is usually no Morality Policeman ready to administer punishment if the rule is not followed, and the choice to follow the rule or not remains; but the rule is there.

If you're trying to work out what you mean by "morality" and use some criteria (such as something including happiness and fulfillment of populations which adopt that definition) to choose from a bunch of alternatives, then probably those criteria themselves are the most accurate definition of "morality" you could hope to find.

The difficulty is that I know that the algorithm that I am following is very likely not to fulfil the criteria in the very best possible way; merely in (more or less) the best possible way that they have been fulfilled in the past. If I simply list the criteria, then I falsely imply that the chosen system of morality is the best fit for those criteria; and I am trying to avoid that implication.

comment by Peterdjones · 2012-10-26T07:50:36.590Z · LW(p) · GW(p)

Define it, or defend it? There are a lot of defences, but not so much definitions.

comment by Jayson_Virissimo · 2012-10-26T07:36:40.212Z · LW(p) · GW(p)

I think the metaphor misses something important here, because the number of pebbles seems completely arbitrary. What, if anything, would change if in the pebble-sorters' ancestral environment, preferring 13-pebble heaps was adaptive, but preferring 11-pebble heaps (or spending resources on those that do) was not?

Replies from: wedrifid, MugaSofer
comment by wedrifid · 2012-10-26T10:00:45.770Z · LW(p) · GW(p)

I think the metaphor misses something important here, because the number of pebbles seems completely arbitrary. What, if anything, would change if in the pebble-sorters' ancestral environment, preferring 13-pebble heaps was adaptive, but preferring 11-pebble heaps (or spending resources on those that do) was not?

Preferring other people like Larry to be homosexual is adaptive for me. And it is the judgement by others (and the implicit avoidance of that through shame) that we are considering here. That said:

I think the metaphor misses something important here

Absolutely, and the entire line of reasoning relies on imposing the speaker's own morality ("it is second-order 'right' to be homosexual") on others without making it explicit.

comment by MugaSofer · 2012-10-26T08:23:37.137Z · LW(p) · GW(p)

The same reason sorting pebbles into correct heaps was adaptive in the first place.

EDIT: Wait, does it matter that homosexuality is probably not adaptive?

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-10-26T08:48:37.503Z · LW(p) · GW(p)

Wait, does it matter that homosexuality is probably not adaptive?

That was the point of my comment. There is a large disanalogy between heterosexuality and 13-pebble heap preference (namely, the first is highly adaptive, but the second has no apparent reason to be). Although I'm not sure if that is enough to break the metaphor.

Replies from: MugaSofer, MugaSofer
comment by MugaSofer · 2012-10-26T09:01:12.180Z · LW(p) · GW(p)

There are many properties homosexuality has but 11-pebble heap preference doesn't, and vice versa. Why is evolutionary maladaptiveness worth pointing out, is my question.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-10-26T09:13:14.278Z · LW(p) · GW(p)

There are many properties homosexuality has but 11-pebble heap preference doesn't, and vice versa. Why is evolutionary maladaptiveness worth pointing out, is my question.

Well, if moral norms are the Nash equilibria that result from actual historical bargaining situations (that are determined largely by human nature and the ancestral environment), then it seems somewhat relevant. If moral norms are actually imperative sentences uttered by God, then it seems completely irrelevant. Etc...

I suppose whether or not the pebble-sorting metaphor is good depends on which meta-ethical theory is true. In other words, I'm agreeing with PhilGoetz; Example 2 and Example 3 are only in separate classes of meta-wants assuming a (far from universally shared) moral system.

Replies from: Ghatanathoah, MugaSofer
comment by Ghatanathoah · 2012-10-26T10:32:48.054Z · LW(p) · GW(p)

Well, if moral norms are the Nash equilibria that result from actual historical bargaining situations

I would regard moral norms as useful heuristics for achieving morally good results, not as morality in and of itself.

I suppose whether or not the pebble-sorting metaphor is good depends on which meta-ethical theory is true.

I think that some sort of ethical naturalism (or "moral cognitivism" as Eliezer calls it) is correct, where "morally good" is somewhat synonymous with "helps people live lives full of positive values like love, joy, freedom, fairness, high challenge, etc." There is still much I'm not sure of, but I think, that is probably pretty close to the meaning of right. In Larry's case I would argue that homosexual relationships usually do help people live such lives.

comment by MugaSofer · 2012-10-26T09:30:18.410Z · LW(p) · GW(p)

Oh, you mean that humans might genuinely dislike homosexuality as a terminal value, because evo-psych.

... huh.

comment by MugaSofer · 2012-10-26T09:56:10.277Z · LW(p) · GW(p)

Incidentally, it's easier to sort pebbles into heaps of 11. The original pebblesorters valued larger heaps, but had a harder time determining their correctness.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-10-26T10:03:41.256Z · LW(p) · GW(p)

That's why I was careful to refer to them as 11-Pebble and 13-Pebble Favorers. They do value other sizes of pebble heaps; 11 and 13 are just the sizes they make most frequently. Or perhaps 11 and 13 are the heaps they like making in their personal time, but they like larger prime numbers for social pebble-sorting endeavors. The point is, I said they "favored" that size because I wanted to make sure that the ease of sorting the piles didn't seem too relevant, since that would distract from the central metaphor.

Replies from: MugaSofer
comment by MugaSofer · 2012-10-26T10:08:01.386Z · LW(p) · GW(p)

Oops.

comment by orthonormal · 2009-05-18T19:03:18.162Z · LW(p) · GW(p)

I suspect we're doing some extrapolation here in order to distinguish these cases. I expect that if Mimi knew more about herself and the world, and thought more clearly, she would still want to not want heroin; while I expect that if Larry knew more about himself and the world, and thought more clearly, he would be likely to reject the system of belief that causes him to think homosexuality immoral.

Replies from: mitechka
comment by mitechka · 2009-05-18T19:28:04.970Z · LW(p) · GW(p)

Alternatively, after sobering up, Mimi might decide that experiencing the heroin high makes her life so much more fulfilling that the much shortened life expectancy of a heroin addict doesn't seem too high a price to pay for it.

As usual, it all comes down to one's personal definition of utility.

comment by Mike Bishop (MichaelBishop) · 2009-05-17T17:47:34.665Z · LW(p) · GW(p)

Contra Cyan & Alicorn, I am inclined to go with PhilGoetz and "punt it off to your moral system, or your expected-value computations."

Trying to change your homosexual desires will probably fail and create a lot of collateral damage. I would guess that trying to change your desire for heroin is somewhat more likely to succeed, though I'm willing to consider the argument that heroin addicts should accept their addiction but attempt to minimize its harmful side effects.

comment by Cyan · 2009-05-17T02:51:29.121Z · LW(p) · GW(p)

I think the distinction is that we think of Mimi as wishing to revoke a decision made of her own free will; not so with Larry.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-17T04:04:53.631Z · LW(p) · GW(p)

Mimi grew up in a society that taught her that heroin is bad. Larry grew up in a society that taught him that homosexuality is bad. How do you tell the difference from their point of view?

comment by Alicorn · 2009-05-17T02:48:31.539Z · LW(p) · GW(p)

I think it probably has something to do with the fact that Mimi (probably) wasn't born addicted to heroin (and even if she was, we can point to the behavior that caused it), whereas the consensus seems to be that homosexuality is innate.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-17T04:07:35.112Z · LW(p) · GW(p)

There must be more to it than that. If Larry had been born innately attracted to children rather than to men, we probably wouldn't say it was okay.

Replies from: Alicorn
comment by Alicorn · 2009-05-17T04:19:20.524Z · LW(p) · GW(p)

It's not about whether it's okay, it's about whether it's "part of who he is" or an alien intrusion.

Replies from: newerspeak, Richard_Kennaway
comment by newerspeak · 2009-05-17T08:52:04.927Z · LW(p) · GW(p)

... "part of who he is" or an alien intrusion.

Okay.

I'm Paul Erdos. I've been taking amphetamine and ritalin for 20-odd years to enhance my cognitive performance. In general I want to want these drugs, because they help me do good, important and enjoyable work, which is impossible for me without them.

I can stop wanting these drugs when I want to, like when my friend bet me $500 that I couldn't. I wanted to win that bet, so I wanted not to want the drugs, so I stopped wanting them. Was that my only motivation?

Also, I don't want others to want to want amphetamines just because I want to want amphetamines.

A while ago I took Euler's place as the most prolific mathematician of all time.

Replies from: ABranco
comment by ABranco · 2009-10-13T17:36:57.023Z · LW(p) · GW(p)

Paul Erdös did it regularly, yes. Successfully, it seems — but I wonder about the costs. Does anyone have consistent data on that?

Picking only Erdös' case would, I'm afraid, be a case of both survivorship bias and hasty generalization.

comment by Richard_Kennaway · 2009-05-19T10:46:20.988Z · LW(p) · GW(p)

It's not about whether it's okay, it's about whether it's "part of who he is" or an alien intrusion.

That doesn't solve PhilGoetz's example though. And in the original version of Larry, his parents might very well say that his revulsion at homosexual acts is "who he is" and his sexual feelings the "alien intrusion". Are these concepts anything but a way of making disguised moral judgements? Is "who someone really is" just "who I would prefer them to be"?

Then again, another attitude to Larry is that his sexual feelings are who he really is, but that resisting them is a cross he has to bear. (I believe this is the Roman Catholic view.) So I don't think the concept of authenticity solves these problems.

comment by AnlamK · 2009-05-16T09:59:54.045Z · LW(p) · GW(p)

Harry Frankfurt, who came up with the original idea, did a much better job of explaining it, in my opinion. (Why are you not referring to his paper?)

Here is the link for the curious: http://www.usfca.edu/philosophy/pdf%20files/Freedom%20of%20the%20Will%20and%20the%20Concept%20of%20a%20Person.pdf

Replies from: Alicorn
comment by Alicorn · 2009-05-16T15:10:38.411Z · LW(p) · GW(p)

I probably should have mentioned Frankfurt's work, but I was being petty and declined to do so because he irritates me by calling second-order desire a criterion for personhood. Moreover, I wasn't trying to get into the notion of "will" or what second-order desire is for; I just wanted to provide a summary and some examples because someone had asked about it, and if the post is well-received I'll follow up with more complicated stuff.

Replies from: thomblake
comment by thomblake · 2009-05-19T16:16:16.790Z · LW(p) · GW(p)

Still, at least a hat-tip is obviously warranted.

I probably should have mentioned Frankfurt's work, but I was being petty and declined to do so because he irritates me by calling second-order desire a criterion for personhood.

Hardly an excuse for academic dishonesty. Okay, this forum is hardly 'academic', but the point stands.

comment by Jordan · 2009-05-16T08:15:13.571Z · LW(p) · GW(p)

It's not always so easy to say which desire is actually first order and which is second order.

For instance, example 3 could be inverted:

Larry was brought up to believe God hates homosexuality. Because of this he experiences genuine disgust when he thinks about homosexual acts, and so desires not to perform them or even think about them (first order). However, he really likes his friend Ted and sometimes wished God wasn't such a dick (second order).

There's likely even a third order desire: Larry was brought up to be a good Christian, and desperately wishes he didn't wish God was anything other than He is (third order).

I imagine our desires are less like a logical hierarchy and more like a food chain. On any given day Larry's libido could be the biggest fish in the sea.

Replies from: Psychohistorian, AllanCrossman
comment by Psychohistorian · 2009-05-16T10:52:59.718Z · LW(p) · GW(p)

wished God wasn't such a dick (second order).

This is not second order. It's just D(God approved of homosexuality). If he himself were God, then it would probably be second order, but just wanting the rules to be different is first-order. Similarly,

Larry was brought up to be a good Christian, and desperately wishes he didn't wish God was anything other than He is (third order).

is not third order. Third order gets weird. Third order would be that Mimi wants heroin, and in fact wants to want heroin (if you offered to magically make her not like heroin, she'd emphatically decline), but on top of that, she wants to want to not want to use heroin. Maybe she doesn't have a problem with it, but her friends do. Maybe she would be well served if she could express an honest desire to quit while not actually quitting. Third-order desires get pretty confusing, though I may have just explained this one poorly.

Furthermore

I imagine our desires are less like a logical hierarchy and more like a food chain. On any given day Larry's libido could be the biggest fish in the sea.

is not exactly in line with the post. I don't think there's any claim that these things are a logical hierarchy. One can have extremely weak first order desires and extremely strong second order desires, though the latter will tend to consolidate into changed first order desires if they are strong enough.

Second order desires do often stem from conflicting first order desires, but what determines the order is the object of desire (if it itself is about a desire, it's 2nd order, and so on), not its magnitude.
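To make that explicit, here is a minimal illustrative sketch (Python; the structure and names are invented for the example, not anything from Frankfurt or the post): the order of a desire is just how deeply its object is nested, and strength plays no role in the count.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Desire:
    pro: bool                    # True = wants it, False = wants it not to be the case
    about: Union[str, "Desire"]  # a plain outcome, or another desire
    strength: float = 1.0        # magnitude; deliberately ignored below

def order(d: Desire) -> int:
    """Order is fixed by the object of the desire, not by its strength."""
    return 1 if isinstance(d.about, str) else 1 + order(d.about)

wants_heroin = Desire(True, "use heroin", strength=10.0)       # first order
wants_not_to_want = Desire(False, wants_heroin, strength=0.5)  # second order
wants_to_want_to_not_want = Desire(True, Desire(True, Desire(False, "use heroin")))  # third order

assert [order(d) for d in (wants_heroin, wants_not_to_want,
                           wants_to_want_to_not_want)] == [1, 2, 3]
```

A weak second-order desire is still second order and a very strong first-order desire is still first order; the numbering tracks nesting, not magnitude.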

Replies from: Jordan, conchis
comment by Jordan · 2009-05-16T16:36:22.215Z · LW(p) · GW(p)

I can see that I worded things in a misleading fashion. The main point is just that first-order desires and second-order desires referring to them can often switch places. Rephrasing:

Larry wants to be straight... (first order)

...but wants not to want to be straight so he can be with his friend Ted. (second order)

I imagine our desires are less like a logical hierarchy and more like a food chain.

is not exactly in line with the post. I don't think there's any claim that these things are a logical hierarchy.

By logical hierarchy I just mean the notion that there is some well-ordering of terms here. Each desire can be the first-order desire relative to the other, forming a loop rather than a tier.

Replies from: pjeby, Psychohistorian
comment by pjeby · 2009-05-16T17:05:30.722Z · LW(p) · GW(p)

The main point is just that first-order desires and second-order desires referring to them can often switch places.

Actually, it's simpler to treat all desires as independent motivations. Whether they are "first-order" or "second-order" is a function of which is the organism's current goal.

When an addict feels bad, their current goal is to not feel bad - so they indulge. Anything that would stop them from doing so -- including any desire to reform -- is now opposed, whether the thing being opposed is an external event or an internal desire to quit.

Conversely, when the addict is satisfied, their current goal may be to feel better about themselves, at which point whatever obstacles stand in the way of that goal become relevant, whether external events or internal desires.

IOW, it's not the case that desires are ever "first order" or "second order" in and of themselves. So-called "second order" desires are merely subgoals that arise as a side-effect of a conflict between an active goal and another desire that opposes the goal in some way.

And it's important to understand their contextual nature. A subgoal that's consistently reinforced can become a seemingly-independent desire via the standard "cached thought" or "promoted subgoal" mechanism, but this doesn't always occur. And until it occurs, the second-order desires are just temporary subgoals. See, for example, Alicorn's example 1: the goal of liking Mountain Dew (or not disliking it) is strictly in the context of the alertness goal. If the alertness goal were satisfied in some other way, the subgoal would no longer be active.
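To make the contextual point concrete, here is a toy sketch (Python; the goal names and data layout are invented for illustration, not a model of any real cognitive architecture): the "want to want" exists only while the goal that spawned it is active.

```python
def second_order_subgoals(active_goal, means, aversions):
    """Return the temporary 'want to want X' subgoals spawned by the active goal.

    means: dict mapping each goal to the actions that would achieve it
    aversions: set of actions the agent currently dislikes
    """
    return [f"want to want: {action}"
            for action in means.get(active_goal, [])
            if action in aversions]

means = {"be alert": ["drink Mountain Dew"], "quench thirst": ["drink water"]}
aversions = {"drink Mountain Dew"}

print(second_order_subgoals("be alert", means, aversions))
# ['want to want: drink Mountain Dew'] -- the conflict spawns the subgoal
print(second_order_subgoals("quench thirst", means, aversions))
# [] -- once the alertness goal is no longer active, the 'second-order desire' is simply gone
```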

Replies from: MichaelVassar
comment by MichaelVassar · 2009-05-18T11:11:44.110Z · LW(p) · GW(p)

I'm pretty sure that you are pointing to a correct proposition here but have overstated your case. At the very least, separate from goals, it's possible to build non-desire habits which desires can affirm or conflict with. More importantly, you are missing the point that second order desires can be about first order desires without the first order desires, even when active, being about the second order desires.

Replies from: pjeby, Vladimir_Nesov
comment by pjeby · 2009-05-18T14:20:22.639Z · LW(p) · GW(p)

More importantly, you are missing the point that second order desires can be about first order desires without the first order desires, even when active, being about the second order desires.

How so? That is, how am I missing that point? I'm simply saying that second-order desire can only arise as a subgoal of some other desire, even if that other desire is to simply have a certain social image. This doesn't imply any sort of symmetry being required, so I'm not clear on why you think I said it does. (In fact, my reference to example 1 describes an asymmetric conflict case.)

Replies from: stcredzero
comment by stcredzero · 2009-05-18T16:17:16.203Z · LW(p) · GW(p)

Asymmetry between 2nd order and 1st order desires could be explained easily if 2nd order desires didn't really exist, or if only one specific 2nd order desire existed, namely a desire for resolution. "I want to want X" then just becomes some person's rationalization about their conflicted situation. I find this idea attractive, because a desire for resolution seems a natural thing for a conscious being to have.

Replies from: pjeby
comment by pjeby · 2009-05-18T16:48:27.786Z · LW(p) · GW(p)

I find this idea attractive, because a desire for resolution seems a natural thing for a conscious being to have.

But such a thing isn't intrinsic. People routinely do things that are in conflict, without ever resolving the conflict. If anything, we have a drive to appear consistent to other people -- a more evolutionarily-relevant drive than a desire to actually be consistent.

Meanwhile, second-order desires are just subgoals, like "walk across the room" is a subgoal of "get a glass of water". We experience wanting to (not) want something because it supports some other goal -- whether the other goal is something we want to admit to or not.

But that other goal is never really "get some resolution" -- that's just a verbal explanation that deflects attention from whatever the real goal is. (Because without some conflicting goal being present, there would be nothing to "resolve"!)

Replies from: stcredzero
comment by stcredzero · 2009-05-18T17:59:21.842Z · LW(p) · GW(p)

I disagree. Conflict resolution is intrinsic; however, most people resolve many of their conflicts in irrational ways, including distraction. Come to think of it, a desire/goal-conflict resolution urge explains procrastination quite handily.

Let's not confuse "a desire to resolve conflicting urges" with "a desire to be rationally self-consistent." These are two different things. Everyone will have the former. Some will be able to cultivate the latter. A drive to appear consistent to others is yet a third thing.

I suspect that our drive for "getting resolution" is much like our preference for clearly enunciated speech, sunny vistas, and uncluttered rooms. We are driven to optimize our perception, and this drive is expressed as aesthetic desire. Our sense of the aesthetic even extends to internal perception of our ideas -- there is an attraction to elegant ideas. Religions often exploit this. For example, Islam is said to be popular in parts of the world because it presents itself as straightforward.

I think you're right that "getting resolution" is not a goal. It is more like a drive. Much like our desire to see all of what we are observing often results in our craning our neck. Like other drives it can result in goals. I also like your subgoal formulation. I would posit that the urge towards "clarity" is what drives it. But remember, just because one can imagine some scenario and take it as a desired goal, doesn't mean that the situation is sensible. I think wanting to want X is along the same lines as wanting to hear the sound of one hand clapping.

So, in the Mountain Dew example, the subject wants to stay awake and the subject also wants to avoid the unpleasant stimuli of Mountain Dew. To resolve this goal conflict, they formulate the subgoal, "I want to want Mountain Dew," which is a condition where there is no conflict. I note, however, that the subject wouldn't mind drinking chilled but flat Mountain Dew if it were readily available. Most likely they would immediately want to drink it. I posit that they always wanted to drink the Mountain Dew, but that they had a conflicting goal (that of avoiding carbonation), and were distracted by a poorly formulated subgoal.

Replies from: pjeby, Alicorn
comment by pjeby · 2009-05-18T20:08:08.808Z · LW(p) · GW(p)

Let's not confuse "a desire to resolve conflicting urges" with "a desire to be rationally self-consistent." These are two different things. Everyone will have the former. Some will be able to cultivate the latter. A drive to appear consistent to others is yet a third thing.

My point is that "a desire to resolve conflicting urges" is an unnecessary hypothesis. Conflict resolution is an emergent property of goal-seeking, not an independent goal or desire of itself, nor even a component of goal-seeking.

If you have a goal to get a soda from the fridge, and therefore a subgoal of walking across the room, but there is something in your way, then you will desire to go around it. To posit even a "drive" to "get resolution" is adding unnecessary entities to the equation.

Now, if you said that we experience conflict as painful, and desire to avoid it, I'd agree with you. However, experiencing the pain of conflict does not consistently motivate people to resolve the conflict. In fact, it frequently motivates people to avoid the subject entirely, so as to remove awareness of the conflict!

That's why I believe that talking about "conflict resolution as intrinsic" or an urge to "get resolution" is both unnecessary and erroneous: people DO experience negative reinforcement from conflict, but this is not the same thing as a desire for resolution. In humans (as in all animals that I know of), a drive to avoid one thing does not produce the same results as a drive to approach its opposite (nor vice versa).

Replies from: stcredzero
comment by stcredzero · 2009-05-18T22:54:43.651Z · LW(p) · GW(p)

That's why I believe that talking about "conflict resolution as intrinsic" or an urge to "get resolution" is both unnecessary and erroneous: people DO experience negative reinforcement from conflict, but this is not the same thing as a desire for resolution.

A very good point. It's much more accurate to say that people have an aversion to internal conflicts, and that this is part of the inbuilt mechanism for mediating between conflicting desires. This is a better way to word what I am getting at. "Desire for resolution" can be easily misinterpreted. For example, I did not mean a "desire for a rational resolution in actuality." That would preclude the mechanism from being a factor in procrastination, and I believe it is a part of that. I think it is also related to the Paradox of Choice.

http://www.amazon.com/Paradox-Choice-Why-More-Less/dp/0060005688

As with many evolved mechanisms, it works imperfectly, but well enough (especially when viewed in the context of a Stone Age denizen's life).

If you have a goal to get a soda from the fridge, and therefore a subgoal of walking across the room, but there is something in your way, then you will desire to go around it. To posit even a "drive" to "get resolution" is adding unnecessary entities to the equation.

I don't think it's an unnecessary entity, merely a mis-stated one.

Remember the context of the OP. I thought we were talking about perceived conundrums. When a way to "go around it" is not immediately obvious, one sometimes makes up an impractical subgoal, like "I want to want to have sex with women," from example 3. There is often an awareness of the impracticality of such a subgoal, so it offers inadequate relief, but it still becomes a fixation. When in conundrums I have generally found myself actively seeking some answer. But it seems reasonable that this is not going to be everyone's reaction, and that the drive is actually avoidance of conflict.

comment by Alicorn · 2009-05-18T18:05:24.387Z · LW(p) · GW(p)

This is kind of tangential to your actual statement, but I've never found an originally carbonated beverage that was flat enough to be drinkable. I'd think it had more to do with the flavorings used in sodas if I didn't have the same problem with seltzer water and sparkling juices.

Replies from: stcredzero
comment by stcredzero · 2009-05-18T19:15:53.289Z · LW(p) · GW(p)

But if someone had a magic powder that was tasteless and could remove all carbonation from a drink, then perhaps you could drink it and in a given context would want to. My point is that "2nd order desires" are probably just due to mis-formulated goals and subgoals. I don't think people really want to want X. Most often, they want X but don't also want Y, or they want X but cannot give up Y. I suspect it often helps if you can get as close to the level of basic drives as possible. In the Mountain Dew conundrum, it's self preservation and avoidance of noxious stimuli. These desires are not in conflict, only the particular goal+subgoal scheme resulting from them.

In other words, I doubt many people really "Want to want X." They often convince themselves of this in order to enable fulfilling some other directive.

comment by Vladimir_Nesov · 2009-05-18T22:45:27.571Z · LW(p) · GW(p)

Of two conflicting desires, we call second-order the one we don't expect to go away: the more invariant one, the one we count as part of the self, even if it is never actually in control.

Most second-order desires are not about first-order desires; they are about the same thing as the first-order desire. For the second-order desire, modifying the first-order desire is instrumental, not terminal, and the same applies in the other direction. The differences I see come from the first-order desire being the one actually in control, and being stupid enough not to work on eliminating the second-order desire.

comment by Psychohistorian · 2009-05-16T19:48:17.529Z · LW(p) · GW(p)

First, these two statements are 2nd order and 3rd order respectively (taboo "straight" and you get, very roughly, "Larry wants to want to have sex with women"). Second, they are not representations of the same thing, since they point in opposite directions. Thus, they don't seem to support the claim that first and second order desires can switch places.

More importantly, you can't just infer n-th order desires from (n-m)-th order desires (where n > m > 0).

"I want chocolate," does not imply: "I do not want to want to want to not want to want to not want to not want to want to want to not want to not want chocolate," even though the two happen to point in the same direction. The first one is true, the second one is almost certainly false, since I really don't think that hard about chocolate.

Higher-order desires get really, really convoluted if you use imprecise language, and they're not exactly simple to begin with.

I am starting to agree with other posters that the whole construction may not map reality too accurately, but if you actually use precise language, n-th order desires are distinct and meaningful. Without precise language, you're just showing it's possible to say the same thing in more than one way, which is true, but not insightful.

Replies from: Jordan
comment by Jordan · 2009-05-16T23:34:53.192Z · LW(p) · GW(p)

More importantly, you can't just infer n-th order desires from (n-m)-th order desires (where n > m > 0).

I am starting to agree with other posters that the whole construction may not map reality too accurately

I agree. Generally people don't have very high order desires. Discounting the accuracy/usefulness of the notion of ordered desires was the entire thrust of my original comment.

First, these two statements are 2nd order and 3rd order respectively

This is still a matter of interpretation based on wording. In my original phrasing it's more apparent that the desire is first order. My setup was that Larry "experiences genuine disgust when he thinks about homosexual acts", the intent being that his reaction to homosexuality is involuntary, and his desire is simply to avoid that unpleasant reaction. This is first order.

I'm not claiming the order can always be reversed. I was just giving a particular construction where it could.

comment by conchis · 2009-05-16T11:33:14.867Z · LW(p) · GW(p)

Third order would be that Mimi wants heroin, and in fact, wants to want heroin,

Isn't that just second order? Third order would be wanting to want to want heroin.

Replies from: Psychohistorian
comment by Psychohistorian · 2009-05-16T12:16:58.284Z · LW(p) · GW(p)

It included the next sentence (wanting to want to not want); edited to make it less ambiguous.

Replies from: conchis
comment by conchis · 2009-05-16T12:48:31.940Z · LW(p) · GW(p)

Ah. Sorry!

comment by AllanCrossman · 2009-05-16T09:01:12.184Z · LW(p) · GW(p)

sometimes wished God wasn't such a dick (second order)

Why's that second order?

comment by pjeby · 2009-05-16T14:20:21.468Z · LW(p) · GW(p)

There's a simpler model for all of these examples -- you're describing conflicts between an "away-from" motivation and a "towards" motivation. These systems are semi-independent, via affective asynchrony. The second-order want is then arising as a subgoal of the currently-active goal (be alert, etc.).

I guess what I'm trying to say here is that there really aren't "second order wants" in the system itself; they're just an emergent property of a system with subgoals that explicitly models itself as an agent, especially if it also has goals about "what kind of person" it is desirable or undesirable to be.

It's likely that examples 2 and 4 would both be based in self-image goals, as well as the more obvious example in 3. Clearly, non-self-image cases like #1 exist too, so it's not strictly about such things, but self-image goals (whether "toward" or "away from") are the most common source of lasting and emotionally-distressing conflict in people's lives, at least in my experience.

Replies from: JamesAndrix
comment by JamesAndrix · 2009-05-17T19:46:27.167Z · LW(p) · GW(p)

I identified 2 and 4 as most clearly about just wanting utilons.

I suspect that all such metawants can be reduced to trade-offs in the world. There is a bad tasting alertness potion, an addictive happiness drug with side effects, and a nonintuitive offer of money.

3 is a bit harder to look at this way. I think any solution needs to work just as well for Darryl, who is also homosexual and finds it repulsive, but who instead desires to no longer find it repulsive.

comment by MugaSofer · 2012-10-26T08:49:25.706Z · LW(p) · GW(p)

Carbonated beverages make my mouth hurt. I have developed a more generalized aversion to them after repeatedly trying to develop a taste for them and experiencing pain every time.

Wait, that's unusual? I used to have the exact same problem, but I thought it was due to generalized willpower issues. When I got better at willpower, the problem disappeared (although I still tend to choose non-carbonated versions of drinks I like if possible.)

comment by Curiouskid · 2011-05-26T00:06:13.983Z · LW(p) · GW(p)

I'm glad you introduced me to the term meta-wanting because it reminds me of an argument against free will.

Basically, you can go to a CD store (iTunes now) and choose which CD to buy because you prefer that CD. But you cannot prefer to prefer that CD. You simply prefer (1st order) that CD. You could try to raise the order of your preferences (an idea that had not occurred to me until now), but at the next-highest order, your decision has already been made.

To me, that is the most convincing argument against free will that I've ever come across. Has anyone heard it before?

Replies from: MathieuRoy, shminux
comment by Mati_Roy (MathieuRoy) · 2013-11-23T15:51:51.412Z · LW(p) · GW(p)

I want that my highest metawanting be this sentence.

  1. This is my highest order of metawanting.
  2. It was determined by me wanting it (so it wasn't already made).

I'm joking. I don't really want to want that my highest metawanting be wanting that my highest metawanting be wanting that my highest metawanting be wanting that.... haaaaaaaaaaaaaaaaaaaaa. :-)

comment by shminux · 2012-01-04T03:08:30.688Z · LW(p) · GW(p)

Have you considered that the free will debate is vacuous, as, ironically, we have no choice but to act as if we had free will?

Replies from: Curiouskid
comment by Curiouskid · 2012-01-04T03:13:50.749Z · LW(p) · GW(p)

So, I made this post before I'd even read the sequences.

comment by JamesCole · 2009-05-18T10:33:50.784Z · LW(p) · GW(p)

I don't think the right way to clarify this problem is by looking at it terms of first- and second-level desires. I think you need to turn it around and see it as a matter of what 'true self' means.

If people say that the desires you "endorse" on the second level are most reflective of your true self, they're wrong. This is because what we take to define our true selves is based on different criteria, and those criteria define it such that people's second-level desires don't always match up with what we take their 'true selves' to be, as in the case of Larry.

comment by Drahflow · 2009-05-17T20:30:40.725Z · LW(p) · GW(p)

In a perfectly rational agent, no n-th order wants should exist.

Your problems with Mountain Dew might account for -1 util, and being awake for +2 utils; then you "want" to drink that stuff. Shut up and add.

The only source of multi-level desires I can see is an imperfect caching algorithm, which spews forth "Do not drink Mountain Dew" although the overall utility would be positive.
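Spelled out with the numbers above (a throwaway Python sketch of the same addition):

```python
u_taste = -1  # disliking the Mountain Dew
u_alert = +2  # being awake
net = u_taste + u_alert

print(net)      # 1
print(net > 0)  # True: a single-level agent just drinks it
```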

Replies from: conchis, latanius
comment by conchis · 2009-05-18T09:57:26.071Z · LW(p) · GW(p)

Your problems with Mountain Dew might account for -1 util, and being awake for +2 utils; then you "want" to drink that stuff. Shut up and add.

It still seems perfectly reasonable for a rational agent to not-want to not-want Mountain Dew here. If it were feasible to self-modify to become Mountain-Dew-indifferent at a cost less than 1 util, then the utility of drinking Mountain Dew & being indifferent to it would be greater than the utility of drinking it while continuing to not-want it.
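The comparison, written out (Python; the self-modification cost is an assumed number for illustration, the rest follows the comment above):

```python
u_caffeine = 2.0    # value of being alert
u_distaste = -1.0   # pain of drinking Mountain Dew while still hating it
cost_selfmod = 0.5  # assumed one-off cost of becoming Mountain-Dew-indifferent

drink_while_averse = u_caffeine + u_distaste     # 1.0
drink_after_selfmod = u_caffeine - cost_selfmod  # 1.5

# Self-modification wins exactly when its cost is below the 1 util of distaste it removes.
print(drink_after_selfmod > drink_while_averse)  # True
```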

Replies from: steven0461
comment by steven0461 · 2009-05-18T10:17:26.616Z · LW(p) · GW(p)

Self-modifying not to feel discomfort from drinking Mountain Dew isn't at all the same thing as self-modifying to want to drink Mountain Dew keeping discomfort constant. The latter isn't something you want to do on pain of committing the Wirehead Fallacy. The former isn't something that "want to want" language really applies to, as far as I can see.

Replies from: Alicorn, conchis
comment by Alicorn · 2009-05-18T14:45:58.254Z · LW(p) · GW(p)

I don't see why not. Spicy foods inflict pain; wanting to develop a greater tolerance for cayenne pepper wouldn't be all that weird. Mountain Dew inflicts pain; learning to put up with that would be instrumentally useful to me. I'd prefer that it just not inflict pain, but assuming that's not an option, I'd be okay with just developing the ability to deal with it.

Replies from: stcredzero, steven0461
comment by stcredzero · 2009-05-18T16:23:15.816Z · LW(p) · GW(p)

In case it hasn't been mentioned, many stores that sell Mountain Dew also stock NoDoz tablets, which you can develop a technique for swallowing whole with water.

I was actually in the position of wanting to develop a tolerance for spicy foods when I was young. I would go to the pantry and treat myself to a dose of Tabasco sauce. But I don't think of that as my "wanting to want KimChee." I wanted to eat KimChee like my parents, and I took steps to achieve that goal.

I don't think there's any need for meta-desires at all, except for one: a drive to resolve conflicting desires. And this is arguably also a 1st order desire. There are obvious reasons why we'd evolve such a drive. We can explain away things like wanting to want internal conflict as merely a desire to be someone like Timothy Levitch from "Waking Life." It also makes sense that one can formulate paradoxical desires that are strongly resistant to resolution, or rationalize to oneself that they are conundrums. But I posit that these are merely ill-formed desires -- that if you can't reduce everything down to conflicting 1st-order desires, you haven't delved deeply enough.

http://dannyman.toldme.com/2003/11/19/timothy-levitch-waking-life/

comment by steven0461 · 2009-05-18T21:11:21.417Z · LW(p) · GW(p)

Agreed, what I said wasn't literally true. If you get 2 utils from caffeine and -1 util from pain, and if despite this you don't want to drink MD, then it's rational to self-modify to want to drink MD. But the point I meant to make is that it's not rational to self-modify to assign 0 utils to the same pain instead (because you don't care about utils, you care about the things you measure using utils), which is what I (mis)interpreted conchis as saying.

comment by conchis · 2009-05-18T10:51:26.116Z · LW(p) · GW(p)

I interpret "wanting to want" as encompassing the former, and don't really see any reason to limit it in the way you suggest. But either way, we have no substantive disagreement.

comment by latanius · 2009-05-18T08:20:50.536Z · LW(p) · GW(p)

Or the vertical hierarchy in our brains, and our inability to modify it. All four examples show "first order desires" created by lower level subsystems (taste perception, sexuality, motivations, etc.), and the higher level ones are trying to override them (consciousness, planning, models of the future world). But we aren't designed rationally enough to be able to control those lower levels, although we partially know what they "think". ("We" = the conscious part writing LW comments...)

comment by dclayh · 2009-05-16T03:46:56.109Z · LW(p) · GW(p)

So far so good. I look forward to the hard stuff :) And thanks for engaging my request.

Actually, your calling second-level agreement "endorsement" has led me to wonder whether there's a special term for desires that you want to want, want to want to want, and so on ad infinitum, analogous to common knowledge or Hofstadter's hyperrational groups (where everyone knows that everyone knows etc. that everyone is rational).

Replies from: Peter_de_Blanc, Alicorn, dclayh, cousin_it
comment by Peter_de_Blanc · 2009-05-16T17:41:12.412Z · LW(p) · GW(p)

[I] wonder whether there's a special term for desires that you want to want, want to want to want, and so on ad infinitum

"Reflectively consistent."

Replies from: dclayh
comment by dclayh · 2009-05-16T18:10:50.053Z · LW(p) · GW(p)

I think that's for beliefs, not desires.

Particularly because you can bring your beliefs and metabeliefs, etc., into alignment by reflection, whereas making your desires consistent requires at a minimum some kind of action, and may not be possible at all (except for the trivial case of instrumental second-order desires as in Ex. 1).

comment by Alicorn · 2009-05-16T04:01:50.927Z · LW(p) · GW(p)

I haven't run into any special jargon for endorsed desires, but it would be a cool word to have. There's some debate about whether we can really go up indefinitely in higher orders - it's not clear that we have the necessary cognitive capacity to go higher than about six nested intentional states. (For instance, I intend that you know that I intend that you know that I want to want Mountain Dew, but if you could come up with a more complicated string of propositional attitudes, I'd be a bit lost.)

Replies from: stcredzero
comment by stcredzero · 2009-05-18T16:52:02.210Z · LW(p) · GW(p)

I see nothing more than the intent to share knowledge of intent, or more concisely, to share intent. This can be reduced to a desire for synchrony of consciousness. When I sit down to play music with others, I do not have all of these intents and meta-intents. Yet it's arguably true that I do intend for the other musician to know that I intend him or her to know that I want to make good music. One can compose an infinite number of such statements concerning meta-intent and knowledge. But this is only an effect of the linearity of language, causing us to consider each constituent relationship one at a time. Beyond this, one can simply deal with the whole (jam session) in a highly functional, immediate, and mutually pleasing way.

Nth order desires are really just a variation on Zeno's paradox. It's a nifty mental exercise, but reality doesn't work that way. One just has 1st order wants, and a conflict resolution drive, which is the only 2nd order want.

comment by dclayh · 2009-06-05T03:29:53.927Z · LW(p) · GW(p)

Update: upon reading Frankfurt I find that he calls this sort of infinitely regressed metawanting a "decisive" want or desire.

comment by cousin_it · 2009-05-16T09:49:15.808Z · LW(p) · GW(p)

Seems to be related to FAI. I'd look forward very much to any mathematical formalization of the term, if you have ideas for how to get there.

comment by Cameron_Taylor · 2009-05-19T07:49:29.483Z · LW(p) · GW(p)

Suppose also that there is a can of Mountain Dew handy. I know that Mountain Dew contains caffeine and that caffeine will make me alert.

I am hesitant to bring it up because I don't want to become the multiculturalism police on LessWrong, but I found this distracting. American Mountain Dew has a high caffeine content, yet in most other countries Mountain Dew is caffeine-free. There is a significant minority of LessWrong participants who do not dwell in America, and those readers cannot help but become distracted when posts seem to be clearly intended for those of a different Mountain Dew recipe.

Surely substituting 'Coke' or 'Pepsi' would make the Australians and Canadians among us feel more welcome.

Replies from: Alicorn, Nick_Tarleton, SoullessAutomaton
comment by Alicorn · 2009-05-19T14:33:45.948Z · LW(p) · GW(p)

I'm sure you thought this would be cute or funny or something, but the objections aren't commensurate. I wasn't making a sweeping statement about the alertness-giving properties of Mountain Dew. Trying to do that would have been beside the point, since I was giving an individual example about my own beverage-related limitations, and living where I live, a Mountain Dew that might materialize in my home would have plenty of caffeine. To compare, I wouldn't have blinked if Psychohistorian had phrased the original remark about women as "I'll still find women alluring", making it about himself instead of about women.

Alternatively, I could just protest that in my idiolect, Mountain Dew refers to a beverage that contains caffeine, and your wacky foreign Mountain Dew is not Mountain Dew at all.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-19T15:08:34.759Z · LW(p) · GW(p)

Trying to do that would have been beside the point, since I was giving an individual example about my own beverage-related limitations, and living where I live, a Mountain Dew that might materialize in my home would have plenty of caffeine.

Where you "attribute [your] distraction entirely to the sense that it was directed at a presumed male audience", I attribute my distraction entirely to the sense that it was directed at a presumed American audience. It is regarding this presumption that I demand commensurate consideration. Such consideration could perhaps take the form of a simple acknowledgement: "Oh, really? I never new that! Next time I'll either use a different example or I'll throw in brief a parenthetised comment or footnote to make the text accessible to the non-American reader."

Alternatively, I could just protest that in my idiolect, Mountain Dew refers to a beverage that contains caffeine, and your wacky foreign Mountain Dew is not Mountain Dew at all.

You could make that protestation. Yet you are dismissing the alternate experience as 'wacky foreign', as a point demonstrating that gender-specific descriptions are incommensurably more significant than nationality-specific ones. This is distressing. It would seem to lend support to a conclusion that your objections have been less about the implicit exclusion of minority participants and more about mere political manoeuvring in favour of your own particular group. This significantly reduces the credibility of any objections that you may make, at least in my eyes. I could quite fairly be accused of having an anti-hypocrisy bias.

Replies from: Cyan, Alicorn, Nick_Tarleton
comment by Cyan · 2009-05-20T05:27:13.995Z · LW(p) · GW(p)

Where you "attribute [your] distraction entirely to the sense that it was directed at a presumed male audience", I attribute my distraction entirely to the sense that it was directed at a presumed American audience.

Here's where your analogy runs off the rails. Alicorn's text isn't directed at a presumed American audience -- it's directed at an audience presumed to be able to infer that Mountain Dew contains caffeine where she lives. Your rejoinder skips over this exact point made by Alicorn:

I wouldn't have blinked if Psychohistorian had phrased the original remark about women as "I'll still find women alluring", making it about himself instead of about women.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-20T05:47:06.315Z · LW(p) · GW(p)

My rejoinder did not so much skip the point as not see the point as significant. One of the strengths of analogies is that they can help trace where exactly the difference in thinking or opinion lies. I actually don't see Psycho's presumption of audience as more significant than that of Alicorn; I can infer that Psycho is speaking from his individual experience as a male just as easily as that Ali is speaking from her individual experience as an American.

Replies from: JGWeissman
comment by JGWeissman · 2009-05-20T06:19:44.275Z · LW(p) · GW(p)

The difference is that Psychohistorian was describing experiences that he intended the audience to recognize and identify with as their own, while Alicorn was describing her own experience as her own unique experience.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-20T07:06:29.955Z · LW(p) · GW(p)

That does seem like a bizarre thing for Psycho to intend! I gave him a little more benefit of the doubt. Perhaps I was too generous in my interpretation; I get that a lot!

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-05-20T09:37:15.335Z · LW(p) · GW(p)

It isn't necessarily a deliberate, conscious intent. However:

I know that Mountain Dew contains caffeine and that caffeine will make me alert. However, I also know that I hate Mountain Dew.

vs.

It's part of that set of things that doesn't go away no matter what you say or think about them. Women will still be alluring, food will still be delicious, and Michaelangelo's David will still be beautiful, no matter how well you describe these phenomenon.

Surely you see the difference?

comment by Alicorn · 2009-05-19T15:24:51.939Z · LW(p) · GW(p)

I'm sorry if it wasn't clear, but the choice of the words "wacky foreign" was to be silly (in an apparently failed attempt to keep the discussion light), not to indicate an actual belief about the relative wackiness of foreign and domestic soft drinks.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-20T03:24:20.454Z · LW(p) · GW(p)

I am afraid I missed that. The plain literal interpretation actually seemed to me to more closely fit the remainder of the reply. The core of your reply took the discussion from 'light and slightly silly' to serious while simultaneously dismissing the underlying serious message, which rather shocked me. 'Light and silly' just doesn't seem to work when rapport is broken, which I tend to discover rather often!

In any case, I wouldn't change a word of either of my preceding posts yet am intrigued by the response. (And also somewhat glad karma can be gained so readily on trivial topics that it can be freely spent on those that I feel actually matter.)

comment by Nick_Tarleton · 2009-05-20T03:32:29.982Z · LW(p) · GW(p)

There's a big difference between parochialism that is, at worst, confusing, and parochialism that makes some people feel ignored or excluded (even if they aren't being ignored or excluded).

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-20T05:35:06.666Z · LW(p) · GW(p)

Parochialism exhibits itself perhaps most significantly when it comes to deciding which claims of exclusion are socially acceptable to make.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-05-20T09:49:50.898Z · LW(p) · GW(p)

politically correct

You realize that this is an anti-applause light that conveys little informational value, right?

The actual argument is that since women comprise half the population and are severely underrepresented on LW as it is, if phrasing that implies the audience is uniformly male makes women feel excluded it is detrimental to the goal of spreading rationality. Do you actually have an argument against this?

Please note that "women won't actually feel excluded" is demonstrably false.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-20T12:24:45.332Z · LW(p) · GW(p)

You realize that this is an anti-applause light that conveys little informational value, right?

No. It adequately serves as a descriptive reference to the social dynamics involved in determining what is Right, moral, acceptable, enlightened, or otherwise good. If you can suggest a substitute phrase then I would happily adopt it. Arguments along the lines of 'something to do with political correctness, therefore something bad about the other side' are common. It is to be expected that some will assume a similar error of reasoning is being applied whenever the phrase is used, no matter the actual content, and I would prefer to have a phrase that avoided this hassle.

... Do you actually have an argument against this?

No. I can see a few minor arguments that could be made, but why would I make them? It is a conclusion that I support. However, I find some of the soldiers used to support said conclusion distasteful, inconsistent in their application, and neglectful of some of the spirit of compromise and mutual understanding necessary when communicating across a cultural barrier. I reject posts and certain of the normative demands contained therein on their own merit as I see it.

... the audience is uniformly male makes women feel excluded it is detrimental to the goal of spreading rationality.

While it doesn't make any difference for the purposes of my reply, that is not the 'actual conclusion'. It is reasonable to desire an inclusive environment independently of the influence this improved environment may have on the spread of rationality. I for one accept inclusiveness as a terminal value, while 'spreading rationality' is not a goal of mine at all.

Please note that "women won't actually feel excluded" is demonstrably false.

That claim in the quotes would be an insane claim to make.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-05-20T19:40:16.990Z · LW(p) · GW(p)

No. It adequately serves as a descriptive reference to the social dynamics involved in determining what is Right, moral, acceptable, enlightened, or otherwise good. If you can suggest a substitute phrase then I would happily adopt it. Arguments along the lines of 'something to do with political correctness, therefore something bad about the other side' are common. It is to be expected that some will assume a similar error of reasoning is being applied whenever the phrase is used, no matter the actual content, and I would prefer to have a phrase that avoided this hassle.

The phrase you're looking for is probably "socially acceptable" or "social norm". The phrase "politically correct" is primarily used as a connotationally-loaded derogatory for social norms the speaker disagrees with, and to signal, in a beliefs-as-attire manner, group membership with certain political positions. If you want to criticize social norms, which I agree is a rewarding and enjoyable hobby, you would do better to name the specific norms you take issue with, rather than using a catch-all term for norms you dislike.

For instance: Which claims of exclusion are not acceptable to raise that you think ought to be? Why? What norms would you prefer?

I reject posts and certain of the normative demands contained therein on their own merit as I see it.

Okay. Which specific normative demands are you rejecting?

It is reasonable to desire an inclusive environment independently of the influence this improved environment may have on the spread of rationality. I for one accept inclusiveness a terminal value while 'spreading rationality' is not a goal of mine at all.

So you accept inclusiveness as valuable, but disagree with explanations given by individuals who felt excluded for why they felt that way, and feel that others are neglecting the spirit of mutual understanding across a cultural barrier? I'm not sure I follow.

Replies from: Cameron_Taylor, Cameron_Taylor
comment by Cameron_Taylor · 2009-05-21T00:54:42.607Z · LW(p) · GW(p)

The phrase you're looking for is probably "socially acceptable" or "social norm".

Socially acceptable is suitable; I edited. In most situations I avoid it, since it is an applause light that is yet to be diffused. In this context, however, it feels more like a neutral descriptor.

Okay. Which specific normative demands are you rejecting?

Do not presume so much.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-05-21T09:35:34.252Z · LW(p) · GW(p)

Do not presume so much.

Then I confess I am at a loss as to what your point in all this was, as you seem to have stated a rejection of something that other people said without any real explanation as to what you're rejecting, or why.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-06-24T01:36:42.461Z · LW(p) · GW(p)

As I stated, I intend no point other than those particular assertions made in my posts.

If you insist that I must only deploy arguments in support of a particular political agenda then said agenda is this: Bad arguments and hypocrisy presented in support of positions I approve of are still bad arguments and hypocrisy.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-06-24T02:14:13.090Z · LW(p) · GW(p)

I don't normally remark on such things, but I'm a bit discouraged to note the following:

  • The parent comment was the first time Cameron_Taylor has posted anything in roughly a month, in a long-dead argument in which he and I were disagreeing.
  • At roughly the same time the parent comment was posted, roughly the last 80 or so posts I've made were all voted down, consecutively, once each, for no discernible reason.
  • I recall at least two other commenters mentioning being voted down suddenly on multiple unrelated comments previously while arguing with Cameron_Taylor.

Karma is easy-come, easy-go, but I'm thinking that someone is not exactly participating in good faith here.

Replies from: conchis, thomblake, Eliezer_Yudkowsky
comment by conchis · 2009-06-24T02:34:31.619Z · LW(p) · GW(p)

By way of confirmation, this has indeed happened to both myself and at least one other commenter previously (I'll leave it to them whether they want to reveal themselves). I had been waiting to see whether it would happen again to be sure, but we now seem to have pretty good evidence of bad faith.

As SoullessAutomaton notes, the karma itself is not much of an issue, but it's nonetheless rather disappointing to see this sort of behavior. It's not immediately clear whether there's much to be done about it other than public shaming, but as a possible means of preventing this happening again, I don't suppose there's any way to revoke the downvoting privileges of those who seem to be abusing the system?

Replies from: Alicorn
comment by Alicorn · 2009-06-24T02:40:04.598Z · LW(p) · GW(p)

I was the other commenter, and confirm the observation.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-24T03:59:18.261Z · LW(p) · GW(p)

This is unfortunate, perhaps there should be a top level post to discuss the wise way to respond.

comment by thomblake · 2009-06-24T15:02:15.109Z · LW(p) · GW(p)

For the record, I've been known to downvote large numbers of posts at once (since I'm only here looking at comments for short periods of time, and downvote a lot of posts) but I read them first. Not so much lately, due to the extremely limited number of downvotes available.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-06-24T21:44:44.410Z · LW(p) · GW(p)

While I do not profess to understand the motivation for it, your apparent conviction that a substantial percentage of comments ought to be voted down is of an entirely different character than a mass downvoting aimed at a specific person, targeting what seems to be all of their comments from the past three weeks or so. The latter kind of behavior I would expect on sites like Digg; I tend to expect better of people here.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-24T22:14:40.500Z · LW(p) · GW(p)

Let's consider a less convenient possible world. I come across several stupid comments, realize that the author has a lot of karma, and then start reading their old comments. Careful reading, including the context when necessary, leads me to believe half of their old comments are bad or overrated, and deserving of a downvote. I would argue that making those downvotes is justified, but I'd like to think I have better things to do than read and vote on comments on dead threads.

Edit: this comment may be confusing, please read my follow up to orthonormal.

Replies from: orthonormal
comment by orthonormal · 2009-06-24T22:25:09.880Z · LW(p) · GW(p)

Don't use "least convenient possible world" to mean "a different hypothesis to explain what you're seeing". We don't want the usage to get confused.

EDIT: Also, it's unlikely for this effect to result in every one of SA's last 80 comments being downvoted once.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-24T23:31:27.122Z · LW(p) · GW(p)

I'm sorry I was unclear. I didn't mean to suggest that this was an alternative explanation for this event. In fact, as you point out, the hypothetical I described contradicts SA's testimony in an important way (the proportion of the comments downvoted).

The reason I brought up the hypothetical was to promote discussion about scenarios that are more difficult to evaluate than what actually appears to have occurred.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-06-25T09:51:58.868Z · LW(p) · GW(p)

The reason the subject came up at all is because this instance was particularly blatant. Otherwise, we don't generally have enough information to evaluate other scenarios reliably--this is why Eliezer wants a way to monitor voting abuse.

Even so I'm willing to grant that it could be something innocuous (and will apologize if that is the case), but the evidence so far leans toward abuse.

If you want to promote discussion about the issue, a top-level post is probably in order, as you yourself previously noted; feel free to make one.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-24T03:52:29.118Z · LW(p) · GW(p)

I've previously asked Tricycle for the ability to monitor this sort of thing. I will ask them again.

For the record, this sort of systematic downvoting is not only not in good faith, but grounds for removal of the ability to downvote.

comment by Cameron_Taylor · 2009-05-21T00:46:10.210Z · LW(p) · GW(p)

Okay. Which specific normative demands are you rejecting?

Those that I specifically reject in any specific post that I make. I have neither the obligation nor inclination to use my posts to make a united stance for one particular political position that I identify with. I in fact choose to disagree with poor arguments for opinions I approve of.

I'm not sure I follow.

I don't believe following is your intent.

comment by SoullessAutomaton · 2009-05-19T09:46:56.772Z · LW(p) · GW(p)

Surely substituting 'Coke' or 'Pepsi' would make the Australians and Canadians among us feel more welcome.

This actually loses something in context, though--Mt. Dew (in the USA) has a somewhat higher caffeine content than those (about 20% more, I think), and also has a reputation as something people drink primarily for the caffeine content, not the flavor.

A better example might be "energy drinks" like Red Bull, which are typically dense, syrupy carbonated beverages with twice the caffeine content of a cola, but I'm not sure how common those are in other areas.

EDIT: This comment was written on the premise that the parent was a genuine request for greater acknowledgement of LW readers not in the USA, not a disingenuous attempt to make an off-topic point about something completely unrelated to this post. Disregard this comment as appropriate.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-20T12:43:20.028Z · LW(p) · GW(p)

In Australia at least we have Red Bull and that does seem to be a better substitute.

The grandparent was a genuine request. The sincerity of the disingenuous EDIT in the reply I have ironic doubts about.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-05-20T19:34:51.959Z · LW(p) · GW(p)

The grandparent was a genuine request. The sincerity of the disingenuous EDIT in the reply I have ironic doubts about.

I reevaluated the comment after seeing the downvotes it received and based on the apparent attempt to score a point in a discussion on a different post. If you were being genuine then okay, I accept your word on the matter and retract the edit with my apologies.