Humans are utility monsters

post by PhilGoetz · 2013-08-16T21:05:28.195Z · LW · GW · Legacy · 216 comments

When someone complains that utilitarianism[1] leads to the dust speck paradox or the trolley-car problem, I tell them that's a feature, not a bug. I'm not ready to say that respecting the utility monster is also a feature of utilitarianism, but it is what most people everywhere have always done. A model that doesn't allow for utility monsters can't model human behavior, and certainly shouldn't provoke indignant responses from philosophers who keep right on respecting their own utility monsters.

The utility monster is a creature that is somehow more capable of experiencing pleasure (or positive utility) than all others combined. Most people consider sacrificing everyone else's small utilities for the benefits of this monster to be repugnant.

Let's suppose the utility monster is a utility monster because it has a more highly-developed brain capable of making finer discriminations, higher-level abstractions, and more associations than all the lesser minds around it. Does that make it less repugnant? (If so, I lose you here. I invite you to post a comment explaining why utility-monster-by-smartness is an exception.) Suppose we have one utility monster and one million others. Everything we do, we do for the one utility monster. Repugnant?

Multiply by nine billion. We now have nine billion utility monsters and 9×10^15 others. Still repugnant?

Yet these same enlightened, democratic societies whose philosophers decry the utility monster give approximately zero weight to the well-being of non-humans. We might try not to drive a species extinct, but when contemplating a new hydroelectric dam, nobody adds up the disutility to all the squirrels in the valley to be flooded.

If you believe the utility monster is a problem with utilitarianism, how do you take into account the well-being of squirrels? How about ants? Worms? Bacteria? You've gone to 10^15 others just with ants.[2] Maybe 10^20 with nematodes.

"But humans are different!" our anti-utilitarian complains. "They're so much more intelligent and emotionally complex than nematodes that it would be repugnant to wipe out all humans to save any number of nematodes."

Well, that's what a real utility monster looks like.

The same people who believe this then turn around and say there's a problem with utilitarianism because (when unpacked into a plausible real-life example) it might kill all the nematodes to save one human. Given their beliefs, they should complain about the opposite "problem": For a sufficient number of nematodes, an instantiation of utilitarianism might say not to kill all the nematodes to save one human.

 

1. I use the term in a very general way, meaning any action selection system that uses a utility function—which in practice means any rational, deterministic action selection system in which action preferences are well-ordered.
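
To make footnote 1's very general sense of "utilitarianism" concrete, here is a minimal sketch of an action selection system that simply picks whichever action its utility function ranks highest. It is illustrative only; the actions and utility values are invented, not taken from the post.

```python
# Minimal sketch of utility-based action selection in the footnote's general
# sense: rank actions by a real-valued utility function and take a top one.
# The actions and utility values below are invented purely for illustration.

def choose_action(actions, utility):
    """Return the action with the highest utility (ties broken arbitrarily)."""
    return max(actions, key=utility)

# An illustrative utility function, collapsed into one number per action.
example_utilities = {
    "save_one_human": 100.0,
    "save_the_nematodes": 0.001 * 10**20,  # tiny weight per nematode, huge count
}

print(choose_action(example_utilities, example_utilities.get))
# -> "save_the_nematodes": the kind of trade-off the post goes on to discuss
```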

2. This recent attempt to estimate the number of different living beings of different kinds gives some numbers. The web has many pages claiming there are 10^15 ants, but I haven't found a citation of any original source.

216 comments

Comments sorted by top scores.

comment by Said Achmiz (SaidAchmiz) · 2013-08-16T23:30:19.082Z · LW(p) · GW(p)

So here's a question for anyone who thinks the concept of a utility monster is coherent and/or plausible:

The utility monster allegedly derives more utility from whatever than whoever else, or doesn't experience any diminishing returns, etc. etc.

Those are all facts about the utility monster's utility function.

But why should that affect the value of the utility monster's term in my utility function?

In other words: granting that the utility monster experiences arbitrarily large amounts of utility (and granting the even more problematic thesis that experienced utility is intersubjectively comparable)... why should I care?

Replies from: TsviBT, novalis, Leon, PhilGoetz, Randaly, Jack, MugaSofer, PrometheanFaun, DanielLC
comment by TsviBT · 2013-08-17T01:27:23.369Z · LW(p) · GW(p)

I always automatically interpret the utility monster as an entity that somehow can be in a state that is more highly valued under my utility function than, say, a billion other humans put together.

But then the monster isn't a problem, because if there were in fact such an entity, I would indeed actually want to sacrifice a billion other humans to make the monster happy. This is true by definition.

Replies from: SaidAchmiz, Eliezer_Yudkowsky
comment by Said Achmiz (SaidAchmiz) · 2013-08-17T02:23:26.560Z · LW(p) · GW(p)

I always automatically interpret the utility monster as an entity that somehow can be in a state that is more highly valued under my utility function than, say, a billion other humans put together.

That's easy. For most people (in general; I don't mean here on lesswrong), this just describes one's family (and/or close friends)... not to mention themselves!

I mean, I don't know exactly how many random people's lives, in e.g. Indonesia, would have to be at stake for me to sacrifice my mother's life to save them, but it'd be more than one. Maybe a lot more.

A billion? I don't know that I'd go that far. But some people might.

Replies from: TsviBT
comment by TsviBT · 2013-08-17T10:10:22.473Z · LW(p) · GW(p)

Well, whether you really want (in the extrapolated volition sense) to sacrifice 10^{whatever} lives to save your family is a whole big calculation involving interpersonal morality, bounded rationality/virtue ethics, TDT/game theory, etc. The point that I was echoing is that if you really would want to make that trade, there's nothing monstery about your family - you just {love them that much}/{love others that little}. The utility monster is an objection to the social morality theory called "utilitarianism"; the utility monster becomes gibberish when phrased as an objection to "any set of preferences can in principle be completely specified by a utility function, to be handed to a generic decision process, resulting in optimal decision making". Like, "Oh no, oh no, I found this monster, and it is soooo soooo good to feed it humans! It is even more better every time I feed it another human! Woe is me! Goooood!!".

Now, the utility monster makes perfect sense as an objection to humans actually making decisions purely using explicit quantitative expected utility calculations. But that doesn't say anything about utility as a formalized version of "good". Rather, that's some sort of comment about the capricious quality of bounded reasoning under uncertainty - you always worry about strong conclusions that make you do particularly effective things, because a mistake in your calculations means you are doing particularly effective bad things. One particular sort of dangerously strong conclusion would be concluding that, e.g., the marginal utility of {UMonster eating an additional human} is larger than and grows faster than the marginal utility of {another human gets eaten alive}.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-17T08:12:56.818Z · LW(p) · GW(p)

To continue the argument: It could be a problem if you'd want to protect the utility monster once it exists, but would prefer that the utility monster not exist. For example it could be an innocent being who experiences unimaginable suffering when not given five dollars.

Replies from: byrnema, army1987, TsviBT
comment by byrnema · 2013-08-18T03:45:18.443Z · LW(p) · GW(p)

Our oldest utility monster is eight years old. (Did you have this example specifically in mind? Seems to fit the description very well.)

comment by A1987dM (army1987) · 2013-08-19T17:53:51.557Z · LW(p) · GW(p)

If you prefer a happy monster to no monster and no monster to a sad monster, then you prefer a happy monster to a sad monster, and TsviBT's point applies.

Whereas if you prefer no monster to a happy monster to a sad monster, why don't you kill the monster?

Replies from: Eliezer_Yudkowsky, SaidAchmiz
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-19T20:45:00.072Z · LW(p) · GW(p)

...sometimes I wonder about the people who find it unintuitive to consider that "Killing X, once X is alive and asking not to be killed" and "Preferring that X not be born, if we have that option in advance" could have widely different utility to me. The converse perspective implies that we should either (1) be spawning as many babies as possible, as fast as possible, or (2) anyone who disagrees with 1 should go on a murder spree, or at best consider such murder sprees ethically unimportant. After all, not spawning babies as fast as possible is as bad as murdering that many existent adults, apparently.

Replies from: Lukas_Gloor, army1987, selylindi, None, MugaSofer
comment by Lukas_Gloor · 2013-08-20T00:40:10.742Z · LW(p) · GW(p)

The crucial question is how we want to value the creation of new sentience (aka population ethics). It has been proven impossible to come up with intuitive solutions to it, i.e. solutions that fit some seemingly very conservative adequacy conditions.

The view you outline as an alternative to total hedonistic utilitarianism is often left underdetermined, which hides some underlying difficulties.

In Practical Ethics, Peter Singer advocated a position he called "prior-existence preference utilitarianism". He considered it wrong to kill existing people, but not wrong to not create new people as long as their lives would be worth living. This position is awkward because it leaves you no way of saying that a very happy life (one where almost all preferences are going to be fulfilled) is better than a merely decent life that is worth living. If it were better, and if the latter is equal to non-creation, then denying that the creation of the former life is preferable over non-existence would lead to intransitivity.

If I prefer, but only to a very tiny degree, having a child with a decent life over having one with an awesome life, would it be better if I had the child with the decent life?

In addition, nearly everyone would consider it bad to create lives that are miserable. But if the good parts of a decent life can make up for the bad parts in it, why doesn't a life consisting solely of good parts constitute something that is important to create? (This point applies most forcefully for those who adhere to a reductionist/dissolved view on personal identity.)

One way out of the dilemma is what Singer called the "moral ledger model of preferences". He proposed an analogy between preferences and debts. It is good if existing debts are paid, but there is nothing good about creating new debts just so they can be paid later. In fact, debts are potentially bad because they may remain unfulfilled, so all things being equal, we should try to avoid making debts. The creation of new sentience (in form of "preference-bundles" or newly created utility functions) would, according to this view, be at most neutral (if all the preferences will be perfectly fulfilled), and otherwise negative to the extent that preferences get frustrated.
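
As a toy illustration of this ledger reading (not Singer's own formalism; the numbers and preferences are invented): creating a preference opens a debt, fulfilling it only cancels that debt, and frustrating it leaves the debt outstanding, so a newly created life scores at best zero.

```python
# Toy sketch of the "moral ledger" reading described above: each created
# preference opens a debt of its importance; fulfilling it cancels the debt
# exactly; frustrating it leaves the debt outstanding. Values are invented.

def ledger_value(preferences):
    """preferences: list of (importance, fulfilled) pairs for one created life."""
    total = 0.0
    for importance, fulfilled in preferences:
        total -= importance      # debt incurred by creating the preference
        if fulfilled:
            total += importance  # paying the debt back exactly cancels it
    return total

perfectly_fulfilled_life = [(10, True), (5, True)]
decent_life = [(10, True), (5, False)]
print(ledger_value(perfectly_fulfilled_life))  # 0.0  -> creation is at most neutral
print(ledger_value(decent_life))               # -5.0 -> negative to the extent preferences are frustrated
```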

Singer himself rejected this view because it would imply voluntary human extinction being a good outcome. However, something about the "prior-existence" alternative he offered seems obviously flawed, which is arguably a much bigger problem than something being counterintuitive.

Replies from: Ghatanathoah, Lukas_Gloor, ESRogs
comment by Ghatanathoah · 2013-08-23T21:18:05.743Z · LW(p) · GW(p)

In my view population ethics failed at the start by making a false assumption, namely "Personal identity does not matter, all that matters is the total amount of whatever makes life worth living (ie utility)." I believe this assumption is wrong.

Derek Parfit first made this assumption when discussing the Nonidentity Problem. He believed it was the most plausible solution, but was disturbed by its other implications, like the Repugnant Conclusion. His work is what spawned most of the further debate on population ethics and its disturbing conclusions.

After meditating on the Nonidentity Problem for a while I realized Parfit's proposed solution had a major problem. In the traditional form of the NIP you are given a choice between two individuals who have different capabilities for utility generation (one is injured in utero, the other is not). However, there is another way to change the amount of utility someone gets out of life besides increasing or reducing their capabilities. You could also change the content of their preferences, so that a person has more ambitious preferences that are harder to achieve.

I reframed the NIP as giving a choice between having two children with equal capabilities (intelligence, able-bodiedness, etc.) but with different ambitions, one wanted to be a great scientist or artist, while the other just wanted to do heroin all day. It seemed obvious to me, and to most of the people I discussed this with, that it was better to have the ambitious child, even if the druggie had a greater level of lifetime utility.

In my view the primary thing that determines whether someone's creation is good or not is their identity (i.e., what sort of preferences they have, their personality, etc.). What constitutes someone having a "morally right" identity is really complicated and fragile, but generally it means that they have the sort of rich, complex values that humans have, and that they are (in certain ways) unique and different from the people who have come before. In addition to their internal desires, their relationship to other people is also important. (Of course, this only applies if their total lifetime utility is positive; if it's negative, it's bad to create them no matter what their identity is.)

We can now use this to patch Singer's "Moral Ledger" in a way that fits Eliezer's views. Creating someone with the "wrong" identity is a debt, but creating a person with a "right" identity is not. So we shouldn't create a utility monster (if "utility monster" is a "wrong" identity), because that would create a debt, but killing the monster wouldn't solve anything, it would just make it impossible to pay the debt.

My "Identity Matters" model also helps explain our intuitions about our duties to have children. In the total and average views, the identity of the child is unimportant. In my model it is. If someone doesn't want to have children, having an unwanted child is a "debt" regardless of the child's personal utility. A child born to parents who want to have one, by contrast may be "right" to have, even if its utility is lower than that of the aforementioned unwanted child. (Of course, this model needs to be flexible about what makes someone "your child" in order to regard things like sterile parents adopting unwanted children as positive, but I don't see this as a major problem).

In addition to identity mattering, we also seem to have ideals about how utility should be concentrated. Most people intuitively reject things like Replaceability and the Repugnant Conclusion, and I think they're right to. We seem to have an ideal that a small population with high per-person utility is better than a large one with low per-person utility, even if its total utility is higher. I'm not suggesting Average Utilitarianism, as I said in another comment, I think that AU is a disastrously bad attempt to mathematize that ideal. But I do think that ideal is worthwhile, we just need a less awful way to fit it into our ethical system.

A third reason for our belief that having children is optional is that most people seem to believe in some sort of Critical Level Utilitarianism with the critical level changing depending on what our capabilities for increasing people's utility are. Most people in the modern world would consider it unthinkable to have a child whose level of utility would have been considered normal in Medieval Europe. And I think this belief isn't just the status quo bias, I would also consider it unconscionable to have a child with normal Modern World levels of utility in a transhuman future.

Replies from: ygert
comment by ygert · 2013-08-23T21:50:12.383Z · LW(p) · GW(p)

It seemed obvious to me, and to most of the people I discussed this with, that it was better to have the ambitious child, even if the druggie had a greater level of lifetime utility.

Oh? Yes, it's true that it is better to have the ambitious child. I agree, and I think most others will too. But I don't think that's because of some fundamental preference, but rather because the ambitious child has a far greater chance of causing good in the world. (Say, becoming an artist and painting masterpieces that will be admired for centuries to come, or becoming a scientist and developing our understanding of the fundamental nature of the universe.) The druggie will not provide these positive externalities, and may even provide negative ones. (Say, turning to crime in order to feed his addiction, as some druggies do.)

I think this adequately explains this reaction, and I do not see a need to posit a fundamental term in our utility functions to explain it.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-08-26T03:30:47.595Z · LW(p) · GW(p)

I think this adequately explains this reaction, and I do not see a need to posit a fundamental term in our utility functions to explain it.

I disagree. I have come to realize that morality isn't just about maximizing utility; it's also about protecting fragile human* values. Creating creatures that have values fundamentally opposed to those values, such as paperclip maximizers, orgasmium, or sociopaths, seems a morally wrong thing to do to me.

This was driven home to me by a common criticism of utilitarianism, namely that it advocates that, if possible, we should kill everyone and replace them with creatures whose preferences are easier to satisfy, or who are easier to make happy. I believe this is a bug, not a feature, and that valuing the identity of created creatures is the solution. Eliezer's essays on the fragility and complexity of human values also helped me realize this.

*When I say "human" I mean any creature with a sufficiently humanlike mind, regardless of whether it is biologically human or not.

Replies from: ygert
comment by ygert · 2013-08-26T21:33:20.271Z · LW(p) · GW(p)

Perhaps I was unclear. I used utilitarian terminology, but utilitarianism is not necessary for my point. To restate: If I could choose between an ambitious child being born, or a druggie child being born, I (and you, according to your above comment) would choose the ambitious child, all else being equal. Why would we choose that? Well, there are several possible explanations, including the one which you gave. However, yours was complicated and far from trivially true, and so I point out that such massive suppositions are unnecessary, as we already have a certain well known human desire to explain that choice. (Call that desire what you will, perhaps "altruism", or "bettering the world". It's the desire that on the margin, more art, knowledge, and other things-considered-valuable-to-us are created.)

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-08-27T09:02:19.993Z · LW(p) · GW(p)

I agree that externalities are the first reason that comes to mind. But when I try to modify the thought experiments to control for this my preferences remain the same.

For instance, if I imagine someone with rather introverted ambitions (for instance, someone who wants to collect and modify cars, or beat lots of difficult videogames) versus someone with unambitious but harmless preferences (such as looking at porn all day), I still preferred the ambitious person. Incidentally, I'm not saying it's bad that there are people who want to look at porn (or who want to use recreational drugs, for that matter); I'm just saying it's bad that there are people who want to devote their entire life to it and do nothing more ambitious.

To test my ideals even further (and to make sure my intuitions were not biased by the fact that porn and drugs are low-status activities) I imagined two people who both wanted to just look at porn all day. The difference was that one wanted to compare and contrast the porn they watched and develop theories about the patterns he found, while the other just wanted to passively absorb it without really thinking. I preferred the Intellectual Porn Watcher to the Absorber.

Call that desire what you will, perhaps "altruism", or "bettering the world". It's the desire that on the margin, more art, knowledge, and other things-considered-valuable-to-us are created.

I think the strongest reason to value certain identities over others is that otherwise, the most efficient way to create things-considered-valuable-to-us is to change who "us" is. Once we get good at AI or genetics, kill everyone and replace them with creatures who value things that are easier to manufacture than art and knowledge. Or, if we have an aversion to killing, just sterilize everyone and make sure all future creatures born are of this type. The fact that this seems absurdly evil indicates to me that we do value identity over utility to some extent.

Replies from: ygert
comment by ygert · 2013-08-27T14:59:01.156Z · LW(p) · GW(p)

Hm. That's actually a pretty good answer. I too find I would prefer the Intellectual Porn Watcher to the Absorber. I will note, however, that the preference is rather weak. If you would give me $10 (or however much) in exchange for letting the Absorber exist rather than the Intellectual Porn Watcher, I'd take that, even for relatively low values of money. (I'm not quite sure what the cutoff is, though it's low.) On the other hand, I think I'd be willing to give up a fair bit of money to have the Ambitious Intellectual exist rather than the Druggie.

Thinking about it in these terms is by no means perfect, but it allows me to solidify my view of my preferences. In any case, I'll admit this is a good point.

I think the strongest reason to value certain identities over others is that otherwise, the most efficient way to create things-considered-valuable-to-us is to change who "us" is. Once we get good at AI or genetics, kill everyone and replace them with creatures who value things that are easier to manufacture than art and knowledge. Or, if we have an aversion to killing, just sterilize everyone and make sure all future creatures born are of this type. The fact that this seems absurdly evil indicates to me that we do value identity over utility to some extent.

See, "valuable" is a two place word, it takes as arguments both an object or state, and a valuer. Now, when I talk about this, I say "us" as the valuer, (and you can argue that I really should be only saying me, as our goal-systems are not necessarily aligned, but we'll put that aside), but that specifically means the "us" that is having this conversation. Or to put it another way, if you ask me "How much do you value thing X?", you can model it as me going to a black box inside my head and getting an answer. Of course, if you take out that black box and replace it with another one, the answer may be different. But, even if I know that tomorrow someone will come and do surgery to swap those "boxes", that doesn't change my answer today.

Sorry for rambling a bit. I'm not sure how best to explain it all. But I value art and knowledge. (To use your example.) If you replace me with someone who values paperclips, then that other person will go and do the things he values, like making paperclips and not art and knowledge, and I will hate him for that. I don't like the world where he does that, as my utility function does not include terms for paperclips. He would value that world, and would fight tooth and claw to get to that worldstate. Nothing says we have to agree on what is the best worldstate, and nothing says I am obliged to bring about arbitrary world states others want.

Replies from: ygert
comment by ygert · 2013-08-27T15:19:31.917Z · LW(p) · GW(p)

... Oh. Actually, on reading what you wrote over again, I think (in the last section; the points about ambition still stand) we are arguing over different things, and are more in agreement than we thought. You say you value "identity over utility" (to some extent). I think I interpreted that to mean something subtly different from what you meant.

By utility, you meant total utility of everyone (or maybe the average utility of everyone?) Realizing that, of course we value lots of things over "utility", when "utility" is used in that sense. (I will call it ToAU, for "Total or Average Utility", to avoid confusing it with what I will call MPU, "My Personal Utility".)

Yes, you make a good point that ToAU is not what we should be maximizing. I agree. I was arguing that it is nonsensical to not value utility, as by definition, MPU is what we should be maximizing. (Ok, put aside for now, as before, that you and I may have slightly different goal systems and so I should be using a different pronoun: either you, if I am talking about what you are maximizing, or me, if we are talking about me.)

Now, MPU is quite the complex function, and for us, at least, it includes terms for art and science existing, for humans not being killed, for minimizing not only our (mine, your) personal suffering, but also for minimizing global suffering. Altruism is a major part of MPU; make no mistake, I am not arguing that others' opinions do not matter, at least for some value of "others", definitely including all humans, and likely including many non-humans. MPU does include a term for the enjoyment, happiness, identity, non-suffering, and so forth of those in this category, but (as you have shown) this category cannot be completely universal.

In fact, in the end, all this boils down to is that you were arguing against utilitarianism, while I was arguing for consequentialism, two very similar ethical systems, but profoundly different.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-08-28T00:40:09.398Z · LW(p) · GW(p)

I was arguing that it is nonsensical to not value utility, as by definition, MPU is what we should be maximizing.

Sorry, I tend to carelessly use the word "utility" to mean "the stuff utilitarians want to maximize," forgetting that many people will read it as "Von Neumann-Morgenstern Utility." You actually aren't the first person on Less Wrong I've done this to.

In fact, in the end, all this boils down to is that you were arguing against utilitarianism, while I was arguing for consequentialism, two very similar ethical systems, but profoundly different.

I agree entirely.

comment by Lukas_Gloor · 2013-08-20T00:40:30.693Z · LW(p) · GW(p)

Average utilitarianism (which can be either hedonistic or about preferences / utility functions) is another way to avoid the repugnant conclusion. However, average utilitarianism comes with its own conclusions that most consider to be unacceptable. If the average life in the universe turns out to be absolutely miserable, is it a good thing if I bring a child into existence that will have a life that is slightly less miserable? Or similarly, if the average life is free of suffering and full of the most intense happiness possible, would I be acting catastrophically wrong if I brought into existence a lot of beings that constantly experience the peak of current human happiness (without ever having preferences unfulfilled too), simply because it would lower the overall average?

Another point to bring up against average utilitarianism is that it seems odd that the value of creating a new life should depend on what the rest of the universe looks like. All the conscious experiences remain the same, after all, so where does this "let's just take the average!" come from?

Replies from: army1987
comment by A1987dM (army1987) · 2013-08-20T22:19:04.916Z · LW(p) · GW(p)

If the average life in the universe turns out to be absolutely miserable, is it a good thing if I bring a child into existence that will have a life that is slightly less miserable? Or similarly, if the average life is free of suffering and full of the most intense happiness possible, would I be acting catastrophically wrong if I brought into existence a lot of beings that constantly experience the peak of current human happiness (without ever having preferences unfulfilled too), simply because it would lower the overall average?

More repugnant than that is that naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.

Replies from: Ghatanathoah, CronoDAS
comment by Ghatanathoah · 2013-08-23T19:40:52.656Z · LW(p) · GW(p)

More repugnant than that is that naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.

This can be resolved by taking a timeless view of the population, so that someone still counts as part of the average even after they die. This neatly resolves the question you asked Eliezer earlier in the thread, "If you prefer no monster to a happy monster why don't you kill the monster." The answer is that once the monster is created it always exists in a timeless sense. The only way for there to be "no monster" is for it to never exist in the first place.

That still leaves the most repugnant conclusion of naive average utilitarianism, namely that it states that, if the average utility is ultranegative (i.e., everyone is tortured 24/7), creating someone with slightly less negative utility (i.e., they are tortured 23/7) is better than creating nobody.

In my view average utilitarianism is a failed attempt to capture a basic intuition, namely that a small population of high utility people is sometimes better than a large one of low utility people, even if the large population's total utility is higher. "Take the average utility of the population" sounds like an easy and mathematically rigorous way to express that intuition at first, but it runs into problems once you figure out "munchkin" ways to manipulate the average, like adding moderately miserable people to a super-miserable world.
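
A toy arithmetic sketch of both "munchkin" manipulations (numbers invented for illustration): removing the least happy person raises a naive average, and adding a slightly-less-miserable person to a super-miserable world raises it too.

```python
# Toy numbers illustrating the two manipulations of a naive average discussed
# above: both look like "improvements" by the average, though neither should.

def average_utility(population):
    return sum(population) / len(population)

happy_world = [90, 80, 70]
print(average_utility(happy_world))        # 80.0
print(average_utility(happy_world[:-1]))   # 85.0 -- remove the least happy person

miserable_world = [-100, -100, -100]       # everyone tortured 24/7
print(average_utility(miserable_world))          # -100.0
print(average_utility(miserable_world + [-90]))  # -97.5 -- add a tortured-23/7 life
```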

In my view we should keep the basic intuition (especially the timeless interpretation of it), but figure out a way to express it that isn't as horrible as AU.

Replies from: army1987, teageegeepea
comment by A1987dM (army1987) · 2013-08-23T22:05:49.806Z · LW(p) · GW(p)

This can be resolved by taking a timeless view of the population, so that someone still counts as part of the average even after they die.

In that view, does someone already count as part of the average even before they are born?

Replies from: TheOtherDave, Fronken, selylindi
comment by TheOtherDave · 2013-08-24T05:57:48.224Z · LW(p) · GW(p)

I would think so. Of course, that's not to say we know that they count... my confidence that someone who doesn't exist once existed is likely much higher, all else being equal, than my confidence that someone who doesn't exist is going to exist.

This should in no way be understood as endorsing the more general formulation.

comment by Fronken · 2013-08-23T22:31:49.909Z · LW(p) · GW(p)

Presumably, only if they get born. Although that's tweakable.

comment by selylindi · 2013-08-28T15:30:39.344Z · LW(p) · GW(p)

Yes and no. Yes in that the timeless view is timeless in both directions. No in that for decisionmaking we can only take into account predictions of the future and not the future itself.

For intuitive purposes, consider the current political issues of climate change and economic bubbles. It might be the case that we who are now alive could have a better quality of life if we used up the natural resources and if we had the government propagate a massive economic bubble that wouldn't burst until after we died. If we don't value the welfare of possible future generations, we should do those things. If we do value the welfare of possible future generations, we should not do those things.

For technical purposes, suppose we have an AIXI-bot with a utility function that values human welfare. Examination of the AIXI definition makes it clear that the utility function is evaluated over the (predicted) total future. (Entertaining speculation: If the utility function was additive, such an optimizer might kill off those of us using more than our share of resources to ensure we stay within Earth's carrying capacity, making it able to support a billion years of humanity; or it might enslave us to build space colonies capable of supporting unimaginable throngs of future happier humans.)

For philosophical purposes, there's an important sense in which my brainstates change so much over the years that I can meaningfully, if not literally, say "I'm not the same person I was a decade ago", and expect that the same will be true a decade from now. So if I want to value my future self, there's a sense in which I necessarily must value the welfare of some only-partly-known set of possible future persons.

comment by teageegeepea · 2013-08-25T03:34:39.412Z · LW(p) · GW(p)

If I kill someone in their sleep so they don't experience death, and nobody else is affected by it (maybe it's a hobo or something), is that okay under the timeless view because their prior utility still "counts"?

Replies from: ArisKatsaris, Ghatanathoah, ygert
comment by ArisKatsaris · 2013-08-25T22:12:51.014Z · LW(p) · GW(p)

If we're talking preference utilitarianism, in the "timeless sense" you have drastically reduced the utility of the person, since the person (while still living) would have preferred not to be so killed; and you went against that preference.

It's because their prior utility (their preference not to be killed) counts, that killing someone is drastically different from them not being born in the first place.

comment by Ghatanathoah · 2013-08-26T02:59:02.644Z · LW(p) · GW(p)

No, because they'll be deprived of any future utility they might have otherwise received by remaining alive.

So if a person is born, has 50 utility of experiences and is then killed, the timeless view says the population had one person of 50 utility added to it by their birth.

By contrast, if they were born, have 50 utility of experiences, avoid being killed, and then have an additional 60 utility of experiences before they die of old age, the timeless view says the population had one person of 110 utility added to it by their birth.

Obviously, all other things being equal, adding someone with 110 utility is better than adding someone with 50, so killing is still bad.
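
Restating that comparison as a small sketch (same numbers as above): under the timeless accounting the killed person still counts, but only for the utility they actually accrued, so cutting the life short still lowers the total.

```python
# The comparison above, restated: under the timeless view a person counts
# toward the population forever once born, but only with the utility they
# actually accrued, so killing still reduces the total.

timeless_total_if_killed = sum([50])        # born, 50 utility of experiences, then killed
timeless_total_if_spared = sum([50 + 60])   # same person, later dies of old age

print(timeless_total_if_killed)  # 50
print(timeless_total_if_spared)  # 110 -- so, all else being equal, killing is still bad
```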

comment by ygert · 2013-08-25T04:54:12.360Z · LW(p) · GW(p)

The obvious way to avoid this is to weight each person by their measure, e.g. the amount of time they spend alive.

Replies from: teageegeepea
comment by teageegeepea · 2013-08-25T21:58:31.508Z · LW(p) · GW(p)

I think total utilitarianism already does that.

Replies from: ygert
comment by ygert · 2013-08-25T22:16:25.002Z · LW(p) · GW(p)

Yes, that's my point. (Maybe my tenses were wrong.) This answer (the weighting) was meant to be the answer to teageegeepea's question of how exactly the timeless view considers the situation.

comment by CronoDAS · 2013-08-22T15:24:40.735Z · LW(p) · GW(p)

More repugnant than that is that naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.

In real life, this would tend to make the remaining people less happy.

comment by ESRogs · 2013-08-23T03:07:28.964Z · LW(p) · GW(p)

not wrong to not create new people as long as their lives would be worth living

Did you mean to write, "not wrong to create new people..." ?

Replies from: somervta
comment by somervta · 2013-08-23T07:35:15.796Z · LW(p) · GW(p)

No, that's Singer's position. He's saying there is no obligation to create new people.

Replies from: ESRogs
comment by ESRogs · 2013-08-23T12:50:39.027Z · LW(p) · GW(p)

Then what's the qualifier about their lives being worth living there for? Presumably he believes it's also not wrong to not create people whose lives would not be worth living, right?

Replies from: somervta
comment by somervta · 2013-08-23T13:27:45.081Z · LW(p) · GW(p)

Huh. Rereading it, your interpretation might make more sense. I was thinking about that as 'even if their lives would be worth living, you don't have an obligation to create new people', which is a position that Peter Singer holds, but so is the position expressed after your correction.

comment by A1987dM (army1987) · 2013-08-19T21:47:39.033Z · LW(p) · GW(p)

In the case of actual human children in an actual society, there are considerations that don't necessarily apply to hypothetical alien five-dollar-bill-satisficers in a vacuum.

comment by selylindi · 2013-08-28T17:22:33.724Z · LW(p) · GW(p)

Perhaps you and they are just focusing on different stages of reasoning. The difference in utility that you've described is a temporal asymmetry that sure looks at first glance like a flaw. But that's because it's an unnecessary complexity to add it as a root principle when explaining morality up to now. Each of us desires not to be a victim of murder sprees (when there are too many people) or to have to care for dozens of babies (when there are too few people), and the simplest way for a group of people to organize to enforce satisfaction of that desire is for them to guarantee the state does not victimize any member of the group. So on desirist grounds I'd expect the temporal asymmetry to tend to emerge strategically as the conventional morality applying only among the ruling social class of a society: only humans and not animals in a modern democracy, only men when women lack suffrage, only whites when blacks are subjugated, only nobles in aristocratic society, and so on. (I can readily think of supporting examples, but I'm not confident in my inability to think of contrary examples, so I do not yet claim that history bears out desirism's prediction on this matter.)

Of course, if you plan to build an AI capable of acquiring power over all current life, you may have strong reason to incorporate the temporal asymmetry as a root principle. It wouldn't likely emerge out of unbalanced power relations. And similarly, if you plan on bootstrapping yourself as an em into a powerful optimizer, you have strong reason to precommit to the temporal asymmetry so the rest of us don't fear you. :D

comment by [deleted] · 2013-08-25T03:24:18.348Z · LW(p) · GW(p)

If the utility monster is so monstrously sad, why would it be asking not to be killed? Usually, a decent rule of thumb is that if someone doesn't want to die, there's a good chance their life is somewhat worth living.

The converse perspective implies that we should either (1) be spawning as many babies as possible, as fast as possible, or (2) anyone who disagrees with 1 should go on a murder spree, or at best consider such murder sprees ethically unimportant.

This conclusion is technically incorrect. For new babies, you don't know in advance whether their lives will be worth living. Even if you go with positive expected value (and no negative externalities), you can still have better alternatives, e.g. do science now that makes many more and much better lives much later; "as fast as possible" is logically unnecessary.

Also, killing sprees have side-effects on society that omissions of reproduction don't have, e.g. already-born people will take costly measures not to be killed (etc...)

comment by MugaSofer · 2013-08-21T15:51:27.257Z · LW(p) · GW(p)

It worries me how many people have come to exactly those conclusions. I mean, it's not very many, but still ...

comment by Said Achmiz (SaidAchmiz) · 2013-08-19T19:21:29.781Z · LW(p) · GW(p)

If you prefer a happy monster to no monster and no monster to a sad monster, then you prefer a happy monster to a sad monster

Only if your preferences are transitive.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-08-19T19:48:19.997Z · LW(p) · GW(p)

If you have any sort of coherent utility system at all, they will be.

A better point is that "no monster" just means you're shunting the problem to poor Alternate You in another many-worlds branch, whereas killing a happy monster means actually decreasing the number of universes with the monster in it by one.

comment by TsviBT · 2013-08-17T10:11:37.982Z · LW(p) · GW(p)

I don't get it, how is that different from any old bad thing you want to avoid?

comment by novalis · 2013-08-17T00:49:34.767Z · LW(p) · GW(p)

why should I care?

Isn't this an objection to any theory of ethics?

Replies from: metastable, SaidAchmiz, Juno_Watt, MugaSofer
comment by metastable · 2013-08-17T01:04:17.026Z · LW(p) · GW(p)

As a lone question, it could be, but the point of his post is that even stipulating utilitarianism, it does not follow that you or I should maximize the utils of Mr. Utility Monster.

comment by Said Achmiz (SaidAchmiz) · 2013-08-17T01:10:44.848Z · LW(p) · GW(p)

No, only theories of ethics that say that I should care about things that I do not already care about.

And it is, in any case, not an objection but a question. :)

comment by Juno_Watt · 2013-08-18T23:53:15.519Z · LW(p) · GW(p)

Not necessarily a fatal one.

comment by MugaSofer · 2013-08-18T22:55:53.016Z · LW(p) · GW(p)

I believe some famous philosopher already has this point named after him.

comment by Leon · 2013-08-17T00:32:54.284Z · LW(p) · GW(p)

This is just the (intended) critique of utilitarianism itself, which says that the utility functions of others are (in aggregate) exactly what you should care about.

Replies from: DanArmak
comment by DanArmak · 2013-08-17T13:03:18.939Z · LW(p) · GW(p)

Utilitarianism doesn't say that. Maybe some variant says that, but general utilitarianism merely says that I should have a single self-consistent utility function of my own, which is free to assign whatever weights to others.

ETA: PhilGoetz says otherwise. I believe that he is right, he's an expert in the subject matter. I am surprised and confused.

Replies from: Kaj_Sotala, AlexMennen, PhilGoetz, MugaSofer
comment by Kaj_Sotala · 2013-08-17T23:23:59.923Z · LW(p) · GW(p)

If you're unsure of a question of philosophy, the Stanford Encyclopedia of Philosophy is usually the best place to consult first. Its history of utilitarianism article says that

Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good. There are many ways to spell out this general claim. One thing to note is that the theory is a form of consequentialism: the right action is understood entirely in terms of consequences produced. What distinguishes utilitarianism from egoism has to do with the scope of the relevant consequences. On the utilitarian view one ought to maximize the overall good — that is, consider the good of others as well as one's own good.

The Classical Utilitarians, Jeremy Bentham and John Stuart Mill, identified the good with pleasure, so, like Epicurus, were hedonists about value. They also held that we ought to maximize the good, that is, bring about ‘the greatest amount of good for the greatest number’.

Utilitarianism is also distinguished by impartiality and agent-neutrality. Everyone's happiness counts the same. When one maximizes the good, it is the good impartially considered. My good counts for no more than anyone else's good. Further, the reason I have to promote the overall good is the same reason anyone else has to so promote the good. It is not peculiar to me.

Note the last paragraph in particular. Utilitarianism is agent-neutral: while it does take your utility function into account, it gives it no more weight than anybody else's.

The "general utilitarianism" that you mention is mostly just "having a utility function", not "utilitarianism" - utility functions might in principle be used to implement ethical theories quite different from utilitarianism. This is a somewhat common confusion on LW (one which I've been guilty of myself, at times). I think it has to do with the Sequences sometimes conflating the two.

EDIT: Also, in SEP's Consequentialism article:

Since classic utilitarianism reduces all morally relevant factors (Kagan 1998, 17–22) to consequences, it might appear simple. However, classic utilitarianism is actually a complex combination of many distinct claims, including the following claims about the moral rightness of acts:

Consequentialism = whether an act is morally right depends only on consequences (as opposed to the circumstances or the intrinsic nature of the act or anything that happens before the act).

Actual Consequentialism = whether an act is morally right depends only on the actual consequences (as opposed to foreseen, foreseeable, intended, or likely consequences).

Direct Consequentialism = whether an act is morally right depends only on the consequences of that act itself (as opposed to the consequences of the agent's motive, of a rule or practice that covers other acts of the same kind, and so on).

Evaluative Consequentialism = moral rightness depends only on the value of the consequences (as opposed to non-evaluative features of the consequences).

Hedonism = the value of the consequences depends only on the pleasures and pains in the consequences (as opposed to other goods, such as freedom, knowledge, life, and so on).

Maximizing Consequentialism = moral rightness depends only on which consequences are best (as opposed to merely satisfactory or an improvement over the status quo).

Aggregative Consequentialism = which consequences are best is some function of the values of parts of those consequences (as opposed to rankings of whole worlds or sets of consequences).

Total Consequentialism = moral rightness depends only on the total net good in the consequences (as opposed to the average net good per person).

Universal Consequentialism = moral rightness depends on the consequences for all people or sentient beings (as opposed to only the individual agent, members of the individual's society, present people, or any other limited group).

Equal Consideration = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (= all who count count equally).

Agent-neutrality = whether some consequences are better than others does not depend on whether the consequences are evaluated from the perspective of the agent (as opposed to an observer).

comment by AlexMennen · 2013-08-17T17:14:02.341Z · LW(p) · GW(p)

PhilGoetz says otherwise. I believe that he is right, he's an expert in the subject matter. I am surprised and confused.

PhilGoetz is correct, but your confusion is justified; it's bad terminology. Consequentialism is the word for what you thought utilitarianism meant.

Replies from: DanArmak
comment by DanArmak · 2013-08-17T18:38:34.017Z · LW(p) · GW(p)

I thought a consequentialist is not necessarily a utilitarian. Utilitarianism should mean that all values are comparable and tradeable via utilons (measured in real numbers), and that there is (ideally) a single utility function for measuring the utility of a thing (to someone). The Wikipedia page you link lists "utilitarianism" as only one of many philosophies compatible with consequentialism.

Replies from: AlexMennen
comment by AlexMennen · 2013-08-17T19:17:52.050Z · LW(p) · GW(p)

You are correct that utilitarianism is a type of consequentialism, and that you can be a consequentialist without being a utilitarian. Consequentialism says that you should choose actions based on their consequences, which pretty much forces you into the VNM axioms, so consequentialism is roughly what you described as utilitarianism. As I said, it would make sense if that is what utilitarianism meant, but despite my opinions, utilitarianism does not mean that. Utilitarianism says that you should choose the action that results in the consequence that is best for all people in aggregate.

Replies from: DanArmak, Lukas_Gloor
comment by DanArmak · 2013-08-17T20:57:29.906Z · LW(p) · GW(p)

I see. Thank you for clearing up the terminology.

Then what would the term be for a VNM-rational, moral anti-realist who explicitly considers others' welfare only because they figure in his utility function, and doesn't intrinsically care about their own utility functions?

Replies from: Jack, AlexMennen, Juno_Watt, blacktrance
comment by Jack · 2013-08-19T14:02:03.100Z · LW(p) · GW(p)

Then what would the term be for a VNM-rational, moral anti-realist who explicitly considers others' welfare only because they figure in his utility function, and doesn't intrinsically care about their own utility functions?

"Utilitarian" and all the other labels in normative ethics are labels for what ought to be in an agent's utility function. So I would call this person someone who rightly stopped caring about normative philosophy.

comment by AlexMennen · 2013-08-17T22:10:13.508Z · LW(p) · GW(p)

I don't know of a commonly agreed-upon term for that, unfortunately. "Utility maximizer", "VNM-rational agent", and "homo economicus" are similar to what you're looking for, but none of these terms imply that the agent's utility function is necessarily dependent on the welfare of others.

comment by Juno_Watt · 2013-08-19T15:00:55.840Z · LW(p) · GW(p)

Rational self-interest?

comment by blacktrance · 2013-08-23T05:49:34.366Z · LW(p) · GW(p)

To use an Objectivist term, it's a person who's acting in his "properly understood self-interest".

comment by Lukas_Gloor · 2013-08-19T01:36:58.100Z · LW(p) · GW(p)

Utilitarianism says that you should choose the action that results in the consequence that is best for all people in aggregate.

Not just people but all the beings that serve as "vessels" for whatever it is that matters (to you). According to most common forms of utilitarianism, "utility" consists of happiness and/or (the absence of) suffering or preference satisfaction/frustration.

comment by PhilGoetz · 2013-08-17T15:20:32.174Z · LW(p) · GW(p)

Thanks, but I tend to define and use my own terminology, because the standard terms are too muddled to use. I am an expert in my own terminology. Leon is talking about utilitarianism as the word is usually, or at least historically, used outside LessWrong, as a computation that everyone can perform and get the same answer, so society can agree on an action.

Replies from: DanArmak
comment by DanArmak · 2013-08-17T16:03:09.826Z · LW(p) · GW(p)

a computation that everyone can perform and get the same answer, so society can agree on an action.

But that computation is still a two-place function; it depends on the actual utility function used. Surely "classical" utilitarianism doesn't just assume moral-utility realism. But without "utility realism" there is no necessary relation between the monster's utility according to its own utility function, and the monster's utility according to my utility function.

Humans are similar, so they have similar utility functions, so they can trade without too many repugnant outcomes. And because of this we sometimes talk of utility functions colloquially without mentioning whose functions they are. But a utility monster is by definition unlike regular humans, so the usual heuristics don't apply; this is not surprising.

When I thought of a "utility monster" previously, I thought of a problem with the fact that my (and other humans') utility functions are really composed of many shards of value and are bad at trading between them. So a utility monster would be something that forced me to sacrifice a small amount of one value (murder a billion small children) to achieve a huge increase in another value (make all adults transcendently happy). But this would still be a utility monster according to my own utility function.

On the other hand, saying "a utility monster is anything that assigns huge utility to itself - which forces you to assign huge utility to it too, just because it says so" - that's just a misunderstanding of how utility works. I don't know if it's a strawman, but it's definitely wrong.
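
A small sketch of the two-place point (illustrative names and weights only): the monster's own utility function and mine are separate functions over the same world, and nothing about the first forces a large term into the second.

```python
# Illustration of utility as a two-place notion: a utility function belongs to a
# particular agent. The monster's function can assign itself astronomical value
# without that number appearing in my function unless my function includes it.

def monster_utility(world):
    return 10**9 * world["monster_resources"]   # the monster values its own gains enormously

def my_utility(world):
    # My weights over the same world; the monster's self-assessment is not a term here.
    return 1.0 * world["human_welfare"] + 0.01 * world["monster_resources"]

world = {"monster_resources": 100, "human_welfare": 50}
print(monster_utility(world))  # 100000000000 -- huge, by the monster's own lights
print(my_utility(world))       # 51.0 -- unaffected by how the monster scores itself
```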

I notice that I am still confused about what different people actually believe.

Replies from: PhilGoetz
comment by PhilGoetz · 2013-08-19T23:34:44.173Z · LW(p) · GW(p)

If by "moral-utility realism" you mean the notion that there is one true moral utility function that everyone should use, I think that's what you'll find in the writings of Bentham, and of Nozick. Not explicitly asserted; just assumed, out of lack of awareness that there's any alternative. I haven't read Nozick, just summaries of him.

Historically, utilitarianism was seen as radical for proposing that happiness could by itself be the sole criterion for an ethical system, and for being strictly consequentialist. I don't know when the first person proposed that it makes sense to talk about different people having different utility functions. You could argue it was Nietzsche, but he meant that people could have dramatically opposite value systems that are necessarily at war with each other, which is different from saying that people in a single society can use different utility functions.

(What counts as a "different" belief, BTW, depends on the representational system you use, particularly WRT quasi-indexicals.)

Anyway, that's no longer a useful way to define utilitarianism, because we can use "consequentialism" for consequentialism, and happiness turns out to just be a magical word, like "God", that you pretend the answers are hidden inside of.

comment by MugaSofer · 2013-08-18T22:57:32.509Z · LW(p) · GW(p)

"Utilitarianism" is sometimes used for both that "variant" (valuing utility) and the meaning you ascribe to it (defining "value" in terms of utility.) The Utility Monster is designed to interfere with the former meaning. Which is the correct meaning ...

comment by PhilGoetz · 2013-08-17T00:32:18.550Z · LW(p) · GW(p)

In this post, I wrote: "The standard view ... obliterates distinctions between the ethics of that person, the ethics of society, and "true" ethics (whatever they may be). I will call these "personal ethics", "social ethics", and "normative ethics"."

Using that terminology, you're objecting to the more general point that social utility functions shouldn't be confused with personal utility functions. All mainstream discussion of utilitarianism has failed to make this distinction, including the literature on the utility monster.

However, it's still perfectly valid to talk about using utilitarianism to construct social utility functions (e.g., those to encode into a set of community laws), and in that context the utility monster makes sense.

Utilitarianism, and all ethical systems, are usually discussed with the flawed assumption that there is one single proper ethical algorithm, which, once discovered, should be chosen by society and implemented by every individual. (CEV is based on the converse of this assumption: that you can use a personal utility function, or the average of many personal utility functions, as a social utility function.)

Replies from: Jack, Juno_Watt, DanArmak
comment by Jack · 2013-08-19T13:40:29.091Z · LW(p) · GW(p)

Using that terminology, you're objecting to the more general point that social utility functions shouldn't be confused with personal utility functions. All mainstream discussion of utilitarianism has failed to make this distinction, including the literature on the utility monster.

That's because the mainstream discussion of utilitarianism the normative ethical theory has almost nothing at all to do with the concept of utility in economics.

comment by Juno_Watt · 2013-08-18T23:47:08.610Z · LW(p) · GW(p)

Utilitarianism, and all ethical systems, are usually discussed with the flawed assumption that there is one single proper ethical algorithm, which, once discovered, should be chosen by society and implemented by every individual

That flaw is not obvious to me. But the flaw in anything-goes ethics is.

comment by DanArmak · 2013-08-17T13:05:50.422Z · LW(p) · GW(p)

Using that terminology, you're objecting to the more general point that social utility functions shouldn't be confused with personal utility functions. All mainstream discussion of utilitarianism has failed to make this distinction, including the literature on the utility monster.

I don't doubt that you're right, but I find that stunning. How can this distinction not be made?

In the trivial example Selfish World, everyone assigns greater utility to themselves than to anyone else. That surely doesn't mean utilitarianism is useless - people can still make decisions and trade utilons!

Replies from: Jack
comment by Jack · 2013-08-19T13:38:09.725Z · LW(p) · GW(p)

"Utility" refers a representation of preference over goods and services in economics and decision theory. This usage dates to the late 1940s. It has almost nothing at all to do with the normative theory of utilitarianism which dates to the late 1780s.

As a normative theory is supposed to tell you how you ought to act, saying "oh everyone ought to follow their own utility function" is completely without content. The entire content of the theory is that my utils and your utils are actually the same kind of thing, such that we can combine them one-to-one in a calculation to determine how to act (we want to maximize total utils).

That surely doesn't mean utilitarianism is useless - people can still make decisions and trade utilons!

This isn't utilitarianism. It is ethical egoism as described by economists.
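
A minimal sketch of that distinction, with hypothetical agents and made-up numbers: utilitarianism treats utils as interpersonally comparable and picks the action with the greatest total, while "everyone maximizes their own utility function" need not single out any shared action at all.

```python
# Minimal sketch (illustrative only; hypothetical agents and made-up numbers) of the
# distinction drawn above: utilitarianism aggregates interpersonally comparable utils
# into one shared answer, while "everyone maximizes their own utility function" does not.

utilities = {                      # utilities[agent][action]
    "alice": {"dam": 3, "no_dam": 1},
    "bob":   {"dam": 1, "no_dam": 2},
}
actions = ["dam", "no_dam"]

# Utilitarianism as a normative theory: pick the action maximizing total utils.
social_choice = max(actions, key=lambda a: sum(u[a] for u in utilities.values()))

# Ethical egoism with the economists' notion of utility: each agent maximizes
# their own function; there need not be any shared answer.
egoist_choices = {agent: max(actions, key=lambda a: prefs[a])
                  for agent, prefs in utilities.items()}

print(social_choice)    # 'dam'   (total 4 vs. 3)
print(egoist_choices)   # {'alice': 'dam', 'bob': 'no_dam'}
```

With these particular numbers the two readings come apart: the total favors "dam", while Bob's own function favors "no_dam"; only the aggregated reading is the normative theory the utility monster is aimed at.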

comment by Randaly · 2013-08-17T08:47:51.242Z · LW(p) · GW(p)

The utility monster is a concept created to critique utilitarianism. If you are not a utilitarian, then it is not a criticism of your beliefs. If you need to ask why you should care about another being's utility, and it's a serious rather than a rhetorical question, then you aren't a utilitarian.

comment by Jack · 2013-08-19T14:03:39.710Z · LW(p) · GW(p)

So this comment seems straightforwardly confused about what utilitarianism is. Why is it up this high?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-08-19T15:02:40.735Z · LW(p) · GW(p)

I don't know. Patterns of upvotes and downvotes on LessWrong still mystify me.

You are right; I was, when I wrote the grandparent, confused about what utilitarianism is. Having read the other comment threads on this post, I think the reason is that popular usage of the term "utilitarianism" on this site does not match its usage elsewhere. What I thought utilitarianism was before I started commenting on LessWrong, and what I think utilitarianism is now that I've gotten unconfused, are the same thing (the same silly thing, imo); my interim confusion is more or less described in this thread.

My primary objections to utilitarianism remain the same: intersubjective comparability of utility (I am highly dubious about whether it's possible), disagreement about what sorts of things experience utility in a relevant way (animals? nematodes? thermostats?) and thus ought to be considered in the calculation, divergence of utilitarian conclusions from foundational moral intuitions in non-edge cases, various repugnant conclusions.

As far as the utility monster goes, I think the main issue is that I am really not inclined to grant intersubjective comparability of experienced utility. It just does not seem coherent or meaningful to me to say that some creature, clearly very different from humans, experiences, say, "twice as much" utility at some given moment than a human does. How on earth did we come up with this number? How do we come up with any number in such a case? Forget numbers — how do we even create an ordering of experienced utility between different sorts of creatures?

comment by MugaSofer · 2013-08-18T22:51:52.218Z · LW(p) · GW(p)

Because you care about other agents' utility. Right? That's what the Utility Monster is meant to be an issue with.

comment by PrometheanFaun · 2013-08-18T06:15:10.078Z · LW(p) · GW(p)

In more personal terms, if you fit your utility function to your friends and decide what is best for them based on that, rather than letting them to their own alien utility functions and helping them to get what they really want rather than what you think they should want, you are not a good friend. I say this because if the function you're pushing prohibits me from fulfilling my goals, I will avoid the fuck out of you. I will lie about my intentions. I will not trust you. It doesn't matter if your heart's in the right place.

Replies from: metastable
comment by metastable · 2013-08-18T07:35:11.751Z · LW(p) · GW(p)

fit your utility function to your friends and decide what is best for them based on that, rather than letting them to their own alien utility functions and helping them to get what they really want rather than what you think they should want.

The definition of want here is ambiguous, and that makes this a little hard to parse. How are you defining "want" with respect to "utility function"? Do you mean to make them equivalent?

If by "want" you mean desire in accord with their appropriately calibrated utility functions, then, well, sure. A friend is selfish by any common understanding if he doesn't care about his buddies' needs.

But it seems like you might be saying that he's a bad friend for not helping his friends get what they want regardless of what he thinks they need. While this is one view of friendship, it is not nearly as common, and I can make a strong case against it. Such a view would require that you help addicts continue to use, that you help self-destructive people harm themselves, that you never argue with a friend over a toxic relationship you can see, and that you never really try to convince a friend to try anything he or she doesn't think he or she will like.

I will lie about my intentions. I will not trust you. It doesn't matter if your heart's in the right place.

Sadly, this happens. If you're saying you think it should happen more, okay. But I would consider a friend pretty poor if he or she weren't willing to risk a little alienation because of genuine concern.

Replies from: PrometheanFaun, MugaSofer
comment by PrometheanFaun · 2013-08-18T09:33:52.429Z · LW(p) · GW(p)

I meant the former case; what use are people whose wants don't perfectly align with their utility function? xJ I guess whenever the latter case occurs in my life, that's not really what's happening. The dog thinks it's driving away a threat I don't recognise, when really it's driving away an opportunity it's incapable of recognising. Sometimes it might even be the right thing for them to do, even by my standards, given a lack of information. I still have to manage them like a burdensome dog.

comment by MugaSofer · 2013-08-18T22:53:22.369Z · LW(p) · GW(p)

The definition of want here is ambiguous, and that makes this a little hard to parse. How are you defining "want" with respect to "utility function"? Do you mean to make them equivalent?

If by "want" you mean desire in accord with their appropriately calibrated utility functions, then, well, sure. A friend is selfish by any common understanding if he doesn't care about his buddies' needs.

Assuming that the utility monster is not, somehow, mistaken regarding its wants...

comment by DanielLC · 2013-08-23T02:42:45.103Z · LW(p) · GW(p)

The utility monster is generally given as opposition to hedonistic or preference utilitarianism in particular. It's not an objection to arbitrary utility functions. There's no monster that can be an increasing number of paperclips.

comment by metastable · 2013-08-18T04:30:45.309Z · LW(p) · GW(p)

Most people in time and space have considered it strange to take the well-being of non-humans into account

I think this is wrong in an interesting way: it's an Industrial Age blind spot. Only people who've never hunted or herded and buy their meat wrapped in plastic have never thought about animal welfare. Many indigenous hunting cultures ask forgiveness when taking food animals. Countless cultures have taboos about killing certain animals. Many animal species' names translate to "people of the __." As far as I can tell, all major religions consider wanton cruelty to animals a sin, and have for thousands of years, though obviously, people dispute the definition of cruelty.

Replies from: PhilGoetz, someonewrongonthenet, eurg, SpectrumDT, MugaSofer
comment by PhilGoetz · 2013-08-18T17:19:52.042Z · LW(p) · GW(p)

I kinda think the opposite is true. It's people who live in cities who join PETA. Country folk get acclimatized to commoditizing animals.

I'd like to see a summary of the evidence that many Native Americans actually prayed for forgiveness to animal spirits. There's been a lot of retrospective "reframing" of Native American culture in the past 100 years--go to a pow-wow today and an earnest Native American elder may tell you stories about their great respect for the Earth, but I don't find these stories in 17th thru 19th-century accounts. Praying for forgiveness makes a great story, but you usually hear about it from somebody like James Fenimore Cooper rather than in an ethnographic account. Do contemporary accounts from the Amazon say that tribespeople there do that?

(Regarding the reliability of contemporary Native American accounts: Once I was researching the Cree Indians, and I read an account, circa 1900, by a Cree, boasting that their written language was their own invention and went back generations before the white man came. The next thing I read was an account from around 1860 of a white missionary who had recently learned Cree and invented the written script for it. I may possibly be confusing the Cree with Ojibway, but it was the same language in both stories.)

I'm not aware of any Western religion that says cruelty to animals is a sin. Individual interpretations, maybe, but I'm pretty sure you won't find a word about it in the whole of the Bible. The Anglican church was fine with bear-baiting. I don't think the Catholic church complained about vivisection.

And it's certainly true that tribal cultures gave zero or negative weight to the well-being of competing tribes. Utilitarianism is tricky to apply when you have to periodically kill your neighbors to survive.

In any case, indigenous cultures aren't the ones complaining that utilitarianism leads to utility monsters. The people who've made those arguments do have their own preferred utility monsters.

Replies from: MugaSofer, None, selylindi, metastable
comment by MugaSofer · 2013-08-18T22:46:45.162Z · LW(p) · GW(p)

I kinda think the opposite is true. It's people who live in cities who join PETA. Country folk get acclimatized to commoditizing animals.

This sounds right to me. After all, you don't find plantation owners agitating for the rights of slaves. No, it's people who live far away from actual slaves, meeting the occasional lucky black guy who managed to make it in the city and noting that he seems morally worthy.

Replies from: novalis, Jiro
comment by novalis · 2013-08-19T06:31:15.198Z · LW(p) · GW(p)

Um, what about the actual slaves and ex-slaves?

Replies from: PhilGoetz, MugaSofer
comment by PhilGoetz · 2013-08-19T23:07:23.479Z · LW(p) · GW(p)

In this analogy, they correspond to non-human animals, who have not yet expressed an opinion on the matter.

Replies from: novalis
comment by novalis · 2013-08-20T04:46:43.725Z · LW(p) · GW(p)

You mean, have not yet expressed an opinion in a way that you understand.

Anyway, the fact that slaves and ex-slaves did advocate for the rights of slaves indicates that closeness to a problem does not necessarily lead one to ignore it.

comment by MugaSofer · 2013-08-24T13:18:46.101Z · LW(p) · GW(p)

They did not benefit from slavery, as the plantation owners did.

Sorry, that was meant to be the implication of "plantation owners" - "they're biased", not "anyone who actually met slaves was fine with it.".

comment by Jiro · 2013-08-19T01:27:26.776Z · LW(p) · GW(p)

This makes the claim unfalsifiable. People who work closely with animals are the greatest believers in animal rights? Obviously animals should have rights, since they're the ones who know the best. People who work closely with animals believe in animal rights the least? Obviously animals should have rights, since people who work closely with animals are rationalizing it away like slaveholders and the people with the least contact with animals are the most objective. No matter what happens, that "proves" that the people who talk about animal rights are the ones we should listen to.

Replies from: PhilGoetz, MugaSofer
comment by PhilGoetz · 2013-08-19T23:09:01.841Z · LW(p) · GW(p)

I could make equally-valid stories up to come to the opposite conclusion: People who work closely with animals are the greatest believers in animal rights? Obviously they are prejudiced by their close association. People who work closely with animals believe in animal rights the least? Obviously they're the ones who know best.

Replies from: Bayeslisk
comment by Bayeslisk · 2013-08-21T12:32:09.134Z · LW(p) · GW(p)

If you can explain everything, you can't explain anything.

comment by MugaSofer · 2013-08-24T13:35:39.028Z · LW(p) · GW(p)

There are two axes here - knowledge and bias. Those who own farms are most biased, but also most knowledgeable. Those who own farms but don't work on them are both biased and ignorant, so I would predict they are most in favour of farming. Those who are ignorant, but only benefit indirectly - the city dwellers - I would predict higher variance, since it may prove convenient for various reasons to be against it. And finally, the knowledgeable and who benefit only slightly; I would predict that the more knowledge, the more likely that it outweighed the bias.

Of course, I already know these to be true in both cases, pretty much. (Can anyone think of a third example to test these predictions on?) But in general, I would expect large amounts of bias to outweigh knowledge - power corrupts - and low amounts of bias to be eventually overcome by the evidence of nastiness. That's just human nature (or my model of it), and slavery is just a handy analogy where stuff lined up much the same way.

Replies from: Jiro
comment by Jiro · 2013-08-24T17:45:33.329Z · LW(p) · GW(p)

This argument doesn't help you. The problem is that the original (implied) claim (that the positions of city-dwellers and farmers happen because vegetarianism is good but people oppose it for irrational reasons) is unfalsifiable: if city-dwellers favor it and farmers oppose it, that happens because vegetarianism is good; if city-dwellers oppose it and farmers favor it, that still happens because vegetarianism is good.

Your explanation in terms of two axes is not wrong, but that explanation implies that the positions of farmers and city-dwellers can go either way regardless of whether vegetarianism is good. In other words, your explanation doesn't save the original claim, and in fact demolishes it instead.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-26T15:09:33.680Z · LW(p) · GW(p)

This argument doesn't help you. The problem is that the original (implied) claim (that the positions of city-dwellers and farmers happen because vegetarianism is good but people oppose it for irrational reasons) is unfalsifiable: if city-dwellers favor it and farmers oppose it, that happens because vegetarianism is good; if city-dwellers oppose it and farmers favor it, that still happens because vegetarianism is good.

Your explanation in terms of two axes is not wrong, but that explanation implies that the positions of farmers and city-dwellers can go either way regardless of whether vegetarianism is good.

What? No. Where are you getting that from?

In other words, your explanation doesn't save the original claim, and in fact demolishes it instead.

Which original claim? I just pointed out that you have to take bias into account.

comment by [deleted] · 2013-08-19T01:59:35.725Z · LW(p) · GW(p)

I kinda think the opposite is true. It's people who live in cities who join PETA.

No, it goes both ways. It's only people who live in cities who can either completely ignore animal welfare or go to the other wacky extreme, rather than realizing what is involved in using animals for raw material for things and understanding that some kind of arrangement has to be made and trying to make it the best one possible.

comment by selylindi · 2013-08-28T14:55:18.183Z · LW(p) · GW(p)

I'm not aware of any Western religion that says cruelty to animals is a sin.

FWIW I'll provide some institutional references:

The current Catechism of the Catholic Church section 2418 reads, in part: "It is contrary to human dignity to cause animals to suffer or die needlessly." The 1908 Catholic Encyclopedia goes into more detail.

I also searched for statements by the largest Protestant denominations. I found nothing by the EKD. The SBC doesn't take official positions but the Humane Society publishes a PDF presenting Baptist thinking that is favorable to animals.

The United Synagogue of Conservative Judaism website has lots of minor references to animal welfare. One specific example is that they appear to endorse the Humane Farm Animal Care Standards.

The largest Muslim organization that I found reference to, the Nahdlatul Ulama, does not appear to have any official stance on treatment of animals.

comment by metastable · 2013-08-18T18:58:34.090Z · LW(p) · GW(p)

It's people who live in cities who join PETA.

The developed world is thoroughly urbanized. Des Moines is as far from animals as Manhattan. I think what you mean is that a certain politique ascendant on both coasts is much more likely to purchase animal rights as an expansion pack. Which is not to pre-judge the add-on, but to say it has very little to do with the size of your skyscrapers.

That said, I'm not disputing at all that modern agribusiness commodifies animals and that many of today's farmers and ranchers are pretty insulated from the things they eat.

There are many accounts of prayers to animals. One of the best-attested is of the Ainu prayers to the bears they worship (and kill.)

I'm not aware of any Western religion that says cruelty to animals is a sin

Well, that does exclude Hinduism, Jainism, and Buddhism, which famously do have animal ethics. But even if we're just talking the western religions, then yeah, they do, too.

Without getting into a nasty debate involving proof-texting and what Atheists say the Bible says versus what Theists say the Bible says: if you go ask a few questions in the pertinent parts of Stack Exchange of Muslim, Roman Catholic, Protestant, Eastern Orthodox, and Orthodox Jewish thinkers, I guarantee they will answer back that wanton cruelty to animals is wrong. And the same would be true if you started reading random imams, theologians, patriarchs, and pastors.

Individual interpretations, maybe

Unfortunately, there is no possible answer to this.

The Anglican Church was fine with bear-baiting. I don't think the Catholic Church complained about vivisection.

While the first and loudest opposition to cock-fighting and bear-baiting came from Puritans and Methodists, outside the Church of England's mainstream, these people were indisputably Anglicans at the beginning. And a voice of conscience from the margins of the culture is very common, and usually just means that the center of the culture has been captured by self-interest.

Catholic leaders were present at the beginning of the anti-vivisection movement.

it's certainly true that tribal cultures gave zero or negative weight to the well-being of competing tribes.

If this were true, tribes would be in constant total war, which is actually a foreign concept to most tribal societies. Read Napoleon Chagnon again. They kill out of self-interest, and out of revenge, but it's not constant and it's not something they feel awesome about.

Replies from: Jiro, PhilGoetz
comment by Jiro · 2013-08-18T19:35:21.627Z · LW(p) · GW(p)

Unfortunately, there is no possible answer to this.

Of course there is. Not all statements in religious holy books require the same amount of interpretation. If the various holy books said "thou shalt not be cruel to animals" using fairly direct language, that would be an answer to that. Problem is, they don't.

If this were true, tribes would be in constant total war

That doesn't follow. I grant zero weight to the well-being of clothes, but that doesn't mean I go around destroying my clothes and setting department stores on fire. Granting zero weight to something doesn't imply wanting to destroy it, and even granting negative weight to it only means wanting to destroy it insofar as destroying it doesn't make something else worse that you do care about (such as risking death to your own tribesmen in the war.)

Also, I wonder how many of the cultures who pray to the spirit of the animal also pray to the spirit of plants, rocks, the sun, or other things that even vegetarians don't think have any rights.

Replies from: metastable
comment by metastable · 2013-08-18T22:43:23.860Z · LW(p) · GW(p)

A minimal investment of time would convince anybody willing to be convinced that at the very least there are many doctrinal authorities on record in every large strain of western monotheism against cruelty to animals, and that these authorities adduce evidence from ancient holy texts to support their pronouncements. Feel free to disagree with Aquinas, eastern patriarchs, a large body of hadiths, and many rabbinical rulings about the faiths they represent. There is a hermeneutical constellation of belief systems that posits texts speaking for themselves without any interpretation and announces that meanings are clear to the newcomer, or outsider, or even the barely literate, in ways they were never clear to bodies of scholars who gave their lives to the study of the same texts. I'm not sure you want to be in that constellation. That constellation is Fundamentalism, though to be fair to the actual fundamentalists, they don't seem to be amenable to animal bloodsports at all.

I grant zero weight to the well-being of clothes, but that doesn't mean I go around destroying my clothes

Clothes aren't a threat to ambush you, and aren't eating tapirs you could eat. I assume you would burn them if you feared ambush or starvation.

doesn't make something else worse that you do care about (such as risking death to your own tribesmen in war)

Total war doesn't mean you can't be tactical in your approach, obviously. Dissembling and biding time are smart.

What I mean about the tribes being in constant total war is that since, as was pointed out, they are in competition for resources with neighboring tribes, they would kill neighbors whenever they thought they could get away with it if they attached zero utility to these people's survival. And we see that's not the case, not at all. Hunter-gatherers trade, they intermarry, they feast together, they form friendships and alliances between tribes, they do a bunch of things that would be socially impossible if there were not any empathy at all. Sometimes they betray and murder. But by no means all the time. Napoleon Chagnon's accounts of the Yanomamo, where most of the recent stuff about violent stone-agers comes from, are quite clear that elders intervene to stop axe fights sometimes, and that the Yanomamo are mostly just terrified by the violence around them.

What we know from the psych side is that empathy appears to be basic in humans. Our researchers would have to be pretty consistently wrong about something very large if Stone Age people, just because they were Stone Age, were incapable of empathy with people outside their immediate kin group.

I wonder how many of the cultures who pray to the spirit of the animal also pray to the spirit of the plants, rocks, the sun, or other things that even vegetarians don't think have any rights.

Yeah, this is pretty interesting to me, too. I suspect, though, that a lot of people into deep ecology and Christian environmentalism and similar forms of environmentalism have...analogous?...attitudes toward the parts of nature that lack nervous systems. Not inside the rationalist/hedonic calculus/Peter Singer/utilitarian communities, probably, because there's so much emphasis on pleasure and pain there. But it wouldn't surprise me terribly if the "expanding circle of concern" eventually encompassed or re-encompassed things like trees and rivers.

Replies from: Jiro, MugaSofer
comment by Jiro · 2013-08-19T01:14:24.828Z · LW(p) · GW(p)

there are many doctrinal authorities on record in every large strain of western monotheism against cruelty to animals

Which means that many doctrinal authorities are capable of making stuff up.

While most religions' tenets require some interpretation of their holy books, there are degrees of this. Some claims made by religions come from their holy books in a fairly direct and straightforward way. Others are claimed to come from their holy books but in fact are the result of contrived interpretation. Religious animal cruelty laws fall in the second category. The holy books do not support laws about animal cruelty in the same way that they support "thou shalt not commit adultery".

Furthermore, even those contrived laws don't generally claim it's cruel to eat animals. Bringing up the fact that religions oppose animal cruelty is like pointing out that every religion and culture has rules about sexual immorality, and therefore we should oppose some particular type of sexual immorality that you don't like.

they are in competition for resources with neighboring tribes, they would kill neighbors whenever they thought they could get away with it if they attached zero utility to these people's survival.

During much of history, most cultures that knew Jews attached zero or negative utility to them, but pogroms only happened every so often. They didn't just kill all the Jews until the Nazi era.

What we know from the psych side is that empathy appears to be basic in humans.

Anthropomorphizing is also pretty basic to humans; that's why the Eliza program convinces people.

But it wouldn't surprise me terribly if the "expanding circle of concern" eventually encompassed or re-encompassed things like trees and rivers.

But you're not following the implications of this. The idea that primitive cultures respect the spirit of animals was brought up to show that taking the well-being of animals into account is normal. If the same primitive people respect the spirit of things whose well-being we clearly should not take into account, such as vegetables, it doesn't support the point you brought it up to support.

Replies from: dspeyer, metastable, metastable, Richard_Kennaway, MugaSofer
comment by dspeyer · 2013-08-19T06:07:23.646Z · LW(p) · GW(p)

The holy books do not support laws about animal cruelty in the same way that they support "thou shalt not commit adultery".

IIRC, the requirements for humane slaughter are spelled out in great detail in the Mishnah.

comment by metastable · 2013-08-19T02:53:51.953Z · LW(p) · GW(p)

Which means that many doctrinal authorities are capable of making stuff up.

Friend, I'm assuming you believe all/most of religion is made up anyway, right? I mean, you might think some of it was made up sincerely and some was made up cynically. But you know with an extraordinarily high degree of certainty it's all made up. Right? So who cares who made it up. It's there. Some people take it seriously.

It doesn't threaten non-theism at all to concede that religions define their own interpretations and belief systems. This concession is actually the bread and butter of non-theism. Really the only person who gets to contest that is the theist with an alternate interpretation, because he can appeal to a higher authority.

Even though I said I didn't want to sling scripture, and I really don't: why don't you muzzle the ox that treadeth out the grain? Why were the fifth and sixth days of creation declared good? Why was man created on the same day as the beasts of the field? Why was man originally given plants to eat, not flesh? Why was man specifically forbidden to eat "the life" of the animal? Why did you have to rest beasts of burden on the Sabbath? Why couldn't you disturb mother birds on their eggs? Why did fallen beasts of burden have to be helped up? Why were the animals saved with Noah during the flood? Why doesn't God forget sparrows? Why does God feed the birds of the air? Why is it that animals only become carnivorous after the exit from Eden? What does it mean that the lion will lie down with the lamb and that a little child shall lead them? Why are humans constantly portrayed as animals in scriptural metaphor?

Now, I totally believe you have answers for all these questions that acknowledge the scriptural references but manage to discredit their supposed connection to any sort of authorial concern for animal welfare or the environment. The problem is, that's not enough. You have to show that your answers are the ones that audiences have understood and adopted over centuries. That will be difficult. It certainly appears that St. Francis and St. Augustine and St. Aquinas and Cardinal Manning and Tolkien and John Paul II disagree with you, and I'm inclined to say that their readings carry more popular weight than yours.

But you're not following the implications of this.

Oh, no, I get it. Respect for nature != concern for the pain of creatures with nervous systems. Spiritual environmentalism is nothing like utilitarian environmentalism. I just don't care about that very much. I am much more interested in whether some secular environmentalists will eventually develop secular justifications for assigning "rights" or something very like that to aspects of the environment that lack nervous systems. Probably not worth chasing that rabbit, tho.

Replies from: Jiro
comment by Jiro · 2013-08-19T08:16:57.493Z · LW(p) · GW(p)

Even though I said I didn't want to sling scripture, and I really don't: why...

That's a cheat that is commonly used by creationists who come up with lists of 100 and 200 arguments for creationism. The trick? Make a list containing a lot of very low quality arguments in the knowledge that it's long enough that no one person will have the patience (or sometimes the knowledge) to properly refute every single one. Then latch on to whichever ones got the least thorough response.

It's not hard to point out the flaws in your examples. For instance, Noah did save the animals, but he's saving them as resources--because if he doesn't, there won't be any animals--not as an anti-cruelty rule. If God also commanded that he take some seeds, would you then have claimed that he was concerned about cruelty to seeds? And notice that he takes seven pairs of clean animals so that he can make animal sacrifices.

But no matter which example I refute, you'd just point to another I haven't refuted. And I'm not going to do every single one.

Replies from: metastable
comment by metastable · 2013-08-19T11:50:56.780Z · LW(p) · GW(p)

Like I said, I really am sure you can refute these! That is beside the point. I doubt very much you can show that your refutations are what people actually believe about the texts.

I am not arguing the text is true. I am not even arguing that a certain interpretation of the text is correct. I am pointing out that people believe certain interpretations of the text.

This is not like arguing with William Lane Craig about creationism. This is like trying to tell William Lane Craig that nobody believes in creationism.

We may have reached the point of diminishing returns. Arguments are soldiers. Mine need a vacation. Enjoy your day.

Replies from: Jiro
comment by Jiro · 2013-08-19T17:03:48.045Z · LW(p) · GW(p)

I doubt very much you can show that your refutations are what people actually believe about the texts.

I would be very surprised if any major religion claims that Noah had to take the animals on the ark because not taking them would be cruelty to animals. In other words, yes, my refutation is what people believe about the texts. Except I'm not going to bother going through 13 refutations.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-29T20:29:14.890Z · LW(p) · GW(p)

I'm not going to bother going through 13 refutations.

How about, say, three? I could probably do three myself, but they would suck because I'm biased. And I'd be genuinely interested to hear it.

(This is completely beside the point, at this stage, so I can understand why you may not want to bother.)

comment by metastable · 2013-08-19T11:39:23.690Z · LW(p) · GW(p)

Mmmm. Clicked the wrong reply button. Sorry....

comment by Richard_Kennaway · 2013-08-19T07:20:14.350Z · LW(p) · GW(p)

If the same primitive people respect the spirit of things whose well-being we clearly should not take into account, such as vegetables

It's not that clear to Swiss politicians.

"The dignity of plants".

That was written by one of the committee that produced this official Swiss government publication. (PDF)

comment by MugaSofer · 2013-08-29T19:17:49.106Z · LW(p) · GW(p)

Furthermore, even those contrived laws don't generally claim it's cruel to eat animals. Bringing up the fact that religions oppose animal cruelty is like pointing out that every religion and culture has rules about sexual immorality, and therefore we should oppose some particular type of sexual immorality that you don't like.

Actually, he's responding to PG, who claimed that no major religion is against cruelty to animals ... presumably implying that this is a modern aberration? Or something? Regardless, it was he who claimed (in your analogy) that since no religion is against "sexual immorality", then clearly modern dislike of rape is not a part of basic human ethics.

During much of history, most cultures that knew Jews attached zero or negative utility to them, but pogroms only happened every so often. They didn't just kill all the Jews until the Nazi era.

They demonized them. That is not the same as attaching "zero or negative utility" except in the most dire of cases (which, admittedly, crop up with some regularity.)

comment by MugaSofer · 2013-08-29T19:11:23.537Z · LW(p) · GW(p)

There is a hermeneutical constellation of belief systems that posits texts speaking for themselves without any interpretation and announces that meanings are clear to the newcomer, or outsider, or even the barely literate, in ways they were never clear to bodies of scholars who gave their lives to the study of the same texts. I'm not sure you want to be in that constellation. That constellation is Fundamentalism, though to be fair to the actual fundamentalists, they don't seem to be amenable to animal bloodsports at all.

To be fair to this idea, it can be useful to approach things from a fresh perspective. Scholars have had longer to develop the more ... complex misinterpretations.

The trouble springs up when you don't check the, y'know, facts. Like the original text your copy was translated from, say. Or the culture it was written in. Or logic.

(Or, in the opposite case, declaring that your once-over the text has revealed what believers "really" believe.)

Replies from: metastable
comment by metastable · 2013-08-29T19:35:34.408Z · LW(p) · GW(p)

Or, in the opposite case, declaring that your once-over the text has revealed what believers "really" believe.

So very much this.

comment by PhilGoetz · 2013-08-19T23:17:27.210Z · LW(p) · GW(p)

They kill out of self-interest, and out of revenge, but it's not constant and it's not something they feel awesome about.

Most Native American cultures felt awesome about killing enemies in battle. I don't know if it's universal, but it was very common for warriors to be highly-respected in tribal cultures, in proportion to how many people they'd killed.

I don't think you can assert that it's not constant, either. Look at the conflict between Hopi & Navajo, Cree & Blackfoot. Similar to the Palestinian/Israeli conflict, and I'd call that constant.

Modern all-out, extended-duration war is a foreign concept to such groups, but "this tribe is our enemy and we will kill any of them found unprotected" and "let us all get together and annihilate this troublesome neighbor village and take their women" is not.

Replies from: metastable
comment by metastable · 2013-08-20T00:00:44.139Z · LW(p) · GW(p)

Most Native American cultures felt awesome about killing enemies in battle.

Weren't you just saying there's a lot of mythologizing of the NA past?

Did you know there are specific Navajo rituals designed to cleanse warriors returning from war before they re-enter the community, to prevent their violence from infecting the community? And that these rituals have counterparts in cultures around the world, and are of interest to modern trauma researchers?

It is helpful to separate desirable status as a successful warrior from desire for war. It is very common for very successful warriors to prefer peace, in tribal societies as in modern. That's not to say young guys don't want to make their bones and old guys don't see the need to take care of business: it's to say that only a totally deranged person kills without any barriers, and very few people are totally deranged.

It's interesting that you adduce the Palestinian/Israeli conflict in this context. I am very certain that the majority of Israelis and Palestinians are capable of empathy for each other. This doesn't mean they wouldn't shell each other or commit atrocities. But you're arguing a hard line: that tribes attach "zero or negative" utility to each other's continued existence.

Replies from: Lumifer
comment by Lumifer · 2013-08-20T00:51:20.051Z · LW(p) · GW(p)

tribes attach "zero or negative" utility to each other's continued existence.

This needs modifiers: it looks to me that with "always" added this is wrong, but with "sometimes" added this is correct.

comment by someonewrongonthenet · 2013-08-18T17:19:35.058Z · LW(p) · GW(p)

Only people who've never hunted or herded and buy their meat wrapped in plastic have never thought about animal welfare

Farmers are in contact with animals even more often than hunter gatherers. But have you ever seen the whole "asking for forgiveness" thing in an agricultural society? (not rhetorical)

Replies from: metastable
comment by metastable · 2013-08-18T23:15:33.243Z · LW(p) · GW(p)

No, though I've seen small-scale family farms ensure that their stock live pleasantly and are slaughtered humanely, and I myself have tried to make sure food animals I've killed died quickly and painlessly.

Mileage will vary. There are a lot of true horror stories about farming and ranching, and they're not all from industrial feedlots.

comment by eurg · 2013-08-18T15:34:33.576Z · LW(p) · GW(p)

The asking for forgiveness may indicate that people somehow thought of the act as killing, but that did not change their actions. Humans have had a distinctive influence on the local megafauna wherever they showed up. A cynic might write that "humans did not really care about the well-being of ...". We for instance also have taboos on eating dogs and cats, but the last time I checked it was not because we value their lives, but because they are cute. It's mostly organized lying to feel OK.

Replies from: RomanDavis, MugaSofer, metastable, someonewrongonthenet
comment by RomanDavis · 2013-08-19T15:20:46.436Z · LW(p) · GW(p)

What? Of course people care about the lives of dogs and cats.

Anecdotal Evidence: All the people I've seen cry over the death of a dog. Not just children, either. I've seen grown men and women grieve for months over the death of a beloved dog.

Even if their sole reason for caring is that they're cute, that wouldn't invalidate the fact that they care. There's some amount of "organized lying" in most social interactions; that doesn't imply that people don't care about anything. That's silliness, or puts such a high burden of proof / high standard of caring (even when most humans can talk about degrees of caring more or less) as to be both outside the realm of what normal people talk about and totally unfalsifiable.

comment by MugaSofer · 2013-08-18T21:42:22.355Z · LW(p) · GW(p)

We for instance also have taboos on eating dogs and cats, but the last time I checked it was not because we value their lives, but because they are cute.

More because we regularly socialize with them. People are not, generally, in favour of killing just the ugly pets.

(And, this is purely anecdotal, but viewing animals more as less-intelligent individuals with a personality and so on and less as fleshy automatons seems to correlate with pets.)

comment by metastable · 2013-08-18T16:36:02.945Z · LW(p) · GW(p)

I guess I'm not cynical?

People have to eat. It's consistent to feel that animal life has value but to know that your tribe needs meat, and to prioritize the second over the first. The fact that you value an animal life doesn't mean you value it above all else. And the fact that humans wiped out the Giant Sloth/Mammoth/whatever only necessitates that we were really good hunters. It says nothing about our motivations.

Also, I think you would find it really hard to disentangle cuteness from empathy, if that's what you're trying to do.

comment by someonewrongonthenet · 2013-08-18T17:11:16.931Z · LW(p) · GW(p)

Asking for forgiveness is usually a hunter-gatherer thing. Before agriculture brought starchy grains and dairy on the scene, animal fat was the major calorie source, and vegetarianism would have meant only fruits, nuts, leafy vegetables, and tubers. And you'd need a lot of tubers in order for this to be a sufficiently calorie-rich diet.

Replies from: eurg
comment by eurg · 2013-08-18T17:49:32.750Z · LW(p) · GW(p)

You are right, of course. I did not want to imply that a vegan diet would have been feasible until recent advances.

comment by SpectrumDT · 2020-02-17T10:40:15.140Z · LW(p) · GW(p)
I think this is wrong in an interesting way: it's an Industrial Age blind spot.

I think "most people in time and space" have lived in the industrial age. Am I wrong?

comment by MugaSofer · 2013-08-18T21:39:05.184Z · LW(p) · GW(p)

Most cultures, I understand, base moral worth on a "great chain of being" model, with gods above heroes above mortals, and mortals above those **s in the next village above smart animals above dumb animals ... you probably get the picture.

comment by private_messaging · 2013-08-18T11:22:13.226Z · LW(p) · GW(p)

The actual reality does not have high level objects such as nematodes or humans.

Before one could even consider the utility of a human's (or a nematode's) existence, one has got to have a function that would somehow process a bunch of laws of physics and the state of a region of space, and tell us how happy/unhappy that region of space feels, what its value is, and so on.

What would be the properties of that function? Well, for one thing, the utility of a region of space would not generally be equal to the sum of the utilities of its parts, for the obvious reason that your head has bigger utility when it hasn't been diced into perfect cubic blocks and then rearranged like a Rubik's cube.

This function could, then, be applied to a larger region of space containing nematodes and humans, and process it in some way which would clearly differ from any variety of arithmetic utilitarianism that adds or averages utilities of nematodes and humans, because, as we have established above, the function is not additive over regions of spacetime, and nematodes and humans are just regions of spacetime with specific stuff inside.

What I imagine that function would do is identify the existence of particular computational structures of interest in the region of space, and there are many such structures inside a human head that do not exist in any region of space occupied by nematodes, which have a much smaller set of structures, with extra nematodes not adding any new structures (unlike humans who, due to distinct memories and the different ways their brains are arranged, do add new structures, linearly up to a fairly large number).

So even a very large region of spacetime full of nematodes and one human can have its utility decreased a lot more by random rearrangements of the atoms (quarks, whatever the bottom level is - does not matter) constituting a human than by random rearrangements of the atoms constituting nematodes.

edit: that is, as long as there are enough nematodes to cover the entire nematode experience space (which is quite small), increases in their number won't add to the computational structure of the whole region. Something that's not true for people, up to a really very large number of people.
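
A toy sketch of that saturation point, under the hypothetical assumption that a region's value is the number of distinct structures it realizes: duplicate nematodes stop adding value almost immediately, while distinct humans keep adding new structures. The `region_value` and `human` helpers and all the numbers below are made up purely for illustration.

```python
# Toy model (hypothetical, for illustration only): value a region by the set of
# distinct "computational structures" present in it, not by summing per-organism utilities.

def region_value(organisms):
    """Number of distinct structures realized anywhere in the region."""
    structures = set()
    for org in organisms:          # each organism is modeled as a set of structure ids
        structures |= org
    return len(structures)

# Every nematode realizes (roughly) the same small repertoire of structures.
nematode = frozenset(range(10))

# Each human shares many structures with other humans but also contributes unique
# ones (distinct memories, differently arranged brains), modeled here as a unique block.
def human(i):
    shared = frozenset(range(10, 60))
    unique = frozenset(range(1000 + 50 * i, 1000 + 50 * (i + 1)))
    return shared | unique

print(region_value([nematode] * 10))                # 10
print(region_value([nematode] * 1_000_000))         # still 10: extra nematodes add nothing
print(region_value([human(i) for i in range(10)]))  # 550: each extra human still adds structures
```

Whether structure-counting is the right value function is exactly what is in dispute in this thread; the sketch only shows why such a function would not be additive over organisms.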

Replies from: PhilGoetz, MugaSofer
comment by PhilGoetz · 2015-02-09T20:59:56.359Z · LW(p) · GW(p)

The actual reality does not have high level objects such as nematodes or humans.

Um... yes, it does. "Reality" doesn't conceptualize of them, but I, the agent analyzing the situation, do. I will have some function that looks at the underlying reality and partitions it into objects, and some other function that computes utility over those objects. These functions could be composed to give one big function from physics to utility. But that would be, epistemologically, backwards.

Before one could even consider the utility of a human's (or a nematode's) existence, one has got to have a function that would somehow process a bunch of laws of physics and the state of a region of space, and tell us how happy/unhappy that region of space feels, what its value is, and so on.

No. Utility is a thing agents have. "Utility theory" is a thing you use to compute an agent's desired action; it is therefore a thing that only intelligent agents have. Space doesn't have utility. To quote (perhaps unfortunately) Žižek, space is literally the stupidest thing there is.

Replies from: private_messaging
comment by private_messaging · 2015-02-14T23:50:45.935Z · LW(p) · GW(p)

Before one could even consider the utility of a human's (or a nematode's) existence

No. Utility is a thing agents have.

'one' in that case refers to an agent who's trying to value feelings that physical systems have.

I think there's some linguistic confusion here. As an agent valuing that there's no enormous torture camp set up in a region of space, I'd need to have a utility function over space, which gives the utility of that space.

Replies from: PhilGoetz
comment by PhilGoetz · 2015-02-16T02:12:14.180Z · LW(p) · GW(p)

'one' in that case refers to an agent who's trying to value feelings that physical systems have.

I see what you're doing, then. I'm thinking of a real-life limited agent like me, who has little idea how the inside of a nematode or human works. I have a model of each, and I make a guess at how to weigh them in my utility function based on observations of them. You're thinking of an ideal agent that has a universal utility function that applies to arbitrary reality.

Still, though, the function is at least as likely to start its evaluation top-down (partitioning the world into objects) as bottom-up.

I don't understand your overall point. It sounds to me like you're taking a long way around to agreeing with me, yet phrasing it as if you disagreed.

Replies from: dxu
comment by dxu · 2015-02-16T02:22:20.680Z · LW(p) · GW(p)

I think (and private_messaging should feel free to correct me if I'm wrong) that what private_messaging is saying is, in effect, that before you can assign utilities to objects or worldstates or whatever, you've got to be able to recognize those objects/worldstates/whatever. I may value "humans", but what is a "human"? Since the actual reality doesn't have a "human" as an ontologically fundamental category--it simply computes the behavior of particles according to the laws of physics--the definition of the "human" which I assign utility to must be given by me. I'm not going to get the definition of a "human" from the universe itself.

Replies from: PhilGoetz
comment by PhilGoetz · 2015-02-16T03:02:07.440Z · LW(p) · GW(p)

Okay. I don't understand his point, then. That doesn't seem relevant to what I was saying.

comment by MugaSofer · 2013-08-18T21:27:37.207Z · LW(p) · GW(p)

What would be the properties of that function? Well, for one thing, the utility of a region of space would not generally be equal to the sum of the utilities of its parts, for the obvious reason that your head has bigger utility when it hasn't been diced into perfect cubic blocks and then rearranged like a Rubik's cube.

I'm not entirely sure what the point of this comment was, but in that case, surely the problem occurs when said chunks die? I mean, if they magically kept working the same way, linking telepathically with the other chunks and processing information perfectly well, I don't see why they wouldn't be just as valuable, albeit rather grisly looking.

Replies from: private_messaging
comment by private_messaging · 2013-08-19T06:54:10.666Z · LW(p) · GW(p)

Finding out that the chunks will die (given the laws of physics as they are) is something that the function in question has got to do. Likewise, finding out that they won't die with some magic, but would die if they weren't rearranged and the magic was applied (portal-ing the blood all over the place).

You just keep jumping to making a utility that is computed from the labels you already assign to the world.

edit: one could also subdivide it into very small regions of space, and note that you can't compute any kind of utility of the whole by going over every piece in isolation and then summing.

edit2: to be exact, I am counter-exampling f(ab) = f(a) + f(b) (where "ab" is a concatenated with b): we can have f(ab) != f(ba) even though f(a) + f(b) = f(b) + f(a).

More broadly, mathematics₁ has been very useful in science, and so ethicists try to use mathematics₂. Where mathematics₁ is a serious discipline where one states assumptions and progresses formally, and mathematics₂ is "there must be arithmetical operations involved" or even "it is some kind of Elvish". (While mathematics₁ doesn't get you very far because we can't make many assumptions.)
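
A compact restatement of that counterexample, as a sketch rather than anything taken from the comment itself:

```latex
% Sketch of the non-additivity counterexample (a restatement, not the commenter's text).
% If utility were additive over concatenated regions, permuting the regions could
% never change it:
\[
  f(xy) = f(x) + f(y) \ \text{for all regions } x, y
  \;\Longrightarrow\;
  f(ab) = f(a) + f(b) = f(b) + f(a) = f(ba).
\]
% But an intact head (ab) and the same matter diced and permuted (ba) should not
% receive the same value, so:
\[
  f(ab) \neq f(ba) \;\Longrightarrow\; f \ \text{is not additive over regions.}
\]
```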

comment by Paul Crowley (ciphergoth) · 2013-08-17T07:01:36.773Z · LW(p) · GW(p)

I broadly agree - it seems to me a plausible and desirable outcome of FAI that most of the utility of the future comes from a single super-mind made of all the material it can possibly gather in the Universe, rather than from a community of human-sized individuals.

The sort of utility monster I worry about is one that we might weigh more not because it is actually more sophisticated or otherwise of greater intrinsic moral weight, but simply one that feels more strongly.

Replies from: PhilGoetz
comment by PhilGoetz · 2013-08-17T16:13:08.816Z · LW(p) · GW(p)

Well, nematodes might already feel more strongly. If you have a total of 302 neurons, and 15 of them signal "YUM!" when you bite into a really tasty protozoan, that might be pure bliss.

Replies from: Eliezer_Yudkowsky, scaphandre
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-18T03:39:53.757Z · LW(p) · GW(p)

I'd bet against this at pretty extreme odds, if only there were some way to settle the bet.

Replies from: byrnema, MugaSofer, PhilGoetz
comment by byrnema · 2013-08-18T04:00:13.761Z · LW(p) · GW(p)

if only there were some way to settle the bet.

I don't think, in general, there could be a way to compare 'strength of feeling', etc. across two separate systems. For example, all you can do is measure the behavior of the organism, but that organism is always going to do the maximum that it can do to maximize its utility function. All you would be doing is measuring the organism's resources for optimizing its utility function, and determining the strength of its preference for any one thing relative to its other preferences only.

Replies from: ESRogs
comment by ESRogs · 2013-08-18T09:01:11.499Z · LW(p) · GW(p)

It seems plausible to me that there is more to 'bliss' than one's level of reaction to a stimulus. When my car is low on gas a warning light comes on, and in response to having its tank filled, the light goes off. Despite the ease of analogy, I think it's fair to describe the difference between this and my own feelings of want and satiety as a difference in kind, and not just degree.

Not that a machine couldn't experience human-like desires, but to be properly called human-like it would need to have something analogous to our sorts of internal representations of ourselves. I don't think the nematode's 302 neurons encode that.

Replies from: byrnema
comment by byrnema · 2013-08-18T17:11:00.031Z · LW(p) · GW(p)

Yes, I agree with you (and likely this was Eliezer's point) that nematodes likely don't have something that a specialized scientist (sort of like a linguist that compares types of feelings across systems) would identify as analogous to 'bliss'. But this would be because their systems aren't complex enough to have that particular feeling, not because they don't feel strongly enough.

... A car's gas gauge must feel very strongly that it either has enough gas or doesn't have enough gas, but the feeling isn't very interesting. (And I don't mind if the specialist mentioned above wants to put a threshold on how interesting a feeling must be to merit being a 'feeling'.)

Replies from: ESRogs
comment by ESRogs · 2013-08-18T18:00:03.068Z · LW(p) · GW(p)

Going back and re-reading ciphergoth's comment above, I now see why you're emphasizing strength of feeling. What you said makes sense, point conceded.

Replies from: PhilGoetz
comment by PhilGoetz · 2013-08-19T22:59:54.338Z · LW(p) · GW(p)

I expect that, as we learn enough about neuroscience to begin to answer this, we'll substitute "feels more strongly" with some other criteria on which humans come out definitively on top.

Replies from: byrnema
comment by byrnema · 2013-08-19T23:25:13.462Z · LW(p) · GW(p)

I agree, and not just because it's us deciding the rubric. I believe an objective sentient bystander would agree that there is some (important) measure by which we come out ahead. Meaning our utility needs a greater weight in the equation.

That is, if they are global utility maximizers. Incidentally, where does that assumption come from? It seems kind of strange. Are these utility maximizers just so social and empathetic they want everybody to be happy?

Replies from: None
comment by [deleted] · 2013-08-20T01:13:45.428Z · LW(p) · GW(p)

Are these utility maximizers just so social and empathetic they want everybody to be happy?

You could imagine the perfect global utility maximizer being created by self-modification of beings, or built by beings who desire such a maximizer.

Why would they want that in the first place? Prosocial emotions (e.g. caused by cooperation and kin selection instincts + altruistic memes) could be a starting point.

Another possible path is philosophical self-reflection. A self-modelling agent could model their utility as resulting from the valuation of mental states, e.g. a hedonist who thinks about what value is to them and concludes that what matters is the (un-)pleasantness of their brain states.

From there, you only need a few philosophical assumptions to generalize:

1) Mental states are time-local, the psychological present lasts maybe up to three seconds only.

2) Our selves are not immutable metaphysical entities, but physical system states that are being transformed considerably (from fetus to toddler to preteen to adult to mentally disabled).

3) Other beings share the crucial system properties (brains with (un-)pleasantness); we even have common ancestors passing on the blueprints.

4) Hypothetically, though improbably, any being could be transformed into any other being in a gradual process by speculative technology (e.g. nanotechnology could transform me into you, or a human into a chimp, or a pig etc.) without breaking life functions.

5) An agent might decide that it shouldn't matter how a system state came about, only what properties the system state has, e.g. it shouldn't matter to me whether you are a future version of me transformed by speculative technology starting with my current state, but only what properties your system state has (e.g. (un-)pleasantness)

I'm not claiming this is enough to beat everyday psychological egoism, but it could be enough for a philosopher-system to desire self-modification or the creation of an artificial global utility maximizer.

comment by MugaSofer · 2013-08-18T21:43:49.853Z · LW(p) · GW(p)

Come, now, it's hardly untestable. You can pay him if the FAI kills everyone to tile the universe with nematodes.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-08-19T07:26:04.634Z · LW(p) · GW(p)

That seems doable, if you trick the AI into tearing apart a simulation before it figures out it's in one.

But how do you test whether the AI weighted the nematodes so highly because their qualia are extra phenomenologically vivid, and not because their qualia are extra phenomenologically clipperiffic?

comment by PhilGoetz · 2013-08-19T22:57:01.530Z · LW(p) · GW(p)

I suspect we'd have to know a lot more about neuroscience and consciousness to define "feel more strongly" precisely enough for the question to have an answer. I also suspect that, if the answer doesn't come out the way we want it to, we'll substitute another question in its place that does, in the time-honored practice of claiming that universal, objective agenthood is defined by whatever scale humans win on.

comment by scaphandre · 2013-08-27T16:32:58.479Z · LW(p) · GW(p)

Do you really think that is at all likely that a nematode might be capable of feeling more informed life-satisfaction than a human?

comment by Qiaochu_Yuan · 2013-08-17T04:22:36.344Z · LW(p) · GW(p)

Most people in time and space have considered it strange to take the well-being of non-humans into account.

I don't think this is true. As gwern's The Narrowing Circle argues, major historical exceptions to this include gods and dead ancestors.

Replies from: somervta
comment by somervta · 2013-08-17T04:53:36.084Z · LW(p) · GW(p)

dead ancestors may not count as 'non-human', depending on your metric.

Replies from: Randaly, PhilGoetz, fubarobfusco
comment by Randaly · 2013-08-17T09:00:09.423Z · LW(p) · GW(p)

Same for most gods, given the degree to which they were anthropomorphized. (In fact, the Bhagavad-Gita talks about how Hindus need to anthropomorphize in order to give "personal loving devotion to Lord Krishna". [Quote from a commentary])

Replies from: MugaSofer
comment by MugaSofer · 2013-08-18T21:45:20.624Z · LW(p) · GW(p)

... which would imply that the reality is not anthropomorphic but empathising with it is a good thing.

comment by PhilGoetz · 2013-08-17T21:58:36.398Z · LW(p) · GW(p)

Yep, ancestors are dead humans, gods are humans in the same way Batman is human. (I mean, Thor is one of the Avengers. I think that gives it away.) I wanted to say "animals" without implying that humans aren't animals.

I remember reading about a Native American culture that had a designated Speaker for the Wolves who was supposed to represent them in meetings, but I can't remember any details. Could be bogus.

Replies from: metastable
comment by metastable · 2013-08-18T04:23:58.853Z · LW(p) · GW(p)

There are many indigenous cultures (with some hunters still around today) who ask forgiveness upon killing food animals. And history's full of bear cults, and animal species with names that translate into "people of the _," and taboos on harming various animals. I think the notion that humans have mostly only cared for the concerns of humans is the product of an industrial-age blind spot: only people who've never hunted or husbanded, and eat their meat from the slaughterhouse, have never thought about animal welfare.

comment by fubarobfusco · 2013-08-17T08:55:05.043Z · LW(p) · GW(p)

Dead ancestors are not minds that experience anything.

Replies from: Randaly, Document
comment by Randaly · 2013-08-17T09:42:21.070Z · LW(p) · GW(p)

Ancestor worshippers- who are the people whose opinions we're discussing- would disagree. Wikipedia:

Veneration of the dead or ancestor reverence is based on the belief that the dead have a continued existence...the goal of ancestor veneration is to ensure the ancestors' continued well-being

Replies from: fubarobfusco
comment by fubarobfusco · 2013-08-17T17:34:18.982Z · LW(p) · GW(p)

Sure, but there's a fact of the matter: It's not that we don't value the experiences or well-being of dead ancestors; it's that we hold that they do not have any experiences or well-being — or, at least, none that we can affect with the consequences of our actions. (For instance, Christians who believe in heaven consider their dead ancestors to be beyond suffering and mortal concerns; that's kind of the point of heaven.)

The "expanding circle" thesis notices the increasing concern in Western societies for the experiences had by, e.g., black people. The "narrowing circle" thesis notices the decreasing concern for experiences had by dead ancestors and gods.

The former is a difference of sentiment or values, whereas the latter is a difference of factual belief.

The former is a matter of "ought"; the latter of "is".

Slaveholders did not hold the propositional beliefs, "People's experiences are morally significant, but slaves do not have experiences." They did not value the experiences of all people. Their moral upbringing specifically instructed them to not value the experiences of slaves; or to regard the suffering of slaves as the appointed (and thus morally correct) lot in life of slaves; or to regard the experiences of slaves as less important than the continuity of the social order and economy which were supported by slavery.

Replies from: MugaSofer, Randaly
comment by MugaSofer · 2013-08-18T21:48:57.010Z · LW(p) · GW(p)

Slaveholders did not hold the propositional beliefs, "People's experiences are morally significant, but slaves do not have experiences." They did not value the experiences of all people.

You know, I think you're wrong about that. They talked about how savages needed to be ruled by civilised man, and the like, rather than claiming that they were the same as us but who gives a damn?

comment by Randaly · 2013-08-18T03:44:23.923Z · LW(p) · GW(p)

I am fairly confident that I haven't understood your point, as it doesn't seem to me to address the discussion above. My interpretation of your post is that it claims that people engaged in ancestor worship were factually wrong about whether their dead ancestors still counted as humans- e.g. whether or not they experienced anything. However, this is irrelevant to the question under discussion- of whether or not ancestor worship is a counter-example to the claim that most people throughout history haven't cared about non-humans. All that matters for this claim is whether or not most ancestor-worshippers thought that their ancestors qualified as people.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2013-08-23T13:52:47.932Z · LW(p) · GW(p)

I think the point that fubarobfusco was trying to make with that was a partial refutation of the "narrowing circle" thesis that says we care less about people not like us today than in the past. S/he was trying to say, "we haven't stopped caring about anyone we used to care about, we've just stopped believing in them. If we still believed our dead ancestors had feelings, we'd still care about them."

You're correct that all that matters for the question "did ancestor-worshippers care for non-humans" is whether the ancestor-worshippers thought their ancestors were human.

comment by Document · 2013-08-17T19:51:37.352Z · LW(p) · GW(p)

Therefore, by substitution, we don't experience anything in response to knowledge about things that will happen after we're dead?

Replies from: fubarobfusco
comment by fubarobfusco · 2013-08-17T20:25:30.529Z · LW(p) · GW(p)

What? Sorry, I don't see the connection.

(It's my impression that the belief of ancestor-worshipers is not that their actions today fulfill the past living desires of now-dead ancestors, but that their actions today affect the experiences of their dead ancestors today.)

Replies from: Document
comment by Document · 2013-08-17T21:49:07.387Z · LW(p) · GW(p)

I haven't read the article by gwern that Qiaochu linked, so I didn't know that it referred specifically to ancestor worship rather than the more general (believed) evaporation of respect for ancestors' desires as a terminal value.

comment by whateverfor · 2013-08-20T00:54:15.732Z · LW(p) · GW(p)

I've always believed having an issue with utility monsters is either a lack of imagination or a bad definition of utility (if your definition of utility is "happiness" then a utility monster seems grotesque, but that's because your definition of utility is narrow and lousy).

We don't even need to stretch to create a utility monster. Imagine there's a spacecraft that's been damaged in deep space. There are four survivors: three are badly wounded and one is relatively unharmed. There's enough air for four humans to survive one day or one human to survive four days. The closest rescue ship is three days away. After assessing the situation and verifying the air supply, the three wounded crew members sacrifice themselves so the one is rescued.
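
A quick check of the arithmetic in this scenario, using only the numbers given above (a throwaway sketch):

```python
AIR_PERSON_DAYS = 4   # enough air for four humans for one day, or one human for four days
DAYS_TO_RESCUE = 3    # the closest rescue ship is three days away

for survivors in range(1, 5):
    days_of_air = AIR_PERSON_DAYS / survivors
    outcome = "rescued" if days_of_air >= DAYS_TO_RESCUE else "air runs out before rescue"
    print(f"{survivors} left alive: {days_of_air:.2f} days of air -> {outcome}")
```

Only the single-survivor case still has someone breathing when help arrives; with two or more alive, everyone dies.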

To quote Nozick from wikipedia: "Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility." That is exactly what happens on the spaceship, but most people here would find it pretty reasonable. A real utility monster would look more like that than some super-happy alien.

Replies from: Lumifer
comment by Lumifer · 2013-08-20T01:04:54.600Z · LW(p) · GW(p)

Imagine there's a spacecraft that's been damaged in deep space. There's four survivors, three are badly wounded and one is relatively unharmed. There's enough air for four humans to survive one day or one human to survive four days. The closest rescue ship is three days away. After assessing the situation and verifying the air supply, the three wounded crewmembers sacrifice themselves so the one is rescued.

Not exactly like that... :-)

http://en.wikipedia.org/wiki/R_v_Dudley_and_Stephens

comment by aelephant · 2013-08-17T00:07:03.146Z · LW(p) · GW(p)

When you're talking about the utility of squirrels, what exactly are you calculating? How much you personally value squirrels? How do you measure that? If it is just a thought experiment ("I would pay $1 per squirrel to prevent their deaths"), how do you know that you aren't just lying to yourself, and that if it really came down to it you would actually pay? Maybe we can only really calculate utility after the fact by looking at what people do rather than what they say.

Replies from: DSimon, MugaSofer
comment by DSimon · 2013-08-20T20:45:15.957Z · LW(p) · GW(p)

I may not actually want to pay $1 per squirrel, but if I still want to want to, then that's as significant a part of my ethics as my desire to avoid being a wire-head, even though once I tried it I would almost certainly never want to stop.

Replies from: aelephant
comment by aelephant · 2013-08-20T23:20:37.066Z · LW(p) · GW(p)

I would rather observe you & see what you do to avoid becoming a wirehead. I'd put saying you want to avoid becoming a wirehead & saying you want to want to pay to save the squirrels in the same camp -- totally unprovable at this point in time. In the future maybe we can scan your brain & see which of your stated preferences you are likely to act on; that'd be extremely cool, especially if we could scan politicians during their campaigns.

comment by MugaSofer · 2013-08-18T22:50:46.929Z · LW(p) · GW(p)

How do you know those people aren't still "lying to themselves"? Humans are not known for being perfect, bias-free reasoners.

Maybe we can only really calculate utility after the fact by looking at what perfect Bayesian agents do rather than mere mortals.

comment by Shmi (shminux) · 2013-08-18T16:29:46.139Z · LW(p) · GW(p)

I am mildly consequentialist, but not a utilitarian (and not in the closet about it, unlike many pretend-utilitarians here), precisely because any utilitarianism runs into a repugnant conclusion of one form or another. That said, it seems that the utility-monster type RC is addressed by negative utilitarians, who emphasize reduction in suffering over maximizing pleasure.

Replies from: novalis
comment by novalis · 2013-08-19T06:29:35.819Z · LW(p) · GW(p)

Isn't there an equivalent negative utility monster, who is really in a ferociously large amount of pain right now?

Replies from: AndHisHorse, shminux, Micha_Eichmann
comment by AndHisHorse · 2013-08-20T14:17:38.331Z · LW(p) · GW(p)

Perhaps, but if your utility scale can actually become negative (rather than simply hitting zero), the solution of assisted suicide is fairly simple and cheap to implement.

comment by Shmi (shminux) · 2013-08-19T16:07:56.387Z · LW(p) · GW(p)

Killing it reduces the overall suffering, since its quality of life is well below the "barely worth living" level, with no hope of improvement.

Replies from: DanielLC, novalis
comment by DanielLC · 2013-08-23T02:38:30.934Z · LW(p) · GW(p)

What if it can't be easily killed?

comment by novalis · 2013-08-20T04:42:40.724Z · LW(p) · GW(p)

That doesn't work for preference utilitarians (it would strongly prefer to remain alive).

comment by Micha_Eichmann · 2013-08-20T13:53:46.415Z · LW(p) · GW(p)

The purely negative utility monster (whether it is in a ferociously large amount of pain or not), which by definition also has no diminishing returns in its utility function, just hits zero pain at some point. Until it is in pain again, it is simply not part of the equation. The difference is this: if your goal is to minimize X, you can't go on forever without diminishing returns (but with diminishing returns, you can), whereas if your goal is to maximize Y, you can go on forever with or without diminishing returns.

edit: It depends on how the function is defined. Above, I used allocated resources vs. utility (utility = relief from suffering). But a negative utility monster would be possible if its condition got automatically worse and if it had no diminishing returns of (e.g.) suffering per unit of pain while all the other beings did.

comment by Salemicus · 2013-08-22T13:49:50.800Z · LW(p) · GW(p)

Well, isn't the central end of humanity (nay all sentient life) contentment and ease?

Seems like a strange assumption. Indeed, the reverse is often argued, that the central end of life is to be constantly facing challenges, to never be content, that we should seek out not ease but difficulty.

"How dull it is to pause, to make an end, To rust unburnished, not to shine in use!"

Moreover, even if your assertion were true for humans, and even all mammals, we can imagine non-mammalian sentient life.

Replies from: namismybabe
comment by namismybabe · 2013-08-25T19:22:30.002Z · LW(p) · GW(p)

So yeah... all mammals do not avoid painful situations and seek contented ones? If one kicks a dog, does the dog actually like that, or would it not eventually fight against it? Isn't that part of the definition of sentience? Your point essentially validates moving outside of one's comfort zone. However, I doubt many who advocate doing that would say humans don't by design seek situations of ease over situations of discomfort. Moving outside one's comfort zone via, say, learning to ride a bike is different from avoiding a stressful work or home environment.

As for non-mammals, well as humans are mammals, then I'm using our taxonomical order as a base. I don't know if the same applies to birds, reptiles or amphibians.

comment by Jiro · 2013-08-16T21:48:31.448Z · LW(p) · GW(p)

Saying that a utility monster means a "creature that is somehow more capable of experiencing pleasure (or positive utility) than all others combined" is vague, because it doesn't mean a creature that's just more capable, it's a creature that's a specific kind of "more capable". Just because human beings can experience more utility from the same actions than nematodes can doesn't make humans into utility monsters, because that's the wrong kind of "more capable". According to your own link, a utility monster is not susceptible to diminishing marginal returns, which doesn't seem to describe humans and certainly isn't a distinction between humans and nematodes.

Replies from: PhilGoetz, AlexMennen
comment by PhilGoetz · 2013-08-16T22:04:57.373Z · LW(p) · GW(p)

The qualification that a utility monster is not susceptible to diminishing marginal returns is made only because they're still assuming utility is measured in something like dollars, which has diminishing marginal returns, rather than units of utility, which do not. Removing that qualification doesn't banish the utility monster. The important point is that the utility monster's utility is much larger than anybody else's.

Replies from: Jiro, novalis, Decius
comment by Jiro · 2013-08-17T01:23:48.508Z · LW(p) · GW(p)

Removing that qualification does banish the utility monster. If the utility monster gets greater utility from dollars than someone else (let's say nematodes), but is still subject to diminishing marginal returns (at a slower rate than the nematodes), then the utilitarian result is to start giving dollars to the utility monster until its utility-per-dollar has diminished enough to match the starting utility-per-dollar of the nematodes, and then to give to both the utility monster and the nematodes in proportions that keep their utility-per-dollar equal. The "utility monster" has ceased to be a utility monster because it no longer gets everything. It still gets more, of course, but that's the equivalent of deciding that the starving person gets the food before the full person.
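
A minimal sketch of that allocation rule (the utility functions and numbers are mine, purely for illustration): hand out each dollar to whoever currently has the higher marginal utility. The monster below values dollars ten times as much as the nematode but still has diminishing returns, so once its marginal utility falls to the nematode's starting level, the remaining dollars get split roughly 10:1 rather than monopolized.

```python
import math

# Hypothetical diminishing-returns utility: u(d) = scale * log(1 + d), so the
# marginal utility of the next dollar, u(d + 1) - u(d), shrinks as d grows.
def marginal_utility(scale, dollars_so_far):
    return scale * (math.log(2 + dollars_so_far) - math.log(1 + dollars_so_far))

scales = {"monster": 10.0, "nematode": 1.0}   # the monster values a dollar 10x as much, initially
holdings = {"monster": 0, "nematode": 0}

for _ in range(1000):                          # allocate 1000 dollars, one at a time
    # Give the next dollar to whichever agent gains the most utility from it.
    recipient = max(holdings, key=lambda a: marginal_utility(scales[a], holdings[a]))
    holdings[recipient] += 1

print(holdings)   # the monster ends up with roughly 10x the nematode's share, not everything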

Replies from: satt, SpectrumDT, TGM
comment by satt · 2013-08-17T01:39:44.596Z · LW(p) · GW(p)

The "utility monster" has ceased to be a utility monster because it no longer gets everything. It still gets more, of course, but that's the equivalent of deciding that the starving person gets the food before the full person.

This sounds like it could be almost as repugnant as a utility monster that gets literally everything, depending on precisely how much "more" we're talking about.

Edit: if I were the kind of person who found utility monsters repugnant, that is. I'd already dissolved the "OMG what if utility monsters??" problem in my own mind by reasoning that the repugnant feeling comes from representing utility monsters as black boxes, stripping away all of the features of theirs that make it intuitively obvious why they generate more utility from the same inputs. Put another way, the things that make real-life utility monsters "utility monsters" are exactly the things that make us fail to recognize them as utility monsters. When a parent values their child's continued existence far more than their own, we don't call the child a "utility monster" if the parent sacrifices themselves to save their child, even though that's exactly the child's role in that situation.

Replies from: PhilGoetz
comment by PhilGoetz · 2013-08-17T15:34:11.501Z · LW(p) · GW(p)

Re. "black box", nice way of putting it. This post just gives an example where we can look inside the black box.

comment by SpectrumDT · 2020-02-17T20:53:48.837Z · LW(p) · GW(p)
The "utility monster" has ceased to be a utility monster because it no longer gets everything.

Can this be resolved by adding more monsters? I.e., instead of having just one utility monster on Earth, we could have a million or even 6 billion monsters (as many as there are humans). This would allow the monsters to fully benefit from consuming "everything" or at least close enough to "everything" to raise the dilemma.

Replies from: jimrandomh
comment by jimrandomh · 2020-02-29T23:47:51.475Z · LW(p) · GW(p)

Definitionally speaking, "making each human into a utility monster" is the same as not having any utility monsters at all; utility-monsterdom is a relative property of one agent with respect to the other agents in the population.

Replies from: SpectrumDT
comment by SpectrumDT · 2021-02-04T19:50:15.152Z · LW(p) · GW(p)

There are other agents in the population than humans.

(I apologize for the late reply. I didn't check my notifications.)

comment by TGM · 2013-08-18T22:29:35.762Z · LW(p) · GW(p)

I want to criticise either the idea that diminishing returns is important, or, at least, that dollar values make sense for talking about them.

Suppose we have a monster who likes to eat. Each serving of food is just as tasty as the previous, but he still gets diminishing returns on the dollar, because the marginal cost of the servings goes up.

We also have nematodes, who like to eat, but not as much. They never get a look in, because as the monster eats, they also suffer diminished utilons per dollar.

So the monster is serving the 'purpose' of the utility monster, but still has diminishing returns on the dollar. If we redefine diminishing returns to be on something else, I'm not sure it could be well justified or immune to this issue.

And, although humans are not an example of this sort of monster, the human race certainly is.
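
A quick numerical sketch of that setup (the figures are invented): every serving is worth the same fixed number of utilons to the monster, but the k-th serving costs k dollars, so its utilons-per-dollar still diminish even though its enjoyment per serving does not.

```python
SERVING_UTILITY = 10.0          # every serving is just as tasty as the last

def serving_cost(k):
    return float(k)             # hypothetical: the k-th serving costs k dollars

for k in range(1, 6):
    per_dollar = SERVING_UTILITY / serving_cost(k)
    print(f"serving {k}: {per_dollar:.2f} utilons per dollar")
```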

comment by novalis · 2013-08-17T00:23:41.390Z · LW(p) · GW(p)

Presumably, that's diminishing marginal returns relative to dollars input. In other words, "You can only drink 30 or 40 glasses of beer a day, no matter how rich you are."

comment by Decius · 2013-08-17T04:17:21.167Z · LW(p) · GW(p)

Units of utility are non-fungible, right?

Replies from: DanArmak
comment by DanArmak · 2013-08-17T13:10:57.272Z · LW(p) · GW(p)

They surely are fungible. The whole point of using utility functions in the first place, is that I can't convert apples into children saved, but I can convert utilons gained from eating apples into utilons gained from saving children, because both are just real numbers.

Replies from: Decius
comment by Decius · 2013-08-18T00:20:26.687Z · LW(p) · GW(p)

But you can't take utilons from the apple tree and give them to the children.

I guess I meant 'transferable' instead of 'fungible', or perhaps something else. The utility monster being associated with more utility does not require that the rest of the world be associated with less.

Replies from: AlexMennen, DanArmak
comment by AlexMennen · 2013-08-19T21:30:21.653Z · LW(p) · GW(p)

But you can't take utilons from the apple tree and give them to the children.

Right; I can't give you one of my utilons directly.

The utility monster being associated with more utility does not require that the rest of the world be associated with less.

If the world is already in a Pareto-optimal state, then changing it to benefit the utility monster would require making someone else worse off.

Replies from: Decius
comment by Decius · 2013-08-20T15:08:53.639Z · LW(p) · GW(p)

What does the Pareto-optimal state look like if a Utility Monster exists?

Replies from: AlexMennen
comment by AlexMennen · 2013-08-20T16:58:27.773Z · LW(p) · GW(p)

Pareto-optimal means that no one can be made better off without making someone else worse off. It doesn't care about how much better off it can make someone, so the existence of a Utility Monster makes no difference to which states are Pareto-optimal. Pareto-optimal could range all the way from giving all the resources to the Utility Monster to giving nothing to the Utility Monster.

So my comment was fairly trivial from the definition of Pareto-optimal; I was just trying to emphasize that there generally are a wide range of Pareto-optimal states; you can't just increase the utility for one person arbitrarily high without trading it off against someone else's utility; you can start, but eventually you hit a Pareto-optimal state, and then you've got tradeoffs to make.
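
A toy check of that range (the utility numbers are invented): with a fixed pool of a resource and utilities that strictly increase in it, every allocation that uses up the whole pool is Pareto-optimal, from "the monster gets everything" to "the monster gets nothing"; choosing among them is where the aggregation question comes in.

```python
TOTAL = 10  # units of a scarce resource to split between two agents

def utilities(monster_share):
    # Hypothetical utility functions: the monster gets 100x as much per unit.
    return 100 * monster_share, TOTAL - monster_share

def dominated(monster_share):
    # An allocation is dominated if some other allocation makes one agent
    # strictly better off without making the other worse off.
    u_m, u_o = utilities(monster_share)
    for alt in range(TOTAL + 1):
        v_m, v_o = utilities(alt)
        if v_m >= u_m and v_o >= u_o and (v_m, v_o) != (u_m, u_o):
            return True
    return False

print([s for s in range(TOTAL + 1) if not dominated(s)])
# Every full allocation, 0 through 10, is Pareto-optimal, however lopsided
# the utility gains are.
```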

Replies from: Decius
comment by Decius · 2013-08-22T16:11:31.757Z · LW(p) · GW(p)

It looks like you are taking some kind of sum across all agents as the utility of the world; that is incompatible with the basic assumption of the utility monster as I understand it.

The utility monster is something such that as it controls scarce resources, the marginal utility that it contributes to the world as a whole (per additional resource that it controls/consumes) increases. (With everything else having a decreasing marginal return).

The argument is that such a creature would receive all of the resources, and that is bad; the counterargument is that given the described setup, giving the utility monster all of the resources is good, and the fact that we intuit that it is bad is a problem with our intuition and not the math.

Replies from: AlexMennen
comment by AlexMennen · 2013-08-22T17:32:01.671Z · LW(p) · GW(p)

As far as I can tell, the definition involving increasing marginal returns was invented by some wikipedian. Wikipedia does not cite a source for that definition. According to every other source, a utility monster is an agent who gets more utility from having resources than anyone else gets from having resources, regardless of how the utility monster's marginal value of resources changes with the amount of resources already controlled.

Either way, the argument for giving the utility monster all the resources comes from maximizing the sum of the utilities of each agent. I'm not sure what you mean by this being incompatible with the assumption of the utility monster.

Edit: Also, rereading my previous comment, I notice that I was actually not taking a sum across the utilities of all agents. Pareto-optimal does not mean maximizing such a sum. It means a state such that it is impossible to make anyone better off without making anyone else worse off.

Replies from: Decius
comment by Decius · 2013-08-22T17:40:24.086Z · LW(p) · GW(p)

A +utility outcome for one agent is incomparable to a -utility for a different agent on the object layer. It is impossible to compare how much the utility monster gains from security to how much the peasant loses from lack of autonomy without taking a third point- this third viewpoint becomes the only agent in the meta-level (or, if there are multiple agents in the first meta, it goes up again, until there is only one agent at a particular level of meta).

Replies from: AlexMennen
comment by AlexMennen · 2013-08-22T18:20:21.074Z · LW(p) · GW(p)

This is true; there is no canonical way to aggregate utilities. An agent can only be a utility monster with respect to some scheme for comparing utilities between agents.

Replies from: Decius
comment by Decius · 2013-08-23T06:59:41.620Z · LW(p) · GW(p)

Such a scheme is only measuring its own utility of different states of the universe; a utility monster is not a problem for such a scheme/agent, any more than preventing 3^^^3 people being tortured for a million years at zero cost would be a problem.

Replies from: AlexMennen
comment by AlexMennen · 2013-08-23T17:57:08.493Z · LW(p) · GW(p)

I'm not quite sure what you mean. If you mean that any agent that cares disproportionately about a utility monster would not regret that it cares disproportionately about a utility monster, then that is true. However, if humans propose some method of aggregating their utilities, and then they notice that in practice, their procedure disproportionately favors one of them at the expense of the others, the others would likely complain that it was not a fair aggregation. So a utility monster could be a problem.

Replies from: Decius
comment by Decius · 2013-08-24T11:23:08.744Z · LW(p) · GW(p)

If humans propose some method of aggregating their utilities, and later notice that following that method is non-optimal, it is because the method they proposed does not match their actual values.

That's a characteristic of the method, not of the world.

Replies from: AlexMennen
comment by AlexMennen · 2013-08-24T16:08:40.094Z · LW(p) · GW(p)

That's right; being a utility monster is only with respect to an aggregation. However, the concept was invented and first talked about by people who thought there was a canonical aggregation, and as an unfortunate result, the dependency on the aggregation is typically not mentioned in the definition.

Replies from: Decius
comment by Decius · 2013-08-27T00:31:34.320Z · LW(p) · GW(p)

I can't resolve paradoxes that come up with regard to people who have internally inconsistent value systems; were they afraid that the canonical aggregation was such that they personally were left out, in a manner that proved they were bad (because they preferred outcomes where they did better than they did at the global maximum of the canonical aggregation)?

comment by DanArmak · 2013-08-18T10:44:22.864Z · LW(p) · GW(p)

'Fungible' means you don't care where you get your utilons from, as long as it's the same number of utilons.

Replies from: Decius
comment by Decius · 2013-08-18T17:13:48.899Z · LW(p) · GW(p)

Yes, I used the wrong term. For 'fungible' to be cogent in reference to a utility monster, utilons would have to be transferable.

comment by AlexMennen · 2013-08-19T21:23:47.677Z · LW(p) · GW(p)

Wikipedia does not cite a source for its claim that utility monsters have anything to do with non-decreasing marginal utility, nor does the claim make any sense at all. Does anyone know if some wikipedian just made this up, or whether it was published somewhere previously? I've also asked about this on the wikipedia article's talk page. If no one can find any prior source for the statement, I will edit it.

comment by dimension10 · 2016-01-19T13:29:52.324Z · LW(p) · GW(p)

I discussed this recently elsewhere: https://utilitarian.quora.com/Utility-monsters-arent-we-all I'm glad I'm not the only one who's thought of this.

comment by scaphandre · 2013-08-27T16:19:29.400Z · LW(p) · GW(p)

Nice post.

I disagree with the premise that humans are utility monsters, but I see what you are getting at.

I'm a little wary of the concept of a utility monster: it is easy to imagine and debate, but I don't think it is immediately realistic.

I want my considerations of utility to be aware of possible future outcomes. If we imagine a concrete scenario like Zach's fantastic slave pyramid builders for an increasingly happy man, it seems obvious that there is something psychotic about an individual who could be made more happy by the senseless toil of other conscious beings. That is not the desired outcome of implementing their naive 'utilitarian ethics computer' genie.

I agree that that situation is repugnant. I think this is created from a poor implementation of their 'utilitarian ethics computer'.

Here's why humans in general are not repugnant: We are not using the suffering of others to increase solely our own happiness. At least not directly, deliberately and relentlessly.

I do agree that sometimes the life-satisfaction of squirrels is cut short by humans building dams (to follow your example).

Sometimes this could be morally right, sometimes not. Humans are imperfect utilitarians because we do a crappy job of counting the potential benefit and costs of all beings involved, with appropriate weights.

I don't see humans as repugnant monsters because I don't give humans infinitely more weight in this scaling.

comment by [deleted] · 2013-08-23T08:13:29.043Z · LW(p) · GW(p)

This is fucking brilliant.

comment by [deleted] · 2015-10-19T09:12:50.941Z · LW(p) · GW(p)

One man's utility monster is another man's neighbour down the street named Bob, whom you see when you go for walks sometimes.

comment by teageegeepea · 2013-08-25T03:23:56.467Z · LW(p) · GW(p)

The human vs animal issue makes more sense if we focus not on "utility" but "asskicking".

comment by AndHisHorse · 2013-08-17T02:20:23.862Z · LW(p) · GW(p)

I do not see a contradiction in claiming that a) utility monsters do not exist and b) under utilitarianism, it is correct to kill an arbitrarily large number of nematodes to save one human.

The solution to this issue is to reject the idea of a continuous scale of "utility capability", under which nematodes can feel a tiny amount of utility, humans can feel a moderate amount, and some superhuman utility monster can feel a tremendous amount. Rather, we can (and, I believe, should) reduce it to two classes: agents and objects.

An agent, such as a human or a utility monster, is a creature which is sentient and judged by society to be worthy of moral consideration, including it in the social utility function. All agents are considered equal, with their individual utility units converted to some social standard. For example, Agent Alpha receives 100 Alpha-Utils from the average day, where Agent Beta receives 200 Beta-Utils from the average day. Both of these are converted into Society-Utils - let's say 10 Society-Utils - making an exchange rate of 10 Alpha:Society and 20 Beta:Society.

This is similar to how currency is exchanged. Assuming some reference point, perhaps an event which society deems is equally valuable for all agents (that is, society values it equally regardless of which agent experiences it), there exists a Utility Economy, in which there exists a comparative advantage; Agent Alpha and Agent Beta serve each other, producing more Society-Utils by trading than either could alone.
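
A small sketch of that conversion step, using just the numbers from the example above (the function and dictionary names are mine): each agent's utils are divided by a per-agent exchange rate fixed by society, after which everything sits on the common Society-Util scale.

```python
# Exchange rates set by society: how many agent-utils equal one Society-Util.
EXCHANGE_RATE = {"Alpha": 10.0, "Beta": 20.0}   # from the example above

def to_society_utils(agent, agent_utils):
    return agent_utils / EXCHANGE_RATE[agent]

print(to_society_utils("Alpha", 100))   # 10.0 Society-Utils for Alpha's average day
print(to_society_utils("Beta", 200))    # 10.0 Society-Utils for Beta's average day
```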

Left to the side are objects, such as nematodes. Objects are of value only to the extent to which they feature in an agent's utility function; for the purpose of ethical consideration, we consider objects to have no utility function. Therefore, it would be proper to kill nematodes to save humans - unless the side effects from killing so many nematodes began to threaten more humans than it would save. Similarly, animal protection laws would exist not because of any right of the animal, but rather the strong preferences of humans to avoid animal cruelty. This is consistent with the coexistence of factory farming and animal cruelty laws; humans don't much care about cows, but will fight to defend their pets (and creatures like them).

Of course, to some extent this is passing the buck to the "Utility Economy" to set fair rates, but I believe that a society could cobble together a reasonable exchange in which, for example, nobody's life would be valued trivially.

Replies from: TGM, Decius, SaidAchmiz
comment by TGM · 2013-08-18T22:42:16.190Z · LW(p) · GW(p)

All agents are considered equal,

If I contract a neurodegenerative illness, which will gradually reduce my cognitive function, until I end up in a vegetative state, do I retain agent-ness throughout, or at some point lose equal footing with healthy me in one go? Neither seems a good description of my slow slide from fully human to vegetable.

with their individual utility units converted to some social standard. For example, Agent Alpha receives 100 Alpha-Utils from the average day, where Agent Beta receives 200 Beta-Utils from the average day. Both of these are converted into Society-Utils - let's say 10 Society-Utils - making an exchange rate of 10 Alpha:Society and 20 Beta:Society.

What is an "average day"? My average day probably has greater utility than that of a captive of a sadistic gang...

comment by Decius · 2013-08-17T04:14:30.465Z · LW(p) · GW(p)

That looks like a great foundation for a set of laws, but a poor foundation for a set of ethics.

Replies from: AndHisHorse
comment by AndHisHorse · 2013-08-17T14:28:56.256Z · LW(p) · GW(p)

How so? I view this as an implementation of equality among agents. What makes it ethically repugnant?

Replies from: PhilGoetz, Decius
comment by PhilGoetz · 2013-08-17T14:47:20.361Z · LW(p) · GW(p)

This is a particular instance of the general approach, "I have to assign a number to each of these items, but it's hard and contentious to do, so instead I will give them all zeroes (objects) or ones (agents)." It always increases the total error. The world is not divided into agents and objects; and even if it were, this approach would still increase total error, or at best leave it unchanged, since errors in classification give a larger total error when they are thresholded instead of just left as, say, probabilities.

You should also consider that, when AI is developed, you will become an "object".

This approach doesn't work well even for humans. Very intelligent humans, armed with and experienced with mathematics, large computerized databases, regression analysis, probability and statistics, information theory, dimension reduction, data mining, machine learning, stability analysis, optimization techniques, and a good background in cog sci, biology, & physics, think more differently from average humans around 1600 AD, than average humans in 1600 AD did from dogs. So where do you draw the line?

Replies from: Decius
comment by Decius · 2013-08-18T00:40:35.391Z · LW(p) · GW(p)

Very intelligent humans, ... think more differently from average humans around 1600 AD, than average humans in 1600 AD did from dogs.

Reference? If I had to estimate a prior very quickly, I'd put the odds that humans in 1600 thought more like dogs than like modern humans at about 90M:400 against (given that Euarchontoglires and Laurasiatheria diverged about 90 MYA, while humans in 1600 diverged from modern humans only 400 years ago).

Why do you think that more than half of the change in thinking in the last 90 million years has occurred in the last 400?

Replies from: PhilGoetz
comment by PhilGoetz · 2013-08-18T04:37:24.454Z · LW(p) · GW(p)

Obviously I cannot cite a reference. This is an opinion. I take it you think less than half of the sum total of what has been discovered or learned was learned in the past 400 years? Your priors suggest you assume linear advance in thinking, but hominid cranial enlargement began only 1-2 million years ago. So you must also expect, as a prior, that the difference between humans and chimps is 1/90th - 1/45th of the difference between chimps and dogs. In that case, why exclude chimps from our society?

The maximum speed at which humans today have travelled is about 7 miles per second. Assuming a travel rate of 0 miles per second 4 billion years ago, we do not conclude that bacteria were able to propel themselves 3.5 miles per second 2 billion years ago.

I don't really think there's been a change in humans. I think there are new tools available that help us think better, much like the new machines available that let us move fast.

Replies from: Decius
comment by Decius · 2013-08-18T17:22:17.568Z · LW(p) · GW(p)

You don't believe that hominid cranial enlargement is responsible for more than half of the difference between modern humans and dogs, so why does it matter when it happened?

Suppose that dogs are 50-100 times further away from humans than chimps are. Further suppose that bacteria are more than 100 times further away from humans than dogs are. Why is one of those a reason to include chimps, and the other not a reason to include dogs? (Rocks are more than 100 times as different from humans as fungi are, right?) Rather than use relative closeness, I'm going to assert that absolute distance is important. (If that means that a typical human 400 years ago would not qualify now, I think it says more about them than it does about me; but I don't think that is the case.)

I also danced around and didn't actually say that 90M:400 was the best prior; I said if I needed one quickly it's the one I would use. To refine that number first requires refining the question.

comment by Decius · 2013-08-18T00:30:17.270Z · LW(p) · GW(p)

That it is an implementation of equality among unequal agents. Why is an average day of Agent Alpha the same value as an average day of Agent Beta, and how does Agent Beta determine how much utility Agent Alpha gains from something other than the reference economy?

If we allow the agents to determine their own utility derived from, say, fiat currency, we have instead of a utility economy a financial economy. Everyone gains instrumental value from participating (or they stop participating). Allow precommitment and assume rational, well-informed agents, and the economic system maximizes each individual utility within the possible parameters.

comment by Said Achmiz (SaidAchmiz) · 2013-08-17T02:27:15.714Z · LW(p) · GW(p)

This seems like a pretty sensible account to me. (Does anyone see any obvious flaws?)

This is similar to how currency is exchanged. Assuming some reference point, perhaps an event which society deems is equally valuable for all agents (that is, society values it equally regardless of which agent experiences it), there exists a Utility Economy, in which there exists a comparative advantage; Agent Alpha and Agent Beta serve each other, producing more Society-Utils by trading than either could alone.

Could you explain this a bit more? I'm not sure I understand. (FYI, I know almost nothing about currency exchange.)

Replies from: AndHisHorse
comment by AndHisHorse · 2013-08-17T03:01:16.772Z · LW(p) · GW(p)

I don't know much about it either, but the basic principles I'm trying to transfer are:

a) N different nations have N different currencies. Agents 1 through N have Agent1-Utils, Agent2-Utils...AgentN-Utils.

b) They are able to interact in an international market by setting an exchange rate between their currencies. In this case, we propose the extra step of creating a single societal currency, which would be analogous to a "World Dollar", so that we need only N different conversions (Agent i to Society, i = 1..N) rather than N(N-1)/2 (Agent i to Agent j, i = 1..N, j = i+1..N), and the responsibility to set conversion rates is a societal, rather than individual, responsibility.
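
The bookkeeping saving is easy to check with a two-line sketch (the values of N are arbitrary): one conversion per agent via the common unit, versus one per pair without it.

```python
for n in (3, 5, 10, 100):
    print(f"{n} agents: {n} conversions via a common unit vs {n * (n - 1) // 2} pairwise")
```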

Admittedly, this analogy has its own "utility monster" - a nation which is economically powerful enough to manipulate exchange rates. However, that doesn't quite exist in the "Utility Economy" unless one agent is powerful enough to bend society to their whim, in which case it's not so much a utilitarian society as a dictatorship.