Destroying the Utility Monster—An Alternative Formation of Utility
post by DragonGod · 2017-06-08T12:37:47.371Z · LW · GW · Legacy · 40 comments
NOTE: This post contains LaTeX; it is recommended that you install “TeX the World” (for chromium users), “TeX All the Things” or other TeX/LaTeX extensions to view the post properly.
I am a rational egoist, but that is only because there is no existing political system/social construct I identify with. If there were one I identified with, I would be strongly utilitarian. In all moral thought experiments, I err on the side of utilitarianism, and I'm faithful in my devotion to its tenets. There are several criticisms of utilitarianism, and one of the most common—and most powerful—is the utility monster, which allegedly proves that "utilitarianism is not egalitarian". [1]
For those who may not understand the terms, I shall define them below:
Utilitarianism is an ethical theory that states that the best action is the one that maximizes utility. "Utility" is defined in various ways, usually in terms of the well-being of sentient entities. Jeremy Bentham, the founder of utilitarianism, described utility as the sum of all pleasure that results from an action, minus the suffering of anyone involved in the action. Utilitarianism is a version of consequentialism, which states that the consequences of any action are the only standard of right and wrong. Unlike other forms of consequentialism, such as egoism, utilitarianism considers all interests equally.
[2]
The utility monster is a thought experiment in the study of ethics created by philosopher Robert Nozick in 1974 as a criticism of utilitarianism.
A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.[1] Nozick writes:
“Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose ... the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.”
This thought experiment attempts to show that utilitarianism is not actually egalitarian, even though it appears to be at first glance.
[1]
I first found out about the utility monster a few months ago, and pondered on it for a while, before filing it away. Today, I formalised a system for reasoning about utility that would not only defeat the utility monster, but make utilitarianism more egalitarian. I shall state my system, and then explain each of the points in more detail below.
Dragon’s System:
- All individuals have the same utility system.
- $-1 \le U \le 1$.
- The sum of the utility of an event and its negation is $0$.
- Specifically, the sum total of all positive utilities an individual can derive (for unique events without double counting) is $1$.
- Specifically, the sum total of all negative utilities an individual can derive (for unique events without double counting) is $-1$.
- At any given time, the sum total of an individual's potential utility space is $0$.
- To increase the utility of an event, you have to decrease the utility of its negation.
- To decrease the utility of an event you have to increase the utility of its negation.
- An event and its negation cannot have the same utility unless both are $0$.
- If two events are independent then the utility of both events occurring is the sum of their individual utilities.
Explanation:
- The same system for appropriating utility is applied to all individuals. This is for the purposes of consistency and to be more egalitarian.
- The utility an individual can get from an event is between $-1$ and $1$. To derive the utility an individual gains from any event $E_i$, let the utility of $E_i$ under more traditional systems be $W_i$. Then $U_i = \frac{W_i}{\sum_{k=1}^{n} W_k}$ for all $E_i$ with $W_i > 0$, where the sum runs over the $n$ events with positive raw utility (negative utilities are normalised analogously against the total negative utility). In English: express the positive utility of each event as a fraction of the individual's total positive utility across all possible events (without double counting any utility). A short sketch of this normalisation appears after this list.
- For every event that can occur, there's a corresponding event that represents that event not occurring, called its negation; every event has a negation. If an individual gains positive utility from an event happening, then they must gain equivalent negative utility from the event not happening. The utility they derive from an event and its negation must sum to $0$. Such is only logical: the positive utility you gain from an event happening is equal in magnitude to the negative utility you gain from it not happening.
- This follows from the method of deriving "2" explained above.
- This follows from the method of deriving "2" explained above.
- This follows from "2" and "3".
- This follows from "3".
- This follows from "3".
- This follows from "3".
- This is via intuition. Two events $A$ and $B$ are independent if the utility of $A$ does not depend on the occurrence of $B$ nor does $B$ in any way affect the utility of $A$, and vice versa. If such is true, then to calculate the utility of $A$ and $B$, we need only sum the individual utilities of $A$ and $B$.
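For concreteness, here is a minimal sketch of the normalisation described in the second point above (Python; the function names, event names and raw weights are mine, purely for illustration), with the relevant postulates checked as assertions:

```python
from typing import Dict

def normalise(raw: Dict[str, float]) -> Dict[str, float]:
    """Rescale raw event utilities W_i so that positive utilities sum to 1
    and negative utilities sum to -1 (postulates 2, 4 and 5)."""
    pos_total = sum(w for w in raw.values() if w > 0)
    neg_total = -sum(w for w in raw.values() if w < 0)  # magnitude of the negative mass
    out = {}
    for event, w in raw.items():
        if w > 0:
            out[event] = w / pos_total
        elif w < 0:
            out[event] = w / neg_total
        else:
            out[event] = 0.0
    return out

def negation_utility(u: float) -> float:
    """Postulate 3: an event's negation carries the additive inverse of its utility."""
    return -u

# Illustrative raw utilities for one individual.
person = {"eat_cookie": 1.0, "see_friend": 4.0, "lose_job": -50.0}
u = normalise(person)
assert all(-1.0 <= v <= 1.0 for v in u.values())              # postulate 2
assert abs(sum(v for v in u.values() if v > 0) - 1.0) < 1e-9  # postulate 4
assert abs(sum(v for v in u.values() if v < 0) + 1.0) < 1e-9  # postulate 5
```

Here eating a cookie ends up worth 0.2 and seeing a friend 0.8 of this individual's total positive capacity; the sketch assumes at least one positive and one negative event, and treats the raw weights $W_i$ as given.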
It can be seen that my system can be reduced to postulates 1, 2, 3, 6 and 10; the ten-point system is for the sake of clarity, which always supersedes brevity and eloquence.
If any desire the concise version:
- All individuals have the same utility system.
- $-1 \le U \le 1$.
- The sum of the utility of an event and its negation is $0$.
- At any given time, the sum total of an individual's potential utility space is $0$.
- If two events are independent then the utility of both events occurring is the sum of their individual utilities.
Glossary
Individual: This refers to any sapient entity; generally, this is restricted to humans, but if another conscious life-form (being aware of their own awareness, and capable of conceiving “dubito, ergo cogito, ergo sum—res cogitans”) decided to adopt this system, then it applies to them as well.
Event: Any well-defined outcome from which an individual can derive utility—positive or negative.
Negation: The negation of an event refers to the event not occurring. If event $A$ is the event that I die, then $\neg A$ is the event that I don’t die (i.e. live). If $B$ is the event that I win the lottery, then $\neg B$ is the event that I don’t win the lottery.
Utility Space: The set containing all events from which an individual can possibly derive utility. This set is finite.
Utility Preferences: The mapping of each event in an individual’s utility space to the fractional utility they derive from the event, and the implicit ordering of events according to it.
Assumptions:
Each individual’s utility preferences are unique. No two individuals have the same utility space with the same values for all events therein.
We deal only with the utility space of an individual at a given point in time. For example, an immortal who values their continued existence does not value their existence for eternity with ~1.0 utility, but only their existence for the next time period, and as such an immortal and a mortal may derive the same utility from their continued existence. Once an individual receives units of a resource, their utility space is re-evaluated in light of that. After each event, the utility space is re-evaluated.
The capacity to derive utility (CDU) of any individual is finite; no one is allowed to have infinite CDU. An individual's CDU may be vastly greater than that of several other individuals (a utility monster), but the utility is normalised specifically to deal with such existences. No one has the right to a greater capacity to derive utility than other individuals: we normalise the utility of every individual so that the maximum utility any individual can derive is $1$. This makes the system egalitarian, as every individual is given an equal maximum (and minimum) utility regardless of their CDU (see the toy comparison after these assumptions).
The Utility space of an individual is finite. There are only so many events that you can possibly derive utility from. The death of an individual you do not know about is not an event you can derive utility from (assuming you don’t also find out about their death). Individuals can only be affected (positively or negatively) by a finite number of events.
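To illustrate why this normalisation blunts the utility monster, here is a toy comparison reusing the hypothetical `normalise` helper sketched earlier; the 1-versus-100 cookie weights echo Nozick's example, and the remaining weights are made up:

```python
ordinary_person = {"cookie": 1.0, "shelter": 9.0, "starve": -10.0}
utility_monster = {"cookie": 100.0, "shelter": 900.0, "starve": -1000.0}

for name, raw in [("ordinary person", ordinary_person),
                  ("utility monster", utility_monster)]:
    print(name, normalise(raw)["cookie"])
# Both lines print 0.1: each party values the cookie at the same fraction of
# their own capacity to derive utility, so the monster gains no special claim.
```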
Some Inferences:
A change in an individual’s CDU does not produce a change in normalised utility, unless there’s also a change in their utility preferences.
A change in an individual’s utility preferences is necessary and sufficient to produce a change in their normalised utility.
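(Spelling out the first inference under the fractional normalisation from the explanation above: a pure change in CDU multiplies every raw utility $W_i$ by the same constant $c > 0$, and the constant cancels, $U_i' = \frac{cW_i}{\sum_{k=1}^{n} cW_k} = \frac{W_i}{\sum_{k=1}^{n} W_k} = U_i$, so only a change in the relative weights, i.e. the utility preferences, can move the normalised values.)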
Conclusion
Any utility system that conforms to these five axioms destroys the utility monster. I think the main problem with traditional utility systems was unbounded utility, and as such they were indeed not egalitarian. My system destroys the concept of unbounded utility by considering the utility of an event to an individual as a fraction of the total utility of their utility space. This means no individual can have their total (positive or negative) utility space sum to more than any other's; the sum total of the utility space is equal for all individuals. I believe this makes a utility system in which every individual is equally represented and which is truly egalitarian.
This is a concept still in its infancy, so do critique, comment and make suggestions; I will listen to all feedback and use it to develop the system. This only intends to provide a different paradigm for reasoning about utility, especially in the context of egalitarianism. I did not attempt to formalise a mathematical system for calculating utility, because I lack the mathematical acumen to do so. I would especially welcome suggestions for calculating the utility of dependent events, and for other scenarios. This is not a system of utilitarianism and does not pretend to be such; it is only a paradigm for reasoning about utility. It can, however, be applied to existing utilitarian systems.
References
[1] https://en.wikipedia.org/wiki/Utility_monster
[2] https://en.wikipedia.org/wiki/Utilitarianism
Comments
comment by siIver · 2017-06-08T16:01:56.776Z · LW(p) · GW(p)
This is the ultimate example of... there should be a name for this.
You figure out that something is true, like utilitarianism. Then you find a result that seems counter intuitive. Rather than going "huh, I guess my intuition was wrong, interesting" you go "LET ME FIX THAT" and change the system so that it does what you want...
man, if you trust your intuition more than the system, then there is no reason to have a system in the first place. Just do what is intuitive.
The whole point of having a system like utilitarianism is that we can figure out the correct answers in an abstract, general way, but not necessarily for each particular situation. Having a system tells us what is correct in each situation, not vice versa.
The utility monster is nothing to be fixed. It's a natural consequence of doing the right thing, that just happens to make some people uncomfortable. It's hardly the only uncomfortable consequence of utilitarianism, either.
↑ comment by Lumifer · 2017-06-08T16:49:52.003Z · LW(p) · GW(p)
You figure out that something is true, like utilitarianism.
That looks like a category error. What does it mean for utilitarianism to be "true"? It's not a feature of the territory.
if you trust your intuition more than the system, then there is no reason to have a system in the first place
Trust is not all-or-nothing. Putting ALL your trust into the system -- no sanity checks, no nothing -- seems likely to lead to regular epic fails.
↑ comment by denimalpaca · 2017-06-08T20:35:55.999Z · LW(p) · GW(p)
The term you're looking for is "apologist".
↑ comment by Jayson_Virissimo · 2017-06-08T16:29:53.163Z · LW(p) · GW(p)
This is the ultimate example of... there should be a name for this.
I think the name you are looking for is ad hoc hypothesis.
↑ comment by AlexMennen · 2017-06-09T05:39:20.690Z · LW(p) · GW(p)
Sometimes when explicit reasoning and intuition conflict, intuition turns out to be right, and there is a flaw in the reasoning. There's nothing wrong with using intuition to guide yourself in questioning a conclusion you reached through explicit reasoning. That said, DragonGod did an exceptionally terrible job of this.
↑ comment by siIver · 2017-06-10T15:01:42.295Z · LW(p) · GW(p)
Yeah, you're of course right. In the back of my mind I realized that the point I was making was flawed even as I was writing it. A much weaker version of the same would have been correct, "you should at least question whether your intuition is wrong." In this case it's just very obvious to me that there is nothing to be fixed about utilitarianism.
Anyway, yeah, it wasn't a good reply.
comment by cousin_it · 2017-06-08T13:27:22.168Z · LW(p) · GW(p)
Do cats or bacteria have the same range of utility as people? Or are we utility monsters compared to bacteria, raising the possibility that something else can be a utility monster compared to us? I think both options are uncomfortable, no matter what math you use.
↑ comment by DragonGod · 2017-06-08T18:12:05.188Z · LW(p) · GW(p)
Individuals refers only to humans and other sapient entities considered by the system.
↑ comment by Luke_A_Somers · 2017-06-08T19:05:08.602Z · LW(p) · GW(p)
There is a continuum on this scale. Is there a hard cutoff, or is there any scaling? And what about very similar forks of AIs?
↑ comment by DragonGod · 2017-06-08T20:52:38.345Z · LW(p) · GW(p)
Our system considers only humans; another sapient alien race may implement this system, and consider only themselves.
↑ comment by Luke_A_Somers · 2017-06-11T03:59:08.106Z · LW(p) · GW(p)
A) what cousin_it said.
B) consider, then, successively more and more severely mentally nonfunctioning humans. There is some level of incapability at which we stop caring (e.g. head crushed), and I would be somewhat surprised at a choice of values that put a 100% abrupt turn-on at some threshold; and if it did, I expect some human could be found or made that would flicker across that boundary regularly.
↑ comment by entirelyuseless · 2017-06-11T14:09:06.904Z · LW(p) · GW(p)
There is some level of incapability at which we stop caring (e.g. head crushed), ... I expect some human could be found or made that would flicker across that boundary regularly.
This is wrong, at least for typical humans such as myself. In other words, we do not stop caring about the one with the crushed head just because they are on the wrong side of a boundary, but because we have no way to bring them back across that boundary. If we had a way to bring them back, we would care. So if someone is flickering back and forth across the so-called boundary, we will still care about them, since by stipulation they can come back.
↑ comment by Luke_A_Somers · 2017-06-12T13:45:13.424Z · LW(p) · GW(p)
Good point; how about, someone who is stupider than the average dog.
↑ comment by entirelyuseless · 2017-06-12T14:08:43.443Z · LW(p) · GW(p)
I don't think this is a good illustration, at least for me, since I would never stop caring about someone as long as it was clear that they were biologically human, and not brain dead.
I think a better illustration would be this: take your historical ancestors one by one. If you go back far enough in time, one of them will be a fish, which we would at least not care about in any human way. But in that way I agree with what you said about values. We will care less and less in a gradual way as we go back -- there will not be any boundary where we suddenly stop caring.
comment by kingofferrets · 2017-06-08T21:23:31.201Z · LW(p) · GW(p)
Clarification required - what does it mean for everyone to have the "same" utility system? The obvious answer is "every situation gives everyone the same utility", but if I like chocolate, I should gain utility from eating chocolate. If my brother doesn't like eating chocolate, he shouldn't gain utility from it. So if it's not the seemingly obvious answer, how are we defining it?
Also, you've mentioned that the negation of an event is it "not happening", and it has the opposite utility of the original. There are two main objections here:
1) A coworker unexpectedly brings in cookies and hands them out to everyone. This should be a positive utility boost. But am I really getting negative utility every day that doesn't happen? Conversely, am I really getting just as much utility from having my friends alive each and every moment as I lose when they die and I'm stricken with grief?
2) There are an infinite number of things Not Happening at any given time, all of which would in theory play into the utility value. How do we even remotely consider the idea of negations given this?
One way to address this would be to do things like considering probability - we're not terribly happy/sad about the non-occurrence of wildly improbable events - but that's just a start.
↑ comment by DragonGod · 2017-06-08T21:48:46.801Z · LW(p) · GW(p)
I tried to rewrite the article for clarification—please reread. I'll reply to any points you have after a re-read.
↑ comment by kingofferrets · 2017-06-08T22:25:36.228Z · LW(p) · GW(p)
The objections to your concept of negation still stand, I think - there are an infinite number of possible events, an infinite number of which don't happen. Only finitely many things happen, but the utility of each is similar to the utility of the things that didn't happen, since things that don't happen have the same absolute value as they would if they did. We can't just say that they cancel out, because they eat up the available utility space, so every individual event has to have an infinitesimal value...
I'm not sure that this is really a fixable system, because it has to partition out a bounded amount of utility among an infinite number of events, since every possible event factors in to the result, because it either A) happens or B) doesn't, and either way has a utility value. It would need to completely rebuild some of the axioms to overcome this, and you only really have normalizing to the -1 to 1 utility values and the use of negations as axioms.
comment by Manfred · 2017-06-08T18:58:18.511Z · LW(p) · GW(p)
Interpersonal utility comparisons are not a natural part of utility theory the same way individual utility functions are. I think of them as being important for two reasons:
1: Our own personal moral reasoning does something like interpersonal utility comparisons, and we want to try to formalize that.
2: we want to cooperate with other people to achieve some goal that will benefit all, but in order to define our collective goals we need some form of interpersonal utility comparison.
2.5: We're about to build an AI and want to program in by hand how it should weigh human values (warning: don't do 2.5).
comment by AlexMennen · 2017-06-08T16:47:01.229Z · LW(p) · GW(p)
None of that made any sense because utility is not the sum of components from independent events. You can have bounded utility functions without any of that.
↑ comment by DragonGod · 2017-06-08T18:11:01.729Z · LW(p) · GW(p)
I didn't say that. Is there any part of the post you want me to clarify?
↑ comment by AlexMennen · 2017-06-08T19:41:07.188Z · LW(p) · GW(p)
The sum of the utility of an event and its negation is 0.
If two events are independent then the utility of both events occurring is the sum of their individual utilities.
Utilities are defined over outcomes, which don't have negations or independence relations with other outcomes. There is no such thing as the utility of an event in standard expected utility theory, and no need for such a concept.
↑ comment by DragonGod · 2017-06-08T20:56:48.984Z · LW(p) · GW(p)
An event is any outcome from which an individual can derive utility.
The negation of an event is the event not happening.
↑ comment by AlexMennen · 2017-06-09T05:18:50.210Z · LW(p) · GW(p)
Given an outcome X, there are many outcomes other than X, which generally have different utilities. Thus there isn't one utility value for X not happening.
comment by Dagon · 2017-06-08T16:22:15.520Z · LW(p) · GW(p)
By bounding utility, you also enforce diminishing marginal utility to a much greater degree than most people claim to experience it. If one good thing is utility 0.5, a second good thing must be less than 0.5, and a third good thing pretty much is worthless.
personally, my objection to utilitarianism is more fundamental than this. I don't believe utility is an objective scalar measure that can be compared across persons (or even across independent decisions for a person). It's just a convenient mathematical formalism for a decision theory.
↑ comment by AlexMennen · 2017-06-09T05:31:29.852Z · LW(p) · GW(p)
By bounding utility, you also enforce diminishing marginal utility to a much greater degree than most people claim to experience it. If one good thing is utility 0.5, a second good thing must be less than 0.5, and a third good thing pretty much is worthless.
If utility is bounded between -1 and 1, then 0.5 is an extremely large amount of utility, not just some generic good thing. Bounded utility functions do not contradict common sense beliefs about how diminishing marginal returns works.
↑ comment by DragonGod · 2017-06-08T18:09:00.694Z · LW(p) · GW(p)
No. We look at utility at points in time. One good thing is 0.5. We then calculate the subsequent utility of another good thing after receiving that one good thing; you re-evaluate the utility space again after the first occurrence of the event.
↑ comment by AlexMennen · 2017-06-09T05:22:40.517Z · LW(p) · GW(p)
We look at Utility at points in time.
You shouldn't. That's not how utility works.
↑ comment by Dagon · 2017-06-08T18:33:21.359Z · LW(p) · GW(p)
So, reset to 0 at every 50ms, or some other time unit? And this applies to instantaneous utility as well - do you really mean to say that there can exist no experience that is twice as good as a 0.5 utility experience?
↑ comment by DragonGod · 2017-06-08T20:59:09.865Z · LW(p) · GW(p)
Reset to 0 after each event.
You may have a total utility capacity of 10^10 X, where X is the maximum utility capacity of the average human.
All your utility values are expressed as a fraction of 10^10 X. A normalised utility value of 0.5 grants you 5.0*10^9 X of raw utility, and grants an average human 0.5 X.
↑ comment by Dagon · 2017-06-08T21:35:05.669Z · LW(p) · GW(p)
What's an "event"? What if multiple streams of qualia are happening simultaneously - is each instant (I chose 50ms as a guess at minimum experience unit) an event, or the time between sleep periods (and do people not have experiences while sleeping)?
Why do you claim there is a maximum utility for an "average human", and why use that rather than the maximum utility of the maximally-satisfied human? And is this a linear scaling (if so, why not just use the number rather than a constant fraction) or some logarithmic or other transform (and if so, why)?
comment by Pimgd · 2017-06-08T14:38:10.331Z · LW(p) · GW(p)
Maybe your utility system works, but I don't feel like it matches our world.
Plus, what does the "negation" of an event even mean? If someone that I care about dies, I feel sad. If they then come back, I don't feel not-sad, rather I'd be pretty disturbed (and of course happy) because what the hell just happened.
That is to say, if you stab me, but then use a magic wand to make it go away, I don't go back to normal, I become really scared of you instead.
You could say that "negating" an event turns it into "it never happened". But then I don't know what it means or how you could steer actions with it. You can't "negate" events that already happened, so, best you can do with the model is "yeah, I guess we shouldn't have done that"?
comment by DragonGod · 2017-06-09T07:31:52.090Z · LW(p) · GW(p)
Hmmm, I've received counterexamples where the utility of an event plus its negation isn't zero.
E.g. receiving $10,000 vs. not receiving $10,000.
Getting a cookie vs. not getting a cookie.
I could redefine the negation of an event in regards to gaining material possessions as losing those material possessions, but would the negative utility be equal to the positive utility?
So, I've decided to knock off one axiom: "utility of an event + its negation = 0".
Sum total utility of positive events = 1.
Sum total utility of negative events = -1.
The system is preserved.
I'll edit it when I'm on laptop.
comment by denimalpaca · 2017-06-08T20:37:17.738Z · LW(p) · GW(p)
If I get a cookie, then I'm happy because I got a cookie. The negation of this event is that I do not get a cookie. However, I am still happy, because now I feel healthier, having not eaten a cookie today. So both the event and its negation cause me positive utility.
↑ comment by DragonGod · 2017-06-08T20:49:31.973Z · LW(p) · GW(p)
The negation of the event is that you did not get a cookie, not that you do not get a cookie. The negation of an event is that it did not happen. Either an event occurs or does not—it goes without saying that both an event and its negation cannot occur.
↑ comment by denimalpaca · 2017-06-09T15:44:57.792Z · LW(p) · GW(p)
Even changing "do" to "did", my counter example holds.
Event A: At 1pm I get a cookie and I'm happy. At 10pm, I reflect on my day and am happy for the cookie I ate.
Event (not) A: At 1pm I do not get a cookie. I am not sad, because I did not expect a cookie. At 10pm, I reflect on my day and I'm happy for having eaten so healthy the entire day.
In either case, I end up happy. Not getting a cookie doesn't make me unhappy. Happiness is not a zero sum game.