comment by VincentYu · 2014-01-28T18:21:47.728Z · LW(p) · GW(p)
A fundamental issue here is that von Neumann–Morgenstern (VNM) utility functions (also called cardinal utility functions, as opposed to ordinal utility functions) are not comparable across entities; after all, they are defined only up to positive affine transformations.
This means that the relations in your post that involve more than one utility function are meaningless under the VNM framework. Contrary to popular misconception, the inequality
u_v(v(1), b(0)) > u_b(v(0), b(1))
tells us nothing about whether Veronica likes apple pies more than Betty does, and the equality
u_b(v(1), b(0)) = u_v(v(0), b(1)) = 0
tells us nothing about whether Betty and Veronica care whether the other gets a pie.
A quick way to see this formally is to note that you may transform one of the utility functions (by a positive affine transformation ax + b with a > 0, which yields an equally valid VNM utility function for the same preferences) and get any relation you want between the pair of utility functions at the specified points.
A quick way to see this informally is to recall that only comparisons of differences in utility within a single entity are meaningful.
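A minimal sketch of that formal argument, with made-up numbers (the utilities and scaling constants below are hypothetical, chosen only for illustration): rescaling one agent's utility function by a positive affine transformation leaves her own preferences untouched but can reverse any cross-agent inequality.
# Hypothetical VNM utilities over the outcomes (Veronica's pies, Betty's pies).
u_v = { [1, 0] => 10.0, [0, 1] => 2.0 }   # Veronica's utility function
u_b = { [1, 0] => 1.0,  [0, 1] => 3.0 }   # Betty's utility function
# A positive affine transformation a*x + b (a > 0) gives an equally valid
# representation of the same preferences.
rescale = ->(u, a, b) { u.transform_values { |x| a * x + b } }
u_b_rescaled = rescale.call(u_b, 100.0, 0.0)
u_v[[1, 0]] > u_b[[0, 1]]            # => true  (looks like Veronica "wants pie more")
u_v[[1, 0]] > u_b_rescaled[[0, 1]]   # => false (the cross-agent comparison flips)
# Betty's own ranking of the two outcomes is unchanged by the rescaling.
(u_b[[0, 1]] > u_b[[1, 0]]) == (u_b_rescaled[[0, 1]] > u_b_rescaled[[1, 0]])   # => true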
Von Neumann and Morgenstern address some of these common misunderstandings in Theory of Games and Economic Behavior (3rd ed., p. 11; italics original, bold mine):
A particularly striking expression of the popular misunderstanding about this pseudo-maximum problem [of utility maximization] is the famous statement according to which the purpose of social effort is the "greatest possible good for the greatest possible number." A guiding principle cannot be formulated by the requirement of maximizing two (or more) functions at once.
Such a principle, taken literally, is self-contradictory. (In general one function will have no maximum where the other function has one.) It is no better than saying, e.g., that a firm should obtain maximum prices at maximum turnover, or a maximum revenue at minimum outlay. If some order of importance of these principles or some weighted average is meant, this should be stated. However, in the situation of the participants in a social economy nothing of that sort is intended, but all maxima are desired at once—by various participants.
One would be mistaken to believe that it can be obviated, like the difficulty in the Crusoe case mentioned in footnote 2 on p. 10, by a mere recourse to the devices of the theory of probability. Every participant can determine the variables which describe his own actions but not those of the others. Nevertheless those "alien" variables cannot, from his point of view, be described by statistical assumptions. This is because the others are guided, just as he himself, by rational principles—whatever that may mean—and no modus procedendi can be correct which does not attempt to understand those principles and the interactions of the conflicting interests of all participants.
Sometimes some of these interests run more or less parallel—then we are nearer to a simple maximum problem. But they can just as well be opposed. The general theory must cover all these possibilities, all intermediary stages, and all their combinations.
To directly address the issue of utility comparison across entities, refer to this footnote on p. 19:
We have not obtained [from the von Neumann–Morgenstern axioms] any basis for a comparison, quantitatively or qualitatively, of the utilities of different individuals.
I highly recommend reading the first sections of the book. Its copyright has expired and the Internet Archive has a scan of the book.
A quick note for anyone confused about why the utility functions here are so much weaker than what they are used to seeing: you have probably encountered "utility" mostly in discussions of utilitarianism, where it generally does not fall under the VNM framework. Utilitarian utilities are usually intended to be much stronger than VNM utilities, so that they are no longer invariant under positive affine transformations and can be compared across entities; the trouble is that there is no sensible formalization that captures these properties. In other words, "utility" in utilitarianism suffers from a namespace collision with "utility" in economics and decision theory. (Even between those two fields, "utility" often refers to different things: ordinal utility is more common in economics, whereas cardinal utility is more common in decision theory.)
↑ comment by Chrysophylax · 2014-01-28T19:12:00.379Z · LW(p) · GW(p)
Upvoted.
To clarify: VNM-utility is a decision utility, while utilitarianism-utility is an experiential utility. The former describes how a rational agent behaves (a rational agent always maximises expected VNM-utility): it doesn't matter which particular numbers we assign to outcomes, as long as the induced preference order over lotteries is unchanged, which is why the function is pinned down only up to a positive affine transformation. The latter describes what values should be ascribed to different experiences and is cardinal in a stronger sense, as changing the numbers matters even when no decisions change.
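A quick sketch of the lottery caveat, with made-up numbers (nothing here comes from the post): a positive affine rescaling never changes which lottery has the higher expected utility, but an arbitrary order-preserving transformation of the outcome utilities can.
u = { a: 0.0, b: 6.0, c: 10.0 }     # hypothetical VNM utilities for three outcomes
lottery_1 = { a: 0.5, c: 0.5 }      # 50/50 gamble between the worst and best outcomes
lottery_2 = { b: 1.0 }              # the middle outcome for sure
expected = ->(util, lottery) { lottery.sum { |outcome, p| p * util[outcome] } }
affine   = u.transform_values { |x| 3 * x + 7 }   # allowed: represents the same preferences
monotone = u.transform_values { |x| x * x }       # order-preserving on outcomes, but not affine
expected.call(u, lottery_1)        < expected.call(u, lottery_2)          # => true
expected.call(affine, lottery_1)   < expected.call(affine, lottery_2)     # => true  (ranking unchanged)
expected.call(monotone, lottery_1) < expected.call(monotone, lottery_2)   # => false (ranking flips)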
↑ comment by private_messaging · 2014-01-29T20:37:09.448Z · LW(p) · GW(p)
To add to this: if, for the sake of argument, there were a formalization of "utility" from utilitarianism, that would imply having a function over a region of space (or spacetime) that tells us how that region feels, or what it wants. (To actually implement an AI with it, that function would also have to be approximated somehow within the ontology we actually employ, which we don't know how to do either, but I digress.)
Naturally, there's no reason for this function taken over a large region of space (including the whole Earth) to equal the sum, the average, or any other linear combination of the function taken over parts of that region. Indeed, that very obviously wouldn't work if the region were your head and the sub-regions were 1 nm^3 cubes.
comment by ThisSpaceAvailable · 2014-02-15T07:32:49.970Z · LW(p) · GW(p)
In going from your second bullet point to your third, you jump from a positive statement to a normative one. If one has a utility function over the states of all agents, then one can extend VNM to that utility function and try to maximize it, but that just begs the question of what utility to assign to the states of other agents.
u_v(v(1), b(0)) > u_b(v(0), b(1)
Besides the missing parenthesis, there's a crucial conceptual problem with that statement. If "u_v(v(1), b(0))" means "a utility function as defined by the VNM theorem", then the statement does not follow. VNM says that a utility function exists; it does not say that the function is unique. Since u_v(v(1), b(0)) is not uniquely defined, asking whether it is greater than another number doesn't make sense.
u_v(v(1), b(0)) is the map. There is an abstract object that comes from a set for which a correspondence can be set up with the real numbers, but that doesn't mean the object is a real number. Saying "the real number that I'm using to represent the value of Veronica's utility function is greater than the real number that I'm using to represent the value of Betty's utility function" is a vacuous statement. It's a statement about the map, not the territory. Each agent has a comparison method, but there is no universal comparison method.
veronica.prefers?( [v(1), b(0)], [v(0), b(1)] )  # => true
betty.prefers?( [v(1), b(0)], [v(0), b(1)] )     # => false
prefers?( [v(1), b(0)], [v(0), b(1)] )           # => undefined method `prefers?' for main:Object (NoMethodError)
↑ comment by PhilGoetz · 2014-03-04T19:48:08.833Z · LW(p) · GW(p)
u_v(v(1), b(0)) > u_b(v(0), b(1)
Besides the missing parenthesis, there's a crucial conceptual problem with that statement. If "u_v(v(1), b(0))" means "a utility function as defined by the VNM theorem", then the statement does not follow. VNM says that a utility function exists; it does not say that the function is unique. Since u_v(v(1), b(0)) is not uniquely defined, asking whether it is greater than another number doesn't make sense.
No. The post says, "Betty likes apple pies, but Veronica loves them, so u_v(v(1), b(0)) > u_b(v(0), b(1))." It says that Betty's utility, in the situation where she has one pie and Veronica does not, is less than Veronica's utility in the situation where she has one pie and Betty does not. That is a constraint that we impose on the two utility functions.
This post is incomplete, as was noted at its beginning, and you really shouldn't be trying to figure it out now. "Incomplete" is very bad, in mathematics. I'm moving it into Drafts.
comment by Simulation_Brain · 2014-02-05T19:53:04.216Z · LW(p) · GW(p)
I may be confused, but it seems to me that the issue in generalizing from decision utility to utilitarian utility simply comes down to making an assumption that allows utilities to be compared among different people, i.e., to put them on the same scale. I think there's a pretty strong argument that we can do so, springing from the fact that we are all running essentially the same neural hardware. Whatever experiential value is, it's made of patterns of neural firing, and we all have basically the same patterns. While we don't run our brains exactly the same, the mood- and reward-processing circuitry is pretty tightly feedback-controlled, so saying that everyone's relative utilities are equal shouldn't be too far from the truth.
But that's only when one adopts an unbiased view, and neither I nor (almost?) anyone else in history has done so. We consider our own happiness more important than anyone else's. We weight it higher in our own decisions, and that's perfectly rational. The end point of this line of logic is that there is no objective ethics; it's up to the individual.
But there is one that makes more sense than others when making group decisions, and that's sum utilitarianism. That's the best candidate for an AI's utility function. Approximations must be made, but they're going to be approximately right. They can be improved by simply asking people about their preferences.
The common philosophical concern that you can't put different individuals' preferences on the same scale does not hold water when held up against our current knowledge about how brains register value and so create preferences.
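For concreteness, one way the "same scale, then sum" idea might be cashed out (a hedged sketch with made-up numbers; the range normalization below is exactly the interpersonal-comparability assumption being defended, and other normalizations would give other answers):
# Hypothetical per-person utilities over the same two options.
utilities = {
  veronica: { pie_to_veronica: 10.0, pie_to_betty: 2.0 },
  betty:    { pie_to_veronica: 1.0,  pie_to_betty: 3.0 }
}
# Range-normalize each person's utilities to [0, 1]; this step encodes the
# assumption that everyone's relative utilities count equally.
normalized = utilities.transform_values do |u|
  lo, hi = u.values.minmax
  u.transform_values { |x| (x - lo) / (hi - lo) }
end
# Sum-utilitarian group score for each option.
options = utilities.values.first.keys
group_scores = options.map { |opt| [opt, normalized.values.sum { |u| u[opt] }] }.to_h
# => { pie_to_veronica: 1.0, pie_to_betty: 1.0 }  (with these numbers the options tie)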
↑ comment by blacktrance · 2014-02-05T20:26:40.929Z · LW(p) · GW(p)
We weight it higher in our own decisions, and that's perfectly rational. The end point of this line of logic is that there is no objective ethics
That's a big leap. Why would weighing the quality of our own experiences more highly mean that there's no objective ethics?