Bayesian Utility: Representing Preference by Probability Measures
post by Vladimir_Nesov · 2009-07-27T14:28:55.021Z · LW · GW · Legacy · 37 comments
This is a simple transformation of the standard expected utility formula that I found conceptually interesting.
For simplicity, let's consider a finite discrete probability space with a non-zero probability p(x) at each point x, and a utility function u(x) defined on its sample space. The expected utility of an event A (a set of points of the sample space) is the average value of the utility function over the event, weighted by probability:

EU(A) = Σ_{x∈A} u(x)·p(x) / Σ_{x∈A} p(x)
Expected utility is a way of comparing events (sets of possible outcomes) that correspond to, for example, available actions. Event A is said to be preferable to event B when EU(A)>EU(B). The preference relation doesn't change when the utility function is transformed by a positive affine transformation. Since the sample space is assumed finite, we can assume without loss of generality that u(x)>0 for all x. Such a utility function can additionally be rescaled so that, over the whole sample space,

Σ_x u(x)·p(x) = 1.
Now, if we define

q(x) = u(x)·p(x),
the expected utility can be rewritten as

EU(A) = Σ_{x∈A} q(x) / Σ_{x∈A} p(x),
or

EU(A) = Q(A)/P(A),

where P(A) = Σ_{x∈A} p(x) and Q(A) = Σ_{x∈A} q(x).
Here, P and Q are two probability measures. It's easy to see that this form of the expected utility formula has the same expressive power, so a preference relation can be defined directly by a pair of probability measures on the same sample space, instead of by a probability measure plus a utility function.
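A minimal numeric sketch of this rewriting (the sample space, probabilities and utilities below are made up for illustration):

```python
# A toy finite sample space with assumed probabilities p(x) and utilities u(x);
# u is already shifted/rescaled so that u(x) > 0 and sum_x u(x)*p(x) = 1.
p = {"x1": 0.2, "x2": 0.3, "x3": 0.5}
u = {"x1": 2.0, "x2": 1.0, "x3": 0.6}   # 0.4 + 0.3 + 0.3 = 1

q = {x: u[x] * p[x] for x in p}          # the "shouldness" measure

def P(event):
    """Probability of an event (a set of sample points)."""
    return sum(p[x] for x in event)

def Q(event):
    """Shouldness of an event."""
    return sum(q[x] for x in event)

def eu_classic(event):
    """Average utility over the event, weighted by probability."""
    return sum(u[x] * p[x] for x in event) / P(event)

def eu_ratio(event):
    """The rewritten form: ratio of the two measures."""
    return Q(event) / P(event)

for A in [{"x1"}, {"x2", "x3"}, {"x1", "x3"}]:
    assert abs(eu_classic(A) - eu_ratio(A)) < 1e-12
```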
Expected utility written in this form only uses the probability of the whole event in both measures, without looking at the individual points. I tentatively call the measure Q "shouldness", with P being "probability". The conceptual advantage of this form is that probability and utility are now on equal footing, and it's possible to work with both of them using the familiar Bayesian updating, in exactly the same way. To compute the expected utility of an event given additional information, just use the posterior shouldness and probability:

EU(A|B) = Q(A|B)/P(A|B).
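As a sketch of that claim (again with assumed numbers): conditioning both measures on an event B and taking their ratio ranks events exactly as restricting attention to B directly would, since the two quantities differ only by a constant factor of 1/EU(B):

```python
p = {"x1": 0.1, "x2": 0.4, "x3": 0.4, "x4": 0.1}
u = {"x1": 3.0, "x2": 0.5, "x3": 0.5, "x4": 3.0}   # sum_x u(x)*p(x) = 1
q = {x: u[x] * p[x] for x in p}

P = lambda e: sum(p[x] for x in e)
Q = lambda e: sum(q[x] for x in e)
EU = lambda e: Q(e) / P(e)

B = {"x2", "x3", "x4"}                    # the additional information

def eu_given(A, B):
    """EU(A|B): posterior shouldness over posterior probability."""
    return (Q(A & B) / Q(B)) / (P(A & B) / P(B))

for A in [{"x1", "x2"}, {"x3"}, {"x2", "x4"}]:
    # EU(A|B) equals EU(A ∩ B) up to the constant factor 1/EU(B),
    # so it induces the same preference over events given B.
    assert abs(eu_given(A, B) - EU(A & B) / EU(B)) < 1e-12
```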
If events are drawn as points (vectors) in (P,Q) coordinates, expected utility is monotone in the polar angle of the vectors. Since the coordinates are measures of events, the vector depicting a union of disjoint events equals the sum of the vectors depicting those events:

(P(A∪B), Q(A∪B)) = (P(A), Q(A)) + (P(B), Q(B))  for disjoint A and B.
This makes it possible to see graphically some of the structure of simple sigma-algebras of the sample space together with a preference relation defined by a pair of measures. See also this comment for some examples of applying this geometric representation of preference.
A preference relation defined by expected utility in this way also doesn't depend on constant factors in the measures, so it's unnecessary to require the measures to sum to 1.
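A small sketch of the geometric picture, using made-up, unnormalized measures (constant factors are irrelevant, as noted above):

```python
import math

p = {"a": 2.0, "b": 6.0, "c": 4.0}      # "probability" weights, not normalized
q = {"a": 3.0, "b": 4.5, "c": 1.0}      # "shouldness" weights, not normalized

P = lambda e: sum(p[x] for x in e)
Q = lambda e: sum(q[x] for x in e)
vec = lambda e: (P(e), Q(e))            # an event as a point in (P, Q) coordinates
angle = lambda e: math.atan2(Q(e), P(e))

# Disjoint events add as vectors.
A, B = {"a"}, {"b", "c"}
pa, qa = vec(A); pb, qb = vec(B); pu, qu = vec(A | B)
assert abs(pu - (pa + pb)) < 1e-12 and abs(qu - (qa + qb)) < 1e-12

# The polar angle orders events the same way as EU = Q/P, and rescaling
# either measure by a positive constant leaves that order unchanged.
events = [{"a"}, {"b"}, {"c"}, {"a", "c"}, {"b", "c"}]
by_eu = sorted(events, key=lambda e: Q(e) / P(e))
by_angle = sorted(events, key=angle)
by_rescaled = sorted(events, key=lambda e: (5 * Q(e)) / (0.5 * P(e)))
assert by_eu == by_angle == by_rescaled
```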
Since P and Q are just devices representing the preference relation, there is nothing inherently "epistemic" about P. Indeed, it's possible to mix P and Q together without changing the preference relation. A pair (p',q') defined by

p'(x) = α·p(x) + (1−α)·q(x),
q'(x) = β·p(x) + (1−β)·q(x),

with α > β,
gives the same preference relation, since

EU'(A) = Q'(A)/P'(A) = (β·P(A) + (1−β)·Q(A)) / (α·P(A) + (1−α)·Q(A))

is a monotonically increasing function of EU(A) = Q(A)/P(A) when α > β.
(The coefficients can be negative or greater than 1, but the values of p' and q' must remain positive.)
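A quick numeric check of the mixing claim, using the form given above (all numbers are made up; note α > β and the mixed values stay positive):

```python
# Hypothetical measures defining a preference relation via EU(e) = Q(e)/P(e).
p = {"a": 0.2, "b": 0.5, "c": 0.3}
q = {"a": 0.4, "b": 0.35, "c": 0.25}

def EU(e, pm, qm):
    return sum(qm[x] for x in e) / sum(pm[x] for x in e)

# Mix the two measures; with alpha > beta (and all mixed values positive)
# the induced preference relation is unchanged.
alpha, beta = 1.3, -0.2   # coefficients may lie outside [0, 1]
p2 = {x: alpha * p[x] + (1 - alpha) * q[x] for x in p}
q2 = {x: beta * p[x] + (1 - beta) * q[x] for x in p}
assert all(v > 0 for v in p2.values()) and all(v > 0 for v in q2.values())

events = [{"a"}, {"b"}, {"c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
order_before = sorted(events, key=lambda e: EU(e, p, q))
order_after = sorted(events, key=lambda e: EU(e, p2, q2))
assert order_before == order_after
```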
Conversely, given a fixed measure P, it isn't possible to define an arbitrary preference relation by varying only Q (or the utility function). For example, for a sample space of three elements a, b and c, if p(a)=p(b)=p(c), then EU(a)>EU(b)>EU(c) implies EU(a+c)>EU(b+c), so it isn't possible to choose q such that EU(a+c)<EU(b+c). If we are free to choose p, however, an example with these properties (allowing zero values for simplicity) is a=(0,1/4), b=(1/2,3/4), c=(1/2,0) in (P,Q) coordinates, with a+c=(1/2,1/4) and b+c=(1,3/4), so EU(a+c)=1/2 < 3/4=EU(b+c).
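The example checks out numerically (a sketch; p(a)=0 is allowed here for simplicity, which makes EU(a) infinite):

```python
# The (p, q) values from the example: a=(0, 1/4), b=(1/2, 3/4), c=(1/2, 0).
p = {"a": 0.0, "b": 0.5, "c": 0.5}
q = {"a": 0.25, "b": 0.75, "c": 0.0}

P = lambda e: sum(p[x] for x in e)
Q = lambda e: sum(q[x] for x in e)

def EU(e):
    # With p("a") = 0 allowed for simplicity, EU({"a"}) is +infinity.
    return Q(e) / P(e) if P(e) > 0 else float("inf")

# EU(a) > EU(b) > EU(c) ...
assert EU({"a"}) > EU({"b"}) > EU({"c"})
# ... and yet EU(a+c) < EU(b+c), which is impossible when p(a) = p(b) = p(c).
assert EU({"a", "c"}) < EU({"b", "c"})   # 0.25/0.5 = 0.5  <  0.75/1.0 = 0.75
```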
The prior is an integral part of preference, and it works in exactly the same way as shouldness. Manipulations with probabilities, or Bayesian "levels of certainty", are manipulations with "half of preference". The problem of choosing Bayesian priors is, in general, the problem of formalizing preference; it can't be solved completely without considering utility, without formalizing values, and values are very complicated. No simple morality, no simple probability.
37 comments, sorted by top scores.
comment by MSRayne · 2022-07-24T21:41:03.195Z · LW(p) · GW(p)
I finally deciphered this post just now so I'll explain how I'm interpreting it for the convenience of future readers. Basically, we start in a world state with various timelines branching off it - points of the initial probability distribution. Each timeline has a particular utility (how much we like it), and a particular probability (how much we expect it). So you can sum utility times probability for all timelines to get the total expected value of this state of the world we're at right now.
However, we have the option of taking some action, the "event" referenced in the post, which rules out some set of timelines. The remaining set of timelines, the ones we can restrict our future to by performing the action, accounts for some proportion of the total expected value of our current state. That proportion is Q(A), derived from summing the expected value of each timeline in the set and dividing by the expected value of this present state - which is the same as normalizing the present state's expected value to 1.
If we perform the action, those timelines keep their probability weights, but in the absence of the other timelines now ruled out, we re-scale them to sum to 1, in the sense of Bayesian updating (our action is evidence that we're in that set of timelines rather than some other set), by dividing by the total proportion of probability mass they had in our initial state (i.e. their total probability), which is P(A).
So, Q(A)/P(A) essentially is like a "score multiplier". If the action restricts the future to a set of timelines whose proportion of total expected value, from the perspective of the pre-action starting state, is greater than their total probability, this normalized expected value of the action will be greater than 1 - we've improved our position, forced the universe into a world state which gives us a better bet than we had before. On the other hand, of course, it could be less than 1 if we restrict to a set of timelines whose density of value proportion per probability is too low - we've thrown away some potential value that was originally available to us.
The fun thing is that since Q and P both look like probability distributions - ways of weighting timelines as proportions of the whole - we can modify them with linear transformations in such a way that the preference ordering of Q(A)/P(A) remains unchanged. But that's where my currently reached understanding stops. I'll have to analyze the rest of the post to get a better sense of how that transformation works and why it would be useful.
comment by jimmy · 2009-07-27T18:50:24.453Z · LW(p) · GW(p)
I may be missing your point, but to me, it looks like the summary would be:
If you bundle utility with probability, you can do the same maths, which is nice since it simplifies other things. For a fixed prior, some preferences between events are impossible no matter what your utility function is [neat result, btw].
Since the probability math works, I now call the new thing "probability" and show that you can't find prior "probability" (new definition) without considering the normal definition of probability.
This doesn't change anything about regular probability, or finding priors. It just says that you cannot find out what you instrumentally want a priori without knowing your utility function, which is trivially true.
↑ comment by Vladimir_Nesov · 2009-07-27T19:04:53.790Z · LW(p) · GW(p)
As I said in the first sentence, this is but a "simple transformation of the standard expected utility formula that I found conceptually interesting". I don't quite understand the second part of your comment (starting from "Since the probability...").
↑ comment by jimmy · 2009-07-27T19:32:20.152Z · LW(p) · GW(p)
I agree that it is an interesting transformation, but I think your conclusion ("No simple morality, no simple probability.") does not follow.
↑ comment by Vladimir_Nesov · 2009-07-27T19:39:35.893Z · LW(p) · GW(p)
That argument says that if you pick a prior, you can't "patch" it to become an arbitrary preference by finding a fitting utility function. It's not particularly related to the shouldness/probability representation, and it isn't well-understood, but it's easy to demonstrate by example in this setting, and I think it's an interesting point as well, possibly worth exploring.
↑ comment by cousin_it · 2009-07-27T21:56:50.634Z · LW(p) · GW(p)
The new version of the post still loses me at about the point where mixing comes in. (What's your motivation for introducing mixing at all?) I would've been happier if it went on about geometry instead of those huge inferential leaps at the end.
And JGWeissman is right: expected utility is a property of actions, not outcomes, which seems to make the whole post invalid unless you fix it somehow.
↑ comment by Vladimir_Nesov · 2009-07-27T22:26:28.655Z · LW(p) · GW(p)
Any action can be identified with a set of outcomes consistent with the action. See my reply to JGWeissman.
Is the example after mixing unclear? In what way?
↑ comment by cousin_it · 2009-07-27T22:33:20.939Z · LW(p) · GW(p)
Yes, that's true but makes your conclusion a bit misleading because not all sets of outcomes correspond to possible actions. It can easily happen that any preference ordering on actions is rationalizable by tweaking utility under a given prior.
The math in the example is clear enough, I just don't understand the motivation for it. If you reduce everything to a preference relation on subsets of a sigma algebra, it's trivially true that you can tweak it with any monotonic function, not just mixing p and q with alpha and beta. So what.
↑ comment by Vladimir_Nesov · 2009-07-27T22:47:54.472Z · LW(p) · GW(p)
It can also happen that the prior happens to be the right one, but it isn't guaranteed. This is a red flag, a possible flaw, something to investigate.
The question of which events are "possible actions" is a many-faceted one, and solving this problem "by definition" doesn't work. For example, if you can pick the best strategy, it doesn't matter what the preference order says for all events except the best strategy, even what it says for "possible actions" which won't actually happen.
Strictly speaking, I don't even trust (any) expected utility (and so Bayesian math) to represent preference. Any solution has to also work in a discrete deterministic setting.
↑ comment by cousin_it · 2009-07-28T07:45:26.522Z · LW(p) · GW(p)
It seems to me that you're changing the subject, or maybe making inferential jumps that are too long for me.
The information to determine which events are possible actions is absent from your model. You can't calculate it within your setting, only postulate.
If the overarching goal of this post was finding ways to represent human preference (did you imply that? I can't tell), then I don't understand how it brings us closer to that goal.
↑ comment by Vladimir_Nesov · 2009-07-28T11:38:18.492Z · LW(p) · GW(p)
Hofstadter's Law of Inferential Distance: what you are saying is always harder to understand than you expect, even when you take into account Hofstadter's Law of Inferential Distance.
Of course this post is only a small side note, and it says nothing about which events mean what. Human preference is a preference, so even without details, the discussion of preference-in-general has some implications for human preference, which the last paragraph of the post alluded to with regard to picking priors for Bayesian math.
↑ comment by JGWeissman · 2009-07-27T22:42:31.678Z · LW(p) · GW(p)
"Expected utility is usually written for actions, but it can be written as in the post as well, it's formally equivalent."
However, the ratios of the conditional probabilities of those outcomes, given that you take a certain action, will not always equal the ratios of the unconditional probabilities, as in your formula.
comment by othercriteria · 2015-01-14T23:42:22.490Z · LW(p) · GW(p)
This seems cool, but I have a nagging suspicion that this reduces to a handful of sentences, at greater generality, if you use the conditional expectation of the utility function and the Radon-Nikodym theorem?
comment by JGWeissman · 2009-07-27T18:03:30.865Z · LW(p) · GW(p)
Why are we concerned with the expected utility of some subset of the probability space? To find the expected utility of an action, you should sum the products of the utility of each point with its conditional probability given that you take that action, over all points in the space. In effect, you are only considering actions that reduce the probability of some points to zero and then renormalize the probability of the remaining points.
↑ comment by Vladimir_Nesov · 2009-07-27T22:21:34.766Z · LW(p) · GW(p)
Expected utility is usually written for actions, but it can be written as in the post as well; it's formally equivalent. This treatment of expected utility isn't novel in any way. Any action can be identified with the set of possibilities (outcomes) in which it happens. When you talk of actions that "don't reduce some probabilities to zero", you are actually talking about the effect of the actions on the probability distributions of random variables, but behind those random variables is still a probability space on which any information is an element of the sigma algebra, or a clear-cut set of possibilities.
↑ comment by JGWeissman · 2009-07-27T22:38:18.788Z · LW(p) · GW(p)
"Expected utility is usually written for actions, but it can be written as in the post as well, it's formally equivalent."
How is it formally equivalent? How can I represent the expected utility of an action with arbitrary effects on conditional probability using the average, weighted by unconditional probabilities, of the utility of some subset of the possibilities, as in the post?
↑ comment by Vladimir_Nesov · 2009-07-27T23:12:00.211Z · LW(p) · GW(p)
Let A be the action (the set of possibilities consistent with taking the action), and O the set of possible outcomes (each one rated by the utility function; assume for simplicity that every concrete outcome is considered, not event-outcomes). We can assume O is the whole sample space. Then:

EU(A) = Σ_{x∈O} u(x)·p(x|A) = Σ_{x∈A} u(x)·p(x)/P(A) = Q(A)/P(A).
↑ comment by Peter_de_Blanc · 2009-07-28T16:54:37.975Z · LW(p) · GW(p)
How do you calculate P(A)?
↑ comment by Vladimir_Nesov · 2009-07-28T20:52:04.362Z · LW(p) · GW(p)
Trick question? P(A) is just a probability of some event, so depending on the problem it could be calculated in any of the possible ways. "A" can for example correspond to a value of some random variable in a (dynamic) graphical model, taking observations into account, so that its probability value is obtained from belief propagation.
↑ comment by JGWeissman · 2009-07-27T23:41:56.133Z · LW(p) · GW(p)
As I already explained, that only works for actions that exclude some outcomes and renormalize the probabilities of remaining outcomes, preserving the ratios of their probabilities.
Suppose O had 2 elements, x1 and x2, such that p(x1) = p(x2) = .5. If you take action A, then you have conditional probabilities p(x1|A) = .2 and p(x2|A) = .8. In this case, your transformation P(x|A) = P(x, A)/P(A) does not work, because A did not remove x1 as a possibility; it just made it less likely.
↑ comment by Vladimir_Nesov · 2009-07-27T23:58:10.881Z · LW(p) · GW(p)
P(x|A) = P(x,A)/P(A) holds by the definition of conditional probability. You are trying to interpret x1 and x2 as events, while in the grandparent comment the x are elements of the sample space. If you want to consider non-concrete outcomes, compose them from smaller elements. For example, you can have P(O1)=P(O2)=.5, P(O1|A)=.2, P(O2|A)=.8, if O1={x1,x2}, O2={x3,x4}, A={x1,x3}, and p(x1)=.1, p(x2)=.4, p(x3)=.4, p(x4)=.1.
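A sketch verifying the numbers in this construction:

```python
# Sample-space points and the measures from the example above.
p = {"x1": 0.1, "x2": 0.4, "x3": 0.4, "x4": 0.1}
O1, O2, A = {"x1", "x2"}, {"x3", "x4"}, {"x1", "x3"}

P = lambda e: sum(p[x] for x in e)
cond = lambda e, given: P(e & given) / P(given)   # P(e|given) = P(e, given)/P(given)

assert abs(P(O1) - 0.5) < 1e-12 and abs(P(O2) - 0.5) < 1e-12
assert abs(cond(O1, A) - 0.2) < 1e-12
assert abs(cond(O2, A) - 0.8) < 1e-12
```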
comment by cousin_it · 2009-07-27T14:53:35.577Z · LW(p) · GW(p)
Clever! I would have titled it "Couldness and Shouldness", and inserted some sort of pun about "wouldness" at the end.
I don't quite understand the part about mixing. Did you mean 1 >= alpha > beta >= 0 ? If no, some vectors now have negative coordinates and the polar angle becomes an ambiguous ordering. If yes, that's not the general form: why not use any matrix with nonnegative elements and positive determinant?
And I don't understand the last paragraph at all. If X coordinates of points are given, changing the Y coordinates can reorder the polar angles arbitrarily. Or did you simply mean that composite events stay dependent on simple events?
Sorry if those are stupid questions.
↑ comment by Vladimir_Nesov · 2009-07-27T15:14:49.793Z · LW(p) · GW(p)
Mixing: the coefficients can be negative or more than 1, but the values of p' and q' must remain positive (added to the post). This is also a way to drive the polar angle of the best point of the sample space to π/2 (look at the bounding parallelogram in the (P,Q) plane).
You can't move the points around independently, since their coordinates are measures, sums of distributions over specific events, so if you move one event, some of the other events move as well. I'll add an example to the article in a moment.
comment by Vladimir_Nesov · 2009-08-13T20:41:07.045Z · LW(p) · GW(p)
A couple of random thoughts. From the point of view of prior+utility as vectors in probability-shouldness coordinates, it's easy to see that the ability to rescale and shift utilities without changing preference corresponds to transformations of the shouldness component. These transformations don't change the order of the vectors' (events') angles, and so even if we allow shouldness to go negative, expected utility will still work as preference. Similarly, if shouldness is kept positive, one could allow rescaling and shifting probability, so that it, too, can go negative.
Another transformation: if we swap the roles of probability and shouldness, the resulting prior+utility will have the shouldness of the original system as its prior and the inverse utility of the original system as its utility. In this system, expected utility minimization describes the same optimization as expected utility maximization in the original system. The same effect could be achieved by flipping the sign of the utility (another symmetry), which can also be easily seen from the probability-shouldness diagram.
Applying both transformations, we get the same preference, but with the shouldness of the original system as the prior. The utility of the transformed system is the negated inverse of the original utility. This shows that, conceptually, the probability distribution and the shouldness distribution are interchangeable.
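A small sketch of the swap symmetry with assumed measures: exchanging P and Q inverts each event's expected utility, so minimizing in the swapped system selects the same event that maximizing does in the original.

```python
# Hypothetical measures; u(x) = q(x)/p(x) is the implied utility.
p = {"a": 0.2, "b": 0.5, "c": 0.3}
q = {"a": 0.4, "b": 0.35, "c": 0.25}

EU = lambda e, pm, qm: sum(qm[x] for x in e) / sum(pm[x] for x in e)

events = [{"a"}, {"b"}, {"c"}, {"a", "b"}, {"b", "c"}]

# Swapping the roles of the two measures inverts EU pointwise ...
for e in events:
    assert abs(EU(e, q, p) - 1 / EU(e, p, q)) < 1e-12

# ... so minimizing EU in the swapped system picks out the same event
# as maximizing EU in the original one.
best_original = max(events, key=lambda e: EU(e, p, q))
best_swapped = min(events, key=lambda e: EU(e, q, p))
assert best_original == best_swapped
```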
comment by Vladimir_Nesov · 2009-07-27T16:22:47.657Z · LW(p) · GW(p)
Added an example of when it isn't possible to specify arbitrary preference for a given prior, and a philosophical note at the end (related to the "where do the priors come from" debate).
↑ comment by Jonathan_Graehl · 2009-07-27T20:50:02.217Z · LW(p) · GW(p)
I don't follow the equation of preference and priors in the last paragraph.
↑ comment by Vladimir_Nesov · 2009-07-27T20:54:45.941Z · LW(p) · GW(p)
What do you mean?
↑ comment by Jonathan_Graehl · 2009-07-27T21:03:34.228Z · LW(p) · GW(p)
"The prior is an integral part of preference, and it works in exactly the same way as shouldness."
Could you demonstrate? I don't understand.
"The problem of choosing Bayesian priors is, in general, the problem of formalizing preference; it can't be solved completely without considering utility"
I also don't understand what you mean above.
↑ comment by Vladimir_Nesov · 2009-07-27T21:51:37.684Z · LW(p) · GW(p)
What is usually called "prior" is represented by measure P in the post. Together with "shouldness" Q they constitute the recipe for computing preference over events, through expected utility.
If it's not possible to choose a prior more or less arbitrarily and then fill in the gaps with a utility function to get the correct preference, then some priors are inherently incorrect for human preference, and finding the priors that admit completion to the correct preference by a fitting utility function requires knowledge about that preference.
↑ comment by Jonathan_Graehl · 2009-07-28T05:46:18.806Z · LW(p) · GW(p)
Regarding your second point; I'm not sure how it's rational to choose your beliefs because of some subjective preference order.
Perhaps you could suggest a case where it makes sense to reason from preferences to "priors which make my preferences consistent", because I'm also fuzzy on the details of when and how you propose to do so.
↑ comment by Jonathan_Graehl · 2009-07-28T05:43:31.769Z · LW(p) · GW(p)
I see - by "prior" you mean "current estimate of probability", because P was defined as the probability measure over the sample space in the post.
I've been dealing lately with learning research where "prior" means how likely a given model of probability(outcome) is before any evidence, so maybe I was a little rigid.
In any case, I suggest you consistently use "probability" and drop "prior".
comment by timtyler · 2009-07-27T17:09:01.123Z · LW(p) · GW(p)
I've critiqued this "value is complex" [http://lesswrong.com/lw/y3/value_is_fragile/] material before. To summarise from my objections there:
The utility function of Deep Blue has 8,000 parts and contains a lot of information. Throw all that information away, and all you really need to reconstruct Deep Blue is the knowledge that its aim is to win games of chess. The exact details of the information in the original utility function are not recovered - but the eventual functional outcome would be much the same - a powerful chess computer.
The supposed complexity is actually a bunch of implementation details that can be effectively recreated from the goal - if that should prove to be necessary.
It is not precious information that must be preserved. If anything, attempts to preserve the 8,000 parts of Deep Blue's utility function while improving it would actually have a crippling negative effect on its future development. For example, the "look 9 moves ahead" heuristic is a feature when the program is weak, but a serious bug when it grows stronger.
Similarly with complexity of human values: those are a bunch of implementation details to deal with the problem of limited resources - not some kind of representation of the real target.
↑ comment by Jonathan_Graehl · 2009-07-27T20:55:19.007Z · LW(p) · GW(p)
It looks like this is a response to the passing link to http://wiki.lesswrong.com/wiki/Complexity_of_value in the article. At first I didn't understand what in the article you were responding to.
↑ comment by timtyler · 2009-07-27T21:08:21.016Z · LW(p) · GW(p)
The article it was posted in response to was this one - from the conclusion of the post:
http://wiki.lesswrong.com/wiki/Complexity_of_value
That's a wiki article - which can't be responded to directly. The point I raise is an old controversy now. This message seems rather redundant now - since the question it responded to has subsequently been dramatically edited.
↑ comment by Jonathan_Graehl · 2009-07-28T05:49:46.902Z · LW(p) · GW(p)
Yes, I edited, but before your response. Sorry for the confusion.
↑ comment by Wei Dai (Wei_Dai) · 2009-07-27T20:07:37.392Z · LW(p) · GW(p)
Why was this comment voted down so much (to -4 as of now)? It seems to be a reasonable point, clearly written, not an obvious troll or off-topic. Why does it deserve to be ignored?
↑ comment by JGWeissman · 2009-07-27T20:13:59.150Z · LW(p) · GW(p)
It is off topic. The article was not about value being complex, fragile, or hard to preserve.