Circular Preferences Don't Lead To Getting Money Pumped
post by Mestroyer · 2012-09-11T03:42:41.314Z
Edit: for reasons given in the comments, I don't think the question of what circular preferences actually do is well defined, so this is an answer to a wrong question.
If I like Y more than X at an exchange rate of 0.9Y for 1X (so anything above 0.9Y for my 1X looks like a gain), and I like Z more than Y at an exchange rate of 0.9Z for 1Y, and I like X more than Z at an exchange rate of 0.9X for 1Z, you might think that, given 1X and the ability to trade X for Y at 0.95Y for 1X, Y for Z at 0.95Z for 1Y, and Z for X at 0.95X for 1Z, I would trade in a circle until I had nothing left.
But if I knew that I had circular preferences, and I knew that if I had 0.95Y I would trade it for (0.95^2)Z, which I would trade for (0.95^3)X, then actually I'd be trading 1X for (0.95^3)X ≈ 0.857X, which I'm obviously not going to do.
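A minimal sketch of that lookahead reasoning, assuming the agent simply compares its current holdings against the predicted end of the whole trade chain (the good names and rates are the post's hypotheticals, not any standard library):

```python
# A lookahead agent facing the post's circular pairwise preferences.
# All rates and good names are the post's hypothetical numbers.

TRADE_RATE = 0.95          # each trade pays 0.95 of the next good per unit
CYCLE = ["X", "Y", "Z"]    # X -> Y -> Z -> X

def end_of_chain(start_amount=1.0):
    """Predicted holdings of X after following every trade the agent
    knows it would go on to make, all the way around the cycle."""
    return start_amount * TRADE_RATE ** len(CYCLE)

def accept_first_trade():
    # A myopic comparison (1X vs 0.95Y, above the 0.9 indifference point)
    # says yes. The lookahead comparison is 1X vs the predicted end state.
    return end_of_chain() > 1.0

print(end_of_chain())        # 0.857375
print(accept_first_trade())  # False: the pump never starts
```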
Similarly, if the exchange rates are all 1:1 but each trade costs a penny, and I care about a penny much, much less than about any of 1X, 1Y, or 1Z, then if I trade my X for Y, I know I'm actually going to end up back at X minus 3 cents, so I won't make the first trade.
Unless I can set a Schelling fence, in which case I will end up trading once.
So if, instead of being given X, I have a 1/3 chance of each of X, Y, and Z, I would hope I hadn't set a Schelling fence, because then my 1/3 chance of each thing becomes a 1/3 chance of each thing minus the trading penalty. Maybe I'd want to be bad at precommitments. Or would I precommit not to precommit?
19 comments
comment by Manfred · 2012-09-11T04:21:45.151Z · LW(p) · GW(p)
The bad thing about circular preferences is that they make the number you assign to how good an option is (the "utility"-ish thing) not be a function. If you have some quantity of X, normally we'd expect that to determine a unique utility. But that only works because utility is a function! If your decision-maker is based off of some "utility"-ish number that isn't single-valued, you could have different utilities correspond to the very same quantity of X.
Your post basically says "getting money pumped would require us to have higher utility even if we just had a lower-probability version of something nice we already had - and since any thinking being knows that utility is a function, they will avoid this silly case once they notice it."
Well, no. If they have circular preferences, they won't be held back by how utility is actually supposed to be a function. They will make the trades, and really truly have a higher number stored under "utility" than they did before they made the trades. You build a robot that has a high utility if it drives off a cliff, and the robot doesn't go "wait, this sounds like a bad idea. Maybe I should take up needlepoint instead." It drives off the cliff. You build a decision-making system with circular preferences, and it drives off a cliff too.
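To make the contrast concrete, here is a hedged sketch of the purely myopic decision-maker Manfred describes; the greedy per-trade rule is my reading of his argument, run with the post's numbers:

```python
# Sketch of a myopic agent that consults only its local pairwise
# preference at each step, never the end state. Numbers from the post.

INDIFFERENCE = 0.9   # will give 1 unit for anything above 0.9 of the next good
OFFERED = 0.95
CYCLE = ["X", "Y", "Z"]

holding, amount = "X", 1.0
for _ in range(9):   # three full laps around the cycle
    # Local comparison only: 0.95 of the next good beats the 0.9
    # indifference point, so every single trade looks like a gain.
    if OFFERED > INDIFFERENCE:
        holding = CYCLE[(CYCLE.index(holding) + 1) % len(CYCLE)]
        amount *= OFFERED

print(holding, round(amount, 3))   # X 0.63 -- pumped, and "happy" each step
```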
Replies from: army1987, Mestroyer, Antisuji
↑ comment by A1987dM (army1987) · 2012-09-11T18:29:30.207Z · LW(p) · GW(p)
I thought one usually took the VNM axioms as desiderata and derived that one must have a utility function, rather than the other way round.
Replies from: Manfred
↑ comment by Mestroyer · 2012-09-11T05:18:28.259Z · LW(p) · GW(p)
I'm not saying "any thinking being knows that utility is a function," I'm saying that this creature with a broken brain prefers more X to less X. Instead of having a utility function they have a system of comparing quantities of X, Y, and Z.
I was thinking they would make a comparison between what they have at the beginning and what they would have at the end, and it looks like you are making a chain of favorable comparisons to find your way back to X with less of it.
I'm not really sure what algorithm I would write into a robot to decide which path of comparisons to make. Maybe the shortest one (in number of comparisons) that compares the present state to one as far in the future as the robot can predict? But this seems kind of like deducing from contradictory premises.
Replies from: Manfred
↑ comment by Manfred · 2012-09-11T05:23:49.978Z · LW(p) · GW(p)
prefers more X to less X. Instead of having a utility function they have a system of comparing quantities of X, Y, and Z.
Looks like an example might help you to connect this to what I was talking about.
Imagine sqrt(X). Normally people just pick the positive square root or the negative square root - but imagine the whole thing, the parabola-turned-sideways, the thing that isn't a function.
Now. Is it a valid question to ask whether sqrt(5) is greater or less than sqrt(6)?
-
What a decision-maker with circular preferences can have is local preferences - they can make a single decision at any one moment. But they can't know how they'd feel about some hypothetical situation unless they also knew how they'd get there. Which sounds strange, I know, because it seems like they should feel sad about coming back to the same place over and over, with slowly diminishing amounts of X. But that's anthropomorphism talking.
Replies from: Mestroyer
↑ comment by Mestroyer · 2012-09-11T05:39:32.097Z · LW(p) · GW(p)
Why would they make that single decision to trade X for Y based on the comparison between X and Y, instead of the comparison between X and the lesser amount of X they know they'll end up with? I'm saying that in the moment they want Y more than X, but they also want more X more than less X. And they know what kind of decisions they will make in the future.
Now I actually think both positions are answers to an invalid question.
Replies from: Manfred
↑ comment by Manfred · 2012-09-11T15:30:01.754Z · LW(p) · GW(p)
Did you know that the concept of utility was originally used to describe preferences over choices, not states of the world? Might be worth reading up on. One important idea is that you can write preferences entirely in a language of what decisions you make, not how much you value different amounts of X. This is the language that circular preferences can be written in. Pop quiz: if your utility was the angle of a wheel (so, if it turns twice, your utility is 4 pi), could you prove that writing utility in terms of the physical state of the wheel breaks down, but that you can still always have preferences over choices?
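A rough sketch of the pop quiz's setup, under my reading of it (the wheel, the mod-2π state, and the comparison rule are all illustrative assumptions, not anything the comment spells out):

```python
import math

# Let "utility" be the wheel's total turned angle. Two histories can leave
# the wheel in the same physical state (same angle mod 2*pi) with different
# cumulative utility, so utility as a function of physical state breaks
# down. A preference over choices survives: "turn it further" is still a
# plain comparison of two candidate actions.

def physical_state(total_angle):
    return total_angle % (2 * math.pi)

history_a = 0.5                   # half a radian of turning
history_b = 0.5 + 4 * math.pi     # two extra full turns, same final state

print(math.isclose(physical_state(history_a), physical_state(history_b)))  # True
print(history_a == history_b)     # False: one state, two "utilities"

def prefers(turn_by_more, turn_by_less):
    # Choice-level preference: more turning beats less; no state needed.
    return turn_by_more > turn_by_less
```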
↑ comment by Antisuji · 2012-09-11T06:18:10.064Z · LW(p) · GW(p)
This is true as far as it goes, but the OP seems to be talking about a mostly rational agent with some buggy preferences. And in particular it knows its preferences are buggy and in exactly what way. As I mentioned elsewhere, I would expect such an agent to self-modify to apply an internal utility tax on trades among the affected assets, or otherwise compensate for the error. Exactly how it would do this, or even whether it's possible in a coherent way, is an interesting problem.
comment by [deleted] · 2012-09-11T14:06:46.630Z · LW(p) · GW(p)
I had what feels like an insight about this type of money pump scenario in general, so I'll run it by everyone.
If X is a painful experience, like a stubbed toe, Y is a second painful experience, like a caffeine headache, and Z is a third painful experience, like a sore muscle, then trading repeatedly in the manner listed above seems entirely correct, since you would essentially be trading your way down to almost no pain whatsoever. So the circularity of the trades themselves does not appear to be a problem (as opposed to the circularity of the preferences). As another example of this, if someone is offering 2 bagels for 1 apple, someone else is offering 2 candies for 1 bagel, and someone else is offering 2 apples for 1 candy, and you get utility from apples, bagels, and candies, then you have a great arbitrage situation.
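A quick sketch of the arbitrage arithmetic in that bagel example, using the comment's hypothetical vendors:

```python
# The comment's arbitrage loop: 2 bagels per apple, 2 candies per bagel,
# 2 apples per candy. Going around the circle multiplies your apples by 8
# per lap -- the circular *trades* are fine; it's circular *preferences*
# that were the problem.

def apples_after(laps, apples=1):
    for _ in range(laps):
        bagels = 2 * apples
        candies = 2 * bagels
        apples = 2 * candies
    return apples

print([apples_after(n) for n in range(4)])   # [1, 8, 64, 512]
```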
In this case, your preferences are wrong, but they are coherently wrong. If you flip the sign bit (what you think is good is actually bad, and vice versa), for instance, you'll go back to being perfectly reasonable, so you can say "If I look three steps ahead, in this scenario I would be better off doing what I would intuitively prefer NOT to do at each step."
This also appears to work whether you flip the sign bit on X's value (actually you SHOULD discard X because it hurts) or the trade order (actually, you should make all of those trades in the reverse order from what you would think), but presumably not both, because then you would be voluntarily trading toward what you currently think is infinite disutility, which brings up a weird point. Reversing your preferences and reversing how you ACT on those preferences shouldn't do anything to change your behavior, but if you try to evaluate the results of doing both here, it appears to be infinite disutility. That's probably extremely clear evidence you need to change your value system.
Am I onto something, or am I missing the point?
Edit: Clarified a point about circularity.
comment by asr · 2012-09-11T04:57:22.107Z · LW(p) · GW(p)
My understanding is that real humans routinely have cyclic preferences -- particularly when comparing complicated objects like apartments or automobiles, where there are many different attributes and we ignore small differences. I can't find a reference for this in a few minutes of googling, however.
I suspect in practice transaction costs are high enough that the money pump doesn't arise in most cases where we have intransitive preferences. Once people have made their decision by some arbitrary means, they will stick to it.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-09-11T05:07:15.913Z · LW(p) · GW(p)
Once people have made their decision by some arbitrary means, they will stick to it.
One of the benefits of the sunk cost fallacy.
Replies from: asr
↑ comment by asr · 2012-09-11T14:34:47.373Z · LW(p) · GW(p)
I don't think that's the issue here.
The sunk cost fallacy would be relevant if people were swayed by the money they already spent, rather than by the cost of switching. It's easy to disentangle the two: in the case of apartments, one is your past rent payments, the other is your moving costs.
My impression is that people often say "it's too expensive and troublesome to move and too hard to break the lease" and don't often say "I already spent a lot of money on the lease".
Transaction costs are real, and there's nothing irrational about considering them.
comment by JStewart · 2012-09-11T05:25:24.717Z · LW(p) · GW(p)
Having circular preferences is incoherent, and being vulnerable to a money pump is a consequence of that.
I knew that if I had 0.95Y I would trade it for (0.95^2)Z, which I would trade for (0.95^3)X, then actually I'd be trading 1X for (0.95^3)X, which I'm obviously not going to do.
This means that you won't, in fact, trade your X for .95Y. That in turn means that you do not actually value X at .9Y, and so the initially stated exchange rates are meaningless (or rather, they don't reflect your true preferences).
Your strategy requires you to refuse all trades at exchange rates below the money-pumpable threshold, and you'll end up only making trades at exchange rates that are non-circular.
Replies from: Antisuji
↑ comment by Antisuji · 2012-09-11T06:04:26.606Z · LW(p) · GW(p)
This is only true if your definition of value compels you to trade X for Y whenever you value Y more than X, in the absence of external transaction costs. A simpler and more clearly symmetric definition would be: given a choice between X and Y, you value Y more than X if you choose Y, and vice versa.
An otherwise rational agent with hard-coded pairwise preferences as in the OP would detect the cycle and adjust their willingness to trade between X, Y, and Z on an ad hoc basis, perhaps as an implicit transaction cost calculated to match expected apples to apples losses from future trades.
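As one speculative sketch of what that self-modification might look like (the specific tax rule here, charging each in-cycle trade the full expected round-trip loss, is my own illustration rather than anything the comment specifies):

```python
# Speculative sketch of an "internal utility tax" on trades within the
# known bad cycle. The tax rule is illustrative, not from the comment.

RATE = 0.95
LAP_LOSS = 1 - RATE ** 3          # ~0.143 of the starting good lost per lap

def taxed_value(offer):
    """Internally perceived value of an in-cycle offer after the tax."""
    return offer - LAP_LOSS

def accept(current=1.0, offer=RATE):
    # Untaxed, 0.95 of the next good clears the 0.9 indifference point.
    # Taxed, it falls to ~0.807 and the agent stands pat.
    return taxed_value(offer) > 0.9 * current

print(accept())   # False: the cycle stops recommending itself
```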
comment by Dan_Moore · 2012-09-11T20:04:51.564Z · LW(p) · GW(p)
the ability to trade X for Y at an exchange rate of 0.95Y for 1X, and Y for Z at an exchange rate of 0.95Z for 1Y, and Z for X at an exchange rate of 0.95X for 1Z
The above set of exchange rates is problematic. Thinking of X, Y & Z as currency units, you have Y > X, Z > Y, and X > Z - not possible. You would be unlikely to encounter this set of exchange rates.
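A small sketch of why no consistent per-unit values can produce these rates (the ratio framing is my own, not the comment's):

```python
# "I'd give 1X for anything above 0.9Y" implies value(Y)/value(X) > 1, and
# likewise for each step around the cycle. For any real assignment of
# per-unit values, the product of the three ratios is exactly 1 -- but
# here every factor exceeds 1, so no consistent values exist.

step_ratio = 1 / 0.9            # each preference claims the next good is dearer
print(step_ratio ** 3)          # ~1.372 > 1: inconsistent with any pricing
```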
Replies from: Mestroyer
↑ comment by Mestroyer · 2012-09-11T20:24:01.903Z · LW(p) · GW(p)
Things don't have to be likely in an ordinary market; it's just a thought experiment.
But being able to make these trades (one way) is actually realistic, when you consider transaction costs, or fees that someone exchanging currency for tourists would charge.
comment by DanielLC · 2012-09-11T04:16:52.340Z · LW(p) · GW(p)
Suppose I randomly give you X, Y, or Z, and offer you a single trade for the one you prefer to it, but you have to pay a penny. In each case, the trade benefits you, so you'd make the trade. You end up with exactly the same probability distribution, but one penny poorer.
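A small sketch of the bookkeeping in that scenario; the uniform draw and the penny are from the comment, the accounting is illustrative:

```python
import random

# DanielLC's scenario: draw X, Y, or Z uniformly, then make the one trade
# you locally prefer (the next good in the cycle) for a penny. The final
# distribution over goods is the same uniform 1/3 each; only the penny moves.

CYCLE = ["X", "Y", "Z"]

def play():
    drawn = random.choice(CYCLE)
    traded = CYCLE[(CYCLE.index(drawn) + 1) % 3]   # the locally preferred swap
    return traded

counts = {g: 0 for g in CYCLE}
for _ in range(30_000):
    counts[play()] += 1
print(counts)   # roughly 10,000 each: same lottery, one penny poorer per play
```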
Replies from: Mestroyer
↑ comment by Mestroyer · 2012-09-11T05:32:26.960Z · LW(p) · GW(p)
Hmm. As I realized in this post: http://lesswrong.com/r/discussion/lw/egk/circular_preferences_dont_lead_to_getting_money/7eg4 it really depends on which comparison you make. If you compare the X in the first bet to the X in the second bet, etc., then you don't take the deal. If you compare the X in the first bet to the Y in the second bet, then you do. So I don't actually think it's well defined whether you would take the bet or not.