Comments sorted by top scores.
comment by cousin_it · 2017-06-28T11:52:06.273Z · LW(p) · GW(p)
I choose the second strategy, of course. The Kelly strategy in your problem is risk-averse w.r.t. utility, which is irrational because utility is pretty much defined as the measure you aren't risk-averse about. That said, it's very hard to imagine a situation that would correspond to high utility in human decision-making (i.e. you'd happily agree to 1% chance of that situation and 99% chance of extinction), so I don't blame people for feeling that risk-aversion must be the answer.
Edit: it's better to think of utility not as the amount of goodness in a situation (happy people etc), but as a way to summarize your current decision-making under uncertainty. For example, you don't have to assign utility 20000 to any situation, unless you'd truly prefer a 1% chance of that situation to 100% chance of utility 100. That makes it clear just how unintuitive high utilities really are.
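For concreteness (assuming the other 99% branch carries roughly zero utility), the comparison works out as

\[
0.01 \times 20000 \;=\; 200 \;>\; 100 \;=\; 1.00 \times 100,
\]

so an agent that genuinely assigns utility 20000 to some situation has to take the 1% gamble over the sure 100; if you would refuse it, that is evidence the assignment was never really that high.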
Replies from: sen, Dagon
↑ comment by sen · 2017-06-29T05:17:51.586Z · LW(p) · GW(p)
Would your answer change if I let you flip the coin until you lost? Based on your reasoning, it should not: even though extinction is then effectively guaranteed, the infinitesimal chance of infinitely many good coin flips comes with gains large enough to outweigh it.
I would not call the Kelly strategy risk-averse. I imagine that word to mean "grounded in a fantasy where risk is exaggerated". I would call the second strategy risk-prone. The difference is that the Kelly strategy ends up being the better choice in realistic cases, whereas the second strategy ends up being the better choice in the extraordinarily rare wishful cases. In that sense, I see this question as one that differentiates people that prefer to make decisions grounded in reality from those that prefer to make decisions grounded in wishful thinking. The utilitarian approach then is prone to wishful thinking.
Still, I get your point. There may exist a low-chance scenario for which I would, with near certainty, trade the Kelly-heaven world for a second-hell world. To me, that means there exists a scenario that could lull me into gambling on wildly-improbable wishful thinking. Though such scenarios may exist, and though I may bet on such scenarios when presented with them, I don't believe it's reasonable to bet on them. I can't tell if you literally believe that it's reasonable to bet on such scenarios or if you're imagining something wholly different from me.
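A minimal simulation of that contrast, under assumed parameters (repeated even-money bets won with probability 0.6; nothing here is taken from the original post's actual numbers):

    import random
    import statistics

    def run(fraction, p_win=0.6, rounds=100, start=1.0):
        """Play `rounds` even-money bets, staking `fraction` of the current bankroll each time."""
        bankroll = start
        for _ in range(rounds):
            stake = bankroll * fraction
            if random.random() < p_win:
                bankroll += stake
            else:
                bankroll -= stake
        return bankroll

    random.seed(0)
    kelly_fraction = 2 * 0.6 - 1   # Kelly fraction for an even-money bet: 2p - 1 = 0.2
    all_in_fraction = 1.0          # the "second strategy": stake everything every round

    kelly = [run(kelly_fraction) for _ in range(10_000)]
    all_in = [run(all_in_fraction) for _ in range(10_000)]

    # In expectation the all-in strategy dominates (its theoretical mean is 1.2**100),
    # but that expectation lives entirely on win streaks far too rare to show up here;
    # its typical outcome is ruin, while Kelly's typical outcome is steady growth.
    print("Kelly : mean %10.2f  median %10.2f" % (statistics.mean(kelly), statistics.median(kelly)))
    print("All-in: mean %10.2f  median %10.2f" % (statistics.mean(all_in), statistics.median(all_in)))

The all-in strategy's entire edge in expectation sits on a win-every-round streak of probability 0.6^100; every other path ends at zero, which is the "extraordinarily rare wishful cases" point in concrete form.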
Replies from: cousin_it
↑ comment by cousin_it · 2017-06-29T06:57:18.691Z · LW(p) · GW(p)
Would your answer change if I let you flip the coin until you lost?
Yes, my answer would change to "I don't know". vNM expected utility theory certainly doesn't apply when some strategy's expected utility isn't a real number. I don't know any other theory that applies, either. You might appeal to pre-theoretic intuition, but it's famously unreliable when talking about infinities.
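One way to see the problem, under an illustrative payoff (not taken from the original post): suppose each flip triples your utility with probability 1/2 and sends it to 0 (extinction) otherwise. Then for the strategy "flip n times",

\[
\mathbb{E}[U_n] \;=\; \Bigl(\tfrac{1}{2}\Bigr)^{\!n} 3^{n}\, U_0 \;=\; \Bigl(\tfrac{3}{2}\Bigr)^{\!n} U_0 \;\longrightarrow\; \infty,
\qquad
\Pr[\text{survive all } n \text{ flips}] \;=\; 2^{-n} \;\longrightarrow\; 0,
\]

so the finite strategies have expected utilities growing without bound while their limit, "flip until you lose", is almost-sure ruin; there is no real-valued expected utility that ranks the infinite option consistently, which is the regime where vNM theory goes silent.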
The rest of your comment seems confused. Let's say the "reasonable" part of you is a decision-making agent named Bob. If Bob wouldn't bet the house on any low probability scenario, that means Bob doesn't assign high utility to anything (because assigning utility is just a way to encode Bob's decisions), so the thought experiment is impossible for Bob to begin with. That's fine, but then it doesn't make sense to say that Bob would choose the Kelly strategy.
Replies from: sen, sen, entirelyuseless
↑ comment by sen · 2017-06-30T07:01:50.635Z · LW(p) · GW(p)
It can make sense to say that a utility function is bounded, but that implies certain other restrictions. For example, a bounded utility function cannot be decomposed into independent (additive or multiplicative; these are the only two options) subcomponents if the number of subcomponents is unknown. Any utility function that is summed or multiplied over an unknown number of independent subcomponents (e.g. societies) must be unbounded*. Does that mean you believe that utility functions can't be aggregated over independent societies, or that no two societies can contribute independently to the utility function? The latter implies that a utility function cannot be determined without knowing about all societies, which would make the concept useless. Do you believe that utility functions can be aggregated at all beyond the individual level?
* Keep in mind that "unbounded" here means "admitting arbitrarily extreme distinctions between outcomes". In the multiplicative case, even if a utility function is always less than 1, if an individual's utility can be made arbitrarily close to 0, then it's still unbounded in this sense. Such an individual still has enough to gain by betting on a trillion coin tosses.
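A sketch of the additive case, assuming each subcomponent can contribute nontrivially (its utility varies over a range of at least some δ > 0):

\[
U \;=\; \sum_{i=1}^{n} u_i
\quad\Longrightarrow\quad
\sup U - \inf U \;=\; \sum_{i=1}^{n} \bigl(\sup u_i - \inf u_i\bigr) \;\ge\; n\,\delta \;\longrightarrow\; \infty \ \text{ as } n \to \infty,
\]

so a bound on U caps how many subcomponents can contribute nontrivially; the multiplicative case is the same argument applied to log-utilities.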
You mentioned that a utility function should be seen as a proxy for decision making. If decisions can be independent, then their contributions to the definition of a utility function must be independent*. If the utility function is bounded, then the number of independent decisions an agent can decide between must also be bounded. Maybe that makes sense for individuals, since you characterized a utility function as a summary of "current" decision-making, and any individual is presumably limited in their ability to decide between independent outcomes at any given point in time. Again, though, this causes problems for aggregate utility functions.
* Consider the functor F that takes any set of decisions (with inclusion maps between them) to the least-assuming utility function consistent with them. There exists a functor G that takes any utility function to the maximal set of decisions derivable from it. F and G together form a contravariant adjunction between sets of decisions and utility functions. F is then left-adjoint to G. Therefore F sends finite coproducts to finite products. Therefore for any disjoint sets of decisions A and B, the least-assuming utility function defined over their union exists and is F(A+B)=F(A)*F(B). The proof is nearly identical for covariant adjunctions.
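One way to spell out the coproduct-to-product step, taking the claimed adjunction itself on faith: view the contravariant F as a covariant left adjoint into the opposite category; a left adjoint preserves coproducts, and coproducts in the opposite category are products:

\[
F : \mathcal{D} \to \mathcal{U}^{\mathrm{op}} \ \text{a left adjoint}
\;\Longrightarrow\;
F(A + B) \;\cong\; F(A) + F(B) \ \text{in } \mathcal{U}^{\mathrm{op}}
\;\cong\; F(A) \times F(B) \ \text{in } \mathcal{U}.
\]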
It seems like nonsense to say that utility functions can't be aggregated. A model of arbitrary decision making shouldn't suddenly become impossible just because you're trying to model, say, three individuals rather than one. The aggregate has preferential decision making just like the individual.
Replies from: cousin_it
↑ comment by cousin_it · 2017-06-30T07:21:39.782Z · LW(p) · GW(p)
I don't know if my utility function is bounded. My statement was much weaker, that I'm not confident about decision-making in situations involving infinities. You're right that the problem happens not just for unbounded utilities, but also for arbitrarily fine distinctions between utilities. None of these seem to apply to your original post though, where everything is finite and I can be pretty damn confident.
Replies from: sen, sen
↑ comment by sen · 2017-06-30T07:38:31.475Z · LW(p) · GW(p)
Algebraic reasoning is independent of the number system used. If you are reasoning about utility functions in the abstract, and if your reasoning does not make use of any properties of numbers, then it doesn't matter what numbers you use. You're not using any properties of finite numbers to define anything, so whether or not these numbers are finite is irrelevant.
↑ comment by sen · 2017-06-30T07:25:53.177Z · LW(p) · GW(p)
The original post doesn't require arbitrarily fine distinctions, just 2^trillion distinctions. That's perfectly finite.
Your comment about Bob not assigning a high utility value to anything is equivalent to saying that Bob's utility function is bounded.
Replies from: cousin_it
↑ comment by cousin_it · 2017-06-30T07:38:26.028Z · LW(p) · GW(p)
Right, but Bob was based on your claims in this comment about what's "reasonable" for you. I didn't claim to agree with Bob.
Replies from: sen
↑ comment by sen · 2017-06-30T07:40:31.802Z · LW(p) · GW(p)
Fair enough. I have a question then. Do you personally agree with Bob?
Replies from: cousin_it
↑ comment by cousin_it · 2017-06-30T07:45:28.998Z · LW(p) · GW(p)
You're asking if my utility function is bounded, right? I don't know. All the intuitions seem unreliable. My original confident answer to you ("second strategy of course") was from the perspective of an agent for whom your thought experiment is possible, which means it necessarily disagrees with Bob. Didn't want to make any stronger claim than that.
Replies from: sen
↑ comment by sen · 2017-06-30T06:37:59.804Z · LW(p) · GW(p)
It can make sense to say that a utility function is bounded, but that implies certain other restrictions. For example, a bounded utility function cannot be decomposed into independent (additive) subcomponents if the number of subcomponents is unknown. Any utility function that is summed over an unknown number of independent subcomponents (e.g. societies) must be unbounded. Does that mean you believe that utility functions can't be aggregated over independent societies, or that no two societies can contribute independently to the utility function? The latter implies that a utility function cannot be determined without knowing about all societies, which would make the concept useless. Do you believe that utility functions can be aggregated at all beyond the individual level?
You mentioned that a utility function should be seen as a proxy for decision making. If decisions can be independent, then their contributions to the definition of a utility function must be independent*. If the utility function is bounded, then the number of independent decisions an agent can decide between must also be bounded. Maybe that makes sense for individuals, since you characterized a utility function as a summary of "current" decision-making, and any individual is presumably limited in their ability to decide between independent outcomes at any given point in time. Again, though, this causes problems for aggregate utility functions.
* Consider the functor F that takes any set of decisions (with inclusion maps between them) to the least-assuming utility function consistent with them. There exists a functor G that takes any utility function to the maximal set of decisions derivable from it. F and G together form a contravariant adjunction between sets of decisions and utility functions. F is then left-adjoint to G. Therefore F sends finite coproducts to finite products. Therefore for any independent sets of decisions A,B and their union A+B, the least-assuming utility function defined over them exists and is F(A+B)=F(A)*F(B).
It seems like nonsense to say that utility functions can't be aggregated. A model of arbitrary decision making shouldn't suddenly become impossible just because you're trying to model, say, three individuals rather than one. The aggregate has preferential decision making just like the individual.
↑ comment by entirelyuseless · 2017-06-29T13:40:20.148Z · LW(p) · GW(p)
If Bob wouldn't bet the house on any low probability scenario, that means Bob doesn't assign high utility to anything (because assigning utility is just a way to encode Bob's decisions), so the thought experiment is impossible for Bob to begin with.
This is right, and proves conclusively that all humans have bounded utility, because no human would accept any bet with e.g. 1 in Graham's number odds of success, or if they did, it would not be for the sake of that utility, but for the sake of something else like proving to people that they have consistent principles.
Replies from: cousin_it
↑ comment by cousin_it · 2017-06-29T14:00:39.548Z · LW(p) · GW(p)
"Proves conclusively" is a bit too strong. The conclusion relies on human intuitions about large numbers, and intuitions about what's imaginable and what isn't, both of which seem unreliable to me. I think it's possible (>1%) that the utility function of reasonably defined CEV will be unbounded.
↑ comment by Dagon · 2017-06-28T16:08:11.355Z · LW(p) · GW(p)
Agreed. Utility is a flow, not a stock - it doesn't carry over from decision to decision, so you can't "lose" utility, you just find yourself in a state that has lower utility than the alternative you were considering. And there's no reason it can't be negative (though there's no reason for it to be - it can safely be normalized to whatever range you prefer).
Either of these makes the Kelly strategy, whose point is to minimize the chance of going broke and being barred from future wagers, irrelevant.
When talking about wagers, you really need to think in terms of potential future universe states, and a corresponding (individual, marginal) function to compare the states against each other. The result of that function is called "utility". All it does is assign a desirability number to a state of the universe for that actor.
Attempts to treat utility as an actual resource in and of itself are just confused.
So, change the problem to make it meaningful: say you're wagering remaining days of life, which your utility function is linear in (at the granularity we're discussing). Then Kelly is the clear strategy. You want to maximize the sum of results while minimizing the chance that you cross to zero and have to stop playing.
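For reference, in the simplest binary version of such a wager (win probability p, payout of b times the stake on a win, lose the stake otherwise; a stylized setup, not necessarily the one in the original post), the Kelly prescription is to stake the fraction

\[
f^{*} \;=\; \frac{b\,p - (1 - p)}{b} \qquad \bigl(= 2p - 1 \ \text{for an even-money bet, } b = 1\bigr),
\]

which maximizes the expected logarithm of the remaining stake; any fixed fraction below 1 avoids the absorbing barrier, whereas staking everything each round hits zero with probability approaching 1 as play continues.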
Replies from: sen
↑ comment by sen · 2017-06-28T16:37:29.974Z · LW(p) · GW(p)
Dagon: You can artificially bound utility to some arbitrarily low "bankruptcy" point. The lack of a natural one isn't relevant to the question of whether a utility function makes sense here. On treating utility as a resource, if you can make decisions to increase or decrease utility, then you can play the game. Your basic assumption seems to be that people can't meaningfully make decisions that change utility, at which point there is no point in measuring it, as there's nothing anyone can do about it.
I believe the point about unintuitive high utilities and upper-bounded utilities deserves another post.
comment by MrMind · 2017-06-29T07:55:17.175Z · LW(p) · GW(p)
Evolution works as a justification for contradictory phenomena.
Such as...?
The second set of explanations are simpler.
Not for any sensible definition of the word "simpler". They just overfit everything.
The second set of explanations have perfect sensitivity, specificity, precision, etc.
Yes, but zero prediction or compression power.
Also it's unclear to me what the connection is between this part and the second.
It's more honest, more informative
Again, not informative for any sensible definition of the word.
Replies from: sen, sen, TheAncientGeek
↑ comment by sen · 2017-06-29T17:34:57.855Z · LW(p) · GW(p)
Also it's unclear to me what the connection is between this part and the second.
My bad, I did a poor job explaining that. The first part is about the problems of using generic words (evolution) with fuzzy decompositions (mates, predators, etc) to come to conclusions, which can often be incorrect. The second part is about decomposing those generic words into their implied structure, and matching that structure to problems in order to get a more reliable fit.
I don't believe that "I don't know" is a good answer, even if it's often the correct one. People have vague intuitions regarding phenomena, and wouldn't it be nice if they could apply those intuitions reliably? That requires a mapping from the intuition (evolution is responsible) to the problem, and the mapping can only be made reliable once the intuition has been properly decomposed into its implied structure, and even then, only if the mapping is based on the decomposition.
I started off by trying to explain all of that, but realized that there is far too much when starting from scratch. Maybe someday I'll be able to write that post...
↑ comment by sen · 2017-06-29T17:25:32.380Z · LW(p) · GW(p)
The cell example is a case of evolution being used to justify contradictory phenomena: literally the exact same justification is used to reach two opposing conclusions. If you thought there was nothing wrong with those two examples being used as they were, then there is something wrong with your model.
The second set of explanations have fewer, more reliably-determinable dependencies, and their reasoning is more generally applicable.
That is correct, they have zero prediction and compression power. I would argue that the same can be said of many cases where people misuse evolution as an explanation.
When people falsely pretend to have knowledge of some underlying structure or correlate, they are (1) lying and (2) increasing noise, which by various definitions is negative information. When people use evolution as an explanation in cases where it does not align with the implications of evolution, they are doing so under a false pretense. My suggested approach (1) is honest and (2) conveys information about the lack of known underlying structure or correlate.
I don't know what you mean by "sensible definition". I have a model for that phrase, and yours doesn't seem to align with mine.
↑ comment by TheAncientGeek · 2017-06-29T10:14:36.682Z · LW(p) · GW(p)
Also it's unclear to me what the connection is between this part and the second.
Seconded.
comment by entirelyuseless · 2017-06-28T13:53:11.814Z · LW(p) · GW(p)
My utility function is bounded. This means that your assumption "that utility can reliably be increased and decreased by specific quantities" is sometimes false for my function. It will depend on the details but in some cases this means that I should use the Kelly strategy even though I am optimizing for a utility function.
The fact that the function is bounded also explains cousin_it's point that "it's very hard to imagine a situation that would correspond to high utility in human decision-making."