Measures, Risk, Death, and War
post by Vaniver · 2011-12-20T23:37:11.658Z
This is the fourth post of a sequence on decision analysis, preceded by Compressing Reality to Math. It touches on a wide variety of topics which didn't seem to work well as posts of their own, either because they were too short or too long.
Measures over Prospects
So far, we've looked at distinct prospects: different toys, different activities, different life experiences. Those are difficult to compare, and we might actually be unsure about the ordering of some of them. Would I prefer playing Go or chatting more? It takes a bit of effort and imagination to say.
Oftentimes, though, we face prospects that are measured in the same units. Would you prefer having $10 to having $4? There's no effort or imagination necessary: the answer is yes.1 When facing wildly different prospects- a vacation to the Bahamas, a new computer, a raise at work- it can be helpful to try to reduce them to common units, so that preferences are easy to calculate. This is especially true if the prospects are fungible: you could sell your new computer at some time cost to receive a dollar amount, or buy one from a store at some dollar cost. It doesn't make sense to value winning a computer at more than the cost to buy one (or the gain from selling one, if that happens to be higher), even if you value the computer itself far above its price.2
As always, adding uncertainty makes things interesting: would you prefer having {.5 $10, .5 $0} or {1 $4}?3 The answer depends on your circumstances: if lunch costs $3 and you get $10 worth of value out of eating lunch (the first time), then the certain deal is probably better. If these are your marginal investment dollars, though, a 20% expected return is probably worth jumping on.
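The two deals can be compared numerically; a minimal sketch, representing a deal as a list of (probability, payoff) pairs (an illustrative representation, not anything canonical):

```python
def expected_value(deal):
    """Probability-weighted average payoff of a deal."""
    return sum(p * x for p, x in deal)

risky = [(0.5, 10.0), (0.5, 0.0)]   # {.5 $10, .5 $0}
certain = [(1.0, 4.0)]              # {1 $4}

print(expected_value(risky), expected_value(certain))  # 5.0 4.0
```

The risky deal has the higher expected value, but as the lunch example shows, expected dollars alone don't settle the question.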
When dealing with a complicated problem with lots of dollar prospects, we could express each one as the certain equivalent of a deal between the highest and lowest dollar prospects. If we aren't great at eliciting preferences, though, we might end up with weird results we don't really agree with, and adding a new dollar amount requires eliciting a new preference probability.
An alternative is to come up with a function that maps the prospects to preference probabilities. The function can be fit with only a few elicited parameters, and then just evaluated for every prospect, making large problems and adding new prospects easy.
As you've probably guessed, that function is called a utility function.4 I haven't brought it up before now because it's not necessary,5 though a useful computational trick, and it's dangerous to think of utilities as measurable numbers out there, rather than expressions of an individual's preferences.
Risk Aversion
The primary information encoded by a utility function over one type of prospect is risk sensitivity. We can divide risk sensitivity into risk-averse, risk-neutral, and risk-loving (also called risk-affine)- basically, whether the utility function is concave, flat, or convex.
[Figure: utility curves for the averse, neutral, and loving cases. Images from Wikipedia.]
In the risk-averse case, you essentially take some function of the variance off of the expected value of each deal. A risk averse person might rather have 9±1 than 10±10. Notice that which one they prefer depends on how curved their utility function is, i.e. how much penalty they charge risk. In the risk neutral case, variance is simply unimportant- you only decide based on expected values. In the risk-loving case, you add some function of the variance to the expected value- a risk affine person might prefer 9±10 to 10±1.
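The three attitudes can be illustrated with toy utility functions- here square root, identity, and square stand in for concave, linear, and convex. I'm reading "9±1" as a 50/50 deal over {8, 10} and "10±10" as a 50/50 deal over {0, 20}, an illustrative assumption, since ± elides the actual distribution:

```python
import math

def expected_utility(deal, u):
    """Probability-weighted utility of a deal of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in deal)

deal_a = [(0.5, 8.0), (0.5, 10.0)]   # 9 ± 1
deal_b = [(0.5, 0.0), (0.5, 20.0)]   # 10 ± 10

attitudes = [
    ("averse", math.sqrt),     # concave
    ("neutral", lambda x: x),  # linear
    ("loving", lambda x: x*x), # convex
]

for name, u in attitudes:
    pick = "9±1" if expected_utility(deal_a, u) > expected_utility(deal_b, u) else "10±10"
    print(f"risk-{name} prefers {pick}")
```

The concave agent takes the lower-variance deal despite its lower mean; the linear and convex agents take the higher mean.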
Globally risk-loving people are hard to come by, although it's easy to imagine a utility function that's locally risk-loving (especially one that's risk-loving up to a certain point). True risk neutrality is also hard to come by- typically, if you multiply the scale by 10 enough times someone becomes risk-averse. Local risk neutrality, though, is the norm- zoom in on any utility function close enough and it'll be roughly flat.
So the utility functions we'll look at will be concave. Log is a common choice, but is sometimes awkward in that log(0) is negative infinity and log(x) grows without bound- it's unbounded both below and above. The exponential form u(x) = 1 - exp(-x) is better behaved: u(0) = 0 and u(x) approaches 1 as x grows, so it's bounded both above and below. It also follows what's called the Delta property: if we add a constant amount to every prospect, our behavior doesn't change.6 The irrelevance of 'money in the bank' is sometimes sensible, but sometimes not- if we re-examine the earlier deal of ({$10, .5; $0, .5} or {$4, 1}) and add $3 to every prospect, giving ({$13, .5; $3, .5} or {$7, 1}), the investor will just raise his price by $3, whereas the lunch-buyer might switch from the second choice to the first. Thinking about the Delta property- as well as risk premiums (how much would you pay to narrow an outcome's uncertainty?)- helps determine whether you should use a linear, log, or exponential utility function.
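The Delta property can be checked numerically. This sketch assumes a risk tolerance RHO = 10 for the exponential utility and a shifted log form log(x + 1), both illustrative parameters: shifting every prospect by $3 moves the exponential certain equivalent by exactly $3, but not the log one.

```python
import math

RHO = 10.0  # assumed risk tolerance for the exponential utility

def u_exp(x):
    return 1 - math.exp(-x / RHO)

def u_exp_inv(y):
    return -RHO * math.log(1 - y)

def u_log(x):
    return math.log(x + 1)  # shifted so u_log(0) is finite

def u_log_inv(y):
    return math.exp(y) - 1

def certain_equivalent(deal, u, u_inv):
    """Sure amount with the same expected utility as the deal."""
    return u_inv(sum(p * u(x) for p, x in deal))

deal    = [(0.5, 10.0), (0.5, 0.0)]   # {$10, .5; $0, .5}
shifted = [(0.5, 13.0), (0.5, 3.0)]   # the same deal plus $3

ce1 = certain_equivalent(deal, u_exp, u_exp_inv)
ce2 = certain_equivalent(shifted, u_exp, u_exp_inv)
print(round(ce2 - ce1, 6))  # 3.0: exponential has the Delta property

ce3 = certain_equivalent(deal, u_log, u_log_inv)
ce4 = certain_equivalent(shifted, u_log, u_log_inv)
print(round(ce4 - ce3, 6))  # larger than 3: log does not
```

The log decision-maker's certain equivalent jumps by more than $3 because the extra 'money in the bank' flattens the curve where it matters.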
Micromorts
The methodology we've discussed seems like it might have trouble comparing things of wildly different value. Suppose I like reading in the park more than reading in my home, but getting to the park requires traveling, and also suppose that traveling includes some non-zero chance of death. If I had a categorical preference that ranked the continuation of my life first, I would never choose to go to the park.
But that seems far too cautious. If the difference in enjoyment were large enough - say the choice was between attending my daughter's wedding in the park and reading at home - it seems like I should accept the chance of death and travel. But perhaps that is too bold- if it were almost certain that I would die along the way, I suspect it would be wiser to not go, and others would agree with my assessment. That is, if we adopt categorical preferences (no amount of B could compensate for a reduction in A), we can construct realistic scenarios where we would make regrettable decisions.
That suggests what we need to do is make a measured tradeoff. If I have a slight preference for living at the park to living at home, and a massive preference for living at home to dying along the way, then in order to go to the park I need it to be almost certain I will arrive alive, but there is some chance of death small enough that I would be willing to accept it.
How small? The first 'small probability' that comes to mind is 1%, but that would be far, far too large. That's about 600 times riskier than skydiving. I don't expect my mind to process smaller numbers very effectively. When I think of 1 in 10,000 and 1 in 100,000, does the first feel ten times bigger?
As we just discussed, the way to deal with this sort of elicitation trouble is to turn to utility functions. Howard outlines an approach in a 1984 paper which has some sensible features. Given an exponential utility function, there is some maximum probability of death one will accept money for- in the example he gives it's about 10%, though that number will obviously vary from person to person.7 Anything riskier, and you couldn't be paid enough to accept.
Conveniently, though, there is a large "safety region" where prices are linear in the chance of death. That is, the price of an incremental risk doesn't change until the risks get rather severe. To make this easier to handle, consider a one-in-a-million chance of dying: a micromort. That's a fairly convenient unit, as many risky behaviors sit at easily imagined scales of around one micromort. For example, walking 17 miles is one micromort, so walking to a park two miles away and back (4 miles) is about 0.24 micromorts- a 2.4e-7 chance of dying. (You can calculate your baseline chance of dying here, though it should be noted that by 'baseline' they mean 'average' rather than 'without doing anything.')
How should we value that incremental risk? It depends on what utility function you want to use, and what you assume about your life. Optimistic singularitarians, for example, should need far more money to accept a chance of dying than others, because they expect their lives to be longer and better than traditional analysis would suggest; pessimistic singularitarians should need far less, because they expect their lives to be shorter or worse.8 The EPA suggests $8.24 per micromort for Americans (in 2011 dollars), but this number should vary based on age, sex, risk attitude, wealth, and other factors. Common values seem to range from $2 to $50; when I ran my numbers a while back I got about $10. If we take the EPA number, it looks like walking to the park will cost me about $2. If I would rather be at the park and $2 poorer than at home, then I should walk over there, even though it brings me a bit closer to death. When considering risky activities like skydiving, I just adjust the price upwards and decide if I would still want to do it if it cost that much extra but was safe. (For skydiving, each jump costs about $144 using the EPA micromort value.)
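The micromort arithmetic above is simple enough to script. The walking rate and dollar value are the post's figures; the per-jump skydiving risk is back-solved from the $144 figure and is therefore an assumption, not an official statistic:

```python
MICROMORT_VALUE = 8.24           # dollars per micromort (EPA, 2011)
WALK_MILES_PER_MICROMORT = 17.0  # one micromort per 17 miles walked
SKYDIVE_JUMP_MICROMORTS = 17.5   # back-solved: ~ $144 / $8.24

def walk_risk_cost(miles):
    """Dollar-valued risk of walking the given distance."""
    return miles / WALK_MILES_PER_MICROMORT * MICROMORT_VALUE

print(f"${walk_risk_cost(4.0):.2f}")  # round trip to a park two miles away: $1.94
print(f"${SKYDIVE_JUMP_MICROMORTS * MICROMORT_VALUE:.2f}")  # one jump: $144.20
```

Once a micromort has a price, a risk cost sits alongside time and gas costs in any ordinary cost-benefit comparison.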
Adversarial Decision Making
So far, we've mostly discussed decision-making under uncertainty by focusing on natural uncertainties- you're not sure if it'll rain or not, you're not sure if you'll win the lottery or not, you're not sure if you'll get involved in an accident on the way to the park or not. That's not the full picture, though: many important decisions include an adversary. Adversaries represent a special kind of uncertainty, because they react to your decisions, have their own uncertainties, and often actively want to make you worse off, rather than just not caring about your preferences.
Game Theory
Game Theory behaves a lot like the methods we've described before. Take a real situation, turn it into an action-payoff matrix, and find equilibria and mixed strategies. It's a large, rich field and I'm not going to describe how it works in detail, as there are other resources for that.
One of the pitfalls with Game Theory, though, is that it requires some strong assumptions about how your opponent makes decisions. Can you really be sure your opponent will play the game-theoretically correct strategy, or that you've determined their payoff matrix correctly?
For example, consider a game of rock-paper-scissors. Game Theory suggests a mixed strategy of throwing each possibility with 1/3 probability. When playing against Bart Simpson, you can do better. Even when playing against a normal person, there are biases you can take advantage of.
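One such exploit can be sketched as a frequency counter: track the opponent's empirical throws and counter their modal throw, falling back to the uniform 1/3 mix while data is scarce. The `respond` helper and the 5-sample threshold are my own illustrative choices:

```python
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def respond(opponent_history, min_samples=5):
    """Play the throw that beats the opponent's most common throw so far."""
    if len(opponent_history) < min_samples:
        return random.choice(list(BEATS))  # the game-theoretic mixed strategy
    modal_throw, _ = Counter(opponent_history).most_common(1)[0]
    return BEATS[modal_throw]

# Against Bart Simpson ("good old rock, nothing beats rock"):
print(respond(["rock"] * 10))  # paper
```

Against a truly uniform opponent this does no better than chance, but any persistent bias in the history gets converted into an edge.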
As another example, consider that you run a small firm that's considering entering a market dominated by a large firm. After you choose to enter or not, they can choose whether to cut prices or not. You estimate the dollar payoffs are (yours, theirs):
You see that, regardless of what you do, they earn more not having a price war, and if they don't go for a price war you would prefer entering to not entering. Indeed, (Enter, Don't) is a Nash Equilibrium. But suppose the scenario instead looked like this:
The cells of the matrix all have the same ranking- regardless of whether or not you enter, they earn more by not having a price war. But the difference is much smaller, and the loss to you for entering if they do start a price war is much higher. (This could be because the entire firm will go under if this expansion fails, rather than just losing some money.) Someone might confidently announce that they won't engage in a price war, and so you should enter- but you might want to do a little research first on how strong their preference for dollars is. They might value market share- which isn't included in this payoff matrix- much more highly. That is, the Nash Equilibrium for this matrix (which is still the same cell) might not be the Nash Equilibrium for the real-world scenario.
You can model this uncertainty about your opponent's strategy explicitly: include it as an uncertainty node that leads to several decision nodes, each operating on different preferences. You might decide, say, that you need >83% confidence that they'll behave selfishly rather than vengefully (when it comes to dollars) in the second scenario, but only >33% confidence that they'll behave selfishly rather than vengefully (when it comes to dollars) in the first scenario.
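The payoff matrices in the post are images, so the numbers below are hypothetical stand-ins chosen to reproduce the thresholds in the text. If entering earns `win` when the incumbent plays selfishly and `lose` (negative) when they retaliate with a price war, while staying out earns 0, you should enter only when your confidence p that they act selfishly satisfies p * win + (1 - p) * lose > 0:

```python
def entry_threshold(win, lose):
    """Minimum P(selfish opponent) at which entering beats staying out."""
    return -lose / (win - lose)

print(entry_threshold(2.0, -1.0))   # first scenario: ~0.333 (>33%)
print(entry_threshold(2.0, -10.0))  # second scenario: ~0.833 (>83%)
```

The bigger the downside of a price war relative to the upside of peaceful entry, the more confidence in the opponent's selfishness you need before entering.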
1. Obviously, this is not true for everything- I might prefer two apples to one apple, but a million apples might be more trouble than they're worth. (Where would I put them?) For dollars, though, more is better in a much more robust way.
2. This assumes that you'll still be able to buy the computer with whatever option you pick. If I decide to receive non-transferable tickets to a show that I enjoy at $1000 instead of a computer that costs $500 but I enjoy at $3000, and don't have the money to buy the computer, I made a mistake. But if I have at least $500 spare, I can consider myself as already having the first computer- the question is what a second one is worth. Ideally, preferences should be calculated over life experiences, not just events- your future life where you got the computer vs. your future life where you got the tickets.
3. That is, option A is $10 50% of the time and $0 the other 50% of the time. Option B is $4 every time.
4. What I described is a special case: a utility function which has a min of 0 and max of 1 on the domain of the problem. General utility functions don't have that constraint.
5. The Von Neumann-Morgenstern axioms and the 5 axioms I discussed earlier are mostly the same, and so if you have someone willing to make decisions the way I'm describing they should also be willing to construct a utility function and compute expected utilities. Indeed, the processes will be difficult to distinguish, besides vocabulary, and so this is more a statement that the word is unnecessary than that the idea is unnecessary.
6. It's so called because Δ is often used to signify a small amount- this is going through and replacing every prospect x_i with x_i + Δ.
7. Incidentally, this is one of the differences between a log utility function and an exponential utility function. Someone with a log utility function would accept an arbitrarily small chance of survival in exchange for an arbitrarily high wealth- but as wealth is practically bounded (there's only one Earth to own at present), that bounds the maximum chance of death.
8. Interestingly, cryonics doesn't seem to alter this calculation, unless you think freezing technology will rapidly improve over your lifespan. That said, instead of just tracking risk of death you also need to track risk of not being frozen soon enough, meaning cause of death is much more relevant.
Comments
comment by bentarm · 2011-12-21T00:24:34.261Z
I don't have a lot to add to the article itself, but on the subject of rock, paper, scissors, I've found it quite illuminating to play this game (I've played more than 20 games at least 4 times, and still not finished in the lead once), and quite surprising just how predictable people must be for this algorithm to be so successful.
↑ comment by Vaniver · 2011-12-21T02:49:27.314Z
I stopped at 45 rounds because I had hit a pretty 15-15-15. I think the most I was ahead was when it was about 10-7-6. I found that I could do well in the short run by swapping patterns- 'play what would have lost to what he played', and then when he picks up on that switching to what would beat him if he believes that about me. It then got harder / I stopped putting as much effort into it.
If you click on the "what he's thinking" thing, it looks like he just has a 3^8 lookup table based on the last four rounds. Given that game state, he throws against whatever the most likely human action was- which suggests it might be possible to infer that lookup table from his behavior then use it to find a stable loop you can mine (until you dominate that part of the lookup table). It would probably be unethical to write an AI to beat their AI, though, since that would be screwing with their data about humans.
↑ comment by Mass_Driver · 2011-12-24T06:22:12.949Z
As a way to conserve effort, you can just never throw rock, and try to pick scissors/paper at random. This is sufficiently unusual behavior that the 3^8 lookup table should fail enough to give you a small but stable edge. I went 10-7-7 doing this.
↑ comment by A1987dM (army1987) · 2011-12-21T18:59:11.979Z
Dammit! One more avenue for procrastination!
comment by Mass_Driver · 2011-12-24T06:25:21.669Z
Is there a site that can help me estimate my background chance of death (per hour? per day?) from sitting at home, rather than my average chance of death from normal activity? Especially for relatively safe activities like walking to the park, it seems a bit odd to add on a fee for risk based on an average background chance of death -- I would think that walking to the park carries no more than an average risk relative to my other daily activities.
comment by Brickman · 2011-12-21T04:34:43.700Z
I'm not sure it's appropriate to consider the money the average human will accept for a micromort as a value that's actually useful for making rational decisions, because that's a value that's badly skewed by irrational biases. Actions are mentally categorized into those the thinker does and doesn't believe (on a subconscious level) to possibly lead to death. I doubt the average person even considers a "risk" factor at all when driving their car or walking several blocks to the car (just a time factor and a gasoline factor), unless their trip takes them through a "bad" neighborhood, in which case they'll inflate their perceived risk severalfold without actually looking up that neighborhood's crime rates (moreso if they know someone who was hurt in a manner similar to that). They're probably quite likely to consider a "getting a ticket" risk factor, however. It's sadly true that most people believe themselves invincible and completely ignore many categories of existential risk, thinking only of the "flashier" risks and likely inflating their likelihood. And if you told someone that you would give them $100 and then use a fair RNG and shoot them either on a 1 in 10,000 or 1 in 100,000 chance, I doubt you'd get very different responses.
And I'm going to be so bold as to declare that it's impossible for ANY individual to accurately judge the relative likelihood of two things to kill you without looking it up; "which is more likely" is doable but "is it twice as likely or three times" is not.
edit: The end result of everything I just said is that the "value" being assigned to a micromort is probably more a reflection of how the EPA ran their test than what people really value; they'd get a different result evaluating people's aversion to micromorts via car crash and people's aversion to micromorts via being mugged, and either would be skewed if they first spent a half hour talking about ways to mitigate such a risk (thus reminding you it's there).
↑ comment by Vaniver · 2011-12-21T05:08:18.513Z
I'm not sure it's appropriate to consider the money the average human will accept for a micromort as a value that's actually useful for making rational decisions, because that's a value that's badly skewed by irrational biases.
Right. This is why coming up with your own value is a good thing to do. (I didn't talk much about it in the post because it's highly personalized; I didn't want to work through it for all sensible utility functions, and describe how to pick the parameters, and because I didn't want to do it for all of them I didn't want to describe it for just one, because that wouldn't be appropriate for a majority of readers, I suspect.)
Actions are mentally categorized into those the thinker does and doesn't believe (on a subconscious level) to possibly lead to death.
Yep, which can cause people to behave suboptimally. One of the main values of this sort of analysis is it gives you a "risk cost" to put together with a "time cost" and a "gasoline cost." The weekly game night that I drive to costs me $1.40 in risk, $3.20 in gas, and about $6 in time- so the risk is actually a pretty small factor there, but it could tip the scales for marginal activity. (You do need to look up the mortality numbers- which can have a non-trivial cost- but doing research when it's worth it is a part of careful decision making.)
The end result of everything I just said is that the "value" being assigned to a micromort is probably more a reflection of how the EPA ran their test than what people really value; they'd get a different result evaluating people's aversion to micromorts via car crash and people's aversion to micromorts via being mugged
I'm not sure how the EPA runs their numbers, but the way I got mine was by calculating the value of my life (on the margin). I think people can give reasonable answers for things like "how much longer would your life have to be to compensate you for a 5% decrease in consumption?", which is less subject to biases than visualizing particular causes of death.
comment by Sniffnoy · 2011-12-21T03:59:52.155Z
"Risk-affine" is a somewhat confusing term, since an affine utility function is risk-neutral; "risk-affine" means you have an affinity for risk, not that your utility function is affine. I don't suppose there's an alternative, less confusing term?
comment by taw · 2011-12-21T00:32:13.413Z
Here's fundamental impossibility result for modeling risk aversion in expected utility framework.
Utility functions are a wrong abstraction, and you'll be better off if you abandon them.
↑ comment by Vaniver · 2011-12-21T02:53:29.999Z
Here's fundamental impossibility result for modeling risk aversion in expected utility framework.
I'm not sure we're reading the same paper. Rabin argues that people are (should be) roughly risk-neutral when stakes are small, as massively concave utility functions get ridiculous- which is what I argue:
Local risk neutrality, though, is the norm- zoom in on any utility function close enough and it'll be roughly flat.
The meat of the paper also rests on a very strong assumption: that the person rejects the gamble at any wealth level. He discusses a narrower case (what I would call my "lunch" case) where you know they reject the value at anything below a certain point, but nothing about their risk attitude above that point. In my example, that would be choosing not to gamble (for a small yield) if it puts you under $3. For his example, the threshold is rather high: $350k. He calculates that someone who turns down a gamble that replaces their wealth of {1 340,000} with {.5 339,900, .5 340,105} is insanely cautious. I agree- I don't expect a sane person to behave that way. That's not an indictment of expected utility theory, that's an indictment of the parameters chosen.
When I help someone pick out a function to model their preferences, I don't elicit it the way he does. We pick some gambles that are easy to wrap their head around, find indifference values, fit it to a function like log or exponential, and then sanity check the output. If we got values like the ones he's getting from a fitted function, I would suspect they miscalculated their indifference values and we would play around some more, possibly adding thresholds and making it a piecewise function. It's not so much a "fundamental impossibility result" as it is "if things look like this, you're not doing anything useful."
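That elicit-then-fit loop can be sketched numerically. This sketch assumes an exponential utility 1 - exp(-x/rho) and a single hypothetical elicited value- a $40 certain equivalent for a 50/50 gamble over $0 and $100- and bisects for the implied risk tolerance; a real fit would use several such points plus a sanity check of the outputs:

```python
import math

def ce_of_gamble(rho, lo=0.0, hi=100.0):
    """Certain equivalent of a 50/50 gamble under u(x) = 1 - exp(-x/rho)."""
    eu = 0.5 * (1 - math.exp(-lo / rho)) + 0.5 * (1 - math.exp(-hi / rho))
    return -rho * math.log(1 - eu)

def fit_rho(elicited_ce, lo=1e-3, hi=1e6, iters=200):
    """Bisect for the risk tolerance; ce_of_gamble is increasing in rho."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ce_of_gamble(mid) < elicited_ce:
            lo = mid   # certain equivalent too low: raise the tolerance
        else:
            hi = mid
    return 0.5 * (lo + hi)

rho = fit_rho(40.0)
print(round(ce_of_gamble(rho), 3))  # 40.0, recovering the elicited value
```

If the fitted rho implies absurd answers elsewhere on the scale (as in Rabin's examples), that's the cue to re-elicit or go piecewise rather than to abandon the framework.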
(There's a separate, descriptive question- "is a EU calculation with a consistent utility function why people refuse modest gambles?"- which I think is secondary. They might refuse a gamble because they're bad at math, or they have a massive case of status quo bias, or so on. I don't think we should care much about predicting that sort of behavior compared to prescribing carefully planned behavior.)
Utility functions are a wrong abstraction, and you'll be better off if you abandon them.
I'm not sure what you mean here, so I'll state my reaction to some possible meanings. I affirm that utility functions are a calculation method useful for capturing risk attitudes but shouldn't be given philosophical importance. I deny that utility functions cannot be a useful calculation method.