Comments

Comment by bill on Techniques for probability estimates · 2011-01-05T01:31:30.063Z · LW · GW

Spetzler and Stael von Holstein (1975) describe a variation of Bet On It that doesn't require risk neutrality.

Say we are going to flip a thumbtack, and it can land heads (so you can see the head of the tack), or tails (so that the point sticks up like a tail). If we want to assess your probability of heads, we can construct two deals.

Deal 1: You win $10,000 if we flip the thumbtack and it comes up heads ($0 otherwise; you won't lose anything).

Deal 2: You win $10,000 if we spin a roulette-like wheel labeled with the numbers 1, 2, 3, ..., 100, and the wheel comes up between 1 and 50 ($0 otherwise; you won't lose anything).

Which deal would you prefer? If you prefer deal 1, you are assessing a probability of heads greater than 50%; if you prefer deal 2, you are assessing a probability less than 50%.

Then repeat the question, using a different number than 50 for deal 2. For example, if you first said you would prefer deal 2, change it to winning on 1-25 instead and see if you still prefer deal 2. Keep adjusting until you are indifferent between deal 1 and deal 2. If you are indifferent when deal 2 wins on 1-37, then you have assessed a probability of 37%.

The above describes one procedure used by professional decision analysts; they usually use a physical wheel with a "winning area" that can be adjusted continuously, rather than the numbered wheel described above.
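
A minimal sketch of this adjustment procedure in Python, assuming the respondent can be modeled as answering from some fixed (unstated) probability; the function names and the bisection search are illustrative, not part of the original procedure:

```python
def elicit_probability(prefers_wheel, lo=0, hi=100, tol=1):
    """Narrow the wheel's winning range until the respondent is indifferent.

    prefers_wheel(k) should return True if the respondent prefers
    "win $10,000 if the wheel lands in 1..k" to "win $10,000 on heads".
    Returns the assessed probability of heads.
    """
    while hi - lo > tol:
        k = (lo + hi) // 2
        if prefers_wheel(k):
            hi = k   # wheel deal preferred: probability of heads is below k%
        else:
            lo = k   # thumbtack deal preferred: probability of heads is above k%
    return (lo + hi) / 2 / 100

# Example: a respondent whose implicit probability of heads is 37%.
respondent = lambda k: k / 100 > 0.37
print(elicit_probability(respondent))   # ~0.37
```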

Comment by bill on What Cost for Irrationality? · 2010-07-02T15:24:25.578Z · LW · GW

I read somewhere that the reason we don't see these people is that they all immediately go to Vegas, where they can easily acquire as many positive value deals as they want.

Comment by bill on The Price of Life · 2010-03-21T00:43:30.323Z · LW · GW

Here is a simple way to assess your value-of-life (from an article by Howard).

Imagine you have a deadly disease, certain to kill you. The doctor tells you that there is one cure, it works perfectly, and costs you nothing. However, it is very painful, like having wisdom teeth pulled continuously for 24 hours without anesthetic.

However, the doctor says there is one other possible solution. It is experimental, but also certain to work. However, it isn’t free. “How much is it?” you ask. “I forgot,” says the doctor. “So, you write down the most you would pay, I’ll find out the cost, and if the cost is less than you are willing to pay, I’ll sign you up for the treatment. Otherwise, I’ll sign you up for the painful procedure.” What do you write down? Call that dollar amount X. For example, you might decide that you wouldn’t pay more than $50,000.

Now scratch the above paragraph; actually the treatment is free. However, it isn’t perfectly effective. It always cures the disease, but there is a small chance that it will kill you. “What is the chance?” you ask. “I forgot,” says the doctor. “So, you write down the largest risk of death you are willing to take, I’ll find out the risk, and if the risk is less than you are willing to take, I’ll sign you up for the treatment. Otherwise, I’ll sign you up for the painful procedure.” What do you write down? Call that probability Y. For example, you might decide that you aren’t willing to take more than a half-percent chance of death to avoid the pain.

Now you've established that the pain is equivalent to a loss of $X, and also to a probability Y of death. Transitivity implies that a loss of $X is equivalent to a probability Y of death. Divide X by Y and you have your value-of-life. Above, $50K / 0.5% = $10M value-of-life.

If you want, you can divide by one million and get a dollar cost for a one-in-a-million chance of death (called a micromort). For example, my micromort value is $12 for small risks (larger risks are of course different; you can’t kill me for $12M). I use this value to make health and safety decisions.
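
A quick sketch of the arithmetic above (the numbers are the illustrative ones from the example, not recommendations):

```python
max_payment = 50_000     # X: the most you would pay to avoid the painful cure
max_death_risk = 0.005   # Y: the largest death risk you would accept to avoid it

value_of_life = max_payment / max_death_risk   # $10,000,000
micromort_value = value_of_life / 1_000_000    # $10 per one-in-a-million risk of death

print(value_of_life, micromort_value)
```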

Comment by bill on What is Bayesianism? · 2010-02-28T01:25:08.183Z · LW · GW

If it helps, I think this is an example of a problem where frequentist and Bayesian methods give different answers to the same question. It's from Jaynes; see http://bayes.wustl.edu/etj/articles/confidence.pdf , page 22, for the details, and please let me know if I've erred or misinterpreted the example.

Three identical components. You run them through a reliability test and they fail at times 12, 14, and 16 hours. You know these components fail in a particular way: each is guaranteed to last at least X hours, and the additional lifetime beyond X is exponentially distributed with a mean of one hour. What is the shortest 90% confidence interval / probability interval for X, the time of guaranteed safe operation?

Frequentist 90% confidence interval: 12.1 hours - 13.8 hours

Bayesian 90% probability interval: 11.2 hours - 12.0 hours

Note: the frequentist interval has the strange property that we know for sure it does not contain X (from the data we know that X <= 12). The Bayesian interval seems to match our common sense better.
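
A sketch of the Bayesian calculation, assuming a flat prior on X; this reproduces the interval quoted above (see the linked Jaynes paper for the full derivation, including the frequentist side):

```python
import math

times = [12, 14, 16]          # observed failure times (hours)
n, t_min = len(times), min(times)

# Each lifetime is X plus an Exponential(mean 1 hour) term, so the likelihood of X
# is proportional to exp(n * X) for X <= t_min and zero otherwise.  With a flat
# prior, the shortest 90% posterior interval runs up to t_min, with its lower end
# where only 10% of the posterior mass remains below.
lower = t_min + math.log(0.10) / n
print(f"shortest 90% interval: ({lower:.2f}, {t_min:.2f}] hours")   # (11.23, 12.00]
```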

Comment by bill on Winning the Unwinnable · 2010-01-21T14:25:34.828Z · LW · GW

Logarithmic u-functions have an uncomfortable requirement: you must be indifferent between your current wealth and a 50-50 shot at doubling or halving it (e.g. doubling or halving every paycheck/payment you get for the rest of your life). Most people I know don't like that deal.
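
That requirement is a one-line consequence of taking expected log-wealth (writing W for current wealth):

```latex
\tfrac{1}{2}\ln(2W) + \tfrac{1}{2}\ln\!\left(\tfrac{W}{2}\right)
  \;=\; \ln W + \tfrac{1}{2}\ln 2 - \tfrac{1}{2}\ln 2
  \;=\; \ln W ,
```

so under u(W) = ln(W) the double-or-half gamble has exactly the same expected utility as staying put.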

Comment by bill on We're in danger. I must tell the others... · 2009-10-14T00:53:17.911Z · LW · GW

A similar but different method is calculating your "perfect life probability" (from Howard).

Let A be a "perfect" life in terms of health and wealth. Say $2M per year, living to 120 and being a perfectly healthy 120-year-old when you instantly and painlessly die.

Let B be your current life.

Let C be instant, painless death right now.

What probability of A versus C makes you indifferent between that deal and B for sure? That is your "perfect life probability" or "PLP." This is a numerical answer to the question "How are you doing today?" For example, mine is 93% right now, as I would be indifferent between B for sure and a deal with a 93% chance of A and 7% chance of C.

Note that almost anything that happens to you on any particular day would not change your PLP that much. Specifically, adding a small risk to your life certainly won't make that much of a difference.

(I'm not sure how immortality or other extreme versions of "perfect health" would change this story.)
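
A minimal sketch of that last point, assuming the usual substitution of one indifferent lottery for another (the 93% and the one-in-a-million risk are just the illustrative numbers above):

```python
plp = 0.93          # indifferent between current life and (93% A, 7% C)
added_risk = 1e-6   # an extra one-in-a-million chance of immediate death

# Life with the extra risk is the lottery: (1 - added_risk) of current life, added_risk of C.
# Substituting (plp of A, 1 - plp of C) for "current life" gives the new indifference probability.
new_plp = plp * (1 - added_risk)
print(new_plp)      # 0.92999907 -- essentially unchanged
```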

Comment by bill on Shut Up And Guess · 2009-07-22T21:22:43.850Z · LW · GW

Some students started putting zeros on the first assignment or two. However, all they needed was to see a few people get nailed putting 0.001 on the right answer (usually on the famous boy-girl probability problem) and people tended to start spreading their probability assignments. Some people never learn, though, so once in a while people would fail. I can only remember three in eight years.

My professor ran a professional course like this. One year, one of the attendees put 100% on every question on every assignment, and got every single answer correct. The next year, someone attended from the same company, and decided he was going to do the same thing. Quite early, he got minus infinity. My professor's response? "They both should be fired."

Comment by bill on Shut Up And Guess · 2009-07-21T14:48:57.003Z · LW · GW

I've given those kinds of tests in my decision analysis and my probabilistic analysis courses (for the multiple choice questions). Four choices, logarithmic scoring rule, 100% on the correct answer gives 1 point, 25% on the correct answer gives zero points, and 0% on the correct answer gives negative infinity.

Some students loved it. Some hated it. Many hated it until they realized that e.g. they didn't need 90% of the points to get an A (I was generous on the points-to-grades part of grading).

I did have to be careful; minus infinity meant that one question could fail you for the whole class. I had to be sure it wasn't a mistake, and that the student actually meant to put a zero on the correct answer.

If you want to try, you might want to try the Brier scoring rule instead of the logarithmic; it has a similar flavor without the minus infinity hassle.
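
A sketch of both rules for a four-choice question, as I read the description above; the logarithmic scaling (1 for certainty on the truth, 0 for a uniform 25%, minus infinity for a zero on the truth) is from the comment, while the particular 0-to-1 scaling of the Brier version is my own assumption:

```python
import math

def log_score(probs, correct):
    """1 point for 100% on the truth, 0 for 25%, -infinity for 0% (four options)."""
    p = probs[correct]
    return 1 + math.log(p, 4) if p > 0 else float("-inf")

def brier_score(probs, correct):
    """Quadratic rule rescaled so 1 is a perfect answer and 0 is the worst possible."""
    outcome = [1.0 if i == correct else 0.0 for i in range(len(probs))]
    return 1 - sum((p - o) ** 2 for p, o in zip(probs, outcome)) / 2

answer = [0.7, 0.1, 0.1, 0.1]       # a student's probabilities over choices A-D
print(log_score(answer, 0))         # ~0.74
print(brier_score(answer, 0))       # 0.94
print(log_score([0, 1, 0, 0], 0))   # -inf: all-in on the wrong choice
```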

Comment by bill on Post Your Utility Function · 2009-06-07T16:24:55.136Z · LW · GW

When I teach decision analysis, I don't use the word "utility" for exactly this reason. I separate the "value model" from the "u-curve."

The value model is what translates all the possible outcomes of the world into a number representing value. For example, a business decision analysis might have inputs like volume, price, margin, development costs, etc., and the value model would translate all of those into NPV.

You only use the u-curve when uncertainty is involved. For example, distributions on the inputs lead to a distribution on NPV, and the u-curve would determine how to assign a value that represents the distribution. Some companies are more risk averse than others, so they would value the same distribution on NPV differently.

Without a u-curve, you can't make decisions under uncertainty. If all you have is a value model, then you can't decide e.g. if you would like a deal with a 50-50 shot at winning $100 vs losing $50. That depends on risk aversion, which is encoded into a u-curve, not a value model.

Does this make sense?

Comment by bill on Post Your Utility Function · 2009-06-07T16:12:29.689Z · LW · GW

If you wanted to, we could assess at least a part of your u-curve. That might show you why it isn't an impossibility, and show what it means to test it by intuitions.

Would you, right now, accept a deal with a 50-50 chance of winning $100 versus losing $50?

If you answer yes, then we know something about your u-curve. For example, over a range at least as large as -$50 to $100, it can be approximated by an exponential curve with a risk tolerance parameter greater than 100 (if your risk tolerance were less than 100, you wouldn't accept the above deal).

Here, I have assessed something about your u-curve by asking you a question that it seems fairly easy to answer. That's all I mean by "testing against intuitions." By asking a series of similar questions I can assess your u-curve over whatever range you would like.

You also might want to do calculations: for example, $10K per year forever is worth around $300K. Thinking about losing or gaining $10K per year for the rest of your life might be easier than thinking about gaining or losing $200-300K.
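
A sketch of both calculations, under an assumed exponential u-curve u(x) = 1 - exp(-x/R) and an assumed 3.3% discount rate for the perpetuity (both parameters are illustrative):

```python
import math

def accepts_deal(risk_tolerance, win=100, lose=50):
    """Does a 50-50 win-$100 / lose-$50 deal beat doing nothing for this u-curve?"""
    u = lambda x: 1 - math.exp(-x / risk_tolerance)
    return 0.5 * u(win) + 0.5 * u(-lose) > u(0)

for r in (80, 100, 104, 150, 400):
    print(r, accepts_deal(r))    # rejects below roughly r = 104, accepts above

# "$10K per year forever is worth around $300K": a perpetuity at about 3.3%.
print(10_000 / 0.033)            # ~303,000
```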

Comment by bill on Post Your Utility Function · 2009-06-06T23:53:15.073Z · LW · GW

An example of the "unappealingness" of constant absolute risk aversion: say my u-curve were u(x) = 1 - exp(-x/400K) over all ranges. What is my value for a 50-50 shot at $10M (versus $0)?

Answer: around $277K. (Note that it is essentially the same for a 50-50 shot at $100M.)

Given the choice, I would certainly choose a 50-50 shot at $10M over $277K. This is why over larger ranges, I don't use an exponential u-curve.

However, it is a good approximation over a range that contains almost all the decisions I have to make. Only for huge decisions do I need to drag out a more complicated u-curve, and those are rare.
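
A sketch of the $277K figure under exactly the u-curve stated above (the certain equivalent is the sure amount whose utility equals the deal's expected utility):

```python
import math

R = 400_000                                             # risk tolerance, dollars
u = lambda x: 1 - math.exp(-x / R)                      # exponential u-curve
certain_equivalent = lambda eu: -R * math.log(1 - eu)   # invert u

for prize in (10_000_000, 100_000_000):
    expected_u = 0.5 * u(prize) + 0.5 * u(0)            # 50-50 shot at the prize vs nothing
    print(prize, round(certain_equivalent(expected_u)))  # both ~277,259 = R * ln(2)
```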

Comment by bill on Post Your Utility Function · 2009-06-06T23:41:47.230Z · LW · GW

As I said in my original post, for larger ranges, I like logarithmic-type u-curves better than exponential, esp. for gains. The problem with e.g. u(x)=ln(x) where x is your total wealth is that you must be indifferent between your current wealth and a 50-50 shot of doubling vs. halving your wealth. I don't like that deal, so I must not have that curve.

Note that a logarithmic curve can be approximated by a straight line for some small range around your current wealth. It can also be approximated by an exponential for a larger range. So even if I were purely logarithmic, I would still act risk neutral for small deals and would act exponential for somewhat larger deals. Only for very large deals indeed would you be able to identify that I was really logarithmic.
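
One way to make those approximations precise is through the Arrow-Pratt risk tolerance; a quick check for u(w) = ln(w) at current wealth W:

```latex
R(w) \;=\; -\,\frac{u'(w)}{u''(w)} \;=\; -\,\frac{1/w}{-1/w^{2}} \;=\; w ,
```

so near wealth W a logarithmic decision maker behaves like an exponential one with risk tolerance roughly W, and, for stakes that are small relative to W, like a risk-neutral one.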

Comment by bill on Post Your Utility Function · 2009-06-06T19:13:45.995Z · LW · GW

For the specific quote: I know that, for a small enough change in wealth, I don't need to re-evaluate all the deals I own. They all remain pretty much the same. For example, if you told me I had $100 more in my bank account, I would be happy, but it wouldn't significantly change any of my decisions involving risk. For a utility curve over money, you can prove that this property implies an exponential curve. Intuitively, some range of my utility curve can be approximated by an exponential curve.
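
The result being invoked is what decision analysts call the delta property; stated as a sketch (with u taken up to a positive affine transformation):

```latex
\mathrm{CE}(\tilde{x} + \delta) \;=\; \mathrm{CE}(\tilde{x}) + \delta
\quad \text{for all lotteries } \tilde{x} \text{ and amounts } \delta
\;\;\Longrightarrow\;\;
u(x) = x \;\text{ or }\; u(x) = 1 - e^{-x/\rho}.
```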

Now that I know it is exponential over some range, I needed to figure out which exponential it is and over what range it applies. I assessed for myself that I am indifferent between having and not having a deal with a 50-50 chance of winning $400K and losing $200K. The way I thought about that was in terms of decisions around job hunting and whether or not to take job offers with different salaries.

If that is true, you can combine it with the above and show that the exponential curve should look like u(x) = 1 - exp(-x/400K). Testing it against my intuitions, I find it an okay approximation between minus $200K and $400K. Outside that range, I need better approximations (e.g. if you try it out on a 50-50 shot at $10M, it gives ridiculous answers).

Does this make sense?

Comment by bill on Post Your Utility Function · 2009-06-04T05:58:44.713Z · LW · GW

Here's one data point. Some guidelines have been helpful for me when thinking about my utility curve over dollars. This has been helpful to me in business and medical decisions. It would also work, I think, for things that you can treat as equivalent to money (e.g. willingness-to-pay or willingness-to-be-paid).

  1. Over a small range, I am approximately risk neutral. For example, a 50-50 shot at $1 is worth just about $0.50, since the range we are talking about is only between $0 and $1. One way to think about this is that, over a small enough range, there isn't much practical difference between a curve and a straight line approximating that curve. Over the range -$10K to +$20K, I am approximately risk neutral.

  2. Over a larger range, my utility curve is approximately exponential. For me, between -$200K and +$400K, my utility curve is fairly close to u(x) = 1 - exp (-x/400K). The reason is that, for me, changing my wealth by a relatively small amount won't radically change my risk preference, and that implies an exponential curve. Give me $1M and my risk preferences might change, but within the above range, I pretty much would make the same decisions.

  3. Outside that range, it gets more complicated than I think I should go into here. In short, I am close to logarithmic for gains and exponential for losses, with many caveats and concerns (e.g. avoiding the zero illusion: my utility curve should not have any sort of "inflection point" around my current wealth; there's nothing special about that particular wealth level).

(1) and (2) can be summarized with one number, my risk tolerance of $400K. One way to assess this for yourself is to ask "Would I like a deal with a 50-50 shot at winning $X versus losing $X/2?" The X that makes you indifferent between having the deal and not having the deal is approximately your risk tolerance. I recommend acting risk neutral for deals between minus $X/40 and $X/20, and using an exponential utility function between minus $X/2 and $X. If the numbers get too large, thinking about them in dollars per year instead of total dollars sometimes helps. For example, $400K seems large, but $20K per year forever may be easier to think about.
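
A sketch of why the indifference point is only approximately the risk tolerance: solving the exact indifference condition for the exponential u-curve by bisection (the bracketing interval is an assumption that happens to contain the root):

```python
import math

def risk_tolerance_from_indifference(x):
    """Solve 0.5*u(x) + 0.5*u(-x/2) = u(0) for u(w) = 1 - exp(-w/R)."""
    f = lambda r: 0.5 * (1 - math.exp(-x / r)) + 0.5 * (1 - math.exp(x / (2 * r)))
    lo, hi = x / 2, 10 * x         # f < 0 at lo, f > 0 at hi
    for _ in range(60):            # plain bisection
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

print(risk_tolerance_from_indifference(400_000))   # ~415,600: within about 4% of X
```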

Long, incomplete answer, but I hope it helps.

Comment by bill on Generalizing From One Example · 2009-04-29T15:11:58.781Z · LW · GW

When I've taught ethics in the past, we always discuss the Nazi era. Not because the Nazis acted unethically, but because of how everyone else acted.

For example, we read about the vans that carried Jewish prisoners, whose exhaust systems were designed to empty into the van. The point is not how awful that is, but that there must have been an engineer somewhere who figured out the best way to design and build such a thing. And that engineer wasn't a Nazi soldier; he or she was probably no different from anyone else at that time, with kids and a family and friends and so on. Not an evil scientist in a lab, but just a design engineer in a corporation.

One point of the discussion is that "normal" people have acted quite unethically in the past, and that we need to ask how to keep that from happening to us.

Comment by bill on Generalizing From One Example · 2009-04-29T03:20:47.869Z · LW · GW

Interesting illustration of mental imagery (from Dennett):

Picture a 3 by 3 grid. Then picture the words "gas", "oil", and "dry" spelled downwards in the columns left to right in that order. Looking at the picture in your mind, read the words across on the grid.

I can figure out what the words are of course, but it is very hard for me to read them off the grid. I should be able to if I could actually picture it. It was fascinating for me to think that this isn't true for everyone.

Comment by bill on The uniquely awful example of theism · 2009-04-10T16:20:37.707Z · LW · GW

Intelligent theists who commit to rationality also seem to say that their "revelatory experience" is less robust than scientific, historical, or logical knowledge/experience.

For example, if they interpret their revelation to say that God created all animal species separately, and scientific evidence then proves beyond reasonable doubt that this is untrue, they conclude that they must have misinterpreted their revelatory experience (I believe this is the Catholic Church's current position, for example). Similarly, if their interpretation of their revelation contradicts a logical argument, logic wins over revelation.

This seems consistent with the idea that they have had a strange experience that they are trying to incorporate into their other experience.

For me personally, I have a hard time imagining a private experience that would convince me that God has revealed something to me. I would think it far more likely that I had simply gone temporarily crazy (or at least as crazy as other people who have had other, contradictory revelations). So I don't think that such "experiences" should update my state of information, and I don't update based on others' claims of those experiences either.

Comment by bill on Silver Chairs, Paternalism, and Akrasia · 2009-04-10T03:45:35.667Z · LW · GW

I am struggling with the general point, but I think in some situations it is clear that one is in a "bad" state and needs improvement. Here is an example (similar to Chris Argyris's XY case).

A: "I don't think I'm being effective. How can I be of more help to X?"

B: "Well, just stop being so negative and pointing out others' faults. That just doesn't work and tends to make you look bad."

Here, B is giving advice on how to act, while at the same time acting contrary to that advice. The values B wants to follow are clearly not the values he is actually following; furthermore, B doesn't realize that this is happening (or he wouldn't act that way).

This seems to be a state that is clearly "bad", and shouldn't be seen as just different. If I am demonstrably and obliviously acting against my values as I would express them at the time, then I clearly need help. Note that this is different from saying that I am acting against some set of values I would consider good if I were in a different/better state of mind. The values I am unknowingly transgressing are the ones I think I'm currently trying to fulfill.

Does this make sense? What are your reactions?

By the way, this is a common situation; people feeling stress, threat, or embarrassment often start acting in this way.

Comment by bill on Rationality, Cryonics and Pascal's Wager · 2009-04-09T20:32:55.622Z · LW · GW

When dealing with health and safety decisions, people often need to deal with one-in-a-million types of risks.

In nuclear safety, I hear, they use a measure called "nanomelts": one nanomelt is a one-in-a-billion risk of a meltdown. They can then rank risks by cost-to-fix per nanomelt, for example.

In both of these, though, the numbers might be based more on data and then scaled to different timescales (e.g. if there were about 250 deaths per day in the US from car accidents, that would be roughly a one-in-a-million daily risk of death from driving per person; statistical techniques can then adjust this number for age, drunkenness, etc.).

Comment by bill on Rationality, Cryonics and Pascal's Wager · 2009-04-09T20:22:40.439Z · LW · GW

I've used that as a numerical answer to the question "How are you doing today?"

A: Perfect life (health and wealth).

B: Instant, painless death.

C: Current life.

What probability p of A (and 1-p of B) makes you indifferent between that deal and C? That probability p represents an answer to the question "How are you doing?"

Almost nothing that happens to me changes that probability by much, so I've learned not to sweat most ups and downs in life. Things that change that probability (disabling injury or other tragedy) are what to worry about.

Comment by bill on Open Thread · 2009-03-26T03:15:08.741Z · LW · GW

I want to be a good citizen of Less Wrong. Any advice?

1) For example, should I vote on everything I read?

2) Is it okay for me to get into back and forth discussions on comment threads? (e.g. A comments on B, B comments on A's comment, A comments on B's new comment, times 5-10) Or should I simply make one comment and leave it at that.

I am asking out of pure ignorance, not judging anything I've seen here; I just want to get advice.

Comment by bill on Counterfactual Mugging · 2009-03-22T20:07:53.440Z · LW · GW

In Newcomb, before knowing the box contents, you should one-box. If you know the contents, you should two-box (or am I wrong?)

In Prisoner, before knowing the opponent's choice, you should cooperate. After knowing the opponent's choice, you should defect (or am I wrong?).

If I'm right in the above two cases, doesn't Omega look more like the "after knowing" situations above? If so, then I must be wrong about the above two cases...

I want to be someone who in situation Y does X, but when Y&Z happens, I don't necessarily want to do X. Here, Z is the extra information that the coin toss went against me (in Omega's case), that the opponent has already chosen (in Prisoner), or that both boxes have money in them (in Newcomb). What am I missing?

Comment by bill on Counterfactual Mugging · 2009-03-20T03:42:29.007Z · LW · GW

I convinced myself to one-box in Newcomb by simply treating it as if the contents of the boxes magically change when I made my decision. Simply draw the decision tree and maximize u-value.

I convinced myself to cooperate in the Prisoner's Dilemma by treating it as if whatever decision I made the other person would magically make too. Simply draw the decision tree and maximize u-value.

It seems that Omega is different because I actually have the information, where in the others I don't.

For example, In Newcomb, if we could see the contents of both boxes, then I should two-box, no? In the Prisoner's Dilemma, if my opponent decides before me and I observe the decision, then I should defect, no?

I suspect that this means that my thought process in Newcomb and the Prisoner's Dilemma is incorrect. That there is a better way to think about them that makes them more like Omega. Am I correct? Does this make sense?

Comment by bill on Rationalist Fiction · 2009-03-19T15:10:27.293Z · LW · GW

Here is a (very paraphrased, non-spoiler) snippet from the beginning of "Margin of Profit" by Poul Anderson. The problem is that the evil space aliens are blockading a trade route, capturing the ships and crew of the trading ships. The Traders are meeting and deciding what to do.

Trader 1: Why don't we just send in our space fleet and destroy them?

Trader 2: Revenge and violence are un-Christian thoughts. Also, they don't pay very well, as it is hard to sell anything to a corpse. Anyway, getting that done would take a long time, and our customers would find other sources for the goods they need.

Trader 1: Why don't we just arm our merchant ships?

Trader 2: You think I haven't thought of that? We are already on shoestring margins as it is. If we make the ships more expensive, then we are operating at a loss.

(Wow, I write so much worse than Poul Anderson :-) The writing in the story is much better. The "un-Christian thoughts" line is one of my favorites).

This was one of the scenes that showed how to think logically through the consequences of seemingly good ideas (here, economic decision-making and long-term thinking). You can actually figure out the solution to the problem using one of the techniques I've heard on OB (I don't want to spoil it by saying which one).

Does this apply?

Comment by bill on On Juvenile Fiction · 2009-03-17T18:52:59.027Z · LW · GW

Short Story: "Margin of Profit" by Poul Anderson, along with most of the other Van Rijn / Falkayn stories (also liked "The Man who Counts"). I read them at age 14 or so, but good at any age. Fun, space adventure, puzzle/mystery. Heroes use logic and economic reasoning instead of brute force to solve "standard" space adventure problems. A great deal of humor also.

Comment by bill on The Least Convenient Possible World · 2009-03-15T18:46:10.546Z · LW · GW

One way to train this: in my number theory class, there was a type of problem called a PODASIP. This stood for Prove Or Disprove And Salvage If Possible. The instructor would give us a theorem to prove, without telling us if it was true or false. If it was true, we were to prove it. If it was false, then we had to disprove it and then come up with the "most general" theorem similar to it (e.g. prove it for Zp after coming up with a counterexample in Zm).

This trained us to be on the lookout for problems with the theorem, but then to look for the "least convenient possible world" in which it was true.

Comment by bill on Don't Believe You'll Self-Deceive · 2009-03-09T14:51:30.947Z · LW · GW

"Act as if" might work.

For example, I act as if people are nicer than they are (because it gets me better outcomes than other possible strategies I've tried).

This also has the benefit of clearly separating action (what we can do) from information (what we know) and preferences (what we want).