Chief Probability Officer
post by lukeprog · 2012-09-09T23:45:40.808Z · LW · GW · Legacy · 19 comments
Stanford Professor Sam Savage (also of Probability Management) proposes that large firms appoint a "Chief Probability Officer." Here is a description from Douglas Hubbard's How to Measure Anything, ch. 6:
Sam Savage... has some ideas about how to institutionalize the entire process of creating Monte Carlo simulations [for estimating risk].
...His idea is to appoint a chief probability officer (CPO) for the firm. The CPO would be in charge of managing a common library of probability distributions for use by anyone running Monte Carlo simulations. Savage invokes concepts like the Stochastic Information Packet (SIP), a pregenerated set of 100,000 random numbers for a particular value. Sometimes different SIPs would be related. For example, the company’s revenue might be related to national economic growth. A set of SIPs that are generated so they have these correlations are called “SLURPS” (Stochastic Library Units with Relationships Preserved). The CPO would manage SIPs and SLURPs so that users of probability distributions don’t have to reinvent the wheel every time they need to simulate inflation or healthcare costs.
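To make SIPs and SLURPs concrete, here is a minimal sketch (Python with NumPy; every distribution, parameter, and name below is invented for illustration and is not from Savage or Hubbard) of what a tiny shared library of pregenerated trials might look like:

```python
import numpy as np

# A SIP is a pregenerated vector of trials for one uncertain quantity;
# a SLURP is a set of SIPs generated together so their correlations are
# preserved. All distributions, parameters, and names are invented.
TRIALS = 100_000
rng = np.random.default_rng(42)

# Correlated standard-normal draws for two related quantities
# (hypothetically: national economic growth and company revenue).
corr = 0.6
z = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[1.0, corr], [corr, 1.0]],
                            size=TRIALS)

sip_library = {
    "gdp_growth": 0.02 + 0.015 * z[:, 0],      # roughly N(2%, 1.5%)
    "revenue_M": np.exp(4.0 + 0.3 * z[:, 1]),  # lognormal, in $M
    # An unrelated SIP can be generated independently.
    "healthcare_cost_M": rng.gamma(shape=9.0, scale=0.5, size=TRIALS),
}

# Any analyst's model consumes the same pregenerated trials, so results
# from different models can be combined trial by trial.
profit = 0.10 * sip_library["revenue_M"] - sip_library["healthcare_cost_M"]
print("P(profit < 0) =", (profit < 0).mean())
```

The value of the shared library is that every model draws from the same pregenerated trials, so the correlation between, say, revenue and economic growth stays consistent no matter who builds the downstream model.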
Hubbard adds some of his own ideas to the proposal:
- Certification of analysts. Right now, there is not a lot of quality control for decision analysis experts. Only actuaries, in their particular specialty of decision analysis, have extensive certification requirements. As with actuaries, certification in decision analysis should eventually be an independent not-for-profit program run by a professional association. Some other professional certifications now partly cover these topics but fall far short in substance in this particular area. For this reason, I began certifying individuals in Applied Information Economics because there was an immediate need for people to be able to prove their skills to potential employers.
- Certification for calibrated estimators. As we discussed earlier, an uncalibrated estimator has a strong tendency to be overconfident. Any calculation of risk based on his or her estimates will likely be significantly understated. However, a survey I once conducted showed that calibration is almost unheard of among those who build Monte Carlo models professionally, even though a majority used at least some subjective estimates. (About a third surveyed used mostly subjective estimates.) Calibration training will be one of the simplest improvements to risk analysis in an organization.
- Well-documented procedures and templates for how models are built from the input of various calibrated estimators. It takes some time to smooth out the wrinkles in the process. Most organizations don’t need to start from scratch for every new investment they are analyzing; they can base their work on that of others or at least reuse their own prior models. I’ve executed nearly the same analysis procedure following similar project plans for a wide variety of decision analysis problems, ranging from IT security and military logistics to entertainment industry investments. And when I applied the same method in the same organization to different problems, I often found that certain parts of the model would be similar to parts of earlier models. An insurance company would have several investments that include estimating the impact on “customer retention” and “claims payout ratio.” Manufacturing-related investments would have calculations related to “marginal labor costs per unit” or “average order fulfillment time.” These issues don’t have to be modeled anew for each new investment problem. They are reusable modules in spreadsheets.
- Adoption of a single automated tool set. [In this book I show] a few of the many tool sets available. You can get as sophisticated as you like, but starting out doesn’t require any more than some good spreadsheet-based tools. I recommend starting simple and adopting more extensive tool sets as the situations demand.
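The calibration point in the second bullet above can be checked mechanically. Here is a minimal sketch of such a check; all interval data are invented for illustration:

```python
# Each tuple: (stated 90% CI lower bound, upper bound, value later observed).
# All numbers are invented purely to illustrate the check.
estimates = [
    (100, 140, 155),
    (10, 20, 12),
    (0.5, 0.8, 0.9),
    (2000, 2600, 3100),
    (30, 45, 38),
    (5, 9, 11),
    (70, 95, 82),
    (1.0, 1.4, 1.8),
    (400, 520, 610),
    (15, 25, 19),
]

hits = sum(lo <= actual <= hi for lo, hi, actual in estimates)
hit_rate = hits / len(estimates)
print(f"Stated confidence: 90%, observed hit rate: {hit_rate:.0%}")
# A well-calibrated estimator would land inside the interval about 9 times
# out of 10; here only 4 of 10 intervals contain the realized value, the
# overconfidence pattern Hubbard describes.
```

A hit rate well below the stated confidence level is exactly the overconfidence Hubbard describes, and any risk figure computed from such too-narrow intervals will be correspondingly understated.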
19 comments
comment by Shmi (shminux) · 2012-09-10T01:37:32.871Z · LW(p) · GW(p)
As I've been saying for some time, what organizations really need is a CRO, Chief Risk Assessment Officer, who would also be an expert in probabilities and simulation.
↑ comment by Zetetic · 2013-04-27T19:38:51.281Z · LW(p) · GW(p)
Just thought I'd point out, actuaries can also do enterprise risk management. Also, a lot of organizations do have a Chief Risk Officer.
↑ comment by Shmi (shminux) · 2013-04-27T21:36:43.934Z · LW(p) · GW(p)
From Wikipedia:
A main priority for the CRO is to ensure that the organisation is in full compliance with applicable regulations (chief compliance officer). They may also deal with topics regarding insurance, internal auditing, corporate investigations, fraud, and information security.
Unfortunately, this description is missing the point. The main existential risks come from the inside: over-optimistic projections, sunk-cost-based decisions, NIH-syndrome behavior, a rotting corporate culture, and so on.
↑ comment by Zetetic · 2013-05-07T22:42:04.810Z · LW(p) · GW(p)
I see your point here, although I will say that decision science is ideally a major component in the skill set for any person in a management position. That being said, what's being proposed in the article here seems to be distinct from what you're driving at.
Managing cognitive biases within an institution doesn't necessarily overlap with the sort of measures being discussed. A wide array of statistical tools and metrics isn't directly relevant to, e.g., battling the sunk-cost fallacy or NIH. More relevant to that problem set would be a strong knowledge of known biases and good training in decision science and psychology in general.
That isn't to say that these two approaches can't overlap; they likely could. For example, stronger statistical analysis does seem relevant in a very straightforward way to the issue of over-optimistic projections you bring up.
From what I gather you'd want a CRO that has a complementary knowledge base in relevant areas of psychology alongside more standard risk analysis tools. I definitely agree with that.
↑ comment by Shmi (shminux) · 2013-05-07T23:13:37.802Z · LW(p) · GW(p)
From what I gather you'd want a CRO that has a complementary knowledge base in relevant areas of psychology alongside more standard risk analysis tools. I definitely agree with that.
Yes. A CEO is by nature an optimist, with a "can do" approach. A CRO would report to the board directly to balance this optimism. This way the board, the CEO, and the company will not be blindsided by the results of poor decisions, or by anything else short of black swans. Currently only lip service is paid to this approach, in SEC filings under possible risks and such.
Of course, this is a rather idealistic point of view. In most public companies the board members do not share in the troubles of their company, only in its benefits, so it would be easy for them to marginalize the role of the CRO and restrict it to checking for legislative compliance only. No one likes hearing about potential problems. Besides, if the CRO brings up an issue before the board, assigns a high probability to it, but no action is taken and the risk comes to pass, the board members might be found responsible. They would never want that.
comment by buybuydandavis · 2012-09-09T23:58:21.707Z · LW(p) · GW(p)
Yikes. Certification for probabilists. Certified methods. We'd still be following the frequentist methods of the first half of the 20th century if we had certification back then.
↑ comment by gwern · 2012-09-10T00:23:45.088Z · LW(p) · GW(p)
One of the interesting bits of The Theory That Would Not Die is that the actuaries - the people with the existing set of certifications, which you seem to find so repugnant - were some of the only Bayesians in the world; they just didn't use that term or realize that's what their methods were based on.
↑ comment by A1987dM (army1987) · 2012-09-12T22:19:26.414Z · LW(p) · GW(p)
“Inside every non-Bayesian ...”
↑ comment by gwern · 2012-09-12T23:50:40.291Z · LW(p) · GW(p)
IIRC, the book gives it a much more direct meaning: the original actuaries were forced by Teddy Roosevelt or someone's programs to quickly come up with policies for things that had never been covered before and so could not be given clear frequentist justifications, so they used Bayesian methods.
comment by PhilGoetz · 2012-09-10T00:00:35.980Z · LW(p) · GW(p)
It strikes me as a lot more reasonable to say that every large firm should have a department of simulation, dedicated to using computer simulation instead of spreadsheets to make forecasts of all types.
↑ comment by asr · 2012-09-10T04:29:17.256Z · LW(p) · GW(p)
computer simulation instead of spreadsheets
I assume you mean, "simulation as opposed to simple numerical projection."
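If so, here is a minimal sketch of the difference, with made-up figures: a point projection plugs single best guesses into the model and returns one number, while a simulation propagates distributions over the same inputs and returns a distribution of outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

# Point projection: plug in single best guesses, get one number.
# All figures below are made up for illustration.
revenue0, growth, margin = 50.0, 0.05, 0.12   # $M, best guesses
point_profit = revenue0 * (1 + growth) * margin
print(f"Projected profit: {point_profit:.2f} $M")

# Simulation: treat the same inputs as distributions and propagate them.
growth_draws = rng.normal(0.05, 0.04, trials)   # uncertain growth rate
margin_draws = rng.normal(0.12, 0.03, trials)   # uncertain margin
profit_draws = revenue0 * (1 + growth_draws) * margin_draws

print(f"Median simulated profit: {np.median(profit_draws):.2f} $M")
print(f"P(profit below 4 $M):    {(profit_draws < 4.0).mean():.1%}")
```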
But I'm not sure this is really addressing the problems with risk management in large firms. My impression is that we don't really know what to simulate or what the tail risks are. Adding computers doesn't solve that problem.
As an aside: I suspect that Excel or another spreadsheet is a very reasonable programming framework for doing simulations in a business context. Business analysts can do some pretty impressive spreadsheet tricks...
↑ comment by Cthulhoo · 2012-09-10T11:04:10.969Z · LW(p) · GW(p)
But I'm not sure this is really addressing the problems with risk management in large firms. My impression is that we don't really know what to simulate or what the tail risks are. Adding computers doesn't solve that problem.
This has been my job for the last year, and in my experience you're mostly right. The biggest problem so far is in risk modeling, not in the application of the models, both in evaluating single risks and (especially) in doing risk aggregation. Many of the existing models for common risks are also analytically solvable and don't even require running simulations.
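As a minimal, invented illustration of what "analytically solvable" means here: if a single loss is modeled as lognormal, a high quantile (a VaR-style figure) comes straight from a closed-form expression, and a Monte Carlo run only reproduces the same number.

```python
import numpy as np
from scipy.stats import norm

# Invented single-risk example: a loss modeled as lognormal(mu, sigma).
mu, sigma = 2.0, 0.6
q = 0.995  # e.g. a Solvency-II-style quantile

# Closed form: the q-quantile of a lognormal is exp(mu + sigma * z_q),
# so no simulation is needed.
analytic_var = np.exp(mu + sigma * norm.ppf(q))

# Monte Carlo only reproduces (an estimate of) the same number.
rng = np.random.default_rng(1)
losses = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)
simulated_var = np.quantile(losses, q)

print(f"Analytic 99.5% quantile:  {analytic_var:.2f}")
print(f"Simulated 99.5% quantile: {simulated_var:.2f}")
```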
As much as I like the OP's idea, I don't think we are realistically at the stage where such a proposal would be the most effective improvement to the current situation.
↑ comment by Decius · 2012-09-10T04:38:57.089Z · LW(p) · GW(p)
What's the difference between a computer simulation and a spreadsheet?
↑ comment by faul_sname · 2012-09-10T06:01:33.121Z · LW(p) · GW(p)
The spreadsheet is usually the result of the simulation.
comment by John_Maxwell (John_Maxwell_IV) · 2012-09-12T19:56:05.292Z · LW(p) · GW(p)
Actuarial work is the only high-paying career path I know of where non-university certifications count as something significant on one's resume. Anyone have any ideas on why this is?
(I'm dreaming about an educational system where teaching and certifying are decoupled, so the way to run a profitable education company is to teach people effectively, as measured by some test, rather than run an old and prestigious institution. The actuary field seems like the closest thing to what I'm dreaming about, so I'm wondering what's made it different.)
↑ comment by TheOtherDave · 2012-09-12T20:15:45.356Z · LW(p) · GW(p)
Does the bar exam not count as a significant non-university certification?
↑ comment by TimS · 2012-09-12T20:28:29.882Z · LW(p) · GW(p)
Not in the relevant sense. Most bar exams require a J.D. (i.e. graduation from law school). Exceptions exist (California is one example), but the norm is so strong that most people capable of passing the bar exam without law school choose to attend law school anyway to avoid the social resistance.